We are starting Part 2 of the series "A bref AWS PHP story". You can check Part 1, where I presented the PHP language as a reliable and good alternative for serverless applications.
Part 2 shows how CDK describes more AWS resource dependencies, how policies and roles are involved in this process, how to test that they are applied as expected, and how the PHP services use those resources.
Some of these topics may seem straightforward to some people, but I would rather not assume they are known to the whole audience, since I have seen PHP developers struggle to put all of this together for the first time because of the paradigm change. It should be fun.
Table of contents:
- What else are we doing?
- Describing more AWS services - Adding an S3 bucket
- Service permissions
- Testing CDK
- PHP and AWS Services
- Handlers
- Application, domain, infrastructure, etc.
- Wrap-up
- P.S.: Stats
What else are we doing?
The Part 1 function returned a Fibonacci result for an int. Very simple. We will keep it simple for now, to focus on putting the PHP code into a lambda and allowing that code to interact with AWS services.
The computational complexity is irrelevant: the logic could be very complex or very simple, and the topics we are discussing in this part of the series would use the same design.
The lambda will now use the Fibonacci result of a provided integer, or of a random integer from 400 to 1000 (to get a good image and not overflow the integer). This integer is the number of pixels of an image from the bucket and of some arbitrary request metadata we create. If the image does not exist, the lambda fetches a random image from the web with that number of pixels, saves it and generates the metadata.
Get the Part 2 source code on GitHub, and the diff from Part 1.
Describing more AWS services - Adding an S3 bucket
S3 buckets are simple yet compelling services for multipurpose workloads. One will be added to the series as a basic storage mechanism. The lambda function, now called GetFibonacciImage, will need some permissions to manage the bucket.
Starting with the bucket definition, CDK gives us fantastic constructs, and it goes like this:
const brefBucket = new Bucket(this, `${stackPrefix}Bucket`, {
  autoDeleteObjects: true,
  removalPolicy: RemovalPolicy.DESTROY,
});
By default, buckets are not deleted during a CDK destroy because they need to be empty first, so you would end up with a dangling bucket in your account. I don't want to keep those contents if the lambda no longer exists, so the autoDeleteObjects and removalPolicy options are set to enable the destruction of the bucket and its contents when I execute a stack destroy.
We want to decouple the configuration from the implementation to have more SOLID code. That way, we avoid hard-coded configuration, making our code cleaner and more robust, and the code is ready to work no matter the bucket name.
The implementation code knows that the name will come from an environment variable and works with that (yes, if you think that test will be easy to write, you are right), so the CDK stack passes the bucket name to the function:
environment: {
  BUCKET_NAME: brefBucket.bucketName,
}
Service permissions
There is a lambda function and an S3 bucket. The described use case determines that the lambda needs read and write permissions on the bucket, and nothing more. It is good practice to grant a resource only the minimum necessary permissions:
brefBucket.grantReadWrite(getLambda);
The result is the list of actions recommended by AWS for operations requiring only read and write, added to the function's policy:
Action: [
  "s3:GetObject*",
  "s3:GetBucket*",
  "s3:List*",
  "s3:DeleteObject*",
  "s3:PutObject",
  "s3:PutObjectLegalHold",
  "s3:PutObjectRetention",
  "s3:PutObjectTagging",
  "s3:PutObjectVersionTagging",
  "s3:Abort*",
],
Testing CDK
Testing is a great feature of CDK, and we can see how tests verify our changes with npm t:
const functionName = 'GetFibonacciImage';
/* ... */
it('Should have a lambda function to get fibonacci', () => {
  template.hasResourceProperties('AWS::Lambda::Function', {
    Layers: [Cdk.CdkStack.brefLayerFunctionArn],
    FunctionName: functionName,
  });
});
And we can verify that only the permissions the lambda needs were granted:
it('Should have a policy for S3', () => {
  template.hasResourceProperties('AWS::IAM::Policy', {
    PolicyName: Match.stringLikeRegexp(`^${stackPrefix}${functionName}ServiceRoleDefaultPolicy`),
    PolicyDocument: {
      Statement: [{
        Action: [
          "s3:GetObject*",
          "s3:GetBucket*",
          "s3:List*",
          "s3:DeleteObject*",
          "s3:PutObject",
          "s3:PutObjectLegalHold",
          "s3:PutObjectRetention",
          "s3:PutObjectTagging",
          "s3:PutObjectVersionTagging",
          "s3:Abort*",
        ],
      }],
    },
  });
});
You may want to check cdk-stack.test.ts to see more details.
PHP and AWS Services
This is the part with the fewest serverless-specific needs impacting the code, as the PHP code follows the same logic we might use to communicate with AWS services on any other platform (there are always some specific use cases).
Reusing the same existing logic is excellent. It supports the decision to keep using PHP when moving that workload to serverless, as the bulk of the knowledge and the already proven code remain as-is. We may escape the trap of classifying that PHP code as legacy, as if it should be avoided, terminated or hated.
As a side note, if good software architecture was applied before, only a few outer layers of the software are touched. During the implementation of this architectural change, it should be quick to realise how beneficial and time-saving it is to have a well-architected application, with balanced decisions about which patterns, principles and designs to apply, ultimately giving flexibility to the application and its features.
Handlers
The handler is now simplified and delegates everything to a class, in the direction of following the SRP, a principle we are bringing to the code during these code bites:
return function ($request, $context) {
    return \BrefStory\Application\ServiceFactory::createGetFibonacciImageHandler()
        ->handle($request, $context)
        ->toApiGatewayFormatV2();
};
To handle the request details, the Fibonacci code now lives in a proper event handler (implementing Bref\Event\Handler).
php/src/Event/Handler/GetFibonacciImageHandler.php
public function handle($event, Context $context): HttpResponse
{
    $int = (int) (
        $event['queryStringParameters']['int'] ?? random_int(
            self::MIN_PIXELS_FOR_REASONABLE_IMAGE_AND_NOT_BIG_FIBONACCI,
            self::MAX_PIXELS_FOR_REASONABLE_IMAGE_AND_NOT_BIG_FIBONACCI
        )
    );

    $metadata = $this->photoService->getJpegImageFor($int);

    $responseBody = [
        'context' => $context,
        'now' => $this->dateTimeImmutable()->format('Y-m-d H:i:s'),
        'int' => $int,
        'fibonacci' => $this->fibonacci($int),
        'metadata' => $metadata,
    ];

    $response = new JsonResponse($responseBody);

    return new HttpResponse($response->getContent(), $response->headers->all());
}
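For reference, the ServiceFactory::createGetFibonacciImageHandler() call used in the handler above simply wires the event handler's dependencies together. A minimal sketch, assuming the handler receives the photo service through its constructor (the exact namespaces and collaborators in the repository may differ):
namespace BrefStory\Application;

use BrefStory\Event\Handler\GetFibonacciImageHandler;

class ServiceFactory
{
    // Sketch only: the real factory may also wire whatever backs
    // $this->dateTimeImmutable() in the handler.
    public static function createGetFibonacciImageHandler(): GetFibonacciImageHandler
    {
        return new GetFibonacciImageHandler(
            self::createPicsumPhotoService(),
        );
    }
}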
We would also like to start testing the PHP code. As the event handler may be a new layer (although very similar to the widely used controllers), the php/tests/unit/Event/Handler/GetFibonacciImageHandlerTest.php test class was created for it. Part 2 will only focus on this test class, to avoid overloading the change set, but we would usually have test coverage for all the code in the repository.
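To give an idea of the shape of those tests, here is a minimal sketch of one test case, assuming the handler takes the photo service through its constructor and mocking that service so no S3 or HTTP call is made (the class in the repository covers more scenarios, and names may differ slightly):
namespace BrefStory\Tests\Unit\Event\Handler;

use Bref\Context\Context;
use BrefStory\Application\PicsumPhotoService;
use BrefStory\Event\Handler\GetFibonacciImageHandler;
use PHPUnit\Framework\TestCase;

class GetFibonacciImageHandlerTest extends TestCase
{
    public function testHandleReturnsFibonacciAndImageMetadata(): void
    {
        // Mock the application service so the unit test never touches S3 or the web.
        $photoService = $this->createMock(PicsumPhotoService::class);
        $photoService->expects(self::once())
            ->method('getJpegImageFor')
            ->with(10)
            ->willReturn(['pixels' => 10]);

        $handler = new GetFibonacciImageHandler($photoService);

        $response = $handler->handle(
            ['queryStringParameters' => ['int' => '10']],
            new Context('request-id', 300000, 'function-arn', 'trace-id'),
        );

        $body = json_decode($response->toApiGatewayFormatV2()['body'], true);

        self::assertSame(10, $body['int']);
        self::assertEquals(55, $body['fibonacci']);
        self::assertSame(['pixels' => 10], $body['metadata']);
    }
}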
Application, domain, infrastructure, etc.
Finally, we are inside the layers we are most used to. To fit our purposes, the event handler will depend on and call an application-layer service that orchestrates all the steps to fetch the image metadata.
php/src/Application/PicsumPhotoService.php#L34-L42
public function getJpegImageFor(int $imagePixels): array
{
    try {
        return $this->getImageFromBucket($imagePixels);
    } catch (NoSuchKeyException) {
        // do nothing
    }

    return $this->fetchAndSaveImageToBucket($imagePixels);
}
The interesting thing to mention about using AWS services is how simply the S3Client is instantiated. There is a factory to create the service:
php/src/Application/ServiceFactory.php#L22-L29
public static function createPicsumPhotoService(): PicsumPhotoService
{
    return new PicsumPhotoService(
        HttpClient::create(),
        new S3Client(),
        getenv('BUCKET_NAME'),
    );
}
Two things are worth noting here:
- new S3Client() is all we need, because the environment will use the AWS credentials provided to the lambda at execution time, as an assumed role that carries the policies we defined in the CDK constructs stack, i.e., read and write permissions on the bucket.
- getenv('BUCKET_NAME') is gracefully provided by CDK when creating our bucket, with whatever dynamic name it pleases.
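Outside of Lambda, for example on a local machine where no execution role is assumed for you, the same client needs explicit configuration. A small sketch, assuming the async-aws S3 client (which the argument-less new S3Client() suggests) and purely illustrative values:
use AsyncAws\S3\S3Client;

// Inside Lambda: the assumed role provides credentials, so no configuration is needed.
$s3 = new S3Client();

// Locally: pass region and credentials explicitly (placeholder values below),
// or rely on the usual AWS environment variables / shared credentials file.
$s3 = new S3Client([
    'region' => 'eu-west-1',
    'accessKeyId' => 'AKIA...',
    'accessKeySecret' => '...',
]);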
I asked ChatGPT about this class:

The PicsumPhotoService class seems to be following the Single Responsibility Principle (SRP) as it has only one responsibility, which is to provide methods for fetching and saving JPEG images from the Picsum website. The class has methods to fetch the image from an S3 bucket and, if it's not available, fetch it from the Picsum website, save it to the S3 bucket, and create and put metadata for the image in the S3 bucket.

The class has a clear separation of concerns, where the S3Client and HttpClientInterface are injected through the constructor, and the different functionalities are implemented in separate private methods. Additionally, each method is doing a single task, which makes the code easy to read, test, and maintain.

Therefore, it can be concluded that the PicsumPhotoService class follows SRP.
Wrap-up
It is as simple as that. Check the source code for more details, install it and try it yourself. This project is ready to:
- Create a lambda function using Bref
- Create an S3 bucket with read and write permissions granted to the lambda
- Test the stack's CloudFormation code
- Separate the PHP logic
- Have PHP communicating with AWS services
- Start PHP testing
P.S.: Stats
I did not plan to talk widely about stats yet, but I think I can share the two most significant measurements I have from this simple code so far.
[Update 22/03/23] Using https://k6.io/
1 - With a brand new stack and a cold lambda:
scenarios: (100.00%) 1 scenario, 200 max VUs, 2m30s max duration (incl. graceful stop):
* default: 200 looping VUs for 2m0s (gracefulStop: 30s)
data_received..................: 49 MB 409 kB/s
data_sent......................: 7.8 MB 65 kB/s
http_req_blocked...............: avg=2.36ms min=671ns med=2.27µs max=581.87ms p(90)=4.18µs p(95)=7µs
http_req_connecting............: avg=712.63µs min=0s med=0s max=193.34ms p(90)=0s p(95)=0s
http_req_duration..............: avg=531.51ms min=204.46ms med=485.24ms max=3.81s p(90)=517.98ms p(95)=534.3ms
{ expected_response:true }...: avg=513.6ms min=204.46ms med=485.07ms max=3.67s p(90)=516.62ms p(95)=531.5ms
http_req_failed................: 0.60% ✓ 272 ✗ 44761
http_req_receiving.............: avg=123.76µs min=13.77µs med=44.04µs max=16.78ms p(90)=71.27µs p(95)=85.71µs
http_req_sending...............: avg=14.79µs min=4.27µs med=12.43µs max=402.74µs p(90)=23.97µs p(95)=31.4µs
http_req_tls_handshaking.......: avg=1.37ms min=0s med=0s max=330.58ms p(90)=0s p(95)=0s
http_req_waiting...............: avg=531.37ms min=204.36ms med=485.11ms max=3.81s p(90)=517.77ms p(95)=534.13ms
http_reqs......................: 45033 373.683517/s
iteration_duration.............: avg=533.96ms min=204.55ms med=485.34ms max=4.37s p(90)=518.07ms p(95)=534.4ms
iterations.....................: 45033 373.683517/s
vus............................: 200 min=200 max=200
vus_max........................: 200 min=200 max=200
running (2m00.5s), 000/200 VUs, 45033 complete and 0 interrupted iterations
2 - After the first initial execution, with a cold lambda but all the available images already saved to the bucket, we got ~3K more requests served in the same amount of time:
scenarios: (100.00%) 1 scenario, 200 max VUs, 2m30s max duration (incl. graceful stop):
* default: 200 looping VUs for 2m0s (gracefulStop: 30s)
data_received..................: 53 MB 442 kB/s
data_sent......................: 8.4 MB 70 kB/s
http_req_blocked...............: avg=2.26ms min=631ns med=2.24µs max=612.22ms p(90)=4.04µs p(95)=6.47µs
http_req_connecting............: avg=663.23µs min=0s med=0s max=215.19ms p(90)=0s p(95)=0s
http_req_duration..............: avg=490.8ms min=199.95ms med=484.02ms max=3.17s p(90)=514.86ms p(95)=527ms
{ expected_response:true }...: avg=490.53ms min=199.95ms med=484.02ms max=2.4s p(90)=514.85ms p(95)=526.99ms
http_req_failed................: 0.01% ✓ 5 ✗ 48754
http_req_receiving.............: avg=108.86µs min=12.44µs med=42.68µs max=17.62ms p(90)=69.23µs p(95)=81.87µs
http_req_sending...............: avg=14.42µs min=3.9µs med=12.14µs max=786.01µs p(90)=23.03µs p(95)=30.35µs
http_req_tls_handshaking.......: avg=1.27ms min=0s med=0s max=332.34ms p(90)=0s p(95)=0s
http_req_waiting...............: avg=490.68ms min=199.9ms med=483.91ms max=3.17s p(90)=514.75ms p(95)=526.89ms
http_reqs......................: 48759 404.56812/s
iteration_duration.............: avg=493.16ms min=200.05ms med=484.11ms max=3.17s p(90)=514.96ms p(95)=527.1ms
iterations.....................: 48759 404.56812/s
vus............................: 200 min=200 max=200
vus_max........................: 200 min=200 max=200
running (2m00.5s), 000/200 VUs, 48759 complete and 0 interrupted iterations