Categories
PHP Programação

A bref AWS PHP story – Part 3

We are starting Part 3 of the series "A bref AWS PHP story". You can check Part 1, where I presented the PHP language as a reliable and good alternative for serverless applications, and Part 2, where we saw how to use CDK features to build a dependable CI/CD pipeline.

Part 3 shows the upgrade path to Bref 2 and extends our coverage of AWS resources. We will use DynamoDB, a powerful database for serverless architectures.

Some of these topics may seem straightforward to some people, but I would rather not assume they are known to the whole audience, since I have seen PHP developers struggle to put all of this together for the first time because of the paradigm change. It should be fun.

Table of contents:

  1. What else are we doing?
  2. Describing more AWS services - Adding a DynamoDB table
  3. Bref upgrade
  4. Testing CDK
  5. PHP and AWS Services
  6. Wrap-up

What else are we doing?

In this section, we'll explore additional functionalities and enhancements to our serverless application. Building upon the foundation laid in Part 2, we'll introduce new features and integrations to further extend the capabilities of our AWS PHP application.

Part 2 used the result of the Fibonacci of a provided integer, or of a random integer from 400 to 1000 (to get a good image and not overflow the integer). This integer is the number of pixels of an image from the bucket, and part of some arbitrary request metadata we create. If the image does not exist, the lambda fetches a random image from the web with that number of pixels, saves it and generates the metadata.

The computing complexity is irrelevant because it could be very complex logic or very simple, and the topics we are discussing in this part of the series will use the same design.

The lambda will now look up the metadata in a DynamoDB table and save it when it does not exist, as sketched below. DynamoDB is widely used in Lambda code.
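
Conceptually, the application service wraps the repository in a find-or-save flow. A minimal sketch of it (the exception and helper names are illustrative, not the exact repository code):

public function getJpegImageFor(int $imagePixels): array
{
    try {
        // Reuse the metadata we already stored for this pixel size
        return $this->repository->findImage($imagePixels)->metadata;
    } catch (ImageMetadataNotFound) { // hypothetical "not found" exception
        // First request for this size: build the metadata below and persist it
    }

    $metadata = $this->fetchAndSaveImage($imagePixels); // illustrative private helper
    $this->repository->addImageMetadata(new ImageMetadataItem($imagePixels, $metadata));

    return $metadata;
}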

Get the part-3 source-code on GitHub and the diff from part-2.

Describing more AWS services - Adding a DynamoDB Table

DynamoDB plays a crucial role in serverless architectures, offering scalable and high-performance NoSQL database capabilities. In this section, we'll delve into the process of integrating DynamoDB into our AWS CDK stack, expanding our application's data storage and retrieval capabilities.

DynamoDB is a fully managed NoSQL database service provided by AWS, offering seamless integration with other AWS services, automatic scaling, and built-in security features. Its scalability, low latency, and flexible data model make it well-suited for serverless architectures and applications with varying throughput requirements.

    const table = new Table(this, TableName, {
      partitionKey: { name: 'PK', type: AttributeType.STRING },
      sortKey: { name: 'SK', type: AttributeType.STRING },
      removalPolicy: RemovalPolicy.DESTROY,
      tableName: TableName,
    });

Following the same principles for creating other AWS resources, we utilize the AWS CDK to define a DynamoDB table within our stack. Let's dive into the key parameters of the Table constructor:

  • partitionKey: This parameter defines the primary key attribute for the DynamoDB table, used to distribute items across partitions for scalability. In our example, { name: 'PK', type: AttributeType.STRING } specifies a partition key named 'PK' with a string type. The naming convention ('PK') is arbitrary and can be tailored to suit your application's needs.
  • sortKey: For tables requiring a composite primary key (partition key and sort key), the sortKey parameter comes into play. Here, { name: 'SK', type: AttributeType.STRING } defines a sort key named 'SK' with a string type. Like the partition key, the name and type of the sort key can be customized based on your data model.
  • removalPolicy: This parameter determines the behaviour of the DynamoDB table when the CloudFormation stack is deleted. By setting RemovalPolicy.DESTROY, we specify that the table should be deleted (destroyed) along with the stack. Alternatively, you can opt for RemovalPolicy.RETAIN to preserve the table post-stack deletion, which may be useful for retaining data.

By decoupling configuration from implementation, we adhere to SOLID principles, ensuring cleaner and more robust code. This approach fosters flexibility, allowing our code to seamlessly adapt to changes, such as modifications to the table name while maintaining its functionality.

The implementation code is aware that the name will come from an environment variable and will work with that (yes, if you think that test will be easy to write, you are right):

    const lambdaEnvironment = {
      TableName,
      TableArn: table.tableArn,
      BucketName: brefBucket.bucketName,
    };

Bref Upgrade

Bref, the PHP runtime for AWS Lambda, continually evolves to provide developers with the latest features and optimizations. In this section, we'll discuss the upgrade to Bref 2.0 and explore how it enhances the deployment process and performance of our serverless PHP applications.

Bref simplifies the deployment of PHP applications to AWS Lambda, enabling us to run PHP code serverlessly; here we upgrade our usage of it to version 2.0.

The upgrade involves modifying our AWS CDK code to utilize the new features and improvements introduced in Bref 2.0. One notable improvement is the automatic selection of the latest layer of the PHP version, which simplifies the deployment process and ensures that our Lambda functions run on the most up-to-date PHP environment available.

  const getLambda = new PhpFunction(this, `${stackPrefix}${functionName}`, {
    handler: 'get.php',
    phpVersion: '8.3',
    runtime: Runtime.PROVIDED_AL2,
    code: packagePhpCode(join(__dirname, `../assets/get`), {
      exclude: ['test', 'tests'],
    }),
    functionName,
    environment: lambdaEnvironment,
  });
  • `PhpFunction` Constructor: We're using the `PhpFunction` constructor provided by Bref to define our Lambda function. This constructor allows us to specify parameters such as the handler file, PHP version, runtime, code location, function name, and environment variables.
  • `handler`: Specifies the entry point file for our Lambda function, where the execution starts.
  • `phpVersion`: Defines the PHP version to be used by the Lambda function. In this case, we're using PHP version 8.3.
  • `runtime`: Indicates the Lambda runtime environment. Here, `Runtime.PROVIDED_AL2` means a custom runtime running on Amazon Linux 2.
  • `code`: Specifies the location of the PHP code to be deployed to Lambda.
  • `functionName`: Sets the name of the Lambda function.
  • `environment`: Allows us to define environment variables required by the Lambda function, such as database connection strings or configuration settings.

By upgrading to Bref 2.0 and configuring our Lambda function accordingly, we ensure compatibility with the latest enhancements and optimizations provided by Bref, thereby improving the performance and reliability of our serverless PHP applications on AWS Lambda.

Testing CDK

Ensuring the correctness and reliability of our AWS CDK infrastructure is crucial for maintaining a robust serverless architecture. In this section, we'll delve into testing our CDK resources, focusing on the DynamoDB table we added in the previous section.

As described earlier, we utilized the AWS CDK to provision a DynamoDB table within our serverless stack. Now, let's ensure that the table is configured correctly and behaves as expected by writing tests using the CDK's testing framework.

First, let's revisit how we added the DynamoDB table:

const table = new Table(this, TableName, {
  partitionKey: { name: 'PK', type: AttributeType.STRING },
  sortKey: { name: 'SK', type: AttributeType.STRING },
  removalPolicy: RemovalPolicy.DESTROY,
  tableName: TableName,
});

In this code snippet, we define a DynamoDB table with specified attributes such as partition key, sort key, removal policy, and table name. Now, to ensure that this table is created with the correct configuration, we'll write tests using CDK's testing constructs.

Check the following test:

test('Should have DynamoDB', () => {
  expectCDK(stack).to(
    haveResource(
      'AWS::DynamoDB::Table',
      {
        "DeletionPolicy": "Delete",
        "Properties": {
          "AttributeDefinitions": [
            {
              "AttributeName": "PK",
              "AttributeType": "S",
            },
            {
              "AttributeName": "SK",
              "AttributeType": "S",
            },
          ],
          "KeySchema": [
            {
              "AttributeName": "PK",
              "KeyType": "HASH",
            },
            {
              "AttributeName": "SK",
              "KeyType": "RANGE",
            },
          ],
          "ProvisionedThroughput": {
            "ReadCapacityUnits": 5,
            "WriteCapacityUnits": 5,
          },
          "TableName": "BrefStory-table",
        },
        "Type": "AWS::DynamoDB::Table",
        "UpdateReplacePolicy": "Delete",
      },
      ResourcePart.CompleteDefinition,
    )
  );
});

This test ensures that the DynamoDB table is created with the correct attribute definitions, key schema, provisioned throughput, table name, and other properties specified during its creation. By writing such tests, we validate that our CDK infrastructure is provisioned accurately and functions as intended.

PHP and AWS Services

Leveraging PHP in a serverless environment opens up new possibilities for interacting with AWS services. In this section, we'll examine how PHP code seamlessly integrates with various AWS services, following best practices for maintaining clean and modular code architecture.

This is the part where serverless concerns have the least impact on the code, as the PHP code follows the same logic we would use to communicate with AWS services on any other platform (there are always some specific use cases).

Reusing the existing logic is excellent. It reinforces the decision to keep using PHP when moving this workload to serverless, as the bulk of the knowledge and the already proven code remains as-is. We can escape the trap of classifying that PHP code as legacy, as if it should be avoided, terminated or halted.

As a side note, if good software architecture was applied before, only a few external layers of it are touched. During this architectural change, it quickly becomes clear how beneficial and time-saving it is to have a well-architected application, with balanced decisions about which patterns, principles and designs to apply, ultimately giving flexibility to the application and its features.

The handler is now simplified and delegates everything to a class, following the Single Responsibility Principle (SRP), one of the principles we are bringing into the code throughout these code bites:
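
As in part 2, the handler simply delegates to a factory-built handler class; it keeps roughly this shape (check the repository for the exact file):

return function ($request, $context) {
    return \BrefStory\Application\ServiceFactory::createGetFibonacciImageHandler()
        ->handle($request, $context)
        ->toApiGatewayFormatV2();
};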

Applications, domains, infrastructure, etc

Our `PicsumPhotoService` is still orchestrating the business logic. The Single Responsibility Principle and Inversion of Control are applied. We are injecting the specialized services in the constructor:

// readonly class PicsumPhotoService
    public function __construct(
        private HttpClientInterface $httpClient,
        private ImageStorageService $storageService,
        private ImageRepository $repository,
    )
    {
    }

Each specialized service has all its dependencies injected in the constructor as well. We can see the factory instantiation:

    public static function createPicsumPhotoService(): PicsumPhotoService
    {
        return new PicsumPhotoService(
            HttpClient::create(),
            new S3ImageService(
                new S3Client(),
                getenv('BucketName'),
            ),
            new DynamoDbImageRepository(
                new DynamoDbClient(),
                getenv('TableName'),
            ),
        );
    }

The `ImageStorageService` will handle all image operations, connecting to the AWS Service when appropriate and observing business logic details. This is a slim interface:

interface ImageStorageService
{
    public function getImageFromBucket(int $imagePixels): ?array;

    public function saveImage(int $imagePixels, mixed $fetchedImage): void;

    public function createAndPutMetadata(int $imagePixels, array $metadata): PutObjectOutput;
}

Instead of `: PutObjectOutput`, we would usually return a domain object, so the interface is not coupled to the implementation detail of using S3, but for simplicity I did not create a domain object here. It would be preferable, though; a possible shape is sketched below.
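
A hypothetical domain object for that return type could be as small as this (the name and fields are illustrative, not part of the repository):

readonly class StoredObjectReference
{
    public function __construct(
        public string $key,          // object key inside the bucket
        public ?string $eTag = null, // returned by S3, handy for logging or assertions
    ) {
    }
}

// The interface method would then become:
// public function createAndPutMetadata(int $imagePixels, array $metadata): StoredObjectReference;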

The `ImageRepository` will handle all metadata operations. It will save into a repository and observe logic details as well. Following the same principles, this is a slim interface:

interface ImageRepository
{
    public function findImage(int $imagePixels): ImageMetadataItem;

    public function addImageMetadata(ImageMetadataItem $imageMetadataItem): PutItemOutput;
}
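
For illustration, an implementation over the async-aws DynamoDB client could look roughly like this. It is a minimal sketch that assumes the async-aws AttributeValue API (which matches the ImageMetadataItem mapping shown next) and a hypothetical not-found exception; the repository code may differ:

use AsyncAws\DynamoDb\DynamoDbClient;
use AsyncAws\DynamoDb\Result\PutItemOutput;
use AsyncAws\DynamoDb\ValueObject\AttributeValue;

final class DynamoDbImageRepository implements ImageRepository
{
    public function __construct(
        private DynamoDbClient $dynamoDbClient,
        private string $tableName,
    ) {
    }

    public function findImage(int $imagePixels): ImageMetadataItem
    {
        $output = $this->dynamoDbClient->getItem([
            'TableName' => $this->tableName,
            'Key' => [
                'PK' => new AttributeValue(['S' => 'IMAGE']),
                'SK' => new AttributeValue(['S' => "PIXELS#{$imagePixels}"]),
            ],
        ]);

        $item = $output->getItem();
        if ($item === []) {
            // Hypothetical domain exception for the "metadata not stored yet" case
            throw new ImageMetadataNotFound("No metadata for {$imagePixels} pixels");
        }

        return ImageMetadataItem::fromDynamoDb($item);
    }

    public function addImageMetadata(ImageMetadataItem $imageMetadataItem): PutItemOutput
    {
        return $this->dynamoDbClient->putItem([
            'TableName' => $this->tableName,
            'Item' => $imageMetadataItem->toDynamoDbItem(),
        ]);
    }
}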

The `ImageMetadataItem` is a representation of one of the domain objects we have in our codebase.

readonly class ImageMetadataItem
{
    public function __construct(public int $imagePixels, public array $metadata)
    {
    }

    public function toDynamoDbItem(): array
    {
        return [
            'PK' => new AttributeValue(['S' => 'IMAGE']),
            'SK' => new AttributeValue(['S' => "PIXELS#{$this->imagePixels}"]),
            'pixels' => new AttributeValue(['N' => "{$this->imagePixels}"]),
            'metadata' => new AttributeValue(['S' => json_encode($this->metadata)]),
            ...ConvertToDynamoDb::item($this->metadata),
        ];
    }

    /**
     * @param array $item
     */
    public static function fromDynamoDb(array $item): static
    {
        return new static(
            (int) $item['pixels']->getN(),
            (array) json_decode($item['metadata']->getS()),
        );
    }
}

If you check the implementation details, it works transparently with all the services, business logic and AWS services without being tightly coupled to them. There are two utility functions:

  • toDynamoDbItem: to transform the object into a valid DynamoDb Item to be added
  • fromDynamoDb: to perform the opposite operation, transforming a DynamoDb Item into a domain object

The scope of each operation is very clear and does not make the domain depend on those services, as the domain object can be used independently. It does not block other ways of dealing with it, allowing usage with other types of services, such as different databases or APIs. This is very important for the maintainability of the application without sacrificing readability, as it keeps the context of the utilities in the right place.
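
A short round trip with these helpers, just to illustrate (the values are made up):

$item = new ImageMetadataItem(512, ['source' => 'picsum', 'format' => 'jpeg']);

$dynamoDbItem = $item->toDynamoDbItem();                    // ready to be sent in a PutItem call
$restored = ImageMetadataItem::fromDynamoDb($dynamoDbItem); // back to a plain domain object

assert($restored->imagePixels === 512);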

If you check all the PHP code carefully, Bref is such a good abstraction layer that, apart from the handler file, every other line of code could be used in a lambda or in a web application interchangeably, without changes. This is very powerful: you can imagine how you could leverage and migrate some of your existing code to Lambda just by creating a handler that triggers it, provided the code is well structured.

Wrap-up

It is as simple as that. Check more details in the source code, install it and try it yourself. This project is ready to:

  • Extend lambda function using Bref
  • Upgrade to use Bref 2.0
  • Create a DynamoDB table
  • Test the stack Cloudformation code
  • Separate the PHP logic
  • Have PHP communicating with AWS Services

Categories
Programação Technology

GitHub Actions workflow for deploying a scheduled task using AWS ECS and EventBridge Scheduler

Cron jobs are repetitive tasks scheduled to run periodically at fixed times, dates, or intervals. They typically automate system maintenance or administration. Some workloads still running on non-containerized platforms (VMs, bare metal, etc.) are suitable to be moved to serverless, with multiple alternatives depending on the context of each task.

Considering AWS services, EventBridge Scheduler will be used to manage tasks in most of these options, as it is capable of invoking many AWS services. One of them is invoking a containerized application as an ECS task.

Amazon EventBridge Scheduler is a serverless scheduler that allows you to create, run, and manage tasks from one central, managed service. Highly scalable, EventBridge Scheduler allows you to schedule millions of tasks that can invoke more than 270 AWS services and over 6,000 API operations. Without the need to provision and manage infrastructure, or integrate with multiple services, EventBridge Scheduler provides you with the ability to deliver schedules at scale and reduce maintenance costs.
-- https://docs.aws.amazon.com/scheduler/latest/UserGuide/what-is-scheduler.html
(EventBridge Scheduler is recommended over scheduled EventBridge rules, the former CloudWatch Events schedules.)

I worked on migrating an application from EC2 to ECS that still uses its cron jobs. The cron jobs were migrated to EventBridge Scheduler. Our CI/CD uses GitHub Actions and Terraform. AWS provides actions that can create and deploy an ECS task definition (the container blueprint) to an ECS service, but there is no action to deploy an ECS task to EventBridge Scheduler, as a cron task does not run under a service.

To deploy the new code, we have to add some code to the GitHub Actions workflow, and I think it might benefit others in a similar context. We use Terraform as Infrastructure as Code, so keep this in mind if you need to adapt it to your IaC solution.

The full YAML file is below; I will comment on parts of it separately afterwards.

name: Deploy Scheduled task XYZ

on:
  workflow_dispatch:
    inputs:
      imageHash:
        description: 'Image hash to deploy'
        required: true
        type: string
      environment:
        description: 'Environment to run tests against'
        type: environment
        required: true
        default: test

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

env:
  APP_NAME: application-name-here
  AWS_REGION: "ap-southeast-2"
  ECR_NAME: ecr-name-here
  ECS_CLUSTER: cluster-name-here
  IMAGE_NAME: application-image-name-here
  TASK_NAME: cron-task-name-here

permissions:
  id-token: write
  contents: read    # This is required for actions/checkout

jobs:
  deploy:
    name: Deploy to ${{ inputs.environment }}
    runs-on: ubuntu-latest
    environment: ${{ inputs.environment }}
    env:
      JOB_ENV: ${{ inputs.environment }}
    steps:
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets[format('iam_role_to_assume_{0}', inputs.environment)] }}
          role-session-name: github-ecr-push-workflow-${{ inputs.environment }}
          aws-region: ${{ env.AWS_REGION }}

      - name: Verify image
        run: aws ecr describe-images --repository-name ${{ inputs.environment }}-${{ env.ECR_NAME }}-${{ env.APP_NAME }} --image-ids imageTag=${{ inputs.imageHash }}

      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v2

      - name: Download the task definition ${{ env.TASK_NAME }}
        run: aws ecs describe-task-definition --task-definition ${{ env.TASK_NAME }} --query taskDefinition > task-definition.json

      - name: Fill in the new image ID in the Amazon ECS task definition ${{ env.TASK_NAME }}
        id: task-def-cron
        uses: aws-actions/amazon-ecs-render-task-definition@v1
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
        with:
          task-definition: task-definition.json
          container-name: ${{ env.TASK_NAME }}
          image: ${{ env.ECR_REGISTRY }}/${{ env.JOB_ENV }}-${{ env.ECR_NAME }}-${{ env.APP_NAME }}:${{ inputs.imageHash }}

      - name: Deploy Amazon ECS task definition ${{ env.TASK_NAME }}
        id: deploy-cron
        uses: aws-actions/amazon-ecs-deploy-task-definition@v1
        with:
          task-definition: ${{ steps.task-def-cron.outputs.task-definition }}
          cluster: ${{ inputs.environment }}-${{ env.ECS_CLUSTER }}

      - name: Checkout infrastructure
        uses: actions/checkout@v4
        with:
          repository: orgnamehere/iaas-repo-here
          ref: main
          path: './working-path'
          token: ${{ secrets.PAT_TOKEN }}

      - name: Update schedule ${{ env.TASK_NAME }}
        working-directory: './working-path'
        env:
          GH_TOKEN: ${{ secrets.PAT_TOKEN }}
          INFRASTRUCTURE_FILE: 'path/to/your/module/terraform-file-here.tf'
          UNESCAPED_ARN: ${{ steps.deploy-cron.outputs.task-definition-arn }}
        run: |
          # Escape regexp non-safe characters from the ARN to prevent sed to fail
          export ESCAPED_ARN=${UNESCAPED_ARN//:/\\:}
          export ESCAPED_ARN=${ESCAPED_ARN//\//\\/}
          echo "Escaped ARN: $ARN"
          # Retrieve <appl-name>:<version> part of the ARN to use in PR 
          export ARRAY_ARN_PARTS=(${UNESCAPED_ARN//\// })
          export VERSION_PART=${ARRAY_ARN_PARTS[1]}
          export COMMIT_MESSAGE="DEPLOY: Deployment on ${{ inputs.environment }} - $VERSION_PART"
          # Use task definition version for branch name
          export BRANCH_NAME="deploy-${VERSION_PART//:/-}"
          git config user.email "[email protected]"
          git config user.name "Github Actions Pipeline"
          git checkout -b ${BRANCH_NAME}
          sed -i '/task_definition_arn /s/".*/'"\"${ESCAPED_ARN}"\"'/' $INFRASTRUCTURE_FILE
          git add ${{ env.INFRASTRUCTURE_FILE }}
          git commit -m "$COMMIT_MESSAGE"
          git push --set-upstream origin ${BRANCH_NAME}
          gh pr create --fill --body "- [x] $COMMIT_MESSAGE"

This pipeline will create the scheduled task definition, check out the IaC repository, change the Scheduler task definition ARN and create a PR in the IaC repository. We also use that PR as the approval gate for deploying to production, but it can be automated if needed.

Let's comment on some parts:

on:
  workflow_dispatch:
    inputs:
      imageHash:
        description: 'Image hash to deploy'

It is good to separate the image creation from the deployment. This input is required assuming the image was created and published to the registry. This promotes reusability and flexibility.

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets[format('iam_role_to_assume_{0}', inputs.environment)] }}
          role-session-name: github-ecr-push-workflow-${{ inputs.environment }}
          aws-region: ${{ env.AWS_REGION }}

Environment is a required input and this pipeline can be executed against any environment you defined in your repository.

      - name: Deploy Amazon ECS task definition ${{ env.TASK_NAME }}
        id: deploy-cron
        uses: aws-actions/amazon-ecs-deploy-task-definition@v1
        with:
          task-definition: ${{ steps.task-def-cron.outputs.task-definition }}
          cluster: ${{ inputs.environment }}-${{ env.ECS_CLUSTER }}

Although the action is named "deploy task definition", it registers the new task definition but does not deploy it; a deployment only happens when you provide a service input (at the time this article is being written). We are not deploying to a service, so the action only creates the task definition revision, and EventBridge Scheduler keeps invoking the same task definition it was pointing to before this revision was created.

      - name: Checkout infrastructure
        uses: actions/checkout@v4
        with:
          repository: orgnamehere/iaas-repo-here
          ref: main
          path: './working-path'
          token: ${{ secrets.PAT_TOKEN }}

Using the AWS CLI was an alternative we considered, but changing the target of the EventBridge Scheduler that way becomes a little confusing and adds cognitive complexity if we ever need to change something. We decided instead to fetch the IaC repository and control which task definition version is the target of the scheduler in Terraform code, so we could also be sure that any dependency of the target change is managed by Terraform rather than by another CLI call in this pipeline. We check out the IaC repo and save it under ./working-path to keep the workspace clean. The name is your choice.

      - name: Update schedule ${{ env.TASK_NAME }}
        working-directory: './working-path'
        env:
          GH_TOKEN: ${{ secrets.PAT_TOKEN }}
          INFRASTRUCTURE_FILE: 'path/to/your/module/terraform-file-here.tf'
          UNESCAPED_ARN: ${{ steps.deploy-cron.outputs.task-definition-arn }}
        run: |
          # Escape regexp non-safe characters from the ARN to prevent sed to fail
          export ESCAPED_ARN=${UNESCAPED_ARN//:/\\:}
          export ESCAPED_ARN=${ESCAPED_ARN//\//\\/}
          echo "Escaped ARN: $ARN"
          # Retrieve <appl-name>:<version> part of the ARN to use in PR 
          export ARRAY_ARN_PARTS=(${UNESCAPED_ARN//\// })
          export VERSION_PART=${ARRAY_ARN_PARTS[1]}
          export COMMIT_MESSAGE="DEPLOY: Deployment on ${{ inputs.environment }} - $VERSION_PART"
          # Use task definition version for branch name
          export BRANCH_NAME="deploy-${VERSION_PART//:/-}"
          git config user.email "[email protected]"
          git config user.name "Github Actions Pipeline"
          git checkout -b ${BRANCH_NAME}
          sed -i '/task_definition_arn /s/".*/'"\"${ESCAPED_ARN}"\"'/' $INFRASTRUCTURE_FILE
          git add ${{ env.INFRASTRUCTURE_FILE }}
          git commit -m "$COMMIT_MESSAGE"
          git push --set-upstream origin ${BRANCH_NAME}
          gh pr create --fill --body "- [x] $COMMIT_MESSAGE"

This is where we use sed to search and replace the ARN in the Terraform code. We escape the ARN before applying sed so it does not break the replacement expression.

The terraform code expected to be changed will be something like this:

# main.tf
-        task_definition_arn     = "arn:aws:ecs:ap-southeast-2:123456789012:task-definition/task-definition-cron-name:57"
+       task_definition_arn     = "arn:aws:ecs:ap-southeast-2:123456789012:task-definition/task-definition-cron-name:58"

Categories
Programação Solving Problems

Setting up maintenance mode with Varnish

Varnish is "the free, open-source software that enables super fast delivery of HTTP or API based content", "an HTTP reverse proxy that works by caching frequently requested web pages, so they can be loaded quickly without having to wait for a server response.".

If you need some sort of alternative to the cloud, or run servers in a datacenter, Varnish can act as the CDN, load balancer and API gateway layers at the same time. It is very powerful when you have to manage those services yourself instead of using a cloud service. And this is not that uncommon.

Consider the use case where you need a maintenance window for a product and you must be sure that you are suspending all connections to the backend servers in a consistent way. Performing the redirection at the CDN layer is the better choice.

The Varnish Configuration Language (VCL) is a domain-specific programming language used by Varnish to control request handling, routing, caching, and several other aspects.

-- https://www.varnish-software.com/developers/tutorials/varnish-configuration-language-vcl/

VCL is a very powerful language that, for our use case, allows us to serve a synthetic response instead of hitting the backends. There is no need to create a web directory served by another web server; the response comes directly from Varnish.

# default.vcl
sub vcl_synth 
{
    # previous headers manipulation if you like
    # and other code that you need for synth if you like
    # (...)

    # Adding an x-cache header to indicate this is a synth response
    set resp.http.x-cache = "synth synth";

    # Maintenance - we are calling the status for this synth 911 because we can have different synths
    if ( resp.status == 911 ) {
        set resp.http.Content-Type = "text/html; charset=utf-8";
        # You can put absolutely what you want
        synthetic ({"
<html>
<head>
    <title>Maintenance mode - Try again later</title>
</head>
<body>
<h1>This website is under maintenance.</h1>
</body>
</html>
"});
        return (deliver);
    }
}

Then, inside vcl_recv, I can send every request for my-domain.com straight to the maintenance response:

# includes/my-domain-hints.vcl

if ( req.http.host ~ "my-domain.com" ) {
    return(synth(911, ""));
    # All the other VCL configs are below here, but we are returning early above to the maintenance
    # (...)
}
Categories
Solving Problems

Solving problems week 2: Cypress test automation, E2E, DevExp, code standards, rector

In this week's Solving Problems text, I have two topics: code quality and service reliability. I will share some ideas that can improve your set of tools that check if your codebase is healthy.

The problem in the code quality topic is how to enforce your standards and keep trivial issues out of the code without sacrificing your reviewers' time and patience.

And the problem in the service reliability topic is how to proactively monitor the most essential parts of your service without relying on human tests.

Automated tests

There are some red lights indicating that you might be missing an important part of your software reliability: the QA team being a bottleneck due to human-resource timing constraints, too many PR reverts before deployment, and a lack of unit tests in each codebase. That usually comes with a very concerning outcome: customer support tickets. They are very bad for the product's reputation and a constant source of stress and growing bug lists.

There are lots of quality checks we could talk about, like enforcing unit test coverage in the pipeline, applying feature flagging, improving the regression QA process, etc. They are all pivotal and needed, but where to start? In similar scenarios, there is a first step I recommend prioritising, while keeping the other processes running in parallel: automated end-to-end tests.

Testing plays a key role in development. By continuously monitoring application workflows and features, your tests can surface broken functionality before your customers do.

-- Best practices for creating end-to-end tests, DataDog

Automated end-to-end tests (we can also call them smoke tests) must cover your most important scenarios and run continuously and after every deployment. Once you cover those four or five most important scenarios, you can keep improving the smoke tests based on the most-used use cases. We currently have powerful tools to write them; let's use Cypress:

describe('Verify dashboard', () => {
    const baseUrl = `${Cypress.env('baseUrl')}`;
    const env = Cypress.env('env');
    const dataFile: any = `credentials_${env}.json`;

    it('Verify raw Admin User profile', () => {
        cy.loginApplication();
        cy.fixture(dataFile).then(testData => {
            const profileUrl = testData['adminUser'].profileUrl;
            cy.visit(baseUrl + profileUrl);
        });
        cy.contains('Profile');
        cy.visit(`${baseUrl}/logout`);
    });
});

Developer experience

A code quality check that a mature codebase has in place is linting and code standards enforcement. As authors, we are used to waiting for CI to run its steps to be sure that the successful result we see when running the steps locally is also reproduced in CI. As reviewers, when the codebase doesn't have a good quality check step in place, we are used to leaving comments asking to check CI or to use the agreed pattern.

Both are part of the passive code review that steals our time from the essential problems we need to review in the code: architectural or business logic problems that are often missed by tired eyes. We can do better and put CI and its tooling to work in our favour.

This can be reproduced in any language but, focusing on PHP, we can use tools like Rector to check and fix those easy-to-spot problems. You can simply add a step to your pipeline that fixes the issues and commits them again.
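
For reference, the Rector side is driven by a rector.php file at the project root. A minimal configuration sketch (the paths and rule sets are illustrative, pick what your team agreed on):

<?php

declare(strict_types=1);

use Rector\Config\RectorConfig;
use Rector\Set\ValueObject\LevelSetList;

return static function (RectorConfig $rectorConfig): void {
    // Directories Rector should process
    $rectorConfig->paths([__DIR__ . '/src', __DIR__ . '/tests']);

    // Rule sets to apply, e.g. everything needed up to PHP 8.1
    $rectorConfig->sets([LevelSetList::UP_TO_PHP_81]);
};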

Some may say this could be a pre-commit hook, but hooks are usually skipped when they take more than a couple of seconds. I agree that it is easy to just run the fixes on the diff locally, but we do better by automating the changes for the cases where developers do not run it before pushing their commits, for whatever reason.

This is a very useful automation that runs for every open pull request, committing fixes for code standards or lint issues. The pipeline will contribute a lot to your codebase with very little maintenance. Note that it executes Rector (therefore moving the code to the state your team agreed on, not just code style) and Easy Coding Standard (which combines the power of PHP_CodeSniffer and PHP CS Fixer in 3 lines):

name: Rector CI

on:
  pull_request: null

jobs:
  rector-ci:
    runs-on: ubuntu-latest
    # run only on commits on main repository, not on forks
    if: github.event.pull_request.head.repo.full_name == github.repository
    env:
      COMPOSER_AUTH: ${{ secrets.COMPOSER_AUTH }}
    steps:
      - uses: actions/checkout@v4
        with:
          # Solves the not "You are not currently on a branch" problem, see https://github.com/actions/checkout/issues/124#issuecomment-586664611
          ref: ${{ github.event.pull_request.head.ref }}
          # Must be used to trigger workflow after push
          token: ${{ secrets.GH_PAT_TOKEN }}

      - uses: shivammathur/setup-php@v2
        with:
          php-version: 8.1
          coverage: none
          extensions: <your-extensions-here>

      -   run: composer install --no-progress --ansi

      ## First run Rector without --dry-run, it would stop the process with exit 1 here
      -   run: vendor/bin/rector process --ansi

      - name: Check for Rector modified files
        id: rector-git-check
        run: |
          export CHANGES=$(if git diff --exit-code --no-patch; then echo "false"; else echo "true"; fi)
          echo "modified=$CHANGES" >> "$GITHUB_OUTPUT"

      - name: Git config
        if: steps.rector-git-check.outputs.modified == 'true'
        run: |
          git config --global user.name 'rector-bot'
          git config --global user.email '[email protected]'
          export LOG=$(git log -1 --pretty=format:"%s")
          echo "COMMIT_MESSAGE=${LOG}" >> "$GITHUB_ENV"

      - name: Commit Rector changes
        if: steps.rector-git-check.outputs.modified == 'true'
        run: git commit -am "[rector] ${COMMIT_MESSAGE}"

      ## Now, there might be coding standard issues after running Rector
      - run: composer run ecs:fix

      - name: Check for CS modified files
        id: cs-git-check
        run: |
          export CHANGES=$(if git diff --exit-code --no-patch; then echo "false"; else echo "true"; fi)
          echo "modified=$CHANGES" >> "$GITHUB_OUTPUT"

      - name: Git config
        if: steps.cs-git-check.outputs.modified == 'true'
        run: |
          git config --global user.name 'rector-bot'
          git config --global user.email '[email protected]'
          export LOG=$(git log -1 --pretty=format:"%s")
          echo "COMMIT_MESSAGE=${LOG}" >> "$GITHUB_ENV"

      - name: Commit CS changes
        if: steps.cs-git-check.outputs.modified == 'true'
        run: git commit -am "[cs] ${COMMIT_MESSAGE}"

      - name: Push changes
        if: steps.cs-git-check.outputs.modified == 'true'
        run: git push

Categories
Solving Problems

Solving problems 1: ECS, Event Bridge Scheduler, PHP, migrations

I love Mondays and business as usual. Solving problems is a delightful day-to-day task; maybe this is what working with software means in the end. Don't get me wrong, it also opens the doors to greenfield projects and experimentation. While mastering the business, I can experiment, change and rebuild.

The solving problems series is just a way to share small ideas, experiences and outcomes of solving daily problems as I go. I wonder if some tips or experiences shared can help you build better what you are working on right now.


During the last months, I have been migrating an important PHP service to ECS Fargate, along with upgrading its runtime. The service is composed of many parts, and we have been architecting the migration so the operation causes no downtime to customers, even though they are spread over four different continents and many time zones.

One very important part of the service is already running in production for some months with success. We are preparing the next service.

For the migration plan, we deployed the infrastructure ahead of moving any traffic, planned a daily incremental traffic switch (5%, 10%, 25%, 50%, 75%), and monitored closely. We also prepared a second plan to avoid a rollback in case a performance issue arose. While monitoring, we created backlog tickets from the observability outcomes.

During the migration phases, prepare yourself beforehand for the initial (1% or 5%) traffic switch, so you can quickly catch those hidden use cases that only happen in production and act on them. If you do so, the other phases are just a matter of watching how scaling behaves.

Using containers (and of course Kubernetes is a great alternative) is a fantastic opportunity to upgrade PHP runtimes efficiently while also moving to a much better platform that helps with delivery and developer experience. The very first and most important step I recommend is to review how you deal with your secrets and environment variables. This is pivotal for a smooth migration.

We can expect that these types of applications have a fair amount of cron jobs associated with them. This is a great opportunity to follow the old saying "use the right tool for the right problem": my suggestion is to rewrite them as Lambda functions or Step Functions, whichever fits what each cron job is doing. This is closer to what a job is and how it should run.

It happens that we cannot always start refactoring right away, and in that case I can say that my experience with EventBridge Scheduler triggering ECS tasks (previously cron jobs) has been great. They are a surprisingly cheap alternative while waiting for the refactoring project to take over. Don't take this as your permanent solution, though: it wastes resources and couples the cron job too much to parts of the application that are not really related to it.

We were reviewing the backlog and observability results of the last service. As we prioritised and executed some backlog tickets, the dashboards and metrics highlighted that we had some room to review scaling and resource thresholds. We changed them carefully, resulting in a bill about 50% cheaper, stable CPU and memory usage, and no performance degradation.

Some notes:

  • Investing in test automation is good for your developer experience, site reliability and revenue; also a great support for technology improvements
  • It is worth taking a look at the ALBRequestCountPerTarget metric if you have CPU-heavy processes, as you can better control how ECS handles scaling policies, avoiding CPU peaks in cases where the average CPU metric alone is not enough to trigger scaling

Categories
PHP Programação

A bref AWS PHP story – Part 2

We are starting Part 2 of the series "A bref AWS PHP story". You can check Part 1, where I presented the PHP language as a reliable and good alternative for serverless applications.

Part 2 shows how CDK describes more AWS resource dependencies; how policies and roles are involved in this process; how to test that they are applied as expected; and how PHP services use those resources.

Some of these topics may seem straightforward to some people, but I would rather not assume they are known to the whole audience, since I have seen PHP developers struggle to put all of this together for the first time because of the paradigm change. It should be fun.

Table of contents:

  1. What else are we doing?
  2. Describing more AWS services - Adding an S3 bucket
  3. Services permissions
  4. Testing CDK
  5. PHP and AWS Services
    1. Handlers
    2. Application, Domain, Infrastructure, etc
  6. Wrap-up
  7. P.S.: Stats

What else are we doing?

The Part 1 function returned a Fibonacci result from an int. Very simple. We will keep it simple for now to focus on putting the PHP code into a lambda and allowing the PHP code to interact with AWS services.

The computing complexity is irrelevant because it could be very complex logic or very simple, and the topics we are discussing in this part of the series will use the same design.

The lambda will now use the result of the Fibonacci of a provided integer, or of a random integer from 400 to 1000 (to get a good image and not overflow the integer). This integer is the number of pixels of an image from the bucket, and part of some arbitrary request metadata we create. If the image does not exist, the lambda will fetch a random image from the web with that number of pixels, save it and generate the metadata.

Get the part-2 source-code on GitHub and the diff from part-1.

Describing more AWS services - Adding an S3 bucket

S3 buckets are simple yet compelling services for multipurpose workloads. One will be added to the series as a basic storage mechanism. The lambda function, now called the GetFibonacciImage function, will need some permissions to manage the bucket.

Starting from the bucket definition, CDK gives us fantastic constructs, and it goes like this:

cdk-stack.ts

    const brefBucket = new Bucket(this, `${stackPrefix}Bucket`, {
      autoDeleteObjects: true,
      removalPolicy: RemovalPolicy.DESTROY,
    });

By default, buckets will not be deleted during a CDK destroy because they need to be empty, so you would end up with a dangling bucket in your account. I don't want to keep the contents if the lambda no longer exists, so the autoDeleteObjects and removalPolicy options are set to enable the destruction of the bucket and its contents when I execute a stack destroy.

We want to decouple the configuration from the implementation to have a more SOLID code. That way, we avoid hard-coded configuration, making our code cleaner and more robust. Then, the code is ready to work, no matter the bucket name.

The implementation code is aware that the name will come from an environment variable and will work with that (yes, if you think that test will be easy to write, you are right):

cdk-stack.ts

and

      environment: {
        BUCKET_NAME: brefBucket.bucketName,
      }

Services permissions

There is a Lambda Function and an S3 Bucket. The described use case determines that the lambda needs read and write permissions to the bucket. And nothing more. It is a good practice to give the minimum necessary permission to a resource:

cdk-stack.ts

    brefBucket.grantReadWrite(getLambda);

The result is a list of actions added to the policy recommended by AWS for operations requiring only read and write.

          Action: [
            "s3:GetObject*",
            "s3:GetBucket*",
            "s3:List*",
            "s3:DeleteObject*",
            "s3:PutObject",
            "s3:PutObjectLegalHold",
            "s3:PutObjectRetention",
            "s3:PutObjectTagging",
            "s3:PutObjectVersionTagging",
            "s3:Abort*",
          ],

Testing CDK

Testing is a great feature of CDK, and we can see how tests can verify our changes with npm t:

First, that there is a function:

  const functionName = 'GetFibonacciImage';
  /* ... */
  it('Should have a lambda function to get fibonacci', () => {
    template.hasResourceProperties('AWS::Lambda::Function', {
      Layers: [Cdk.CdkStack.brefLayerFunctionArn],
      FunctionName: functionName,
    });
  });

And that only the permissions the lambda needs were granted:

  it('Should have a policy for S3', () => {
    template.hasResourceProperties('AWS::IAM::Policy', {
      PolicyName: Match.stringLikeRegexp(`^${stackPrefix}${functionName}ServiceRoleDefaultPolicy`),
      PolicyDocument: {
        Statement: [{
          Action: [
            "s3:GetObject*",
            "s3:GetBucket*",
            "s3:List*",
            "s3:DeleteObject*",
            "s3:PutObject",
            "s3:PutObjectLegalHold",
            "s3:PutObjectRetention",
            "s3:PutObjectTagging",
            "s3:PutObjectVersionTagging",
            "s3:Abort*",
          ],
        }],
      },
    });
  });

You may want to check cdk-stack.test.ts to see more details.

PHP and AWS Services

This is the part where serverless concerns have the least impact on the code, as the PHP code follows the same logic we would use to communicate with AWS services on any other platform (there are always some specific use cases).

Reusing the existing logic is excellent. It reinforces the decision to keep using PHP when moving this workload to serverless, as the bulk of the knowledge and the already proven code remains as-is. We can escape the trap of classifying that PHP code as legacy, as if it should be avoided, terminated or hated.

As a side note, if good software architecture was applied before, only a few external layers of it are touched. During this architectural change, it quickly becomes clear how beneficial and time-saving it is to have a well-architected application, with balanced decisions about which patterns, principles and designs to apply, ultimately giving flexibility to the application and its features.

The handler is now simplified and delegates everything to a class, following the Single Responsibility Principle (SRP), one of the principles we are bringing into the code throughout these code bites:

Handlers

php/handler/get.php

return function ($request, $context) {
    return \BrefStory\Application\ServiceFactory::createGetFibonacciImageHandler()
        ->handle($request, $context)
        ->toApiGatewayFormatV2();
};

To handle the request details, the Fibonacci code now lives in a proper event handler (implements Bref\Event\Handler).

php/src/Event/Handler/GetFibonacciImageHandler.php

    public function handle($event, Context $context): HttpResponse
    {
        $int = (int) (
            $event['queryStringParameters']['int'] ?? random_int(
                self::MIN_PIXELS_FOR_REASONABLE_IMAGE_AND_NOT_BIG_FIBONACCI,
                self::MAX_PIXELS_FOR_REASONABLE_IMAGE_AND_NOT_BIG_FIBONACCI
            )
        );

        $metadata = $this->photoService->getJpegImageFor($int);

        $responseBody = [
            'context' => $context,
            'now' => $this->dateTimeImmutable()->format('Y-m-d H:i:s'),
            'int' => $int,
            'fibonacci' => $this->fibonacci($int),
            'metadata' => $metadata,
        ];

        $response = new JsonResponse($responseBody);

        return new HttpResponse($response->getContent(), $response->headers->all());
    }
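
For context, the class around that method looks roughly like this (the constructor wiring is an assumption; check the repository for the exact code):

use Bref\Context\Context;
use Bref\Event\Handler;
use Bref\Event\Http\HttpResponse;

class GetFibonacciImageHandler implements Handler
{
    public function __construct(private PicsumPhotoService $photoService)
    {
    }

    public function handle($event, Context $context): HttpResponse
    {
        // ... body as shown above ...
    }
}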

We would also like to start testing the PHP code. As the event handler might be a new layer (although very similar to widely used controllers), the php/tests/unit/Event/Handler/GetFibonacciImageHandlerTest.php test class was created for it. Part 2 only focuses on this test class, to avoid overloading the post with too many changes, but we would usually have test coverage for all the code in the repository.
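
A hypothetical shape for that unit test, assuming the constructor wiring sketched above and that PicsumPhotoService can be test-doubled (this is not the exact test in the repository):

use Bref\Context\Context;
use PHPUnit\Framework\TestCase;

final class GetFibonacciImageHandlerTest extends TestCase
{
    public function testReturnsFibonacciAndMetadataForProvidedInt(): void
    {
        $photoService = $this->createMock(PicsumPhotoService::class);
        $photoService->method('getJpegImageFor')->willReturn(['pixels' => 10]);

        $handler = new GetFibonacciImageHandler($photoService);

        $response = $handler->handle(
            ['queryStringParameters' => ['int' => '10']],
            new Context('request-id', 0, 'invoked-function-arn', 'trace-id'),
        );

        // The handler returns a Bref HttpResponse; assert on its API Gateway v2 payload
        $payload = $response->toApiGatewayFormatV2();
        self::assertSame(200, $payload['statusCode']);
        self::assertStringContainsString('"fibonacci"', $payload['body']);
    }
}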

Applications, domains, infrastructure, etc

Finally, we are inside the layers we are most used to. To fit our purposes, the Event Handler will depend on and call an Application layer service that orchestrates all the steps to fetch the image metadata.

php/src/Application/PicsumPhotoService.php#L34-L42

    public function getJpegImageFor(int $imagePixels): array
    {
        try {
            return $this->getImageFromBucket($imagePixels);
        } catch (NoSuchKeyException) {
            // do nothing
        }

        return $this->fetchAndSaveImageToBucket($imagePixels);
    }

The interesting thing to mention about using AWS services is how simply the S3Client is instantiated. There is a factory that creates the service:

php/src/Application/ServiceFactory.php#L22-L29

    public static function createPicsumPhotoService(): PicsumPhotoService
    {
        return new PicsumPhotoService(
            HttpClient::create(),
            new S3Client(),
            getenv('BUCKET_NAME'),
        );
    }
  • new S3Client is all we need because the environment will use AWS credentials, provided to lambda at execution time, as an assumed role that will carry the policies we defined in the CDK constructs stack, i.e., with read and write permissions to the bucket
  • getenv('BUCKET_NAME'), which is gracefully provided by CDK when creating our bucket with whatever dynamic name it pleases; the sketch below shows how the service can use it
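
To illustrate how those two pieces are used inside the service, here is a minimal sketch of the read path, assuming the async-aws S3 API (the property names and key format are illustrative):

private function getImageFromBucket(int $imagePixels): array
{
    // Resolving the response throws NoSuchKeyException when the object is absent,
    // which getJpegImageFor() treats as "fetch from the web and save"
    $object = $this->s3Client->getObject([
        'Bucket' => $this->bucketName,
        'Key' => "{$imagePixels}.jpg",
    ]);

    return [
        'pixels' => $imagePixels,
        'contentLength' => $object->getContentLength(),
    ];
}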

I asked ChatGPT about this class:

The PicsumPhotoService class seems to be following the Single Responsibility Principle (SRP) as it has only one responsibility, which is to provide methods for fetching and saving JPEG images from the Picsum website.

The class has methods to fetch the image from an S3 bucket, and if it's not available, fetches it from the Picsum website, saves it to the S3 bucket, and creates and puts metadata for the image in the S3 bucket.

The class has a clear separation of concerns, where the S3Client and HttpClientInterface are injected through the constructor, and the different functionalities are implemented in separate private methods. Additionally, each method is doing a single task, which makes the code easy to read, test, and maintain.

Therefore, it can be concluded that the PicsumPhotoService class follows SRP.

Wrap-up

It is as simple as that. Check more details in the source code, install it and try it yourself. This project is ready to:

  • Create a lambda function using Bref
  • Create an S3 Bucket with read and write permissions to the lambda
  • Test the stack Cloudformation code
  • Separate the PHP logic
  • Have PHP communicating with AWS Services
  • Start PHP testing

P.S.: Stats

I did not plan to talk much about stats now, but I think I can share the two most significant measurements I have from this simple code so far.

[Update 22/03/23] Using https://k6.io/

1 - With a brand new stack and a cold lambda:

scenarios: (100.00%) 1 scenario, 200 max VUs, 2m30s max duration (incl. graceful stop):
           * default: 200 looping VUs for 2m0s (gracefulStop: 30s)

     data_received..................: 49 MB  409 kB/s
     data_sent......................: 7.8 MB 65 kB/s
     http_req_blocked...............: avg=2.36ms   min=671ns    med=2.27µs   max=581.87ms p(90)=4.18µs   p(95)=7µs
     http_req_connecting............: avg=712.63µs min=0s       med=0s       max=193.34ms p(90)=0s       p(95)=0s
     http_req_duration..............: avg=531.51ms min=204.46ms med=485.24ms max=3.81s    p(90)=517.98ms p(95)=534.3ms
       { expected_response:true }...: avg=513.6ms  min=204.46ms med=485.07ms max=3.67s    p(90)=516.62ms p(95)=531.5ms
     http_req_failed................: 0.60%  ✓ 272        ✗ 44761
     http_req_receiving.............: avg=123.76µs min=13.77µs  med=44.04µs  max=16.78ms  p(90)=71.27µs  p(95)=85.71µs
     http_req_sending...............: avg=14.79µs  min=4.27µs   med=12.43µs  max=402.74µs p(90)=23.97µs  p(95)=31.4µs
     http_req_tls_handshaking.......: avg=1.37ms   min=0s       med=0s       max=330.58ms p(90)=0s       p(95)=0s
     http_req_waiting...............: avg=531.37ms min=204.36ms med=485.11ms max=3.81s    p(90)=517.77ms p(95)=534.13ms
     http_reqs......................: 45033  373.683517/s
     iteration_duration.............: avg=533.96ms min=204.55ms med=485.34ms max=4.37s    p(90)=518.07ms p(95)=534.4ms
     iterations.....................: 45033  373.683517/s
     vus............................: 200    min=200      max=200
     vus_max........................: 200    min=200      max=200

running (2m00.5s), 000/200 VUs, 45033 complete and 0 interrupted iterations

2 - After the first initial execution, with a cold lambda but all available images already saved to the bucket, we got ~3K more requests served in the same time:

scenarios: (100.00%) 1 scenario, 200 max VUs, 2m30s max duration (incl. graceful stop):
           * default: 200 looping VUs for 2m0s (gracefulStop: 30s)

     data_received..................: 53 MB  442 kB/s
     data_sent......................: 8.4 MB 70 kB/s
     http_req_blocked...............: avg=2.26ms   min=631ns    med=2.24µs   max=612.22ms p(90)=4.04µs   p(95)=6.47µs
     http_req_connecting............: avg=663.23µs min=0s       med=0s       max=215.19ms p(90)=0s       p(95)=0s
     http_req_duration..............: avg=490.8ms  min=199.95ms med=484.02ms max=3.17s    p(90)=514.86ms p(95)=527ms
       { expected_response:true }...: avg=490.53ms min=199.95ms med=484.02ms max=2.4s     p(90)=514.85ms p(95)=526.99ms
     http_req_failed................: 0.01%  ✓ 5         ✗ 48754
     http_req_receiving.............: avg=108.86µs min=12.44µs  med=42.68µs  max=17.62ms  p(90)=69.23µs  p(95)=81.87µs
     http_req_sending...............: avg=14.42µs  min=3.9µs    med=12.14µs  max=786.01µs p(90)=23.03µs  p(95)=30.35µs
     http_req_tls_handshaking.......: avg=1.27ms   min=0s       med=0s       max=332.34ms p(90)=0s       p(95)=0s
     http_req_waiting...............: avg=490.68ms min=199.9ms  med=483.91ms max=3.17s    p(90)=514.75ms p(95)=526.89ms
     http_reqs......................: 48759  404.56812/s
     iteration_duration.............: avg=493.16ms min=200.05ms med=484.11ms max=3.17s    p(90)=514.96ms p(95)=527.1ms
     iterations.....................: 48759  404.56812/s
     vus............................: 200    min=200     max=200
     vus_max........................: 200    min=200     max=200

running (2m00.5s), 000/200 VUs, 48759 complete and 0 interrupted iterations
Categories
PHP Programação

A bref AWS PHP story – Part 1

The PHP language is a true and good alternative for Serverless applications. PHP is a fast and flexible programming language, and there are many business treasures inside PHP applications, business logic running well for years inside company codebases worldwide.

We don't need to look at PHP as a language that cannot run inside a modernized stack. We can move some of this code to serverless applications without a total refactoring, benefiting from already proven, successful code. And we know we all have flows suitable for running as a lambda function.

And not only legacy code. New features are also perfect candidates to be run in PHP and lambdas due to the team's experience, consistency of the technology stack, speed, etc. PHP has served the world well and will remain operating well. PHP is alive.

Table of contents:

  1. The series
  2. Functions
  3. Code
    1. Requirements
    2. The lambda
  4. Wrap-up

The series

I am starting a series as a walkthrough for taking PHP into serverless, specifically to run as lambda functions.

We will use Bref, a composer package, to deploy PHP applications to AWS.

Bref (which means "brief" in french) comes as an open source Composer package and helps you deploy PHP applications to AWS and run them on AWS Lambda.
https://bref.sh/docs/

Bref relies on the Serverless framework and AWS access keys to deploy applications.
https://bref.sh/docs/installation.html

The Serverless framework is excellent, but I am more of a fan of AWS CDK, mainly because it is designed around imperative programming languages, which speeds up writing the required infrastructure with excellent constructs at different levels (reasonable defaults), and because its output can be run against a test framework (predictability).

There are already some CDK constructs for PHP but, as far as I can see, they are intended to be used by web app lambdas (e.g. using frameworks such as Laravel and Symfony). However, the purpose of this series is to run event-driven functions, so I will start with pure CDK constructs.

Functions

As a walkthrough, we will digest the series in affordable bites, starting from simple functions that we will improve as the series continues and we use more AWS resources.

Code

Let's start our PHP lambda function. First, it will begin as an HTTP-based lambda, expecting a request and returning a response. Then, it will execute a trivial piece of computing code: it will return a Fibonacci result.

Get the part 1 source-code on GitHub.

Requirements

(optional) There is a Dockerfile and a docker-compose.yml file for your convenience if you prefer to use docker. It will require you to set AWS environment variables for use by the container.

The lambda

You can check the complete source code for part 1; here we will highlight the essential parts of the CDK code and of the PHP code, which differs slightly from what we are used to writing.

The stack creator

In our case, it will create the serverless Stack and related infrastructure, i.e., IAM, the lambda function, and its URL. Anything else we need would be defined and requested by this class.

bin/cdk-stack.ts

// Imports assume AWS CDK v2 (aws-cdk-lib); adjust to your project setup
import { CfnOutput, Stack, StackProps } from 'aws-cdk-lib';
import { Code, Function as LambdaFunction, FunctionUrlAuthType, LayerVersion, Runtime } from 'aws-cdk-lib/aws-lambda';
import { Construct } from 'constructs';
import { join } from 'path';

export class CdkStack extends Stack {

  // Get Bref layer ARN from https://runtimes.bref.sh/
  public static brefLayerFunctionArn = 'arn:aws:lambda:us-east-1:209497400698:layer:php-82:16';

  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const layer = LayerVersion.fromLayerVersionArn(this, 'php-layer', CdkStack.brefLayerFunctionArn);

    const getLambda = new LambdaFunction(this, 'get', {
      layers: [layer],
      handler: 'get.php',
      runtime: Runtime.PROVIDED_AL2,
      code: Code.fromAsset(join(__dirname, `../assets/get`)),
      functionName: 'part1-get',
    });

    const fnUrl = getLambda.addFunctionUrl({authType: FunctionUrlAuthType.NONE});

    new CfnOutput(this, 'TheUrl', {
      // The .url attributes will return the unique Function URL
      value: fnUrl.url,
    });
  }
}

Highlights

The bref php layer:

public static brefLayerFunctionArn = 'arn:aws:lambda:us-east-1:209497400698:layer:php-82:16';

Where you point your entry point and source code:

      handler: 'get.php',
      runtime: Runtime.PROVIDED_AL2,
      code: Code.fromAsset(join(__dirname, `../assets/get`)), // get.php file inside the zip file located at this path

Using the AWS Lambda built-in function URL (we will change to API Gateway later if needed):

    const fnUrl = getLambda.addFunctionUrl({authType: FunctionUrlAuthType.NONE});
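With the stack defined, getting it into your AWS account is the usual CDK flow (assuming your AWS credentials are configured and the aws-cdk CLI is available, e.g. via npx):

npx cdk bootstrap   # only needed once per account/region
npx cdk deploy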

Output

You would see CDK outputting the lambda function URL you will use to run your application. Something like:

Outputs:
CdkStack.TheUrl = https://6eoftivwkq4ht65d2h2fwlmsga0vnpfs.lambda-url.us-east-1.on.aws/

The handler

The PHP entry point to the code is usually named the handler. Its responsibility is to forward the request to a controller or service that will perform the business rules and prepare the response to be returned. Since this is an HTTP-based lambda, the response should be a valid HTTP response.

Note: as the words above suggest, any existing code that fits the lambda computing model can be the controller or service called by the handler. Theoretically, you only need to create a handler compatible with the lambda environment, instantiate your controller or service, pass whatever it requires as an argument, and then return the expected response.

php/handlers/get.php

<?php
return function ($request) {
    $int = (int) ($request['queryStringParameters']['int'] ?? random_int(1, 300));

    $responseBody = [
        'response' => 'OK. Time: ' . time(),
        'now' => date('Y-m-d H:i:s'),
        'int' => $int,
        'result' => fibonacci($int),
    ];

    $response = new \Symfony\Component\HttpFoundation\JsonResponse($responseBody);

    return (new \Bref\Event\Http\HttpResponse($response->getContent(), $response->headers->all()))->toApiGatewayFormatV2();
};

Highlights

All handlers receive the request event (here, as an array). This is how to access the /?int=myValue query string parameter.

    $int = (int) ($request['queryStringParameters']['int'] ?? random_int(1, 300));

The call to the function fibonacci() is how we would call any other controller or service.

'result' => fibonacci($int),
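The implementation of fibonacci() itself is not the point of the series. As a minimal sketch, consistent with the sample responses further below (the actual function lives in the part 1 repository), it could be something like:

<?php
// Minimal sketch of a fibonacci() helper. Floats are used on purpose so that
// large inputs return the scientific-notation values seen in the sample responses.
function fibonacci(int $n): float
{
    $previous = 0.0;
    $current = 1.0;

    for ($i = 0; $i < $n; $i++) {
        [$previous, $current] = [$current, $previous + $current];
    }

    return $previous;
}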

Using the Symfony Response to validate and prepare a valid HTTP response:

$response = new \Symfony\Component\HttpFoundation\JsonResponse($responseBody);

AWS API Gateway (and Lambda function URLs, which use the same payload format) requires a certain response shape. To make sure we return a valid API Gateway response:

    return (new \Bref\Event\Http\HttpResponse($response->getContent(), $response->headers->all()))->toApiGatewayFormatV2();

And that is it. You can now use your lambda function URL from the output of the CDK stack above and call it with or without the query string param ?int=.

➜   curl https://6eoftivwkq4ht65d2h2fwlmsga0vnpfs.lambda-url.us-east-1.on.aws/
{"response":"OK. Time: 1674612343","now":"2023-01-25 02:05:43","int":273,"result":5.05988662735923e+56}%

➜   curl https://6eoftivwkq4ht65d2h2fwlmsga0vnpfs.lambda-url.us-east-1.on.aws/\?int\=500
{"response":"OK. Time: 1674612353","now":"2023-01-25 02:05:53","int":500,"result":1.394232245616977e+104}%

➜   curl https://6eoftivwkq4ht65d2h2fwlmsga0vnpfs.lambda-url.us-east-1.on.aws/\?int\=500
{"response":"OK. Time: 1674612356","now":"2023-01-25 02:05:56","int":500,"result":1.394232245616977e+104}%

The test

We can predict the resources we create via CDK and check if those resources are as expected. The output of the CDK is a CloudFormation template, which we can put under test. That is solid, as unexpected behaviour or changes will fail in our CI pipeline test step.

test/cdk.test.ts

// Imports assume the standard CDK TypeScript project layout; adjust the stack import path to your project
import * as cdk from 'aws-cdk-lib';
import { Template } from 'aws-cdk-lib/assertions';
import * as Cdk from '../bin/cdk-stack';

test('Lambda created', () => {
  const app = new cdk.App();
  // WHEN
  const stack = new Cdk.CdkStack(app, 'MyTestStack');
  // THEN
  const template = Template.fromStack(stack);

  template.hasResourceProperties('AWS::Lambda::Function', {
    Layers: [Cdk.CdkStack.brefLayerFunctionArn]
  });
});
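Assuming the default layout created by cdk init for TypeScript projects (which configures Jest), the test runs with the usual commands:

npm test
# or targeting the file directly
npx jest test/cdk.test.ts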

Highlights

We are checking if there is a lambda function and if that function is using the expected specific bref layer:

  template.hasResourceProperties('AWS::Lambda::Function', {
    Layers: [Cdk.CdkStack.brefLayerFunctionArn]
  });

Wrap-up

We have created our Stack and our first simple HTTP-based PHP lambda function using CDK (with tests). Next, we will improve our lambda to use more AWS resources and to communicate with more complex application services.

Categorias
Banco de dados PostGreSQL

Recover and replace win1252 content to utf-8 for a PostgreSQL database

I am surely not the first to need to export data from a PostgreSQL database installed on Windows with win1252 encoding to a database with UTF-8 encoding (in my case, on a Linux server).

It is not enough to convert the transfer file to UTF-8, as the win1252 characters (left double quote, right double quote, single quote and dash) will still be there, stored as weird values in your database.

In my experience, I had to import the data as-is and afterwards perform an update, using a function to rewrite the data with the correct UTF-8 characters.

The following code examples are for: 1) transforming to HTML entities; 2) transforming to plain single characters.

HTML:

-- DROP FUNCTION substitui_win1252_html(texto);

CREATE OR REPLACE FUNCTION substitui_win1252_html(texto text)
    RETURNS text AS
$$
BEGIN

    texto := replace(replace(replace(replace(replace(texto, '’', '&#8217;'), '“', '&#8220;'), '”', '&#8221;'), '•','&#8226;'), '–', '&#8211;')::text; 
    RETURN texto;

END;
$$
LANGUAGE plpgsql;

Simple:

-- DROP FUNCTION substitui_win1252(texto);

CREATE OR REPLACE FUNCTION substitui_win1252(texto text)
    RETURNS text AS
$$
BEGIN

    texto := replace(replace(replace(replace(replace(texto, '’', ''''), '“', '"'), '”', '"'), '•','&#8226;'), '–', '-')::text;  
    RETURN texto;

END;
$$
LANGUAGE plpgsql;
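As a sketch of how these functions can be applied after importing the data (table and column names here are hypothetical; adjust them to your schema):

-- Hypothetical table/column names: rewrite the already-imported rows in place
UPDATE posts
   SET body = substitui_win1252(body)
 WHERE body ~ '[’“”•–]';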

Don't worry about the squares that you might see. If you copy the code to a good text editor, you will see their actual values.

Categorias
Linux Operational Systems Technology

Arch and my Thinkpad P14s

Here I will list some of the items I used to get Arch running with the minimum I needed for my ThinkPad P14s. I had a seamless experience with my old ThinkPad X1 Carbon.

I attribute this to the Realtek 8852AE, an 802.11ax device used by the P14s, which, although faster, does not work out of the box like the one in the X1.

Hardware

Software

Categorias
PHP

Introducing value objects in PHP

Domain-Driven Design (DDD) is a software design philosophy with one crucial concept: the structure and language of software code (class names, class methods, class variables) should match the business domain. To support this concept, DDD presents Value Objects, which, in practice, represent objects similar to primitive types but modelled after the domain's business rules.

What does it mean?

(Check out the code at https://github.com/rafaelbernard/blog-value-objects)

Value Objects are first described in Evans' Domain-Driven Design book and further explained in Smith and Lerman's Domain-Driven Design Fundamentals course. A Value Object is an immutable type that is distinguishable only by the state of its properties. Unlike an Entity1, a class type that has a unique identifier and remains distinct even if its properties are otherwise identical, two Value Objects with the same properties can be considered equal.

We would use Value Object classes to represent a type strictly and encapsulate the validation rules of that type. Think of an age or a card from a poker deck. They can seem sufficiently represented by primitive types such as integer and string, but in reality, they are governed by strict business rules2. For example, you would like to ensure that a particular age value is a non-negative integer. Or that a picked card has a rank within the allowed range, i.e., there is a limited set of allowed numbers and letters, combined with one of four allowed suits.

To translate this into code, you could think of a Person class in a guest application that maps a person's name and age. Usually, we would say a name is a string and age is an int.

class Person
{
    private string $name;
    private int $age;

    public function __construct(string $name, int $age)
    {
        $this->name = $name;
        $this->age = $age;
    }
}

Although we can think that the specification above is good and we are just instantiating a Person object with all the required attributes, we may have some problems with the implementation because negative values are also int and would be allowed:

$personOk = new Person('John Doe', 18);
$notARealPerson = new Person('Benjamin Button', -18); // -18 year? no way!

The code above will not fail, and it is very common for some business logic to be performed to assert that age will never be negative:

// if from a form
$age = (int) $_POST['age'];

if ($age < 0) {
    throw new \Exception('Age could not be negative');
}

// but you need to copy the check everywhere a Person class is used. What if someone overlooks it?
$person = new Person('John Doe', $age);

When using Value Objects, you equip your class with the exact type you need, correctly applying the business logic. I will give more details later, but here is a hint of how the Person class would look with a Value Object:

use App\Domain\Age;

class Person
{
    private string $name;
    private Age $age;

    public function __construct(string $name, Age $age)
    {
        $this->name = $name;
        $this->age = $age;
    }
}

And now we are always sure that Person objects will have a valid age. Bear with me to see how the Age Value Object class should look.

Samples of Value Objects

You may have already realized that there are many cases where we need similar things in every application. Value Objects can be used to validate a value against an expected format, to simulate an enum with a set of possible values (native enums only arrived in PHP 8.1), or to extend that set of values with additional validation rules.

I will be using PHP 7.4 compatible code in this blog. If you have applications using older versions, adapting the code should not be too complicated. Let me know in the comments if you need samples of how to write it for an older version.

Some examples:

Validating a format:

Our Age class should be implemented as:

<?php

namespace App\Domain;

class Age
{
    private int $value;

    /**
     * @param int $value
     */
    public function __construct(int $value)
    {
        if ($value < 0) {
            throw new \UnexpectedValueException('Negative numbers are not a valid age.');
        }

        $this->value = $value;
    }

    public function value(): int
    {
        return $this->value;
    }
}
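A quick usage sketch of the class above:

$validAge = new Age(18);
echo $validAge->value(); // 18

$invalidAge = new Age(-18); // throws \UnexpectedValueException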

Or a richer example with card suits:

<?php

namespace App\Domain;

class CardSuit
{
    const HEARTS = 'H';
    const DIAMONDS = 'D';
    const CLUBS = 'C';
    const SPADES = 'S';

    const SUITS = [
        self::HEARTS,
        self::DIAMONDS,
        self::CLUBS,
        self::SPADES,
    ];

    private string $value;

    public function __construct(string $value)
    {
        if (!in_array($value, self::SUITS)) {
            throw new \InvalidArgumentException("`$value` is not a valid card suit.");
        }

        $this->value = $value;
    }

    public function __toString()
    {
        return $this->value;
    }

    public function value(): string
    {
        return $this->value;
    }

    // Static helper mirroring the ones in CardRank below (used in the usage examples later)
    public static function spades(): CardSuit
    {
        return new self(self::SPADES);
    }
}

Validating a set and its rules:

Observe the rules when checking the grade with isAlumni() for a given ClassYear. Ideally, we would have a Grade class, but I kept it simpler to make the check easier to understand, and there is a similar example with the poker card classes above.

<?php

namespace App\Domain;

class ClassYear
{
    private int $value;

    /**
     * @param int $value
     */
    public function __construct(int $value)
    {
        // For class year, our business rule is years from 1901 to 2999
        if (!preg_match('/^((19\d{2})|(2)\d{3})$/', $value)) {
            throw new \InvalidArgumentException('Invalid year');
        }

        $this->value = $value;
    }

    public static function fromNow(): self
    {
        return new self((int) date('Y'));
    }

    public function value(): int
    {
        return $this->value;
    }

    public function isAlumni(): bool
    {
        return date('Y') - $this->value >= 13;
    }
}

Enum-like validation:

Observe that it encapsulates some enum-like values, but also a simple rule (isGreaterThan), enriching the type with simple business-rule support that the application can also use.

<?php

namespace App\Domain;

class CardRank
{
    const ACE = 'A';
    const KING = 'K';
    const QUEEN = 'Q';
    const JACK = 'J';
    const TEN = 'X';
    const NINE = '9';
    const EIGHT = '8';
    const SEVEN = '7';
    const SIX = '6';
    const FIVE = '5';
    const FOUR = '4';
    const THREE = '3';
    const TWO = '2';

    const RANKS = [
        self::ACE,
        self::KING,
        self::QUEEN,
        self::JACK,
        self::TEN,
        self::NINE,
        self::EIGHT,
        self::SEVEN,
        self::SIX,
        self::FIVE,
        self::FOUR,
        self::THREE,
        self::TWO
    ];

    const WEIGHTS = [
        self::ACE => 20,
        self::KING => 13,
        self::QUEEN => 12,
        self::JACK => 11,
        self::TEN => 10,
        self::NINE => 9,
        self::EIGHT => 8,
        self::SEVEN => 7,
        self::SIX => 6,
        self::FIVE => 5,
        self::FOUR => 4,
        self::THREE => 3,
        self::TWO => 2
    ];

    private string $value;

    public function __construct(string $value)
    {
        if (!in_array($value, self::RANKS)) {
            throw new \InvalidArgumentException("`$value` is not a valid card rank.");
        }

        $this->value = $value;
    }

    public function __toString()
    {
        return $this->value;
    }

    public function value(): string
    {
        return $this->value;
    }

    public function weight()
    {
        return self::WEIGHTS[$this->value];
    }

    public function isGreaterThan(CardRank $cardRank): bool
    {
        return $this->weight() > $cardRank->weight();
    }

    // Some static helper functions can be created to be more readable

    public static function two(): CardRank
    {
        return new self(self::TWO);
    }

    public static function ace(): CardRank
    {
        return new self(self::ACE);
    }
}

When to use it

Create and use Value Objects whenever you see that they fit "encapsulate the business rules for a given type" and "represent an object similar to a primitive type". This makes the expected type immediately clear in the code and gives you instant validation capabilities.

Some examples:

// a
$person = new Person('John Doe', new Age(20));
public function saveGuest(Person $person);

// b
$currentUserClassYear = ClassYear::fromNow();
if (!$currentUserClassYear->isAlumni()) {
    // do something for a non-alumni user
}

// c
$pickedCard = new Card(new CardSuit(CardSuit::SPADES), new CardRank(CardRank::ACE));

// d
$aceSpade = new Card(CardSuit::spades(), CardRank::ace());
$twoSpade = new Card(CardSuit::spades(), CardRank::two());
if ($aceSpade->isGreaterThan($twoSpade)) {
    // do something when greater, such as sum the weight to count points
}
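The Card class used in examples c and d lives in the linked repository. A minimal sketch of how it could compose the two Value Objects (assuming only the methods shown in this post) might be:

<?php

namespace App\Domain;

class Card
{
    private CardSuit $suit;
    private CardRank $rank;

    public function __construct(CardSuit $suit, CardRank $rank)
    {
        $this->suit = $suit;
        $this->rank = $rank;
    }

    public function suit(): CardSuit
    {
        return $this->suit;
    }

    public function rank(): CardRank
    {
        return $this->rank;
    }

    // Delegates the comparison to the CardRank business rule
    public function isGreaterThan(Card $other): bool
    {
        return $this->rank->isGreaterThan($other->rank());
    }
}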

A code base that uses Value Objects avoids repeating validation code for a type that is not represented by native types, improves readability, and keeps business rules consistent (no one can overlook or forget to "copy the validation check"). Value Objects have built-in validation, making them superior to separate validation classes and creating a relationship with your business domain rules. It keeps the code clean and lean.

You can see all related code at https://github.com/rafaelbernard/blog-value-objects.

Glossary

  1. Entity: An entity may be defined as a thing capable of an independent existence that can be uniquely identified. An entity is an abstraction from the complexities of a domain. When we speak of an entity, we normally speak of some aspect of the real world that can be distinguished from other aspects of the real world. (from Wikipedia)
  2. Business rule: A business rule defines or constrains some aspect of business and always resolves to either true or false. Business rules are intended to assert business structure or to control or influence the behavior of the business.
  3. Object-oriented programming (OOP) is a programming paradigm based on the concept of "objects", which can contain data and code: data in the form of fields (often known as attributes or properties), and code, in the form of procedures (often known as methods).