Categories
Programming

Domain-Driven Design – DDD

Domain-Driven Design (DDD) is an approach to software design that focuses on the core domain and the logic that drives a business. The idea is to model the software based on real-world business concepts, ensuring that the code closely reflects the domain it is meant to serve.

Key aspects of DDD include:

  1. Domain Model: A shared understanding of the business logic, defined in terms meaningful to domain experts and developers.

  2. Ubiquitous Language: A common language shared by technical and non-technical stakeholders to describe the domain, ensuring clarity and reducing miscommunication.

  3. Bounded Contexts: Distinct areas within a larger system where a specific domain model applies. Each context can evolve independently while being integrated with others.

  4. Entities and Value Objects: Entities have unique identities, while value objects are immutable and are defined only by their properties.

  5. Aggregates: Clusters of related objects treated as a unit, ensuring consistency in business operations.

  6. Repositories and Services: Repositories handle data access, while services implement business operations that don’t belong to a single entity.

DDD emphasizes collaboration between developers and domain experts to ensure software design mirrors business processes and terminology.
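
To make aspects 4 and 5 above more tangible, here is a minimal, illustrative PHP sketch; the class names are hypothetical and not tied to any specific project.

<?php

// Value Object: immutable and defined only by its properties
final class Money
{
    public function __construct(
        public readonly int $amountInCents,
        public readonly string $currency,
    ) {
    }

    public function equals(Money $other): bool
    {
        return $this->amountInCents === $other->amountInCents
            && $this->currency === $other->currency;
    }
}

// Entity (and aggregate root): has a unique identity and guards the
// consistency of the cluster of objects it owns
final class Order
{
    /** @var Money[] */
    private array $lines = [];

    public function __construct(public readonly string $orderId)
    {
    }

    public function addLine(Money $price): void
    {
        $this->lines[] = $price;
    }

    public function total(): Money
    {
        // Assumes a single currency for brevity
        $cents = array_sum(array_map(fn (Money $m) => $m->amountInCents, $this->lines));

        return new Money($cents, 'NZD');
    }
}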

A particularly important part of DDD is the notion of Strategic Design - how to organize large domains into a network of Bounded Contexts. [1]

Why is this important for your business?

The design it proposes puts our focus on the core domain and the business logic, which is what makes our product relevant and differentiates it from competitors. DDD boosts the understanding of what our application does rather than which technology (framework, dependencies) it uses.

Domain-Driven Design is an approach to software development that centers the development on programming a domain model that has a rich understanding of the processes and rules of a domain. The name comes from a 2003 book by Eric Evans that describes the approach through a catalog of patterns. Since then a community of practitioners have further developed the ideas, spawning various other books and training courses. The approach is particularly suited to complex domains, where a lot of often-messy logic needs to be organized. [1]

We will see more about how this translates to our code as the key aspects above are expanded in future posts.

Related:
[1] Domain-Driven Design by Martin Fowler
[2] Domain-Driven Design on Wikipedia

Categories
Tropeçando

Tropeçando 114

What's new in PHP 8.4

Don't miss these great features

  • Property hooks
  • new without parentheses
  • JIT changes RFC
  • Implicit nullable types
  • New HTML5 support
  • array_find
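
To give a quick taste of three of these features, here is a small sketch based on my reading of the PHP 8.4 RFCs; refer to the linked article for the full details.

<?php

// Property hooks: validation/transformation attached to the property itself
class Product
{
    public string $name {
        set(string $value) {
            $this->name = trim($value);
        }
    }

    public function __construct(string $name)
    {
        $this->name = $name; // assignment goes through the set hook above
    }
}

// new without wrapping parentheses: chain member access directly on the instantiation
$name = new Product('  Widget  ')->name; // "Widget"

// array_find: first element matching the callback, or null if none matches
$firstEven = array_find([1, 3, 4, 7], fn (int $n) => $n % 2 === 0); // 4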

Serverless Ephemeral Environments with Serverful AWS Services

How to successfully use ephemeral environments with serverful resources, with examples in the AWS CDK and TypeScript.

Comparison of Serverless Development and Hosting Platforms

When designing solutions in the cloud, there is (almost) always more than one alternative for achieving the same goal.

One of the characteristics of cloud-native applications is the ability to have an automated development process (such as the use of CI/CD pipelines).

In this blog post, I will compare serverless solutions for developing and hosting web and mobile applications in the cloud.

Generating fake data using SQL

Fake data is very useful in a development environment, for example for testing your application or checking query performance.

Client-Side Architecture Basics [Guide]

Though the tools we use to build client-side web apps have changed substantially over the years, the fundamental principles behind designing robust software have remained relatively the same. In this guide, we go back to basics and discuss a better way to think about the front-end architecture using modern tools like React, xState, and Apollo Client.

Categories
Tropeçando

Tropeçando 113

Neon

Serverless PostgreSQL database with real zero-scaling. The fully managed serverless Postgres with a generous free tier. We separate storage and compute to offer autoscaling, branching, and bottomless storage.

Compute scales dynamically to ensure you're ready for peak hours. Compute scales to zero and cold storage offloads to S3 for cost efficiency. Create a fully managed serverless Postgres instance in seconds.

Make your app faster with PHP 8.3

PHP 8.3 is the latest version of PHP. It has exciting new features and major improvements in performance. By upgrading to 8.3, you can achieve a significant increase in speed. In this article, we dive into how PHP 8.3 can be a game changer. It can speed up your application's performance.

OWASP Top 10 Explained: SQL Injection

SQL Injection (SQLi) is a code injection technique that exploits a security vulnerability occurring in the database layer of an application.

The vulnerability is present when user inputs are either improperly filtered for string literal escape characters embedded in SQL statements or user input is not strongly typed and thereby unexpectedly executed.

This allows an attacker to manipulate SQL queries, enabling them to access, modify, and delete data in the database without authorization. This can lead to significant breaches of confidentiality, integrity, and availability, ranging from unauthorized viewing of data to complete database compromise.
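
As a quick illustration of the standard mitigation (my own sketch, not taken from the linked article), prepared statements with bound parameters keep user input out of the SQL text; the DSN and credentials below are hypothetical.

<?php

// Vulnerable: user input concatenated straight into the SQL text
// $pdo->query("SELECT id, email FROM users WHERE email = '" . $_GET['email'] . "'");

// Safer: a prepared statement with a bound parameter; the value is sent
// separately from the query text, so it is never interpreted as SQL
$pdo = new PDO('pgsql:host=localhost;dbname=app', 'app_user', 'secret');

$stmt = $pdo->prepare('SELECT id, email FROM users WHERE email = :email');
$stmt->execute([':email' => $_GET['email'] ?? '']);
$user = $stmt->fetch(PDO::FETCH_ASSOC);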

15 Quick Useful Tips for AWS CDK Engineers

In this short article, we will cover 15 useful tips with accompanying code snippets for AWS CDK users.

Implementing DTOs, Mappers & the Repository Pattern using the Sequelize ORM [with Examples] - DDD w/ TypeScript

There are several patterns that we can utilize in order to handle data access concerns in Domain-Driven Design. In this article, we talk about the role of DTOs, repositories & data mappers in DDD.
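
For a tiny PHP flavour of those patterns (an illustrative sketch with hypothetical class names, since the linked article uses TypeScript and Sequelize): the DTO exposes only what should cross the boundary, and the mapper is the single place that knows how to translate between the two shapes.

<?php

// Domain entity: identity plus internal state we do not want to leak
final class User
{
    public function __construct(
        public readonly string $id,
        public readonly string $email,
        private readonly string $passwordHash,
    ) {
    }
}

// DTO: a flat, serializable shape for the outside world (no password hash)
final class UserDto
{
    public function __construct(
        public readonly string $id,
        public readonly string $email,
    ) {
    }
}

// Mapper: translates between the domain model and the DTO
final class UserMapper
{
    public static function toDto(User $user): UserDto
    {
        return new UserDto($user->id, $user->email);
    }
}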

Categories
PHP Programming

Lambda extension to cache SSM and Secrets Values for PHP Lambda on CDK

Introduction

Managing secrets securely in AWS Lambda functions is crucial for maintaining the integrity and confidentiality of your applications. AWS provides services like AWS Secrets Manager and AWS Systems Manager Parameter Store to manage secrets. However, frequent retrieval of secrets can introduce latency and additional costs. To optimize this, we can cache secrets using a Lambda Extension.

In this article, we will demonstrate how to use a pre-existing Lambda Extension to cache secrets for a PHP Lambda function using the Bref layer and AWS CDK for deployment.

At a high level, these are the components involved:

Lambda Execution Components

Using the AWS Parameters and Secrets Lambda Extension to cache parameters and secrets

The new AWS Parameters and Secrets Lambda extension provides a managed parameters and secrets cache for Lambda functions. The extension is distributed as a Lambda layer that provides an in-memory cache for parameters and secrets. It allows functions to persist values through the Lambda execution lifecycle, and provides a configurable time-to-live (TTL) setting.

When you request a parameter or secret in your Lambda function code, the extension retrieves the data from the local in-memory cache, if it is available. If the data is not in the cache or it is stale, the extension fetches the requested parameter or secret from the respective service. This helps to reduce external API calls, which can improve application performance and reduce cost.

Prerequisites

  • AWS Account
  • AWS CLI configured
  • AWS CDK installed
  • PHP installed
  • Composer installed

If you have Docker, all of the requirements are installed inside the container for you.

Repository Overview

The code for this project is available in the following GitHub repository: rafaelbernard/serverless-patterns. The relevant files are located in the lambda-extension-ssm-secrets-cdk-php folder.

Step-by-Step Guide

1. Cloning the Repository

First, clone the repository and navigate to the relevant directory:

git clone --branch rafaelbernard-feature-lambda-extension-ssm-secrets-cdk-php https://github.com/rafaelbernard/serverless-patterns.git
cd serverless-patterns/lambda-extension-ssm-secrets-cdk-php

2. Project Structure

The project structure is as follows:

.
├── assets
│   └── lambda
│       └── lambda.php
├── bin
│   └── cdk.ts
├── cdk
│   └── cdk-stack.ts
├── cdk.json
├── docker-compose.yml
├── Dockerfile
├── example-pattern.json
├── Makefile
├── package.json
├── package-lock.json
├── php
│   ├── composer.json
│   ├── composer.lock
│   └── handlers
│       └── lambda.php
├── README.md
├── run-docker.sh
└── tsconfig.json

3. Setting Up the Lambda Function

The main logic for fetching and caching secrets is in php/handlers/lambda.php:

<?php

use Bref\Context\Context;
use Bref\Event\Http\HttpResponse;
use GuzzleHttp\Client;
use Symfony\Component\HttpFoundation\JsonResponse;

// Responsibilities are simplified into one file for demonstration purposes
// We would normally have those methods in a Service class

function getParam(string $parameterPath): string
{
    // Set `withDecryption=true` if you also want to retrieve SecureString SSM parameters
    $url = "http://localhost:2773/systemsmanager/parameters/get?name={$parameterPath}&withDecryption=true";

    try {
        $client = new Client();

        $response = $client->get($url, [
            'headers' => [
                'X-Aws-Parameters-Secrets-Token' => getenv('AWS_SESSION_TOKEN'),
            ]
        ]);

        $data = json_decode($response->getBody());
        return $data->Parameter->Value;
    } catch (\Exception $e) {
        error_log('Error getting parameter => ' . print_r($e, true));
        // Re-throw so the declared string return type is always honoured
        throw $e;
    }
}

function getSecret(string $secretName): stdClass
{
    $url = "http://localhost:2773/secretsmanager/get?secretId={$secretName}";

    try {
        $client = new Client();

        $response = $client->get($url, [
            'headers' => [
                'X-Aws-Parameters-Secrets-Token' => getenv('AWS_SESSION_TOKEN'),
            ]
        ]);

        $data = json_decode($response->getBody());
        return json_decode($data->SecretString);
    } catch (\Exception $e) {
        error_log('Error getting secretsmanager => ' . print_r($e, true));
        // Re-throw so the declared stdClass return type is always honoured
        throw $e;
    }
}

return function ($request, Context $context) {
    $secret = getSecret(getenv('THE_SECRET_NAME'));
    $response = new JsonResponse([
        'status' => 'OK',
        getenv('THE_SSM_PARAM_PATH') => getParam(getenv('THE_SSM_PARAM_PATH')),
        getenv('THE_SECRET_NAME') => [
            'password' => $secret->password,
            'username' => $secret->username,
        ],
    ]);

    return (new HttpResponse($response->getContent(), $response->headers->all()))->toApiGatewayFormatV2();
};

4. Setting Up AWS CDK Stack

The AWS CDK stack is defined in cdk/cdk-stack.ts:

import { CfnOutput, CfnParameter, Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import { join } from "path";
import { packagePhpCode, PhpFunction } from "@bref.sh/constructs";
import { FunctionUrlAuthType, LayerVersion, Runtime } from "aws-cdk-lib/aws-lambda";
import { StringParameter } from "aws-cdk-lib/aws-ssm";
import { Policy, PolicyStatement } from 'aws-cdk-lib/aws-iam';
import { Secret } from 'aws-cdk-lib/aws-secretsmanager';

export class CdkStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const stackPrefix = id;

    // ARN of the AWS-provided extension layer for this region; exposed as a CfnParameter so it can be overridden at deploy time
    const parameterStoreExtensionArn = 'arn:aws:lambda:us-east-1:177933569100:layer:AWS-Parameters-and-Secrets-Lambda-Extension:11';
    const parameterStoreExtension = new CfnParameter(this, 'parameterStoreExtensionArn', { type: 'String', default: parameterStoreExtensionArn });

    const paramTheSsmParam = new StringParameter(this, `${stackPrefix}-TheSsmParam`, {
      parameterName: `/${stackPrefix.toLowerCase()}/ssm/param`,
      stringValue: 'the-value-here',
    });

    // CDK cannot create SecureString
    // You would create the SecureString out of CDK and use the param name here
    // const paramAnSsmSecureStringParam = StringParameter.fromSecureStringParameterAttributes(this, `${stackPrefix}-AnSsmSecureStringParam`, {
    //   parameterName: `/${stackPrefix.toLowerCase()}/ssm/secure-string/params`,
    // });

    const templatedSecret = new Secret(this, 'TemplatedSecret', {
      generateSecretString: {
        secretStringTemplate: JSON.stringify({ username: 'postgres' }),
        generateStringKey: 'password',
        excludeCharacters: '/@"',
      },
    });

    // The param path that will be used to retrieve value by the lambda
    const lambdaEnvironment = {
      THE_SSM_PARAM_PATH: paramTheSsmParam.parameterName,
      THE_SECRET_NAME: templatedSecret.secretName,
      // If you create the SecureString
      // THE_SECURE_SSMPARAM_PATH: paramAnSsmSecureStringParam.parameterName,
    };

    const functionName = `${id}-lambda`;
    const theLambda = new PhpFunction(this, `${stackPrefix}${functionName}`, {
      handler: 'lambda.php',
      phpVersion: '8.3',
      runtime: Runtime.PROVIDED_AL2,
      code: packagePhpCode(join(__dirname, `../assets/lambda`)),
      functionName,
      environment: lambdaEnvironment,
    });

    // Add extension layer
    theLambda.addLayers(
      LayerVersion.fromLayerVersionArn(this, 'ParameterStoreExtension', parameterStoreExtension.valueAsString)
    );

    // Set additional permissions for parameter store
    theLambda.role?.attachInlinePolicy(
      new Policy(this, 'additionalPermissionsForParameterStore', {
        statements: [
          new PolicyStatement({
            actions: ['ssm:GetParameter'],
            resources: [
              paramTheSsmParam.parameterArn,
              // If you create the SecureString
              // paramAnSsmSecureStringParam.parameterArn,
            ],
          }),
        ],
      }),
    )

    templatedSecret.grantRead(theLambda);

    const fnUrl = theLambda.addFunctionUrl({ authType: FunctionUrlAuthType.NONE });

    new CfnOutput(this, 'LambdaUrl', { value: fnUrl.url });
  }
}

5. Deploying with AWS CDK

Make sure you already have your AWS variables set, then run the command below to install the required dependencies:

# Using docker -- check run-docker.sh
make up

or

# Using local
npm ci
cd php && composer install --no-scripts && cd -

After that, you will have all dependencies installed. Deploy by executing:

# Using docker
make deploy

or

# Using local
npm run deploy

6. Testing the Lambda Function

The CDK output will have the Lambda function URL, which you can use to test and retrieve the values:

Outputs:
LambdaExtensionSsmSecretsCdkPhpStack.LambdaUrl = https://keamdws766oqzr6dbiindaix3a0fdojb.lambda-url.us-east-1.on.aws/

You should see the secret value and parameter value returned by the Lambda function. Subsequent invocations should retrieve the values from the cache, reducing latency and cost.

{
  "status": "OK",
  "/lambdaextensionssmsecretscdkphpstack/ssm/param": "the-value-here",
  "TemplatedSecret3D98B577-4jOWSbUMCHmF": {
    "password": "!o9GpBzpa>dYdo.Gx3J2!<zd(s-Fg;ev",
    "username": "postgres"
  }
}

Performance benefits

A similar example application written in Python ran three tests and reduced API calls by ~98%. I am quoting their findings, as the benefits are the same for this PHP Lambda:

To evaluate the performance benefits of the Lambda extension cache, three tests were run using the open source tool Artillery to load test the Lambda function.

config:
  target: "https://lambda.us-east-1.amazonaws.com"
  phases:
    - duration: 60
      arrivalRate: 10
      rampTo: 40

Test 1: The extension cache is disabled by setting the TTL environment variable to 0. This results in 1650 GetParameter API calls to Parameter Store over 60 seconds.

Test 2: The extension cache is enabled with a TTL of 1 second. This results in 106 GetParameter API calls over 60 seconds.

Test 3: The extension is enabled with a TTL value of 300 seconds. This results in only 18 GetParameter API calls over 60 seconds.

In test 3, the TTL value is longer than the test duration. The 18 GetParameter calls correspond to the number of Lambda execution environments created by Lambda to run requests in parallel. Each execution environment has its own in-memory cache and so each one needs to make the GetParameter API call.

In this test, using the extension has reduced API calls by ~98%. Reduced API calls results in reduced function execution time, and therefore reduced cost.

7. Clean up

To delete the stack, run:

make bash
npm run destroy

Conclusion

In this article, we demonstrated how to use a pre-existing Lambda Extension to cache secrets for a PHP Lambda function using the Bref layer and AWS CDK for deployment. By caching secrets, we can improve the performance and reduce the cost of our serverless applications. The approach detailed here can be adapted to various use cases, enhancing the efficiency of your AWS Lambda functions.

For more information on Parameter Store, Secrets Manager, and Lambda extensions, refer to the AWS documentation.

For more serverless learning resources, visit Serverless Land.

Categories
Technology

Notes – ServerlessDays NZ 2024

These are my notes from ServerlessDays NZ - Auckland, on 24 May 2024.

Sheen Brisals - Think, Architect, and Build Serverless Applications as Set Pieces

During ServerlessDays NZ, Sheen Brisals gave the talk "Think, Architect, Build, Sustain Serverless Application Set Pieces". It was full of important insights on set pieces and on sustaining serverless applications.

I particularly liked how he touched on the fact that rewriting legacy applications for serverless is a thing, as it is part of many engineers' lives everywhere.

More than that, Brisals highlighted how patterns are pivotal for a maintainable and reliable application, regardless of the execution model:

  • Identify Domains so you can decouple a domain to rewrite it more effectively
  • Complexity is better abstracted, becoming simpler, when you know and apply good proven patterns -- inventing a new one should be the exception
  • Design Patterns, Architecture Patterns, Execution Model patterns, Software Design, etc, will improve the quality of your Application. As Serverless will likely push you to learn them, you have the opportunity to develop as an Architect
  • Serverless should help you think about the whole picture, as the set pieces need to communicate with each other, thereby optimising value for the end user

Unfortunately, I was not selected to win the book Serverless Development on AWS, but I hope those who won learn a lot from it. What a great indication of how good a fellow Sheen is. Giving away those books is a gigantic contribution to the community!

I am very pleased to know you in person, Sheen.

This presentation tied in nicely with Michael Walmsley's. So nice.

Heitor Lessa - Let Them Retry: Idempotency for the Rest of Us

While it is common to talk about or assess whether a given application or infrastructure follows best practices and sound architectural patterns, implementing them is a challenge for development teams, for different reasons.

Heitor Lessa, in his talk "Let Them Retry: Idempotency for the Rest of Us", demonstrates how a tool that improves the developer experience by bringing the implementation of the patterns close to the code is powerful for winning adoption. PowerTools is a developer toolkit that accelerates development by providing interfaces and abstractions to implement serverless best practices.

Heitor used sample code, emulating an existing codebase from an application already working in production. We had the opportunity to see the appeal of PowerTools. Usually, idempotency (to handle duplicated transactions) is associated with a good amount of change in the code. Still, PowerTools was designed to introduce no or very little impact to code that is dangerous to change, acting as building blocks for adding more complex functionality, such as caching, payload tampering and failure modes.

The existence of tools like PowerTools reinforces how implementing good and proven software (and architectural) patterns is pivotal for a scalable and reliable application. The serverless execution model can mislead you into relaxed code, but that would weaken the performance and stability of an application. The lesson is that working smarter means applying known solutions to specific problems.

PowerTools provides a wide range of functionality, not surprisingly matching the Well-Architected Framework in its implementation: Secrets/Systems Manager Parameters, Event Source Data Classes, Validation, Feature Flags, Idempotency, Data Masking, Streaming, Middleware, JMESPath, Batch Processing, Metrics, Tracing. We avoid writing boilerplate, repeated code and even the need to create a shared library of constructs ourselves. And the community keeps improving it.

PowerTools is a helpful tool to implement these features. It is also an opportunity to learn and dive deep into best practices and designs, and it enhances how you observe and monitor your application. It is a serious tool to consider if you intend to improve how your code is executed, deployed and monitored, and how it performs.

In his talk, Heitor implemented idempotency into legacy code, live in the meeting. He enriched it with failure modes, caching, payload tampering and order tolerance. So PowerTools is also very easy and quick to use.

Best practices for everyone

  • Heitor Lessa

Michael Walmsley - Unleashing Serverless Scalability on AWS: Practical Strategies and Proven Patterns

Someone started Michael Walmsley's introduction saying "A fantastic human being...". And I will start from there as well, because I have experienced that myself.

I bumped into Michael while walking to the conference venue. I had first heard about him from a great friend, Joshua Katz, who was impressed with Michael. It was a very pleasant walk, sharing quick impressions of being AWS Community Builders and our excitement about the conference.

It happens that Michael is now an AWS Hero with many years of experience to share. One of the first things he did in his talk was to replay Suzana Melo Moraes (you should listen to her - so inspiring), who has three years in tech, saying that, on most days, she struggles with something, usually starting from having no idea how to fix a particular problem she was assigned to solve. Michael sympathised, saying that, even after 30 years, there are days when things happen to him the same way. This happens to everyone involved in this field, and it was so humbling coming from him.

As usual, Michael doesn't keep secrets to himself but shares insightful tips. His presentation was about unleashing serverless scalability on AWS:

  • Start the design with the needed scalability in mind (can you see how that links to Sheen Brisals' talk?)
  • Master and understand the limits well; they are there for a reason, and the earlier you design your application to work with them, the better designed and more scalable it will be
  • Events, Messages, and Commands are the way of communication for Serverless and a must-know subject
  • Do not ignore Flow Control
  • Break your application limits before someone else does -- use performance tests in your favour
  • Study and use proven patterns (check https://serverlessland.com)

Brad Jacques - Delivering at pace while evolving a Serverless architecture

Brad Jacques delivered a talk titled "Delivering at pace while evolving a Serverless architecture" at ServerlessDays NZ. Brad covered a challenging project where file manipulation use cases were an important feature.

"Complexity is everywhere". Brad could not help it advise that a successful delivery starts from breaking the complexity into pieces, to plan ahead of time and to do the simple things first. He mentioned that the deadline was short, affirming it was the right strategy to evolve the architecture.

He also stressed the use of established patterns for success, such as breaking down complexity, identifying domains and context boundaries, and understanding limits and messaging.

It was also important how the work was planned with the team. Having a small committed team, fast feedback loops and continuous measurement were key to proving the solution was correct.

The summary is so great that I will copy it here entirely:

  • Do the simple thing first
  • Small teams with a fast feedback loop (showcase often)
  • Identify risk early, shift left, and spike
  • Continuously measure performance, and stress test
  • Isolate context boundaries
  • The solution must prove itself correct

Brad's insights were based on his experience with a new project for a major client at a consultancy company. However, it was clear that the principles and strategies he shared apply to any application, in any industry, and of any size.

His parting advice was to "evolve your architecture, measure, and make decisions throughout the process."

Categories
PHP Programming

A bref AWS PHP story – Part 3

We are starting Part 3 of the series "A bref AWS PHP story". You can check Part 1, where I presented PHP as a reliable and good alternative for serverless applications, and Part 2, where we saw the usage of CDK features in favour of a faithful CI/CD.

Part 3 shows the upgrade path to Bref 2 and achieves more coverage of AWS resources. We will use DynamoDB, a powerful database for serverless architectures.

Some of these topics may seem straightforward to some people, but I would rather not assume they are known to the audience, since I have seen PHP developers struggle to put all of this together for the first time due to the paradigm change. It should be fun.

Table of contents:

  1. What else are we doing?
  2. Describing more AWS services - Adding a DynamoDB table
  3. Bref upgrade
  4. Testing CDK
  5. PHP and AWS Services
  6. Wrap-up

What else are we doing?

In this section, we'll explore additional functionalities and enhancements to our serverless application. Building upon the foundation laid in Part 2, we'll introduce new features and integrations to further extend the capabilities of our AWS PHP application.

Part 2 uses the Fibonacci result of a provided integer, or of a random integer from 400 to 1000 (to get a good image and avoid integer overflow). This integer is the number of pixels of an image from the bucket and of some arbitrary request metadata we are creating. If the image does not exist, the lambda will fetch a random image from the web with that number of pixels, save it and generate the metadata.

The computing complexity is irrelevant because it could be very complex logic or very simple, and the topics we are discussing in this part of the series will use the same design.

The lambda will now search for the metadata in a DynamoDB table, saving the metadata when it does not exist. DynamoDB is widely used in Lambda code.

Get the part-3 source-code on GitHub and the diff from part-2.

Describing more AWS services - Adding a DynamoDB Table

DynamoDB plays a crucial role in serverless architectures, offering scalable and high-performance NoSQL database capabilities. In this section, we'll delve into the process of integrating DynamoDB into our AWS CDK stack, expanding our application's data storage and retrieval capabilities.

DynamoDB is a fully managed NoSQL database service provided by AWS, offering seamless integration with other AWS services, automatic scaling, and built-in security features. Its scalability, low latency, and flexible data model make it well-suited for serverless architectures and applications with varying throughput requirements.

    const table = new Table(this, TableName, {
      partitionKey: { name: 'PK', type: AttributeType.STRING },
      sortKey: { name: 'SK', type: AttributeType.STRING },
      removalPolicy: RemovalPolicy.DESTROY,
      tableName: TableName,
    });

Following the same principles for creating other AWS resources, we utilize the AWS CDK to define a DynamoDB table within our stack. Let's dive into the key parameters of the Table constructor:

  • partitionKey: This parameter defines the primary key attribute for the DynamoDB table, used to distribute items across partitions for scalability. In our example, { name: 'PK', type: AttributeType.STRING } specifies a partition key named 'PK' with a string type. The naming convention ('PK') is arbitrary and can be tailored to suit your application's needs.
  • sortKey: For tables requiring a composite primary key (partition key and sort key), the sortKey parameter comes into play. Here, { name: 'SK', type: AttributeType.STRING } defines a sort key named 'SK' with a string type. Like the partition key, the name and type of the sort key can be customized based on your data model.
  • removalPolicy: This parameter determines the behaviour of the DynamoDB table when the CloudFormation stack is deleted. By setting RemovalPolicy.DESTROY, we specify that the table should be deleted (destroyed) along with the stack. Alternatively, you can opt for RemovalPolicy.RETAIN to preserve the table post-stack deletion, which may be useful for retaining data.

By decoupling configuration from implementation, we adhere to SOLID principles, ensuring cleaner and more robust code. This approach fosters flexibility, allowing our code to seamlessly adapt to changes, such as modifications to the table name while maintaining its functionality.

The implementation code is aware that the name will come from an environment variable and will work with that (yes, if you think that test will be easy to write, you are right):

    const lambdaEnvironment = {
      TableName,
      TableArn: table.tableArn,
      BucketName: brefBucket.bucketName,
    };

Bref Upgrade

Bref, the PHP runtime for AWS Lambda, continually evolves to provide developers with the latest features and optimizations. Bref simplifies the deployment of PHP applications to AWS Lambda, enabling us to run PHP code serverlessly. In this section, we upgrade our usage of Bref to version 2.0 and explore how it enhances the deployment process and performance of our serverless PHP applications.

The upgrade involves modifying our AWS CDK code to utilize the new features and improvements introduced in Bref 2.0. One notable improvement is the automatic selection of the latest layer of the PHP version, which simplifies the deployment process and ensures that our Lambda functions run on the most up-to-date PHP environment available.

  const getLambda = new PhpFunction(this, `${stackPrefix}${functionName}`, {
    handler: 'get.php',
    phpVersion: '8.3',
    runtime: Runtime.PROVIDED_AL2,
    code: packagePhpCode(join(__dirname, `../assets/get`), {
      exclude: ['test', 'tests'],
    }),
    functionName,
    environment: lambdaEnvironment,
  });

  • `PhpFunction` Constructor: We're using the `PhpFunction` constructor provided by Bref to define our Lambda function. This constructor allows us to specify parameters such as the handler file, PHP version, runtime, code location, function name, and environment variables.
  • `handler`: Specifies the entry point file for our Lambda function, where the execution starts.
  • `phpVersion`: Defines the PHP version to be used by the Lambda function. In this case, we're using PHP version 8.3.
  • `runtime`: Indicates the Lambda runtime environment. Here, `Runtime.PROVIDED_AL2` signifies the use of the Amazon Linux 2 operating system.
  • `code`: Specifies the location of the PHP code to be deployed to Lambda.
  • `functionName`: Sets the name of the Lambda function.
  • `environment`: Allows us to define environment variables required by the Lambda function, such as database connection strings or configuration settings.

By upgrading to Bref 2.0 and configuring our Lambda function accordingly, we ensure compatibility with the latest enhancements and optimizations provided by Bref, thereby improving the performance and reliability of our serverless PHP applications on AWS Lambda.

Testing CDK

Ensuring the correctness and reliability of our AWS CDK infrastructure is crucial for maintaining a robust serverless architecture. In this section, we'll delve into testing our CDK resources, focusing on the DynamoDB table we added in the previous section.

As described earlier, we utilized the AWS CDK to provision a DynamoDB table within our serverless stack. Now, let's ensure that the table is configured correctly and behaves as expected by writing tests using the CDK's testing framework.

First, let's revisit how we added the DynamoDB table:

const table = new Table(this, TableName, {
  partitionKey: { name: 'PK', type: AttributeType.STRING },
  sortKey: { name: 'SK', type: AttributeType.STRING },
  removalPolicy: RemovalPolicy.DESTROY,
  tableName: TableName,
});

In this code snippet, we define a DynamoDB table with specified attributes such as partition key, sort key, removal policy, and table name. Now, to ensure that this table is created with the correct configuration, we'll write tests using CDK's testing constructs.

Check the following test:

test('Should have DynamoDB', () => {
  expectCDK(stack).to(
    haveResource(
      'AWS::DynamoDB::Table',
      {
        "DeletionPolicy": "Delete",
        "Properties": {
          "AttributeDefinitions": [
            {
              "AttributeName": "PK",
              "AttributeType": "S",
            },
            {
              "AttributeName": "SK",
              "AttributeType": "S",
            },
          ],
          "KeySchema": [
            {
              "AttributeName": "PK",
              "KeyType": "HASH",
            },
            {
              "AttributeName": "SK",
              "KeyType": "RANGE",
            },
          ],
          "ProvisionedThroughput": {
            "ReadCapacityUnits": 5,
            "WriteCapacityUnits": 5,
          },
          "TableName": "BrefStory-table",
        },
        "Type": "AWS::DynamoDB::Table",
        "UpdateReplacePolicy": "Delete",
      },
      ResourcePart.CompleteDefinition,
    )
  );
});

This test ensures that the DynamoDB table is created with the correct attribute definitions, key schema, provisioned throughput, table name, and other properties specified during its creation. By writing such tests, we validate that our CDK infrastructure is provisioned accurately and functions as intended.

PHP and AWS Services

Leveraging PHP in a serverless environment opens up new possibilities for interacting with AWS services. In this section, we'll examine how PHP code seamlessly integrates with various AWS services, following best practices for maintaining clean and modular code architecture.

This is the part where we have fewer serverless needs impacting the code, as the PHP code will follow the same logic we might be using to communicate with AWS services on any other platform overall (there are always some specific use cases).

Reusing the same existing logic is excellent. It reinforces the decision to keep using PHP when moving that workload to serverless, as the bulk of the knowledge and already-proven code remains as-is. We escape the trap of classifying that PHP code as legacy, as if it should be avoided, terminated or halted.

As a side note, only a few external layers of our software architecture are touched if good software architecture was applied before. Therefore, during the implementation of this architectural change, it should be quick to realise how beneficial and time-saving it is to have a well-architected application, with balanced decisions on the patterns, principles and designs applied, ultimately giving flexibility to the application and its features.

The handler is now simplified and delegates everything to a class, in the direction of following the Single Responsibility Principle (SRP), a principle we are bringing into the code as we go:

Applications, domains, infrastructure, etc

Our `PicsumPhotoService` is still orchestrating the business logic. The Single Responsibility Principle and Inversion of Control are applied. We are injecting the specialized services in the constructor:

// readonly class PicsumPhotoService
    public function __construct(
        private HttpClientInterface $httpClient,
        private ImageStorageService $storageService,
        private ImageRepository $repository,
    )
    {
    }

Each specialized service has all its dependencies injected in the constructor as well. We can see the factory instantiation:

    public static function createPicsumPhotoService(): PicsumPhotoService
    {
        return new PicsumPhotoService(
            HttpClient::create(),
            new S3ImageService(
                new S3Client(),
                getenv('BucketName'),
            ),
            new DynamoDbImageRepository(
                new DynamoDbClient(),
                getenv('TableName'),
            ),
        );
    }

The `ImageStorageService` will handle all image operations, connecting to the AWS Service when appropriate and observing business logic details. This is a slim interface:

interface ImageStorageService
{
    public function getImageFromBucket(int $imagePixels): ?array;

    public function saveImage(int $imagePixels, mixed $fetchedImage): void;

    public function createAndPutMetadata(int $imagePixels, array $metadata): PutObjectOutput;
}

Instead of `: PutObjectOutput`, we would usually return a domain object, so as not to couple the interface with the implementation details of using S3, but for simplicity I did not create a domain object here. It would be preferable, though.

The `ImageRepository` will handle all metadata operations. It will persist the metadata and observe business logic details as well. Following the same principles, this is a slim interface:

interface ImageRepository
{
    public function findImage(int $imagePixels): ImageMetadataItem;

    public function addImageMetadata(ImageMetadataItem $imageMetadataItem): PutItemOutput;
}

The `ImageMetadataItem` is a representation of one of the domain objects we have in our codebase.

readonly class ImageMetadataItem
{
    public function __construct(public int $imagePixels, public array $metadata)
    {
    }

    public function toDynamoDbItem(): array
    {
        return [
            'PK' => new AttributeValue(['S' => 'IMAGE']),
            'SK' => new AttributeValue(['S' => "PIXELS#{$this->imagePixels}"]),
            'pixels' => new AttributeValue(['N' => "{$this->imagePixels}"]),
            'metadata' => new AttributeValue(['S' => json_encode($this->metadata)]),
            ...ConvertToDynamoDb::item($this->metadata),
        ];
    }

    /**
     * @param array $item
     */
    public static function fromDynamoDb(array $item): static
    {
        return new static(
            (int) $item['pixels']->getN(),
            (array) json_decode($item['metadata']->getS()),
        );
    }
}

If you check the implementation details, it operates transparently with all the services, business logic and AWS services, without any tight coupling to them. There are two utility functions:

  • toDynamoDbItem: to transform the object into a valid DynamoDb Item to be added
  • fromDynamoDb: to perform the opposite operation, transforming a DynamoDb Item into a domain object

The scope of each operation is very clear and does not make the domain depend on those services, as the domain object can be used independently. It does not block any other way of dealing with it, allowing usage with other types of services, such as different databases or APIs. This is very important for the maintainability of the application, without sacrificing readability, as it keeps the context of the utilities in the right place.
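
To round out the picture, here is a hedged sketch of what the `DynamoDbImageRepository` could look like, assuming the AsyncAws SDK that the `AttributeValue` usage above suggests; the real implementation lives in the part-3 source code and may differ, notably in how "not found" is signalled.

<?php

use AsyncAws\DynamoDb\DynamoDbClient;
use AsyncAws\DynamoDb\Result\PutItemOutput;
use AsyncAws\DynamoDb\ValueObject\AttributeValue;

final class DynamoDbImageRepository implements ImageRepository
{
    public function __construct(
        private DynamoDbClient $client,
        private string $tableName,
    ) {
    }

    public function findImage(int $imagePixels): ImageMetadataItem
    {
        $output = $this->client->getItem([
            'TableName' => $this->tableName,
            'Key' => [
                'PK' => new AttributeValue(['S' => 'IMAGE']),
                'SK' => new AttributeValue(['S' => "PIXELS#{$imagePixels}"]),
            ],
        ]);

        $item = $output->getItem();
        if (!$item) {
            // Signalling "not found" is a design choice; an exception keeps the sketch simple
            throw new RuntimeException("No metadata stored for {$imagePixels} pixels");
        }

        return ImageMetadataItem::fromDynamoDb($item);
    }

    public function addImageMetadata(ImageMetadataItem $imageMetadataItem): PutItemOutput
    {
        return $this->client->putItem([
            'TableName' => $this->tableName,
            'Item' => $imageMetadataItem->toDynamoDbItem(),
        ]);
    }
}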

If you check all the PHP code carefully, Bref is such a great abstraction layer that, apart from the handler file, any line of code can be used in a lambda or in a web application interchangeably, without changes. This is very powerful: if the code is well structured, you can migrate some of your existing code to lambda just by creating a handler that triggers it.

Wrap-up

It is as simple as that. Check more details in the source code, install it and try it yourself. This project is ready to:

  • Extend lambda function using Bref
  • Upgrade to use Bref 2.0
  • Create a DynamoDB table
  • Test the stack Cloudformation code
  • Separate the PHP logic
  • Have PHP communicating with AWS Services

Categories
Programming Technology

GitHub Actions workflow for deploying a scheduled task using AWS ECS and EventBridge Scheduler

Cron jobs are repetitive tasks scheduled to run periodically at fixed times, dates, or intervals. They typically automate system maintenance or administration. Some workloads that are still running on non-containerized platforms (VMs, bare metal, etc.) are suitable to be moved to serverless, with multiple alternatives depending on the context of each task.

Considering AWS services, EventBridge Scheduler will be used to manage the tasks for most of the options, as it is capable of invoking many AWS services. One of them is invoking a containerized application, i.e. an ECS task.

Amazon EventBridge Scheduler is a serverless scheduler that allows you to create, run, and manage tasks from one central, managed service. Highly scalable, EventBridge Scheduler allows you to schedule millions of tasks that can invoke more than 270 AWS services and over 6,000 API operations. Without the need to provision and manage infrastructure, or integrate with multiple services, EventBridge Scheduler provides you with the ability to deliver schedules at scale and reduce maintenance costs.
-- https://docs.aws.amazon.com/scheduler/latest/UserGuide/what-is-scheduler.html
(EventBridge Scheduler is recommended instead of scheduled EventBridge rules, formerly CloudWatch Events)

I worked on migrating an application from EC2 to ECS that was still using its cron jobs. The cron jobs were migrated to EventBridge Scheduler. Our CI/CD uses GitHub Actions and Terraform. AWS provides actions that can create and deploy an ECS task definition (the container blueprint) to an ECS service, but there is no action to deploy an ECS task to EventBridge Scheduler, as the cron task is not executed under a service.

To deploy the new code we have to add some code to the GitHub Actions workflow, and I think it might benefit others in a similar context. We use Terraform as Infrastructure as Code (IaC), so keep this in mind if you need to adapt it to your IaC solution.

The full YAML file is below, and I will comment on parts of it separately afterwards.

name: Deploy Scheduled task XYZ

on:
  workflow_dispatch:
    inputs:
      imageHash:
        description: 'Image hash to deploy'
        required: true
        type: string
      environment:
        description: 'Environment to run tests against'
        type: environment
        required: true
        default: test

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

env:
  APP_NAME: application-name-here
  AWS_REGION: "ap-southeast-2"
  ECR_NAME: ecr-name-here
  ECS_CLUSTER: cluster-name-here
  IMAGE_NAME: application-image-name-here
  TASK_NAME: cron-task-name-here

permissions:
  id-token: write
  contents: read    # This is required for actions/checkout

jobs:
  deploy:
    name: Deploy to ${{ inputs.environment }}
    runs-on: ubuntu-latest
    environment: ${{ inputs.environment }}
    env:
      JOB_ENV: ${{ inputs.environment }}
    steps:
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets[format('iam_role_to_assume_{0}', inputs.environment)] }}
          role-session-name: github-ecr-push-workflow-${{ inputs.environment }}
          aws-region: ${{ env.AWS_REGION }}

      - name: Verify image
        run: aws ecr describe-images --repository-name ${{ inputs.environment }}-${{ env.ECR_NAME }}-${{ env.APP_NAME }} --image-ids imageTag=${{ inputs.imageHash }}

      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v2

      - name: Download the task definition ${{ env.TASK_NAME }}
        run: aws ecs describe-task-definition --task-definition ${{ env.TASK_NAME }} --query taskDefinition > task-definition.json

      - name: Fill in the new image ID in the Amazon ECS task definition ${{ env.TASK_NAME }}
        id: task-def-cron
        uses: aws-actions/amazon-ecs-render-task-definition@v1
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
        with:
          task-definition: task-definition.json
          container-name: ${{ env.TASK_NAME }}
          image: ${{ env.ECR_REGISTRY }}/${{ env.JOB_ENV }}-${{ env.ECR_NAME }}-${{ env.APP_NAME }}:${{ inputs.imageHash }}

      - name: Deploy Amazon ECS task definition ${{ env.TASK_NAME }}
        id: deploy-cron
        uses: aws-actions/amazon-ecs-deploy-task-definition@v1
        with:
          task-definition: ${{ steps.task-def-cron.outputs.task-definition }}
          cluster: ${{ inputs.environment }}-${{ env.ECS_CLUSTER }}

      - name: Checkout infrastructure
        uses: actions/checkout@v4
        with:
          repository: orgnamehere/iaas-repo-here
          ref: main
          path: './working-path'
          token: ${{ secrets.PAT_TOKEN }}

      - name: Update schedule ${{ env.TASK_NAME }}
        working-directory: './working-path'
        env:
          GH_TOKEN: ${{ secrets.PAT_TOKEN }}
          INFRASTRUCTURE_FILE: 'path/to/your/module/terraform-file-here.tf'
          UNESCAPED_ARN: ${{ steps.deploy-cron.outputs.task-definition-arn }}
        run: |
          # Escape regexp non-safe characters from the ARN to prevent sed from failing
          export ESCAPED_ARN=${UNESCAPED_ARN//:/\\:}
          export ESCAPED_ARN=${ESCAPED_ARN//\//\\/}
          echo "Escaped ARN: $ESCAPED_ARN"
          # Retrieve <appl-name>:<version> part of the ARN to use in PR 
          export ARRAY_ARN_PARTS=(${UNESCAPED_ARN//\// })
          export VERSION_PART=${ARRAY_ARN_PARTS[1]}
          export COMMIT_MESSAGE="DEPLOY: Deployment on ${{ inputs.environment }} - $VERSION_PART"
          # Use task definition version for branch name
          export BRANCH_NAME="deploy-${VERSION_PART//:/-}"
          git config user.email "[email protected]"
          git config user.name "Github Actions Pipeline"
          git checkout -b ${BRANCH_NAME}
          sed -i '/task_definition_arn /s/".*/'"\"${ESCAPED_ARN}"\"'/' $INFRASTRUCTURE_FILE
          git add ${{ env.INFRASTRUCTURE_FILE }}
          git commit -m "$COMMIT_MESSAGE"
          git push --set-upstream origin ${BRANCH_NAME}
          gh pr create --fill --body "- [x] $COMMIT_MESSAGE"

This pipeline will create the scheduled task definition, check out the IaC repository, change the Scheduler's task definition ARN and create a PR in the IaC repository. We also use that as a way to require approval before deploying to Production, but it can be automated if needed.

Let's comment on some parts:

on:
  workflow_dispatch:
    inputs:
      imageHash:
        description: 'Image hash to deploy'

It is good to separate the image creation from the deployment. This input is required assuming the image was created and published to the registry. This promotes reusability and flexibility.

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets[format('iam_role_to_assume_{0}', inputs.environment)] }}
          role-session-name: github-ecr-push-workflow-${{ inputs.environment }}
          aws-region: ${{ env.AWS_REGION }}

Environment is a required input and this pipeline can be executed against any environment you defined in your repository.

      - name: Deploy Amazon ECS task definition ${{ env.TASK_NAME }}
        id: deploy-cron
        uses: aws-actions/amazon-ecs-deploy-task-definition@v1
        with:
          task-definition: ${{ steps.task-def-cron.outputs.task-definition }}
          cluster: ${{ inputs.environment }}-${{ env.ECS_CLUSTER }}

Although the action name is "deploy task definition", it will create the task definition but not deploy it, as deploying is only possible when you provide a service input (at the time this article is being written). We are not deploying to a service, though, so the action only creates the task definition revision, and the EventBridge Scheduler keeps calling the same task definition it was invoking before this revision was created.

      - name: Checkout infrastructure
        uses: actions/checkout@v4
        with:
          repository: orgnamehere/iaas-repo-here
          ref: main
          path: './working-path'
          token: ${{ secrets.PAT_TOKEN }}

Using the AWS CLI was an alternative we considered, but changing the target of the EventBridge Scheduler that way becomes a little confusing and adds cognitive complexity in case we need to change something. We decided instead to fetch the IaC repository and control the task definition version targeted by the scheduler in Terraform code, so we could also be sure that any dependency the target change could have would be managed by Terraform, instead of another CLI change in this pipeline. We check out the IaC repo into ./working-path to keep the workspace clean. The name is your choice.

      - name: Update schedule ${{ env.TASK_NAME }}
        working-directory: './working-path'
        env:
          GH_TOKEN: ${{ secrets.PAT_TOKEN }}
          INFRASTRUCTURE_FILE: 'path/to/your/module/terraform-file-here.tf'
          UNESCAPED_ARN: ${{ steps.deploy-cron.outputs.task-definition-arn }}
        run: |
          # Escape regexp non-safe characters from the ARN to prevent sed from failing
          export ESCAPED_ARN=${UNESCAPED_ARN//:/\\:}
          export ESCAPED_ARN=${ESCAPED_ARN//\//\\/}
          echo "Escaped ARN: $ESCAPED_ARN"
          # Retrieve <appl-name>:<version> part of the ARN to use in PR 
          export ARRAY_ARN_PARTS=(${UNESCAPED_ARN//\// })
          export VERSION_PART=${ARRAY_ARN_PARTS[1]}
          export COMMIT_MESSAGE="DEPLOY: Deployment on ${{ inputs.environment }} - $VERSION_PART"
          # Use task definition version for branch name
          export BRANCH_NAME="deploy-${VERSION_PART//:/-}"
          git config user.email "[email protected]"
          git config user.name "Github Actions Pipeline"
          git checkout -b ${BRANCH_NAME}
          sed -i '/task_definition_arn /s/".*/'"\"${ESCAPED_ARN}"\"'/' $INFRASTRUCTURE_FILE
          git add ${{ env.INFRASTRUCTURE_FILE }}
          git commit -m "$COMMIT_MESSAGE"
          git push --set-upstream origin ${BRANCH_NAME}
          gh pr create --fill --body "- [x] $COMMIT_MESSAGE"

This is where we use sed to search and replace the ARN in the Terraform code. We escape the ARN before applying sed so it does not break the search regexp.

The Terraform code expected to change will look something like this:

# main.tf
-        task_definition_arn     = "arn:aws:ecs:ap-southeast-2:123456789012:task-definition/task-definition-cron-name:57"
+        task_definition_arn     = "arn:aws:ecs:ap-southeast-2:123456789012:task-definition/task-definition-cron-name:58"

Categories
Tropeçando

Tropeçando 112

Treezor: a serverless banking platform

This case study dives into how Treezor went serverless for their banking platform. From legacy code running on servers to a serverless monolith, and then event-driven microservices on AWS with Bref.

Treezor is a highly available banking application running mostly on PHP.

Wait, is cloud bad?

Forrest Brazeal reviews 37signals' (Basecamp) move from the cloud back to the data center, their use case and some of the reasoning behind the arguments for the data center.

ECS Blue/Green deployment with CodeDeploy and Terraform

How to make Rector Contribute Your Pull Requests Every Day

Do you enjoy making code-reviews with hundreds of rules in your head and adding extra work to the pull-request author?

We don't, so we let Rector do it for us in active code review.

Docker for the late majority

This is a guide for people who would like a brief introduction to Docker and are too afraid to ask for one. I get it. Everyone around you already seems to know what they’re talking about. Looking ignorant is no fun.

10 Essential PHP.ini Tweaks for Improved Web Performance

If you're running a website or web application with PHP, you may have encountered issues with slow loading times, high memory usage, or other performance problems. Fortunately, there are several tweaks you can make to your PHP configuration file (php.ini) to optimize your scripts and improve your website's performance. In this article, I'll cover the top 10 most common changes you might need to make to your php.ini file for best performance.

Categories
Programming Solving Problems

Setting up maintenance mode with Varnish

Varnish is "the free, open-source software that enables super fast delivery of HTTP or API based content", "an HTTP reverse proxy that works by caching frequently requested web pages, so they can be loaded quickly without having to wait for a server response.".

If you rely on some sort of alternative cloud or on servers in a data center, Varnish can act as the CDN, load balancer and API gateway layers at the same time. It is very powerful when you have to manage those services yourself instead of using a cloud service. And this is not that uncommon.

Varnish

Consider the use case where you need a maintenance window for a product and you need to be sure that you are suspending all connections to the backend servers in a consistent way. Serving the maintenance response at the CDN layer is the better choice.

The Varnish Configuration Language (VCL) is a domain-specific programming language used by Varnish to control request handling, routing, caching, and several other aspects.

-- https://www.varnish-software.com/developers/tutorials/varnish-configuration-language-vcl/

This is a very powerful language that, for our use case, allows us to create a synthetic response for the request instead of hitting the backends. There is no need to create a web directory to be served by another web server; the response comes directly from Varnish.

# default.vcl
sub vcl_synth 
{
    # previous headers manipulation if you like
    # and other code that you need for synth if you like
    # (...)

    # Adding an x-cache header to indicate this is a synth response
    set resp.http.x-cache = "synth synth";

    # Maintenance - we are calling the status for this synth 911 because we can have different synths
    if ( resp.status == 911 ) {
        set resp.http.Content-Type = "text/html; charset=utf-8";
        # You can put absolutely what you want
        synthetic ({"
<html>
<head>
    <title>Maintenance mode - Try again later</title>
</head>
<body>
<h1>This website is under maintenance.</h1>
</body>
</html>
"});
        return (deliver);
    }
}

Then I can forward every request for my-domain.com to the maintenance response:

# includes/my-domain-hints.vcl

if ( req.http.host ~ "my-domain.com" ) {
    return(synth(911, ""));
    # All the other VCL configs are below here, but we are returning early above to the maintenance
    # (...)
}

Categories
Solving Problems

Solving problems week 2: Cypress test automation, E2E, DevExp, code standards, rector

In this week's Solving Problems text, I have two topics: code quality and service reliability. I will share some ideas that can improve your set of tools for checking whether your codebase is healthy.

The problem in the code quality topic is how to enforce your standards and keep trivial issues out of the code without sacrificing your reviewers' time and patience.

And the problem with Service Reliability is how to be active in monitoring the most essential parts of your service without relying on human tests.

Automated tests

There are some red lights indicating that you might be missing an important part of your software reliability: the QA team being a bottleneck due to human-resource timing constraints, too many PR reverts before deployment and a lack of unit tests in each codebase. That usually comes with a very concerning outcome: customer support tickets. That is very bad for the product's reputation and a constant source of stress and growing bug lists.

There are lots of quality checks that we could talk about, like enforcing unit test coverage in the pipeline, applying feature flagging, improving the regression QA process steps, etc. They are all pivotal and needed, but where to start? There is a first step, in similar scenarios, that I recommend prioritising while keeping the other processes running in parallel: automated end-to-end tests.

Testing plays a key role in development. By continuously monitoring application workflows and features, your tests can surface broken functionality before your customers do.

-- Best practices for creating end-to-end tests, DataDog

Automated end-to-end tests (we can also call them smoke tests) must cover your most important scenarios and exercise them continuously and after any deployment. After you cover those four or five most important scenarios, you can keep improving the smoke tests based on the most-used use cases. We currently have powerful tools to write them. Let's take Cypress:

describe('Verify dashboard', () => {
    const baseUrl = `${Cypress.env('baseUrl')}`;
    const env = Cypress.env('env');
    const dataFile: any = `credentials_${env}.json`;

    it('Verify raw Admin User profile', () => {
        cy.loginApplication();
        cy.fixture(dataFile).then(testData => {
            const profileUrl = testData['adminUser'].profileUrl;
            cy.visit(baseUrl + profileUrl);
        });
        cy.contains('Profile');
        cy.visit(`${baseUrl}/logout`);
    });
});

Developer experience

A code quality check that a mature codebase has is linting and checking code standards. As authors, we are used to waiting for CI to run its steps and confirm that the same successful status we see when running the steps locally also shows up in CI. As reviewers, we are used to leaving comments asking to check CI, or asking to use the agreed pattern when the codebase doesn't have a good quality check step in place.

Both are part of the passive code review that steals our time from the essential problems we need to review in the code: architectural or business logic problems that are often missed by tired eyes. We can do better and use CI and the right tooling in our favour.

This can be reproduced in any language, but focusing on PHP, we can use tools like Rector to check and fix those easy-to-spot problems. You can just add a step to your pipeline that fixes the errors and commits them again.

Some may say that this could be a pre-commit hook step, but hooks are usually skipped when they take more than a couple of seconds. I agree that it is easy to just run the fixes on the diff locally, but we do better by automating the changes for the cases where developers do not run it before pushing their commits, for whatever reason.

This is a very useful automation that runs for every open pull request, committing fixes for code standards or lint issues. The pipeline will contribute a lot to your codebase with very little maintenance. Note that this executes Rector (therefore moving the code to the state your team agreed on, not just fixing code style) and Easy Coding Standard (which combines the power of PHP_CodeSniffer and PHP CS Fixer).

name: Rector CI

on:
  pull_request: null

jobs:
  rector-ci:
    runs-on: ubuntu-latest
    # run only on commits on main repository, not on forks
    if: github.event.pull_request.head.repo.full_name == github.repository
    env:
      COMPOSER_AUTH: ${{ secrets.COMPOSER_AUTH }}
    steps:
      - uses: actions/checkout@v4
        with:
          # Solves the not "You are not currently on a branch" problem, see https://github.com/actions/checkout/issues/124#issuecomment-586664611
          ref: ${{ github.event.pull_request.head.ref }}
          # Must be used to trigger workflow after push
          token: ${{ secrets.GH_PAT_TOKEN }}

      - uses: shivammathur/setup-php@v2
        with:
          php-version: 8.1
          coverage: none
          extensions: <your-extensions-here>

      -   run: composer install --no-progress --ansi

      ## Run Rector without --dry-run so it fixes the code in place; with --dry-run it would stop the process with exit 1 here
      -   run: vendor/bin/rector process --ansi

      - name: Check for Rector modified files
        id: rector-git-check
        run: |
          export CHANGES=$(if git diff --exit-code --no-patch; then echo "false"; else echo "true"; fi)
          echo "modified=$CHANGES" >> "$GITHUB_OUTPUT"

      - name: Git config
        if: steps.rector-git-check.outputs.modified == 'true'
        run: |
          git config --global user.name 'rector-bot'
          git config --global user.email '[email protected]'
          export LOG=$(git log -1 --pretty=format:"%s")
          echo "COMMIT_MESSAGE=${LOG}" >> "$GITHUB_ENV"

      - name: Commit Rector changes
        if: steps.rector-git-check.outputs.modified == 'true'
        run: git commit -am "[rector] ${COMMIT_MESSAGE}"

      ## Now, there might be coding standard issues after running Rector
      - run: composer run ecs:fix

      - name: Check for CS modified files
        id: cs-git-check
        run: |
          export CHANGES=$(if git diff --exit-code --no-patch; then echo "false"; else echo "true"; fi)
          echo "modified=$CHANGES" >> "$GITHUB_OUTPUT"

      - name: Git config
        if: steps.cs-git-check.outputs.modified == 'true'
        run: |
          git config --global user.name 'rector-bot'
          git config --global user.email '[email protected]'
          export LOG=$(git log -1 --pretty=format:"%s")
          echo "COMMIT_MESSAGE=${LOG}" >> "$GITHUB_ENV"

      - name: Commit CS changes
        if: steps.cs-git-check.outputs.modified == 'true'
        run: git commit -am "[cs] ${COMMIT_MESSAGE}"

      - name: Push changes
        if: steps.cs-git-check.outputs.modified == 'true'
        run: git push
