
PHP 8.1: more on new in initializers

I could not agree more with Brent when he says the following about the "new in initializers"[1] feature:

PHP 8.1 adds a feature that might seem like a small detail, but one that I think will have a significant day-by-day impact on many people.

When I see this new feature, lots of places that use Dependency Injection[3] come to mind as candidates to be impacted, such as application or infrastructure service classes. As a result, we will write much cleaner and leaner code without giving up on the good practices for writing modular, maintainable and testable software.

The Dependency Inversion Principle[4] gives us decoupling powers. But we know that many classes will receive the same concrete implementation most (if not all) of the time.

So it is very common to see some variation of this code:

$someDependencyToBeInjected = FactoryClass::create();
$someService = new SomeServiceClass($someDependencyToBeInjected);

Important note: I will ignore, for now, service containers and framework features that deal with service instantiation, autowiring, etc.

Think of a database query service class: it depends on a connection object. Every time you need to instantiate the query service class, you need to prepare the connection dependency and inject it into the service class. Database connections are a great example of a dependency where you use the same concrete implementation more than 90% of the time.

The same applies to a service class that handles business logic and depends on QueryService and CommandHandler interfaces to do its job.
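To make the examples easier to follow, here is a minimal sketch of what those interfaces could look like. These are hypothetical definitions inferred from the methods used later in this post, not necessarily the exact ones from the repository:

// hypothetical minimal interface definitions, inferred from the usage below
interface LeadQueryService
{
    public function getByEmail(string $email): array|false;

    public function getAll(): array;
}

interface LeadCommandHandler
{
    public function add(string $email): int;
}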

Before PHP 8.1, we would write code like this:

// service class to apply business logic
// most standard
class DefaultLeadRecordService implements LeadRecordService
{
    public function __construct(
        private LeadQueryService $queryService,
        private LeadCommandHandler $commandHandler
    ) {
    }
}

// infrastructure class to match the interface -- a DBAL concrete class
// sneakily allowing a "default" value, but also open to Dependency Injection
// but not that great
class DbalLeadQueryService implements LeadQueryService
{
    public function __construct(private ?Connection $connection = null)
    {
        if (!$this->connection) {
            $this->connection = Core::getConnection();
        }
    }
}

// instantiation would be something like -- given you have $connection already instantiated
$connection =  \Doctrine\DBAL\DriverManager::getConnection($connectionParams, $config);

$service = new \Blog\Application\DefaultLeadRecordService(
    new \Blog\Infrastructure\DbalLeadQueryService($connection),
    new \Blog\Infrastructure\DbalLeadCommandHandler($connection),
);

// if we let the constructor fall back to default values
$service = new \Blog\Application\DefaultLeadRecordService(
    new \Blog\Infrastructure\DbalLeadQueryService(),
    new \Blog\Infrastructure\DbalLeadCommandHandler(),
);

With PHP 8.1, you can write it like so:

class DefaultLeadRecordService implements LeadRecordService
{
    public function __construct(
        private LeadQueryService $queryService = new DbalLeadQueryService(),
        private LeadCommandHandler $commandHandler = new DbalLeadCommandHandler()
    ) {
    }
}

// we see there is still room for new features here
// still not that great
class DbalLeadQueryService implements LeadQueryService
{
    public function __construct(private ?Connection $connection = null)
    {
        // waiting for a future feature that allows static method calls as default parameter values
        if (!$this->connection) {
            $this->connection = Core::getConnection();
        }
    }
}

$service = new \Blog\Application\DefaultLeadRecordService();

A one-liner! That saves a lot of typing, and the code remains very well structured. This is the type of significant impact we will have on our day-to-day work. We will write more straightforward, robust and meaningful code, and we will ship features faster with high quality.
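The defaults do not lock us in, though: we can still inject any other implementation of those interfaces when we need to. A minimal sketch, assuming hypothetical in-memory implementations of the same interfaces:

// defaults are only used when no argument is given; explicit injection still works
// InMemoryLeadQueryService and InMemoryLeadCommandHandler are hypothetical alternatives
$service = new \Blog\Application\DefaultLeadRecordService(
    new \Blog\Infrastructure\InMemoryLeadQueryService(),
    new \Blog\Infrastructure\InMemoryLeadCommandHandler(),
);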

If you want to see a full implementation of this, check the code at https://github.com/rafaelbernard/blog-php-81-new-initializers

Test, our faithful friend

Writing tests is a must for any repository where quality is a requirement. And the "new in initializers" feature does not force us to give up on a complete test suite. We still have the full power of unit and integration tests.

For application code, we would write unit tests, setting all the expectations for the concrete dependencies:

<?php

namespace Test\Unit\Blog\Application;

use Blog\Application\DefaultLeadRecordService;
use Blog\Domain\LeadCommandHandler;
use Blog\Domain\LeadQueryService;
use PHPUnit\Framework\MockObject\MockObject;
use Test\TestCase;

class DefaultLeadRecordServiceTest extends TestCase
{
    private const EMAIL = '[email protected]';

    private LeadQueryService|MockObject $leadQueryServiceMock;
    private LeadCommandHandler|MockObject $leadCommandHandlerMock;

    private DefaultLeadRecordService $service;

    protected function setUp(): void
    {
        parent::setUp();

        $this->leadQueryServiceMock = $this->getMockBuilder(LeadQueryService::class)->getMock();
        $this->leadCommandHandlerMock = $this->getMockBuilder(LeadCommandHandler::class)->getMock();

        $this->service = new DefaultLeadRecordService($this->leadQueryServiceMock, $this->leadCommandHandlerMock);
    }

    public function testCanAdd()
    {
        $this->leadQueryServiceMock
            ->expects(self::once())
            ->method('getByEmail')
            ->with(self::EMAIL)
            ->willReturn(false);

        $this->leadCommandHandlerMock
            ->expects(self::once())
            ->method('add')
            ->with(self::EMAIL)
            ->willReturn(1);

        $result = $this->service->add(self::EMAIL);

        self::assertEquals(1, $result);
    }

    public function testAddExistentReturnsFalse()
    {
        $this->leadQueryServiceMock
            ->expects(self::once())
            ->method('getByEmail')
            ->with(self::EMAIL)
            ->willReturn(['email' => self::EMAIL]);

        $this->leadCommandHandlerMock
            ->expects(self::never())
            ->method('add');

        $result = $this->service->add(self::EMAIL);

        self::assertFalse($result);
    }

    public function testCanGetAll()
    {
        $unsorted = [
            ['email' => '[email protected]'],
            ['email' => '[email protected]'],
            ['email' => '[email protected]'],
        ];

        $this->leadQueryServiceMock
            ->expects(self::once())
            ->method('getAll')
            ->willReturn($unsorted);

        $fetched = $this->service->getAll();

        $expected = $unsorted;
        asort($expected);

        self::assertEquals($expected, $fetched);
    }
}

Integration tests can be written for infrastructure code. For instance, we can use an SQLite database file to assert the logic of the database operations.

Be aware that I am creating an SQLite temp database file on demand for each test execution with $this->databaseFilePath = '/tmp/test-' . time(); and, thanks to the DBAL library, we can be reasonably confident that the operations will work for any supported database.

As an alternative, it is highly recommended to create a container with a seeded database that is compatible with your production database system.

<?php

namespace Test\Integration\Blog\Infrastructure;

use Blog\Infrastructure\DbalLeadQueryService;
use Doctrine\DBAL\Connection;
use Doctrine\DBAL\DriverManager;
use Faker\Factory;
use Faker\Generator;
use Test\TestCase;

class DbalLeadQueryServiceTest extends TestCase
{
    private string $databaseFilePath;

    private Generator $faker;
    private Connection $connection;

    private DbalLeadQueryService $service;

    public function testCanGetAll()
    {
        $this->addEmail($email1 = $this->faker->email());
        $this->addEmail($email2 = $this->faker->email());
        $this->addEmail($email3 = $this->faker->email());

        $expected = [
            ['email' => $email1],
            ['email' => $email2],
            ['email' => $email3],
        ];

        $fetched = $this->service->getAll();

        self::assertEquals($expected, $fetched);
    }

    protected function setUp(): void
    {
        parent::setUp();

        $this->faker = Factory::create();

        $this->createLeadTable();

        $this->service = new DbalLeadQueryService($this->connection());
    }

    protected function tearDown(): void
    {
        parent::tearDown();

        $this->dropDatabase();
    }

    private function connection(): Connection
    {
        if (!isset($this->connection)) {
            $this->databaseFilePath = '/tmp/test-' . time();

            $config = new \Doctrine\DBAL\Configuration();
            $connectionParams = [
                'url' => "sqlite:///{$this->databaseFilePath}",
            ];

            $this->connection = DriverManager::getConnection($connectionParams, $config);
        }

        return $this->connection;
    }

    private function dropDatabase()
    {
        @unlink($this->databaseFilePath);
    }

    private function createLeadTable(): void
    {
        $this->connection()->executeQuery('CREATE TABLE IF NOT EXISTS leads ( email VARCHAR )');
    }

    private function addEmail(string $email): int
    {
        return $this->connection()->insert('leads', ['email' => $email]);
    }
}

Conclusion

PHP is evolving very quickly, with new features that enable higher-quality software, help developers, and reinforce its commitment to the fact that most of the web runs on some flavour of PHP. New features improve readability, software architecture, test coverage and performance. They are all proof of a mature, living language.

Upgrade to PHP 8.1 and use "new in initializers" as soon as possible. You will not regret it.

If there is something you want to discuss more, let me know in the comments.

Links:

  1. New in initializers RFC
  2. Road to PHP 8.1
  3. Dependency Injection
  4. Dependency inversion principle
  5. Interface segregation principle
  6. Solid relevance
  7. SOLID principles

PHP Memory Usage and Performance Improvements Tips

Memory usage and performance improvements make everybody happier, from end users to cloud and infrastructure engineers. And they are all right: this is an optimization that we should pursue as much as possible.

I am also keeping this page as a reference for my future self, because we cannot rely too much on our own memory, and this will be a good reference I want to revisit. I will update this page constantly.

Prefer objects with declared properties over arrays

Arrays have a larger memory footprint: to avoid constantly reallocating and reassigning memory pointers, PHP reserves larger chunks of memory as more elements or indexes are added.

[Figure: array vs. object memory usage]
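A quick, self-contained way to see the difference yourself is to build the same data both ways and compare memory_get_usage(). This is just a rough sketch; the exact numbers depend on your PHP version:

<?php

// rough comparison: the same records stored as associative arrays vs. objects
class Point
{
    public $x;
    public $y;
}

$before = memory_get_usage();
$asArrays = [];
for ($i = 0; $i < 100000; $i++) {
    $asArrays[] = ['x' => $i, 'y' => $i];
}
echo 'Arrays:  ' . (memory_get_usage() - $before) . " bytes\n";

unset($asArrays);

$before = memory_get_usage();
$asObjects = [];
for ($i = 0; $i < 100000; $i++) {
    $point = new Point();
    $point->x = $i;
    $point->y = $i;
    $asObjects[] = $point;
}
echo 'Objects: ' . (memory_get_usage() - $before) . " bytes\n";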

Be careful with self-references that prevent the garbage collector from working

The garbage collector works as expected when the internal reference count (how many times a value is used) reaches zero:

$x = "foobar";    // refcount = 1
$y = $x;            // refcount = 2
unset($x);      // refcount = 1
unset($y);      // refcount = 0 -> garbage collector will be happy ==> Destroy!

But self-referencing can be tricky:

$x = [];            // refcount = 1
$x[0] =& $x;    // refcount = 2
unset($x);      // refcount = 1
                    // It will never come to zero due to cycle

The cycle collector will eventually destroy it, but it will hang around in memory for a while anyway.
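A small sketch to see this in practice: create a bunch of cyclic arrays, drop the variables, and watch the memory only come back after gc_collect_cycles() runs (the numbers are illustrative):

<?php

$before = memory_get_usage();

for ($i = 0; $i < 5000; $i++) {
    $x = [];
    $x[0] = &$x;   // the array references itself, so its refcount never reaches zero
    unset($x);     // the variable is gone, but the cyclic value stays in memory
}

echo 'Leaked:    ' . (memory_get_usage() - $before) . " bytes\n";

// force the cycle collector instead of waiting for it to trigger on its own
$collected = gc_collect_cycles();

echo "Collected: {$collected} cycles\n";
echo 'Remaining: ' . (memory_get_usage() - $before) . " bytes\n";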

Sprintf vs double/single quote concatenation

A very common use case is string concatenation or interpolation, when you want to add a variable to a static string. It is interesting to note that:

If you have PHP < 7.4, use double-quote interpolation or single-quote concatenation over the sprintf function.

<?php 

$this->start($loop);

ob_start();

for ($i = 0; $i < $this->loop; ++$i) {
    print 'Lorem '.$i.' ipsum dolor sit amet, consectetur adipiscing elit. Proin malesuada, nisl sit amet congue blandit';
}

ob_end_clean();

return $this->end();

If you have PHP 7.4 or greater, use sprintf:

<?php 

$this->start($loop);

for ($i = 0; $i < $this->loop; ++$i) {
    $value = sprintf('Lorem %s ipsum dolor sit amet, consectetur adipiscing elit. Proin malesuada, nisl sit amet congue blandit', $i);
}

return $this->end();
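The snippets above rely on a benchmark harness ($this->start() / $this->end()) that is not shown here. If you just want a quick standalone comparison, something along these lines works (a sketch, not the original harness):

<?php

$iterations = 1000000;

// double-quote interpolation
$start = microtime(true);
for ($i = 0; $i < $iterations; $i++) {
    $value = "Lorem {$i} ipsum dolor sit amet, consectetur adipiscing elit.";
}
$interpolationTime = microtime(true) - $start;

// sprintf
$start = microtime(true);
for ($i = 0; $i < $iterations; $i++) {
    $value = sprintf('Lorem %s ipsum dolor sit amet, consectetur adipiscing elit.', $i);
}
$sprintfTime = microtime(true) - $start;

printf("interpolation: %.4f s\nsprintf:       %.4f s\n", $interpolationTime, $sprintfTime);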

PHP Benchmarking

PHPBench.com was constructed as a way to open people's eyes to the fact that not every PHP code snippet will run at the same speed. You may be surprised at the results that this page generates, but that is ok. This page was also created so that you would be able to find discovery in these statistics and then maybe re-run these tests in your own server environment to play around with this idea yourself, by using the code examples (these code examples are automatically generated and as the code in my .php files change, so do they).

PHP benchmarks and optimizations

Collection of tests and benchmarks for common operations in PHP. Tests run on several versions of PHP. There is an option to compare different solutions to the same problem, such as checking values with isset versus !empty, and see how they perform against each other.


PostgreSQL – pg_upgrade from 10 to 12

I have some PostgreSQL databases running pretty well, but we need to keep our software updated. This is a mandatory practice for a high-quality service. Those servers are running version 10, and they need to be upgraded to version 12. I have used the pg_dump / pg_restore strategy for a long time, but this time I would rather use pg_upgrade.

Let's dive into how to do it.

Table of contents:

  1. Install
  2. InitDB
  3. Check upgrade consistency
  4. Set locale
  5. Upgrade
  6. Configurations

Install

The package postgresql12-server contains everything needed to run the server, but my databases use some extensions [1], so I will also add postgresql12-devel and postgresql12-contrib to be able to compile and install the extensions.

yum install postgresql12-server postgresql12-devel postgresql12-contrib

InitDB

After installation, we need to set up the new cluster with initdb:

~% /usr/pgsql-12/bin/postgresql-12-setup initdb
Initializing database … OK

Check upgrade consistency

We need to check compatibility. Switch to the postgres user (su - postgres) and run the command:

~% /usr/pgsql-12/bin/pg_upgrade --old-bindir=/usr/pgsql-10/bin --new-bindir=/usr/pgsql-12/bin --old-datadir=/var/lib/pgsql/10/data --new-datadir=/var/lib/pgsql/12/data --check

Performing Consistency Checks on Old Live Server
------------------------------------------------
Checking cluster versions                                   ok
Checking database user is the install user                  ok
Checking database connection settings                       ok
Checking for prepared transactions                          ok
Checking for reg* data types in user tables                 ok
Checking for contrib/isn with bigint-passing mismatch       ok
Checking for tables WITH OIDS                               fatal

Your installation contains tables declared WITH OIDS, which is not supported
anymore. Consider removing the oid column using
    ALTER TABLE ... SET WITHOUT OIDS;
A list of tables with the problem is in the file:
    tables_with_oids.txt

Failure, exiting

As you may see, I got a fatal error, indicating that the upgrade is not possible yet. In my case, tables with OIDs are the culprit. In your case, it could be something else. Either way, we need to fix it before upgrading.

I fixed the mentioned tables by removing the OIDs and ran the check again:

Performing Consistency Checks on Old Live Server
------------------------------------------------
Checking cluster versions                                   ok
Checking database user is the install user                  ok
Checking database connection settings                       ok
Checking for prepared transactions                          ok
Checking for reg* data types in user tables                 ok
Checking for contrib/isn with bigint-passing mismatch       ok
Checking for tables WITH OIDS                               ok
Checking for invalid "sql_identifier" user columns          ok
Checking for presence of required libraries                 ok
Checking database user is the install user                  ok
Checking for prepared transactions                          ok

*Clusters are compatible*

Yay!

Set locale

There is a tricky configuration that is not detected by the pg_upgrade check but is very important to me. I use the C locale on my databases [2], so I need to perform an extra step. If this is your case, you may follow the same steps, applying your own locale.

I need to stop postgresql10 and start postgresql12:

systemctl stop postgresql-10.service
systemctl start postgresql-12.service

Then I change the locale on template1, so the locale will already be in place when my database is upgraded.

UPDATE pg_database SET datcollate='C', datctype='C' WHERE datname='template1';

And stop again: systemctl stop postgresql-12.service to be ready to upgrade.

Upgrade

The upgrade command is the same one we ran before, without the --check flag.

~% /usr/pgsql-12/bin/pg_upgrade --old-bindir=/usr/pgsql-10/bin --new-bindir=/usr/pgsql-12/bin --old-datadir=/var/lib/pgsql/10/data --new-datadir=/var/lib/pgsql/12/data

Performing Consistency Checks
-----------------------------
Checking cluster versions                                   ok
Checking database user is the install user                  ok
Checking database connection settings                       ok
Checking for prepared transactions                          ok
Checking for reg* data types in user tables                 ok
Checking for contrib/isn with bigint-passing mismatch       ok
Checking for tables WITH OIDS                               ok
Checking for invalid "sql_identifier" user columns          ok
Creating dump of global objects                             ok
Creating dump of database schemas
                                                            ok
Checking for presence of required libraries                 ok
Checking database user is the install user                  ok
Checking for prepared transactions                          ok

If pg_upgrade fails after this point, you must re-initdb the
new cluster before continuing.

Performing Upgrade
------------------
Analyzing all rows in the new cluster                       ok
Freezing all rows in the new cluster                        ok
Deleting files from new pg_xact                             ok
Copying old pg_xact to new server                           ok
Setting next transaction ID and epoch for new cluster       ok
Deleting files from new pg_multixact/offsets                ok
Copying old pg_multixact/offsets to new server              ok
Deleting files from new pg_multixact/members                ok
Copying old pg_multixact/members to new server              ok
Setting next multixact ID and offset for new cluster        ok
Resetting WAL archives                                      ok
Setting frozenxid and minmxid counters in new cluster       ok
Restoring global objects in the new cluster                 ok
Restoring database schemas in the new cluster
                                                            ok
Copying user relation files
                                                            ok
Setting next OID for new cluster                            ok
Sync data directory to disk                                 ok
Creating script to analyze new cluster                      ok
Creating script to delete old cluster                       ok

Upgrade Complete
----------------
Optimizer statistics are not transferred by pg_upgrade so,
once you start the new server, consider running:
    ./analyze_new_cluster.sh

Running this script will delete the old cluster's data files:
    ./delete_old_cluster.sh

Consider running analyze_new_cluster.sh. It is optional but nice to have.

vacuumdb: processing database "mydb": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "postgres": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "template1": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "mydb": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "postgres": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "template1": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "mydb": Generating default (full) optimizer statistics
vacuumdb: processing database "postgres": Generating default (full) optimizer statistics
vacuumdb: processing database "template1": Generating default (full) optimizer statistics

Done

Configurations

Before deleting your old cluster, remember to carry over some of your configurations.

mv /var/lib/pgsql/12/data/pg_hba.conf /var/lib/pgsql/12/data/pg_hba.conf.new
cp /var/lib/pgsql/10/data/pg_hba.conf /var/lib/pgsql/12/data/

I am not a big fan of writing directly to the postgresql.conf file. Instead, I keep configuration files under version control and include the directory where those files are deployed. I treat them as code, so they become easier to maintain and manage.

Another advantage is that I don't have any mess with config file differences when a new version arrives. I automatically use the new default configurations, my customized settings are loaded, and I can quickly address any incompatibility caused by the migration without touching the original conf file.

Let's go for it:

# Add settings for extensions here
include_dir = '/var/lib/pgsql/conf.d'   # include files ending in '.conf' from

Then you may delete your old cluster. 🙂

Links:

  1. Extensions: https://www.postgresql.org/docs/current/contrib.html
  2. Locale: https://www.postgresql.org/docs/current/locale.html

PHP Test Coverage Using Bitbucket and Codacy

Wikipedia:

In computer science, code coverage is a measure used to describe the degree to which the source code of a program is tested by a particular test suite. A program with high code coverage has been more thoroughly tested and has a lower chance of containing software bugs than a program with low code coverage.

Testing is an unavoidable process for building trustworthy software. Unfortunately, in the PHP world we have a massive amount of legacy software still running today that is very valuable but was born in an age when testing was skipped for various reasons.

As we refactor those untested systems into tested ones, or create new projects already focused on having tests, we can go one step further and measure code coverage, improving bug protection and code quality.

You can use these steps for a legacy project, a new project, a well-covered project or a poorly covered project; no matter the state of your project.

We are considering a PHP project using Bitbucket Pipelines as our CI and Codacy for monitoring our test coverage reports, but the main concepts can easily be applied to other tools.

Table of contents:

  1. Dependencies installation
  2. PHPUnit configuration
  3. Set up Codacy project API token
  4. Pipelines configuration

1. Dependencies installation

Use composer to install dependencies:

composer require --dev phpunit/php-code-coverage codacy/coverage

Installation results would be similar to:

Using version ^1.4 for codacy/coverage
./composer.json has been updated
Loading composer repositories with package information
Updating dependencies (including require-dev)
Package operations: 3 installs, 0 updates, 0 removals
  - Installing symfony/process (v5.0.4): Downloading (100%)
  - Installing gitonomy/gitlib (v1.2.0): Downloading (100%)
  - Installing codacy/coverage (1.4.3): Downloading (100%)
Writing lock file
Generating autoload files
ocramius/package-versions:  Generating version class...
ocramius/package-versions: ...done generating version class

2. PHPUnit configuration

We need to configure at least the whitelist and logging sections. They are required for code coverage.

Whitelist is the section that determines which files are considered your application code and how the existing tests cover that code.

As I want all my code loaded and analyzed by PHPUnit, I will set processUncoveredFilesFromWhitelist. Considering that all my code is under the ./src folder:

<phpunit>
    <!-- ... -->
    <filter>
        <!-- ... -->
        <whitelist processUncoveredFilesFromWhitelist="true">
            <directory suffix=".php">./src</directory>
        </whitelist>
    </filter>
</phpunit>

Logging is where we configure the logging of the test execution. The Clover configuration is enough for now:

<phpunit>
    <!-- ... -->
    <logging>
      <log type="coverage-clover" target="/tmp/coverage.xml"/>
    </logging>
</phpunit>

You should now run your tests locally and fix everything that PHPUnit reports while analyzing the whitelisted code. All errors have to be fixed. One example that might appear:

Fatal error: Interface 'InterfaceClass' not found in /var/www/src/Example/Application/ClassService.php on line 5

Oops.

<?php

namespace Example;

class ClassService implements InterfaceClass {
    /* */
}

I have a class implementing an interface, but I missed the import for that interface.

<?php

namespace Example;

use Example\Domain\InterfaceClass;

class ClassService implements InterfaceClass {
    /* */
}

And now I am good. After fixing all the errors that appeared (and discovering some dead classes...), we get the test results and the report generation message:

OK (10 tests, 17 assertions)

Generating code coverage report in Clover XML format ... done [5.71 seconds]

Code coverage is now configured. We will use Codacy to keep track of the coverage status, presenting it in a nice dashboard and providing other tools, such as a coverage check for new pull requests.

3. Set up Codacy project API token

To send coverage results to Codacy, we need the project API token. It is located in the Settings > Integrations tab.

If there is already a token, you can use it. Otherwise, generate one. A project token looks something like this:

a9564ebc3289b7a14551baf8ad5ec60a // not real

We will use this as an environment variable in Bitbucket. In your project in Bitbucket, go to Configurations > Pipelines > Repository Variables. In my case, I used:

Name: CODACY_PROJECT_TOKEN
Value: a9564ebc3289b7a14551baf8ad5ec60a

I want the value securely encrypted, so I mark it as "Secure".

We now have the Codacy token available as an environment variable in our pipeline.

4. Pipelines configuration

In your Pipelines configuration, you should now provide the API token as an environment variable:

variables:
  - CODACY_PROJECT_TOKEN: $CODACY_PROJECT_TOKEN

Enable xdebug:

  - pecl install xdebug-2.9.2 && docker-php-ext-enable xdebug

And execute codacycoverage to send the saved report to Codacy:

  - src/vendor/bin/codacycoverage clover /tmp/coverage.xml

Considering one of my legacy projects where I am adding code coverage, this could be my unit test step:

  • using alpine
  • source code (whitelisted) at site/src
  • logfile generated at /tmp/unit-clover.xml
  • g++ gcc make git php7-dev are required to install and enable xdebug

    - step: &step-unit-tests
        name: unit tests
        image: php:7.2-alpine
        variables:
          - CODACY_PROJECT_TOKEN: $CODACY_PROJECT_TOKEN
        caches:
          - composer
        script:
          - apk add --no-cache g++ gcc make git php7-dev
          - pecl install xdebug-2.9.2 && docker-php-ext-enable xdebug
          - site/src/vendor/bin/phpunit -c tests/Unit/phpunit.xml
          - site/src/vendor/bin/codacycoverage clover /tmp/unit-clover.xml

After it is successfully executed by Pipelines, you can see the results on your Codacy dashboard.

Now code with love.
And coverage.



Environment Variables in Angular

Need to use different values depending on the environment you’re in? If you’re building an app that needs different API host URLs depending on the environment, you can do it easily in Angular using the environment.ts files.

We are considering Angular 8+ apps for this article.

Angular CLI projects already use a production environment variable in main.ts to enable production mode when running in the production environment:

if (environment.production) {
  enableProdMode();
}

And you'll also notice that by default, in the src/environments folder, you have an environment file for development and one for production. Let's use this feature to allow us to use a different API host URL depending on whether we're in development or production mode:

environment.ts:

export const environment = {
  production: false,
  apiHost: 'https://api.local.com'
};

environment.prod.ts:

export const environment = {
  production: true,
  apiHost: 'https://api.production-url.com'
};

And in our app.component.ts all we have to do in order to access the variable is the following:

import { Component } from '@angular/core';
import { environment } from '../environments/environment';

@Component({ ... })
export class AppComponent {
  apiHost: string = environment.apiHost;
}

Now, in development mode, the apiHost variable resolves to https://api.local.com, and in production it resolves to https://api.production-url.com. You may run ng build --prod and check.

Detecting Development Mode

Angular also provides us with a utility function called isDevMode that makes it easy to check if the app is running in dev mode:

import { Component, OnInit, isDevMode } from '@angular/core';

@Component({ ... })
export class AppComponent implements OnInit {
  ngOnInit() {
    if (isDevMode()) {
      console.log('Development!');
    } else {
      console.log('Cool. Production!');
    }
  }
}

Adding a Staging Environment

To add a new environment to an Angular project, a new entry should be added to the configurations property in the angular.json file. Let's add a staging environment, for example. Note that the production property already exists.

"configurations": {
  "production": {
    "optimization": true,
    "outputHashing": "all",
    "sourceMap": false,
    "extractCss": true,
    "namedChunks": false,
    "aot": true,
    "extractLicenses": true,
    "vendorChunk": false,
    "buildOptimizer": true,
    "fileReplacements": [
      {
        "replace": "src/environments/environment.ts",
        "with": "src/environments/environment.prod.ts"
      }
    ]
  },
  "stating": {
    "optimization": true,
    "outputHashing": "all",
    "sourceMap": false,
    "extractCss": true,
    "namedChunks": false,
    "aot": true,
    "extractLicenses": true,
    "vendorChunk": false,
    "buildOptimizer": true,
    "fileReplacements": [
      {
        "replace": "src/environments/environment.ts",
        "with": "src/environments/environment.stating.ts"
      }
    ]
  }
}

And now we can add a staging environment file and build the project with ng build --configuration=staging on our CI (or deploy process) to deploy to the staging environment:

environment.staging.ts

export const environment = {
  production: false,
  apiHost: 'https://staging.host.com'
};

High-performance Fibonacci numbers generator in PHP

Based on the article High-performance Fibonacci numbers generator in Go, I wrote my version using PHP. Despite the differences between the PHP and Go architectures, reflected in response times, we can see a huge performance difference when using an optimized function. We get the same results, but the quality of the written code changes a lot of things.

Recursive approach

function fibonacci(int $n):int {
  if ($n <= 1) {
    return $n;
  }

  return fibonacci($n-1) + fibonacci($n-2);
}

Benchmark and test

function test_fibonacci() {
  $data = [
    [0,0], [1,1], [2,1], [3,2], [4,3], [5,5], [6,8], [10,55], [42,267914296]
  ];

  foreach($data as $test) {
    $result = fibonacci($test[0]);
    if ($result !== $test[1]) {
      throw new \UnexpectedValueException("Error Processing Request. N: {$test[0]}, got: {$result}, expected: {$test[1]}", 1);
    }
  }

  echo "Tests - Success.".PHP_EOL;
}

/**
  * From https://gist.github.com/blongden/2352583
  */
function benchmark($x)
{
    $start = $t = microtime(true);
    $total = $c = $loop = 0;
    while (true) {
        $x();
        $c++;
        $now = microtime(true);
        if ($now - $t > 1) {
            $loop++;
            $total += $c;
            list($t, $c) = array(microtime(true), 0);
        }
        if ($now - $start > 2) {
            return round($total / $loop);
        }
    }
}

Benchmark 10 run: 163,754/sec or 0.0061067210571955ms/op
Benchmark 20 run: 1,351/sec or 0.74019245003701ms/op

As we can see, calculating 20 Fibonacci numbers takes about 123 times longer than calculating 10. Not performant at all! The explanation can be found in the linked article.
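To see where the time goes, here is a small sketch (not from the original article) that simply counts how many times the recursive function is invoked:

// counting the recursive calls makes the blow-up visible
function fibonacci_counted(int $n, int &$calls): int {
  $calls++;
  if ($n <= 1) {
    return $n;
  }

  return fibonacci_counted($n - 1, $calls) + fibonacci_counted($n - 2, $calls);
}

$calls = 0;
fibonacci_counted(10, $calls);
echo "fib(10): {$calls} calls".PHP_EOL; // 177 calls

$calls = 0;
fibonacci_counted(20, $calls);
echo "fib(20): {$calls} calls".PHP_EOL; // 21,891 calls

That is about 124 times more calls for only twice the input, which lines up with the benchmark numbers above.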

Sequential approach

function fibonacci_tuned(int $n):float {
  if ($n <= 1) {
    return $n;
  }

  $n2 = 0;
  $n1 = 1;

  for ($i = 2; $i < $n; $i++) {
    $n2_ = $n2;
    $n2 = $n1;
    $n1 = ($n1 + $n2_);
  }

  return $n2 + $n1;
}

function test_fibonacci_tuned() {
  $data = [
    [0,0], [1,1], [2,1], [3,2], [4,3], [5,5], [6,8], [10,55], [42,267914296]
  ];

  foreach($data as $test) {
    $result = fibonacci_tuned($test[0]);
    $float_test_value = (float) $test[1];
    if ($result !== $float_test_value) {
      throw new \UnexpectedValueException("Error Processing Request. N: {$test[0]}, got: {$result}, expected: {$float_test_value}", 1);
    }
  }

  echo "Tests - Success.".PHP_EOL;
}

Results:

Benchmark 10 tuned run: 3,345,999/sec or 0.00029886440492062ms/op
Benchmark 20 tuned run: 2,069,100/sec or 0.00048330191870862ms/op

In a much better scenario, calculating 20 numbers takes almost twice as long as calculating 10. Makes sense. And it performs well!

Comparing the two approaches, for 10 Fibonacci numbers the recursive approach takes about 20 times longer than the sequential one, and for 20 Fibonacci numbers it takes more than 1,500 times longer.

The Fibonacci implementation in PHP can be found at https://github.com/rafaelbernard/php-fibonacci.


A convincing Bitcoin scam e-mail extorting you

This post was originally published in English as The convincing Bitcoin scam e-mail extorting you, by Mattias Geniar. But the warning was worth translating into Portuguese, whether or not you have watched what the accusation is about.

Once again we see the creativity of scammers. Pay attention. Stay alert. The internet is a land as wild as the streets we walk on.


A few months ago I received an e-mail that left me worried for a few seconds. It looked like this, and it is quite likely that you have seen it too.

From: Kalie Paci 
Subject: mattias - UqtX7m

It seems that, UqtX7m, is your pass word. You do not know me and you are probably thinking
why you are getting this mail, correct?

Well, I actually placed a malware on the adult video clips (porn) web-site and guess what,
you visited this site to have fun (you know what I mean). While you were watching videos,
your browser started operating as a RDP (Remote control Desktop) that has a keylogger which
gave me access to your display and also web camera. Immediately after that, my software
program collected your entire contacts from your Messenger, FB, and email.

What exactly did I do?

I created a double-screen video. First part displays the video you were viewing (you have
a nice taste lol), and second part displays the recording of your web camera.

What should you do?

Well, in my opinion, $1900 is a fair price for our little secret. You’ll make the payment
through Bitcoin (if you do not know this, search “how to buy bitcoin” in Google).

BTC Address: 1MQNUSnquwPM9eQgs7KtjDcQZBfaW7iVge
(It is cAsE sensitive, so copy and paste it)

Important:
You now have one day to make the payment. (I’ve a unique pixel in this message, and right
now I know that you have read this email message). If I don’t get the BitCoins, I will
send your video recording to all of your contacts including members of your family,
colleagues, and many others. Having said that, if I do get paid, I will destroy the video
immidiately. If you need evidence, reply with “Yes!” and I definitely will send your video
recording to your 11 friends. This is a non-negotiable offer, and so please don’t waste
my personal time and yours by responding to this mail.

If you just read it, it looks like spam, doesn't it?

Well, what worried me for a few seconds was that the subject line and the body contained a real password I used some time ago: UqtX7m.

Receiving an e-mail with what appears to be a personal secret in the subject line grabs your attention. It is clever in the sense that you feel violated and ashamed of the consequences. It looks legitimate.

Let me say it clearly: it is a scam and you do not need to pay anyone.

I first mentioned it on my Twitter, describing what seems to be the brilliant part of this scam:

  • E-mails + passwords are easy to obtain (many leaks online)
  • Everybody watches porn
  • Nobody wants this information leaked
  • The same generic e-mail can be used for all victims

Whoever is running this scheme thought about the psychology of the scam and found the sweet spot: it grabs your attention and leaves you worried.

Well played. But do not fall for it and, more importantly: do not pay anything.


As for fighting pornography, check out Just1ClickAway. Seek to free yourself from this evil.


Codility – MissingInteger

I scored 100% in #php on @Codility!
https://codility.com/demo/take-sample-test/missing_integer/

Training ticket

Session
ID: training5FZX3Y-S7H
Time limit: 120 min.

Status: closed
Created on: 2016-01-17 05:31 UTC
Started on: 2016-01-17 05:31 UTC
Finished on: 2016-01-17 05:35 UTC


Codility – TapeEquilibrium

I scored 100% in #php on @Codility!
https://codility.com/demo/take-sample-test/tape_equilibrium/

Training ticket (real time - 1 hour)

Session
ID: trainingK8ND7B-7TN
Time limit: 120 min.

Status: closed
Created on: 2016-01-17 05:21 UTC
Started on: 2016-01-17 05:21 UTC
Finished on: 2016-01-17 05:22 UTC


Codility – PermMissingElem

I scored 100% in #php on @Codility!
https://codility.com/demo/take-sample-test/perm_missing_elem/

Training ticket

Session
ID: trainingWEF9F8-YEU
Time limit: 120 min.

Status: closed
Created on: 2016-01-17 03:25 UTC
Started on: 2016-01-17 03:25 UTC
Finished on: 2016-01-17 03:40 UTC

Training ticket (real finishing time)

Session
ID: trainingCSVQV7-4KF
Time limit: 120 min.

Status: closed
Created on: 2016-01-17 04:29 UTC
Started on: 2016-01-17 04:29 UTC
Finished on: 2016-01-17 04:30 UTC