PostgreSQL – pg_upgrade from 10 to 12

I have some PostgreSQL databases running pretty well, but we need to keep our software updated. This is a mandatory practice for a high-quality service. Those servers are running version 10 and need to be upgraded to version 12. I have used the pg_dump / pg_restore strategy for a long time, but this time I would rather use pg_upgrade.

Let's dive into how to do it.

Table of contents:

  1. Install
  2. InitDB
  3. Check upgrade consistency
  4. Set locale
  5. Upgrade
  6. Configurations

Install

The package postgresql12-server contains everything needed to run the server, but my databases use some extensions [1], so I will also add postgresql12-devel and postgresql12-contrib to be able to compile and install them.

yum install postgresql12-server postgresql12-devel postgresql12-contrib

InitDB

After installation we need to set up the new cluster with initdb:

~% /usr/pgsql-12/bin/postgresql-12-setup initdb
Initializing database … OK

Check upgrade consistency

We need to check compatibility. Switch to the postgres user (su - postgres) and run the command:

~% /usr/pgsql-12/bin/pg_upgrade --old-bindir=/usr/pgsql-10/bin --new-bindir=/usr/pgsql-12/bin --old-datadir=/var/lib/pgsql/10/data --new-datadir=/var/lib/pgsql/12/data --check

Performing Consistency Checks on Old Live Server
------------------------------------------------
Checking cluster versions                                   ok
Checking database user is the install user                  ok
Checking database connection settings                       ok
Checking for prepared transactions                          ok
Checking for reg* data types in user tables                 ok
Checking for contrib/isn with bigint-passing mismatch       ok
Checking for tables WITH OIDS                               fatal

Your installation contains tables declared WITH OIDS, which is not supported
anymore. Consider removing the oid column using
    ALTER TABLE ... SET WITHOUT OIDS;
A list of tables with the problem is in the file:
    tables_with_oids.txt

Failure, exiting

As you can see, I got a fatal error indicating that the upgrade is not possible. In my case, tables with OIDs are the culprit; in yours it could be something else. Either way, the problem needs to be fixed before upgrading.

I fixed the affected tables by removing their OIDs.
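
For each table listed in tables_with_oids.txt, the fix is the one-line statement suggested by pg_upgrade (the table name below is a placeholder):

ALTER TABLE my_table SET WITHOUT OIDS;

Then I ran the check again: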

Performing Consistency Checks on Old Live Server
------------------------------------------------
Checking cluster versions                                   ok
Checking database user is the install user                  ok
Checking database connection settings                       ok
Checking for prepared transactions                          ok
Checking for reg* data types in user tables                 ok
Checking for contrib/isn with bigint-passing mismatch       ok
Checking for tables WITH OIDS                               ok
Checking for invalid "sql_identifier" user columns          ok
Checking for presence of required libraries                 ok
Checking database user is the install user                  ok
Checking for prepared transactions                          ok

*Clusters are compatible*

Yay!

Set locale

There is a tricky configuration that is not detected by the pg_upgrade check but is very important to me. I use the C locale on my databases [2], so I need to perform an extra step. If this is your case too, you can follow the same steps, applying your own locale.

I need to stop postgresql10 and start postgresql12:

systemctl stop postgresql-10.service
systemctl start postgresql-12.service

Then I change the locale on template1, so the right locale is already in place when my databases are upgraded:

UPDATE pg_database SET datcollate='C', datctype='C' WHERE datname='template1';
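
I run it with psql connected to the postgres database (a sketch, assuming the default postgres superuser):

~% psql -U postgres -d postgres -c "UPDATE pg_database SET datcollate='C', datctype='C' WHERE datname='template1';"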

And stop it again (systemctl stop postgresql-12.service) to be ready for the upgrade.

Upgrade

The upgrade command is the same one we ran before, just without the --check flag.

~% /usr/pgsql-12/bin/pg_upgrade --old-bindir=/usr/pgsql-10/bin --new-bindir=/usr/pgsql-12/bin --old-datadir=/var/lib/pgsql/10/data --new-datadir=/var/lib/pgsql/12/data

Performing Consistency Checks
-----------------------------
Checking cluster versions                                   ok
Checking database user is the install user                  ok
Checking database connection settings                       ok
Checking for prepared transactions                          ok
Checking for reg* data types in user tables                 ok
Checking for contrib/isn with bigint-passing mismatch       ok
Checking for tables WITH OIDS                               ok
Checking for invalid "sql_identifier" user columns          ok
Creating dump of global objects                             ok
Creating dump of database schemas
                                                            ok
Checking for presence of required libraries                 ok
Checking database user is the install user                  ok
Checking for prepared transactions                          ok

If pg_upgrade fails after this point, you must re-initdb the
new cluster before continuing.

Performing Upgrade
------------------
Analyzing all rows in the new cluster                       ok
Freezing all rows in the new cluster                        ok
Deleting files from new pg_xact                             ok
Copying old pg_xact to new server                           ok
Setting next transaction ID and epoch for new cluster       ok
Deleting files from new pg_multixact/offsets                ok
Copying old pg_multixact/offsets to new server              ok
Deleting files from new pg_multixact/members                ok
Copying old pg_multixact/members to new server              ok
Setting next multixact ID and offset for new cluster        ok
Resetting WAL archives                                      ok
Setting frozenxid and minmxid counters in new cluster       ok
Restoring global objects in the new cluster                 ok
Restoring database schemas in the new cluster
                                                            ok
Copying user relation files
                                                            ok
Setting next OID for new cluster                            ok
Sync data directory to disk                                 ok
Creating script to analyze new cluster                      ok
Creating script to delete old cluster                       ok

Upgrade Complete
----------------
Optimizer statistics are not transferred by pg_upgrade so,
once you start the new server, consider running:
    ./analyze_new_cluster.sh

Running this script will delete the old cluster's data files:
    ./delete_old_cluster.sh

Consider running analyze_new_cluster.sh. It is optional, but nice to have:

vacuumdb: processing database "mydb": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "postgres": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "template1": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "mydb": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "postgres": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "template1": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "mydb": Generating default (full) optimizer statistics
vacuumdb: processing database "postgres": Generating default (full) optimizer statistics
vacuumdb: processing database "template1": Generating default (full) optimizer statistics

Done

Configurations

Before deleting your old cluster, remember to carry over your configuration files.

mv /var/lib/pgsql/12/data/pg_hba.conf /var/lib/pgsql/12/data/pg_hba.conf.new
cp /var/lib/pgsql/10/data/pg_hba.conf /var/lib/pgsql/12/data/

I am not a big fan of writing directly to the postgresql.conf file. Instead, I keep configuration files under version control and include the directory where those files are deployed. I treat them as code, which makes them easier to maintain and manage.

Another advantage is that there is no mess with config file differences when a new version arrives: I automatically use the new default configuration, my customized settings are loaded on top, and I can quickly address any incompatibility caused by the migration without touching the original conf file.

Let's go for it:

# Add settings for extensions here
include_dir = '/var/lib/pgsql/conf.d'   # include files ending in '.conf' from this directory
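
As an illustration, a versioned file deployed to that directory could look like this (the file name and values are hypothetical):

# /var/lib/pgsql/conf.d/00-custom.conf
shared_buffers = 2GB
max_connections = 200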

Then you may delete your old cluster with the generated delete_old_cluster.sh script. 🙂

Links:

  1. Extensions: https://www.postgresql.org/docs/current/contrib.html
  2. Locale: https://www.postgresql.org/docs/current/locale.html

PHP Test Coverage Using Bitbucket and Codacy

Wikipedia:

In computer science, code coverage is a measure used to describe the degree to which the source code of a program is tested by a particular test suite. A program with high code coverage has been more thoroughly tested and has a lower chance of containing software bugs than a program with low code coverage.

Testing is an unavoidable process for building trustworthy software. Unfortunately, in the PHP world we have a massive amount of legacy software still running today that is very valuable but was born in an age when testing was skipped for various reasons.

Today, as we refactor those untested systems into tested ones or create new projects already focused on having tests, we can go one step further and measure code coverage, improving bug protection and code quality.

You can use these steps for a legacy project, a new project, a well-covered project, or a poorly covered one; the current state of your project doesn't matter.

We are considering a PHP project using Bitbucket Pipelines as our CI and Codacy for monitoring our test coverage reports, but the main concepts can easily be applied to other tools.

Table of contents:

  1. Dependencies installation
  2. PHPUnit configuration
  3. Set up Codacy project API token
  4. Pipelines configuration

1. Dependencies installation

Use composer to install dependencies:

composer require --dev phpunit/php-code-coverage codacy/coverage

Installation results would be similar to:

Using version ^1.4 for codacy/coverage
./composer.json has been updated
Loading composer repositories with package information
Updating dependencies (including require-dev)
Package operations: 3 installs, 0 updates, 0 removals
  - Installing symfony/process (v5.0.4): Downloading (100%)
  - Installing gitonomy/gitlib (v1.2.0): Downloading (100%)
  - Installing codacy/coverage (1.4.3): Downloading (100%)
Writing lock file
Generating autoload files
ocramius/package-versions:  Generating version class...
ocramius/package-versions: ...done generating version class

2. PHPUnit configuration

We need to configure at least the whitelist and logging sections; both are required for code coverage.

The whitelist section determines which files count as your code base, i.e., the code your existing tests are measured against.

As I want all my code loaded and analyzed by PHPUnit, I will set processUncoveredFilesFromWhitelist. Considering that all my code is under the ./src folder:

<phpunit>
    <!-- ... -->
    <filter>
        <!-- ... -->
        <whitelist processUncoveredFilesFromWhitelist="true">
            <directory suffix=".php">./src</directory>
        </whitelist>
    </filter>
</phpunit>

Logging is where we configure how the test execution is reported. The Clover configuration is enough for now:

<phpunit>
    <!-- ... -->
    <logging>
      <log type="coverage-clover" target="/tmp/coverage.xml"/>
    </logging>
</phpunit>

You should now run your tests locally: since uncovered files will now be loaded and analyzed by PHPUnit, new errors may surface, and all of them have to be fixed.
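
A typical local run could be (assuming phpunit.xml sits in the project root and the Composer bin directory used above):

vendor/bin/phpunit -c phpunit.xml

One example of an error that might appear: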

Fatal error: Interface 'InterfaceClass' not found in /var/www/src/Example/Application/ClassService.php on line 5

Oops.

<?php

namespace Example;

class ClassService implements InterfaceClass {
    /* */
}

I have a class implementing an interface, but I missed the import for it:

<?php

namespace Example;

use Example\Domain\InterfaceClass;

class ClassService implements InterfaceClass {
    /* */
}

And now I am good. After fixing all the errors that appeared (and discovering some dead classes...), we get the test results and the report generation message:

OK (10 tests, 17 assertions)

Generating code coverage report in Clover XML format ... done [5.71 seconds]

Code coverage is now configured. We will use Codacy to keep track of the coverage status, presenting it in a beautiful dashboard and providing extras such as a coverage check for new pull requests.

3. Set up Codacy project API token

To send coverage results to Codacy, we need the project API token. It is located in the Settings > Integrations tab.

If a token already exists, you can use it. Otherwise, generate one. A project token looks something like this:

a9564ebc3289b7a14551baf8ad5ec60a // not real

We will use this as an environment variable in Bitbucket. In your Bitbucket project, go to Configurations > Pipelines > Repository Variables. In my case, I used:

Name: CODACY_PROJECT_TOKEN
Value: a9564ebc3289b7a14551baf8ad5ec60a

As I want the value securely encrypted, I mark it as secured.

We now have the Codacy token available as an environment variable in our pipeline.

4. Pipelines configuration

In Pipelines, you should now provide the API token as an environment variable:

variables:
  - CODACY_PROJECT_TOKEN: $CODACY_PROJECT_TOKEN

Enable xdebug:

  - pecl install xdebug-2.9.2 && docker-php-ext-enable xdebug

And execute codacycoverage to send the saved report to Codacy:

  - src/vendor/bin/codacycoverage clover /tmp/coverage.xml

Considering one of my legacy projects to which I am adding code coverage, this could be my unit tests step:

  • using alpine
  • source code (whitelisted) at site/src
  • logfile generated at /tmp/unit-clover.xml
  • g++ gcc make git php7-dev are required to install and enable xdebug

    - step: &step-unit-tests
        name: unit tests
        image: php:7.2-alpine
        variables:
          - CODACY_PROJECT_TOKEN: $CODACY_PROJECT_TOKEN
        caches:
          - composer
        script:
          - apk add --no-cache g++ gcc make git php7-dev
          - pecl install xdebug-2.9.2 && docker-php-ext-enable xdebug
          - site/src/vendor/bin/phpunit -c tests/Unit/phpunit.xml
          - site/src/vendor/bin/codacycoverage clover /tmp/unit-clover.xml
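
For completeness, the anchored step can then be referenced from the pipeline definition itself; a minimal sketch (the branch layout is hypothetical):

pipelines:
  default:
    - step: *step-unit-tests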

After a successful pipeline execution, you can see the results in your Codacy dashboard.

Now code with love.
And coverage.



Environment Variables in Angular

Need to use different values depending on the environment you're in? If you're building an app that needs different API host URLs depending on the environment, you can do it easily in Angular using the environment.ts files.

We are considering Angular 8+ apps for this article.

Angular CLI projects already use a production environment variable to enable production mode when in the production environment, as seen in main.ts:

if (environment.production) {
  enableProdMode();
}

And you'll also notice that, by default, the src/environments folder has an environment file for development and one for production. Let's use this feature to allow different API host URLs depending on whether we're in development or production mode:

environment.ts:

export const environment = {
  production: false,
  apiHost: 'https://api.local.com'
};

environment.prod.ts:

export const environment = {
  production: true,
  apiHost: 'https://api.production-url.com'
};

And in our app.component.ts all we have to do in order to access the variable is the following:

import { Component } from '@angular/core';
import { environment } from '../environments/environment';

@Component({ ... })
export class AppComponent {
  apiHost: string = environment.apiHost;
}

Now the apiHost variable resolves to https://api.local.com in development mode and to https://api.production-url.com in production. You may run ng build --prod and check.

Detecting Development Mode

Angular also provides us with a utility function called isDevMode that makes it easy to check if the app is running in dev mode:

import { Component, OnInit, isDevMode } from '@angular/core';

@Component({ ... })
export class AppComponent implements OnInit {
  ngOnInit() {
    if (isDevMode()) {
      console.log('Development!');
    } else {
      console.log('Cool. Production!');
    }
  }
}

Adding a Staging Environment

To add a new environment to an Angular project, a new entry should be added to the configurations property in the angular.json file. Let's add a staging environment, for example. Note that the production property already exists.

"configurations": {
  "production": {
    "optimization": true,
    "outputHashing": "all",
    "sourceMap": false,
    "extractCss": true,
    "namedChunks": false,
    "aot": true,
    "extractLicenses": true,
    "vendorChunk": false,
    "buildOptimizer": true,
    "fileReplacements": [
      {
        "replace": "src/environments/environment.ts",
        "with": "src/environments/environment.prod.ts"
      }
    ]
  },
  "stating": {
    "optimization": true,
    "outputHashing": "all",
    "sourceMap": false,
    "extractCss": true,
    "namedChunks": false,
    "aot": true,
    "extractLicenses": true,
    "vendorChunk": false,
    "buildOptimizer": true,
    "fileReplacements": [
      {
        "replace": "src/environments/environment.ts",
        "with": "src/environments/environment.stating.ts"
      }
    ]
  }
}

And now we can add a staging environment file and build the project with ng build --configuration=staging on our CI (or deploy process) to deploy to the staging environment:

environment.staging.ts

export const environment = {
  production: false,
  apiHost: 'https://staging.host.com'
};
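
Any other class can read the same constant. For instance, a data service could build its request URLs from it; a minimal sketch (the service and endpoint are hypothetical):

import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { environment } from '../environments/environment';

@Injectable({ providedIn: 'root' })
export class ApiService {
  constructor(private http: HttpClient) {}

  // The host switches automatically with the build configuration.
  getUsers() {
    return this.http.get(`${environment.apiHost}/users`);
  }
}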

High-performance Fibonacci numbers generator in PHP

Based on the article High-performance Fibonacci numbers generator in Go, I wrote my version using PHP. Despite the differences between the PHP and Go architectures being reflected in the response times, we can still see a huge performance difference when using an optimized function. The results are the same, but the quality of the written code changes a lot.

Recursive approach

function fibonacci(int $n):int {
  if ($n <= 1) {
    return $n;
  }

  return fibonacci($n-1) + fibonacci($n-2);
}

Benchmark and test

function test_fibonacci() {
  $data = [
    [0,0], [1,1], [2,1], [3,2], [4,3], [5,5], [6,8], [10,55], [42,267914296]
  ];

  foreach($data as $test) {
    $result = fibonacci($test[0]);
    if ($result !== $test[1]) {
      throw new \UnexpectedValueException("Error Processing Request. N: {$test[0]}, got: {$result}, expected: {$test[1]}", 1);
    }
  }

  echo "Tests - Success.".PHP_EOL;
}

/**
  * From https://gist.github.com/blongden/2352583
  */
function benchmark($x)
{
    $start = $t = microtime(true);
    $total = $c = $loop = 0;
    while (true) {
        $x();
        $c++;
        $now = microtime(true);
        if ($now - $t > 1) {
            $loop++;
            $total += $c;
            list($t, $c) = array(microtime(true), 0);
        }
        if ($now - $start > 2) {
            return round($total / $loop);
        }
    }
}
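
The post does not show the runner itself; a sketch like the following reproduces the output format below (benchmark returns iterations per second, so 1000 divided by it gives milliseconds per operation):

test_fibonacci();

$ops10 = benchmark(function () { fibonacci(10); });
echo 'Benchmark 10 run: ' . number_format($ops10) . '/sec or ' . (1000 / $ops10) . 'ms/op' . PHP_EOL;

$ops20 = benchmark(function () { fibonacci(20); });
echo 'Benchmark 20 run: ' . number_format($ops20) . '/sec or ' . (1000 / $ops20) . 'ms/op' . PHP_EOL;

Results: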
Benchmark 10 run: 163,754/sec or 0.0061067210571955ms/op
Benchmark 20 run: 1,351/sec or 0.74019245003701ms/op

As we can see, calculating 20 Fibonacci numbers takes about 121 times longer than calculating 10. Not performant at all! The explanation can be found in the linked article.

Sequential approach

function fibonacci_tuned(int $n):float {
  if ($n <= 1) {
    return $n;
  }

  $n2 = 0;
  $n1 = 1;

  for ($i = 2; $i < $n; $i++) {
    $n2_ = $n2;
    $n2 = $n1;
    $n1 = ($n1 + $n2_);
  }

  return $n2 + $n1;
}

function test_fibonacci_tuned() {
  $data = [
    [0,0], [1,1], [2,1], [3,2], [4,3], [5,5], [6,8], [10,55], [42,267914296]
  ];

  foreach($data as $test) {
    $result = fibonacci_tuned($test[0]);
    $float_test_value = (float) $test[1];
    if ($result !== $float_test_value) {
      throw new \UnexpectedValueException("Error Processing Request. N: {$test[0]}, got: {$result}, expected: {$float_test_value}", 1);
    }
  }

  echo "Tests - Success.".PHP_EOL;
}

Results:

Benchmark 10 tuned run: 3,345,999/sec or 0.00029886440492062ms/op
Benchmark 20 tuned run: 2,069,100/sec or 0.00048330191870862ms/op

A much better scenario: calculating 20 numbers takes less than twice as long as calculating 10. That makes sense. And it performs well!

Comparing the two approaches, the recursive one takes about 20 times longer than the sequential one for 10 Fibonacci numbers, and about 1,530 times longer for 20.

My Fibonacci implementation in PHP can be found at https://github.com/rafaelbernard/php-fibonacci.

The convincing Bitcoin scam e-mail extorting you

This post was originally published in English as The convincing Bitcoin scam e-mail extorting you, by Mattias Geniar. But the alert was worth translating, whether or not you have watched what the accusation is about.

Once again we see the creativity of scammers. Stay aware. Stay alert. The internet is a land as wild as the streets we walk.

---

A few months ago I received an e-mail that worried me for a few seconds. It looked like this, and chances are you have seen it too.

From: Kalie Paci 
Subject: mattias - UqtX7m

It seems that, UqtX7m, is your pass word. You do not know me and you are probably thinking
why you are getting this mail, correct?

Well, I actually placed a malware on the adult video clips (porn) web-site and guess what,
you visited this site to have fun (you know what I mean). While you were watching videos,
your browser started operating as a RDP (Remote control Desktop) that has a keylogger which
gave me access to your display and also web camera. Immediately after that, my software
program collected your entire contacts from your Messenger, FB, and email.

What exactly did I do?

I created a double-screen video. First part displays the video you were viewing (you have
a nice taste lol), and second part displays the recording of your web camera.

What should you do?

Well, in my opinion, $1900 is a fair price for our little secret. You’ll make the payment
through Bitcoin (if you do not know this, search “how to buy bitcoin” in Google).

BTC Address: 1MQNUSnquwPM9eQgs7KtjDcQZBfaW7iVge
(It is cAsE sensitive, so copy and paste it)

Important:
You now have one day to make the payment. (I’ve a unique pixel in this message, and right
now I know that you have read this email message). If I don’t get the BitCoins, I will
send your video recording to all of your contacts including members of your family,
colleagues, and many others. Having said that, if I do get paid, I will destroy the video
immidiately. If you need evidence, reply with “Yes!” and I definitely will send your video
recording to your 11 friends. This is a non-negotiable offer, and so please don’t waste
my personal time and yours by responding to this mail.

If you read it, it looks like spam, doesn't it?

Well, what worried me for a few seconds was that the subject line and the body contained a real password I used a while ago: UqtX7m.

Receiving an e-mail with what appears to be a personal secret in the subject line gets your attention. It is clever in the sense that you feel violated and ashamed of the consequences. It looks legitimate.

Let me state it clearly: it is a scam and you do not need to pay anyone.

I first mentioned it on my Twitter, describing what seems to be the brilliant part of this scam:

  • E-mails + passwords are easy to obtain (many leaks online)
  • Everybody watches porn
  • Nobody wants this information leaked
  • The same generic e-mail can be used for all victims

Whoever is running this scheme thought about the psychology of the scam and found the sweet spot: it gets your attention and makes you worried.

Well played. But do not fall for it and, more importantly: do not pay anything.

---

As for fighting pornography, check out Just1ClickAway. Seek to rid yourself of this evil.

Codility – PermMissingElem

I scored 100% in #php on @Codility!
https://codility.com/demo/take-sample-test/perm_missing_elem/
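
The task asks for the missing element of a permutation of the integers 1..N+1. A minimal O(N) sketch of the usual approach (not necessarily the exact graded submission):

function solution($A) {
    $n = count($A);
    // The sum of 1..n+1 minus the actual sum reveals the missing element.
    return intdiv(($n + 1) * ($n + 2), 2) - array_sum($A);
}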

Training ticket

Session
ID: trainingWEF9F8-YEU
Time limit: 120 min.

Status: closed
Created on: 2016-01-17 03:25 UTC
Started on: 2016-01-17 03:25 UTC
Finished on: 2016-01-17 03:40 UTC

Training ticket (real finishing time)

Session
ID: trainingCSVQV7-4KF
Time limit: 120 min.

Status: closed
Created on: 2016-01-17 04:29 UTC
Started on: 2016-01-17 04:29 UTC
Finished on: 2016-01-17 04:30 UTC