Categories: Tropeçando

Tropeçando 92

Mocking/stubbing the current Date in Jest tests

This post goes through multiple approaches to mocking, stubbing and spying on the date constructor using Jest.

Don't get stuck

A story about the importance of knowing your time and not getting stuck.

For this special history guide, we are going to take a trip back in time to see where the seed of Linux was planted, namely in the Unix systems of the early 1970s, and how it has progressed to the modern day. Though most people are completely unaware of the enormous impact that Unix-like operating systems have had on our society, understanding their storied history can help us realize why the Unix model has lived on far longer, and become more successful, than any other operating system architecture (and philosophy) in existence.

tmpmail - A temporary email right from your terminal

tmpmail is a command-line utility that allows you to create a temporary email address and receive emails at that address. It uses 1secmail's API to fetch the emails.

EXPLAIN docs

After a lot of time looking at query plans, we're still coming across new operation types, fields, and terminology. Many of these terms were tricky to look up and understand, so we decided to share descriptions and useful links for the most common ones in a glossary.

Categories: Tropeçando

Tropeçando 91

Introducing the MDN Web Docs Front-end developer learning pathway

The MDN Web Docs Learning Area (LA) was first launched in 2015, with the aim of providing a useful counterpart to the regular MDN reference and guide material. MDN had traditionally been aimed at web professionals, but we were getting regular feedback that a lot of our audience found MDN too difficult to understand, and that it lacked coverage of basic topics.
https://developer.mozilla.org/en-US/docs/Learn

SQL Optimizations in PostgreSQL: IN vs EXISTS vs ANY/ALL vs JOIN

This is one of the most common questions asked by developers who write SQL queries against the PostgreSQL database. There are multiple ways in which a sub-select or lookup can be framed in a SQL statement. The PostgreSQL optimizer is very smart at optimizing queries, and many queries can be rewritten/transformed for better performance.
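
As a minimal illustration of two equivalent formulations (the table and column names here are hypothetical, not from the article):

-- A lookup written with IN
SELECT * FROM orders WHERE customer_id IN (SELECT id FROM customers WHERE active);

-- The same lookup written with EXISTS
SELECT * FROM orders o
WHERE EXISTS (SELECT 1 FROM customers c WHERE c.id = o.customer_id AND c.active);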

Expose

Expose is a beautiful, open-source tunnel application that allows you to share your local websites with others via the internet.

Combining event sourcing and stateful systems

As Brent stated: In this two-part series, my colleague Freek and I will discuss the architecture of a project we're working on. We will share our insights and answers to problems we encountered along the way. This part will be about the design of the system, while Freek's part will look at the concrete implementation.

Let's set the scene.

Does scrum ruin great engineers or are you doing it wrong?

A question on Stack Overflow’s Software Engineering site caught our attention recently. It tries to come to terms with the impact of scrum on developers' ability to do a great job. The claim is a bold one: Scrum is turning good developers into average ones. Could that be true?

Categories: Tropeçando

Tropeçando 90

Auto-restart a crashed service in systemd

Systemd allows you to configure a service so that it automatically restarts in case it crashes.
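
A minimal sketch of such a unit configuration (the service name and values are illustrative, not from the article):

# /etc/systemd/system/myapp.service.d/override.conf
[Service]
Restart=on-failure
RestartSec=5s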

My Personal Best Practices For Using LaunchDarkly Feature Flags

The Most-Neglected Postgres Feature?

log_line_prefix may well be the most neglected Postgres feature: overused and misconfigured. The author talks about his findings, its great uses, and some tips for configuring log_line_prefix. This feature is very powerful in PostgreSQL.

Logical Replication Between PostgreSQL and MongoDB

A decoder plugin to enable logical replication from a PostgreSQL (as publisher) to MongoDB (as subscriber).

Awesome-Compose: Application samples for project development kickoff

A curated list of Docker Compose samples.

These samples provide a starting point for how to integrate different services using a Compose file and to manage their deployment with Docker Compose.

Categories: PostgreSQL

PostgreSQL – pg_upgrade from 10 to 12

I have some PostgreSQL databases running pretty well, but we need to keep our software updated. This is a mandatory practice for a high-quality service. Those servers are running version 10 and need to be upgraded to version 12. I have used the pg_dump / pg_restore strategy for a long time, but this time I would rather use pg_upgrade.

Let's dive into how to do it.

Table of contents:

  1. Install
  2. InitDB
  3. Check upgrade consistency
  4. Set locale
  5. Upgrade
  6. Configurations

Install

The package postgresql12-server contains everything needed to run the server, but my databases use some extensions [1], so I will also add postgresql12-devel and postgresql12-contrib to be able to compile and install the extensions.

yum install postgresql12-server postgresql12-devel postgresql12-contrib
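
If you want to confirm that the new binaries are in place before going further, a quick sanity check helps (paths as shipped by the PGDG packages):

/usr/pgsql-12/bin/pg_config --version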

InitDB

After installation we need to set up the new server with initdb:

~% /usr/pgsql-12/bin/postgresql-12-setup initdb
Initializing database … OK

Check upgrade consistency

We need to check compatibility. Switch to the postgres user (su - postgres) and run the command:

~% /usr/pgsql-12/bin/pg_upgrade \
     --old-bindir=/usr/pgsql-10/bin \
     --new-bindir=/usr/pgsql-12/bin \
     --old-datadir=/var/lib/pgsql/10/data \
     --new-datadir=/var/lib/pgsql/12/data \
     --check

Performing Consistency Checks on Old Live Server
------------------------------------------------
Checking cluster versions                                   ok
Checking database user is the install user                  ok
Checking database connection settings                       ok
Checking for prepared transactions                          ok
Checking for reg* data types in user tables                 ok
Checking for contrib/isn with bigint-passing mismatch       ok
Checking for tables WITH OIDS                               fatal

Your installation contains tables declared WITH OIDS, which is not supported
anymore. Consider removing the oid column using
    ALTER TABLE ... SET WITHOUT OIDS;
A list of tables with the problem is in the file:
    tables_with_oids.txt

Failure, exiting

As you can see, I got a fatal error indicating that the upgrade is not possible. In my case, tables with OIDs are the culprit; in your case it could be something else. Either way, it needs to be fixed before upgrading.
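
One way to generate the needed statements is to query the catalog on the old (v10) cluster. This is just a sketch; adjust the schema filter to your case:

-- Emits one ALTER TABLE per table still declared WITH OIDS
SELECT 'ALTER TABLE ' || quote_ident(n.nspname) || '.' || quote_ident(c.relname) || ' SET WITHOUT OIDS;'
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relhasoids
  AND n.nspname NOT IN ('pg_catalog', 'information_schema');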

I fixed the tables by removing the OIDs from each of the listed tables and ran the check again:

Performing Consistency Checks on Old Live Server
------------------------------------------------
Checking cluster versions                                   ok
Checking database user is the install user                  ok
Checking database connection settings                       ok
Checking for prepared transactions                          ok
Checking for reg* data types in user tables                 ok
Checking for contrib/isn with bigint-passing mismatch       ok
Checking for tables WITH OIDS                               ok
Checking for invalid "sql_identifier" user columns          ok
Checking for presence of required libraries                 ok
Checking database user is the install user                  ok
Checking for prepared transactions                          ok

*Clusters are compatible*

Yay!

Set locale

There is a tricky configuration that is not detected by the pg_upgrade check, but it is very important to me. I use the C locale on my databases [2], so I need to perform an extra step. If this is your case too, you may follow the same steps, applying your own locale.

I need to stop postgresql-10 and start postgresql-12:

systemctl stop postgresql-10.service
systemctl start postgresql-12.service

Then I change the locale on template1, so that it is already in place when my database is upgraded:

UPDATE pg_database SET datcollate='C', datctype='C' WHERE datname='template1';
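
To confirm that the change took effect, a quick look at the catalog helps:

SELECT datname, datcollate, datctype FROM pg_database;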

And stop it again (systemctl stop postgresql-12.service) to be ready to upgrade.

Upgrade

The upgrade command is the same one we ran before, without the --check flag.

~% /usr/pgsql-12/bin/pg_upgrade \
     --old-bindir=/usr/pgsql-10/bin \
     --new-bindir=/usr/pgsql-12/bin \
     --old-datadir=/var/lib/pgsql/10/data \
     --new-datadir=/var/lib/pgsql/12/data

Performing Consistency Checks
-----------------------------
Checking cluster versions                                   ok
Checking database user is the install user                  ok
Checking database connection settings                       ok
Checking for prepared transactions                          ok
Checking for reg* data types in user tables                 ok
Checking for contrib/isn with bigint-passing mismatch       ok
Checking for tables WITH OIDS                               ok
Checking for invalid "sql_identifier" user columns          ok
Creating dump of global objects                             ok
Creating dump of database schemas
                                                            ok
Checking for presence of required libraries                 ok
Checking database user is the install user                  ok
Checking for prepared transactions                          ok

If pg_upgrade fails after this point, you must re-initdb the
new cluster before continuing.

Performing Upgrade
------------------
Analyzing all rows in the new cluster                       ok
Freezing all rows in the new cluster                        ok
Deleting files from new pg_xact                             ok
Copying old pg_xact to new server                           ok
Setting next transaction ID and epoch for new cluster       ok
Deleting files from new pg_multixact/offsets                ok
Copying old pg_multixact/offsets to new server              ok
Deleting files from new pg_multixact/members                ok
Copying old pg_multixact/members to new server              ok
Setting next multixact ID and offset for new cluster        ok
Resetting WAL archives                                      ok
Setting frozenxid and minmxid counters in new cluster       ok
Restoring global objects in the new cluster                 ok
Restoring database schemas in the new cluster
                                                            ok
Copying user relation files
                                                            ok
Setting next OID for new cluster                            ok
Sync data directory to disk                                 ok
Creating script to analyze new cluster                      ok
Creating script to delete old cluster                       ok

Upgrade Complete
----------------
Optimizer statistics are not transferred by pg_upgrade so,
once you start the new server, consider running:
    ./analyze_new_cluster.sh

Running this script will delete the old cluster's data files:
    ./delete_old_cluster.sh

Consider running analyze_new_cluster.sh. It is optional, but nice to have:

vacuumdb: processing database "mydb": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "postgres": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "template1": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "mydb": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "postgres": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "template1": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "mydb": Generating default (full) optimizer statistics
vacuumdb: processing database "postgres": Generating default (full) optimizer statistics
vacuumdb: processing database "template1": Generating default (full) optimizer statistics

Done

Configurations

Before deleting your old cluster, remember to carry over your configuration files:

mv /var/lib/pgsql/12/data/pg_hba.conf /var/lib/pgsql/12/data/pg_hba.conf.new
cp /var/lib/pgsql/10/data/pg_hba.conf /var/lib/pgsql/12/data/

I am not a big fan of writing directly to the postgresql.conf file. Instead, I keep configuration files under version control and include the directory where those files are deployed. I treat them as code, so they become easier to maintain and manage.

Another advantage is that I avoid any mess with config file differences when a new version arrives. I automatically use the new default configuration, my customized settings are loaded on top, and I can quickly address any incompatibility caused by the migration without touching the original conf file.

Let's go for it:

# Add settings for extensions here
include_dir = '/var/lib/pgsql/conf.d'   # include files ending in '.conf' from
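
A file dropped into that directory could then look like the sketch below; the file name and settings are only an illustration, not my actual configuration:

# /var/lib/pgsql/conf.d/99-custom.conf
shared_buffers = '2GB'
log_line_prefix = '%m [%p] %u@%d '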

Then you may delete your old cluster. 🙂

Links:

  1. Extensions: https://www.postgresql.org/docs/current/contrib.html
  2. Locale: https://www.postgresql.org/docs/current/locale.html

Categories: Tropeçando

Tropeçando 89

Upgrading Postgres major versions using Logical Replication

Arch Fun Statistics

Everyday Hacks for Docker

In this post, I’ve decided to share with you some useful commands and tools I frequently use when working with awesome Docker technology. There is no particular order or “coolness level” for every “hack.” I will simply present the use case and how the specific command or tool has helped me with my work. Read these great hacks and make sure to check out the great hack of all – Codefresh –  the best CI for Docker out there.

https://codefresh.io/docker-tutorial/not-ignore-dockerignore-2/

A video course: Introduction to CQRS and Event Sourcing

Categories: PHP, Programming, Technology

PHP Test Coverage Using Bitbucket and Codacy

Wikipedia:

In computer science, code coverage is a measure used to describe the degree to which the source code of a program is tested by a particular test suite. A program with high code coverage has been more thoroughly tested and has a lower chance of containing software bugs than a program with low code coverage.

Testing is an unavoidable process for building trustworthy software. Unfortunately, in the PHP world we have a massive amount of legacy software still running today that is very valuable, but was born in an age when testing was skipped for various reasons.

As we refactor those untested systems into tested ones, or create new projects already focused on having tests, we can go one step further and measure code coverage, leveraging bug protection and code quality.

You can use these steps for a legacy project, a new project, a well-covered project or a poorly covered one; no matter the state of your project.

We are considering a PHP project using Bitbucket Pipelines as our CI and Codacy for monitoring our test coverage reports, but the main concepts can easily be applied to other tools.

Table of contents:

  1. Dependencies installation
  2. PHPUnit configuration
  3. Set up Codacy project API token
  4. Pipelines configuration

1. Dependencies installation

Use composer to install dependencies:

composer require --dev phpunit/php-code-coverage codacy/coverage

Installation results would be similar to:

Using version ^1.4 for codacy/coverage
./composer.json has been updated
Loading composer repositories with package information
Updating dependencies (including require-dev)
Package operations: 3 installs, 0 updates, 0 removals
  - Installing symfony/process (v5.0.4): Downloading (100%)
  - Installing gitonomy/gitlib (v1.2.0): Downloading (100%)
  - Installing codacy/coverage (1.4.3): Downloading (100%)
Writing lock file
Generating autoload files
ocramius/package-versions:  Generating version class...
ocramius/package-versions: ...done generating version class

2. PHPUnit configuration

We need to configure at least the whitelist and logging sections. They are required for code coverage.

Whitelist is the section that determines which files are considered your application code and how the existing tests cover that code.

As I want all my code to be loaded and analyzed by PHPUnit, I will set processUncoveredFilesFromWhitelist. Considering that all my code is under the ./src folder:

<phpunit>
    <!-- ... -->
    <filter>
        <!-- ... -->
        <whitelist processUncoveredFilesFromWhitelist="true">
            <directory suffix=".php">./src</directory>
        </whitelist>
    </filter>
</phpunit>

Logging is where we configure the logging of the test execution. The Clover configuration is enough for now:

<phpunit>
    <!-- ... -->
    <logging>
      <log type="coverage-clover" target="/tmp/coverage.xml"/>
    </logging>
</phpunit>
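
Putting the two snippets together, a minimal phpunit.xml could look like the following sketch; the bootstrap path is an assumption based on a standard Composer setup:

<?xml version="1.0" encoding="UTF-8"?>
<phpunit bootstrap="vendor/autoload.php">
    <filter>
        <whitelist processUncoveredFilesFromWhitelist="true">
            <directory suffix=".php">./src</directory>
        </whitelist>
    </filter>
    <logging>
        <log type="coverage-clover" target="/tmp/coverage.xml"/>
    </logging>
</phpunit>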

You should now run your tests locally, so you can fix everything that PHPUnit will analyze. All errors have to be fixed. One example that might appear:

Fatal error: Interface 'InterfaceClass' not found in /var/www/src/Example/Application/ClassService.php on line 5

Oops.

<?php

namespace Example;

class ClassService implements InterfaceClass {
    /* */
}

I have a class implementing an interface, but I missed the import for that interface.

<?php

namespace Example;

use Example\Domain\InterfaceClass;

class ClassService implements InterfaceClass {
    /* */
}

And now I am good. After fixing all the errors that appeared (and discovering some dead classes...), we get the test results and the report generation message:

OK (10 tests, 17 assertions)

Generating code coverage report in Clover XML format ... done [5.71 seconds]

Code coverage is now configured. We will use Codacy to keep track of the code coverage status, presenting it in a beautiful dashboard, and to provide some other tools, such as a check for new pull requests.

3. Set up Codacy project API token

To send coverage results to Codacy, we need the project API token, which is located on the Settings > Integrations tab.

If there is already a token there, you can use it. Otherwise, generate one. A project token looks something like this:

a9564ebc3289b7a14551baf8ad5ec60a // not real

We will use this as an environment variable in Bitbucket. In your project on Bitbucket, go to Configurations > Pipelines > Repository Variables. In my case, I used:

Name: CODACY_PROJECT_TOKEN
Value: a9564ebc3289b7a14551baf8ad5ec60a

As I want the value securely encrypted, I mark it as "Secure".

Now we have the Codacy token available as an environment variable in our pipeline.

4. Pipelines configuration

In Pipelines, you should now provide the API token as an environment variable:

variables:
  - CODACY_PROJECT_TOKEN: $CODACY_PROJECT_TOKEN

Enable xdebug:

  - pecl install xdebug-2.9.2 && docker-php-ext-enable xdebug

And execute codacycoverage to send the saved report to Codacy:

  - src/vendor/bin/codacycoverage clover /tmp/coverage.xml

Considering one of my legacy projects to which I am adding code coverage, this could be my unit test step:

  • using alpine
  • source code (whitelisted) at site/src
  • logfile generated at /tmp/unit-clover.xml
  • g++, gcc, make, git and php7-dev are required to install and enable xdebug

    - step: &step-unit-tests
        name: unit tests
        image: php:7.2-alpine
        variables:
          - CODACY_PROJECT_TOKEN: $CODACY_PROJECT_TOKEN
        caches:
          - composer
        script:
          - apk add --no-cache g++ gcc make git php7-dev
          - pecl install xdebug-2.9.2 && docker-php-ext-enable xdebug
          - site/src/vendor/bin/phpunit -c tests/Unit/phpunit.xml
          - site/src/vendor/bin/codacycoverage clover /tmp/unit-clover.xml

After a successful execution by Pipelines, you can see the results on your Codacy dashboard.

Now code with love.
And coverage.



Categories: Programming

Environment Variables in Angular

Need to use different values depending on the environment you're in? If you're building an app that needs different API host URLs depending on the environment, you can do it easily in Angular using the environment.ts file.

We are considering Angular 8+ apps for this article.

Angular CLI projects already use a production environment variable to enable production mode when in the production environment, in main.ts:

if (environment.production) {
  enableProdMode();
}

You'll also notice that, by default, the src/environments folder has an environment file for development and one for production. Let's use this feature to allow us to use a different API host URL depending on whether we're in development or production mode:

environment.ts:

export const environment = {
  production: false,
  apiHost: 'https://api.local.com'
};

environment.prod.ts:

export const environment = {
  production: true,
  apiHost: 'https://api.production-url.com'
};

And in our app.component.ts all we have to do in order to access the variable is the following:

import { Component } from '@angular/core';
import { environment } from '../environments/environment';

@Component({ ... })
export class AppComponent {
  apiHost: string = environment.apiHost;
}

Now in development mode the apiHost variable resolves to https://api.local.com, and in production it resolves to https://api.production-url.com. You may run ng build --prod and check.
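
As a usage sketch, the value can feed a data service; the HttpClient wiring and the /users endpoint below are assumptions for illustration:

import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { environment } from '../environments/environment';

@Injectable({ providedIn: 'root' })
export class ApiService {
  constructor(private http: HttpClient) {}

  // The request URL resolves against the environment-specific host
  getUsers() {
    return this.http.get(`${environment.apiHost}/users`);
  }
}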

Detecting Development Mode

Angular also provides us with a utility function called isDevMode that makes it easy to check whether the app is running in dev mode:

import { Component, OnInit, isDevMode } from '@angular/core';

@Component({ ... })
export class AppComponent implements OnInit {
  ngOnInit() {
    if (isDevMode()) {
      console.log('Development!');
    } else {
      console.log('Cool. Production!');
    }
  }
}

Adding a Staging Environment

To add a new environment in an Angular project, a new entry should be added to the configurations property in the angular.json file. Let's add a staging environment, for example. Note that the production property already exists.

"configurations": {
  "production": {
    "optimization": true,
    "outputHashing": "all",
    "sourceMap": false,
    "extractCss": true,
    "namedChunks": false,
    "aot": true,
    "extractLicenses": true,
    "vendorChunk": false,
    "buildOptimizer": true,
    "fileReplacements": [
      {
        "replace": "src/environments/environment.ts",
        "with": "src/environments/environment.prod.ts"
      }
    ]
  },
  "stating": {
    "optimization": true,
    "outputHashing": "all",
    "sourceMap": false,
    "extractCss": true,
    "namedChunks": false,
    "aot": true,
    "extractLicenses": true,
    "vendorChunk": false,
    "buildOptimizer": true,
    "fileReplacements": [
      {
        "replace": "src/environments/environment.ts",
        "with": "src/environments/environment.stating.ts"
      }
    ]
  }

And now we can add a staging environment file and build the project with ng build --configuration=staging on our CI (or deploy process) to deploy to the staging environment:

environment.staging.ts:

export const environment = {
  production: false,
  apiHost: 'https://staging.host.com'
};

Categories: Tropeçando

Tropeçando 88

Intro Guide to Dockerfile Best Practices

There are over one million Dockerfiles on GitHub today, but not all Dockerfiles are created equally. Efficiency is critical, and this blog series will cover five areas for Dockerfile best practices to help you write better Dockerfiles: incremental build time, image size, maintainability, security and repeatability. If you’re just beginning with Docker, this first blog post is for you! The next posts in the series will be more advanced.

The case against the ifsetor function

How to traverse nested array structures with potentially non-existent keys without throwing notices.

Laravel Beyond CRUD

A proposal for thinking about Laravel applications using a DDD approach. A blog series for PHP developers working on larger-than-average Laravel projects.

Designing Your First App in Kubernetes, Part 1: Getting Started

Kubernetes’s gravity as the container orchestrator of choice continues to grow, and for good reason: It has the broadest capabilities of any container orchestrator available today. But all that power comes with a price; jumping into the cockpit of a state-of-the-art jet puts a lot of power under you, but how to actually fly the thing is not obvious.

How to run short ALTER TABLE without long locking concurrent queries

Categories: Tropeçando

Tropeçando 87

Craftsmen know their tools

When programmers call themselves craftsmen or artisans, I can agree that we are. At the same time though, some of these programmers underestimate what craftsmanship actually means.

We Programmers

The good, the bad and the ugly.

History and effective use of Vim

This article is based on historical research and on simply reading the Vim user manual cover to cover. Hopefully these notes will help you (re?)discover core functionality of the editor, so you can abandon pre-packaged vimrc files and use plugins more thoughtfully.

Google spent 10 years researching what makes the 'perfect' manager — here are the top 10 traits they found

59 Linux Networking commands and scripts

Categories: Tropeçando

Tropeçando 86

Snyk

Use Open Source. Stay Secure.

A developer-first solution that automates finding & fixing vulnerabilities in your dependencies

Reading List - by Mathias Verraes

Code Reviews and Blame Culture

A common belief is that gated reviews lead to blaming individuals. The opposite can be true.

How to Write a Git Commit Message

Why good commit messages matter

Better Commits with Static Review