Month: August 2018

Replication from External Primary/Leader into GCP

This post is based on recent tutorials I published, and its goal is to discuss how to prepare your current MySQL instance to be configured as an External Primary Server with a Replica/Follower in Google Cloud Platform.

First, a note about the jargon used here. I will be using primary to represent the external “master” server, and replica to represent the “slave” server. Personally, I prefer the terms leader/follower, but primary/replica currently seems to be more common in the industry. The word slave will still appear at some points, but only because it is the keyword embedded in the server to represent a replica.

The steps given will be in the context of a VM running a one-click install of WordPress acquired through the Google Marketplace (formerly known as Launcher).

To prepare for replication, you need to configure your primary to meet some requirements:

  1. server-id must be configured, binary logging must be enabled, and GTID must be enabled and enforced. Tutorial.
  2. A Replication User must exist on the primary; keep in mind you may need root to create it (see the sketch after this list)
  3. A dump file must be generated using the mysqldump command, carrying the binary log coordinates and GTID information the replica needs.
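As a quick illustration of requirements 1 and 2, here is a minimal sketch you could run on the primary. The user name, host mask, and password below are placeholders, so adjust them to your environment:

-- Requirement 1: expect a non-zero server_id, log_bin = 1,
-- gtid_mode = ON, and enforce_gtid_consistency = ON
SELECT @@server_id, @@log_bin, @@gtid_mode, @@enforce_gtid_consistency;

-- Requirement 2: a user with replication permissions (run as root or equivalent)
CREATE USER 'replication_user'@'%' IDENTIFIED BY 'a-strong-password';
GRANT REPLICATION SLAVE ON *.* TO 'replication_user'@'%';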

The steps above are also necessary if you are migrating from another cloud or on-prem.

Why split the application and database and use a service like Cloud SQL?

Cloud SQL

First, you will be able to use your application server for what it was mainly designed to do: serve requests for your WordPress application (and for the purposes of this post it doesn’t much matter whether you are using nginx or Apache).

Databases are heavy; their deadly sin is gluttony. They tend to occupy as much memory as they can to keep lookups fairly fast. Once you are faced with this reality, sharing resources with your application is not a good idea.

Next, you may say: I could use Kubernetes! Yes, you could, but just because you can do something doesn’t mean you should. Configuring stateful applications inside Kubernetes is a challenge, and the fact that pods can be killed at any moment may pose a threat to your data consistency if that happens mid-transaction. There are solutions on the market that run MySQL on top of Kubernetes, but that would be a totally different discussion.

You also don’t need to use Cloud SQL; you can set up your database replicas, or even the primary, on another VM (which still beats putting the database and application together), but in that scenario you are perpetually at risk of hitting the limits of your finite hardware capabilities.

Finally, Cloud SQL has a 99.95% availability SLA and is curated by Google’s SRE team. That means you can focus your efforts on what really matters, developing your application, and not spend hours, or even days, setting up servers. Other persuasively convenient features include PITR (Point-in-Time Recovery) and High Availability in case a failover is necessary.

Setting up the replica on GCP

Accessing the SQL menu in your Google Cloud Console will give you a listing of your current Cloud SQL instances. From there, execute the following steps:

  1. Click on the Migrate Data button
  2. Once you have familiarized yourself with the steps shown on the screen, click on Begin Migration
  3. In the Data source details section, fill out the form as follows:
    1. Name of data source: Any valid name for a Cloud SQL instance that will represent the primary server name
    2. Public IP address of source: The IP address of the primary
    3. Port number of source: The port number for the primary, usually 3306
    4. MySQL replication username: The username associated with the replication permissions on the primary
    5. MySQL replication password: The password for the replication username
    6. Database version: Choose between MySQL 5.6 and MySQL 5.7. If you are not sure which version you are running, execute SELECT @@version; in your primary server and you will have the answer.
    7. (Optional) Enable SSL/TLS certification: Upload or enter the Source CA Certificate
  4. Click on Next

The next section, Cloud SQL read replica creation, will allow you to choose:

  1. Read replica instance ID: Any valid name for a Cloud SQL instance that will represent the replica server name
  2. Location: choose the Region and then the Zone in which your instance will be provisioned.
  3. Machine Type: choose a machine type for your replica; this can be modified later! In some cases it is recommended to choose a higher instance configuration than the one you will keep after the replication synchronization finishes
  4. Storage type: choose between SSD and HDD; for higher performance, choose SSD
  5. Storage capacity: anywhere from 10GB up to 10TB. Checking Enable automatic storage increases means that whenever you near capacity, storage will be incrementally increased. All increases are permanent
  6. SQL Dump File: the dump generated earlier, containing the binary log coordinates and GTID information.
  7. (Optional) More options can be configured by clicking on Show advanced options like Authorized networks, Database flags, and Labels.
  8. Once you’ve filled out this information, click on Create.

The following section, Data synchronization, will display the previously selected options as well as the Outgoing IP Address, which must be added to your current proxy, firewall, or white-list so the replica can connect and fetch replication data. Once you are sure your primary can be accessed using the specified credentials and the IP has been white-listed, you can click on Next; after that, replication will start. A quick sanity check you can run first is sketched below.
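Before clicking Next, you can confirm on the primary that the replication user exists and holds the right privilege. A sketch, reusing the placeholder user from earlier:

-- Run on the primary; adjust the user and host mask to your setup
SHOW GRANTS FOR 'replication_user'@'%';
-- The output should include: GRANT REPLICATION SLAVE ON *.* TO 'replication_user'@'%'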

Live demo

If you want to see this feature in action, please check this video from Google Cloud Next 2018:

Generating a mysqldump to import into Google Cloud SQL

This tutorial is for those of you trying to import a current database into a Google Cloud SQL instance, the replica, which will be set up for replication purposes.

According to the documentation, you will need to run:

mysqldump \
-h [MASTER_IP] -P [MASTER_PORT] -u [USERNAME] -p \
--databases [DBS] \
--hex-blob --skip-triggers --master-data=1 \
--order-by-primary --compact --no-autocommit \
--default-character-set=utf8 --ignore-table [VIEW] \
--single-transaction --set-gtid-purged=on | gzip | \
gsutil cp - gs://[BUCKET]/[PATH_TO_DUMP]

The mysqldump parameters are:

  • -h the hostname or IPv4 address of the primary, which replaces [MASTER_IP]
  • -P the port of the primary server; usually the [MASTER_PORT] value will be 3306
  • -u takes the username passed on [USERNAME]
  • -p informs that a password will be given
  • --databases a space-separated list of the databases to be dumped. Keep in mind [DBS] should not include the sys, performance_schema, information_schema, and mysql schemas
  • --hex-blob necessary for dumping binary columns, whose types can be BINARY, BLOB, and others
  • --skip-triggers recommended for the initial load; you can import the triggers at a later moment
  • --master-data according to the documentation: “It causes the dump output to include a CHANGE MASTER TO statement that indicates the binary log coordinates (file name and position) of the dumped server”
  • --order-by-primary dumps each table’s rows sorted by its primary key
  • --compact produces a more compact output, enabling several flags for the dump
  • --no-autocommit encloses each table dump between SET autocommit=0 and COMMIT statements
  • --default-character-set sets the default character set
  • --ignore-table must list the VIEW to be ignored on import; for multiple views, use this option multiple times (a query to list them is shown after this list). Views can be imported later on, after the promotion of the replica is done
  • --single-transaction a START TRANSACTION WITH CONSISTENT SNAPSHOT is sent to the database, so the dump contains a consistent snapshot of the data up to that point in time
  • --set-gtid-purged writes the state of the GTID information into the dump file and disables binary logging while the dump is loaded into the replica
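Since each view has to be passed to --ignore-table individually, a query like the following sketch can help you build that list (it assumes you want to skip every user-defined view):

-- Run on the primary to list the views to be ignored in the dump
SELECT TABLE_SCHEMA, TABLE_NAME
FROM information_schema.VIEWS
WHERE TABLE_SCHEMA NOT IN ('sys', 'information_schema', 'performance_schema', 'mysql');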

After that, the result is compressed with gzip and uploaded to a bucket on Google Cloud Storage by gsutil cp - gs://[BUCKET]/[PATH_TO_DUMP], where [BUCKET] is the bucket you created on GCS and [PATH_TO_DUMP] is the path under which the file will be saved.

Be aware that no DDL operations should be performed on the database while the dump is being generated, otherwise you might end up with inconsistencies.

See something wrong in this tutorial? Please don’t hesitate to message me through the comments or the contact page.

Configuring GTID and binary logging

This tutorial requires a service restart, since some of the flags presented here cannot be changed dynamically.

What is GTID and why do I need it? Directly from the MySQL documentation (excerpt reproduced as is, with different jargon than used here: for master/slave, we are using primary/replica):

A global transaction identifier (GTID) is a unique identifier created and associated with each transaction committed on the server of origin (the master). This identifier is unique not only to the server on which it originated, but is unique across all servers in a given replication topology.

GTID assignment distinguishes between client transactions, which are committed on the master, and replicated transactions, which are reproduced on a slave. When a client transaction is committed on the master, it is assigned a new GTID, provided that the transaction was written to the binary log. Client transactions are guaranteed to have monotonically increasing GTIDs without gaps between the generated numbers. If a client transaction is not written to the binary log (for example, because the transaction was filtered out, or the transaction was read-only), it is not assigned a GTID on the server of origin.

In theory you can use replication with only binary logging enabled; however, replication with GTID is significantly more reliable. And while some providers don’t require it, on Google Cloud GTID is mandatory.

Representation

To represent a GTID, a pair of coordinates is used: the server_uuid paired with the transaction_id, which is an integer. An example of a valid GTID:

GTID = 8b5dbf2a-45b4-11e8-81bc-42010a800002:25
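If you are curious, you can inspect both coordinates on your own server; the first query below returns the server_uuid, and the second returns the set of GTIDs executed so far (it will be empty if GTID was never enabled):

SELECT @@server_uuid;
SELECT @@GLOBAL.gtid_executed;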

To understand more about how this impacts replication, I recommend reading the GTID Format and Storage section in the MySQL documentation.

Enabling GTID

Thankfully, you don’t need to do much to enable it: edit your mysqld.cnf file to set these variables:

server-id = 2 # Or any other number; we recommend it not be 1
log-bin = mysql-bin # Or any other valid value

gtid_mode = ON
enforce-gtid-consistency = true

Restart the database server to load up the new configuration with sudo service mysql restart.
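Once the server is back up, a quick way to confirm the new configuration took effect:

-- Run after the restart
SHOW VARIABLES LIKE 'gtid_mode';                -- expect ON
SHOW VARIABLES LIKE 'enforce_gtid_consistency'; -- expect ON
SHOW MASTER STATUS; -- should list a binary log file such as mysql-bin.000001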

Side effects

Some applications may cause errors due to the enforce-gtid-consistency flag. That usually happens because the application is trying to perform a non-transactional action that also cannot be replicated inside a transaction.

If you do the following (not a good practice, I may add):

START TRANSACTION;

CREATE TEMPORARY TABLE `tmp_users` ( id INTEGER );

COMMIT;

You will get this error:

ERROR 1787 (HY000): Statement violates GTID consistency: CREATE TEMPORARY TABLE and DROP TEMPORARY TABLE can only be executed outside transactional context. These statements are also not allowed in a function or trigger because functions and triggers are also considered to be multi-statement transactions.

What you are basically doing is telling the database to open a transaction, which is fine; however, the following command is a CREATE TEMPORARY TABLE. This statement is bound to the current connection, and because it won’t be assigned a transaction_id, the server has no way to replicate it. Temporary tables are not replicated.

If your application happens to do that, all you need to do is move the creation of temporary tables outside of the transaction. Unfortunately, Magento does not do that.
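A minimal sketch of that fix, reusing the hypothetical tmp_users table from the example above:

-- Create the temporary table outside of the transactional context
CREATE TEMPORARY TABLE `tmp_users` ( id INTEGER );

START TRANSACTION;
-- ... statements that use tmp_users go here ...
COMMIT;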

See something wrong in this tutorial? Please don’t hesitate to message me through the comments or the contact page.