Author: Gabriela D Ferrara

Extending WordPress Dockerfile to use MySQL 5.7 (or 8.0)

Oracle’s website shows the end of life for MySQL 5.5 as Jan 20th, 2019, so hurry up and upgrade!

I am working on building some demos for Cloud SQL, and one of the requirements I had was to run MySQL 5.7 and WordPress as my sample application. The demo consisted of migrating from a single-VM environment with WordPress and MySQL running alongside each other. The narrative: the site got popular and the database became the bottleneck because of all the resources shared with the application. The proposed solution? A minimal-downtime migration to Cloud SQL, moving the data layer to a dedicated server.

I am going to be doing this demo many times, so I needed some way to automate it, and I thought of doing it through Docker. I am not Docker proficient, so to begin with I asked Anthony for help to get me to what I wanted, but there are so many nuances! Maybe someone will find a better solution than this one, but I decided to share what I got.

Let’s examine the two scenarios I faced. All examples assume Debian/Ubuntu.

I don’t run Docker, just have a VM and want to have MySQL 5.7

In this case it’s straightforward: you need to use the official MySQL APT repository available at https://dev.mysql.com/downloads/repo/apt/.

At the time of writing, the most recent version is mysql-apt-config_0.8.12-1_all.deb; keep an eye on the download page before continuing, because the version may have changed by the time you follow this tutorial.

sudo wget -O /tmp/mysql.deb https://dev.mysql.com/get/mysql-apt-config_0.8.12-1_all.deb
echo mysql-apt-config mysql-apt-config/select-server select mysql-5.7 | sudo debconf-set-selections
export DB_ROOT_PASSWORD=mypassword
echo mysql-community-server mysql-community-server/root-pass password $DB_ROOT_PASSWORD | sudo debconf-set-selections
echo mysql-community-server mysql-community-server/re-root-pass password $DB_ROOT_PASSWORD | sudo debconf-set-selections
sudo DEBIAN_FRONTEND=noninteractive dpkg -i /tmp/mysql.deb
sudo apt-get update
sudo apt-get -y install mysql-server mysql-client

In line 2 you can change mysql-5.7 to mysql-8.0; if the selection is left unspecified, version 8.0 will be installed.
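
For example, to preseed version 8.0 instead, that line would become:

echo mysql-apt-config mysql-apt-config/select-server select mysql-8.0 | sudo debconf-set-selections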

I run Docker and want to have 5.7 or 8.0 installed on it

It’s a bit similar to the previous situation: you still need to go to the APT repository page to know which file to download, and then add this to your Dockerfile:

FROM wordpress:5.0.3-php7.3-apache
### WHATEVER COMES BEFORE ###
EXPOSE 80 443 3306
ENV DEBIAN_FRONTEND noninteractive
ARG DB_ROOT_PASSWORD
RUN apt-get update
RUN apt-get -y install wget lsb-release gnupg
RUN wget -O /tmp/mysql.deb https://dev.mysql.com/get/mysql-apt-config_0.8.12-1_all.deb
RUN echo mysql-apt-config mysql-apt-config/select-server select mysql-5.7 | debconf-set-selections
RUN echo mysql-community-server mysql-community-server/root-pass password $DB_ROOT_PASSWORD | debconf-set-selections
RUN echo mysql-community-server mysql-community-server/re-root-pass password $DB_ROOT_PASSWORD | debconf-set-selections
RUN dpkg -i /tmp/mysql.deb
RUN apt-get update
RUN apt-get -y install mysql-server mysql-client
### WHATEVER COMES AFTER ###

Notice that you can also change the MySQL version here. Don’t forget to pass DB_ROOT_PASSWORD when doing your docker build, using the --build-arg argument. More details here.
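
For reference, the build command might look something like this (the image tag wordpress-mysql57 is just an example):

docker build --build-arg DB_ROOT_PASSWORD=mypassword -t wordpress-mysql57 .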

It works!

These are the workarounds to avoid using MySQL 5.5. With them in place I was finally able to automate my demo. Feel free to share better ways of doing what I did; as I said, I don’t have proficiency in the subject.

Replication from External Primary/Leader into GCP

This is a post based on recent tutorials I published, with the goal of discussing how to prepare your current MySQL instance to be configured as an External Primary Server with a Replica/Follower in Google Cloud Platform.

First, I want to talk about the jargon used here. I will be using primary to represent the external “master” server, and replica to represent the “slave” server. Personally, I prefer the terms leader/follower, but primary/replica currently seems to be more common in the industry. At some point the word slave will be used, but only because it is the keyword embedded in the server to represent a replica.

The steps given will be in the context of a VM running a one-click install of WordPress acquired through the Google Marketplace (formerly known as Launcher).

To help prepare for replication you need to configure your primary to meet some requirements.

  1. server-id must be configured; binary logging must be enabled; GTID must be enabled and enforced (see the configuration sketch after this list). Tutorial.
  2. A replication user must exist on the primary; remember that you may need root privileges to create it
  3. A dump file must be generated using the mysqldump command with specific options so that the needed replication information is included in it.
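
To make items 1 and 2 more concrete, here is a minimal sketch of what this could look like on a Debian/Ubuntu VM; the config file path, the server-id value, and the repl user and password are assumptions you should adapt to your environment:

# Preconditions for the external primary: server-id, binary logging, GTID enabled and enforced
sudo tee /etc/mysql/conf.d/replication.cnf > /dev/null <<'EOF'
[mysqld]
server-id                = 1
log-bin                  = mysql-bin
gtid_mode                = ON
enforce_gtid_consistency = ON
EOF
sudo service mysql restart

# Replication user (you may need root on the primary to create it)
mysql -u root -p -e "CREATE USER 'repl'@'%' IDENTIFIED BY 'repl_password'; GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%'; FLUSH PRIVILEGES;"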

The steps above are also necessary if you are migrating from another cloud or on-prem.

Why split the application and database and use a service like Cloud SQL?

First, you will be able to use your application server to do what it was mainly designed for: serving requests for your WordPress application (and it doesn’t much matter for the purposes of this post whether you are using nginx or Apache).

Databases are heavy; their deadly sin is gluttony: they tend to occupy as much memory as they can to make lookups fairly fast. Once you are faced with this reality, sharing resources with your application is not a good idea.

Next, you may say: I could use Kubernetes! Yes, you could, but just because you can do something doesn’t mean you should. Configuring stateful applications inside Kubernetes is a challenge, and the fact that pods can be killed at any moment may pose a threat to your data consistency if it happens mid-transaction. There are solutions on the market that run MySQL on top of Kubernetes, but that would be a totally different discussion.

You also don’t need to use Cloud SQL; you can set up your database replicas, or even the primary, on another VM (still a win compared with putting the database and application together), but in this scenario you are perpetually at risk of hitting the limits of your finite hardware capabilities.

Finally, Cloud SQL has 99.95% availability and is curated by Google’s SRE team. That means you can focus your efforts on what really matters, developing your application, and not spend hours, or even days, setting up servers. Other persuasively convenient features include PITR (Point-in-Time Recovery) and High Availability in case a failover is necessary.

Setting up the replica on GCP

Accessing the SQL menu in your Google Cloud Console will give you a listing of your current Cloud SQL instances. From there, execute the following:

  1. Click on the Migrate Data button
  2. Once you have familiarized yourself with the steps shown on the screen, click on Begin Migration
  3. In the Data source details, fill the form out as follows:
    1. Name of data source: Any valid name for a Cloud SQL instance that will represent the primary server name
    2. Public IP address of source: The IP address of the primary
    3. Port number of source: The port number for the primary, usually 3306
    4. MySQL replication username: The username associated with the replication permissions on the primary
    5. MySQL replication password: The password for the replication username
    6. Database version: Choose between MySQL 5.6 and MySQL 5.7. If you are not sure which version you are running, execute SELECT @@version; in your primary server and you will have the answer.
    7. (Optional) Enable SSL/TLS certification: Upload or enter the Source CA Certificate
  4. Click on Next

The next section, Cloud SQL read replica creation, will allow you to choose:

  1. Read replica instance ID: Any valid name for a Cloud SQL instance that will represent the replica server name
  2. Location: Choose the Region and then the Zone in which your instance will be provisioned.
  3. Machine Type: Choose a Machine Type for your replica; this can be modified later! In some cases it is recommended to choose a higher instance configuration than the one you will keep after replication synchronization finishes
  4. Storage type: Choose between SSD and HDD. For higher performance, choose SSD
  5. Storage capacity: It can be from 10GB up to 10TB. The checkbox for Enable automatic storage increases means whenever you’re near capacity, space will be incrementally increased. All increases are permanent
  6. SQL Dump File: Dump generated containing binary logging position and GTID information.
  7. (Optional) More options can be configured by clicking on Show advanced options like Authorized networks, Database flags, and Labels.
  8. Once you’ve filled out this information, click on Create.

The following section, Data synchronization, will display the previously selected options as well as the Outgoing IP Address, which must be added to your current proxy, firewall, or whitelist so the replica is able to connect and fetch replication data. Once you are sure your primary can be accessed using the specified credentials, and the IP has been whitelisted, you can click on Next. After that, replication will start.
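
Before clicking on Next, a quick sanity check you can run from any machine allowed to reach the primary (this assumes the mysql client is installed there) is to connect with the replication credentials. Keep in mind this only verifies the credentials and remote access; the actual replication traffic will come from the Outgoing IP Address shown on the screen:

mysql -h [MASTER_IP] -P [MASTER_PORT] -u [REPLICATION_USER] -p -e "SELECT 1;"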

Live demo

If you want to see this feature in action, please check this video from Google Cloud Next 2018:

Generating a mysqldump to import into Google Cloud SQL

This tutorial is for you if you are trying to import your current database into a Google Cloud SQL replica instance that will be set up for replication purposes.

According to the documentation, you will need to run:

mysqldump \
-h [MASTER_IP] -P [MASTER_PORT] -u [USERNAME] -p \
--databases [DBS] \
--hex-blob --skip-triggers --master-data=1 \
--order-by-primary --compact --no-autocommit \
--default-character-set=utf8 --ignore-table [VIEW] \
--single-transaction --set-gtid-purged=on | gzip | \
gsutil cp - gs://[BUCKET]/[PATH_TO_DUMP]

The mysqldump parameters are:

  • -h the hostname or IPV4 address of the primary should replace [MASTER_IP]
  • -P the port of the primary server; usually the [MASTER_PORT] value will be 3306
  • -u takes the username passed on [USERNAME]
  • -p informs that a password will be given
  • --databases a comma separated list of the databases to be imported. Keep in mind [DBS] should not include the sys, performance_schema, information_schema, and mysql schemas
  • --hex-blob necessary for dumping binary columns whose types could be BINARY, BLOB, and others
  • --skip-triggers recommended for the initial load, you can import the triggers at a later moment
  • --master-data according to the documentation: “It causes the dump output to include a CHANGE MASTER TO statement that indicates the binary log coordinates (file name and position) of the dumped server”
  • --order-by-primary it dumps the data in the primary key order
  • --compact produces a more compact output, enabling several flags for the dump
  • --no-autocommit encloses each table dump between SET autocommit=0 and COMMIT statements
  • --default-character-set informs the default character set
  • --ignore-table must list the view to be ignored on import; for multiple views, use this option multiple times. Views can be imported later on, after the promotion of the replica is done
  • --single-transaction a START TRANSACTION is sent to the database so the dump will contain the data up to that point in time
  • --set-gtid-purged writes the state of the GTID information into the dump file and disables binary logging when the dump is loaded into the replica

After that, the result is compressed into a GZIP file and uploaded to a bucket on Google Cloud Storage with gsutil cp - gs://[BUCKET]/[PATH_TO_DUMP], where [BUCKET] is the bucket you created on GCS and [PATH_TO_DUMP] is the desired path for the file.
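
For illustration, a filled-in invocation could look like the following; every value here (host, database, view, bucket) is hypothetical, and it assumes gsutil is installed and authenticated:

mysqldump \
-h 203.0.113.10 -P 3306 -u repl -p \
--databases wordpress \
--hex-blob --skip-triggers --master-data=1 \
--order-by-primary --compact --no-autocommit \
--default-character-set=utf8 --ignore-table=wordpress.a_view \
--single-transaction --set-gtid-purged=on | gzip | \
gsutil cp - gs://my-migration-bucket/dumps/wordpress.sql.gz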

Be aware that no DDL operations should be performed on the database while the dump is being generated; otherwise you might end up with inconsistencies.

See something wrong in this tutorial? Please don’t hesitate to message me through the comments or the contact page.