Generating a mysqldump to import into Google Cloud SQL

This tutorial is for you if you are trying to import your current database into a Google Cloud SQL instance that will be set up as a replica for replication purposes.

According to the documentation, you will need to run:

[code lang=bash]
mysqldump \
-h [MASTER_IP] -P [MASTER_PORT] -u [USERNAME] -p \
--databases [DBS] \
--hex-blob --skip-triggers --master-data=1 \
--order-by-primary --compact --no-autocommit \
--default-character-set=utf8 --ignore-table [VIEW] \
--single-transaction --set-gtid-purged=on | gzip | \
gsutil cp - gs://[BUCKET]/[PATH_TO_DUMP]
[/code]

The mysqldump parameters are (a filled-in example follows the list):

  • -h the hostname or IPv4 address of the primary, which replaces [MASTER_IP]
  • -P the port of the primary server; the [MASTER_PORT] value will usually be 3306
  • -u takes the username passed as [USERNAME]
  • -p informs that a password will be given
  • --databases a space-separated list of the databases to be dumped. Keep in mind that [DBS] should not include the sys, performance_schema, information_schema, and mysql schemas
  • --hex-blob necessary for dumping columns with binary data types, such as BINARY, BLOB, and others
  • --skip-triggers recommended for the initial load, you can import the triggers at a later moment
  • --master-data according to the documentation: “It causes the dump output to include a CHANGE MASTER TO statement that indicates the binary log coordinates (file name and position) of the dumped server”
  • --order-by-primary dumps each table's rows ordered by its primary key
  • --compact produces a more compact output by enabling several --skip-* options for the dump
  • --no-autocommit encloses each table's dump between SET autocommit=0 and COMMIT statements
  • --default-character-set sets the default character set for the dump
  • --ignore-table must list the view to be ignored on import; for multiple views, use this option multiple times. Views can be imported later on, after the replica has been promoted
  • --single-transaction a START TRANSACTION is sent to the database so the dump will contain the data up to that point in time
  • --set-gtid-purged writes the state of the GTID information to the dump file and disables binary logging when the dump is loaded into the replica
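
To make the placeholders concrete, here is a sketch of the same command with hypothetical values filled in; the host, user, database, view, and bucket names below are invented for illustration only, so adjust them to your environment.

[code lang=bash]
# Hypothetical values -- replace the host, user, schemas, view, and bucket with your own.
mysqldump \
-h 203.0.113.10 -P 3306 -u repl_user -p \
--databases app_db billing_db \
--hex-blob --skip-triggers --master-data=1 \
--order-by-primary --compact --no-autocommit \
--default-character-set=utf8 --ignore-table app_db.monthly_report_view \
--single-transaction --set-gtid-purged=on | gzip | \
gsutil cp - gs://my-sql-dumps/app/dump.sql.gz
[/code]

If you are not sure which views exist in the schemas you are dumping, a query along these lines can list them so you know what to pass to --ignore-table (again, the connection values are just a sketch):

[code lang=bash]
# List views as schema.name pairs in the schemas being dumped.
mysql -h 203.0.113.10 -P 3306 -u repl_user -p \
-e "SELECT CONCAT(TABLE_SCHEMA, '.', TABLE_NAME) FROM information_schema.VIEWS WHERE TABLE_SCHEMA IN ('app_db', 'billing_db');"
[/code]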

After that, the result is compressed with gzip and uploaded to a bucket on Google Cloud Storage by gsutil cp - gs://[BUCKET]/[PATH_TO_DUMP], where [BUCKET] is the bucket you created on GCS and [PATH_TO_DUMP] is the path under which the file will be saved.
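
Once the command finishes, it can be worth confirming that the object landed in the bucket and that the dump actually carries the binary log coordinates needed for replication. A minimal check, reusing the hypothetical bucket and path from the example above, could look like this:

[code lang=bash]
# Confirm the compressed dump exists in the bucket and check its size.
gsutil ls -l gs://my-sql-dumps/app/dump.sql.gz

# Peek at the start of the dump to verify the CHANGE MASTER TO statement
# written by --master-data=1 is present.
gsutil cat gs://my-sql-dumps/app/dump.sql.gz | zcat | head -n 100 | grep "CHANGE MASTER TO"
[/code]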

Be aware that no DDL operations should be performed on the database while the dump is being generated, otherwise you might end up with inconsistencies.

See something wrong in this tutorial? Please don’t hesitate to message me through the comments or the contact page.
