This tutorial is for anyone trying to import an existing database into a Google Cloud SQL instance, the replica, that will be set up for replication purposes.
According to the documentation, you will need to run:
mysqldump \
  -h [MASTER_IP] -P [MASTER_PORT] -u [USERNAME] -p \
  --databases [DBS] \
  --hex-blob --skip-triggers --master-data=1 \
  --order-by-primary --compact --no-autocommit \
  --default-character-set=utf8 --ignore-table [VIEW] \
  --single-transaction --set-gtid-purged=on | gzip | \
  gsutil cp - gs://[BUCKET]/[PATH_TO_DUMP]
The mysqldump parameters are:
-h: the hostname or IPv4 address of the master.
-P: the port the master listens on; the [MASTER_PORT] value will usually be 3306, the MySQL default.
-u: takes the username passed in [USERNAME].
-p: informs that a password will be prompted for.
--databases: a space-separated list of the databases to be dumped. Keep in mind that [DBS] should not include the system databases (mysql, information_schema, performance_schema).
--hex-blob: necessary for dumping binary columns, whose types could be BINARY, BLOB, VARBINARY and the like.
--skip-triggers: recommended for the initial load; you can import the triggers at a later moment.
--master-data: according to the documentation: “It causes the dump output to include a CHANGE MASTER TO statement that indicates the binary log coordinates (file name and position) of the dumped server”.
--order-by-primary: dumps each table’s rows in primary key order.
--compact: produces a more compact output, enabling several flags for the dump.
--no-autocommit: encloses each table’s data between a SET autocommit=0 and a COMMIT statement.
--default-character-set: informs the default character set, utf8 in this case.
--ignore-table: must list the [VIEW] to be ignored on import; for multiple views, use this option multiple times. Views can be imported later on, after the promotion of the replica is done.
--single-transaction: a START TRANSACTION is sent to the database, so the dump will contain the data up to that point in time.
--set-gtid-purged: writes the state of the GTID information into the dump file and disables binary logging when the dump is loaded into the replica.
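Since --ignore-table has to be repeated once per view, it can be convenient to build that part of the command in a small loop. A minimal sketch, assuming two hypothetical views named mydb.v_orders and mydb.v_totals:

```shell
# Hypothetical view names; in practice you could list them with
# SELECT table_name FROM information_schema.views for your schema.
VIEWS="mydb.v_orders mydb.v_totals"

# Build one --ignore-table flag per view.
IGNORE_ARGS=""
for v in $VIEWS; do
  IGNORE_ARGS="$IGNORE_ARGS --ignore-table=$v"
done

# Print the flags that would be spliced into the mysqldump command.
echo "$IGNORE_ARGS"
```

The resulting string goes where --ignore-table [VIEW] appears in the command above.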
After that, the output is compressed with gzip and uploaded to a bucket on Google Cloud Storage with
gsutil cp - gs://[BUCKET]/[PATH_TO_DUMP], where
[BUCKET] is the bucket you created on GCS and
[PATH_TO_DUMP] is the path under which the file will be saved.
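Putting it together, the whole pipeline can be assembled from shell variables. A sketch with hypothetical values (the script only prints the command so you can review it before running it against a real master):

```shell
#!/usr/bin/env bash
# Hypothetical connection and bucket values; substitute your own.
MASTER_IP="203.0.113.10"
MASTER_PORT="3306"
USERNAME="repl_user"
DBS="appdb"
BUCKET="my-bucket"
PATH_TO_DUMP="dumps/appdb.sql.gz"

# Assemble the pipeline from the tutorial; printed, not executed.
CMD="mysqldump -h $MASTER_IP -P $MASTER_PORT -u $USERNAME -p \
--databases $DBS \
--hex-blob --skip-triggers --master-data=1 \
--order-by-primary --compact --no-autocommit \
--default-character-set=utf8 \
--single-transaction --set-gtid-purged=on \
| gzip | gsutil cp - gs://$BUCKET/$PATH_TO_DUMP"

echo "$CMD"
```

Once reviewed (including any --ignore-table flags for your views), it can be run with bash -c "$CMD" or pasted into a shell.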
Be aware that no DDL operations should be performed on the database while the dump is being generated, otherwise you might end up with inconsistencies.
See something wrong in this tutorial? Please don’t hesitate to message me through the comments or the contact page.