
MySQL 8.0 joins Google Cloud SQL

Here is the announcement blog post, and below is my overview, originally posted on Medium:

The one where MySQL 8.0 lands on Google Cloud SQL

Ada Doglace and Lily Grace (lilygrams). Photo by Anthony Ferrara.

There are many things that make me happy: puppies (see picture), food, wine, and databases… (not necessarily in that order). And some things make me even happier, such as a well-designed schema and proper usage of an ORM (Object-Relational Mapping).

MySQL was the database I used to love to hate. It grew on me, and the long strides made to make it more consistent and more modern kept me from reaching for other open-source databases for my day-to-day projects. So it is with great expectation and admiration that I am proud of the Cloud SQL team for achieving this milestone:

Today we are making MySQL 8.0 available on Cloud SQL. It’s not just a new version inside our managed databases portfolio; it also comes with all the Cloud SQL capabilities, such as Automatic Storage Increase, High Availability, Cross-region Replication, and PITR (point-in-time recovery). And best of all: it’s not a beta, it’s a GA launch! See the announcement.

You can start using it today from scratch or migrate an existing MySQL database to Cloud SQL; to minimize downtime, you can use the External Replication feature from your Google Cloud Console or via the gcloud command-line tool.
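As a quick illustration, here is a minimal sketch of creating a brand-new MySQL 8.0 instance with gcloud (the instance name, tier, and region below are placeholders you would adjust):

# Create a MySQL 8.0 Cloud SQL instance; name, tier and region are illustrative
gcloud sql instances create my-mysql80-instance \
    --database-version=MYSQL_8_0 \
    --tier=db-n1-standard-1 \
    --region=us-central1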

MySQL 8.0 has a huge list of new features, and you can perform a range of new operations and queries. You can even do silly things like generating the Fibonacci sequence in SQL (it is true!), traversing a binary tree, or even solving the old FizzBuzz:

WITH RECURSIVE fizz_buzz (sequence, modulo_3, modulo_5) AS (
  SELECT 1, CAST('' AS CHAR(4)), CAST('' AS CHAR(5))
  UNION ALL
  SELECT sequence + 1,
         IF(MOD(sequence + 1, 3) = 0, 'Fizz', ''),
         IF(MOD(sequence + 1, 5) = 0, 'Buzz', '')
  FROM fizz_buzz
  WHERE sequence < 100
)
SELECT
       IF(CONCAT(modulo_3, modulo_5) = '', sequence, CONCAT(modulo_3, modulo_5)) AS fizzbuzz
FROM fizz_buzz;
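As a side note, recursive CTEs like this one are new in MySQL 8.0; the WHERE sequence < 100 clause keeps the recursion comfortably below the default cte_max_recursion_depth of 1,000.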

On a more serious, enterprise-worthy note, you can now avoid sub-queries (and other N+1 problems), have better access control, and use Window Functions.
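For example, here is a minimal sketch of a window function that ranks each customer’s orders without a correlated sub-query (the orders table and its customer_id and amount columns are hypothetical):

-- Rank each customer's orders by amount, highest first;
-- table and column names are illustrative
SELECT customer_id,
       amount,
       RANK() OVER (PARTITION BY customer_id ORDER BY amount DESC) AS amount_rank
FROM orders;

There is plenty of new stuff, and here is a small list of talks on MySQL 8.0 I have given since the preview versions in 2017. One of my favorites is the following: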

phpDay about MySQL 8.0 Features

At Google, we call anyone who uses a keyboard to work on a technical aspect of a product a Technical Practitioner. It is a broad definition, but one that is more inclusive of DBAs, DevOps engineers, and SysAdmins alike.

My particular goal has always been to help the day-to-day practitioner do things they wouldn’t imagine their database was capable of. This is why I have the Office Hours. This is why I do those talks and should do more blog posts.

If you want to learn more about MySQL 8.0, I invite you to visit my website, gabi.dev, where I have posted several articles on MySQL.

Happy Launch day!

Gabi’s Office Hours a.k.a. the Hallway Track

Office Hours featuring gabidavila
Book your time on gabi.tips/slots

Since travel to deliver content and raise awareness is on hold for a while (COVID-19), I am missing the interactions I would have with people at random times during a conference: the exchange of ideas and the impromptu problem solving.

The hallway track of a conference has always been my favorite part: the networking, exchanging ideas, getting feedback about products. All of that is harder to replicate without the talks and the social setting of a conference.

I decided to experiment and find new ways to interact with “my” audience, and so the idea of Office Hours came up. The idea is not originally mine, but I like that I can have a technical conversation with someone who has questions about databases (not just about Cloud SQL) while also providing a window of opportunity for you to give me feedback on our products.

Examples of things we can talk about:

  • Query Performance
  • Best Practices for Migrations
  • Where to store your data
  • Should I put my server on Kubernetes?
  • How do I migrate my data to the cloud?

The pilot started with 30-minute sessions and the availability to talk to up to 10 people weekly (5h/week). However, I think it is more productive to change to up to 3h/week, talking to 9 people in 20-minute slots across more time zones.

Click here to book your time: gabi.tips/slots.

Tips for a good appointment:

  • Write down your questions and add them to the booking tool.
  • If lengthy context is needed to understand your problem, please add the information in the booking tool; however, do not send me:
    • PII (Personally Identifiable Information)
    • Your SQL dump
    • Your intellectual property
    • Database credentials
  • Have a defined scope for what you want to talk about; I can’t solve everything in 20 minutes.
  • Do not double-book; other people should also have a chance to take advantage of this.

Disclaimers

This is not a guaranteed consultancy agreement; this is just people talking informally about tech problems and possible solutions. The information shared and explained serves as guidance: you are responsible for weighing your options, and if any advice is acted upon, the outcome is your responsibility.

Generating a mysqldump to import into Google Cloud SQL

This tutorial is for you if you are trying to import your current database into a Google Cloud SQL instance (a replica) that will be set up for replication purposes.

According to the documentation, you will need to run:

mysqldump \
-h [MASTER_IP] -P [MASTER_PORT] -u [USERNAME] -p \
--databases [DBS] \
--hex-blob --skip-triggers --master-data=1 \
--order-by-primary --compact --no-autocommit \
--default-character-set=utf8 --ignore-table [VIEW] \
--single-transaction --set-gtid-purged=on | gzip | \
gsutil cp - gs://[BUCKET]/[PATH_TO_DUMP]

The mysqldump parameters are:

  • -h the hostname or IPv4 address of the primary, which replaces [MASTER_IP]
  • -P the port of the primary server; usually the [MASTER_PORT] value will be 3306
  • -u takes the username passed on [USERNAME]
  • -p informs that a password will be given
  • --databases a space-separated list of the databases to be dumped. Keep in mind [DBS] should not include the sys, performance_schema, information_schema, and mysql schemas
  • --hex-blob necessary for dumping binary columns, whose types could be BINARY, BLOB, and others
  • --skip-triggers recommended for the initial load, you can import the triggers at a later moment
  • --master-data according to the documentation: “It causes the dump output to include a CHANGE MASTER TO statement that indicates the binary log coordinates (file name and position) of the dumped server”
  • --order-by-primary dumps each table’s rows sorted by its primary key
  • --compact produces a more compact output, enabling several flags for the dump
  • --no-autocommit encloses each table’s dump between SET autocommit=0 and COMMIT statements
  • --default-character-set informs the default character set
  • --ignore-table must list the view to be ignored on import; for multiple views, use this option multiple times. Views can be imported later, after the replica is promoted
  • --single-transaction sends a START TRANSACTION to the server, so the dump contains a consistent snapshot of the data at that point in time
  • --set-gtid-purged writes the state of the GTID information into the dump file and disables binary logging when the dump is loaded into the replica

After that, the output is compressed with gzip and uploaded to a bucket on Google Cloud Storage with gsutil cp - gs://[BUCKET]/[PATH_TO_DUMP], where [BUCKET] is the bucket you created on GCS and [PATH_TO_DUMP] is the path where the file will be saved.
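For instance, here is the same command with purely illustrative values filled in (the host, user, database, view, and bucket names below are hypothetical):

# All values below are placeholders for illustration only
mysqldump \
-h 203.0.113.10 -P 3306 -u repl_user -p \
--databases app_db \
--hex-blob --skip-triggers --master-data=1 \
--order-by-primary --compact --no-autocommit \
--default-character-set=utf8 --ignore-table app_db.reporting_view \
--single-transaction --set-gtid-purged=on | gzip | \
gsutil cp - gs://my-sql-dumps/app_db/dump.sql.gz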

Be aware that no DDL operations should be performed on the database while the dump is being generated; otherwise, you might end up with inconsistencies.

See something wrong in this tutorial? Please don’t hesitate to message me through the comments or the contact page.