Ramblings on optimizations, anti patterns and N+1

A lot of people ask me to teach them how to do query analysis and performance tuning. The truth is: there isn’t a script to follow. The following paragraphs are a brain dump of what usually goes through my mind when I am debugging and analyzing.

Please comment on what you think I should focus on to cover here.


  • It’s just a messy post with database-y stuff
  • This post doesn’t have a conclusion; it is just me laying out my thoughts on performance and optimization.


Query performance is a really difficult subject to talk about, mostly because SQL is a declarative language: it is up to the optimizer to decide which way is best to retrieve the information needed, and that decision is based on many variables.

The most common problem regarding optimization I see comes not from the database itself, but from how we handle the requests at the application layer. The following, for instance, would cause an N+1 problem:

Code example:

users = User.all
users.each do |user|
  puts "Name: #{user.name}"
  puts "Addresses:"
  user.addresses.each do |address|
    puts address.street
    puts "#{address.city}, #{address.state}"
  end
end


Although seemingly innocent at first, this code could easily slow down the database due to the number of queries it issues: one to load the users, plus one per user to load that user’s addresses.
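In ActiveRecord, the usual fix is eager loading, e.g. User.includes(:addresses), which fetches all addresses in a single extra query instead of one per user. Stripped of the ORM, the underlying idea is just batching and grouping; here is a plain-Ruby sketch (the Struct data is hypothetical, standing in for database rows):

```ruby
# Plain-Ruby sketch of what eager loading does: fetch all children once,
# group them in memory by the parent key, then iterate with no extra lookups.
User = Struct.new(:id, :name)
Address = Struct.new(:user_id, :street, :city, :state)

users = [User.new(1, "Ada"), User.new(2, "Grace")]
addresses = [
  Address.new(1, "1 Main St", "Springfield", "IL"),
  Address.new(2, "2 Oak Ave", "Portland", "OR")
]

# One "query" for all addresses, grouped by user_id
by_user = addresses.group_by(&:user_id)

users.each do |user|
  puts "Name: #{user.name}"
  (by_user[user.id] || []).each do |address|
    puts "  #{address.street}, #{address.city}, #{address.state}"
  end
end
```

Two queries total (users, then addresses) no matter how many users there are, instead of N+1.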

You also need to know the intricacies of indexes: which one is best; if you have a composite index, which column should go first; and what happens if you use only one of the columns of a two-column index in your search. Does it still use the index somehow? Another rule of thumb is that a single-column BTREE index can be traversed either ASC or DESC.
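To make the composite-index question concrete, here is a minimal migration sketch (assuming a Rails app; the table, columns, and index name are hypothetical). A BTREE index on (user_id, city) can serve queries that filter on its leftmost prefix, but not on the trailing column alone:

```ruby
class AddUserIdCityIndexToAddresses < ActiveRecord::Migration[5.1]
  def change
    # Composite BTREE index: only leftmost prefixes of (user_id, city)
    # can be matched by the optimizer.
    add_index :addresses, [:user_id, :city]

    # Can use the index:
    #   WHERE user_id = 42
    #   WHERE user_id = 42 AND city = 'Springfield'
    # Cannot use it as a prefix:
    #   WHERE city = 'Springfield'
  end
end
```

This is why column order matters: put the column you always filter on first.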

Or better yet: why are my transactions taking so long to complete? Does the table have too many indexes? Is some other query locking table X?

Even a single INNER JOIN could be highly costly if it joins two large tables.

Why are you saving that JSON in a TEXT field? And while we are on the subject: do you really need that JSON in the relational database, and not in a document store?

You don’t need to port all your data from PostgreSQL/MySQL to MongoDB if you want to have MongoDB on your stack. Everything has its place: relational data in relational databases, non-relational data in non-relational databases. I even find benchmarks between a SQL database and a NoSQL one unfair. They were made to solve different problems; you can’t possibly have the same use case for both of them.

No, it’s not OK to have category_1, category_2, ..., category_n as columns in your products table.
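The normalized alternative looks something like this (a sketch, assuming Rails; the table and column names are hypothetical): a categories table plus a join table, so a product can have any number of categories without ever changing the schema:

```ruby
class CreateCategoriesAndJoinTable < ActiveRecord::Migration[5.1]
  def change
    create_table :categories do |t|
      t.string :name
    end

    # Join table: one row per (product, category) pair, replacing the
    # category_1..category_n columns on products.
    create_table :categories_products, id: false do |t|
      t.integer :product_id
      t.integer :category_id
    end

    # Prevent the same category being attached to a product twice.
    add_index :categories_products, [:product_id, :category_id], unique: true
  end
end
```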

Avoid nullable fields as much as possible.

Relationships should also live explicitly in the RDBMS, not only in your models. If you have a user_id on your addresses table, tell the database so; naming the column user_id doesn’t automatically create the foreign key.

You need:

ALTER TABLE addresses ADD CONSTRAINT `fk_user_id` FOREIGN KEY (`user_id`) REFERENCES users (`id`);

Or your migration should look something like this:

class CreateUsers < ActiveRecord::Migration[5.1]
  def change
    create_table :users do |t|
      t.string :name
    end
  end
end

class CreateAddresses < ActiveRecord::Migration[5.1]
  def change
    create_table :addresses do |t|
      t.text :street
      t.string :city
      t.string :state
      t.string :zipcode
      t.integer :user_id
    end
    add_foreign_key :addresses, :users
  end
end

The add_foreign_key :addresses, :users line adds to the addresses table a foreign key referencing users.


And you, what do you think is missing in this blog post? What do you want to go deeper on?

7 thoughts on “Ramblings on optimizations, anti patterns and N+1”

  1. InnoDB is a really good KV store, so sometimes it makes a lot of sense to use MySQL as a KV store instead of the badly designed storage engines in most NoSQL stores.


  2. I’ve been at several orgs where they advocated removing all foreign key constraints in the db for performance reasons. In some cases they did dev with the constraints there to ensure that the logic was sound, but in others they just kinda hoped. I’m curious as to your implication that adding foreign keys is a boon to performance instead of a penalty? Or were you just saying that specific one was an anti-pattern vs. a performance win?


    1. You can have the fastest database in the world, but if you can’t ensure data consistency, that means nothing. The cost spent sanitizing and normalizing the data afterwards for other scenarios, like business intelligence, ends up as high as fixing the damn issue with a foreign key in the first place.

      I can see why foreign keys can slow down performance on writes, but there is also the gain of ensuring the data is right, and the query optimizer will assume a lot of things given the existence of a foreign key. (Interesting article: https://www.scarydba.com/2010/11/22/do-foreign-key-constraints-help-performance/)

      Organizations I’ve seen that advocate for extinguishing foreign keys usually have a bad design regarding values in columns. NULL is not a value, hence the problem when you have a nullable field that is a FK. There are ways to circumvent that, but banishing foreign keys is not one of them. You should not use NULL as a value in your tables. Use a flag instead if you wish to indicate the presence or absence of something. Don’t build assumptions on top of NULL.


    2. Also, complementing my comment from before: if you drop the foreign key to spare the database the lookup before INSERT/UPDATE, don’t forget that whatever cost you saved, your application now pays with an added SELECT to verify the integrity the database would have checked for you.

