Liquibase is a versioning tool for databases. At the time of writing it is on version 3.5 and is distributed as a JAR. It has been on the market since 2006 and recently completed its 10th anniversary. Its feature list includes:
- Code branching and merging
- Multiple database types
- Supports XML, YAML, JSON and SQL formats
- Supports context-dependent logic
- Generate Database change documentation
- Generate Database “diffs”
- Run through your build process, embedded in your application or on demand
- Automatically generate SQL scripts for DBA code review
- Does not require a live database connection
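As a quick illustration of the last two points: the updateSQL command prints the SQL that would be run without touching the database, which is handy for handing a script to a DBA for review (the file paths here are placeholders):

```shell
# Print the SQL Liquibase would execute, without applying it,
# and save it for DBA code review.
$ liquibase --changeLogFile=changes/changelog.json updateSQL > review.sql
```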
Why do you need it?
Some frameworks come with built-in migration solutions out of the box, such as Eloquent and Doctrine. There is nothing wrong with using something like that when you have only one DB per project, but when you have multiple systems, it starts to get complicated.
Since Liquibase works as a versioning tool, you can branch and merge as needed (like you would with code in git). You have contexts, which means changes can be applied to specific environments only, and tagging capabilities allow you to perform rollbacks.
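For example, a changeset can be restricted to a single environment with the context attribute. Here is a sketch in the JSON changelog format (the changeset id, author and SQL are made up for illustration):

```json
{
  "changeSet": {
    "id": "seed_staging_data",
    "author": "example",
    "context": "staging",
    "changes": [
      { "sql": { "sql": "INSERT INTO settings (name, value) VALUES ('debug', '1')" } }
    ]
  }
}
```

This changeset is applied only when Liquibase is run with the --contexts=staging flag; runs without that context skip it.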
A rollback is a tricky thing; you can either do an automatic rollback or define a script. Scripted rollbacks are useful when dealing with MySQL, for instance, where DDL changes are NOT transactional.
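Since MySQL will happily auto-commit a CREATE TABLE halfway through a failed run, a changeset can carry an explicit rollback block that Liquibase executes on the rollback command. A sketch in the JSON format, with illustrative names:

```json
{
  "changeSet": {
    "id": "create_example_table",
    "author": "example",
    "changes": [
      {
        "createTable": {
          "tableName": "example",
          "columns": [
            { "column": { "name": "id", "type": "int" } }
          ]
        }
      }
    ],
    "rollback": [
      { "dropTable": { "tableName": "example" } }
    ]
  }
}
```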
Guidelines for changelogs and migrations
- MUST be written using the JSON format. Exceptions are
- MUST NOT be edited. If a new column is to be added, a new migration file must be created, and the file MUST be added AFTER the last run transaction.
There could be 3 main branches: development, staging and production.
- Create your changelog branch;
- Merge into development;
- When the feature is ready for staging, merge into staging;
- When the feature is ready, merge into production;
- The main branches DO NOT merge amongst themselves in any capacity;
- DO NOT rebase the main branches;
- Custom branches MUST be deleted after being merged into production.
The downside of this approach is that the state of the branches diverges over time. The current process is to compare the branches from time to time and manually check the diffs for unplanned discrepancies.
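Liquibase's diff command can take some of the manual work out of that comparison. Something along these lines compares two environments directly (the connection details are placeholders; driver and classpath can come from the properties file):

```shell
$ liquibase --url=jdbc:mysql://localhost:3306/mydb_staging \
    --username=root --password=123 \
    --referenceUrl=jdbc:mysql://localhost:3306/mydb_production \
    --referenceUsername=root --referencePassword=123 \
    diff
```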
Procedures for converting a legacy database to Liquibase migrations
Some projects are complete monoliths: more than one application connects to the same database, which is not a good practice. If you are working with that sort of project, I recommend treating the database source as its own repository, rather than keeping it together with your application.
This is a way I found for keeping the structure reasonably sensible. Suggestions are welcome.
Create the property file
It should be in the root of the project and be named liquibase.properties:

```
driver: com.mysql.jdbc.Driver
classpath: /usr/share/java/mysql-connector-java.jar:/usr/share/java/snakeyaml.jar
url: jdbc:mysql://localhost:3306/mydb
username: root
password: 123
```
The JAR files in the classpath can be downloaded manually or installed through the server's package manager.
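On a Debian-based server, for instance, the two JARs above can usually be pulled in with the package manager (these are the Debian package names; they may differ on your distribution):

```shell
# Installs /usr/share/java/mysql-connector-java.jar and
# /usr/share/java/snakeyaml.jar respectively.
$ sudo apt-get install libmysql-java libyaml-snake-java
```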
Create the Migration file
You can choose between different formats; I chose JSON. In this instance I will be running this SQL:
```sql
CREATE TABLE `mydb_users` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `username` varchar(25) CHARACTER SET utf8 DEFAULT NULL,
  `password` varchar(255) CHARACTER SET utf8 DEFAULT NULL,
  `activated` tinyint(1) NOT NULL DEFAULT '0',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
```
Which will translate to this:

```json
{
  "databaseChangeLog": [
    {
      "changeSet": {
        "id": "create_mydb_users",
        "author": "me",
        "changes": [
          {
            "createTable": {
              "tableName": "mydb_users",
              "columns": [
                {
                  "column": {
                    "name": "id",
                    "type": "int unsigned",
                    "autoIncrement": true,
                    "constraints": { "primaryKey": true, "nullable": false }
                  }
                },
                { "column": { "name": "username", "type": "varchar(25)" } },
                { "column": { "name": "password", "type": "varchar(255)" } },
                {
                  "column": {
                    "name": "activated",
                    "type": "tinyint(1)",
                    "defaultValueNumeric": 0,
                    "constraints": { "nullable": false }
                  }
                }
              ]
            }
          }
        ],
        "modifySql": [
          {
            "append": {
              "value": " ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci"
            }
          }
        ]
      }
    }
  ]
}
```
Is it verbose? Yes, completely, but in return you get a tool that shows you what the SQL will look like and lets you manage rollbacks.
Save the file as:

```
./changes
    changelog.json
    create_mydb_users.json
```
Where changelog.json looks like this:
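A minimal master changelog that pulls in the individual migration files via Liquibase's include directive might look like this (assuming the file layout above):

```json
{
  "databaseChangeLog": [
    {
      "include": { "file": "changes/create_mydb_users.json" }
    }
  ]
}
```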
For each new change, you add it to the end of the changelog file.
To run, execute:
```shell
$ liquibase --changeLogFile=changes/changelog.json migrate
```
Don’t worry if you run it twice; each change is only applied once.
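That idempotence comes from the DATABASECHANGELOG table Liquibase creates in the target database: each applied changeset is recorded there with a checksum, and anything already listed is skipped on later runs. You can inspect it directly:

```sql
-- Each row is one applied changeset; Liquibase skips anything already listed.
SELECT ID, AUTHOR, FILENAME, DATEEXECUTED
FROM DATABASECHANGELOG
ORDER BY DATEEXECUTED;
```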
Next post will cover how to add a legacy DB into Liquibase.
To go deeper into the Liquibase formats and documentation, access this link.
One thought on “Creating Migrations with Liquibase”
Hi, do you separate somehow schema migration and data migration?
Which id strategy did you choose?
We currently use the XML format, but it looks completely unreadable. Why did you choose JSON and not YAML?