This article highlights how database change management, an important aspect of the software delivery process, is often the bottleneck in implementing Continuous Delivery. It also recommends processes that help overcome this bottleneck and allow application code delivery and database changes to be streamlined into a single delivery pipeline.
Leaving the database out of CI has caused it to become the bottleneck that slows down the delivery of new features. Even though it’s more difficult to execute, many organizations have successfully implemented database CI with the right tools.
Almost all enterprise applications involve a database, so project deliverables include database-related work in addition to application code development. Slowness in delivering that database work, for example a schema change, therefore delays the delivery of the entire release.
When all database changes are described with scripts, source control can be applied and the database development process can include continuous integration and continuous delivery activities, allowing database changes to take part in the deployment pipeline.
Problem Statement:
- Most of the work related to database changes during deployments or releases is handled manually by database administrators (DBAs).
- This creates several challenges, such as difficulty in maintaining a history of changes. Even when changes are tracked in a work management tool, reconstructing the flow and history of what was actually applied remains difficult.
- There is a manual dependency on DBAs, and throughput is limited by the number of DBAs available across all services. This becomes a bottleneck at deployment time, especially during major releases when several services are being deployed with database changes.
- Synchronization issues: when scripts are executed manually, the developer and DBA must coordinate so that pre-deployment and post-deployment scripts are executed in the right order. This requires extensive communication during the deployment window, and a lack of synchronization can result in mistakes.
- To overcome the above challenges, we implemented database change automation using a Bitbucket repository and a Jenkins pipeline.
- Make all database changes via scripts, store the scripts in source control, and automate the deployment steps.
- Delivering working software frequently, receiving early feedback from customers, and having self-organized cross-functional teams led to faster time to market and satisfied customers.
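With these practices in place, a database change repository might be laid out as follows. This is a hypothetical structure; the directory and file names are illustrative and not taken from the original setup, apart from the Jenkinsfile and variables.yaml described below:

```
db-changes/
├── Jenkinsfile                   # pipeline definition
├── variables.yaml                # database connection parameters
└── scripts/
    └── DBK-123_add_index.sql     # versioned SQL change script (name is illustrative)
```

Keeping each change as its own versioned script is what makes the PR-based review and automated execution possible.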
The sections below explain how the PR-based database operation works.
Jenkinsfile: contains the pipeline script that defines how the SQL script is executed. The pipeline launches a GKE pod, installs the respective database client utility in it, and uses that client to execute the script.
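A minimal declarative Jenkinsfile along these lines could look as follows. This is a sketch only: the pod template, container image, stage names, and the choice of PostgreSQL's psql as the client are all assumptions, and the Vault credential wiring will differ per environment:

```groovy
// Illustrative sketch: launches a throwaway pod, installs a DB client,
// and runs the change script. Names and commands are assumptions.
pipeline {
    agent {
        kubernetes {
            // Pod spec for the GKE agent pod (hypothetical file)
            yamlFile 'pod-template.yaml'
        }
    }
    stages {
        stage('Install DB client') {
            steps {
                // Install the database client utility inside the pod
                sh 'apk add --no-cache postgresql-client'
            }
        }
        stage('Execute script') {
            steps {
                // HOSTNAME comes from variables.yaml; DB_USER/DB_PASS are
                // injected from Vault rather than stored in the repo
                sh 'psql -h "$HOSTNAME" -U "$DB_USER" -f scripts/change.sql'
            }
        }
    }
}
```

The key design point is that the pod is ephemeral: the client utility exists only for the duration of the job, so no long-lived host needs database credentials or client tooling installed.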
variables.yaml specifies the parameters required to connect to the database:
Hostname: the target host where the script will be executed.
DBK: the ticket ID created by the developer for reference.
Username/password: for security reasons, the credentials are retrieved from the vault, so only the keys need to be mentioned here.
Secret_path: the vault path for the respective database.
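A variables.yaml along these lines could look as follows. The field names follow the parameters described above; all values are placeholders:

```yaml
# Connection parameters for the target database (values are placeholders)
Hostname: db.example.internal        # target host where the script runs
DBK: DBK-123                         # ticket ID created by the developer
Username: db_user_key                # vault key, not the literal username
Password: db_pass_key                # vault key, not the literal password
Secret_path: secret/data/orders-db   # vault path for this database
```

Because only vault keys appear in the file, the repository never contains actual credentials and can be reviewed and merged safely.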
This is the actual SQL script that needs to be executed on the database.
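For example, such a script might contain a simple, idempotent DDL statement. The table, column, and index names here are hypothetical:

```sql
-- DBK-123: add an index to speed up order lookups (illustrative example)
-- IF NOT EXISTS keeps the script safe to re-run
CREATE INDEX IF NOT EXISTS idx_orders_customer_id
    ON orders (customer_id);
```

Writing scripts to be re-runnable where the database supports it reduces the risk of a failed pipeline leaving the schema in a half-applied state.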
Flow of execution:
Once the DBA-approved PR is merged into the master branch, a Jenkins job is triggered. Based on the Jenkins pipeline script, a Kubernetes pod is launched and the respective database client is installed. The pipeline then connects to the target database and executes the queries; errors and logs can be examined in the Jenkins console.
- Developers can execute database scripts themselves without depending on the DBA during deployments.
- Changes are tracked in source control.