How to best use Azure DevOps release pipelines with microservices?

Today, Robin Kilpatrick (@rakilpatrick) asked The League (#LoECDA) a question on Twitter: what is the best way to use Azure Pipelines with microservices? He even posted the question on Stack Overflow.

This is a great question. Microservices make it easy to build and release only the part of your application that changed instead of the entire application. But you pay a tax integrating all those microservices together.

How do you make sure all your services work together well? How do you make sure a change to one of your services doesn’t break another service?  It makes perfect sense that you need integration tests that make sure all your services play together nicely.

So what’s the best way to use Azure DevOps (Azure Pipelines specifically) to build and deploy microservices?

After discussing this with Donovan (@DonovanBrown) and Damian (@damovisa), here are our thoughts.

  1. Having a repo per microservice/releasable unit/web front end makes sense.
  2. Each repo/microservice/releasable unit/web front end has a CI build that compiles, runs extensive unit tests, and creates your build artifacts. If any of those unit tests fail, the build fails.
  3. You can have a release pipeline for each CI build, since these pipelines can be used to deploy just an individual microservice if you want.
  4. For this scenario, you can then have a release pipeline that uses the build artifacts from each CI build, always taking the latest successful CI build for each repo. This release pipeline has two stages (or more if you need them): a nightly/testing stage and a production stage. The nightly/testing stage grabs all the latest successful artifacts, deploys all the services, and runs your integration tests at the end of the deployment. If those pass, the release moves to the next stage (production), where a manual approver signs off on the release to production.
    1. So here’s where things get interesting. If you wanted to follow the same type of workflow Robin is currently using, you can have this all-encompassing release be triggered on a schedule. In his workflow, it looks like this trigger is scheduled once a night.
    2. That does limit your releases to at most one per 24 hours (or however many times you schedule your trigger; yes, you can also trigger more manually). This seems a bit restrictive: now the pipeline has a bottleneck based on the scheduled trigger. If you wanted to, you could instead have the release be triggered on changes to the build artifacts. Then any successful CI build for any individual repo triggers the release pipeline, which deploys to nightly/testing, runs the integration tests, and, if everything looks good, passes the release on to the production stage.
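The flow in step 4 (and the artifact-triggered variant in 4.2) can be sketched in Azure Pipelines YAML. This is a minimal sketch, not a drop-in file: the pipeline names (`service-a-ci`, `service-b-ci`), the deploy/test scripts, and the `production` environment name are all hypothetical placeholders you would swap for your own.

```yaml
# Hypothetical release pipeline consuming artifacts from each service's CI build.
# All names and scripts below are placeholders.
resources:
  pipelines:
    - pipeline: serviceA        # local alias for the artifact
      source: service-a-ci      # name of service A's CI pipeline
      trigger: true             # re-run this release whenever that CI build succeeds
    - pipeline: serviceB
      source: service-b-ci
      trigger: true
# For the nightly-only workflow, drop the resource triggers and use a
# schedules: cron trigger instead.

stages:
  - stage: NightlyTesting
    jobs:
      - job: DeployAndTest
        steps:
          - download: serviceA           # latest successful artifacts
          - download: serviceB
          - script: ./deploy.sh test     # deploy all services to the test environment
          - script: ./run-integration-tests.sh

  - stage: Production
    dependsOn: NightlyTesting            # only runs if integration tests passed
    jobs:
      - deployment: DeployProd
        environment: production          # put an approval check on this environment
        strategy:                        # for the manual sign-off
          runOnce:
            deploy:
              steps:
                - script: ./deploy.sh prod
```

The manual approval itself isn’t in the YAML; in Azure DevOps it’s configured as an approval check on the environment, so the `Production` stage waits for a human before deploying.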

Some more thoughts…

  • The need for super intense integration tests really arises from trying to make sure changes to one service don’t adversely affect other services. Integration tests are pretty brittle and take a long time to run. To mitigate this, you can write really good unit tests and give each service the ability to query for versions and adjust its behavior based on versioning.

    Now, the need for integration tests to shake out all of your integration bugs becomes less important (still important, just not as vital). This does mean you pay a code tax: your unit tests had better be good, and your services need the ability to query for versions and route code based on versioning. However, the benefit is a much faster and smoother pipeline.

  • Notice we are still deploying EVERYTHING all at once. Granted, if nothing has changed, we are deploying the exact same bits, so “nothing changes”. Is there a better way of doing things? Can we deploy just the services that changed?

    Assuming our unit tests and services are written well, the answer becomes yes. Changing a specific service kicks off a CI build that runs our extensive set of unit tests, which sends it through a pipeline that deploys our service to a test environment. Integration tests run against the system, and if everything looks good, the release is promoted to production, deploying just the service that changed.

    The only caveat is that your tests, services, and pipelines need to ensure the quality is good enough for the entire application. That means your microservices have to be written a certain way, and your unit tests and integration tests need to be written well. Often this isn’t the case, especially when dealing with legacy apps moving toward a microservice architecture, which means not all apps are quite ready to go all in like this.
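The “query for versions and route code based on versioning” idea above can be sketched in a few lines of Python. Everything here is hypothetical: the service name, version numbers, and payload shapes are illustrative, not part of any real API — the point is just that a caller checks the peer’s advertised version and picks a compatible code path, so one service can change without breaking its neighbors.

```python
# Minimal sketch of version-aware routing between services (hypothetical names).
# Each service exposes its API version; callers check it and choose a request
# shape, so a newer service keeps working against an older neighbor.

LOCAL_API_VERSION = (2, 0)

def get_version():
    """What a version endpoint on this hypothetical service would return."""
    return {"service": "orders", "api_version": LOCAL_API_VERSION}

def build_request(payload, peer_version):
    """Shape the outgoing request based on the peer's advertised API version."""
    if peer_version >= (2, 0):
        # v2 peers accept the richer payload.
        return {"schema": 2, "data": payload, "trace": True}
    # Fall back to the v1 shape for older peers.
    return {"schema": 1, "data": payload}

# A v1 peer still gets a request it understands:
req = build_request({"id": 42}, (1, 4))
assert req["schema"] == 1
```

This is the code tax in miniature: the fallback branch is extra code you must write and unit test, but it lets a single changed service roll out without waiting for every neighbor to upgrade in lockstep.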
