Infrastructure as Code using the Azure CLI (or The URList has pipelines!!!!)

Recently Burke Holland (@burkeholland) and Cecil Phillip (@cecilphillip) asked me to help devops their URList project (actually, I’m not so sure if they asked or if I just kept pestering them until they let me on the team…).

At first glance, I thought this was going to be simple. The URList only consists of a static web site written with Vue.js hosted in Azure Storage, a .NET Core backend hosted in an Azure Function, and Cosmos DB. And sitting in front of the app is Azure Front Door. On top of all that, Burke and Cecil already had a simple build and release pipeline in place. So I figured I’d go all in. Let’s do DevOps right! Let’s start with Infrastructure as Code. Let’s have a real-world pipeline where we can start from literally nothing and, with one click to initiate the pipeline, have everything created, deployed, and configured, including DNS. What could possibly go wrong?

Infrastructure as Code

I knew if I wanted a real one click pipeline that would do everything starting from nothing, I needed to use Infrastructure as Code (IaC). I needed “code” that would describe my infrastructure. And when used in my pipeline, it would provision and configure everything for my app. Since all of the infrastructure for the URList is in Azure, there were three approaches I could take.

  1. ARM Templates
  2. Terraform
  3. Scripts using the Azure CLI

ARM templates are awesome. True Infrastructure as Code. Completely idempotent. But man… they are a pain to author and even more painful to debug. For a project this simple, surely there is an easier way.

I felt that way about Terraform too. Terraform is super powerful, super cool. But it doesn’t always have providers for the latest and greatest big thing from Azure right away. In fact, in this project we use Azure Front Door, which Terraform (at this time) does not have a provider for. Plus… I would have to sit down and learn Terraform (on my to-do list, but something always seems to come up). And again, for a project this simple, I wanted something quick and easy (looking back… maybe I should have just used Terraform 🙂)

And finally, there’s using the Azure CLI. I’ve been hearing Donovan (@DonovanBrown) rave about using Azure CLI scripts as IaC for a while and I thought, hey, this is a perfect project to give this a go. I love using the Azure CLI. It’s super simple to use. And there is almost zero learning curve to creating a bunch of PowerShell scripts or bash scripts that provision and configure the environments I need in Azure. And of course, using Azure Pipelines, it’s a piece of cake to run those scripts in my pipelines.

After playing around with using the Azure CLI in scripts as my IaC, I came to these 3 conclusions:

  1. It’s intuitive and easy to script out the infrastructure you need
  2. It’s easy to debug provisioning problems (when compared to ARM templates)
  3. It’s extremely readable.

In fact, if you always completely tear down your infrastructure and re-deploy from scratch, the Azure CLI scripts work spectacularly. But where this method really craters is when you need to make incremental changes over time WITHOUT completely tearing down your infrastructure. Although some Azure CLI commands are idempotent (most creates are), not all commands are. This means making changes over time is difficult and messy. You basically start wrapping all of your commands with if statements, and pretty quickly your script turns into a colossal mess that is a maintenance nightmare!
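To make that concrete, here’s a sketch of where those wrapped commands lead. The resource names are hypothetical, and I’ve stubbed out `az` so the sketch runs without an Azure subscription (delete the stub function to run it for real):

```shell
#!/usr/bin/env bash
# Guarding non-idempotent Azure CLI calls with existence checks.
# Every resource needs its own conditional, and each incremental
# change over time adds another branch.

# Stub 'az' so this sketch runs offline; remove to hit real Azure.
az() {
  case "$1 $2" in
    "group exists") echo false ;;                     # pretend nothing exists yet
    "storage account")
      if [ "$3" = "show" ]; then return 1; else echo "RAN: az $*"; fi ;;
    *) echo "RAN: az $*" ;;
  esac
}

rg="myapp-rg"
storage="myappstorage"

# 'az group create' is itself idempotent, but you end up guarding it anyway
if [ "$(az group exists --name "$rg")" != "true" ]; then
  az group create --name "$rg" --location westus2
fi

# Create vs. update need separate branches once a setting changes over time
if ! az storage account show --name "$storage" --resource-group "$rg" > /dev/null 2>&1; then
  az storage account create --name "$storage" --resource-group "$rg" --sku Standard_LRS
else
  az storage account update --name "$storage" --resource-group "$rg" --sku Standard_GRS
fi
```

Two resources in and the script is already branching; a year of changes turns it into the colossal mess described above.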

I wasn’t ready to abandon this method just yet because man, it is SO SIMPLE to write IaC using bash or PowerShell with the Azure CLI! I figured there has got to be a way to make managing and writing Azure CLI based IaC simpler and cleaner. And so I came up with my Azure Infrastructure Versioning Framework (yeah I know, I REALLY suck at naming things).

Azure Infrastructure Versioning Framework

This whole problem of making incremental changes over time and keeping your infrastructure in sync with your code, without your code turning into a spaghettied mess of an if-then-else nightmare, totally reminded me of the problem we had with trying to version and store database changes in source control. I loved how Entity Framework Code First handled this problem with its migrations. EF would create a version table in your database that stated which version the database was at. Then, it would create Up and Down methods that would literally take you from version 1 to version 2, from version 2 to version 3, etc. And it would do this by calling the necessary Up methods in order until you reached the desired version (usually the latest). And if you were rolling back or downgrading, it would call the necessary Down methods in order until you got to the desired version.

This seemed like a super simple way to handle the IaC with Azure CLI problem. I would need a table to hold the version of the currently deployed infrastructure, and some APIs that would return the current version of the infrastructure as well as increment or decrement the current version in the table. Then, my IaC code would consist of a bash or PowerShell script with a 1_Up function, 2_Up function, 3_Up function, etc.

1_Up would be the code that provisioned and configured my infrastructure for what it would look like at the beginning of time. If I needed to change my infrastructure, I would just create a 2_Up function and add the code in there. Any subsequent changes, just rinse and repeat. 3_Up function, 4_Up function etc. And my infrastructure framework would be smart enough to figure out what version the current infrastructure was at and then call the necessary Up functions in order to bring the deployed infrastructure up to the latest version while also updating the version table.
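The dispatch logic described above can be sketched in a few lines of bash. The function and variable names here are mine for illustration, not necessarily what the framework actually uses, and the Up bodies are placeholders:

```shell
#!/usr/bin/env bash
# Sketch of the Up-function dispatch: find the highest N_Up defined in
# this script, then run every Up step past the deployed version, in order.

1_Up() { echo "1_Up: provision initial infrastructure" >&2; }
2_Up() { echo "2_Up: add a health probe" >&2; }
3_Up() { echo "3_Up: tweak routing rules" >&2; }

run_up_functions() {
  local current=$1   # version currently deployed (read from the version table)
  local latest=0
  # Discover the highest N_Up function defined in this script
  while declare -F "$((latest + 1))_Up" > /dev/null; do
    latest=$((latest + 1))
  done
  # Run every Up step that hasn't been applied yet, in order
  local v
  for ((v = current + 1; v <= latest; v++)); do
    "${v}_Up"
  done
  echo "$latest"     # new version to write back to the table
}

# Deployed infrastructure is at version 1, so only 2_Up and 3_Up run
new_version=$(run_up_functions 1)
```

The real framework reads `current` from the version table and writes `new_version` back when it’s done; that bookkeeping is what keeps the script free of conditionals.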

Azure Infrastructure Versioning Framework… The Details

My main goal was to make this framework as simple and seamless as possible for IaC authors to use. And since this is all for Azure, it made sense to use Azure resources to keep track of everything. It also made sense to use the resources that cost the least, because let’s face it, not everyone has a Microsoft Azure subscription they don’t pay for. And if you have to pay for your own subscription, you’re gonna want to save money, right?

Therefore, I went with the cheapest solution I could think of. To store the versioning data, I used an Azure Storage Table. And the API to return the current version, as well as increment and decrement the version table, is hosted in an Azure Function.

The table structure is simple.


Azure Storage Tables need to have a table name, and the data within the table must have a partition key and a row key. The name of the table can totally be set by the user/pipeline. The partition key will hold the infrastructure environment (or environments). The row key will be the name of the IaC file. And the Version column will, of course, hold the current version deployed.
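Putting that together, a populated version table might look something like this (the table name, filenames, and values here are illustrative):

```
Table: theurlistversiontable

PartitionKey | RowKey                 | Version
-------------|------------------------|--------
beta         | provisionFrontDoor.sh  | 2
prod         | provisionFrontDoor.sh  | 1
```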

At this moment, my API only consists of two function methods:

  • InfraVersionRetriever – This (surprise surprise!) returns the current version of your infrastructure. This function expects 3 parameters:
    • tablename – the name of the Azure Table
    • stage – the infrastructure environment
    • infraname – the filename of the IaC file
  • InfraVersionUpdater – This (again, surprise surprise!) updates the current version, incrementing it by 1. This function expects the same 3 parameters:
    • tablename – the name of the Azure Table
    • stage – the infrastructure environment
    • infraname – the filename of the IaC file
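As a sketch, calling these two functions from bash might look like the following. Only the function names and parameter names come from the list above; the `/api/<FunctionName>` route is the Azure Functions default, the host name is an example, and `curl` is stubbed so the sketch runs offline (delete the stub to hit a real deployment):

```shell
#!/usr/bin/env bash
# Hypothetical wrappers around the two versioning API functions.

INFRA_TOOLS_FUNCTION="myinfratools"   # example function app name

# Stub curl so the sketch runs offline; pretend the deployed version is 1.
curl() { echo "1"; }

get_infra_version() {   # args: tablename stage infraname
  curl -s "https://${INFRA_TOOLS_FUNCTION}.azurewebsites.net/api/InfraVersionRetriever?tablename=$1&stage=$2&infraname=$3"
}

bump_infra_version() {  # args: tablename stage infraname
  curl -s "https://${INFRA_TOOLS_FUNCTION}.azurewebsites.net/api/InfraVersionUpdater?tablename=$1&stage=$2&infraname=$3"
}

get_infra_version theurlistversiontable beta provisionFrontDoor.sh
```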

I really didn’t want an IaC author to care about any of the implementation details. I definitely didn’t want them to be calling individual APIs and trying to figure out which Up function to call. So I created a PowerShell module named VersionInfrastructure that does all this for them. All they need to do is add this module at the end of their IaC script and call the exposed function Update-InfrastructureVersion. And for those that like using Bash, at the end of their IaC file, all they need to do is include the framework file. And then? Bam! The magic just happens. The framework determines which Up methods need to be run, runs them, and then updates the version table.

Now, all of this (my Infra Tools) needs to be built and deployed somewhere so it can be consumed by other pipelines. I created a public GitHub repo that houses the source code for my Infra Tools project. The Infra Tools project consists of helpful tools that can be used in Azure Pipelines, including the Versioning Framework. In the repo, there is also an azure-pipelines.yml file. This is the build definition, which will build and bundle up the Infra Tools so they’re ready to be deployed. Part of the bundling process includes the IaC files for Infra Tools. They are /IaC/provisionInfraTools.bash and /IaC/provisionInfraTools.ps1. This way, you can deploy the infrastructure for Infra Tools using either bash or PowerShell.

I’ve also included an azure-pipelines-full.yml. This is a unified build/release pipeline that will build and then deploy the Infra Tools out to Azure! If you look at the azure-pipelines-full.yml file and go to line 36, you’ll notice a bunch of variables that you will need to set.


These settings determine the name of the function hosting the APIs, the name of the storage account hosting the tables, the region, language, and SKU, and the resource group to deploy and host Infra Tools in.

PowerShell Example

Here’s a concrete example of using this framework for IaC using the Azure CLI. First, build and deploy the Infra Tools (using the unified pipeline works great). Now, an IaC author just needs to create the IaC code by wrapping their code in a 1_Up function, and then, at the end, install the magic module VersionInfrastructure and add the function call Update-InfrastructureVersion. Here is my IaC that I used to provision and configure Front Door:


Lines 4-49 are my parameter list, where I pass in all the parameters that I need. Lines 51-72 are where I log into my Azure subscription using a service principal. Lines 77-110 are my 1_Up function. This function provisions and configures my Front Door instance. Line 112 is how I install my “magic” module from the PowerShell Gallery, and lines 113-116 are the exposed module function call that does all the infrastructure versioning magic. In a nutshell, all an IaC author needs to do is log in, write their 1_Up function, and then add lines 112-116, pass in some variables to the function and bam! Done!

If, some months down the line, I needed to add a new health probe to my Front Door, all I would need to do is add a 2_Up like this

(I collapsed down the parameter, login and 1_Up section):


And now, when this script is run in my pipeline, that “magical” module function at the end will detect that the currently deployed Front Door version is 1 and the latest version in the IaC file is 2, so it will only run the 2_Up function and will then update the version table to 2.


Ok, so all of that kinda makes sense, right? But what about those 3 parameters I’m passing in to Update-InfrastructureVersion (lines 136-137)? What’s up with that?

In order for this VersionInfrastructure PowerShell module to work correctly, you have to pass in three parameters:

  • infraToolsFunctionName – this is the name of the azure function hosting the infra tools versioning framework api’s
  • infraToolsTableName – this is the name of the table holding your versioning info. The user can pick whatever name makes sense. I usually name this after the app I’m deploying
  • deploymentStage – The deployment environment name.

Ok, that’s all well and fine, but why do those parameter values look so wonky? As it turns out, if you declare a variable in an Azure Pipelines build or release, it automatically gets added to your build/release agent’s environment variables. Cool, right? So I just made sure I declared the following variables in my pipeline

  • IaC.Exclusive.InfraToolsFunctionName – This holds the name of the Azure Function hosting the infra tools API
  • IaC.InfraTableName – This is the name of the versioning table used. For TheUrlist, I set this variable to theurlistversiontable
  • IaC.DeploymentStage – This is the name of the environment. For TheUrlist, I had a beta stage and a prod stage

When Azure Pipelines adds your variables to the agent’s environment variables, it does something interesting. Your variable names get fully capitalized, and any dots turn into underscores. So an Azure Pipelines variable named IaC.Exclusive.InfraToolsFunctionName turns into the environment variable IAC_EXCLUSIVE_INFRATOOLSFUNCTIONNAME.
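That mapping is easy to reproduce in a couple of lines of bash if you ever need to compute the environment variable name yourself (the helper function here is mine, not part of Azure Pipelines):

```shell
#!/usr/bin/env bash
# Reproduce the Azure Pipelines variable-name mapping:
# dots become underscores, and the whole name is uppercased.

to_env_name() {
  local name=${1//./_}        # replace every dot with an underscore
  printf '%s\n' "${name^^}"   # uppercase (bash 4+ case modification)
}

to_env_name "IaC.Exclusive.InfraToolsFunctionName"
# → IAC_EXCLUSIVE_INFRATOOLSFUNCTIONNAME
```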

Of course, if you don’t want to use environment variables, you can easily just pass in these values through the script’s parameter list.

Bash Version

Some of you might be rolling your eyes at my examples thinking uggghhh PowerShell. I wish I could do all this in bash! Never fear. For those PowerShell haters or bash lovers, I created a bash version as well.

I’m not that familiar with bash, so I actually could use some help. I don’t know if bash has a concept like PowerShell modules or something like the PowerShell Gallery. So I created a way to do this, but it’s not quite as slick as the PowerShell way. Maybe one of you all can send me a Pull Request that can slick this up.

Anyway, for bash, I added all the “magic” code in a file in my Infra Tools repo at /InfrastructureVersionVrameworkScript/. And now, all an IaC author needs to do is basically the same thing as in the PowerShell version. Write login code if necessary, create your 1_Up() function, and add your provisioning and configuration code in there. And at the end, make sure you source this file. The easiest way to do that is to copy this file and put it right next to your bash IaC file.
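A minimal skeleton of a bash IaC file under this framework might look like the following. The az calls are placeholder comments, and the sourced filename is hypothetical (use whatever the framework script in the Infra Tools repo is actually named):

```shell
#!/usr/bin/env bash
# Skeleton of a bash IaC file using the versioning framework.

1_Up() {
  # initial provisioning goes here, e.g.:
  # az group create --name "$resourceGroupName" --location "$region"
  echo "1_Up ran"
}

2_Up() {
  # a later incremental change, e.g. adding a health probe
  echo "2_Up ran"
}

# Source the framework script (copied next to this file). It reads the
# current version from the table, runs the missing N_Up functions in
# order, and updates the version table. Filename is hypothetical:
# source ./versionFramework.sh
```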

And now, provisioning and configuring Front Door looks like this:


Line 38 does the magic version checking and updating. And it needs these 3 environment variables defined (define them in your pipeline variables):

  • IaC.Exclusive.InfraToolsFunctionName – This holds the name of the Azure Function hosting the infra tools API
  • IaC.InfraTableName – This is the name of the versioning table used. For TheUrlist, I set this variable to theurlistversiontable
  • IaC.DeploymentStage – This is the name of the environment. For TheUrlist, I had a beta stage and a prod stage

And yes, the variable names have to be EXACTLY like that, since my framework script has those environment variable names hard coded.

This Versioning Framework looks kinda interesting. What are the pros and cons of using this framework with IaC written using the Azure CLI and scripts?

Using this framework sparks joy in me. It actually works really well and does 4 things for me:

  1. Makes my IaC with the Azure CLI truly idempotent
  2. My IaC with the Azure CLI becomes super easy to write (no crazy conditionals to keep track of)
  3. Maintaining my IaC code is easy.
  4. My IaC code is SUPER readable and easy to understand.

However, there is one downside. Writing your IaC like this does not protect you from drift. If someone were to go into the Azure portal and change things up, this framework won’t detect the changes and fix them. You know what totally does do this? ARM templates.

This isn’t totally the end of the world, though. If I was really concerned about drift, I could automatically lock down the resource group so that changes could only be made by my service principal. This hasn’t been implemented in my Infra Tools, but maybe sometime in the future I’ll add it.

Back To The Urlist. Let’s Build Out The Release Pipeline!

Since writing IaC using the Azure CLI and this framework sparks joy in me, I decided to build out my release pipeline using both PowerShell scripts and bash scripts. PowerShell is super cool because it runs right out of the box on our Mac, Linux, and Windows agents. I also implemented everything using bash for those that just absolutely don’t want to touch PowerShell. But be forewarned. My bash skills are in direct relation to my googling skills.

First I needed to decide what my pipeline would look like. I could create one giant pipeline where I deploy all my infrastructure first, and then deploy my code into the infrastructure for both my beta and prod stages. Or I could create a separate pipeline for my infrastructure and another pipeline for deploying my code. Finally, I could create an infrastructure pipeline for just the infrastructure that is shared across multiple apps, and then the rest of the infrastructure and code deployment in another pipeline.

There are pros and cons to each of these methods. However, I really wanted to have EVERYTHING all in one pipeline. If I change one little thing in my infrastructure, I still want it to go through my entire pipeline, including deploying my code into this “new” infrastructure. This lets me verify that my code deployments still work and that my code still works in the new infrastructure. I also wanted this app to be entirely self-contained, with no dependencies on anything else. This meant that instead of deploying my shared resources (for the URList, that would only be my Infra Tools) in a separate resource group, I was just going to deploy everything into its own resource group.

So that basically translates into one giant pipeline that deploys everything, from my infra tools to the infrastructure, all the way to deploying the code.

Here is the release pipeline:

The release pipeline deploys The Urlist out to two environments: a beta environment and then a prod environment. I’ll work my way from left to right and explain everything in this pipeline.

For our build artifacts, I’m pulling in the artifacts from

  • AbelIaCBuild – This “build” just copies all of my IaC files for The Urlist project.
  • _infTools-CI – This build compiles and packages up the infra tools. This build is literally the azure-pipelines.yml build.
  • _serverless Backend Build – This build compiles and packages up the backend code for The Urlist
  • _Frontend Build – This build compiles down the front end website to static files

Next, my very first deployment stage is InfraTools.

Here, I deploy the infrastructure for my infra tools (azure storage and an azure function) and also deploy the function code to the function. Wait!!!! Why am I deploying infra tools in the same pipeline and in the same resource group? You don’t have to. You can actually have one instance of infra tools running somewhere and just pass in the correct variables (see above) and everything will work. However, I didn’t want any dependencies on any other resource groups. I wanted to have my entire app completely self contained, including these tools. So the very first thing that I do is provision and deploy infra tools into my resource group.

Once that is done, I provision and configure the infrastructure for the Azure Storage account that holds my static web site, the Azure Function, and also my instance of Cosmos DB. In parallel, I also provision my DNS settings in Cloudflare.

As you can see, there’s nothing magical here. I just use an Azure CLI task to deploy my infrastructure using my IaC bash scripts.

Before I go on and deploy and configure my Front Door instance, I have to wait until DNS propagates. There are some settings in Front Door that cannot be set until DNS has propagated. So to handle that, at the end of the infr-dns-beta stage, I configure a post-deployment gate.

This gate polls the DNS checker in my infra tools to see if DNS has propagated. If it has, the gate passes. And if it hasn’t, the gate fails and tries again in 5 minutes.

To configure this gate, I just use the out-of-the-box Azure Function gate. I then point the URL to https://$(IaC.Exclusive.infraToolsFunctionName)

$(IaC.Exclusive.infraToolsFunctionName) is the variable used to describe what the infra tools function name will be. This variable is used by both the IaC which provisions and deploys the infra tools and by this gate.

Next, I add the parameters needed by the DNS checker to the URL parameter string. I could have added these in the POST body as well, but for whatever reason, I just stuck them in the URL parameter list. In the Query parameters text box, I enter this:


And again, I use release variables to hold the DNS name and the alias CNAME. Finally, I configure the success criteria with

eq(root, true)

What this means is that if my function call returns true, the gate passes. And if it returns anything that’s not true, the gate fails.

Using this gate ensures that I don’t go on to deploy and configure my Front Door instance until after DNS has propagated. Cool, right?

Next, I provision and configure my Azure Front Door instance and then deploy my app to the front end (Azure Storage) and the back end (Azure Function). I then have a manual approval gate, and then rinse and repeat the process to deploy The Urlist to production.

If you want to see what my IaC looks like for The Urlist, it’s all here in this repo:


It has all my IaC files as bash scripts and PowerShell scripts. Front Door has an ARM template. This is because there are some things you just can’t set on Front Door with the Azure CLI. And conversely, there are some things you can’t set on Front Door using just ARM templates. And finally, notice the file? Yup, that needs to be right next to my other IaC bash scripts. If I wrote all my IaC as PowerShell scripts, I would not need this file, since the framework code is a module in the PowerShell Gallery.

So there you go. A full release pipeline including IaC written using the Azure CLI. Using my framework, these Azure CLI scripts are easy to author, easy to read, easy to debug, and truly idempotent!


This turned into a slightly more involved project than I first thought when I started. But I’m glad I went through this exercise. Like Donovan, I’m a huge fan of using the Azure CLI for Infrastructure as Code. It’s so easy to write and so easy to read. And now, with this “Azure Infrastructure Versioning Framework” (or whatever I called it), not only is it easy to read and write, it is now easy to make changes to your infrastructure over time in a clean and concise manner.

Regardless, no matter which direction you go, IaC is a VITAL part of DevOps. Having this pipeline in place means we can easily spin up instances of The Urlist: beta environments, testing environments, private environments, as well as our prod environment. And the great thing is, we can spin up new instances with just the push of a button!


