Building Lego with Terraform on Azure – Part One

So this blog post (Part One of Two) actually has nothing to do with Lego, I just put it there for click-bait, but it does have something to do with the analogy. Working only with cloud-related projects these days, much of the work has shifted from hands-on configuration of storage, networks and virtualization to architecture and infrastructure as code (IaC), or anything as code really, whether it be platform services or Infrastructure as a Service.

Now as with Lego we are building something piece by piece, and that is essentially what we are doing with IaC: building the platform or the underlying infrastructure components piece by piece.


Building Infrastructure in Azure

When it comes to building infrastructure in Microsoft Azure, there are many ways to perform the same operation, either using built-in tools from Microsoft such as PowerShell, REST or ARM deployments, or using third-party tools such as Ansible, Puppet or Terraform (most of these tools use either the REST API or the SDKs to do deployments).

In Azure you have an underlying automation layer called Azure Resource Manager, which is essentially the gateway between what we are doing, either from the portal or from CLI commands, and the platform itself. Resource Manager integrates with the different resource providers to issue commands such as "create resource X on the storage resource provider". So regardless of which tool you use to perform a task against Azure, you need to go through Azure Resource Manager.

Overview of Azure Resource Manager API and the different providers

Now Azure Resource Manager is bound to a schema, which means that every command or API call sent to it is validated against that predefined schema. You can see more about the schemas here –> https://github.com/Azure/azure-resource-manager-schemas/tree/master/schemas, and note that different resource providers have different API versions.

All API calls are based upon JSON, which means that Resource Manager deployments are based upon JSON code as well. JSON is… not always that easy to work with: it has a strict format, and often you will need a JSON validator such as https://jsonlint.com/ or extensions within your code editor to help with that part.

An ARM template is split into different elements, where we can define parameters and variables to customize and generalize deployments, but also the resources themselves, where we specify which resource providers and API versions to use. When Microsoft releases new services that we want to use as part of an ARM template, we need to define the matching API version.

{
    "$schema": "https://schema.management.azure.com/..json#",
    "contentVersion": "1.0.0.0",
    "parameters": {},
    "variables": {},
    "resources": [{
            "type": "Microsoft.Resources/resourceGroups",
            "apiVersion": "2018-05-01",
            "location": "West Europe",
            "name": "demo-storage",
            "properties": {}
        },
        {
            "type": "Microsoft.Storage/storageAccounts",
            "name": "demo-storage",
            "apiVersion": "2018-02-01",
            "location": "West Europe",
            "sku": {
                "name": "Standard_LRS"
            },
            "kind": "Storage",
            "properties": {}
        }
    ]
}
The challenge with Azure Resource Manager

One of the issues I find with ARM is human readability. I'm a bad coder, and ARM hurts my eyes when it gets into large, complex, multi-nested templates. Another issue is that the format and the way resources are defined is limited to Azure, so if you are working with other cloud providers such as AWS (CloudFormation) or Google Cloud (Deployment Manager), you will need to understand how they process templates and how to author them as well.


When authoring a template we need to remember which API version to use for each resource we define, and the template needs to be valid JSON code. Of course, since ARM is the native deployment format in Azure it supports all of the features/services in Azure, but that also means it is limited to that cloud provider and only aimed at deploying Azure-based resources. If we need to deploy something on the application layer on top, for instance in an IaaS-based deployment, we need to shift to something else.

Now ARM has also evolved: previously it was bound to doing deployments into a predefined resource group, but now we can also create resource groups as part of an ARM template. When deploying an ARM template, the deployment and the state of the deployed resources are maintained as part of Microsoft Azure.

ARM also now has its own Terraform provider, which has been in private preview for a while, but I'll come back to that a bit later (https://azure.microsoft.com/en-us/blog/introducing-the-azure-terraform-resource-provider/).

Where does Terraform fit in?

Terraform is an open-source multi-cloud infrastructure as code tool from HashiCorp. It is not limited to just infrastructure; it can also handle application platforms such as Kubernetes. The intention is that if there is a platform with an API available, Terraform can be used to deploy against it.

From a Microsoft perspective, Terraform has integrations with Azure, Azure AD and even Azure Stack. These types of integrations are called providers, and Terraform has a lot of them. This allows us to have one tool that can "plug in" to multiple cloud platforms and application platforms. Now, regardless of Terraform supporting multiple cloud vendors, you still need to understand what each cloud vendor supports and how to configure services/resources properly.
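To illustrate the provider concept, here is a minimal sketch of a single configuration pulling in two providers side by side. The version numbers and resource names are just illustrative assumptions, pin whatever is current for you:

# Sketch only: two providers living in the same configuration.
provider "azurerm" {
    version = "=1.21.0"
}

provider "azuread" {
    version = "=0.1.0"
}

# One resource from each provider, managed by the same tool.
resource "azurerm_resource_group" "example" {
    name     = "example-rg"
    location = "West Europe"
}

resource "azuread_application" "example" {
    name = "example-app"
}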


HashiCorp, which develops Terraform, also has an entire stack of tooling to deploy, secure and run applications on any cloud platform. There is Packer, which is used to provision machine images and which the Azure VM Image Builder is based on; Vagrant, an automation tool to build development environments; Consul, a service mesh and distributed key/value store; and Vault, which is used as a secrets engine. But of course Terraform plays an important part in this picture.


When authoring code in Terraform you use a syntax called HCL (HashiCorp Configuration Language), which is meant to strike a balance between being human readable and editable and being machine-friendly. For machine-friendliness, Terraform can also read JSON-based configurations. Here is an example of the HCL syntax:

resource "azurerm_resource_group" "testrg" {

    name = "resourceGroupName"

    location = "westus"

}

resource "azurerm_storage_account" "testsa" {

    name = "storageaccountname"

    resource_group_name = "testrg"

    location = "westus"

    account_tier = "Standard"

    account_replication_type = "GRS"

}

Now Terraform can be used in multiple ways. You can download the executable locally, use Cloud Shell in Microsoft Azure, or run it from a Docker container:

docker run -i -t hashicorp/terraform:light 

To run it locally on your machine (such as mine, where I'm running Windows 10), you can download the latest version using a package manager such as Chocolatey. These are a set of commands to install Visual Studio Code, Terraform (with some extensions) and Git on Windows:

Set-ExecutionPolicy Bypass -Scope Process -Force
iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))

choco install vscode
choco install git
choco install terraform

Install-Module Az  # Just in case...

code --install-extension msazurermtools.azurerm-vscode-tools
code --install-extension mauve.terraform

As a last step you need to fix your system PATH so that you can start Terraform directly from the CLI or PowerShell. The Terraform executable has five main commands that you will use:

  • terraform init – initializes the working directory
  • terraform plan – pre-flight validation
  • terraform apply – deploys and updates resources
  • terraform destroy – removes all resources defined in a configuration
  • terraform refresh – updates the state file according to the real configuration

It is important to note that to use Terraform against Azure we need a way to authenticate with a method that gives us access to the subscription, which is mostly done using one of the following (a sketch of the MSI variant follows the list):

  • Service Principal (Using ClientID, ClientSecret, TenantID and SubscriptionID)
  • MSI (Managed Service Identity)
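For the MSI option, a minimal sketch of the provider block could look like this (assuming the configuration runs on an Azure resource that has a managed identity assigned; the subscription and tenant IDs are supplied as variables):

# Sketch only: authenticating the azurerm provider with a
# Managed Service Identity instead of a service principal.
provider "azurerm" {
    use_msi         = true
    subscription_id = "${var.subscription_id}"
    tenant_id       = "${var.tenant_id}"
}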

To generate a service principal to use against Azure, I tend to use the following command:

New-AzADServicePrincipal -Role Contributor `
-Scope /subscriptions/subscriptionid -DisplayName something

Terraform init is used to initialize a working directory containing Terraform configuration files; it will also check which providers are defined in the configuration files and download them if they are not present.

Terraform processing and logic

When authoring Terraform configuration files, all files must end with a .tf extension. You can choose to have all configuration within a single file, or you can split the configuration into different files, for instance separate files for network, storage, app1, app2 and so on. When you run terraform apply it will by default process all *.tf files contained within that directory, as the sketch below shows.
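For example, a hypothetical network.tf next to main.tf could reference a resource group declared in the other file, since Terraform treats all .tf files in the directory as one configuration (the resource names here are placeholders):

# network.tf -- sketch only, lives next to main.tf in the same directory.
# It can reference resources declared in main.tf directly.
resource "azurerm_virtual_network" "vnet" {
    name                = "demo-vnet"
    address_space       = ["10.0.0.0/16"]
    location            = "${azurerm_resource_group.main.location}"
    resource_group_name = "${azurerm_resource_group.main.name}"
}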

You can also have other .tf files such as modules (coming back to that in a later post), which can be stored in another directory or fetched from a remote source.

If we create a main.tf which contains the following, we define the azurerm provider (which will be downloaded if not present) and then the resources that should be provisioned. All the resources are defined according to the documentation here –> https://www.terraform.io/docs/providers/azurerm/

Here we can also specify the different pieces that are needed for authenticating. Those can of course be "hardcoded", but that is not beneficial; the better approach is to declare them as variables and move the actual values to a tfvars file, or to fetch that information from a vault or secrets engine.

provider"azurerm"{

version="=1.21.0"

subscription_id="${var.subscription_id}"

client_id="${var.client_id}"

client_secret="${var.client_secret}"

tenant_id="${var.tenant_id}"

}

# Create a resource group with location and name

resource "azurerm_resource_group" "NICCON2019" {

name="NICCONF3"

location="${var.location}"

}

resource "azurerm_resource_group" "tfstate" {

name="TFSTATE"

location="WestEurope"

}

resource "azurerm_storage_account" "testsa" {

name="tfstatenic2019"

resource_group_name="${azurerm_resource_group.tfstate.name}"

location="WestEurope"

account_tier="Standard"

account_replication_type="LRS"

tags {

environment="tfstate"

}

}

All the different variables need to be declared beforehand, either in a separate variables.tf file or as part of the same configuration file. It is important to always keep in the back of your mind how you can generalize your configuration file to make it easy to reuse in other scenarios.
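As a minimal sketch (the variable names match the provider block above, and the values shown for the tfvars file are placeholders), variables.tf and terraform.tfvars could look like this:

# variables.tf -- declares the variables referenced in main.tf
variable "subscription_id" {}
variable "client_id" {}
variable "client_secret" {}
variable "tenant_id" {}

variable "location" {
    default = "West Europe"
}

# terraform.tfvars -- holds the actual values. Keep this file out of
# source control, since it contains credentials:
#
# subscription_id = "00000000-0000-0000-0000-000000000000"
# client_id       = "00000000-0000-0000-0000-000000000000"
# client_secret   = "..."
# tenant_id       = "00000000-0000-0000-0000-000000000000"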

Using the configuration that we made above, running terraform plan will essentially go through the configuration, ensure that the syntax is correct, and create a list of the resources that will actually be deployed by Terraform.

Now when we run terraform apply it will start to apply the configuration that has been specified within all .tf files in the current directory. As part of this action it will also create a terraform.tfstate file, which contains a copy of the configuration that has been provisioned. Since the state file can contain sensitive information such as storage account access keys or other usernames/passwords, it is recommended to store it in a remote location, also known as a backend.

With Terraform the state file can be pushed to a number of different backends, such as AWS S3, Azure Blob Storage, GCS or Consul for instance, but I will come back to that in part two.
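Just as a taste of what part two covers, a minimal sketch of pointing the state file at Azure Blob Storage could look like this (the storage account and container names are placeholders, and the storage access key is typically supplied separately, for instance at terraform init time):

# Sketch only: keep the state file in an Azure storage account
# instead of on the local disk.
terraform {
    backend "azurerm" {
        storage_account_name = "tfstatenic2019"
        container_name       = "tfstate"
        key                  = "prod.terraform.tfstate"
    }
}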

Where does the Terraform Provider for ARM fit in?

Initially I mentioned that one of the shortcomings of ARM is that it is limited to Azure services and resources. One way for Microsoft to fix this was to add Terraform to the equation, where they basically have an extension in ARM to run Terraform code and can therefore plug into the ecosystem of providers that Terraform currently has.


So basically this allows us to use ARM for everything (once it becomes available), but then we are again stuck with authoring JSON code. It is important to note that this feature is currently in private preview in Azure and is limited to three Terraform providers: Cloudflare, Datadog and Kubernetes. Hopefully this feature will evolve, allowing people already invested in ARM to extend their capabilities to other cloud platforms.

 
