Deployment of Kubernetes, Helm and YAML files using Terraform

One of the great things about Terraform is the wealth of support for different providers and platforms. For instance, there is support for the major cloud providers, SaaS services like Cloudflare, and virtualization layers such as VMware.

So, when I’m setting up a Kubernetes environment on a cloud provider such as Azure, I can use the Azure provider to set up the Kubernetes cluster in a virtual network and deploy related Azure components such as networking and other integration points.

Terraform also has providers for provisioning resources inside the Kubernetes cluster itself, such as the Kubernetes provider.

So, let’s say that I want to deploy some changes to the Kubernetes cluster, such as:

* Setting up a namespace
* Setting up service accounts and secrets
* Setting up config maps
* Deploying a custom YAML definition file
* Deploying a Helm chart (containing an application)

The first three items can be done using the Kubernetes Terraform provider, which can be found in the Terraform Registry (hashicorp/kubernetes). A small sketch of what that looks like is shown below.
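
Here is a minimal, hedged sketch of a service account, a secret, and a config map created with the Kubernetes provider; the resource names, the demo namespace, and the sample data are placeholders of mine, not from the original setup (the provider configuration itself is shown further down).

# Example service account in a placeholder namespace
resource "kubernetes_service_account" "demo" {
  metadata {
    name      = "demo-sa"
    namespace = "demo"
  }
}

# Example secret; the provider base64-encodes the data values for you
resource "kubernetes_secret" "demo" {
  metadata {
    name      = "demo-secret"
    namespace = "demo"
  }
  data = {
    password = "changeme"
  }
}

# Example config map with a single sample key
resource "kubernetes_config_map" "demo" {
  metadata {
    name      = "demo-config"
    namespace = "demo"
  }
  data = {
    "app.properties" = "environment = test"
  }
}
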
For many Kubernetes admins, we often want to install some custom extension or service into the cluster, which typically comes either as YAML manifests (which we tend to install using kubectl apply) or as prepared Helm charts.

The problem with the built-in Terraform provider for Kubernetes is that it does not support Helm charts and custom YAML files natively. Hence, we also have the Helm provider and the Kubectl provider, which allow me to install Helm charts and apply raw YAML manifests directly from Terraform.

Let me showcase how this can be used.

1: Define the providers you need to deploy custom resources with the Kubernetes, Helm, and Kubectl providers against a Kubernetes environment. In my scenario this is on Azure.
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=2.94.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.0.3"
    }
    helm = {
      source  = "hashicorp/helm"
      version = ">= 2.1.0"
    }
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = ">= 1.14.0"
    }
  }
}
2: Next, we need to define the credentials that each of these providers needs to authenticate to the Kubernetes API. Fortunately, each of them can read the credentials directly from the Azure Kubernetes Service cluster resource.
provider "kubernetes" {
  host                   = azurerm_kubernetes_cluster.example.kube_config.0.host
  client_certificate     = base64decode(azurerm_kubernetes_cluster.example.kube_config.0.client_certificate)
  client_key             = base64decode(azurerm_kubernetes_cluster.example.kube_config.0.client_key)
  cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.example.kube_config.0.cluster_ca_certificate)
}
provider "helm" {
  kubernetes {
    host                   = azurerm_kubernetes_cluster.example.kube_config.0.host
    client_certificate     = base64decode(azurerm_kubernetes_cluster.example.kube_config.0.client_certificate)
    client_key             = base64decode(azurerm_kubernetes_cluster.example.kube_config.0.client_key)
    cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.example.kube_config.0.cluster_ca_certificate)
  }
}
provider "kubectl" {
  host                   = azurerm_kubernetes_cluster.example.kube_config.0.host
  client_certificate     = base64decode(azurerm_kubernetes_cluster.example.kube_config.0.client_certificate)
  client_key             = base64decode(azurerm_kubernetes_cluster.example.kube_config.0.client_key)
  cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.example.kube_config.0.cluster_ca_certificate)
  load_config_file       = false
}
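
For reference, the azurerm_kubernetes_cluster.example resource that these provider blocks read from could look roughly like the following minimal sketch; the resource group, names, node size, and identity settings are placeholders and not part of the original configuration.

# Placeholder resource group for the cluster
resource "azurerm_resource_group" "example" {
  name     = "rg-aks-demo"
  location = "westeurope"
}

# Minimal AKS cluster; its kube_config attribute is consumed by the providers above
resource "azurerm_kubernetes_cluster" "example" {
  name                = "aks-demo"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  dns_prefix          = "aksdemo"

  default_node_pool {
    name       = "default"
    node_count = 1
    vm_size    = "Standard_D2s_v3"
  }

  identity {
    type = "SystemAssigned"
  }
}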

This means that we now have a connection to the Kubernetes cluster via the different providers and can begin to create resources within the cluster, including services and different templates.

So, let me show some examples of how to use this. Just remember that when you are deploying Helm charts, you may need to run helm repo add and helm repo update first for the Terraform configuration to deploy successfully, since the machine running Terraform needs up-to-date information about where to fetch the chart packages from.

Installing a Helm chart using the Helm provider in Terraform can be done like this.

resource "helm_release" "kasten" {
  name              = "kasten"
  chart             = "kasten/k10"
  namespace         = kubernetes_namespace.kasten.metadata[0].name
  skip_crds         = true
  dependency_update = true
set {
    name  = "externalGateway.create"
    value = "true"
  }
  set {
    name  = "auth.tokenAuth.enabled"
    value = "true"
  }

This allows me to install the Helm chart and define configuration values for it. It also references the kasten namespace, which is created with the Kubernetes provider.
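
Besides individual set blocks, the Helm provider can also take entire values files through the values argument. Here is a minimal sketch as an alternative to the set blocks above (not in addition to them), assuming a local k10-values.yaml file exists next to the configuration; the file name and resource label are placeholders.

resource "helm_release" "kasten_with_values" {
  name              = "kasten"
  chart             = "kasten/k10"
  namespace         = kubernetes_namespace.kasten.metadata[0].name
  dependency_update = true

  # values takes a list of YAML strings, typically read from local files
  values = [
    file("k10-values.yaml")
  ]
}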

Kubernetes Namespace creation

resource "kubernetes_namespace" "kasten" {
  metadata {
    name = "kasten"
  }
  depends_on = [
    azurerm_kubernetes_cluster.example,
  ]
}
I’ve also added a depends_on here to ensure that the cluster is finished before Terraform tries to provision the namespace.

Sometimes we have YAML files that we would rather just run kubectl apply on, instead of having all the YAML content rewritten as Terraform syntax, which just clutters things up. This is where the Kubectl provider comes in handy. With this provider, you define the YAML file as a data source and then apply each manifest document in it.
data "kubectl_file_documents" "kubesphereinstall" {
  content = file("kubesphere-installer.yaml")
}

resource "kubectl_manifest" "kubesphereinstall" {
  for_each  = data.kubectl_file_documents.kubesphereinstall.manifests
  yaml_body = each.value
  depends_on = [
    kubectl_manifest.certmgr
  ]
}
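
For a single small manifest you can also inline the YAML directly into the resource instead of reading it from a file. A minimal sketch, with a placeholder namespace manifest of my own:

resource "kubectl_manifest" "demo_namespace" {
  yaml_body = <<-YAML
    apiVersion: v1
    kind: Namespace
    metadata:
      name: kubectl-demo
  YAML
}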

Enjoy Kubernetes deployments!
