Securing Azure Kubernetes Services

As part of a meetup that I was presenting for the Microsoft Cloud Security User Group this week, I also wanted to do a write-up with tips and tricks on how you can secure your Kubernetes environment running in Microsoft Azure (although some of the tips also apply to other platforms running elsewhere!). I have written down all the tips here, and you can find most of them in the PowerPoint –> events/MSUGC-DevSecOps-Final.pptx at main · msugn/events · GitHub

  • Use of private load balancer – When you create a service of type LoadBalancer in Microsoft Azure, an external (public) load balancer is provisioned by default. This is something we should avoid unless the service is properly secured. The easiest way to avoid it is to use Azure Policy to block public IP addresses, and to configure the services that do require an Azure Load Balancer to run internally only. This is done through annotations, as seen in the example below where we publish a LoadBalancer service but set azure-load-balancer-internal to "true". You can also use an annotation to give the service a specific IP address on the subnet. A sketch of an Azure Policy assignment that enforces internal load balancers follows the example.
apiVersion: v1
kind: Service
metadata:
  name: internal-app
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    service.beta.kubernetes.io/azure-load-balancer-ipv4: 10.240.0.25
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: internal-app
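To enforce this with Azure Policy as mentioned above, you can assign the built-in policy that requires internal load balancers. The sketch below is just an illustration: it assumes the built-in definition is named "Kubernetes clusters should use internal load balancers" in your tenant, and the scope placeholders need to be replaced with your own subscription and resource group.

# Look up the built-in policy definition by display name (verify the name in your tenant first)
definitionId=$(az policy definition list \
  --query "[?displayName=='Kubernetes clusters should use internal load balancers'].id" -o tsv)

# Assign it at the scope where your AKS clusters live
az policy assignment create \
  --name "aks-internal-lb-only" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<aks-resource-group>" \
  --policy "$definitionId"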
  • Ensuring that disk export is not available using a public endpoint – By default, disks provisioned by the Azure CSI drivers can be accessed over a public IP address (or private endpoints) when a disk export is configured. To avoid that, you can make a small adjustment to ensure that the storage being provisioned does not have this enabled, by adding a new storage class like the one below, where I just add networkAccessPolicy: DenyAll. A sketch of a persistent volume claim that uses this storage class follows the example.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-csi-secure
provisioner: disk.csi.azure.com
parameters:
  skuName: StandardSSD_LRS 
  networkAccessPolicy: DenyAll
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
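To make use of the new storage class, reference it from your persistent volume claims instead of the default one. A minimal sketch of such a claim (the name and size below are just placeholders):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: secure-data                      # placeholder name
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: managed-csi-secure   # the storage class defined above
  resources:
    requests:
      storage: 10Gi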
  • Use of Secrets Store CSI integration with Key Vault – I have seen many cases where developers store keys, secrets, or even certificates inside the Docker containers themselves and use them to authenticate to backend services such as databases or storage accounts. We want these secrets stored outside the cluster, and this is where the CSI driver integration with Key Vault comes in: secrets are kept in an Azure Key Vault instance and mounted into Kubernetes pods through the Secrets Store CSI driver interface. To enable this integration on an existing cluster you need to run the following Azure CLI command
az aks enable-addons --addons azure-keyvault-secrets-provider --name myAKSCluster --resource-group myResourceGroup

Access to the Key Vault can be granted either to a user-assigned managed identity on your AKS cluster or through Workload Identity, which is currently in preview. As an example, I'm going to show the setup using a user-assigned managed identity for now. It should be noted that if secret rotation is enabled on the addon, it updates the pod mount and the Kubernetes secret defined in the secretObjects field of the SecretProviderClass by polling for changes periodically, based on the rotation poll interval you've defined. The default rotation poll interval is 2 minutes. You can verify that the driver was successfully installed by running the following command

kubectl get pods -l app=secrets-store-csi-driver -n kube-system
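Secret rotation has to be enabled explicitly on the addon. A sketch of enabling it and adjusting the poll interval with the Azure CLI (the 5m interval is just an example value):

az aks addon update --resource-group myResourceGroup --name myAKSCluster \
  --addon azure-keyvault-secrets-provider \
  --enable-secret-rotation --rotation-poll-interval 5m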

When you enable the addon, it creates a user-assigned managed identity for the cluster, which we then grant access to the Key Vault so it can fetch secrets. The example below shows the different steps.

# 1 Create Key Vault called arckvaks
az keyvault create -n arckvaks -g arc-rg -l westeurope

# 2 Get the client ID of the addon's managed identity after enabling the Key Vault addon
az aks show -g arc-rg -n aks --query addonProfiles.azureKeyvaultSecretsProvider.identity.clientId -o tsv

# 3 Use the SPN from the previous step to assign access to the KeyVault
az keyvault set-policy -n arckvaks --key-permissions get --spn 55e0a54f-083b-44a9-b6ac-16259c1edc52
az keyvault set-policy -n arckvaks --secret-permissions get --spn 55e0a54f-083b-44a9-b6ac-16259c1edc52
az keyvault set-policy -n arckvaks --certificate-permissions get --spn 55e0a54f-083b-44a9-b6ac-16259c1edc52
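If your Key Vault uses the Azure RBAC permission model instead of access policies, the equivalent is a role assignment for the addon identity. A sketch, assuming the built-in "Key Vault Secrets User" role and the same client ID as above:

# Alternative to set-policy when the vault uses Azure RBAC
az role assignment create \
  --role "Key Vault Secrets User" \
  --assignee 55e0a54f-083b-44a9-b6ac-16259c1edc52 \
  --scope $(az keyvault show -n arckvaks -g arc-rg --query id -o tsv)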

Once access is granted, you can create a SecretProviderClass object of your own. In the YAML example below, the object is configured to work with my Key Vault named “arckvaks”. Before applying this provider class, please ensure that the secrets already exist within the key vault. You will also need to include the tenant ID, which is the ID of the Azure Active Directory tenant.

apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: arckvaks                  
  namespace: kube-system
spec:
  provider: azure
  parameters:
    keyvaultName: arckvaks                  
    useVMManagedIdentity: "true"                # use the addon's user-assigned managed identity
    userAssignedIdentityID: "<client-id>"       # client ID of the addon identity from step 2
    cloudName: ""
    objects:  |
      array:
        - |
          objectName: secret1             # set to the name of your secret in Key Vault
          objectType: secret              # Object types: secret, key or cert
          objectVersion: ""               # [OPTIONAL] object versions, default to latest if empty
    tenantId: "<tenant-id>"               # the tenant ID containing the Azure Key Vault instance

Once you have applied the SecretProviderClass, you can deploy a pod that mounts the secrets using the following YAML file

kind: Pod
apiVersion: v1
metadata:
  name: busybox-secrets-store-inline
spec:
  containers:
  - name: busybox
    image: k8s.gcr.io/e2e-test-images/busybox:1.29
    command:
      - "/bin/sleep"
      - "10000"
    volumeMounts:
    - name: secrets-store-inline
      mountPath: "/mnt/secrets-store"
      readOnly: true
  volumes:
    - name: secrets-store-inline
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: arckvaks                  
        # nodePublishSecretRef:                     # only required when using service principal mode
        #   name: secrets-store-creds               # only required when using service principal mode

When you then create the pod, you can verify that it can read the secret using the following command (as I've done in the screenshot below, although that references a different pod name).

kubectl exec busybox-secrets-store-inline --namespace default -- ls /mnt/secrets-store/
  • Private Cluster and Secure Remote Access – In many cases I've seen that most cloud-based clusters run with a public endpoint, meaning that the Kubernetes API is publicly exposed. The picture below shows the number of Kubernetes clusters deployed publicly in Norway. While this in itself might not be an issue, there are steps we can take to lock down this access. For one, we can use a private cluster deployment, meaning that the Kubernetes API is only available internally on the virtual network in Azure, as shown in the sketch below.
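A private cluster has to be configured when the cluster is created. A minimal sketch with the Azure CLI (cluster and resource group names are placeholders):

az aks create --resource-group myResourceGroup --name myPrivateAKSCluster \
  --enable-private-cluster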

Then we need another way to provide developers with secure remote access to the clusters and other services being tested. How do we do that? A secure tunnel, such as Cloudflare Zero-Trust.

To set up secure remote access using Cloudflare, we first need to create a new tunnel, which generates a token that we can use to authorize the tunnel. Go into Cloudflare Zero-Trust → Tunnels and click Create a tunnel; this will then generate the token that will be used.

Once you get the token, you need to encode it as base64, which can be done using this command in a Linux terminal/bash

echo -n "token" | base64

Once you then have the base64 version of the token, you need to configure the cloudflared deployment. You can use this YAML template as an example; just remember to adjust the number of replicas and the token at the bottom.
kubernetes / k8s cloudflare tunnel deployment (github.com)
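If you prefer not to copy the linked template directly, the sketch below shows roughly what such a deployment looks like: a Secret holding the base64-encoded tunnel token and a Deployment running cloudflared. The names and replica count are placeholders, and it assumes cloudflared picks up the token from the TUNNEL_TOKEN environment variable, so compare it with the template above before using it.

apiVersion: v1
kind: Secret
metadata:
  name: cloudflared-token          # placeholder name
type: Opaque
data:
  token: <base64-encoded-token>    # the base64 value generated above
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloudflared
spec:
  replicas: 2                      # adjust as needed
  selector:
    matchLabels:
      app: cloudflared
  template:
    metadata:
      labels:
        app: cloudflared
    spec:
      containers:
      - name: cloudflared
        image: cloudflare/cloudflared:latest
        args: ["tunnel", "--no-autoupdate", "run"]
        env:
        - name: TUNNEL_TOKEN       # cloudflared reads the tunnel token from this variable
          valueFrom:
            secretKeyRef:
              name: cloudflared-token
              key: token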

Once you then deploy the YAML file, you should see that the tunnel becomes active and starts communicating with Cloudflare, as shown on the dashboard page here.

Then we can publish either the entire virtual network or services directly to the internet behind Cloudflare Zero-Trust. Let me show one example, where I have a web portal that I want to publish externally. This dashboard is internally available on 10.0.148.120:8000, which I then add as a service to the tunnel, as seen in the screenshot below.

Then I add it as an application, where I define the criteria and conditions that need to be fulfilled for a user to be given access to the application. When I try to access the application, I will be greeted with this logon page (depending on how you have configured Cloudflare Zero-Trust).

After you have then successfully authenticated, voila!

This capability can also be extended to publishing the Kubernetes API (kube-apiserver) as well, but only for those using an authorized endpoint/client.

Container escape is a security vulnerability that occurs when an attacker gains unauthorized access to the host operating system from within a container. Containers are isolated environments that provide a secure way to run applications, but if an attacker can exploit a vulnerability in a container, they may be able to break out of the container and gain access to the underlying host operating system.

Once an attacker has escaped a container, they may be able to access sensitive information or execute malicious code on the host system. This can lead to data theft, system compromise, and other security breaches.

To prevent container escape, it is important to implement strong security measures such as Pod Security Admission.

  • Use of Pod Security Admission – Pod Security Admission (PSA) is a Kubernetes feature that enforces security policies on pods before they are created. PSA ensures that pods are configured to run securely and comply with the specified security standards, preventing potential security risks. It works by intercepting the pod creation request and validating the pod against a set of predefined security policies, the Pod Security Standards. PSA is enabled by default in AKS and is controlled by adding labels to a namespace, which determine which Pod Security Standard is enforced for pods running in that namespace.

By applying these labels, you can use PSA to enforce the restricted policy for a specific namespace

kubectl create namespace test-restricted 
kubectl label --overwrite ns test-restricted pod-security.kubernetes.io/enforce=restricted pod-security.kubernetes.io/warn=restricted
kubectl apply --namespace test-restricted -f https://raw.githubusercontent.com/Azure-Samples/azure-voting-app-redis/master/azure-vote-all-in-one-redis.yaml

When you try to apply the configuration, you will receive this warning message

You can also provide further isolation by using mechanisms such as KataVM (currently in preview; see Pod Sandboxing (preview) with Azure Kubernetes Service (AKS) – Azure Kubernetes Service | Microsoft Learn) or gVisor. At the time of writing this blog post, however, running gVisor on AKS is not officially supported by Microsoft.
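With Pod Sandboxing enabled on a node pool, you opt a workload into the isolated runtime through its runtime class. A minimal sketch, assuming the kata-mshv-vm-isolation runtime class name used in the preview documentation:

apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-busybox                       # placeholder name
spec:
  runtimeClassName: kata-mshv-vm-isolation      # runs the pod in an isolated Kata VM
  containers:
  - name: busybox
    image: k8s.gcr.io/e2e-test-images/busybox:1.29
    command: ["/bin/sleep", "10000"]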

  • Backup of persistent data – You might need to have persistent data stored within your pods, for example for services running databases or other storage services, which requires you to back up that data. There are numerous options available to handle backup, such as Velero, Portworx, or Kasten from Veeam. I've previously written about the use of Kasten for backup of AKS here –> Get started with Kasten for data protection on Azure Kubernetes Service – msandbu.org (it should also be noted that Microsoft has recently introduced its own backup service for AKS, which is based on Velero integrated with an Azure Backup vault).
