Monthly Archives: July 2017

Why Azure Container Instances is such a cool feature!

For those who haven’t seen it yet, Microsoft this week announced Azure Container Instances which is a new way of delivering Container instances on Azure.

Up until now, Microsoft has had the ability to deliver containers on Azure using Azure Container Service, where we specify which kind of orchestration engine we would like to use and the number of worker nodes (virtual machines) we would like to have. We were then bound to that number of virtual machine instances and the containers running on top of those virtual machines.

Azure Container Service is a free service on its own, but we are billed per minute for the virtual machines we are using underneath all the containers. That also means that the virtual machines being used by Azure Container Service are part of our responsibility.


So this means that if we have 10 virtual machines in a Kubernetes cluster but are only using a handful of the nodes, we still need to pay for all the virtual machines per minute. Not really cloud native, right?

Now this is the cool part about Azure Container Instances: we do not actually need to think about the underlying virtual machines. All we need to care about is the container itself.

Containers are billed per second they are running, instead of per minute as virtual machines are, which of course allows for even greater flexibility.


Now unlike Azure Container Service, ACI is not linked to a specific container orchestration solution. Does that mean you have to use Azure-specific commands and cannot reuse the Kubernetes commands you are used to? Well, Microsoft has understood that Kubernetes is the right approach and has therefore created a Kubernetes Connector for ACI –> which allows us to use kubectl against Azure Container Instances.

It does this by:
* Registering into the Kubernetes data plane as a Node with unlimited capacity
* Dispatching scheduled Pods to Azure Container Instances instead of a VM-based container engine
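Because the connector registers as an ordinary node, a pod can be pinned to it with a normal manifest. As a rough sketch (the node name `aci-connector` is the connector's default at the time of writing and may differ in your setup):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: helloworld
spec:
  containers:
  - name: helloworld
    image: microsoft/aci-helloworld
    ports:
    - containerPort: 80
  # Pin the pod to the virtual ACI node instead of a VM-based node
  nodeName: aci-connector
```

Once applied with kubectl, the pod runs as an Azure Container Instance while still being visible through the normal Kubernetes data plane.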

So Microsoft is quite serious about containers, and moving forward ACI will also support Windows containers. One might wonder how Microsoft does segmentation of tenant workloads on the virtual machines that run underneath. Microsoft now has quite a good range of different services for different workloads.


Now as mentioned, since this is in preview there are still some limitations:

  • Only Linux containers are supported at the moment.
  • It is not possible to attach a container to a virtual network.
  • Use of ACI is currently only through the Azure Cloud Shell or Azure Resource Manager templates.
  • There are some limitations both on region availability and the size of containers in a region.

Create the resource group in the West Europe location and the container:

az group create --name myResourceGroup --location westeurope

az container create --name mycontainer --image microsoft/aci-helloworld --resource-group myResourceGroup --ip-address public


Connect to the public IP in a browser and we come to the hello-world page.


Delete the container:

az container delete --name mycontainer --resource-group myResourceGroup

Container Groups

Azure Container Instances also supports deploying multiple containers onto a single host using a container group. This is useful when building an application sidecar for logging or monitoring, for instance. We can do group-based container deployment using an ARM template, where we can also define the CPU and memory requirements for each container.

az group create --name myResourceGroup --location westus

az group deployment create --name myContainerGroup --resource-group myResourceGroup --template-file azuredeploy.json

{
  "$schema": "",
  "contentVersion": "",
  "parameters": {},
  "variables": {
    "container1name": "aci-tutorial-app",
    "container1image": "nginx",
    "container2name": "aci-tutorial-sidecar",
    "container2image": "nginx"
  },
  "resources": [
    {
      "name": "myContainerGroup",
      "type": "Microsoft.ContainerInstance/containerGroups",
      "apiVersion": "2017-08-01-preview",
      "location": "[resourceGroup().location]",
      "properties": {
        "containers": [
          {
            "name": "[variables('container1name')]",
            "properties": {
              "image": "[variables('container1image')]",
              "resources": {
                "requests": {
                  "cpu": 1,
                  "memoryInGb": 1.5
                }
              },
              "ports": [
                {
                  "port": 80
                }
              ]
            }
          },
          {
            "name": "[variables('container2name')]",
            "properties": {
              "image": "[variables('container2image')]",
              "resources": {
                "requests": {
                  "cpu": 1,
                  "memoryInGb": 1.5
                }
              }
            }
          }
        ],
        "osType": "Linux",
        "ipAddress": {
          "type": "Public",
          "ports": [
            {
              "protocol": "tcp",
              "port": "80"
            }
          ]
        }
      }
    }
  ],
  "outputs": {
    "containerIPv4Address": {
      "type": "string",
      "value": "[reference(resourceId('Microsoft.ContainerInstance/containerGroups/', 'myContainerGroup')).ipAddress.ip]"
    }
  }
}

What about persistent storage options for ACI containers? Microsoft has already published documentation on how to add persistent storage to ACI containers, which can be found here –>

More info to come.

Creating Azure ARM template with Veeam Agent unattended setup

One of the things that you need to remember when moving to the public cloud is to back up your stateful virtual machines, since this is not something that is included as part of the basic service that cloud platforms (Azure, Google and Amazon, for instance) provide.

NB: Azure has a Backup service as part of the Recovery Services vault which enables backup of VMs in Azure, but it does not deliver the in-depth recovery options that Veeam delivers.

One of the cool things that Veeam provides is the Veeam Agent for Windows, which now supports backup directly to Cloud Connect at a service provider, for instance, and has quite good options for doing silent installs with automatic configuration of the agent itself. This makes it easy for us to do automatic deployment of VMs with backup configured.


Veeam Agent for Windows comes in different editions –> Cloud Connect is supported in the Workstation and Server editions.

Now when setting up new virtual machines in Azure you should automate the deployment in some way: either use some form of script which does unattended deployment of the agent and the backup configuration jobs, or have a sysprepped Azure image which already contains the Veeam Agent, which makes mass deployment easier.

Doing Unattended deployment

To do an unattended install of the Veeam Agent you just run the executable with the following parameters.

/silent /accepteula

This should also import the license file and define which edition the agent is running. The default path of the agent is %ProgramFiles%\Veeam\Endpoint Backup

From here we can, for instance, have a script which configures the correct license on the host.

Veeam.Agent.Configurator.exe -license /f: [/w] [/s]

/f: Path to license file

/w: Sets the agent edition to Workstation. If this parameter is not specified, Workstation edition is set automatically for the client OS versions.

/s: Sets the agent edition to Server. If this parameter is not specified, Server edition is set automatically for the server OS versions.
So a quick install script can look like this.

cd c:\

# Installs agent silent
.\VeeamAgentWindows_2.0.0.700.exe /silent /accepteula

# Add sleep before changing directory
Start-Sleep -Seconds 50
$path = 'C:\Program Files\Veeam\Endpoint Backup'
cd $path

# Adds license and changes the Server edition

.\Veeam.Agent.Configurator.exe -license /f:c:\veeam_agent_windows_trial_0_0.lic /s

Sysprepped Image

If you plan on creating a sysprepped image predefined with configuration and job you need to create a registry value under HKEY_LOCAL_MACHINE\SOFTWARE\Veeam\Veeam Endpoint Backup\SysprepMode (DWORD)=1. This registry key value is used to regenerate the job ID when Veeam Agent for Microsoft Windows starts for the first time on the new computer.

Setting it up in Sysprep mode will retain the license and the current job configuration of the agent itself.
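For convenience, the value described above can be captured in a .reg file (based on the key path given above) and imported before running sysprep:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Veeam\Veeam Endpoint Backup]
"SysprepMode"=dword:00000001
```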

Creating custom Configuration files

It is also possible to create a custom configuration file using XML, which can contain all the settings and backup jobs of the agent itself. This configuration file can also be imported to other agents. It is important to note that the configuration files do not contain any license information, so that has to be imported separately.

After you have configured the agent with the required configuration from a base VM you can export the configuration file using

.\Veeam.Agent.Configurator.exe -export

And just using the -import parameter will import the configuration. Now the question is how to generalize this for an Azure VM. The simplest way is to upload the agent setup to an Azure Storage blob (or have it as part of a sysprepped image), and then also place the license and the configuration file in an Azure Storage account. If you create a SAS token with access to each separate blob, you have granular access to each blob in Azure. A SAS blob token can be created using:

Get-AzureRmStorageAccount -Name <storageaccountname> -ResourceGroupName <resourcegroupname> | New-AzureStorageBlobSASToken -Container "script" -Blob "blobname" -Permission rwd

When this is done we can download each file using a web request from PowerShell, so we can reference it in a script.

Example script where each file is downloaded with its unique SAS token, without any specific configuration.

# Store variables

$folderName = "veeam"
$dest = "C:\WindowsAzure\$folderName"
$veeamagent = 'VeeamAgentWindows_2.0.0.700.exe'
$veeamlic = 'veeam_agent_windows_trial_0_0.lic'

mkdir $dest
cd $dest

# Downloads Veeam agent and license file and Configuration file

Invoke-WebRequest -Uri "<agent blob URL with SAS token>" -OutFile $dest\$veeamagent

Invoke-WebRequest -Uri "<license blob URL with SAS token>" -OutFile $dest\$veeamlic

# Installs agent silent
.\VeeamAgentWindows_2.0.0.700.exe /silent /accepteula

# Add sleep before changing directory
Start-Sleep -Seconds 60
$path = 'c:\Program Files\Veeam\Endpoint Backup'
cd $path

# Adds license and changes the Server edition

.\Veeam.Agent.Configurator.exe -license /f:$dest\$veeamlic /s


Adding it to an Azure ARM template with the script extension
Now the final piece is to add this script to an ARM template to do automatic deployment of virtual machines in Azure with Veeam agent installed with a license and prepared configuration job.

The easiest way to do this is using the VM Custom Script Extension in Azure to run the script directly, either as part of the deployment or by adding the extension to a pre-existing VM running in Azure. Create the script above as a PowerShell script, which will then be run as part of the ARM template deployment.

It is important here that the Veeam agent and script are placed in the storage account referenced by the ARM template variable.

"name": "MyCustomScriptExtension",
"type": "extensions",
"apiVersion": "2016-03-30",
"location": "[resourceGroup().location]",
"dependsOn": [
   "[concat('Microsoft.Compute/virtualMachines/myVM', copyindex())]"
],
"properties": {
   "publisher": "Microsoft.Compute",
   "type": "CustomScriptExtension",
   "typeHandlerVersion": "1.7",
   "autoUpgradeMinorVersion": true,
   "settings": {
     "fileUris": [
       "[concat('https://', variables('storageName'), '.blob.core.windows.net/scripts/start.ps1')]"
     ],
     "commandToExecute": "powershell.exe -ExecutionPolicy Unrestricted -File start.ps1"
   }
}

Or we can use PowerShell to run the script against a currently running VM:

Set-AzureRmVMCustomScriptExtension -ResourceGroupName "example" -Location "example" -VMName "example" -Name "Veeam" -TypeHandlerVersion "1.1" -StorageAccountName "Contoso" -StorageAccountKey <StorageKeyforStorageAccount> -FileName "veeam.ps1" -ContainerName "Scripts"



Azure Stack – Secure by design

Previously I have blogged about the underlying architecture and features which are going to be part of Azure Stack → Microsoft recently announced the launch of Azure Stack as well →

Responsibility model in Cloud

Now I want to focus a bit on one aspect that was not included in the previous blog post, and that has also not been highlighted in Microsoft's blog: security in the platform.

In a public cloud scenario, there is a distinct line between what is the public cloud vendor's responsibility and what is the customer's responsibility. The area of responsibility changes when a customer goes from an IaaS (Infrastructure as a Service) model to a PaaS or SaaS model: in the shift from IaaS to PaaS, more responsibility moves to the cloud provider. This would be the case if we were to go from a SQL Server running inside a virtual machine to an Azure SQL Database, where we as a customer have no control of the virtual instances that deliver the service.

Shared responsibility model between customer and cloud provider

In Azure there are numerous security mechanisms in place to ensure that data is safeguarded, going from the physical aspect up to the different services running on top. An example from a customer perspective: in public Azure a customer does not have access to the hypervisor layer, as we might be used to in a regular virtualization environment. We as a partner have the same limitations, so when managing customers we have to take this into account, which means we have to do management in a different manner.

Security on the platform layer

One of the design principles that Microsoft did with Azure Stack was that it should be a self-contained platform and be consistent with public Azure, which meant that management needed to have the same mechanisms in place.

With Azure Stack from a management perspective we only have access to the admin portal where we have no visibility into customer workloads. We can only use the admin portal to create tenant subscriptions, do Azure Stack infrastructure management and also get health status about the platform and audit information about administrator activities.

One of the security design principles that Microsoft used for Azure and Azure Stack is called “assume breach”. Assume breach is a philosophy where we already assume that the system is compromised or will be compromised, and ask how the platform can detect the breach and limit the effect of an attack. Microsoft therefore put numerous security mechanisms in place in Azure Stack, such as:

  • Constrained administration
    * Least privileged accounts – the platform itself has a set of service accounts used for different services, which run with least privilege.
    * Administration of the platform can only happen via the admin portal or the admin API.

  • Locked down infrastructure
    * Application whitelisting – only code that is digitally signed by Microsoft or by the Azure Stack OEM will run on the system; any other non-signed or third-party executable will not run.
    * Least-privileged communication – internal components in Azure Stack can only talk to the components they are intended to.
    * Network ACLs – everything is blocked by default using firewall rules.
    * Sealed hosts – no direct access to the underlying hosts.

  • Lifecycle management – Microsoft together with the OEM vendors will provide full lifecycle management, using the lifecycle host to do uninterrupted system patching of firmware, drivers, OS patches and so on.

The second security design principle that Microsoft used is “hardened by default”, which means that the underlying operating system has been fine-tuned for security.

  • Data at rest encryption – all storage is encrypted on disk using BitLocker, unlike in Azure where you need to enable this at the tenant level. Azure Stack still provides the same level of data redundancy using a three-way copy of data.
  • Strong authentication between infrastructure components.
  • Security OS baselines – Security Compliance Manager is used to apply predefined security templates to the underlying operating system.
  • Disabled use of legacy protocols – old protocols such as SMB 1 are disabled in the underlying operating system, and protocols with known security weaknesses such as NTLMv1, MS-CHAPv2, Digest, and CredSSP cannot be used.

  • Windows Server 2016 security features
    * Credential Guard – uses virtualization-based security to isolate secrets so that only privileged system software can access them.
    * Code Integrity – a feature also used by Credential Guard. Only code that is verified by Code Integrity, usually through a digital signature from a trusted signer, is allowed to run. This gives full control over allowed code in both kernel and user mode.
    * Anti-malware – Windows Defender runs on the virtual machines that make up the platform and on the host operating system.
    * Server Core is used to reduce the attack surface and restrict the use of certain features.

Security at tenant layer

Now we have looked at the different security mechanisms operating at the platform layer, which are invisible to the tenants running on Azure Stack. So let us take a closer look at the security features we can use as a tenant in Azure Stack.

  • Azure Resource Manager – ARM itself has an RBAC model which is used to determine what kind of resources a user has access to at the subscription, resource group and object level. Access within ARM can be given at the subscription level using the different built-in roles or a custom role, or users inside each tenant can be given access to a certain resource group which might contain one or multiple objects such as virtual machines. There they might only be given access to restart the virtual machines inside a certain resource group, or we can create a custom role with specific permissions to certain objects.

Overview of the role based access control in Azure Stack and different levels of access
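As a sketch of the custom-role scenario described above, a role definition that only allows viewing and restarting virtual machines could look something like this (the role name and the subscription scope are placeholders):

```json
{
  "Name": "VM Restart Operator",
  "Description": "Can view virtual machines and restart them",
  "Actions": [
    "Microsoft.Compute/virtualMachines/read",
    "Microsoft.Compute/virtualMachines/restart/action"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/<subscription-id>"
  ]
}
```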

  • Azure Resource Policies – can be used to enhance regular ARM access rules, such as allowing a user to only provision virtual machines of a certain instance type, or enforcing tags on objects.
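For instance, a resource policy rule that denies any VM size outside an approved list might be sketched like this (the allowed sizes are just examples):

```json
{
  "if": {
    "allOf": [
      { "field": "type", "equals": "Microsoft.Compute/virtualMachines" },
      {
        "not": {
          "field": "Microsoft.Compute/virtualMachines/sku.name",
          "in": [ "Standard_A1", "Standard_A2" ]
        }
      }
    ]
  },
  "then": { "effect": "deny" }
}
```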

  • Network Security Groups – allow for five-tuple firewall rules which can be defined either per network card or per subnet. This means that we can define firewall rules regardless of which guest OS is running, and the rules apply before the traffic can leave the virtual NIC.
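A five-tuple rule pairs protocol, source/destination address prefixes and port ranges with an allow or deny action. As a sketch in ARM security-rule form (allowing inbound HTTP from anywhere):

```json
{
  "name": "allow-http-inbound",
  "properties": {
    "priority": 100,
    "direction": "Inbound",
    "access": "Allow",
    "protocol": "Tcp",
    "sourceAddressPrefix": "*",
    "sourcePortRange": "*",
    "destinationAddressPrefix": "*",
    "destinationPortRange": "80"
  }
}
```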

  • Virtualized networking layer – Azure Stack uses a software-defined networking solution which is managed by the underlying network controller. It uses a tunneling protocol called VXLAN to isolate each tenant into its own virtual network. Because it uses VXLAN, it is not as open to traditional layer 2 attacks as it would be using regular VLANs.

  • TLS/SSL – uses symmetric cryptography based on a shared secret to encrypt communications as they travel over the network. This is enabled by default on all platform services and APIs available in Azure Stack.

  • IPsec – an industry-standard set of protocols used to provide authentication, integrity, and confidentiality of data at the IP packet level as it’s transferred across the network. This is used when setting up Site to Site VPN connection with a Gateway in Azure Stack.

  • Azure Key Vault – helps you easily manage and maintain control of the encryption keys used by cloud apps and services. Key Vault can for instance be used to store private keys of Digital Certificates used for App Service or virtual machines.

What do we still need to think about?

Even if Microsoft brings a lot of security enhancements and features in Azure Stack which are enabled by design, there are still a lot of considerations when moving workloads to Azure Stack.

  • Azure Stack does not provide a solution to do management of virtual machines – this means that we still need to do patching, updates, monitoring and management of in-guest applications and services on virtual machines.

  • Azure Stack does not provide a solution that does backup of data and virtual machines – this means that we still need to use some form of in-guest backup solution to maintain copies of our data.

  • Azure Stack does not provide an antimalware solution for in-guest VMs – this means that we still need some form of malware protection inside the virtual machines.

  • Azure Stack does not have Azure Security Center, so if you open up the virtual firewall to your virtual machines you will not get notified.

Running and optimizing Citrix in Microsoft Azure

This is a recap of the webinar that fellow CTP Dave Bretty and I held earlier today on MYCUGC, “Delivering and Optimizing Citrix from Microsoft Azure”. Since the webinar itself only covered the big picture, I decided to do some more in-depth work after the webinar.

So therefore I've started on a Citrix & Microsoft Azure whitepaper to cover all the different things one can consider, ranging from automation to best practices and different integration options. If you want to review or be part of the whitepaper process, feel free to reach out to me either on Twitter or email.

Other than that, you can see the slide deck from the webinar here –>

Nutanix + GCP match made in heaven?

It has been quiet here on this blog for some time now, and there are a lot of reasons for this. First off, I have been swamped with work lately, which has affected the time I have had available to actually do any blogging, and I have also been busy with some other side projects which will be visible soon. One of the things I've been working on recently is Google Cloud Platform.

Google Cloud Platform is Google's public cloud platform, and I have fallen in love with it. It's fast, elegant and simple to use, but I'll get back to that in a later blog post. Earlier this week Nutanix announced a strategic partnership with Google –>

For those who haven't heard about Nutanix, it is a company that delivers an enterprise private cloud based upon a hyperconverged platform. I have been blogging on different Nutanix topics as well –>

But I believe this is a match made in heaven, since Nutanix is actually based on some of the same technology that powers the underlying platform of GCP, and both platforms follow some of the same design principles: simplicity, speed & security.

So what will this partnership provide us with in terms of  technology?

  • Easing hybrid operations by automating provisioning and lifecycle management of applications across Nutanix and Google Cloud Platform (GCP) using the Nutanix Calm solution. This provides a single control plane to enable workload management across a hybrid cloud environment.

  • Bringing Nutanix Xi Cloud Services to GCP. This new hybrid cloud offering will let enterprise customers leverage services such as Disaster Recovery to effortlessly extend their on-premise datacenter environments into the cloud.

  • Enabling Nutanix Enterprise Cloud OS support for hybrid Kubernetes environments running Google Container Engine in the cloud and a Kubernetes cluster on Nutanix on-premises. Through this, customers will be able to deploy portable application blueprints that target both an on-premises Nutanix footprint as well as GCP.

  • In addition, the companies are also collaborating on IoT edge computing use cases. For example, customers training TensorFlow machine learning models in the cloud can run them on the edge on Nutanix and analyze the processed data on GCP.

You can also read more here –>

This is now going to be in strong competition with the Amazon Web Services + VMware partnership, and with Microsoft's Azure and Azure Stack offering. The VMware and AWS partnership is going to focus purely on IaaS, with perhaps tighter direct integration between the two but without leveraging the AWS services to their full extent, while Azure and Azure Stack have the benefit of the same underlying management layer without actually having any hybrid integrations yet.


I just hope that this partnership is the start of a journey when it comes to integration with Google. I would love to see even deeper integrations here, and some more information on the different options that will become available.

Nutanix also has support for AWS and Azure to some extent (and has had for some time), but it seems to me that they haven't prioritized developing more features there (which has of course made some sense, since they want to focus on the private cloud first). So I hope that the partnership with GCP will change that, and that they integrate Insight, more management, software-defined networking, and hybrid IaaS models as well in the future.