Author Archives: Marius Sandbu

Comparison between Horizon Cloud and Citrix Cloud on Azure

Over the last couple of weeks I have been working with VMware Horizon Cloud for Microsoft Azure, testing the bits and pieces of the platform, and especially looking at how it compares against Citrix Cloud in general. Therefore I decided to write this blog post to shed some light on how the two differ in terms of deployment and operations, and how to get Horizon Cloud up and running. You can review the requirements for a Horizon Cloud for Azure deployment here –>

One thing I want to highlight is that moving VDI to the cloud does not bring any real value unless it is done for the proper reasons; in most cases the public cloud is still more expensive than running it on local infrastructure. The most common use case is when you can benefit from the automatic scalability that the cloud provides, such as companies where the number of users fluctuates, going from 10 to 100 users during working hours (7 AM – 5 PM), and where you only pay for what you use in terms of infrastructure cost and licensing.
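To make the cost argument concrete, here is a rough sketch (in Python, with made-up hourly rates) of how paying only for working hours compares to running the same capacity around the clock:

```python
def monthly_compute_cost(vms, hourly_rate, hours_per_day, days=30):
    """Rough monthly compute cost for a pool of identical VMs."""
    return vms * hourly_rate * hours_per_day * days

# Hypothetical numbers: 10 session hosts at $0.20/hour.
always_on = monthly_compute_cost(10, 0.20, 24)      # runs 24/7
working_hours = monthly_compute_cost(10, 0.20, 10)  # 7 AM - 5 PM only

print(always_on, working_hours)  # 1440.0 600.0
```

Even in this simplified model, shutting hosts down outside working hours cuts the compute bill by more than half, which is where the pay-as-you-go argument actually holds up.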

The architecture is quite simple: like Citrix Cloud, it requires that we have an existing Azure subscription with an existing Active Directory virtual machine running and a virtual network defined. After you have set up the connection it will deploy a Horizon Cloud Node (Node Manager) which acts as the hub between the Horizon Cloud control plane and your servers and Active Directory.

It also provides a simple update mechanism: when a new version is available, the node will automatically upgrade itself, with the new Unified Access Gateway running in parallel while configuration information and system state are copied from the running SmartNode and Unified Access Gateways to the new ones. After the configuration information is copied and checks are completed, the new SmartNode and Unified Access Gateways become active.
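The blue-green flow described above can be sketched roughly like this (illustrative Python, not VMware's actual implementation):

```python
def blue_green_upgrade(active_node, new_version):
    """Sketch of the SmartNode blue-green upgrade: deploy the new
    node, copy state over, verify, then switch traffic to it."""
    new_node = {"version": new_version, "state": None, "active": False}
    new_node["state"] = dict(active_node["state"])    # copy config/state
    checks_ok = new_node["state"] == active_node["state"]
    if checks_ok:                                     # only switch if healthy
        new_node["active"], active_node["active"] = True, False
    return new_node

old = {"version": "1.0", "state": {"gateways": 2}, "active": True}
new = blue_green_upgrade(old, "1.1")
print(new["active"], old["active"])  # True False
```

The point of the pattern is that the old node keeps serving until the copy and checks succeed, which is what makes the sub-five-minute, low-risk upgrades possible.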

Architecture illustration of the node's resource groups, VMs, and subnets

To begin with let’s take a closer look at some of the capabilities that are included in the initial release of Horizon Cloud on Microsoft Azure.

* Application & Session Desktop Delivery
Ability to publish and manage RDS-hosted applications and desktops on Microsoft Azure while leveraging on-premises and cloud resources (VDI is not available yet; that is coming later)
* Hybrid Architecture
Support for both Horizon Cloud with on-premises infrastructure and Horizon Cloud on Microsoft Azure, in a single solution.
* User Experience & Access
Identity-based end-user catalog access via VMware Workspace ONE
Secure remote access for end users with integrated VMware Unified Access Gateway
Support for Blast Extreme, Blast Extreme Adaptive Transport (BEAT) protocol.
* Power Management
Ability to track and manage Microsoft Azure capacity consumption to keep costs low, allowing for scaling based upon sessions or schedule.
* Easy Deployment
Automated deployment of Horizon Cloud service components. Integration with the Microsoft Azure Marketplace allows importing a Windows Server image onto which the necessary agents are automatically applied.
* Simplified Management
Horizon Cloud is always maintained at the latest version. Under-five-minute, self-scheduled upgrades for components on Microsoft Azure via blue-green upgrades.
Unified Access Gateway deployed automatically in Microsoft Azure.
* Pricing
Horizon Cloud Apps
Named User – $8/month
Concurrent User – $13/month

One of the first things that struck me was the pricing model they have for the cloud, which is either named user or concurrent user. If we are thinking about a global organization where task workers roam across different regions, concurrent user makes a lot more sense, especially combined with the pay-as-you-go model of the cloud. Note also that XenApp Essentials from Citrix costs $12/month for each named user.
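With the list prices above ($8/named user and $13/concurrent user per month), a quick back-of-the-envelope calculation shows when concurrent licensing wins: roughly whenever peak concurrency stays below about 62% (8/13) of your named-user count.

```python
NAMED_PRICE, CONCURRENT_PRICE = 8, 13  # USD per user per month, from the post

def cheaper_model(named_users, peak_concurrent_users):
    """Compare the monthly cost of the two licensing models."""
    named_cost = named_users * NAMED_PRICE
    concurrent_cost = peak_concurrent_users * CONCURRENT_PRICE
    return "concurrent" if concurrent_cost < named_cost else "named"

# A global org with 1000 task workers but only 400 logged on at peak:
print(cheaper_model(1000, 400))  # concurrent ($5200 vs $8000)
# Everyone logged on at once: named wins.
print(cheaper_model(100, 100))   # named ($800 vs $1300)
```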
Another detail is that VMware chose to automatically deploy their Unified Access Gateway as a virtual appliance directly to Microsoft Azure, while with Citrix you would need to deploy this on your own or use the NGaaS service from Citrix. With the NGaaS service, however, all traffic is routed through Citrix Cloud POPs, whereas the Unified Access Gateway provides direct communication from the endpoint to the applications.

Another thing: when setting up agents in Azure, VMware supports only a limited set of virtual machine instances, namely Standard_D2_v2, Standard_D3_v2, Standard_D4_v2 and Standard_NV6. I am not sure why they only have this list; Citrix Cloud supports all available instance types on Azure. Also, one thing about the NV series: with this release, GPU is supported only in Microsoft Windows Server 2012 R2, due to a driver limitation in the Horizon agent on Microsoft Windows Server 2016.

To set up Horizon Cloud against Azure we need to create an application service principal in our Azure AD tenant, and this application (service principal) needs Contributor rights on the Azure subscription.
NOTE: It is important that the sign-on URL is http://localhost:8000 or else the wizard will fail.
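For reference, here is a small sketch (illustrative field names, not VMware's API) of validating the pieces of information the wizard asks for, including the sign-on URL gotcha:

```python
def validate_principal_config(cfg):
    """Check the inputs the Horizon Cloud wizard needs for the
    Azure AD service principal (field names are illustrative)."""
    required = ["subscription_id", "directory_id", "application_id", "key"]
    missing = [f for f in required if not cfg.get(f)]
    if missing:
        return False, "missing: " + ", ".join(missing)
    if cfg.get("sign_on_url") != "http://localhost:8000":
        return False, "sign-on URL must be http://localhost:8000"
    return True, "ok"

ok, msg = validate_principal_config({
    "subscription_id": "placeholder", "directory_id": "placeholder",
    "application_id": "placeholder", "key": "placeholder",
    "sign_on_url": "http://localhost:8000",
})
print(ok, msg)  # True ok
```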

Create App Registration screen with values for Hzn-Cloud-Principal

All this work setting up the service principal should really be automated. Citrix Cloud, by comparison, uses an Azure AD account to create the service principal for you, so you don't need to gather all the information like App ID, Directory ID and such.

The initial wizard also requires us to have a pre-created vNET. The wizard will automatically create the subnets within the vNET (Management, Desktop and DMZ). It will also handle the deployment of the access gateway.


The wizard will also automatically deploy a Unified Access Gateway, which will be accessible behind an Azure load balancer and equipped with a certificate. The only piece we need to fix ourselves is the public DNS record.

If you have a fresh account it will also validate the setup of the Azure account: the certificate, the user quota, and that the subnets are not already defined.
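Since the wizard carves the Management, Desktop and DMZ subnets out of the vNET you provide, it is worth checking up front that your address space has room and that nothing overlaps. A quick sanity check with Python's standard ipaddress module (the address ranges here are my own examples):

```python
import ipaddress

vnet = ipaddress.ip_network("10.0.0.0/16")
subnets = {
    "management": ipaddress.ip_network("10.0.1.0/27"),
    "desktop":    ipaddress.ip_network("10.0.2.0/24"),
    "dmz":        ipaddress.ip_network("10.0.3.0/28"),
}

# Every subnet must fit inside the vNET...
assert all(s.subnet_of(vnet) for s in subnets.values())
# ...and none of them may overlap each other.
nets = list(subnets.values())
assert not any(a.overlaps(b) for i, a in enumerate(nets) for b in nets[i + 1:])
print("subnet plan looks valid")
```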


After you are done with the initial wizard it will start to provision a jumpbox server in the Azure account and start downloading agents and other VHD files. After the jumpbox server is up and running it will start to set up the node manager. The jumpbox will then self-destruct once the node manager is up and running; it is only provisioned and used when there is an update or a node manager is being built.


After the node manager is up and has successfully connected back to the control plane (Horizon Cloud), you just need to complete the wizard setup and set up integration with Active Directory.


After you have integrated Horizon Cloud with Active Directory you will need to re-authenticate to VMware Cloud, and after logging in again you will also need to authenticate against the Active Directory domain that the node manager is integrated with.


After you are authenticated you need to create an image which will be used to deploy your applications. You can either bring your own image or import a VM from the marketplace.

1. Horizon will essentially create a VM using an image from the Azure Marketplace (either 2012 R2 or 2016) and preinstall the agent and related components, which we can then convert to an image.
2. After the desktop from the marketplace is created, we can make our adjustments to it and then convert it to an image. This makes it easy to create a master image while doing only a small piece of the image setup ourselves.
3. After that I need to create a farm based upon the image, with the same list of supported machine models. I also specify what kind of protocol, domain and client type I want to use. Further down I also specify the logon idle timeout value (before a session is kicked out).
4. Next I specify the update/maintenance sequence, where it will automatically drain each server. Best practice for virtual machine maintenance is to restart the VMs from time to time, to clear out cached resources or any memory leaks from third-party applications in the VM. I can also specify what the servers should do during the maintenance window, such as restart or rebuild.
5. After I've specified the number of VMs it will start to provision the farm based upon the image and machine instance type in Azure.
6. And last but not least, assign the desktop to a set of users.
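The rolling maintenance behaviour described above (drain a server, act on it during the window, then move on to the next) can be sketched like this; illustrative only, since Horizon's actual scheduler is a black box:

```python
def rolling_maintenance(servers, action="restart"):
    """Drain and service one farm server at a time so the farm
    never loses more than a single host to maintenance."""
    log = []
    for name in servers:
        log.append(f"drain {name}")     # stop new sessions, wait for logoffs
        log.append(f"{action} {name}")  # restart or rebuild during the window
        log.append(f"enable {name}")    # put the host back into rotation
    return log

log = rolling_maintenance(["rdsh-1", "rdsh-2"], action="rebuild")
print(log[0], "->", log[-1])  # drain rdsh-1 -> enable rdsh-2
```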

One thing I noticed and love is the dashboard showing issues directly related to Azure, such as quota management, since most Azure subscriptions have a soft quota which should be increased.

As a first impression, I love the work that VMware has done with Azure in terms of integration. It provides and supports many of the Azure features:
* Using Azure AD Service Principal for authenticating with Azure and also checking the storage quota.
* Using Managed Disks for VM provisioned on the farms
* Power Management for virtual machines using ARM underlying API.

* Automatic starting of another node in a farm if one goes down suddenly.
They also provide simple deployment of the Unified Access Gateway, and certificate management can be done using the Horizon Cloud HTML5 portal, which makes remote access easy to manage. Now, I enjoy working with NetScaler, but Citrix should do something similar to allow simple deployment of remote access where they just deploy a VPX instance directly to Azure.

A couple of things I would like to see in a future setup:
* Support for Encrypted Disks in Azure
* Support for other machine models and instances in Azure
* The ability to define my own resource groups
* An OMS module for monitoring (yes please!)
* The ability to specify disk size when using Managed Disks

Looking forward to seeing this develop moving forward!

More info on VMware HCX

After looking into the blog post announcements on VMware HCX after VMworld, I decided to get a bit more info on what HCX actually is. This blog post will try to summarize what it is and what it can do. HCX is not a single product, but a combination of multiple VMware products which will be available as a single solution.
HCX is also a cloud service, delivered together with HCX providers such as IBM or OVH.

So what can HCX provide? It essentially works as an extension, or bridge, between your existing infrastructure and an HCX provider. This allows, for instance:

  • Disaster Recovery 
  • Hybrid Cloud 
  • Migration to newer platforms

On the HCX provider side we have a VMware Cloud Foundation setup. Cloud Foundation is based on VMware vSphere, vSAN, NSX and SDDC Manager, where the last part automates and orchestrates the entire deployment process on the provider's end. Using NSX on the provider's end opens up a new way to do software-defined networking, where all traffic is wrapped in VXLAN. On the client's or customer's end we only need to deploy a single virtual machine (the HCX Client), which runs on your existing VMware infrastructure. The HCX Client will be backwards compatible with versions as old as ESX 5.1, and allows for management from the existing VMware console.

HCX will also come with WAN optimization and will allow us to connect our existing DC over the regular internet, or we can use a direct connection with the cloud provider. Regardless, all traffic will be encrypted using AES 256-bit encryption.

HCX will provide secure live vMotion: HCX proxies vMotion, resulting in a secure, zero-downtime live migration to the cloud over the HCX interconnect fabric described above. It will also provide built-in business continuity: HCX provides DRaaS to enable business continuity while migrating and moving applications, and allows customers to define RPO/RTO as low as 5 minutes for VMs.
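To put the 5-minute RPO claim in context: RPO is the maximum data loss you tolerate, so a replication cycle has to complete at least that often. A trivial check:

```python
def meets_rpo(replication_interval_min, rpo_min):
    """A replication schedule meets an RPO only if a full cycle
    completes at least as often as the RPO allows."""
    return replication_interval_min <= rpo_min

print(meets_rpo(5, 5))   # True: replicating every 5 min meets a 5-min RPO
print(meets_rpo(15, 5))  # False: up to 15 min of changes could be lost
```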

Some questions I am left with (which I'm guessing will be answered when it is released):

1: How will the HCX Client provide redundancy on the customer side? Can we set up multiple HCX Clients and load-balance traffic across them?
2: How can we handle disaster recovery when it comes to a layer-two network failure?
3: How does it integrate with older versions of VMware where we don't have the web-based console?
4: Since it doesn't require us to have NSX on the customer side, and we pay for the license as part of the cloud offering, what kind of functionality will we get?

When will it be available?
November 2017 (later this month), so I'm looking forward to testing this, on IBM Bluemix especially. IBM is also consolidating both of their platforms (Bluemix and SoftLayer) into a single platform from a management perspective, so it should be available from the different IBM SoftLayer locations pretty soon –>

What is Microsoft security story?

Looking back at 2017 so far, there has been a lot of development from Microsoft when it comes to security products. Just looking at Microsoft Ignite a couple of weeks back, most announcements were security focused, with new products such as Azure ATP, an improved version of Windows Defender ATP, and of course a lot of investment in security features directly in Azure Active Directory. All this new stuff has confused me at times: what does each product actually do, and how does it interact with other features? Therefore I decided to write this post to show what each product does and how it fits into the Windows ecosystem. This Visio diagram shows some of the integrations and the different cloud security products that Microsoft offers.

But to understand what they actually provide in terms of functionality, let's look closer at some of the products.

Microsoft ATA
Microsoft ATA (Advanced Threat Analytics), or Azure ATP as it was announced during Microsoft Ignite, is a service which can detect multiple kinds of attacks against Active Directory.

This can be for instance:
Pass-the-Ticket (PtT)
Pass-the-Hash (PtH)
Forged PAC (MS14-068)
Golden Ticket
Malicious replications
Brute Force
Remote execution

The list is quite long; you can read more about it here → Until now, Microsoft ATA has been a piece of software that you needed to install on your local infrastructure, and the architecture has been quite simple. You needed an ATA Center, which was the central point of management and contained the MongoDB holding all the events. You also had ATA Gateways, which were used to gather information from domain controllers either using Windows Event Forwarding or by port-mirroring domain controller traffic. Later on, Microsoft released Lightweight Gateways, which allowed us to install a Windows agent directly on the domain controllers. This removed the need to set up SPAN or RSPAN to get domain controller traffic. With the latest release, ATA 1.8, the lightweight agent was improved further to reduce bandwidth usage and to forward Windows events as well. With Azure ATP, Microsoft has essentially moved the ATA Center directly to Azure.

The bottom line for Azure ATP:
Detect, investigate and respond to advanced attacks inside Active Directory that exploit weaknesses in security or authentication protocols.

Windows Defender Advanced Threat Protection
This is another cloud service, initially aimed at Windows 10 endpoints but recently also given support for Windows Server (again, mostly aimed at Windows environments). It uses sensors in Windows 10 which report a lot of what happens inside a machine directly to the cloud service.

It allows us to take a closer look at suspicious behavior against a machine, an IP address or an end user, for instance. This could be an end user running a strange script, triggered from an email the end user received, which has therefore raised an alert in ATP. All suspicious behavior is also sent through the Microsoft Intelligent Security Graph, so when Windows Defender ATP flags a process tree (say, a tree for a PE file that opens a command-line shell connecting to a remote host), the service augments this observation with various contextual signals, such as the prevalence of the file, the prevalence of the host, and whether the file was observed in Office 365. Windows Defender ATP classifiers consider these contextual signals before deciding to raise an alert.

To onboard an agent to this service, you basically run a PowerShell script deployed using Group Policy, or use Intune with the OMA-URI ./Device/Vendor/MSFT/WindowsAdvancedThreatProtection/Onboarding

The bottom line for Windows Defender ATP:
Detect, investigate and respond to advanced or suspicious attacks inside Windows infrastructure (clients and Windows Server), combined with the machine learning capabilities of the Microsoft Intelligent Security Graph.

Microsoft Azure Security Center
Initially, Azure Security Center was an offering about protecting virtual machines and policy control in Microsoft Azure, but it has now evolved into much more. It has been extended and can now be used against on-premises machines as well, using the Log Analytics agent (Microsoft Monitoring Agent).

The service is largely about giving recommendations on the services you use in Microsoft Azure. The solution can monitor:
Virtual machines (VMs) (including Cloud Services)
Azure Virtual Networks
Azure SQL service
Azure Storage account
Azure Web Apps (in App Service Environment)
Partner solutions integrated with your Azure subscription such as a web application firewall on VMs and on App Service Environment

It also allows us to create a security policy which defines a baseline for security in Microsoft Azure, such as requiring all machines running in Azure to have a set of default ports blocked by the firewall, requiring all machines to have the monitoring agent installed, and so on. You can also use something called playbooks, which react to an alert in Security Center by triggering a Logic App workflow.
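A security policy like the one described (required ports blocked, monitoring agent installed) is essentially a baseline evaluated per VM. A toy version of that evaluation, with field names of my own invention rather than Security Center's actual schema:

```python
BASELINE = {"blocked_ports": {23, 135, 445}, "agent_required": True}

def evaluate_vm(vm):
    """Return the list of baseline violations for one VM."""
    violations = []
    open_blocked = BASELINE["blocked_ports"] & set(vm["open_ports"])
    if open_blocked:
        violations.append(f"ports open that should be blocked: {sorted(open_blocked)}")
    if BASELINE["agent_required"] and not vm["agent_installed"]:
        violations.append("monitoring agent missing")
    return violations

vm = {"name": "web-01", "open_ports": [80, 443, 445], "agent_installed": False}
print(evaluate_vm(vm))
```

A playbook, in these terms, is just an action wired to fire whenever such a violation list is non-empty.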

The bottom line for Azure Security Center:
Proactive security detection for resources in Azure (virtual machines, PaaS), plus security baselines and automation.

Cloud App Security
This is another cloud service, aimed at protecting the flow of information in SaaS, such as data and governance. It can discover all cloud use in your organization, including Shadow IT reporting, control and risk assessment. It can monitor and control data in the cloud by gaining visibility, enforcing DLP policies, alerting and enabling investigation.

It can connect to the following Cloud applications
G Suite
Office 365

There it can be used to apply DLP policies, or to see who accesses what from where. It can also ingest information from firewall products such as Blue Coat, Cisco, Zscaler, Websense, Palo Alto and others. This allows Cloud App Security to easily detect what kind of cloud applications your end users are running.

The bottom line for Cloud App Security:
Protecting cloud application (SaaS) usage for enterprises using DLP policies and cloud application discovery.

Microsoft Intune
Intune can be described as an endpoint management solution, since it allows for remote management, policy control and software deployment to endpoints. These can be Windows machines or mobile devices. It also provides MAM (Mobile Application Management). Intune uses the open OMA-URI standard to manage Windows 10 devices directly. Microsoft is also investing heavily in the OMA-URI CSPs, with a lot of new options coming with every Windows 10 update, which you can see here →

From a security perspective, Intune can be used to control endpoint configuration:
Management of Windows Defender on client computers
Management of Windows features such as Application Guard, BitLocker and Device Guard
Remote management (reset, remote lock, factory reset) of devices
Security policies for mobile devices, such as disallowing the camera or enforcing encryption, and other policy management using OMA-URI
Delivering Windows patches using Windows Update for Business
Configuring Windows Information Protection policies
Integrating with VPN devices to provide NAC using the Device Health Attestation service

The bottom line for Intune:
Provide endpoint management and policy control, also handling update management and simple reporting.

Azure Rights Management
Azure Rights Management is an Azure feature aimed at protecting data. It allows us, for instance, to encrypt file attachments and send them to another recipient. We can also use it to define policies on who should have access to view a file, and even to trace who has opened the file. Azure Rights Management integrates with Office 365, and can also integrate with regular Windows-based environments such as Windows file servers, Exchange and SharePoint. It requires a special agent to be installed in order to view files protected by Azure RMS, and at its core is Azure AD, which defines whether a user gets access to a protected resource.

The bottom line for Azure Rights Management:

Provides protection and encryption of information such as files and content, both locally and in Office 365.

In the center of this we also have Azure Active Directory, which is the source of identity for multiple services such as Microsoft Azure, Office 365 among others but is also needed for many of the other services such as use of Azure MFA, RMS, Conditional Access and such. Azure Active Directory also provides multiple security services such as Privileged Identity Management and Identity Protection.

With Windows 10 we also have a new feature called Azure AD Join which allows us to join a Windows 10 device directly to Azure AD instead of a local Active Directory domain. This also helps us isolate the endpoints away from Active Directory and help it stay more secure and move the endpoints away from the infrastructure. Also with the latest versions of Windows 10, Microsoft has done a lot of investment adding new security features as well such as Application Guard, Exploit Guard and such which all can be managed from Microsoft Intune as well.

Looking at all the features and products Microsoft has just within security, it can be a bit confusing, but hopefully this article has highlighted some of the differences and which area within security each of these products belongs to. This rich ecosystem shows how much Microsoft has invested in cloud-based security over the last years; if we were to look back only five years, only a fraction of these services actually existed. But Microsoft has to make this a more integrated solution, because right now there are a lot of different products with somewhat overlapping functionality across different product suites. Then again, different detection and defense products should be treated differently. I, for one, would love to have one place for cloud-based security: Cloud App Security and Log Analytics to detect SaaS cloud usage and user activity, combined with ATA information to see if someone has gotten hold of an end user's credentials and tried something suspicious.

I would also like to see Defender ATP combined with Azure Security Center to get more in-depth insight into server workloads, and Defender ATP enabled to do more Defender and endpoint security policy management, because right now it feels like a product that can only be used for analytics and reconnaissance. I would love more in-depth control of Defender and other security mechanisms from Defender ATP.




Shared computer support for Office 365 and Citrix with Azure AD PTA

One of the issues with delivering Office 365 in a non-persistent Citrix environment is how to manage licensing and activation. Previously we needed an ADFS infrastructure in place, with the Group Policy setting “Automatic Activation with federated credentials”, to allow for seamless activation without the end user needing to enter any information. This was needed because when a user logs on to a XenApp host and starts Office, they need to log in with their Azure AD credentials. This process generates a license token bound to the machine the user was logged into. If the user then switched to another virtual machine the next day, they would need to repeat the process there.

During Ignite, Microsoft announced that Azure AD Connect PTA (pass-through authentication) was now generally available. This provides seamless SSO against Office 365 without the need to set up an ADFS infrastructure. It makes a lot of sense for small businesses that don't want the complexity of ADFS just to get automatic activation and/or authentication for Office 365.

However, AD Connect PTA had two issues for Office 365:
1: It does not work together with the “Automatic Activation with federated credentials” policy.
2: The user is required to type in their UPN to get authenticated.
This makes the authentication process a bit simpler, but the license token was still machine-bound, and therefore a user would need to repeat the process the next day.

However! With version 1704 of Office 365 you now have the ability to set up licensing token roaming. This allows us to configure the licensing token to roam with the user's profile or be located in a shared folder on the network.

To configure licensing token roaming, you can use either the Office 2016 Deployment Tool or Group Policy, or you can use Registry Editor to edit the registry. Whichever method you choose, you need to provide a folder location that is unique to the user. The folder location can either be part of the user’s roaming profile or a shared folder on the network. Office needs to be able to write to that folder location. If you’re using a shared folder on the network, be aware that network latency problems can adversely impact the time it takes to open Office.

If you’re using Group Policy, download the most current Office 2016 Administrative Template files (ADMX/ADML) which can be found here –>
and enable the “Specify the location to save the licensing token used by shared computer activation” policy setting. This policy setting is found under Computer Configuration\Policies\Administrative Templates\Microsoft Office 2016 (Machine)\Licensing Settings.


Now, together with Azure AD PTA, the user only needs to type their username once; a token is generated and cached on a network share, allowing for a better end-user experience.
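If you go the registry route instead of Group Policy, the relevant values (to the best of my knowledge; verify against the Office documentation) are SCLCacheOverride and SCLCacheOverrideDirectory under HKCU\Software\Microsoft\Office\16.0\Common\Licensing. The key requirement is that the folder is unique per user; the share path below is an example of my own:

```python
def token_roaming_settings(username, share=r"\\fileserver\octokens"):
    """Per-user licensing-token location for shared computer activation.
    Registry value names are from the Office 2016 documentation; the
    share path is a made-up example."""
    return {
        "key": r"HKCU\Software\Microsoft\Office\16.0\Common\Licensing",
        "SCLCacheOverride": 1,                                # enable the override
        "SCLCacheOverrideDirectory": f"{share}\\{username}",  # unique per user
    }

cfg = token_roaming_settings("msandbu")
print(cfg["SCLCacheOverrideDirectory"])  # \\fileserver\octokens\msandbu
```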

All Announcements from Microsoft Ignite

Earlier today Microsoft held the opening keynote of Microsoft Ignite, and with it came a bunch of new announcements of services and updates to most of the different features. This blog post summarizes all the updates that have been announced so far. Looking at the keynotes, there have been little to no updates so far to the System Center suite; there have been some updates to Configuration Manager, but those are to bridge client management into Intune.

More in-depth writing on each service will come shortly.

General Availability of HDInsight Query –>
What is new with Cloud App Security –>
What is new with Azure Active Directory –>
Announcing Preview of Storage Firewalls
Managed Application Services Catalogue –>
Azure Log Analytics, meet our new query language –>
Microsoft Azure:
Microsoft Azure DDoS Protection
You can read more about it here –>
Microsoft App Service Premium –>
Microsoft Azure Policies –>
Microsoft Azure Cost Management (Based upon Cloudyn) –> &
More Flexible NSG rules with Tags –>
Microsoft Azure FileSync –>
Microsoft Azure IoT updates –>
Microsoft Azure Certifications and exam updates –>
Microsoft SQL Server Vulnerability Assessment –>
Microsoft Azure Data Factory Enhancements –>
Microsoft Azure Machine Learning Enhancements –>
Semi-Annual Channel release of Windows Server, version 1709
Azure Service Fabric with Linux Support –>
PowerShell support in Azure Cloud Shell –>
Hybrid Security with Azure Security Center –>
Updates to Cognitive APIs in Azure –>
General availability of SQL Server 2017 –>
Updates and first shipping Azure Stack –>
Azure Database Migration Services –>
Integration between Cosmos DB and Azure Functions –>
Updates to Azure Stream Analytics –>
Azure DataBox for offline storage migration –>
Bringing Azure Functions to MacOSX and Linux –>
Faster Compute options for Azure SQL Datawarehouse –>
Announcing Azure Migrate –>
Microsoft Azure Availability Zones –>
Microsoft Azure Load Balancer Standard –>
Co management with System Center and Intune –>
Microsoft Compliance Manager –>
Updates to Conditional Access –>
Updates to Microsoft RDS –>
Bridging together Azure Monitoring and Log Analytics –>
Microsoft Bing For Business –>
Microsoft Azure Quantum Services –>
Microsoft Azure and planned Maintenance –>
Microsoft Global Vnet peering –>
Microsoft PowerBI updates –>
Microsoft ADFS webpage helper –>
Microsoft Exchange migrate to Office365 Groups –>
Microsoft Defender ATP and auto remediation –>
Other news and updates.
PowerBI Embedded into Microsoft Azure –>
Linux and Python support for Azure Automation –>

Microsoft #Teams will replace #Skype for Business (Client). But there will be a vNext SfB on-prem server late 2018 –>
Office 365 MyAnalytics –>
Office 365 – Multiregion –>
NVIDIA P40 and P100 GPUS now available on virtual machine instances in Azure
Microsoft Office 365 My Analytics –>
GDPR assessment –>
Microsoft Azure Network updates –>
New VM instances types Azure –>
SharePoint Migration tool –>
Azure ATP preview –>
All Updates to Office365 –>
Office365 and Security –>
The next version of Office, Office 2019: no more MSI, only C2R –>
OneDrive Announcements –>

Google ID support for B2B authentication against Azure AD.
Microsoft Intune can now deploy PowerShell and EXE using MDM channel –>

Microsoft Server 2019 announced!

Public Cloud Comparison on GCP, AWS, Azure and IBM.

It has been a busy month, but finally I have some news to share. Lately I have been working almost full time on Azure, AWS and GCP. As part of that experience I have also gathered a lot of information about the different cloud providers, especially on the strengths and weaknesses of each vendor.

I have previously worked on their ADC category, where I did a comparison of KEMP, Citrix NetScaler, AVI and others. Now I am honored to be part of a group which has gotten together and started on a Public Cloud Platform Comparison –>

I have been part of the work on Microsoft Azure and Google Cloud Platform. Please feel free to give me feedback or comments if you find anything that is missing.

Our next plan is to expand this with platform services as well, taking a closer look at the different database and app services and such.

So stay tuned for more – Marius

Just in time Access for Virtual Machines in Azure

The issue with having a publicly accessible virtual machine in Microsoft Azure is that the IP address it uses is in a known range (Microsoft publishes the IP ranges here –>), which makes those IP addresses quite popular with attackers using different brute-force mechanisms. (Having a VM available on Azure for 5 hours, I got about 1500 authentication attempts.)

Just in time scenario


It is therefore always recommended to lock down your virtual machines using network security groups and only grant access when needed. Of course, this is a cumbersome process, because you then need to go in and alter the NSG rules whenever someone needs access. Luckily, Microsoft recently released a preview of just-in-time access for virtual machines using Azure Security Center.

NOTE: The just-in-time feature is in preview, available in the Standard tier of Security Center (which can be set up using the 60-day Standard trial), and only supports virtual machines deployed using Azure Resource Manager.

This feature allows us to grant access to a specific service on a virtual machine, such as SSH or RDP, for a set amount of time, for instance 3 hours; the feature will then revert the NSG rules back to the original configuration.
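The mechanics are simple to reason about: open the requested port for the requested window, then put the NSG back exactly as it was. A sketch of that logic (illustrative only, not the Security Center implementation):

```python
from datetime import datetime, timedelta

def request_jit_access(nsg_rules, port, hours=3, now=None):
    """Return a temporary rule set with the port opened, the original
    rules to restore, and the expiry of the window (default 3 hours)."""
    now = now or datetime.utcnow()
    temporary = nsg_rules + [{"port": port, "action": "allow"}]
    return temporary, nsg_rules, now + timedelta(hours=hours)

original = [{"port": "any", "action": "deny"}]
temp, restore_to, expires_at = request_jit_access(
    original, 3389, hours=3, now=datetime(2017, 8, 4, 12, 0))
print(len(temp), expires_at)  # 2 2017-08-04 15:00:00
```

Once `expires_at` passes, the service simply writes `restore_to` back, which is why the original NSG configuration survives untouched.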

In order to use the feature we have to enable JIT access on our virtual machines

Enable just in time access

Below I have virtual machines which are already configured for JIT access. We can only configure virtual machines that have an NSG attached.


From here I can select a VM and request access (all requests and approvals are logged), and I can also specify ports, source IP and time range (the default is 3 hours).


Note that this feature does not log any activity going on inside the virtual machine, so it should be used in conjunction with Log Analytics, which can gather security logs from inside the guest OS; there you can track all the access that happened in the interval during which a virtual machine was available for remote access.

In order to use this feature from PowerShell, you need the latest Azure PowerShell cmdlets and the Azure Security Center cmdlets. Alternatively, you can use the UI to set up the access and from there on invoke access using PowerShell.

Get the latest module here –>

Install-Module -Name Azure-Security-Center (NOTE: a new version came out today, 04/08/2017)

Then you need to log in to your Azure subscription using PowerShell. In order to invoke a JIT session for a virtual machine, you run the command:

Invoke-ASCJITAccess -ResourceGroupName nameofresourcegroup -VM nameofvm -Port portyouwanttoopen -Hours 3

NOTE: I had some issues with the Azure Security Center module because of the regional settings on my computer.

Why Azure Container Instances is a such a cool feature!

For those who haven’t seen it yet, Microsoft this week announced Azure Container Instances (ACI), which is a new way of delivering container instances on Azure.

Up until now, Microsoft has delivered containers on Azure using Azure Container Service (ACS), where we specify which orchestration engine we would like to use and the number of worker nodes (virtual machines) we would like to have. We were then bound to that number of virtual machine instances and the containers running on top of them.

Azure Container Service is a free service on its own, but we are billed per minute for the virtual machines running underneath all the containers. That also means that the virtual machines being used by Azure Container Service are part of our responsibility.


So if we have 10 virtual machines in a Kubernetes cluster but are only using a limited number of nodes, we still need to pay for all the virtual machines per minute. Not really cloud native, right?

Now, the cool part about Azure Container Instances is that we do not actually need to think about the underlying virtual machines; all we need to care about is the container itself.

Containers are billed per second while they are running, instead of per minute as VMs are, which of course allows for even greater flexibility.


Now, unlike Azure Container Service, ACI is not linked to a specific container orchestration solution. Does this mean you have to use Azure-specific commands and cannot reuse the kubectl commands you are used to? Well, Microsoft has understood that Kubernetes is the right approach and has therefore created a Kubernetes connector for ACI –> which allows us to use kubectl against Azure Container Instances.

It does this by
* Registering into the Kubernetes data plane as a Node with unlimited capacity
* Dispatching scheduled Pods to Azure Container Instances instead of a VM-based container engine
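Once the connector has registered its virtual node, running a pod on ACI is just a matter of scheduling it onto that node. A hypothetical manifest could look like this (the node name is an example; check what the connector registered with `kubectl get nodes`):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: helloworld
spec:
  # Target the virtual node registered by the ACI connector,
  # so the pod is dispatched to Azure Container Instances
  nodeName: aci-connector
  containers:
  - name: helloworld
    image: microsoft/aci-helloworld
    ports:
    - containerPort: 80
```

Applying this with `kubectl create -f` should then result in a container instance in Azure rather than a pod on a VM-based node.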

Microsoft is clearly quite serious about containers, and moving forward ACI will also support Windows containers. One might wonder how Microsoft segments tenant workloads on the virtual machines that run underneath. Microsoft now has quite a good range of different services for different workloads.


Now, as mentioned, since this is in preview there are still some limitations:

* Only Linux containers are supported at the moment.
* It is not possible to attach a container to a virtual network.
* ACI can currently only be used through the Azure Cloud Shell or Azure Resource Manager templates.
* There are limitations both on region availability and on the size of containers in a region.

Create the resource group in the West Europe location and then the container:

az group create --name myResourceGroup --location westeurope

az container create --name mycontainer --image microsoft/aci-helloworld --resource-group myResourceGroup --ip-address public


Connect to the URL and we come to this page


Delete the container:

az container delete --name mycontainer --resource-group myResourceGroup

Container Groups

Azure Container Instances also supports the deployment of multiple containers onto a single host using a container group. This is useful when building an application sidecar for logging or monitoring, for instance. We can do group-based container deployment using an ARM template, where we can also define the CPU and memory specifics for each container.

az group create --name myResourceGroup --location westus

az group deployment create --name myContainerGroup --resource-group myResourceGroup --template-file azuredeploy.json

{
  "$schema": "",
  "contentVersion": "",
  "parameters": {},
  "variables": {
    "container1name": "aci-tutorial-app",
    "container1image": "nginx",
    "container2name": "aci-tutorial-sidecar",
    "container2image": "nginx"
  },
  "resources": [
    {
      "name": "myContainerGroup",
      "type": "Microsoft.ContainerInstance/containerGroups",
      "apiVersion": "2017-08-01-preview",
      "location": "[resourceGroup().location]",
      "properties": {
        "containers": [
          {
            "name": "[variables('container1name')]",
            "properties": {
              "image": "[variables('container1image')]",
              "resources": {
                "requests": {
                  "cpu": 1,
                  "memoryInGb": 1.5
                }
              },
              "ports": [
                {
                  "port": 80
                }
              ]
            }
          },
          {
            "name": "[variables('container2name')]",
            "properties": {
              "image": "[variables('container2image')]",
              "resources": {
                "requests": {
                  "cpu": 1,
                  "memoryInGb": 1.5
                }
              }
            }
          }
        ],
        "osType": "Linux",
        "ipAddress": {
          "type": "Public",
          "ports": [
            {
              "protocol": "tcp",
              "port": "80"
            }
          ]
        }
      }
    }
  ],
  "outputs": {
    "containerIPv4Address": {
      "type": "string",
      "value": "[reference(resourceId('Microsoft.ContainerInstance/containerGroups/', 'myContainerGroup')).ipAddress.ip]"
    }
  }
}

What about persistent storage options for ACI containers? Microsoft has already documented how to add persistent storage to ACI containers, which can be found here –>

More info to come.

Creating Azure ARM template with Veeam Agent unattended setup

One of the things you need to remember when moving to the public cloud is backup of your stateful virtual machines, since this is not included as part of the basic service that cloud platforms (Azure, Google and Amazon, for instance) provide.

NB: Azure has a Backup service as part of Recovery Services vault which enables backup of VMs in Azure, but it does not deliver the in-depth recovery options that Veeam delivers.

One of the cool things Veeam provides is the Veeam Agent for Windows, which now supports backup directly to Cloud Connect at a service provider, for instance, and has quite good ways of doing silent installs with automatic configuration of the agent itself. This makes it easy to do automatic deployment of VMs with backup configured.


Veeam Agent for Windows comes in different editions –>
where Cloud Connect is supported on the Workstation and Server editions.

When setting up new virtual machines in Azure you should automate the deployment in some way: either use some form of script which does unattended deployment of the agent and the backup configuration jobs, or use a sysprepped Azure image which already contains the Veeam Agent, which makes mass deployment easier.

Doing Unattended deployment

To do an unattended install of the Veeam Agent you just run the executable with the following parameters:

/silent /accepteula

This should also import the license file and define which edition the agent is running. The default path of the agent is %ProgramFiles%\Veeam\Endpoint Backup.

From here we can, for instance, have a script which configures the correct license on the host:

Veeam.Agent.Configurator.exe -license /f: [/w] [/s]

/f: Path to license file

/w: Sets the agent edition to Workstation. If this parameter is not specified, the Workstation edition is set automatically on client OS versions.

/s: Sets the agent edition to Server. If this parameter is not specified, the Server edition is set automatically on server OS versions.

So a quick install script can look like this:

cd c:\

# Installs the agent silently
.\VeeamAgentWindows_2.0.0.700.exe /silent /accepteula

# Sleep before changing directory, so the installer can finish
Start-Sleep -Seconds 50
$path = 'C:\Program Files\Veeam\Endpoint Backup'
cd $path

# Adds the license and sets the Server edition

.\Veeam.Agent.Configurator.exe -license /f:c:\veeam_agent_windows_trial_0_0.lic /s

Sysprepped Image

If you plan on creating a sysprepped image predefined with configuration and a job, you need to create a registry value HKEY_LOCAL_MACHINE\SOFTWARE\Veeam\Veeam Endpoint Backup\SysprepMode (DWORD) = 1. This registry value is used to regenerate the job ID when Veeam Agent for Microsoft Windows starts for the first time on the new computer.

Setting it up in Sysprep mode will retain the license and the current job configuration of the agent itself.
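As a sketch, the value described above can be captured in a .reg file (or set with reg add) before running sysprep:

```reg
Windows Registry Editor Version 5.00

; Tells Veeam Agent to regenerate the job ID on first start
[HKEY_LOCAL_MACHINE\SOFTWARE\Veeam\Veeam Endpoint Backup]
"SysprepMode"=dword:00000001
```

Import it with `reg import veeam-sysprep.reg` (the file name is just an example) as part of your image preparation.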

Creating custom Configuration files

It is also possible to create a custom XML configuration file which can contain all the settings and backup jobs of the agent itself. This configuration file can also be imported into other agents. It is important to note that the configuration file does not contain any license information, so that has to be imported separately.

After you have configured the agent with the required configuration on a base VM, you can export the configuration file using:

.\Veeam.Agent.Configurator.exe -export

And just using the -import parameter will import the configuration. Now the question is how to generalize this into an Azure VM. The simplest way is to export the agent installer to an Azure storage blob, or have it as part of a sysprepped image, and then place the license and the configuration file in an Azure storage account as well. If you also create a SAS token with access to each separate blob, you have granular access to each blob in Azure. A SAS token for each blob can be created using:

Get-AzureRmStorageAccount -Name -ResourceGroup | New-AzureStorageBlobSASToken -Container "script" -Blob "blobname" -Permission rwd

When this is done, we can download each file using a web request from PowerShell, so we can reference it in a script.

Here is an example script where each file is downloaded with its unique SAS token, without any specific configuration:

# Store variables

$folderName = "veeam"
$dest = "C:\WindowsAzure\$folderName"
$veeamagent = 'VeeamAgentWindows_2.0.0.700.exe'
$veeamlic = 'veeam_agent_windows_trial_0_0.lic'

mkdir $dest
cd $dest

# Downloads Veeam agent and license file and Configuration file

Invoke-WebRequest -outfile $dest\$veeamagent

Invoke-WebRequest -OutFile $dest\$veeamlic

# Installs agent silent
.\VeeamAgentWindows_2.0.0.700.exe /silent /accepteula

# Add sleep before changing directory
Start-Sleep -Seconds 60
$path = 'C:\Program Files\Veeam\Endpoint Backup'
cd $path

# Adds license and changes the Server edition

.\Veeam.Agent.Configurator.exe -license /f:$dest\$veeamlic /s


Adding it to an Azure ARM template with the script extension
The final piece is to add this script to an ARM template to do automatic deployment of virtual machines in Azure with the Veeam agent installed, with a license and a prepared configuration job.

The easiest way to do this is to use the VM custom script extension in Azure to run the script directly, either as part of the deployment or by adding the extension to a pre-existing VM running in Azure. Create the script above as a PowerShell script, which will then be run as part of the ARM template deployment.

It is important here that the Veeam agent is placed in the storage account that is part of the ARM template variable.

{
  "name": "MyCustomScriptExtension",
  "type": "extensions",
  "apiVersion": "2016-03-30",
  "location": "[resourceGroup().location]",
  "dependsOn": [
    "[concat('Microsoft.Compute/virtualMachines/myVM', copyindex())]"
  ],
  "properties": {
    "publisher": "Microsoft.Compute",
    "type": "CustomScriptExtension",
    "typeHandlerVersion": "1.7",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "fileUris": [
        "[concat('https://', variables('storageName'), …)]"
      ],
      "commandToExecute": "powershell.exe -ExecutionPolicy Unrestricted -File start.ps1"
    }
  }
}

Or we can use PowerShell to run the script against a currently running VM:

Set-AzureRmVMCustomScriptExtension -ResourceGroupName “example” -Location “example” -VMName “example” -Name “Veeam” -TypeHandlerVersion “1.1” -StorageAccountName “Contoso” -StorageAccountKey <StorageKeyforStorageAccount> -FileName “veeam.ps1” -ContainerName “Scripts”



Azure Stack– Secure by design

Previously I have blogged about the underlying architecture and features which are going to be part of Azure Stack → Microsoft also recently announced the launch of Azure Stack →

Responsibility model in Cloud

Now I want to focus on one aspect that was not included in the previous blog post, and that has also not been highlighted on Microsoft’s blog: security in the platform.

In a public cloud scenario there is a distinct line between what is the cloud vendor’s responsibility and what is the customer’s responsibility. The area of responsibility changes when a customer goes from IaaS (Infrastructure as a Service) to a PaaS or SaaS model; in the shift from IaaS to PaaS, more responsibility moves to the cloud provider. An example would be going from a SQL Server running inside a virtual machine to an Azure SQL Database, where we as a customer have no control of the virtual instances that deliver the service.

Shared responsibility model between customer and cloud provider

In Azure there are numerous security mechanisms in place to ensure that data is safeguarded, from the physical layer up to the different services running on top. As an example from a customer perspective: in public Azure a customer does not have access to the hypervisor layer, as we might be used to in a regular virtualization environment. We as a partner have the same limitations, so when managing customers we have to take them into account. This means we have to do management in a different manner.

Security on the platform layer

One of the design principles that Microsoft did with Azure Stack was that it should be a self-contained platform and be consistent with public Azure, which meant that management needed to have the same mechanisms in place.

With Azure Stack, from a management perspective we only have access to the admin portal, where we have no visibility into customer workloads. We can only use the admin portal to create tenant subscriptions, do Azure Stack infrastructure management, and get health status about the platform and audit information about administrator activities.

One of the security design principles that Microsoft used for Azure and Azure Stack is something called “assume breach”: a philosophy where we already assume that the system is compromised or will be compromised. The questions then become how the platform can detect a breach, and how to limit the effect of an attack. Microsoft has therefore put numerous security mechanisms in place in Azure Stack, such as:

* Constrained administration
  * Least-privileged accounts – the platform itself has a set of service accounts for the different services, running with least privilege.
  * Administration of the platform can only happen via the admin portal or admin API.
* Locked-down infrastructure
  * Application whitelisting – only code that is digitally signed by Microsoft or by the Azure Stack OEM will run on the system; any other non-signed or third-party executable will not run.
  * Least-privileged communication – internal components in Azure Stack can only talk to the components they are intended to.
  * Network ACLs – everything is blocked by default using firewall rules.
  * Sealed hosts – no direct access to the underlying hosts.
* Lifecycle management – Microsoft, together with the OEM vendors, will provide full lifecycle management using the lifecycle host to do uninterrupted system patching of firmware, drivers, OS patches and so on.

The second security design principle that Microsoft used is “hardened by default”, which means that the underlying operating system has been fine-tuned for security:

* Data-at-rest encryption – all storage is encrypted on disk using BitLocker, unlike in Azure where you need to enable this at the tenant level. Azure Stack still provides the same level of data redundancy using three-way copies of data.
* Strong authentication between infrastructure components.
* Security OS baselines – using Security Compliance Manager to apply predefined security templates to the underlying operating system.
* Disabled legacy protocols – old protocols such as SMB 1 are disabled in the underlying operating system, and legacy authentication protocols such as NTLMv1, MS-CHAPv2, Digest and CredSSP cannot be used.

* Windows Server 2016 security features
  * Credential Guard – uses virtualization-based security to isolate secrets so that only privileged system software can access them.
  * Code Integrity – a feature used with Credential Guard; only code that is verified by Code Integrity, usually through a digital signature from a trusted signer, is allowed to run. This gives full control over allowed code in both kernel and user mode.
  * Anti-malware – Windows Defender runs on the virtual machines that make up the platform and on the host operating system.
  * Server Core is used to reduce the attack surface and restrict the use of certain features.

Security at tenant layer

Now that we have looked at the security mechanisms operating at the platform layer, which are invisible to the tenants running on Azure Stack, let us take a closer look at the security features we can use as a tenant in Azure Stack.

* Azure Resource Manager – ARM has an RBAC model which is used to determine what resources a user has access to, at the subscription, resource group and object level. Access in ARM can be given at the subscription level using the different built-in roles or a custom role. Users inside each tenant can also be given access to a certain resource group, which might contain one or multiple objects such as virtual machines. They might, for instance, only be given access to restart the virtual machines inside a certain resource group, or we can create a custom role with specific permissions to certain objects.

Overview of the role based access control in Azure Stack and different levels of access

* Azure Resource Policies – can be used to enhance regular ARM access rules, such as allowing a user to only provision virtual machines of a certain instance type, or to enforce tags on objects.

* Network Security Groups – allow five-tuple firewall rules which can be defined either per network card or per subnet. This means that we can define firewall rules regardless of which guest OS is running, and the rules apply before the traffic can leave the virtual NIC.

* Virtualized networking layer – Azure Stack uses a software-defined networking solution managed by the underlying network controller. It uses a tunneling protocol called VXLAN to isolate each tenant into its own virtual network. With VXLAN, the network is not as open to traditional layer 2 attacks as it would be using regular VLANs.

* TLS/SSL – uses symmetric cryptography based on a shared secret to encrypt communications as they travel over the network. This is enabled by default on all platform services and APIs available in Azure Stack.

* IPsec – an industry-standard set of protocols used to provide authentication, integrity and confidentiality of data at the IP packet level as it is transferred across the network. This is used when setting up a site-to-site VPN connection with a gateway in Azure Stack.

* Azure Key Vault – helps you easily manage and maintain control of the encryption keys used by cloud apps and services. Key Vault can, for instance, be used to store the private keys of digital certificates used for App Service or virtual machines.
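As a sketch of the Azure Resource Policies mentioned above, a policy definition that restricts which VM instance types a user can provision could look like this (the allowed sizes are just examples):

```json
{
  "if": {
    "allOf": [
      {
        "field": "type",
        "equals": "Microsoft.Compute/virtualMachines"
      },
      {
        "not": {
          "field": "Microsoft.Compute/virtualMachines/sku.name",
          "in": [ "Standard_A1", "Standard_A2" ]
        }
      }
    ]
  },
  "then": {
    "effect": "deny"
  }
}
```

Any deployment of a virtual machine whose size is not in the allowed list would then be denied, regardless of what RBAC permissions the user otherwise has.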

What do we still need to think about?

Even if Microsoft ships a lot of security enhancements and features in Azure Stack which are enabled by design, there are still a lot of considerations when moving workloads to Azure Stack.

* Azure Stack does not provide a solution for management of virtual machines – we still need to do patching, updates, monitoring and management of in-guest applications and services on virtual machines.

* Azure Stack does not provide a solution for backup of data and virtual machines – we still need to use some form of in-guest backup solution to maintain copies of our data.

* Azure Stack does not provide an anti-malware solution for guest VMs – we still need some form of malware protection inside the virtual machines.

* Azure Stack does not have Azure Security Center, so if you open up the virtual firewall to your virtual machines you will not get notified.