Getting started with XenApp Essentials

XenApp Essentials is the new Azure-only offering that provides an easy way to deliver desktops as a service to end users, and it is now available in the Azure Marketplace. The offering is now in GA, and this blog post is an introduction to the service and how to configure it properly for a first deployment. XenApp Essentials is a Citrix Cloud offering which is exclusive to Azure; it deploys task workers which are made available using NetScaler Gateway as a service and requires no other access configuration (StoreFront and NetScaler are managed by Citrix Cloud). There are a couple of things you need to have in place before you can deploy XenApp Essentials as it stands today.

* An existing Azure subscription
* An existing Azure AD tenant and a user in that directory with access to provision resources within the Azure subscription
* An existing AD domain controller VM within that Azure subscription

Just as a quick note: I am using a pay-as-you-go subscription for XenApp Essentials, and my Azure AD user has owner rights on that subscription.

Note that this needs to be an Azure AD user; we cannot add a regular Microsoft account, such as an @outlook.com or @hotmail.com address, to XenApp Essentials.

Also, after you have configured the domain controller, you need to set it as the DNS server for the virtual network it resides in, so that DNS queries resolve properly while you configure the setup. In my case the IP address of my domain controller is 10.1.0.4; this might be different for you depending on the IP range of your virtual network.
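
If you prefer to script this instead of clicking through the portal, a minimal sketch with the AzureRM PowerShell module could look like the following; the virtual network and resource group names are placeholders, and I am assuming the DhcpOptions object can be built and assigned this way:

$vnet = Get-AzureRmVirtualNetwork -Name "xa-vnet" -ResourceGroupName "xa-rg"

# Point the virtual network at the domain controller (10.1.0.4 in my case)
$vnet.DhcpOptions = New-Object Microsoft.Azure.Commands.Network.Models.PSDhcpOptions
$vnet.DhcpOptions.DnsServers = New-Object 'System.Collections.Generic.List[string]'
$vnet.DhcpOptions.DnsServers.Add("10.1.0.4")

# Commit the change; running VMs need to renew their DHCP lease (or be restarted) to pick it up
Set-AzureRmVirtualNetwork -VirtualNetwork $vnet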

When this is done, we can go into the Azure Marketplace and order XenApp Essentials.

Here you define the different options you need, such as the resource group, the location of the VMs that will be running, and the link to your Citrix Cloud account if you already have one; if not, you will be prompted to create one.
(I already have one, therefore I get this error message.)

When you scroll further down you will also be asked to enter the number of users you want for XenApp Essentials; this includes RDS CALs and the XenApp licensing on a monthly basis.

Note that you have an option to define additional data transfer, since this offering uses NetScaler Gateway as a service running through the cloud connector VMs, and the bandwidth is therefore billed by Microsoft.

Now that we are done with the initial setup and configuration, we can continue with the process. Since I already have a Citrix Cloud subscription, I need to do it the other way around, from within the Citrix Cloud portal.

Now you need to authenticate with the Azure AD user to allow access to the subscription.

Next, go into Catalogs and set up a new app collection (during the tech preview only domain-joined scenarios were available).

When you define the Azure options, remember to place the resources in the same virtual network as the domain controller so that communication works properly.

After you are done with all the configuration, you are good to go with the deployment. You can then follow the deployment progress from within the Citrix Cloud admin portal.

After the deployment we have two virtual machines that can be used for XenApp connections; these two servers host the cloud connector component and will also be used for NetScaler Gateway as a Service.

The deployment will also create a new resource group in which it provisions the task workers. Note that the "Creating XenApp Server VMs" step in the App Collection takes a long time, so be patient.

Now that the deployment is done, we need to publish the applications to the users.

After we are done with the publishing, we can access the setup using the Citrix Cloud StoreFront URL, which is shown within the App Collection.

Citrix and Azure Stack – Current status

After a lot of discussions with other colleagues in the end-user computing space (yes, you know who you are!), there have been a lot of questions around Citrix and Azure Stack and what we can actually deliver there today. Therefore I wanted to write this blog post to explain the current limitations and what we can actually deliver when it comes to Citrix on Azure Stack as it stands now, before general availability. One thing to note, however, is that the restrictions here are mostly on the Microsoft side of things, given where the platform is now.

So I have described most of the Azure Stack platform and architecture in my previous blogpost found here –>  http://msandbu.org/what-is-azure-stack-and-want-is-the-architecture/

One thing I want to point out is that since Azure Stack aims to mimic public Azure, it also has the same limitations as Azure, especially when it comes to networking capabilities, but I'll get back to that later. The best way to describe how Citrix can be used with Azure Stack is to describe how it can integrate with Azure today and take it from there.

Citrix has multiple services which can be provisioned in Azure today using the Azure Marketplace: ShareFile, Unidesk, NetScaler, the XenApp trial, and the Essentials services.

Networking and marketplace challenges:
None of these marketplace offerings are available today in the Azure Stack marketplace, even though Microsoft has now enabled syndication, which allows us to download "pre-approved" services into Azure Stack and publish them to tenants. As of now there is a quite limited number of items available in the syndicated marketplace, so Citrix needs to get the process going, since that is the only way to get NetScaler in there properly given that it runs a custom firmware. We can of course download a NetScaler instance provisioned in Azure and import it into Azure Stack, but that is a quite cumbersome process. The second part is that even if we managed to get NetScaler up and running in Azure Stack, we would still have the same limitations when setting up an HA pair: we would need an Azure load balancer in front, since GARP does not work in Azure Stack. GSLB would not be supported either, since it is not supported in public Azure (at least before the multi-NIC / multi-IP support which came last week).

Of course, NetScaler support and access to the environment could still be delivered using NetScaler Gateway as a Service.

Templates:
Also, the XenApp 7.13 trial template which is available in the marketplace might use a newer ARM API version which is not available in Azure Stack as of now (the Azure Stack API version is api-version=2015-01-01), and it also uses Azure Automation for some pieces, which is not included in Azure Stack. This means that even if Citrix published the XenApp 7.13 trial through the marketplace syndication option, we might not be able to use it properly.
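
If you want to check which API versions your Azure Stack ARM endpoint actually accepts before trying a template, a quick sketch with the AzureRM module (with your session logged in against the Azure Stack environment rather than public Azure) is:

# List the resource types and API versions exposed by the compute resource provider
(Get-AzureRmResourceProvider -ProviderNamespace Microsoft.Compute).ResourceTypes |
    Select-Object ResourceTypeName, ApiVersions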

However we can build our own service template that we can publish in the marketplace.

Hypervisor Integration:
Citrix has been great at adding support for Azure Resource Manager when it comes to MCS resource provisioning, with support for premium disks and the different virtual machine instance types. As of now, however, Citrix has a hard-coded integration with public Azure using a subscription ID, so it cannot be fully integrated with Azure Stack. This means we would not have the hypervisor integration and therefore cannot do single-image management properly using Citrix Studio, as we can with public Azure.

It might be that Citrix needs to create a custom resource provider that links into the hypervisor directly to be able to provision resources properly.

GPU:
And of course the final piece is if you are using Citrix with GPU. Public Azure launched the N-series at the end of last year, which gives public Azure GPU pass-through capabilities using the Discrete Device Assignment (DDA) feature in Windows Server 2016. These N-series virtual machine instances will not be available on Azure Stack (it only has A, D, and Dv2 series instances), which means you will not be able to deliver GPU support for Citrix servers running on Azure Stack.

We could of course still use another part of the infrastructure to deliver the GPU-based desktops, such as Hyper-V/XenServer/VMware; the only issue is that we cannot directly bridge the gap between the tenant workloads running on Azure Stack and, for instance, a vSphere cluster, since Azure Stack does not allow us to stretch the VXLAN traffic to another part of the infrastructure.

Summary
We are still some way from Azure Stack GA from Microsoft, so it will be interesting to see what strategy Citrix has for adding capabilities against Azure Stack. There are some limitations you should be aware of as of now, but these might change!

What is Azure Stack and what is the architecture?

Microsoft has always been heavily committed to the datacenter and has a large footprint with Hyper-V & System Center. If we dial the clock back five years, Microsoft released Azure Pack, which was their first big attempt at a complete integrated private cloud offering and which was meant as a private offering of Microsoft Azure. As time progressed Microsoft also introduced the CPS (Cloud Platform Systems) together with Dell → http://www.dell.com/en-us/work/learn/microsoft-cloud-platform-system-powered-by-dell

The negative aspect of Azure Pack and CPS was that it was too integrated with other products such as System Center, and was still restricted to the traditional three-tier architecture with Compute, Storage, and Networking as separate parts of the infrastructure. This coincided with the emergence of a lot of the other hyper converged infrastructure platforms as well.

Microsoft is now aiming for Azure Stack to be the true next-generation enterprise private cloud platform, and has been pushing development on the platform for almost three years now, since the announcement at Microsoft Ignite in 2015; it will hopefully be GA mid-CY17.

So what features are included in Azure Stack so far and what do we know that is coming to the platform during the course of the year?

Features:
Virtual Machines
Storage Accounts
Virtual Networks
Load Balancer
Network Security Groups
DNS Zones
Azure Functions
Web Applications
SQL
Marketplace with syndication option
KeyVault
VPN
ARM functionality
Azure Pack Connector

BlockChain templates*
CloudFoundry templates*
Mesos templates*
Service Fabric*
Azure Container Service*

* Post GA

Much of the core concept behind Azure Stack is to have a consistent experience between public Azure and Azure Stack; therefore all features and services will be identical to their counterparts in public Azure. If a feature is added to Azure Stack, it will have the same "look and feel" as it has in public Azure. From a developer standpoint this translates into smaller changes if you have applications or ARM templates in use for public Azure today and want to use them on Azure Stack. The only thing limiting this for now is the support for newer ARM API versions on Azure Stack.

Moving on, the purpose of this post is to explore the different building blocks that make up Azure Stack, from the virtualization layer up to the different platform services, to explain how it all fits together.

Lifecycle management

Azure Stack will come as a bundled platform from an OEM vendor, based on a set of certified servers from that vendor. Azure Stack cannot be installed on any type of infrastructure. The reason behind this is that Microsoft wants to take total responsibility for the lifecycle management of the platform, as well as ensuring optimal performance. So if one of the OEM vendors releases a firmware update, BIOS update or any other update to the hardware, Microsoft wants to ensure that the upgrade process goes as smoothly as possible and that the patch/firmware has been pre-validated in testing. In order to do this, Microsoft needs to set certain limitations on the hardware vendors to ensure it can maintain control of the hardware.

Azure Stack is split into different infrastructure roles, each with its dedicated area of responsibility, such as networking, storage and compute.

Source: https://docs.microsoft.com/en-us/azure/azure-stack/azure-stack-architecture

From a tenant perspective, you interact with the platform using the different APIs that are available from Azure Resource Manager (ARM). ARM, which is exposed as REST APIs, can be triggered either from the web portal or from CLI tools like Azure CLI, PowerShell or the SDKs. Depending on what the end user does to trigger a request, it will be forwarded to the broker and then on to the responsible resource provider. The resource provider might be the networking provider if the tenant wants to create a virtual network, or the compute provider if the tenant wants to provision a virtual machine.

The overall architecture of Azure Stack is split into Scale Units (a set of nodes which makes up a Windows failover cluster and is a fault domain). An Azure Stack stamp consists of one or more Scale Units, and one or more Azure Stack stamps can be added to a region. At GA we are by default limited to 12 nodes, consisting of 3 Scale Units with 4 nodes in each.

Core philosophy – Software Defined Everything
At its core, Azure Stack is a hyper-converged platform running Windows Server 2016 from one of the four OEMs (Dell, Cisco, HP or Lenovo). The point of a hyper-converged setup is that you have servers with local disks attached which are then pooled together into a distributed file system. This is not unique to Microsoft; there are many vendors in this space already, such as Nutanix, VMware and SimpliVity, but all have different approaches to how they store and access their data. This hyper-converged setup also comes with other features like auto-tiering and deduplication, and having these capabilities purely in software makes this a software-defined architecture.

Source: https://msdnshared.blob.core.windows.net/media/MSDNBlogsFS/prod.evol.blogs.msdn.com/CommunityServer.Blogs.Components.WeblogFiles/00/00/00/73/13/0363.hyper-converged.JPG

It should be noted that since this is a hyper-converged setup, compute will always scale together with the storage attached to it, since that is how the current Storage Spaces Direct setup works. Another thing to be clear about is that Azure Stack at GA is limited to 12 nodes in a single region, as mentioned above; there is more content on that here –> https://azure.microsoft.com/mediahandler/files/resourcefiles/ebb2fd25-06ec-476b-a29a-bca40f448cf6/Hybrid_application_innovation_with_Azure_and_Azure_Stack.pdf

Storage Spaces Direct
The bare-metal servers run Windows Server 2016 with Hyper-V as the underlying virtualization platform. The same servers also run a feature called Storage Spaces Direct (S2D), which is Microsoft's software-defined storage feature. S2D allows the servers to pool their internal storage to provide a highly available virtual storage layer as the base storage for the virtualization layer.

S2D is then used to create virtual volumes with a defined resiliency type (parity, mirror, two-way mirror) which host the CSV shares, and a Windows cluster role is used to maintain quorum among the nodes.
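
Outside of Azure Stack (where all of this is pre-configured for you), a plain Windows Server 2016 sketch of that flow could look roughly like this; the cluster name, volume name, resiliency setting and size are just examples:

# Enable Storage Spaces Direct on an existing failover cluster
Enable-ClusterStorageSpacesDirect -CimSession "S2DCluster"

# Carve out a mirrored ReFS volume which becomes a Cluster Shared Volume
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "CSV01" `
    -FileSystem CSVFS_ReFS -ResiliencySettingName Mirror -Size 2TB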

S2D can use a combination of regular HDDs and SSDs (it can also be all-flash) to provide capacity and caching tiers which are automatically balanced, so hot data is placed on the fast tier and cold data on the capacity tier. When a virtual machine is created and its storage is placed on the CSV share, the virtual hard drive of the VM is chopped into interleaves of blocks, by default 256 KB, which are then scattered across the different disks on the different servers depending on the resiliency level.

Overview of Storage Spaces Direct and Hyper-V in a Hyper Converged setup

S2D uses many of the features in the SMB 3 file protocol to provide highly redundant paths to the storage among the nodes. It also utilizes another part of the SMB 3 protocol, SMB Direct.

SMB Direct is Microsoft's implementation of remote direct memory access (RDMA), which gives direct memory access from the memory of one computer into that of another without involving either operating system. This provides low-latency, high-throughput connections between the servers in the platform without putting a lot of strain on the CPU, since it essentially bypasses the operating system when moving data. It is important to note, however, that S2D does not provide data locality: a virtual machine running on any node can request data blocks from all disks across the entire cluster, which puts a lot of strain on the network. That is why RDMA is an important aspect, but I'll get back to that.

Storage Spaces Direct by default uses the ReFS file system, which has some enhancements compared to NTFS. It works proactively to do error correction: in addition to validating data before reads and writes, ReFS introduces a data integrity scanner, known as a scrubber, which periodically scans the volume, identifying latent corruption and proactively triggering a repair of corrupt data.

ReFS also introduces a new block cloning API which accelerates copy operations, and sparse VDL allows ReFS to zero files rapidly. You can read more about the mechanisms behind it here –> https://technet.microsoft.com/en-us/windows-server-docs/storage/refs/block-cloning

SMB Direct with RDMA
To be able to use RDMA-based connections between the hosts, you need specific network equipment, both on the adapter side and in the leaf/spine switch configuration. There are different implementations of RDMA, but the most common ones are RoCE, iWARP and InfiniBand. As mentioned, RDMA effectively bypasses much of the operating system layer when transferring data between nodes, which has a negative effect on any QoS policies, since it is hard to implement OS-based QoS when we are bypassing the OS. This is where Data Center Bridging (DCB) comes in: it provides hardware-based bandwidth allocation to specific types of traffic and is there to ensure that the SMB traffic does not hog all the available bandwidth on the converged NIC team.
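
As a rough sketch (values are examples only, and the exact settings depend on your NICs and switches), the DCB side of a RoCE deployment is typically configured along these lines:

# Tag SMB Direct (port 445) traffic with priority 3
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
# Enable Priority Flow Control for that priority only
Enable-NetQosFlowControl -Priority 3
# Reserve a share of the bandwidth for the SMB traffic class
New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS
# Apply DCB/QoS on the physical adapters (adapter names are placeholders)
Enable-NetAdapterQos -Name "NIC1","NIC2"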

Azure Stack also uses a new virtual switch capability called SET (Switch Embedded Teaming), which combines NIC teaming and the Hyper-V virtual switch in the same logical entity in the operating system. This also provides high availability for the physical network layer, so if a NIC on a particular node stops working, traffic will still flow over the other NICs on the node.
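
Creating a SET switch yourself on Windows Server 2016 is essentially a one-liner; a sketch with placeholder adapter names:

# Team two physical NICs into a Switch Embedded Teaming vSwitch
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "NIC1","NIC2" `
    -EnableEmbeddedTeaming $true -AllowManagementOS $true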

So far we have covered the hyper-converged setup and the physical networking aspect; what about the virtual networking layer?

Network virtualization
In order to have a full cloud platform you need to abstract away the physical network as well and move toward network virtualization, to be able to fully automate the tenant configuration. In the early days of Azure Pack, Microsoft used a tunneling protocol called NVGRE. This protocol encapsulates IP packets within a GRE segment, which allowed us to move away from the restrictions of traditional layer-two networking, such as the limited VLAN space, and to have tenants with overlapping IP ranges.

The issue with NVGRE is that traffic is encapsulated using GRE, which in essence is a tunneling protocol developed by Cisco. The negative part about GRE is that it makes it difficult for firewalls to inspect the traffic inside the GRE packets. Microsoft therefore decided with Windows Server 2016 to focus on supporting VXLAN instead, which is now the default network virtualization protocol in Azure Stack. The positive part about VXLAN is that it is more widely used by other vendors, such as Cisco, Arista, VMware NSX and OpenStack. It also uses UDP instead of GRE as its tunneling protocol, which allows for lower overhead and makes packet inspection much easier.

VXLAN allows each tenant to have the same overlapping IP segment, for instance 192.168.0.0/16, and it associates all tenant traffic with a VNI, which is a unique identifier for that specific tenant, in pretty much the same manner as a VLAN, except that this is completely virtualized and does not involve the switches. The switches in a VXLAN setup only see the server IP addresses, not the tenant-specific IP addresses inside the VXLAN packets. Using the VNI we know which traffic belongs to a specific tenant, and that VNI is also used when the tenant's resources communicate across other nodes.

Overview of VXLAN in Azure Stack

The tunneling protocol is one part of the puzzle. The second part is adding NFV (network functions virtualization), which adds functionality to the virtualized network, and this is where the distributed firewall and the software load balancer come in.

Distributed Firewall
Showing how the distributed firewall is being controlled by the Network Controller

The distributed firewall is a virtualized network feature which runs on each of the Hyper-V vSwitches in an Azure Stack environment. The feature works regardless of the operating system inside the guest virtual machine and can be attached to a vNIC directly or to a virtual subnet. Unlike traditional firewalls, which act as a security layer between subnets, the distributed firewall acts as a security layer attached directly to a VM or to a subnet. In Azure Stack the distributed firewall feature is presented as network security groups in the platform. It allows for basic access-list configurations on IP, port and protocol (source and destination) and does not replace a full packet-inspection firewall.
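
From the tenant side this simply looks like a normal NSG; a sketch using the AzureRM cmdlets (all names below are placeholders) would be something like:

# Allow inbound HTTPS from the Internet to anything the NSG is attached to
$rule = New-AzureRmNetworkSecurityRuleConfig -Name "allow-https" -Access Allow `
    -Protocol Tcp -Direction Inbound -Priority 100 `
    -SourceAddressPrefix Internet -SourcePortRange "*" `
    -DestinationAddressPrefix "*" -DestinationPortRange 443

# Create the NSG; it can then be associated with a subnet or a NIC
New-AzureRmNetworkSecurityGroup -Name "web-nsg" -ResourceGroupName "tenant-rg" `
    -Location "local" -SecurityRules $rule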

Software load balancer
Software load balancer combined with Azure Stack

image

The software load balancer is also a feature which runs on the Hyper-V switch as a host agent service, and it is likewise managed centrally by the network controller, which acts as the central management point for the network. The load balancer works on layer four and is used to map a public IP and port to a backend pool on a specific port. It uses direct server return (DSR), which means that it only load-balances incoming traffic; the return traffic from the backend servers goes directly from the server back to the requesting IP address via the Hyper-V switch. This feature is presented in Azure Stack as the regular load balancer.
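
For the tenant, the surface of this is the familiar load balancer resource; a rough sketch with the AzureRM cmdlets, with placeholder names and an existing public IP, could be:

$pip = Get-AzureRmPublicIpAddress -Name "web-pip" -ResourceGroupName "tenant-rg"
$frontend = New-AzureRmLoadBalancerFrontendIpConfig -Name "web-frontend" -PublicIpAddress $pip
$pool = New-AzureRmLoadBalancerBackendAddressPoolConfig -Name "web-pool"
$rule = New-AzureRmLoadBalancerRuleConfig -Name "http" -FrontendIpConfiguration $frontend `
    -BackendAddressPool $pool -Protocol Tcp -FrontendPort 80 -BackendPort 80

# The backend pool is later associated with the NICs of the web servers
New-AzureRmLoadBalancer -Name "web-lb" -ResourceGroupName "tenant-rg" -Location "local" `
    -FrontendIpConfiguration $frontend -BackendAddressPool $pool -LoadBalancingRule $rule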

The Brains of the network – Network Controller
To ensure that the software load balancing rules are in place, that the distributed firewall policies are synced and maintained, and, with VXLAN in place, that all hosts have the IP tables they need so each node knows how to reach the different virtual machines on other hosts, there needs to be a centralized component which takes care of all of this, and that is the network controller.

In Azure Stack the network controller runs as a highly available set of three virtual machines which operate as a single cluster across different nodes. The network controller has two API interfaces. The northbound API accepts requests using REST, so if we for instance change a firewall rule or create a software load balancer in the Azure Stack UI, the northbound API receives that request. The network controller can also be integrated with System Center, but that is not part of Azure Stack.

Network Controller architecture – With Azure Stack

The southbound API then propagates the changes to the different virtual switches on the different hosts. The network controller is intended to be a centralized management component for both the physical and the virtual network, since it uses the Open vSwitch (OVSDB) standard, but the schema it uses is still lacking some key features needed to manage the physical network.

The network controller is also responsible for managing the VPN connections, advertising the BGP routes, and maintaining session state across the hosts.

This summarizes some of the capabilities in Azure Stack, how the different components interact, and what the underlying platform contains. There are of course more features which I have not elaborated on here, such as the storage options it presents, but you can read more about that here –> https://docs.microsoft.com/en-us/azure/azure-stack/azure-stack-architecture

There will be more information available on this subject as we get closer to GA from Microsoft and can go more in-depth on the technical architecture of a full platform from top to bottom.

Understanding & troubleshooting the ICA/HDX protocol

Over the last few weeks I've been involved in working on Goliath's guide to troubleshooting and understanding the Citrix ICA & HDX stack, which is available for download here –> http://bit.ly/2o5v9me

To be perfectly honest I did not contribute much; most of the work was done by the support team at Goliath, and I did a small part on the NetScaler and adaptive transport pieces of the guide.

For many, this can be a handy guide to understanding the bits and pieces involved in an ICA session, as well as the different transport modes and display protocols that are combined to deliver the end-user experience.

Security aspects when moving to the public cloud – IaaS

This is a follow-up to my earlier blog post –> http://msandbu.org/security-aspects-when-moving-to-the-public-cloud/ and the purpose of this post is to highlight the security aspects of IaaS services.
With IaaS we have the ability to provision a set of virtual machines and different supporting services around them. This post focuses on the services offered by Google, AWS and Microsoft Azure, but most of the topics will work as general guidelines for IaaS public cloud offerings. So let us start by looking at the common scenarios.

1: Control the Deployment & Automation tools
One of the cool features within Azure (Resource Manager), Amazon (CloudFormation) and Google (Deployment Manager) is that you have services that can automate deployment using either JSON or YAML templates, which can of course reduce deployment time compared to navigating around the UI and setting up resources manually.

Of course, this can have a negative effect if someone manages to reuse the same template to overwrite an existing configuration, or if an orchestration tool overwrites or removes a production environment running with any of the cloud vendors. All providers have ways to define policies for their deployment tools so that they can't replace or delete an existing deployment. Also combine this with resource locks so no one deletes something unintentionally.
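
As a sketch of what that looks like, taking Azure as the example (resource group, template and lock names are placeholders):

# Incremental mode only adds/updates what is in the template;
# Complete mode would delete resources that are missing from it
New-AzureRmResourceGroupDeployment -ResourceGroupName "prod-rg" `
    -TemplateFile .\deploy.json -Mode Incremental

# A delete lock on the production resource group as an extra safety net
New-AzureRmResourceLock -LockLevel CanNotDelete -LockName "prod-lock" `
    -ResourceGroupName "prod-rg"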

2: Management access
One of the properties of the cloud is of course self-service: being able to provision features and virtual machines using either CLI or UI. As part of making this easy to navigate and manage, many virtual machines are by DEFAULT provisioned with their management ports (SSH or RDP) open on the default ports.

Insert public IP range, port scan and dictionary attack here –>
Of course, what I've seen is that many, for ease of management, like to keep the easy way of managing virtual machines over RDP "in case something fails". So remember to always remove the default management access after you are done configuring, or restrict it using ACLs inside the provider. What I also see is that when a company sets up a cloud subscription, many people are added as full administrators with access everywhere. Unlike internal IT systems, the public cloud is available from everywhere, so if someone gets hold of one of those usernames, they can delete your entire infrastructure, access virtual machines, or reset the local administrator password of a virtual machine.

So always use MFA (multi-factor authentication) on your cloud subscription to avoid a catastrophic disaster if someone manages to get hold of an admin user.

3: Role based access control within cloud offering
Setting up MFA is one step toward closing the gap if someone outside the organization manages to get hold of a set of credentials. Defining role-based access control within the cloud is the next step. Just as you wouldn't want the helpdesk to have global admin rights on your on-premises infrastructure, you wouldn't want them to have even more access in the cloud. All cloud vendors have robust ways to define role-based access control within their offerings using their IAM features. For instance, you can create a custom role which only has access to view performance or status of virtual machines, or a custom role that can restart virtual machines but does not have access to delete an instance, remove a hard drive, or create a virtual machine that costs the company $10,000 a month.
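
To make that concrete with Azure as the example, here is a sketch of a custom "VM operator" style role built by cloning an existing role definition (the subscription ID is a placeholder):

# Start from an existing definition and strip it down to view + restart
$role = Get-AzureRmRoleDefinition "Virtual Machine Contributor"
$role.Id = $null
$role.Name = "VM Operator"
$role.Description = "Can view and restart virtual machines"
$role.Actions.Clear()
$role.Actions.Add("Microsoft.Compute/virtualMachines/read")
$role.Actions.Add("Microsoft.Compute/virtualMachines/restart/action")
$role.AssignableScopes.Clear()
$role.AssignableScopes.Add("/subscriptions/<subscription-id>")
New-AzureRmRoleDefinition -Role $role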

4: Think how to segregate the network
The cloud providers all use some form of network virtualization on top of the regular VLAN standard (since VLANs only scale to 4096), which acts as a generic tunneling protocol, but in most scenarios customers simply get a private IP range where all their servers reside, such as 10.0.0.0/24. This is pretty much the same as you might have had before on-premises, where you exposed services externally using NAT on the firewall. Cloud providers also have ACL features built into this network virtualization solution, acting on both layer 2 and layer 3, which means you can restrict traffic using 5-tuple rules between two hosts on the same network (where the policy is enforced on the vNIC) rather than inside the guest. This is different from on-premises, where in most cases we have an in-guest firewall and ACLs between different subnets. Don't treat the public cloud network any differently: if you are going to expose a service using a NAT or LB rule, you should have a DMZ subnet, which in turn has a set of ACLs defining how it is allowed to communicate with the internal network. It is also important to remember that since cloud vendors use a form of network virtualization, many of the layer-two features like GARP, RARP, VLAN and VMAC are not supported and will not work at all, which removes some of the risk of layer-two attacks.

5: Don’t treat the cloud virtual machines any differently
Regardless of whether you run a virtual machine in the cloud or on-premises, it still requires the same attention, which means you need mechanisms in place to handle patching, logging and monitoring, endpoint control and so on. So manage and control a VM in the cloud as you would on-premises. One thing to note here is that AWS, for instance, provides a managed service that handles patch management and similar tasks –> https://aws.amazon.com/managed-services/?nc2=h_m1

Many providers also have the option to encrypt virtual machines using a key, or in a hybrid setup with on-premises HSM modules. This reduces the risk if someone manages to get access to the environment and download the VM's virtual hard disk.
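
In Azure, for example, a rough sketch of enabling Azure Disk Encryption on an existing VM looks like the following; the Key Vault, VM and AAD application details are placeholders, and the AAD application/service principal must be registered beforehand:

$vault = Get-AzureRmKeyVault -VaultName "my-keyvault" -ResourceGroupName "my-rg"
# The vault must be enabled for disk encryption
Set-AzureRmKeyVaultAccessPolicy -VaultName "my-keyvault" -EnabledForDiskEncryption

# $aadClientId / $aadClientSecret belong to a pre-registered AAD application
Set-AzureRmVMDiskEncryptionExtension -ResourceGroupName "my-rg" -VMName "my-vm" `
    -AadClientID $aadClientId -AadClientSecret $aadClientSecret `
    -DiskEncryptionKeyVaultUrl $vault.VaultUri -DiskEncryptionKeyVaultId $vault.ResourceId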

6: Don’t forget the basic features
One of the main things I notice when customers start adopting cloud is that they aren't reading what's included in the service. For instance, when we set up a virtual machine in Microsoft Azure, what are we actually getting? A VM with a dedicated set of resources, a hard disk, and an IP network connection. The backend storage for a virtual machine is often a storage account where all storage blocks are replicated three times within the same datacenter. Great, so now we have redundancy, but what happens if the VM gets infected with ransomware? Back to start… So remember to have basic things like backup of virtual machines in place, just as you would on-premises.
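
Taking Azure as the example again, a sketch of turning on VM backup with a Recovery Services vault (the vault, policy and VM names are placeholders) could be:

# Point the session at an existing Recovery Services vault
$vault = Get-AzureRmRecoveryServicesVault -Name "my-vault"
Set-AzureRmRecoveryServicesVaultContext -Vault $vault

# Protect the VM with the default backup policy
$policy = Get-AzureRmRecoveryServicesBackupProtectionPolicy -Name "DefaultPolicy"
Enable-AzureRmRecoveryServicesBackupProtection -Policy $policy -Name "my-vm" -ResourceGroupName "my-rg"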

7: Understand where your responsibility lies
This is also an important part to understand: as a customer of a cloud provider, if we understand what is the cloud provider's responsibility and what is ours, it becomes much easier to understand what we actually need to take care of ourselves.

8: Read their documentation!
And if you are uncertain where to start, all providers have good documentation which describes their current best practices for deploying the different services.

Azure: https://docs.microsoft.com/en-us/azure/security/security-best-practices-and-patterns, IaaS Azure: https://docs.microsoft.com/en-us/azure/security/azure-security-iaas

Amazon Web Services: https://d0.awsstatic.com/whitepapers/Security/AWS_Security_Best_Practices.pdf

Google Cloud Computing: https://cloud.google.com/docs/enterprise/best-practices-for-enterprise-organizations

Defining Resource Policies in Azure Resource Manager

One of the cool things with Azure Resource Manager is that we can define central policies, which can be deployed either to resource groups or to entire subscriptions, and which can be used to enforce rules such as:

* We can only deploy to North and West Europe
* We can only deploy Windows Server 2012 Datacenter VM
* We can only deploy encrypted virtual machines
* We can only use storage accounts with LRS or GRS.

As of now the resource policies can affect the following resource types:

  • Microsoft.CDN/profiles/sku.name
  • Microsoft.Compute/virtualMachines/imageOffer
  • Microsoft.Compute/virtualMachines/imagePublisher
  • Microsoft.Compute/virtualMachines/sku.name
  • Microsoft.Compute/virtualMachines/imageSku
  • Microsoft.Compute/virtualMachines/imageVersion
  • Microsoft.SQL/servers/databases/edition
  • Microsoft.SQL/servers/databases/elasticPoolName
  • Microsoft.SQL/servers/databases/requestedServiceObjectiveId
  • Microsoft.SQL/servers/databases/requestedServiceObjectiveName
  • Microsoft.SQL/servers/elasticPools/dtu
  • Microsoft.SQL/servers/elasticPools/edition
  • Microsoft.SQL/servers/version
  • Microsoft.Storage/storageAccounts/accessTier
  • Microsoft.Storage/storageAccounts/enableBlobEncryption
  • Microsoft.Storage/storageAccounts/sku.name
  • Microsoft.Web/serverFarms/sku.name

So for instance, if we want to deploy a policy definition that defines which images are allowed to be deployed in a specific resource group, we can use something like this:

$policy = New-AzureRmPolicyDefinition -Name regionPolicyDefinition -Description "Policy to allow certain images in region group" -Policy '{
  "if": {
    "allOf": [
      {
        "field": "type",
        "equals": "Microsoft.Compute/virtualMachines"
      },
      {
        "not": {
          "allOf": [
            {
              "field": "Microsoft.Compute/virtualMachines/imagePublisher",
              "equals": "MicrosoftWindowsServer"
            },
            {
              "field": "Microsoft.Compute/virtualMachines/imageOffer",
              "equals": "WindowsServer"
            },
            {
              "field": "Microsoft.Compute/virtualMachines/imageSku",
              "equals": "2012-R2-Datacenter"
            }
          ]
        }
      }
    ]
  },
  "then": {
    "effect": "deny"
  }
}'

$resourcegroup = Get-AzureRmResourceGroup -Name EVRYPOC
$policy = Get-AzureRmPolicyDefinition -Name regionPolicyDefinition
New-AzureRmPolicyAssignment -Name "VirtualPolicyAssignment" -PolicyDefinition $policy -Scope $resourcegroup.ResourceId

With the new portal UI enhancement we can actually see the policy as well, if you access your Azure subscription here –> https://preview.portal.azure.com

So what happens if we try to deploy an image not listed in the policy to that specific resource group? You will now see a "Forbidden" deployment error in the portal.

Of course you can use this approach to restrict deployments, for instance to disallow deployment of SQL virtual machines or older specific SKUs from the marketplace. We can also combine this with resource group locks to disallow changes to the resource group.

New-AzureRmResourceLock -LockLevel CanNotDelete -LockName LockSite  -ResourceGroupName EVRYPOC

If you are looking at defining more role-based access control within Azure Resource Manager, I highly recommend reading the best-practices guide from Microsoft here –> https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-manager-subscription-governance

Announcements at Google Next 17 and price cut on IaaS

Over the last couple of days I've been watching the keynotes at #GoogleNext closely, where Google showed a lot of new enhancements to their platform. A couple of weeks back they also introduced a new relational database service, called Spanner, which is geo-synchronous as well.

Up until now Google has been one of the cheapest providers when it comes to regular IaaS, but now they have also announced committed use discounts, which will cut the IaaS cost even more. Let me give you an example.

Price cut on IaaS
A virtual machine running in Microsoft Azure (with Linux, which means no OS license), in this case a VM with 16 vCPUs and 56 GB of RAM, costs $1.12 per hour.
A virtual machine running in Google Compute Engine (with Linux, which means no OS license), in this case a VM with 16 vCPUs and 60 GB of RAM, costs $0.76 per hour.

How much is this per year?
About $9,811 for Microsoft Azure ($1.12 × 24 × 365)
About $6,657 for Google Compute Engine ($0.76 × 24 × 365)

If this is a static workload that will not change over the course of three years, for instance, we can choose the new Committed Use Discounts, which allow us to commit to a set of resources for up to three years for an even bigger discount, actually up to 57%, so the same virtual machine instance can come down to about $2,996 per year. Remember that committed use discounts apply to all predefined n1-* machine types and custom machine types. They do not apply to the small machine types, f1-micro and g1-small, and the feature is currently in beta and not yet reflected in the Google Cloud pricing calculator.

GPU
Google also announced GPUs for Google Compute Engine, which for now are supported with NVIDIA Tesla K80 cards, and soon you will be able to select the AMD FirePro S9300 x2 and the NVIDIA Tesla P100.
One important difference here is that we can attach GPUs to any machine type, and are not limited to a certain instance type like we are in Amazon or Azure. So I can't wait until they come with an M60, for instance, to provide cheap GPU instances for Citrix, as an example.

Google also announced new regions in Montreal, California and the Netherlands. Like the other regions, each of these will have three zones within the same country's borders.

Google also announced that they have hosts running the latest Intel Skylake chips, which is something the other vendors do not have up and running yet.

You can actually see all the new announcements that were made here –> https://blog.google/topics/google-cloud/100-announcements-google-cloud-next-17/

This shows that Google is pretty serious about its investment in its cloud platform, and during the course of the year I guess we will be seeing a lot of new features and products coming to this stack.

Security aspects when moving to the public cloud

This is based on a session I had at Hackcon a couple of weeks back. More and more businesses are moving to or adopting public cloud in one way or another, whether it is SaaS, PaaS or traditional IaaS, to move their existing datacenter to the cloud. If we look at the cloud market today, five vendors make up over 50% of the total market for public cloud (Amazon, Google, Microsoft, Salesforce and IBM), so these players dictate most of the feature set and pricing for cloud services.

Gartner also predicts that IT spending on public cloud is going to grow from the 208 billion dollars it was in 2016 to as much as 5x that amount by 2020. And why shouldn't it? There can be many advantages in moving to the public cloud for a business, if they do it right.

* Cost Control
I'm not going to say that moving to the cloud will save you money, but it will give you a better overview of how much money you are actually spending on IT infrastructure, and it will also make it easier to control usage, since in most cases you only pay for what you use or you have a monthly subscription.

* Flexibility
The ability to take advantage of new features which are constantly being developed and added, allowing customers quicker time-to-market with new services and features.

* Scalability
The ability to scale resources up and down based on different metrics or schedules; for instance, for e-commerce and Black Friday this capability is a key part of the value of cloud.

* Security
Of course, when I talk about security here (before I go more in-depth), I'm mostly talking about the physical security aspect, since I know from real-life experience that some customers have had their small business server standing under a desk, or a small rack in a room the cleaning staff also has access to. Moving those types of customers to public cloud will give them a lot of benefit from the heavily invested cloud datacenters the different vendors have.

So can we trust the cloud providers with our data? There are new reports about data leakage and hacking every day, so can we trust them? Of course, many vendors go to great lengths to ensure that they can be trusted with your data and your customers' data.

All of the providers strive to be certified against many different standards, to convince customers that they have routines in place for how they handle physical security, access control and customer data. There are also country-specific standards and vertical-specific certifications, within healthcare for instance, to ensure that the services they provide are compliant with what you can expect of such services.

Most cloud providers are certified against the common standards ISO 27001 and ISO 27017, where the latter is specifically aimed at cloud services. All vendors are good at exposing this information to customers: which standards they comply with, more specifics on how they handle data, which sub-vendors they use, and where the separation of responsibility lies between the customer and the provider. For instance, you can view Microsoft's, Amazon's and Google's compliance information here –>

https://azure.microsoft.com/en-us/support/trust-center/
https://cloud.google.com/security/
https://aws.amazon.com/compliance/soc-faqs/

Also, within the EU there are a lot of different compliance requirements, like EU Privacy Shield and Safe Harbor, but in May 2018 a new regulation, the GDPR, will come into force for companies which deliver services within EMEA, and it is going to separate the providers even more in terms of market share. I'm not going to describe it in depth, but GDPR is all about giving more power back to consumers: the right to be "deleted", and insight into how a provider handles and uses their data. It also gives consumers the right to have their data exported so it can be moved to a new provider. It also has requirements in terms of security: if there is, for instance, a data leakage incident at a cloud provider, sensitive data or information is exposed, and it is determined that the provider did not follow the guidelines, the provider can be fined up to 20 million euros (at the maximum).

Many companies are of course not yet adjusted or able to comply with the GDPR. The largest cloud providers are already ramping up to comply with the guidelines, while smaller companies might have more difficulty complying, which might leave them in a position where they cannot sell their services to customers within EMEA.

This will of course create another barrier between the different providers.

1: Risk – Manage the expectations and educate them on what they sign up for

One of the most important security risks I see when talking to customers is not understanding what they actually signed up for when it comes to cloud services. Many have configured services on their own and been infected with ransomware, only to discover that the cloud providers don't actually do automatic backup of virtual machines. And what we saw with the S3 outage is that many businesses went down, either because they didn't design for it (lacking the knowledge) or because they had accepted the risk that a service might go down. All of the cloud providers have options to deliver geo-redundant solutions IF you design for it.

What I also see is that many don't understand what's included in the service level agreement for individual services. It might be that many companies are using the services in ways that are not intended and therefore do not fulfill the requirements to get an SLA on that service. Then again, the SLA will not help you in an outage; it only specifies the compensation in credits you will be refunded if something were to happen. You should also be aware that SLA compensation is in most cases not automatic, so you will need to follow up with the provider to get credits back.

2: Find the balance between risk and productivity

Of course, what many IT departments are worried about with cloud is losing control. What I've seen in many customer cases is that people start using cloud services on their own because the IT department doesn't want to lose control of where the data is stored. With the rise of SaaS, using these types of services to empower collaboration between internal employees and external users has become simpler and simpler, so we as IT people have to empower the users while still doing a risk assessment and finding the balance between risk and productivity.

3: Know your cloud vendor

And of course, no two cloud providers are identical, even though many of them provide services which might be similar in terms of functionality. All of them have different ways of providing redundancy, doing access control, and storing and protecting data. Another thing is that many providers develop so fast that a service might be in GA while the documentation is not up to date, and therefore you might not be aware of the full security picture of that particular service. It might also happen that a cloud vendor uses a service with an insecure API or a known exploit, but they are often pretty good at notifying customers if these types of incidents occur.

4: Things to think about IaaS – PaaS – SaaS

Stay tuned for part 2…

Setting up Marketplace Syndication in Azure Stack TP3

One of the cool features introduced in TP3 is the ability to do marketplace syndication between Azure Stack and the public Azure marketplace. This allows us to provide the different services and images from the public Azure marketplace through Azure Stack.

You first need to register your Azure Stack deployment with an active Azure subscription in order to set this up; this is because of the billing capabilities of the marketplace.

It is important to note that you cannot do this registration with a Microsoft account; it needs to be an Azure AD work account. The setup cannot be done using CSP or EA subscriptions either, and the account you register with needs to be a subscription owner.

To set it up you need to have the AzureRM tools installed on the PoC host computer, preferably the Hyper-V host.

Open PowerShell
Install-Module AzureRM

When it is installed, you need to run the script found here –> https://github.com/Azure/AzureStack-Tools/blob/master/Registration/RegisterWithAzure.ps1

RegisterWithAzure.ps1 -azureDirectory YourDirectory -azureSubscriptionId YourGUID -azureSubscriptionOwner YourAccountName

The Azure subscription ID can be found in Azure under the Subscriptions pane.
When the script is done running, you will see this if it ran successfully.

Then log in to the admin portal UI and go to Marketplace Management.

From there click on Add from Azure

This will list the "supported" marketplace items; note that the list is going to be short compared to what is available in Azure.

I'm guessing this is because Microsoft will require some approval from the different partners before they are allowed to be published as available through the marketplace syndication feature.

Citrix moving forward with Citrix Cloud and Essentials package

There is a lot of interest around Citrix these days with the upcoming releases of XenDesktop Essentials and XenApp Essentials, and also with all the investments in the latest 7.13 release. The purpose of this blog post is to summarize the differences between the different offerings Citrix now has moving forward, and of course touch upon the strengths and weaknesses of each deployment model.

Regular XenDesktop deployment
Traditionally, an on-premises XenApp & XenDesktop deployment has been the logical option, and in that case we, from an infrastructure point of view, need to maintain and manage the entire stack.

That means we need to build up delivery controllers and use a provisioning engine, which can be PVS or MCS, against a set of different hypervisors and cloud providers; that cloud provider might for instance be Azure or AWS. In that case we have a connection to a cloud provider which will be used to provision VDI or session hosts, nothing else. We also need to maintain StoreFront and NetScaler, which is the access point for external end users. With this approach we maintain all the control; we can still have desktop or app resources provisioned in Azure or AWS, so we can still deliver cloud resources, and we also maintain control of incoming updates. Citrix now ships about four releases a year, which means we would need to upgrade the infrastructure components ourselves.

Citrix Cloud XenApp and XenDesktop Service
With a Citrix Cloud deployment, the responsibility for some of the components above is moved to Citrix. The desktops and apps can still be hosted on-premises or in Azure or AWS, and we can still leverage the different provisioning engines as we could in a regular on-premises deployment of XenDesktop.

However, with Citrix Cloud, Citrix hosts the management plane and takes care of the delivery controllers, site database hosting, and monitoring with Director, for instance. You still use your existing Active Directory. And when it comes to StoreFront and NetScaler, you can still use your existing StoreFront and NetScaler, or you can use a cloud-hosted StoreFront which Citrix manages and utilize NetScaler Gateway as a Service (which I have a post about here –> http://msandbu.org/using-citrix-cloud-with-remote-access-to-azure-using-netscaler-gateway-services/).
The advantages of this approach are that licensing is handled automatically and the management and infrastructure components are handled by Citrix; you still get the same benefits as with a regular on-prem XenDesktop deployment, but you also get continuous upgrades as Citrix updates the cloud service, which happens about once every two weeks.

The different cloud subscriptions Citrix offers also include Smart Tools, where especially Smart Scale is quite useful for cloud-based deployments since it helps shut down unused resources (post about that here –> http://msandbu.org/citrix-smartscale-and-microsoft-azure/).

It is important to note that all Citrix Cloud deployments, including the Essentials editions, require a cloud connector which integrates the IaaS provider with Citrix Cloud.

XenDesktop Essentials
Citrix has been working closely with Microsoft to create a VDI offering on Azure, and this is where XenDesktop Essentials comes in: it will be the first of its kind to deliver Windows 10 VDI on Microsoft Azure. This service has a lot of similarities with Citrix Cloud, and it is only possible if the customer has Windows 10 Enterprise Current Branch for Business in per-user mode.

We still have the different options when it comes to StoreFront and NetScaler, with the choice between as-a-service delivered from Citrix or hosting them ourselves in Azure as regular virtual machines. We also get access to Citrix Studio, which is a bit limited compared to full Citrix Cloud. When it comes to the provisioning part, we are limited to MCS against one resource connection, which is Azure, and it will be desktops only. Citrix still has full management of the infrastructure pieces, like with Citrix Cloud; we only handle the Windows 10 VDI instances and optionally the NetScaler and StoreFront.

The pricing info on XenDesktop Essentials can be found here –> https://www.citrix.com/blogs/2017/03/01/xendesktop-essentials-1-faq-pricing-info/

XenApp Essentials
XenApp Essentials is aimed at being the replacement for Azure RemoteApp and also aims to provide the simple UI of Azure RemoteApp when it comes to provisioning resources. Unlike XenDesktop Essentials and Citrix Cloud, it does not provide us with Studio capabilities; it has its own built-in, simple management UI.

It also has a built-in provisioning engine locked to Azure for session-based application delivery, so we cannot use it elsewhere. It also provides a simple monitoring option which resembles Director. Since it aims to resemble Azure RemoteApp in its simplicity, it also comes with NetScaler Gateway as a Service, which means we don't need to configure and set up NetScalers, and it comes with a cloud-hosted StoreFront. The only things we actually need are an Azure subscription and an identity solution in place, which is either a virtual machine running Active Directory or maybe even Azure Active Directory Domain Services.

So hopefully this blog post clears up some of the confusion about the different Citrix offerings and the new products they are releasing over the next couple of months.