Monthly Archives: September 2016

Citrix and Azure: what options do we have? Exploring the new options

Now with Ignite over, there has been a lot of buzz about the partnership between Citrix and Microsoft, so I wanted to give an overview of the different technologies that have been announced, what options we now have in Azure combined with Citrix, and the advantages and disadvantages of each option, since Azure still has a lot of limitations we need to be aware of.

First off, I want to start with the following question: why choose Citrix at all over regular RDS, which works just fine in Azure?

Well, there are multiple advantages we need to be aware of:

  • Automatic provisioning against Azure using a master image
  • Protocol enhancements that allow for a better user experience across WAN links, which will often be the case when setting up anything in Azure anyway
  • Better handling of printers (an issue many fail to address: if our apps run in Azure and we need to print, how will our connection handle that print job?)
  • Multiple connection options (VPN, clientless access, ICA-proxy, HTML5)

These are some of the advantages, but of course cost is always a factor we need to take into account. RDS has gotten a lot of improvements in Windows Server 2016, especially for graphics users, but in some initial testing RDS used about 2x the bandwidth of Citrix for high-end graphics workloads, which means a higher bandwidth cost (even though bandwidth is not the largest cost factor in Azure).

I would love to get some feedback from people who have chosen to set up Citrix in Azure and why they chose to do so. So let's go ahead with the scenarios.

1: Running Citrix as regular virtual machine instances in Azure
This is the classical scenario, where we have deployed Citrix as regular virtual machines in Azure. There is a sample Citrix setup in the Azure Marketplace based on version 7.11, which now supports Windows Server 2016 as well. A typical deployment of Citrix in Azure also includes a NetScaler appliance to handle all connections between endpoints and the internal VDA agents.
Citrix also already has some whitepapers on how to deploy these solutions. With 7.11, Citrix also supports provisioning using Azure Resource Manager, which allows for an MCS-like setup of servers in Azure. And with the introduction of the N-series we can now deploy virtual machines with GPU passthrough (DDA in Hyper-V), where we only pay for each minute we use them.
The benefit of running Citrix as regular virtual machines in Azure is that we have full control over the management pieces, infrastructure components and so on, and with the Citrix integration with Azure we get easy provisioning and power control of the RDSH servers.

We can also use regular JSON templates for deployment, and combining them with Azure Automation opens up a whole load of different options. The only downside is that NetScaler is limited in Azure because of the network restrictions there. For larger environments you need to be aware of the following:

1: The default VPX setup in Azure is 2 vCPUs, with 1 dedicated vCPU for management and 1 for the packet engine. For larger environments you should have at least 4 vCPUs and a VPX 200/1000 license to handle the incoming packets.

2: Licenses! A regular virtual machine instance in Azure includes the Windows Server license and Server CALs as part of the subscription, but on the other hand you need RDS CALs, Citrix licenses and a NetScaler platform license as well (and also Universal licenses if you plan to use Smart Access or other advanced Gateway features on the NetScaler).

3: You are still stuck with the management: you will need to handle patching, security, backup and so on. A lot of Azure features can be leveraged to make this easier, such as OMS for patch monitoring and Azure Backup for backing up your DDCs, SQL servers and so on.

4: IOPS. Customers used to PVS with overflow to RAM might be disappointed with Azure, since by default a VHD has a limit of 500 IOPS per data disk! By default the C:\ drive has read/write cache enabled, which boosts read and write performance, but since this uses RAM as a buffer, a hardware failure might result in permanent data loss. There is of course the option of Premium Storage, which offers up to 5,000 IOPS per disk, but at a much higher cost. Also be aware that SQL should be stored on a dedicated data disk without write cache enabled.
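As a sketch of that last point, with AzureRM PowerShell we can attach a data disk with host caching disabled (for instance for SQL data). The resource group, VM, storage account and disk names below are made-up examples:

```powershell
# Example only: attach a data disk with host caching disabled for SQL data.
# Resource group, VM, storage account and disk names are placeholders.
$vm = Get-AzureRmVM -ResourceGroupName "rg-citrix" -Name "sql01"

Add-AzureRmVMDataDisk -VM $vm -Name "sql01-data" -Lun 0 `
    -Caching None -DiskSizeInGB 512 -CreateOption Empty `
    -VhdUri "https://mystorageacct.blob.core.windows.net/vhds/sql01-data.vhd"

Update-AzureRmVM -ResourceGroupName "rg-citrix" -VM $vm
```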

5: Latency. The issue we have here in the Nordics is that the closest Azure datacenter is either in Ireland or the Netherlands, which puts latency up towards 80-110 ms, which of course affects end-user performance. This can be mitigated by investing in a dedicated line to an Azure datacenter using ExpressRoute.

But of course, using Azure Automation and AzureRM PowerShell we can pretty much automate the entire process, from the first domain controller to the entire Citrix infrastructure combined with NetScaler, and we can scale whenever we want to. For customers with multiple locations this can make it a lot easier to set up new sites in their closest datacenter, and given that we have a lot of hardware options in Azure we can be quite flexible.
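A minimal sketch of such a deployment, assuming you already have a JSON template; the resource group name, location and file paths are examples:

```powershell
# Example only: deploy an ARM (JSON) template with AzureRM PowerShell.
Login-AzureRmAccount

New-AzureRmResourceGroup -Name "rg-citrix-weu" -Location "West Europe"

New-AzureRmResourceGroupDeployment -ResourceGroupName "rg-citrix-weu" `
    -TemplateFile ".\citrix-infra.json" `
    -TemplateParameterFile ".\citrix-infra.parameters.json"
```

From there the same pattern can be wrapped in an Azure Automation runbook to stand up additional sites on demand.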

2: Citrix Cloud

Now Citrix Cloud is a subscription-based offering from Citrix, where the management plane is handled by Citrix, and a lot of new features and capabilities are introduced there first before they appear in the on-premises offerings. Compared to running Citrix on regular virtual machine instances, Citrix will in this case handle the management and HA of your Citrix infrastructure, so StoreFront, DDCs, SQL and so on are delivered from their cloud offering. This allows for a simple way to provision terminal servers in Azure, since it can be done from within the Citrix Cloud management portal.

The only thing we need in Azure is the Cloud Connector component installed, which will act as the DDC for the VDA agents and connect with the local Active Directory. So we are no longer required to back up, manage and update the Citrix infrastructure; this is handled for us. In most cases we would still need to set up NetScaler manually, since this is not something Citrix Cloud handles. But! They have a new component, currently in beta, called NetScaler Gateway Service, which allows the Cloud Connector component to act as the NetScaler Gateway and communicate with the VDA agents in the back. That would not require us to have a NetScaler appliance at all, reducing the virtual machine cost.

But as mentioned, this is still in tech preview and is aimed at smaller customers, since the traffic is routed via Citrix Cloud to the Cloud Connector and then to the VDA agents, and will not give the same performance as a regular NetScaler appliance.

On the downside, we still need to think about RDS licenses, and about management and backup of components outside of Citrix, such as AD, file servers and so on.

3: XenApp Express
So this is the new option coming from Citrix which will replace Azure RemoteApp. The cool thing about RemoteApp was that it was a per-user, per-month model that included all the licenses (no RDS, storage or bandwidth costs to worry about). There was of course the minimum of 20 users and a lack of flexibility in some cases, but they continued to improve the product over the last years.

Azure RemoteApp was also integrated with Azure AD, which made user administration easy for customers who, for instance, were just using Office 365 and needed access to some Windows applications. They could continue to use the same username and password when connecting to Azure RemoteApp.

Now Azure RemoteApp is set for end-of-sale (EoS), and XenApp Express will offer a migration program for existing RemoteApp customers. Some interesting questions come to mind:

1: Does it integrate with AzureAD?
2: What is the minimum user count?
3: Does it have the hybrid option as RemoteApp has?
4: How is it managed and from where?
5: What is the price going to look like?
6: Is it going to leverage NetScaler as well?

There is a tech preview launching in Q4 which we can sign up for, but not all the details have been disclosed yet. There was a session about it at Ignite which showed the steps for setting up XenApp Express, which can be viewed here: https://myignite.microsoft.com/videos/2792?es_p=2724000

The feature is going to be available in the Azure Marketplace, and when a user activates it he will be redirected to the Citrix Cloud management portal. There you create an App Collection, where you can choose a predefined image or upload your own custom image with your LoB applications, and you can use domain-joined or non-domain-joined machines. The feature is going to create a set of resources within an existing Azure resource group and virtual network, and it also seems to be integrated with regular Active Directory rather than Azure AD.

So I have to say that XenApp Express so far is not impressing me compared with the simplicity of Azure RemoteApp. Of course Citrix has a lot more protocol advancements compared to RDS, but the main reasons customers choose RemoteApp are the pricing, simplicity and autoscaling.

It actually seems like a regular Citrix Cloud offering, with image management using Citrix Cloud and unattended domain joining of the VDAs.

4: Windows 10 in Azure
So with the announcement of Windows 10 in Azure delivered from Citrix as well, I was really looking forward to being able to provision Windows 10 virtual machine instances in Azure with a fixed price model and tell customers: here are your VDI machines where you can run your Windows apps, while still using Chromebooks or Macs or whatever locally.

But… there are some caveats here as well. First off, this will also be a service within Citrix Cloud, and the main thing to be aware of is the licensing requirements (which are currently a restriction from Microsoft). Customers who want to leverage Windows 10 have some requirements they need to fulfill before they are allowed to use this feature:

1: They need to be on a Microsoft EA subscription.
2: They must have active Windows SA and have bought Windows 10 on a per-user basis.
3: It needs to be Windows 10 Enterprise CBB (Current Branch for Business).

Underneath it is the same Citrix Cloud architecture: you pay for licenses and for Azure IaaS consumption. This feature will let us provision virtual machines directly from within Citrix Studio, and it can leverage any instance type in Azure, so we can for instance combine NVIDIA GPUs in the N-series with the Windows 10 service from Citrix Cloud.

So it is not really any different from using Citrix Cloud, just another Windows OS type we can use in Azure, which is actually unique, since Azure is the only public cloud that can run Windows 10.

Summary:

So far we have a lot of options available in Azure. The problem is the weird licensing restrictions from Microsoft, which prevent a lot of customers from deploying Windows 10 in Azure, and with XenApp Express it seems so far that there are a lot of additional requirements, while RemoteApp was such a good option precisely because of its simplicity. We also need to be aware of the additional costs that are going to come on top of the user subscription.

Hopefully Citrix and Microsoft together can make the licensing model for Windows 10 in Azure a lot simpler; I would love to set up Windows 10 VDI machines for smaller customers who are not part of an EA, just to be able to leverage Windows 10 applications in Azure. For XenApp Express I would love to see a simple per-user, per-month cost without having to worry about management, which I didn't need to worry about with Azure RemoteApp. But we have about 6 months until this feature becomes GA, and Citrix has so far done a lot of work on these features, so this might change.

Access and authentication methods in a Citrix environment

So this blog post is based upon my presentation at VirtualExpo earlier this week, where I talked about different authentication and access methods in a Citrix environment, which is a pretty broad subject. My presentation discussed different methods, so this blog post goes through some of them and talks about their features, how to configure them and how they look from the client side, as the user experience. The issue is that, depending on the use case, we might need to set up one or multiple access options, combined with one or multiple security features or authentication options. Identity sources might also come from one or more sources like LDAP, RADIUS, Azure AD, Google Accounts and so on, and an important question is how we can leverage these identity sources in our Citrix environment.

In my presentation I talked about the following access methods:

Direct Access
ICA-Proxy
Double-hop
Optimal Gateway Routing (w/Zones and GSLB setup)
Smart Access
Unified Gateway
AlwaysON using NetScaler 11.1
Clientless Access and RFWebUI
Windows 10 Azure AD with ADFS and Federated Authentication Service
Citrix Cloud and NetScaler Gateway Services

Direct Access
So when I'm talking about Direct Access I mean accessing a Citrix environment from within the same trusted domain/forest. This can be as simple as pointing your browser to a local StoreFront website, or pointing your Citrix Receiver to the local StoreFront site to start remote applications against a Citrix environment. A good thing about accessing a Citrix environment from within the same domain as your workstation is the ability to get single sign-on. SSO is supported for both StoreFront and the VDA agents for XenDesktop 7.0+ (but you need to have the SSO component installed as part of Receiver).


In order to set up this feature you need to copy the Group Policy templates from the ICA client installation, which is usually under C:\Program Files (x86)\Citrix\ICA Client\Configuration. From there you need the receiver.admx and .adml files and import them to your central repository for Group Policy, \\<domain>\SYSVOL\<domain>\Policies\PolicyDefinitions (the receiver.admx file is placed there, while the ADML file goes under the en-US folder; ADML files are just language files for the ADMX policies). These policies can enable pass-through authentication for ICA traffic, and we can also define the Store directly in the Receiver policies, so users do not need to type in the server address.
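The copy itself can be scripted; a minimal sketch, assuming a default Receiver install and using contoso.com as a placeholder domain:

```powershell
# Example only: copy the Receiver ADMX/ADML files to the Group Policy central store.
# contoso.com is a placeholder; the source path assumes a default Receiver install.
$source = "C:\Program Files (x86)\Citrix\ICA Client\Configuration"
$store  = "\\contoso.com\SYSVOL\contoso.com\Policies\PolicyDefinitions"

Copy-Item "$source\receiver.admx" $store
Copy-Item "$source\receiver.adml" "$store\en-US"
```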

The video below shows how Direct Access using SSO works from an end-user perspective.

ICA-Proxy
Using ICA-Proxy is pretty much the same deployment, but instead of connecting to StoreFront and having direct traffic to the VDA, the traffic is proxied via the NetScaler (which can be either a virtual or physical appliance). The NetScaler acts as an authentication layer, forwards the credentials to StoreFront and authenticates on behalf of the user. When a user then clicks on an application, StoreFront generates an .ica file specifying which address the Citrix Receiver client should communicate with, which in this case is the NetScaler, which handles the ICA traffic between the client and the backend VDA agent.

ICA-proxy can be used with a combination of authentication solutions like LDAP, RADIUS, client certificates, SAML and HTTP Basic, but I'll come back to those later in the post.

Since this is the most common deployment type for remote users, there are some settings that need to be in place on the NetScaler. You can read more about setting up NetScaler access here –> http://bit.ly/29lRVUH

When setting up ICA-proxy, the endpoints connecting to Citrix should for instance be set up using email-based discovery, which allows end users to enter their email address instead of a server name (which they don't remember anyway) to connect to a NetScaler Gateway. The latest Citrix Receiver policies actually include the ability to define the NetScaler Gateway address directly, but this of course requires that we can control the endpoint with Group Policy.

When setting up ICA-proxy there are also some things you should define on the NetScaler to optimize traffic, because the default TCP parameters are pretty old and will not give optimal throughput, and the latency might vary depending on the endpoint and congestion.
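As a sketch, a custom TCP profile with window scaling and SACK enabled can be created and bound to the Gateway vServer from the NetScaler CLI; the profile and vServer names are examples, and the values should be tuned for your own environment:

```
# Example only: TCP profile with window scaling and SACK, bound to the Gateway vServer
add ns tcpProfile tcp_gateway -WS ENABLED -SACK ENABLED -nagle ENABLED -mss 1460
set vpn vserver gw_vserver -tcpProfileName tcp_gateway
```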


This video shows setting up email-based discovery and accessing the environment using Citrix Receiver and ICA-proxy.

Optimal Gateway Routing
Optimal Gateway Routing is actually a StoreFront feature, which can be used in multiple scenarios. For instance, it can be used to "force" both external and internal users to communicate with their XenDesktop servers via the NetScaler Gateway. When we configure this feature on StoreFront and define all internal and external users to be proxied through the NetScaler, what happens is that when a user clicks on an application or desktop, StoreFront generates an ICA file stating that traffic should be routed via the NetScaler Gateway. So all authentication is handled by StoreFront and not by NetScaler. This allows for other options as well, since we can present StoreFront behind a VIP directly and have multiple stores with HTTP rewrites on the NetScaler, to serve multiple customers on the same URL or multiple URLs on the same IP; depending on which store a customer has, they will be pointed to different NetScaler Gateway virtual servers.


It can also be used externally, in conjunction with GSLB and StoreFront zones, to redirect users to their "closest" datacenter. A problem with GSLB is that it might not be the endpoint that communicates directly with the GSLB vServer, but rather the LDNS server for the endpoint. So when an endpoint is directed to one of the sites' vServers, the GSLB service forwards the endpoint to StoreFront with an HTTP rewrite rule that inserts the client IP address directly. When the endpoint then communicates with StoreFront, StoreFront looks at the client IP, talks with the resources closest to the endpoint and generates a list of the applications and desktops available to it. When the user clicks on an application or desktop, StoreFront generates an ICA file containing an Optimal Gateway Routing parameter, which points the end user to the closest datacenter based on the endpoint IP.


This video shows what the user experience looks like with Optimal Gateway Routing for external users, who connect directly to StoreFront and launch sessions from there.

Double-hop
This is a scenario for customers with multiple DMZ zones, for instance an internal and an external DMZ, where you need to enable remote access to your XenDesktop environment. In this scenario the first NetScaler acts as the authentication point; after authentication the user is redirected to StoreFront, just like in a normal ICA-proxy configuration. When the user clicks on an application, an ICA file is generated which points to the first NetScaler. When the Receiver client establishes the connection it connects to the first NetScaler Gateway, which in turn connects to another NetScaler Gateway virtual server located in the internal DMZ, which in turn talks with the VDA agents on the inside.


HTML5-based Access
HTML5-based access is not a new feature; it allows anyone with an HTML5-capable browser (meaning WebSockets and JavaScript support) to establish a connection directly to an application or desktop without having Citrix Receiver installed. This is useful if you do not have Receiver installed, do not have admin rights to install it, or have an endpoint that does not support it. Having HTML5 access externally is also not a new thing; NetScaler has supported WebSockets for quite some time, you just need to enable it in the HTTP profile and bind the profile to the virtual server.

What is new, on the other hand, is that with Windows Server 2016, HTTP/2 is enabled by default in IIS. Compared with HTTP/1.1, which is now 15 years old and uses clear-text transfers, HTTP/2 uses binary framing and is much more efficient.


So to use HTML5 + HTTP/2 access remotely, you need to enable WebSockets in the Citrix policies, enable HTML5 on StoreFront, and enable HTTP/2 and WebSockets on the NetScaler in the HTTP profile and bind the profile to the NetScaler Gateway vServer.
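The NetScaler side of this can be sketched from the CLI as follows (the profile and vServer names are examples):

```
# Example only: HTTP profile with WebSockets and HTTP/2 enabled, bound to the Gateway vServer
add ns httpProfile http_gateway -webSocket ENABLED -http2 ENABLED
set vpn vserver gw_vserver -httpProfileName http_gateway
```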


Smart Access and client choices
So up until now I have looked at using a plain NetScaler Gateway virtual server (Double-hop, Optimal Gateway Routing and ICA-proxy); none of these require any additional licenses beyond the platform license. Next we have Smart Access and client choices, which require a Universal license for each connection (this mode is enabled by removing the "ICA Only" setting on a virtual server, which marks it as Smart Access mode). Using client choices, a user gets multiple options when logging in: for instance, the user can go into clientless access mode, a browser-only VPN connection where the user can access file shares, mail, internal resources and bookmarks. Or the user can start a full VPN connection and have full network access to the corporate network; in that case, when the user starts a Citrix application it will go across the VPN link, so performance will not be as good as with regular ICA-proxy.


Some other cool features that can be used in Smart Access mode are for instance:
* RDP Proxy, which allows access to RDP connections through the NetScaler Gateway, using the same external port that the NetScaler Gateway vServer runs on.
* Pre-authentication and endpoint scan policies with OPSWAT (which can for instance be used to check that a client has antivirus and antimalware software running before it is allowed to connect).
* Smart Access and Smart policies.
* ICA latency settings (new with NetScaler 11.1), which can be used to detect whether an endpoint with a certain amount of latency should have Smart policies applied or not, for instance if it has <10 ms latency.

Here we can mix a lot of different settings on the same vServer, and we can also publish web resources directly into the clientless access portal.

Unified Gateway
So up until now I've been talking about using the NetScaler Gateway option to deliver remote access to Citrix users. All the options above (except for nFactor) can be delivered using a regular NetScaler Gateway appliance. Unified Gateway is a feature which was introduced in version 11. The problem with NetScaler Gateway has been that people wanted multiple services behind a single IP address and port: if you set up a regular NetScaler Gateway, you are limited to having Citrix ICA-proxy on that service. Unified Gateway introduces another way of access. When a client connects to a Unified Gateway, it first hits a content switching virtual server, and if the URL is correct the user is redirected to the NetScaler Gateway virtual server. Since the Gateway vServer is non-addressable (IP = 0.0.0.0), you access it via the same public IP. And since we have a content switching virtual server in front, we can also add other services, such as Exchange email, SharePoint or web-based applications, behind the same IP and port number.


This video example shows a Unified Gateway setup with other resources, like RDWeb, behind the same URL and port.

AlwaysON VPN NetScaler 11.1
This is a new feature introduced in NetScaler 11.1, which allows for automatic sign-on against a NetScaler Gateway in Smart Access mode using the new endpoint client that came with 11.1. The automatic connection happens after a user logs on to the device; based on the AlwaysON policy on the NetScaler, the client should pop up automatically, open the homepage and allow for an instant Citrix Receiver connection to the backend resources. Since this feature uses the credentials of the logged-in user, it is best suited for SSO when, for instance, a user takes a corporate laptop home and wants to access the environment from there. We need to define a DNS suffix, which is used to check whether the endpoint is on the corporate network or external, and we can also define whether the end user should be able to disconnect from the tunnel or not.


The video below shows the user experience when logging on from a Windows 10 computer, with the VPN tunnel automatically being set up against the virtual server, the homepage opening and direct SSO taking place.

Clientless Access with RFwebUI

In most of the previous scenarios we have been connecting to the NetScaler, which with ICA-proxy enabled forwards the credentials to StoreFront, and StoreFront generates the application list in its portal. Using clientless access with RfWebUI allows the NetScaler to act as the application portal instead of StoreFront. The NetScaler still communicates with StoreFront to get the list of applications and desktops the user has access to, but the NetScaler generates the list itself. The advantage is that we can now add web-based applications, such as other load-balanced virtual servers on the NetScaler, and present them as bookmarks directly within the clientless access portal, combining Citrix apps + SaaS + web apps in the same portal. We are effectively bypassing StoreFront; the downside is that we lose the customization options available in StoreFront.
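A rough sketch of the NetScaler-side configuration from the CLI (the vServer and session profile names are examples):

```
# Example only: bind the RfWebUI portal theme and enable clientless access
bind vpn vserver gw_vserver -portaltheme RfWebUI
set vpn sessionAction prof_clientless -clientlessVpnMode ON
```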


Previously we needed to change some settings in web.config on StoreFront, but this is not needed anymore; we only need to specify the theme, add the clientless access domain and set Clientless Access to ON. From an end-user perspective it will look like this:

NetScaler Gateway Service and Citrix Cloud
This service is still in beta, so it has not been fully optimized yet. Those who have tried Citrix Cloud know that it delivers the management bits of a Citrix infrastructure, and that we need components installed locally, known as Cloud Connectors, which act as DDCs for the VDA agents. Previously we also needed a NetScaler somewhere to access these resources: StoreFront was already hosted by Citrix Cloud, so when a user clicked on an application it would generate an ICA file and connect via the NetScaler to the VDA agents. That NetScaler of course needed a public IP and a digital certificate installed, and had to run either physically or as a virtual appliance. With the NetScaler Gateway Service, some additional services are installed on the Cloud Connector virtual machine which communicate directly with Citrix Cloud and act as a "Gateway". On the upside, we no longer require a dedicated virtual appliance, public IP or certificate; we just use the same VM that runs the Cloud Connector service. Traffic from an endpoint will be routed like this: PC –> Citrix Cloud NetScaler Service –> Cloud Connector Windows VM (with the Gateway Service installed) –> VDA, and then back again, so this will give a much higher RTT compared to having direct access to a NetScaler.


On the other hand, they have implemented a form of GSLB which will point the end user to the closest Citrix Cloud service within the region, and it is pretty easy to set up as well.

Make sure it is activated in Citrix Cloud


Make sure the service is installed and running on the cloud gateway VM


And access it using the cloud-based StoreFront.

Windows 10 Azure AD Join with ADFS and Federated Authentication Service
So far in the previous scenarios we have looked at using regular Active Directory as the authentication source for a Citrix environment. A lot of customers are today moving towards Office 365, and as part of that they leverage Azure AD. Most customers set up ADFS with Azure AD, which gives them SSO within the same domain and same sign-on outside the office. The cool thing about setting up Azure AD and ADFS is that we can now leverage SAML-based authentication. NetScaler supports SAML, but the problem until now has been that StoreFront does not, which is where Citrix's Federated Authentication Service comes in: it maps a SAML token to a virtual smart card using Active Directory Certificate Services. So in the case of Windows 10, which can be Azure AD-joined, a user can log on to their Windows 10 computer using Windows Hello, which leverages Microsoft Passport and can effectively do two-factor authentication just by the user sitting in front of the computer. They are then directly authenticated to the Azure AD portal, because SSO happens directly against ADFS. Once the computer has a SAML token, the user can go to StoreFront, present the SAML token and authenticate directly; StoreFront maps the SAML token to a virtual smart card, giving SSO directly to StoreFront and the backend VDA agents, just by sitting in front of the computer! How cool is that?!


So this is just one example of how we can configure this to work; in this scenario the NetScaler Gateway virtual server is the SAML service provider and has a SAML IdP trust to Azure AD.


The video clip below shows what it looks like from an end-user perspective when they log in to their Windows 10 computer and want to access their Citrix environment.

Now of course there are other options that I haven't considered here, and there are also new options coming pretty soon:

Citrix & Windows 10 from Azure
XenApp Express
NetScaler and EMS integration

All these options, which come out of the tight partnership between Microsoft and Citrix, will be interesting to take a closer look at when they arrive, and they will most likely integrate with Azure AD and leverage more Azure features as well.

What’s new from Microsoft Ignite 2016

As someone who is not at Ignite this year, I had to watch everything remotely, not that I am complaining. This post is a summary of what has been announced at Ignite so far from day 1; just some quick notes until I get more details on each topic.

Azure Stuff
Azure IPv6 support
H-series Azure VMs: aimed at HPC, with multiple vNICs and RDMA options (https://azure.microsoft.com/en-us/blog/availability-of-h-series-vms-in-microsoft-azure/)

vNet peering is now GA
Active-Active Virtual Network Gateways
Updates to Application Gateway: end-to-end SSL, configurable SSL profiles, and WAF policies against the OWASP top 10
Public Preview of Azure Monitor: https://azure.microsoft.com/en-us/blog/announcing-the-public-preview-of-azure-monitor/
Azure Service Fabric for Windows Server now GA: https://azure.microsoft.com/en-us/blog/azure-service-fabric-for-windows-server-now-ga/
Multiple updates to Azure Security Center: https://azure.microsoft.com/en-us/blog/strengthen-your-cloud-security-posture-with-new-capabilities-from-azure-security-center/
Azure Disk Encryption now GA: https://azure.microsoft.com/en-us/blog/cloud-innovations-empowering-it-for-business-transformation/

Other Stuff:
VMware monitoring public preview for OMS: https://azure.microsoft.com/en-us/blog/cloud-innovations-empowering-it-for-business-transformation/
Application Insight Connector in OMS: https://blogs.technet.microsoft.com/msoms/2016/09/26/application-insights-connector-in-oms/
Azure Stack Tech Preview 2 launched: https://azure.microsoft.com/en-us/overview/azure-stack/
Azure iDNS for AzureStack: https://azure.microsoft.com/en-us/documentation/articles/azure-stack-understanding-dns-in-tp2/
Windows Server 2016 and System Center 2016 GA: https://blogs.technet.microsoft.com/hybridcloud/2016/09/26/announcing-the-launch-of-system-center-2016-and-new-services-for-operations-management-suite/ https://blogs.technet.microsoft.com/hybridcloud/2016/09/26/announcing-the-launch-of-windows-server-2016/
Improved OneDrive for Business, which can now sync SharePoint document libraries as well!
EMS being added directly to the Azure portal

Activity Log Analytics solution from Azure now part of OMS
Windows 10 and partnership with Bromium to deliver Virtualization-Based Security https://blogs.bromium.com/2016/09/26/introducing-virtualization-based-security-next/

Nutanix and Storage Options

So with the latest enhancements to Nutanix that have arrived so far, and what's coming in the next release (Asterix), Nutanix has stepped up its game to go beyond regular HCI, which it has already been doing for a while now. The purpose of this post is just to give an overview of what kind of storage functionality you can actually get with Nutanix.

Starting with the introduction of volume groups, Nutanix can present a virtual disk as an iSCSI target directly to a virtual machine, which also benefits from the distributed storage system underneath. Nutanix also introduced File Services, a highly redundant, scalable file server solution which presents itself using the SMB 2.1 protocol through an additional service VM, but is still manageable from Prism.

Also, since Nutanix supports multiple hypervisors, the storage can present itself using the SMB or NFS protocol directly to the hypervisor (Hyper-V or VMware, respectively), or you can use Nutanix's own hypervisor, where management is bundled in directly.

So far the only issue with volume groups was that they didn't support physical machines, until Nutanix announced Acropolis Block Services, a further development of volume groups which now allows us to present storage directly to physical servers from within Prism using iSCSI, and which also leverages the distributed storage fabric and all the other features as well.

Nutanix also has other options; for instance, we can do asynchronous or synchronous replication from one Nutanix cluster to another, and we can also integrate with either AWS or Azure to do DR or backup.

Also this week, Nutanix introduced a hypervisor-agnostic change block tracking feature which is available directly from the REST API. This allows partners to get incremental data from the storage layer (Nutanix) instead of from the hypervisor (Hyper-V 2012 R2, which does not have any CBT function, or VMware ESXi, which has had some issues with it over the last months). It will also make it easier for Nutanix to support other hypervisors while still allowing backup vendors to integrate easily to do backup.

So with all these different options available it might be difficult to get the full picture. Luckily I'm great with Visio, so I decided to add this drawing, which might help fill the gaps on some of the storage features included in Nutanix.

image

Running Nutanix with Hyper-V – Things to consider and moving forward with Server 2016

Now, Nutanix is one of the few HCI vendors out there that actually supports Hyper-V, so kudos for that! They've also joined the CPS club with their finished appliance, which ships with Azure Pack, making them one of the select few partners there as well.

Now, after having had a lot of conversations with customers and consultants running Nutanix and Hyper-V, I decided to write a blog post about the pitfalls you should be aware of when using Nutanix and Hyper-V.
Some key concepts first: when it comes to Hyper-V, Nutanix has built a custom SMB 3 implementation which is presented to the hypervisor and acts like a regular SMB 3 share. But since this SMB 3 share is presented from the Controller VM, which is a Linux appliance, it does not have the same capabilities that regular SMB 3 has, and Hyper-V itself also has some limitations that we need to be aware of.

    1: No CBT support in Hyper-V
    Since Hyper-V does not have any built-in CBT (Changed Block Tracking) capabilities, Microsoft has been dependent on backup vendors to provide their own CBT feature, typically using a filter driver to detect block changes from day to day (this will change from Windows Server 2016 with RCT). The problem with, for instance, Veeam is that Veeam's CBT filter only works against native SMB shares from a Windows operating system, and not the SMB share that Nutanix presents to the hypervisor. So if we want to take backup using Veeam, it has to read the entire VM every day and then filter out what data has not changed, instead of just reading the blocks that have changed. Now Nutanix just published a new API for changed block tracking which is hypervisor agnostic, yay! This might allow things to get fixed without upgrading to 2016.

    2: No support for NGT
    The Nutanix Guest Tools, which for instance allow cross-hypervisor conversion and file-level restore, are only available for Acropolis Hypervisor and ESXi, which means that as of now we cannot use the file-level restore function on Nutanix with Hyper-V.

    3: No support for Acropolis File Services
    Acropolis File Services, the ability to deliver a distributed SMB share using the same distributed file system underneath via service VMs, is only available for Acropolis Hypervisor and VMware ESXi.

    4: No cross-hypervisor conversion support yet
    This feature is still in Tech Preview, but as of now it is supported for moving from ESXi –> AHV and back again, not against Hyper-V yet; this is also partly because there is no support for the Nutanix Guest Tools on Hyper-V yet.

    5: No support for application-consistent snapshots
    When defining protection domains and schedules, you also have the option to set up application-consistent snapshots. While this feature is available for AHV and ESXi, it is not supported on Hyper-V, since NGT cannot be used on Hyper-V.

    6: ESXi and AHV management from Prism
    Nutanix is working on integrating ESXi management into Prism, which would allow an easy way to manage both ESXi and AHV and the rest of the infrastructure from Prism, but it seems there are no immediate plans to support Hyper-V management from Prism, which is a shame, because Hyper-V management in larger environments could have been a lot easier from Prism (sorry Microsoft, but you should learn something from this…).

    7: Some issues mixing Hyper-V snapshots and Nutanix snapshots
    It also looks like there are some issues when mixing Hyper-V snapshots and Nutanix-based snapshots (http://next.nutanix.com/t5/How-It-Works/VSS-Backup-on-Hyper-V-mixed-with-Nutanix-Native-Data-Protection/td-p/8508); I'm not sure if this still applies.

So those are the points I wanted to share. Nutanix and Hyper-V still works great, but the main focus is on ESXi and AHV for new features.

Now of course there are a lot of limitations in Hyper-V compared with ESXi when it comes to handling backup and such, and based upon market share VMware is still on top, so it is understandable that new features come to ESXi and AHV first and foremost. My guess is that Nutanix will step it up a notch when Windows Server 2016 arrives with support for Nano Server, RCT and other features like ReFS, which might make it a more robust solution on 2016. And with rolling cluster upgrades, which can be leveraged using one-click upgrade, it is going to be interesting to see how Nutanix will come to play with Azure Stack as well.

Test run of Teradici Cloud Access Software on Azure N-series

Earlier this year, Microsoft announced that they were partnering with Teradici on their N-series virtual machine instances in Azure, which I’ve blogged about previously here –> http://msandbu.org/n-series-testing-in-microsoft-azure-with-nvidia-with-k80/

Teradici are the creators of the PCoIP protocol, which is often leveraged in VMware Horizon View and supported by multiple thin client vendors. These clients in most cases also use a Teradici SoC (System on a Chip), which provides hardware decoding of the pixel stream, enabling faster frame rates and highly secure, simple-to-manage updates.

Now, I have previously tested PCoIP against, for instance, Blast Extreme –> http://msandbu.org/remote-protocols-benchmarking-citrix-vmware-and-rdppart-one-pcoip-vs-blast-extreme/ comparing Blast, which is TCP by default, with PCoIP, which is based on UDP. The downside of using UDP is that it does not have any form of reliable transport, which means Teradici has to provide that within the PCoIP protocol itself; this might translate into “artifacts” when working on the workstation. Also, most UDP-based remote protocols use an MTU of 1200 to ensure there is no fragmentation of packets during transmission. PCoIP has some good ways of handling reliability: USB packets are always sent reliably; dropped image packets are resent only if they have not been overwritten by a subsequent display update (so for the Heaven benchmark, which is part of the video clip further down below, almost never, since it is constantly updating the screen); and audio packets are too latency sensitive to retransmit, so PCoIP uses forward error correction (FEC) to correct missing audio packets and never retransmits dropped audio packets.

So I was curious to see how it worked in Azure with the N-series and support for Windows Server 2016. Since the Azure datacenters are not located anywhere close to the Nordics, it is crucial to have a remote display protocol which is able to leverage the GPU and deliver the best end-user experience without causing any huge overhead on the server, and of course to be able to use it without having a large Azure infrastructure.

NOTE: In order for the client to connect to the agent, the following ports need to be open on the VM in Azure:

• TCP 443
• TCP 4172
• UDP 4172

The way to fix this is to go into the Network Security Group of the VM and adjust the inbound rules with the additional ports (RDP is the default rule).

image
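For reference, the same inbound rules can also be added with PowerShell instead of through the portal. This is just a minimal sketch using the AzureRM module; the NSG and resource group names are placeholders for your own:

```powershell
# Names below are examples; replace with your own NSG and resource group
$nsg = Get-AzureRmNetworkSecurityGroup -Name "teradici-nsg" -ResourceGroupName "teradici-rg"

# TCP 443 (session negotiation) plus TCP/UDP 4172 (PCoIP session traffic)
$nsg | Add-AzureRmNetworkSecurityRuleConfig -Name "Allow-TCP-443" -Protocol Tcp `
    -Direction Inbound -Priority 1010 -SourceAddressPrefix * -SourcePortRange * `
    -DestinationAddressPrefix * -DestinationPortRange 443 -Access Allow
$nsg | Add-AzureRmNetworkSecurityRuleConfig -Name "Allow-TCP-4172" -Protocol Tcp `
    -Direction Inbound -Priority 1020 -SourceAddressPrefix * -SourcePortRange * `
    -DestinationAddressPrefix * -DestinationPortRange 4172 -Access Allow
$nsg | Add-AzureRmNetworkSecurityRuleConfig -Name "Allow-UDP-4172" -Protocol Udp `
    -Direction Inbound -Priority 1030 -SourceAddressPrefix * -SourcePortRange * `
    -DestinationAddressPrefix * -DestinationPortRange 4172 -Access Allow

# Persist the updated rule set back to Azure
$nsg | Set-AzureRmNetworkSecurityGroup
```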

The setup of Teradici Cloud Access is pretty simple. It consists of an endpoint client, which can run on Mac or Windows or any mobile platform (Android, Chrome and iOS), and it is also embedded on a list of different zero clients; in this case I used a regular Windows 10 endpoint. We also need an agent installed on the Windows Server we want to connect to. With this release Teradici supports Windows Server 2016, which is going to be my test platform on the N-series in Azure.

image

NOTE: The agent installation process is going to be a lot more streamlined when Azure is fully supported later this year; the agent will be available as an extension when deploying virtual machines in Azure, which can easily be leveraged when deploying using Azure ARM templates.

image

After installing the agent, you need to reboot the virtual machine in Azure. If we want to do any customizations to the Teradici agent, we need to import the ADM files, which are stored locally on the virtual machine under C:\Program Files (x86)\Teradici\PCoIP Agent\configuration.

NOTE: The N-series can only be used with a single 1920 display, and we also need the NVIDIA GRID drivers installed; they can be downloaded from the NVIDIA driver support page.

image

For instance, the initial bandwidth estimate is set to 10 MBps, which makes PCoIP appear sluggish to begin with, but it will ramp up, and I have been told that this will be fixed in the Q1 2017 release from Teradici.

After the agent is installed and running, we can connect to the virtual machine using the public IP/hostname of the virtual machine in Azure. (All virtual machines get a public IP address from Azure; it is dynamic by default, so it should be changed to a static address.)
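If you prefer to script it, changing the allocation from dynamic to static takes a couple of lines of AzureRM PowerShell (the resource names here are placeholders):

```powershell
# Look up the VM's public IP resource (example names) and switch it to static
$pip = Get-AzureRmPublicIpAddress -Name "teradici-pip" -ResourceGroupName "teradici-rg"
$pip.PublicIpAllocationMethod = "Static"
Set-AzureRmPublicIpAddress -PublicIpAddress $pip
```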

image

After you have authenticated using domain credentials, you have the option to specify the view mode, which can be either windowed or full-screen. The client can store the connection settings.

image

Now, because of the high image quality, the bandwidth usage might be high at times, but this is to ensure an optimal user experience. You can also see in the graph that it ramps up slowly and steadily because of the 10 MBps initial bandwidth estimate; this can be changed using the group policy templates. I ran two tests of 60 seconds each, which is why the graph spikes and then starts up again. We could reduce the image quality to reduce the bandwidth usage.

image

The video below shows the user experience from my Windows 10 client to my Azure N-series instance, which is running in the US Central datacenter where the N-series is currently available. It should also be taken into consideration that I have about 150 ms latency from my location to the VM instance.

New opportunities! Cloud Architect at Evry

So last year I switched jobs to Exclusive Networks here in Norway, a distributor focused on Nutanix, Arista, vArmour, Avi Networks, VMTurbo and more as part of their BigTec portfolio, which was a looooong leap from my regular day-to-day work focused on Microsoft. And oh boy did I learn a lot during this one year, especially on HCI and pure datacenter networking!

But for personal reasons I have again decided to switch to another opportunity, this time as a Cloud Architect at Evry. Evry is one of the largest IT companies in Norway, and they just so happen to have an office two minutes walking distance from my house!

Of course that is convenient, but the main driver is what they are doing moving forward! They are focusing on Azure Stack https://www.evry.com/globalassets/marked/it-galla-pres-2016/evry-forst-i-norge-med-azure-stack.pdf and plan to be among the first to deploy Azure Stack in Norway, which I wanted to be part of moving forward. They also have a new initiative focusing purely on cloud services, which I hope to play a part in as well http://www.digi.no/artikler/evry-varsler-okt-satsing-pa-nettskyen-ibm-avtalen-var-bare-starten/350634, and hopefully I can do a bit of other stuff that I have a passion for too.

So I look forward to meeting new colleagues and learning even more in the upcoming months!

XenDesktop 7.11 released! and testing with N-series in Azure using DDA

So last night Citrix released XenDesktop 7.11, which already supports Windows Server 2016! It is available here –> https://www.citrix.com/downloads/xenapp-and-xendesktop/product-software/xenapp-and-xendesktop-711.html
But that's not all! It has a lot of new capabilities as well!

XenDesktop 7.11:
* Support for Hyper-V DDA
* Support for Azure Resource Manager deployments (which I have blogged about here –> http://msandbu.org/delivering-xendesktop-from-microsoft-azure-using-azure-resource-manager/)
* When using Machine Creation Services to create a Machine Catalog containing desktop OS VMs, you can now choose whether MCS provisions thin (fast copy) clones or thick (full copy) clones.
* Publishing of Windows Universal Applications
* Enhancement to caching behavior of video content in Thinwire
* TWAIN 2 support
*  Relative mouse support
* Updated Client USB device optimization rules policy setting
* Generic USB redirection for mass storage devices on XenApp (Now enabled by default!)
* Access to a high-performance video encoder for NVIDIA GPUs using NVENC
* Zone preference (we can now specify if a desktop or application group should be mapped to a user's home zone)

image

StoreFront 3.7:
It also comes with StoreFront 3.7, which now supports Windows Server 2016 as well! Did I mention that Windows Server 2016 comes with HTTP/2 by default? That will increase the performance of the HTML5-based Web Receiver rather dramatically.

StoreFront 3.7 also now supports publishing of content directly into the portal (this can be doc files, HTML files etc.).

Publishing content using StoreFront can only be done using PowerShell, not the UI. For instance, this command will publish a web shortcut:

New-BrokerApplication -ApplicationType PublishedContent -Name "Citrix.com" -CommandLineExecutable "https://www.citrix.com/" -DesktopGroup CHANGEWITHDESKTOPGROUP

This next command publishes a TXT file, located on a share on the StoreFront server, to the desktop group that I specify:

New-BrokerApplication -Name ReadMe -PublishedName "ReadMe Document" -ApplicationType PublishedContent -CommandLineExecutable \\sf01\test.txt -DesktopGroup Desktop

image

NOTE: The connecting client needs to have access to the particular location where the resource is located; it will not go through StoreFront to get hold of the content.
PVS 7.11:
PVS now supports Windows Server 2016, and also ReFS for quick vDisk versioning, which leverages the block cloning API of ReFS!

AppDisks now also supports SCVMM and Hyper-V 2016!
Citrix Director is now also integrated directly with Octoblu, based upon notifications.
AppDNA now supports Windows Server 2016 as well.

Hyper-V DDA support: I have previously blogged about the Azure N-series, which uses NVIDIA GRID GPUs to deliver enhanced graphics performance: http://msandbu.org/n-series-testing-in-microsoft-azure-with-nvidia-with-k80/

image

Now, some quick notes on setting up Hyper-V DDA on a Citrix VDA agent. First we need to specify that we want to enable hardware GPU usage within Group Policy; if not, it is going to fall back to software rendering!

image

Also make sure, when checking DXDIAG, that it reports back the proper configuration. If you are not seeing this, you need to run GPUPDATE to make sure that the policy is in place.

image

And it works! XenDesktop 7.11 running on the N-series with Hyper-V DDA. I will have another deep-dive blog post upcoming on the N-series as well.

image

 

Overview of Microsoft Operations Management Suite and comparison with SCOM

So you think you know all the features in OMS? Think again! I have been working closely with OMS the last couple of years, and to be honest I haven't yet grasped how many features Microsoft has introduced over that time, and more is coming!

I have blogged already about some of the capabilities.

Leverage OMS to detect bad networks –> http://msandbu.org/leveraing-oms-network-performance-monitor-to-detect-network-loss/

So if you now look at all the possible solutions that OMS provides, it is a lot! Looking at the complete picture, OMS integrates directly into Azure to monitor Azure-specific features like Azure Network Analytics, and it can also use agents to monitor Windows and Linux machines. The strength of OMS is that you are able to gather all that information (logs/counters/events) from different sources and structure that data to get valuable information, which is much of the logic behind the different intelligence packs in OMS. Since OMS leverages Azure, it can instantly search across thousands upon thousands of logs. It also allows detection of security breaches, since it integrates with other Microsoft online services to detect known bad IPs, leaked accounts and so on. Leveraging the cloud allows Microsoft to add more logic and be more proactive, whereas OpsMgr is more reactive, since it consists of management packs and a set of rules which determine how it should react to a specific event. But for a while OMS has been more of a competitor to ELK and Splunk than to OpsMgr.
image
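As an example of that search capability, the collected log data can also be queried from PowerShell using the AzureRM.OperationalInsights module and the OMS search syntax. This is just a sketch; the resource group and workspace names are placeholders:

```powershell
# Count error events per computer across the collected data (placeholder names)
$results = Get-AzureRmOperationalInsightsSearchResults `
    -ResourceGroupName "oms-rg" -WorkspaceName "oms-workspace" `
    -Query "Type=Event EventLevelName=error | measure count() by Computer" -Top 50
$results.Value
```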

Now if we compare with the evolution that has happened in Operations Manager, well, it's not much; see for instance http://blog.orneling.se/2015/05/system-center-2016-operations-manager-whats-new/
There is of course support for the latest versions and added management packs, but I believe that Microsoft is putting more focus on OMS instead of OpsMgr, and why shouldn't they? Microsoft has enormous growth in Office 365, Azure and even EMS, so having an operations tool which is directly integrated into all those cloud solutions makes it easier for customers as well as Microsoft to add features directly.

We also have alert management, where alerts can be attached to specific events and can trigger runbooks or webhooks to remediate alarms. I'm also thinking that in the future Microsoft will integrate OMS with Intune to do more event-based monitoring, and also integrate with Microsoft Defender Advanced Threat Protection to get a common solution for security pre- and post-breach events, combining the logic.
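As a small illustration of the webhook path, an OMS alert can POST its payload to an Azure Automation runbook webhook, which then runs the remediation. Triggering such a webhook manually from PowerShell looks roughly like this; the URL and payload below are made-up placeholders:

```powershell
# Placeholder webhook URL, copied from the Azure Automation runbook webhook blade
$webhookUrl = "https://s2events.azure-automation.net/webhooks?token=REPLACE_WITH_TOKEN"

# Example payload; a real OMS alert posts its search results in a similar JSON body
$body = @{ AlertRuleName = "High CPU"; Computer = "web01" } | ConvertTo-Json

Invoke-RestMethod -Uri $webhookUrl -Method Post -Body $body -ContentType "application/json"
```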

So with all these features there and more coming, why leverage Operations Manager?

Of course it is not a one-size-fits-all approach. Operations Manager provides more in-depth monitoring, like Windows services and processes, and there is a larger ecosystem of third-party vendors with their own management packs for monitoring their hardware or services. Also, larger enterprises might simply not have adequate bandwidth available to upload the data collected from hundreds of thousands of endpoints (switches, routers, firewalls, Linux, Windows). On the other hand, you need to keep in mind that OpsMgr requires a lot of management to work properly, unlike OMS, which is an online-based tool. Also, OMS does not have any APM capabilities yet, and of course not everyone can use cloud services to do event monitoring.

But moving forward I'm pretty sure that Microsoft will be adding more of the features which are now available in OpsMgr into OMS, and that OMS will be more integrated directly into Azure (for instance auditing, and with Intune and even ATP), which will make it an even more complete monitoring tool.

Getting started with DataDog monitoring against Microsoft Azure

So after being constantly bombarded with a sponsored tweet on my timeline for a while, I decided to take the hint and take a closer look at what DataDog actually is. This post is just a small introduction to it.

image

Now, DataDog is a cloud-based monitoring tool, and you can take it for a test drive for 15 days at datadoghq.com.

To my surprise, they do quite a bit! I got an email from one of their sales guys pretty fast, which notified me that they have pretty good support for monitoring Azure environments, so I decided to take a closer look at how it could monitor my Azure environment.

For Azure they have two options for monitoring: either leveraging an Azure AD account which has read-only access to all the resources and resource groups, or using the guest extensions on Windows and Linux VMs. Of course I did a combination of both, which gives me the complete overview: VM guest metrics combined with host maps built from the Azure information.
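As a sketch of the first option, the read-only access can be set up with the AzureRM module by creating an Azure AD application/service principal and granting it the built-in Reader role on the subscription. All names and the secret below are placeholders:

```powershell
# Create an AAD application and service principal for DataDog (placeholder values)
$app = New-AzureRmADApplication -DisplayName "DataDogMonitoring" `
    -HomePage "https://app.datadoghq.com" `
    -IdentifierUris "https://datadog-monitoring-example" `
    -Password "ReplaceWithAStrongSecret"
New-AzureRmADServicePrincipal -ApplicationId $app.ApplicationId

# Grant read-only access to the whole subscription
New-AzureRmRoleAssignment -RoleDefinitionName "Reader" `
    -ServicePrincipalName $app.ApplicationId
```

The tenant ID, application ID and secret are then what you paste into DataDog's Azure integration settings.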

Here are just some of the integrations that are possible in DataDog:

image

You can also notice that DataDog is an option when you deploy virtual machines, since it is a default extension for Linux and Windows virtual machines. Adding the agent using JSON templates is pretty simple, since you just need the api_key in the extension settings to authenticate the agent against DataDog:

{
  "publisher": "Datadog.Agent",
  "type": "DatadogWindowsAgent",
  "typeHandlerVersion": "0.5",
  "settings": {
    "api_key": "API Key from https://app.datadoghq.com/account/settings#api"
  }
}

By getting information from the Azure subscription it can generate host maps based upon geographic location automatically, and it can also monitor other Azure resources like SQL and App Services (Web Apps, Logic Apps and so on).

From the dashboard I can, for instance, go into a virtual machine directly and see the stats; it will automatically detect the region and the resource group the VM is contained in and add tags to it, which can be used for filtering afterwards.

image

It also gave me a nice host map view, where resources were displayed within the region they were located in.

 

I could also go into a more detailed view of a host and get more specific metrics for a virtual machine directly. Note that DataDog uses something called integration packs, which can be enabled for different purposes; for instance there is an integration pack for Event Log tracking and one for IIS, and there are also monitors for WMI and Windows services. After enabling an integration in the console, I would need to modify the agent settings on each agent as well; this could easily be done using DSC, for instance, to update the agent config.

image

It was also pretty simple to add a specific monitor to detect, for instance, 404 error codes on my web service running within Azure.

image

This is just from my initial testing, but I found the product pretty slick, and the UI is great! With a large list of supported integrations like VMware, AWS, Docker, Apache, OpenStack etc., I'm definitely going to take an even closer look at this with more workloads. But it is still a cloud-based monitoring service, so the pricing might be a bit steep for larger organizations, depending on what kind of pricing they offer.