Author Archives: Marius Sandbu

Citrix FAS with Azure AD and Error 404 Not Found

So, a short blogpost on an issue I faced this week.

Working at a customer this week, we were setting up Citrix with SAML-based authentication from the MyApps portal using Azure Active Directory. In order to set this up properly we needed to implement Citrix FAS to do SSO directly from an Azure AD joined Windows 10 device. One of the issues we were facing was that when a user clicked on the Citrix app from the MyApps portal and opened multiple tabs, or closed the existing tab where the Citrix application was opened, the end user received a standard 404 error from Citrix StoreFront.

The reason for this was that the Gateway session cookie was inserted when the user tried to access the Gateway from Azure MyApps. The request from Azure AD was redirected to /cgi/samlauth, and since the session cookie matched an existing connection it was forwarded to the IIS server, where the connection failed. My initial idea was to use Responder or Rewrite policies, but after some thinking I noticed that they were ignored because AAA processing in the NetScaler packet flow takes precedence over those features.

The end solution was quite simple: we created a virtual directory on the StoreFront IIS server.

We then created a redirect on that virtual directory back to the NetScaler Gateway.
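
For reference, the same virtual directory and redirect can also be created from the command line with appcmd. This is only a rough sketch; the /cgi path, folder and gateway URL below are placeholders for whatever your actual paths and NetScaler Gateway FQDN are, and the same settings can of course be made in IIS Manager as we did.

REM Create the folder and virtual directory on the StoreFront IIS server (paths/names are examples)
mkdir C:\inetpub\wwwroot\cgi
%windir%\system32\inetsrv\appcmd.exe add vdir /app.name:"Default Web Site/" /path:"/cgi" /physicalPath:"C:\inetpub\wwwroot\cgi"
REM Redirect everything under /cgi back to the NetScaler Gateway
%windir%\system32\inetsrv\appcmd.exe set config "Default Web Site/cgi" -section:system.webServer/httpRedirect /enabled:"True" /destination:"https://gateway.customer.com" /httpResponseStatus:"Found" /commit:apphost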

After I did this, the end user could open up the application as normal.

Cloud Wars IBM vs Microsoft vs Amazon vs Google – Part 1 Overview

Looking back at 2017, there has not been as much activity on my blog as I had planned, and one of the main reasons is that I have been quite caught up in work. My work takes me back and forth between multiple products, platforms and customer cases: it can be a DevOps project on Azure, an IoT project on GCP, a DR solution on IBM or an HPC setup on AWS. So after plunging into most of these platforms, I've decided to start my blog in 2018 with some fresh perspectives on the major cloud platforms and focus on their strengths and weaknesses. This post reflects my personal experiences with the platforms and shows some of the core capabilities, since one of the most frequent questions I get at work is "Where do I start, and why should I choose X over Y?"

I like to compare cloud platforms to cars. Most of them can drive you from place A to B, but the models have different exteriors and comfort levels, maybe seven seats, and some have a faster and stronger engine. The point is that most platforms provide largely the same services, but with different quality, different prices and different options. For instance, all four vendors provide a similar kind of cloud orchestration language: Cloud Deployment Manager, Azure Resource Manager, CloudFormation and IBM Cloud Schematics.
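
Just to illustrate what such an orchestration language looks like in practice, here is a minimal example of rolling out an ARM template with the Azure CLI (the resource group and file names are made up for the example); the other vendors have equivalent commands for their own template formats.

#Deploy an ARM template into an existing resource group
az group deployment create \
  --resource-group demo-rg \
  --template-file azuredeploy.json \
  --parameters @azuredeploy.parameters.json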

So let us start off this post series with an overview of the four major cloud platforms on the market. (Also note that I've been part of the technical comparison of the major cloud platforms on whatmatrix.com, which you can see here –> https://www.whatmatrix.com/comparison/Public-Cloud-Platforms#.) I'll get into a more technical comparison in part two of this blog series and focus a bit more on different levels such as IaaS/bare-metal, identity, PaaS, big data & IoT, ML and containers.

IBM Cloud:

Historically, IBM has focused a lot on IaaS services with its SoftLayer platform, which has been IBM's public cloud offering for IaaS and bare metal. On the other hand, IBM has also been building up Bluemix, which focuses on PaaS services and is based upon Cloud Foundry. This is also where the ML/AI service Watson has its home. The problem is that public cloud on IBM has been available on two different platforms, Bluemix and SoftLayer, and IBM has multiple regions where they offer both but some places where they only offered IaaS and not the other. This has been really confusing at times, and it has been noted by Gartner as well, since they haven't had the complete service offering compared to the others. IBM is now focusing a lot on merging these two platforms to provide all cloud functionality from what is now called IBM Cloud.

Unlike the other competitors, when it comes to PaaS services IBM is mostly building its own services on third-party open-source products. For instance, the serverless feature in IBM is based upon Apache OpenWhisk, unlike Azure which has Functions, Amazon which has Lambda, and so on, which are closed. They also have other IaaS options based upon VMware and Veeam, for instance, where they are a lot further ahead in the race against AWS, and last but not least their underlying orchestration tool for infrastructure as code is based upon Terraform.

One of the things I value when working with a platform is the community around it, especially on Stack Overflow and social media channels such as Twitter. Unfortunately, IBM has the smallest community, based upon the statistics I've seen on Stack Overflow and social media and looking at meetups in the Nordics.

When it comes to PaaS services, even though IBM is focusing a lot on reusing open-source platforms such as Cloud Foundry and is also standardizing on Kubernetes, they are nowhere near the same functionality offerings as the others in the market. The core strength of IBM Cloud at the moment, as I see it, is the IaaS/bare-metal and VMware offering that they have. Another core strength is their focus on private cloud with the IBM Cloud Private solution, where they can provide a scalable PaaS solution for on-premises environments.

Google Cloud Platform:


An example of Google with Google Cloud Shell
To be honest, I hadn't done a lot of work on GCP before I started working with it about 1.5 years ago, and what I see is that Google's cloud platform is pretty similar to their search engine: focused on ease of use and speed.
In the cases where I have worked on GCP, it has also come out as the cheapest option of the four vendors, and Google offers the fastest (compute, storage, network) infrastructure as well, but don't take my word for it, see for yourself –> https://www.cloudbenchmark.com/

Google also offers the most flexible IaaS offering, where we can define custom VM instances and any type of disk configuration, and we can also use Skylake processors and multiple GPU offerings. So I can easily say that Google has the most impressive core infrastructure. However, Google does not have any bare-metal offerings like IBM has with SoftLayer, and compared with Microsoft and IBM, Google has no private cloud offering and has therefore entered into a partnership with Nutanix in order to bridge the gap –> https://www.nutanix.com/press-releases/2017/06/28/nutanix-teams-google-cloud-fuse-cloud-environments-enterprise-apps/ and they have limited support and integrations with on-premises infrastructure. Then again, this allows them to focus entirely on their public cloud offering.
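
As a small example of that flexibility, this is roughly how a custom machine type on a Skylake CPU platform can be requested with the gcloud CLI (instance name, zone and sizes here are just placeholders):

#Create a VM with a custom vCPU/memory combination on a Skylake platform
gcloud compute instances create custom-vm \
  --zone europe-west1-b \
  --custom-cpu 6 \
  --custom-memory 20GB \
  --min-cpu-platform "Intel Skylake"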

Google is also missing some of the PaaS services that AWS and Azure provide. I think Google's strategy is not to provide a bunch of different PaaS services which can overlap, but to streamline on a few selected services. Some services that Google provides are quite unique, for instance BigQuery, which is one of my favorite services! I also believe that one of the "weaknesses" that Google currently has is the ecosystem surrounding it. Many third-party companies and vendors today support or have some form of integration with AWS and Azure, but not with Google (and the same goes for IBM).

Another thing with Google is the community. Unlike IBM, I see a lot more meetups in the Nordics in particular, and many partners focusing on it as well, but little activity on social media. The last thing I want to mention is that since Google is the home of Kubernetes, they also have the best managed container engine for it on GCP, but unlike Azure, for instance, they do not have support for other orchestration frameworks such as Swarm or DC/OS as a service. Also, with Azure Container Instances and AWS Fargate now released into the wild, which focus more on the containers themselves and not on managing a cluster consisting of a set of virtual machines underneath, the game changes a little, so I hope that Google will release something here soon.

Microsoft Azure

So much has happened in Azure over the last year; they have announced multiple new regions (which makes Microsoft the vendor with the most regions, but not the largest) to cover more ground. We can also see, based upon all the announcements from Microsoft Ignite, that their core focus moving forward is Azure, Azure and Azure, with little to no announcements around their current private cloud core products. More focus has also shifted to private/public cloud offerings with Azure Stack.

Microsoft now provides an impressive list of virtual machine instances (however not as flexible and scalable as the other vendors) and an impressive list of different PaaS services, and they have done a great job on the container focus in Azure. Based upon all the announcements from 2017, I believe that Microsoft has made the largest investment into container & DevOps features of the four vendors. Microsoft is also building close integrations with existing software that they sell to customers today, to make it easy for them to move resources to Microsoft Azure moving forward, and for some customers to make it the only logical choice. I can also see this in the ETL tools they provide to make it easy for customers to move and transform data from multiple sources into Microsoft Azure. Microsoft also, in my opinion, has the best visualization options with Power BI.

Of the four vendors, I see that Microsoft is the most focused on the regular infrastructure customers. Azure provides easy support for delivering backup services for IaaS and on-prem which integrate directly with Hyper-V and VMware, as well as migration tools which make it easy to migrate workloads from on-prem or other cloud providers into Azure. Together with all the different integration options with Azure AD, and new features such as a modern version of RDS, SQL Server with Stretch Database and other scenarios such as hybrid Active Directory, this makes Azure a strong player in the market for hybrid cloud scenarios, from a pure IaaS, big data and identity perspective. Lastly, Microsoft has also released Azure Stack, where they try to bring the Azure ecosystem to on-premises workloads as well, which makes Azure even more the logical choice when a customer wants to move to public cloud.

One of the downsides with Azure is that performance is not their key asset on certain features, and limited options in certain areas of IaaS make it somewhat difficult at times. It might be that they are focusing too much on adding value-add services and forgetting to focus on the underlying platform itself.

Amazon Web Services
Amazon is the clear market leader when it comes to public cloud. Even if they do not have the most regions, they still have the largest market share. Looking at the technical capabilities and the range of different PaaS features, most of the others are nowhere near the PaaS ecosystem they have; for instance, just look at Amazon RDS and Amazon S3 and how extensive the service capability is. Having customers such as Netflix using their cloud platform is also a pretty good statement of their status. What I also see with Amazon, since they have spent the longest time in this marketspace, is the community around it. Looking at all the user groups, meetups and different communities around AWS, it is huge! I should also mention that Amazon is ranked as the clear market leader in both IaaS and storage in the Gartner Magic Quadrant. And what I often see is that if a third-party vendor supports cloud or has some form of cloud integration, you can bet $10 that it is mostly an integration with AWS.

One of the things I also noticed at re:Invent is that there was a high focus on DevOps and containers, with support for Kubernetes and AWS Fargate (which focuses on container instances instead of a managed container cluster), but also on the partnership with VMware, which will now allow customers to provision VMware ESXi hosts running on AWS infrastructure (combining the market leader in private cloud with the market leader in public cloud), which is a strong statement when it comes to hybrid cloud. They are still a bit behind IBM, especially on the VMware support and worldwide availability. Another of the large focus areas at AWS was machine learning capabilities and media services (which mostly have been areas where Azure has had the upper hand).

One of the downsides of AWS has so far been the lacking interest in hybrid or private cloud and the limited offerings for on-prem solutions (one of the few offerings being the Storage Gateway).

Other ramblings
So that was a short introduction to some of the core strengths and weaknesses of the four vendors. Looking at the community and ecosystem around the vendors, there is no denying that the largest is focused on AWS. This is just a screenshot from the Stack Overflow developer report from last year, showing AWS and Azure in the top 10 categories of questions asked.

Looking at all the threads on Reddit, it also seems like there is a lot more activity focused on AWS than on the other providers.

But of course I have focused a bit too much on the IaaS and PaaS offerings of a cloud provider; there is also no denying that the close integration between a cloud provider and other SaaS offerings, providing consistent identity and access control across the solutions, is something that Microsoft and Google especially are quite good at. For instance, having one account to access both collaboration tools and the cloud platform. So this has been a somewhat short introduction to the public cloud vendors and some of my experience with them. I would love to get feedback on your own experiences with these platforms and your take on it. The next blog in the series will focus a bit more in-depth on IaaS and look at the differences between the vendors in more detail.

So why choose Citrix over Microsoft RDS?

A question came a couple of days ago to do a refresh on this blogpost, since this is a topic that appears frequently on Twitter from time to time, so I decided to do a rewrite of it. So why should we choose Citrix over Microsoft RDS? Isn't RDS good enough in many circumstances? And has Citrix out-played its role in the application/desktop delivery market? Not yet… This question has also appeared in my head many times over the last year: what is an RDS customer missing out on compared to XenDesktop? So I decided to write this blogpost showing the different features which are not included in RDS, along with an architectural overview of the different solutions and the strengths of both of them. NOTE: I'm not interested in discussing pricing here; I'm a technologist, and therefore this is mostly going to be a feature-matrix show-off.

Architecture Overview

Microsoft RDS has become a lot better over the years, especially with the 2012 release and actually having central management in Server Manager, but a lot of the architecture is still the same. We can also now have the Connection Broker in an Active/Active deployment as long as we have a SQL Server (Note: 2016 TP5 now supports Azure SQL Database for that part). External access is driven by the Remote Desktop Gateway (which is a web service to forward-proxy TCP and UDP traffic to the actual servers/VDI sessions), and we also have the web interface role where users can get applications and desktops and start remote connections.

But the Remote Desktop application which is built into the operating system still does not have good integration with an RDS deployment to show "business applications", and with Microsoft pushing a lot towards Azure they should have better integration there to show business applications and web applications from the same kind of portal.

From a management perspective, as I mentioned, everything is still done using Server Manager (which is a GUI add-on to PowerShell, where a lot is done as well), but Server Manager is still kind of clunky for larger deployments, and it does not give any good insight into how a session is being handled; you would need System Center, digging into event logs, or third-party tools to get more information. But we can now centrally provision the different roles directly from Server Manager, and the same goes for application publishing, which makes things a lot easier!

Microsoft is also coming with RDmi, most likely next year, which will introduce an easier way to deliver RDP using App Services in Azure. This allows us to host services such as the RDmi gateway, web, connection broker and diagnostics in Azure and place our RDSH servers anywhere, most likely using some form of connector between local servers and Azure Web Apps (quite similar to what Citrix is doing with Citrix Cloud and Cloud Connectors).

Microsoft has also released Honolulu, which is a modern take on Server Manager based upon HTML5 with support for extensions, and RDmi will be supported there when it is released.

Citrix has kept the FMA architecture from the previous XenDesktop versions, and the architecture might still resemble RDS. NOTE: the overview is quite simplified, because I will dig into the features later in the blog. With Citrix we have more moving parts, yet it is still a bit simplified here. With RDS I would need a load balancer for my Gateway and Web Interface servers; with Citrix, in larger deployments, you have NetScaler, which can serve as a proxy server and load balance the required Citrix services as well. Also, with Citrix we have a better management solution using Desktop Studio, which allows for easy integration with other platforms and simple image management using MCS, plus we have Director, which can be used for troubleshooting and monitoring of the Citrix infrastructure and to provide end-user support.

The Protocol

So in most cases, what I often see discussed is HOW GOOD IS THE PROTOCOL? Again and again I've seen people state that RDP is as good as Citrix ICA, but instead of just posting a picture and letting it state the obvious: you need facts!

Luckily I’ve done my research on this part.

While RDP is mostly a one-trick pony, where we can do some adjustments in Group Policy to tune the bandwidth usage or use regular QoS, it is still quite limited by the networking stack of the Windows NDIS architecture, which is not really adjustable. NOTE: with Windows Server 2016 most traffic is redirected through the UDP port, but it is difficult to define what a given remoting channel should use in terms of KB/s.

(ThinWire vs Framehawk vs RDP) https://msandbu.wordpress.com/2015/11/06/putting-thinwire-and-framehawk-to-the-test/
With Citrix we have different protocols depending on the use-case. For instance, a good friend of mine and I ran a Citrix session over a connection with 1,800 ms of latency using ThinWire+ and it worked pretty well, while RDP didn't work that well. On the other hand, we tried Framehawk on a connection with 20% packet loss, where it worked fine and RDP didn't work at ALL.

But again this shows that we have different protocols that we can use for different use-cases, or different flavours if you will. 

Another thing is that in most cases XenDesktop is deployed behind a NetScaler Gateway, which has loads of options to customize TCP settings at a more granular level than we could ever do in Windows without messing in the registry in some cases. So is RDP a good enough protocol for end-users? Sure it is! But remember a couple of things:

  • Mobile users access using a crappy Hotel Wifi (Latency, packet loss)
  • Roaming users on 3G/4G connection (TCP retransmissions, packet loss)
  • Users with HIGH requirements in terms of performance (consuming a lot of bandwidth)
  • Connections without using UDP (Firewall requirements)
  • Multimedia requirements (3D, CAD applications)

With these types of end-users, Citrix has the better options also now with Adaptive Transport.

UPDATE: Citrix has now released EDT, which by default uses UDP as the transport mechanism (you can see a bit more about protocol benchmarking here –> http://msandbu.org/xendesktop-edt-over-netscaler-benchmarking/) and which performs a lot better than regular TCP in most scenarios. You can also see a comparison of HDX versus RDP here –> https://bramwolfs.com/2017/11/29/a-comparison-between-display-protocols-and-codecs/ (note that RDP operates at 4:4:4).

Also, as of late, Citrix now supports H.265 (which is the successor to H.264 –> https://docs.citrix.com/en-us/receiver/windows/current-release/about.html; note however that this requires a physical GPU server-side).

Image management

Image management is the top crown: being able to easily update images and roll out the changes when updates are needed, in a timely fashion, without causing too much downtime/maintenance.

With RDS there is no straightforward way to do image management. Yes, RDS has single-image management, but this is mainly for VDI setups running on Hyper-V, which is now the supported solution for it. A downside is that it requires Hyper-V in order to do this using Server Manager. It has not been shown yet how this will be affected by RDmi, but against Azure it is possible to use ARM-based templates to deploy RDS servers automatically.

Citrix, on the other hand, has many more options in terms of OS image management. For instance, Citrix has Machine Creation Services, which is a storage-based way to handle OS provisioning and changes to virtual machines, and which I described in my other post on MCS and Shadow Clones ( https://msandbu.wordpress.com/2016/05/13/nutanix-citrix-better-together-with-shadow-clones/ )

Citrix also has Provisioning Services, which allows images to be distributed/streamed over the network. Virtual machines and physical machines can be configured with PXE boot, stream an operating system down and store it in RAM. Doing updates to the image just requires a reboot.

Another thing to think about here is the hypervisor support, where in most cases PXE supports both physical and virtual machines. MCS is dependent on doing API calls to the hypervisor layer, but it already has support for:

  • VMware
  • XenServer
  • Hyper-V with SCVMM
  • Azure (with native support for most of the Azure components)
  • Amazon EC2
  • CloudPlatform
  • Nutanix

Other features that Citrix has:

  • Cloud-based services available now (services such as Citrix Cloud, XenApp Essentials and XenDesktop Essentials)
  • Remote PC (a golden gem which allows a physical computer to be accessed remotely using the same Citrix infrastructure). You just need to install the VDA agent and publish the machine, and it can then be accessed using Citrix Receiver. Even though Microsoft has RDP built into each OS, there is no central management of it and no built-in support for adding these machines to the gateway; each user has to remember the IP or FQDN in each case.
  • App-V and Configuration Manager integration and management (Citrix has App-V management capabilities directly from Studio, and they also have an integration pack with Configuration Manager which allows the use of WoL for Remote PC, for instance. Customers that lean heavily on Configuration Manager can also leverage the integration to do application distribution and direct publishing.)
  • App Layering, which allows us to do application and user layers (based upon Unidesk)
  • WEM – Workspace Environment Management, to allow more in-depth policy control and system resource management
  • NetScaler Insight – to allow better insight into the HDX channel and see how the traffic flow is distributed between screen, printer, audio and video, for instance
  • Smart Tools – allows us to use, for instance, Smart Scale, which works flawlessly in cloud settings to stop/start XenApp hosts based upon a schedule http://msandbu.org/citrix-smartscale-and-microsoft-azure/
  • VM-hosted applications (allows us to publish applications which in some scenarios can only be installed on a client computer)
  • Linux support (Citrix can also deliver virtual desktops or dedicated virtual desktops from Linux using the same infrastructure)
  • Full 3D support (Microsoft still has a lot of limitations here using RemoteFX vGPU, and it can also support DDA using Hyper-V and on Azure, but Citrix has multiple options, for instance vGPU from NVIDIA or GPU passthrough directly from XenServer, VMware or even AHV)
  • Full VPN and endpoint analysis using NetScaler Gateway (NetScaler Gateway with SmartAccess has a lot of different options to do endpoint analysis using OPSWAT before clients are allowed access to a Citrix environment)
  • Integration between Citrix NetScaler and Intune to deliver conditional access – many are adopting EMS with Intune for MDM, which now supports Citrix deployment and access via NetScaler and Azure AD integration
  • Skype for Business HDX optimization pack (allows Skype audio and video to be offloaded directly to the endpoint instead of the servers)
  • Universal Print Server (allows for easier management of print drivers)
  • System Center Operations Manager management packs (part of the Comtrade deal which allows Platinum customers to use management packs from Comtrade to get a full overview of the Citrix infrastructure; Citrix now also provides OMS modules to do monitoring of Citrix environments with OMS as well)
  • More granular control using Citrix policies (which allow us to define more settings on Flash redirection, sound quality, bandwidth QoS and much more)
  • Browser content redirection
  • HTML5-based access (StoreFront supports HTML5-based access, which opens up for Chromebook access; Microsoft is still developing their HTML5 web front-end)
  • A hell of a lot better management and insight using Director!
  • Local App Access (allows us to "present" locally installed applications into a remote session)
  • Better group policy filtering (based upon where resources are connecting from, using SmartAccess filters from NetScaler)
  • Performance optimization (using for instance PVS with Write Cache to RAM with overflow to disk, you are not restrained to the resources on the backend infrastructure, which allows for a better user experience)
  • Zone-based deployment, which allows users to be redirected to their closest datacenter based upon RTT
  • A mix of different OS versions: with Citrix we have a VDA agent that can be used on different OS versions and managed from the same infrastructure, while Microsoft has limited management for each OS version
  • SAML-based authentication to provide SSO directly to a Citrix environment

NOTE: Did I forget a crucial feature or something in particular? Please let me know!

One of the things I do feel Microsoft is doing right at the moment is Project Honolulu, developing a more HTML5/REST-based UI to make server management easier, and I sure hope that Citrix is moving in that direction as well.

Summary

So why choose Citrix over Microsoft RDS? Well, to be honest, Citrix has a lot of features which make it more enterprise-friendly.

  • Easier management and monitoring capabilities
  • Better image management and broad hypervisor/cloud support + performance optimization
  • A better, multi-purpose protocol (ThinWire, EDT, Adaptive Transport, etc.)
  • Broader support for other ecosystems (Linux, HTML5, Chromebooks)
  • NetScaler (optimized TCP, SmartAccess, load balancing)
  • GPU support for different workloads
  • Remote PC support
  • Collaboration support with Skype for Business
  • Zone-based deployment
  • Layering capabilities (personalization and application)

But there is also no denying that RDS works in most cases, and it all comes down to the requirements of the business; the most important thing in any type of app delivery platform is that it provides the best possible end-user experience.

So to sum it up, you can have a Toyota Yaris which can get you from A to B just fine, or you can have a garage filled with different cars depending on requirements, with a bunch of different features which make the driving experience better, because that is what matters in the end… end-user experience!

Review – Goliath application availability monitor

One of the issues with an RDS/Citrix/Horizon environment is actually capturing what the experience feels like for an end-user and being able to detect and see how the end-user sees the logon process. Most monitoring tools today focus on the performance of the terminal servers, looking at CPU/memory and available storage, or looking at the services that are actually running using service monitoring tools like System Center Operations Manager and so on. The issue with these is that they are infrastructure-focused, which of course is an important aspect, but we also need to look at the end-user layer. This is something that Goliath has worked closely on with the release of Application Availability Monitor, which allows us to monitor end-user applications and desktops using real-time logon tests run as an end-user from different locations. They also provide visibility into all applications and desktops being launched, with reports and drill-down analytics detailing whether logons succeeded, failed, or were slow.

They also provide screenshots of each process to make it easier for helpdesk to determine where the issue lies.

The architecture of the product is pretty simple. It consists of the Goliath Availability server, which maintains the state of the connectivity and stores the results in a SQL Server database, which can either be installed locally as part of the Goliath server or use a remote setup. NOTE: if you download the trial from their website, the product will by default install with SQL Express embedded in the installation. We also have the availability agents, which actually perform the tests against the different environments, regardless of whether it is Microsoft RDS, Citrix XenDesktop or Horizon View.

Of course, depending on what kind of environment you want to test against, there are some small differences in what we need to configure on the endpoint and in the environment we want to test. So we define a schedule to check application availability in each of our environments, and Goliath will do a step-by-step interaction and take screenshots to determine where any type of error might occur. For instance, in the example below we can see that my resource Administrative Desktop is suddenly not available.

The test is based upon a schedule which I have defined and the agent it is run from. Here we can see an example where a desktop is not available but all other components are available and working, hence what we see in the availability analysis. In a scenario where there are issues further into the session, you will get a screenshot which shows where the issue lies.

So Application Availability Monitor from Goliath can give us a clearer picture of how the environment is doing, not just by monitoring individual services and processes, but by combining this with a simulated end-user logon process to see where the process stops.

Accessing Azure Advisor using REST API

One of the cool, still fairly new, services in Azure is the Azure Advisor feature. Azure Advisor is a service which provides insight into your subscription based upon high availability, cost, performance and security (the latter using Azure Security Center).

Using the UI (portal) you can also download recommendations as either Excel or PDF. However, logging into each of these portals if you have multiple customers is a bit time-consuming.

Luckily, Azure Advisor is also accessible using the REST API, which can easily be scripted in combination with the Azure CLI from any Linux host.

In order to use it, you first need to send a POST command which will generate the recommendations, which I have been using curl to achieve.

This script needs two variables: the subscription and a token (which comes from a service principal in Azure AD).

#Define variables
subscription=subscriptionid
token=$(az account get-access-token --output tsv | cut -f1)

#Generate recommendations

curl -v --silent --request POST \
--url "https://management.azure.com/subscriptions/${subscription}/providers/Microsoft.Advisor/generateRecommendations?api-version=2017-03-31" \
--header "authorization: Bearer $token" \
--header 'cache-control: no-cache' \
--header "Content-Length: 0" --header "Accept: application/json" \
--stderr - | grep x-ms-request-id | cut -d ":" -f2 | sed 's: ::g'

#Get result from query

sec="https://management.azure.com/subscriptions/${subscription}/providers/microsoft.Security/tasks?api-version=2015-06-01-preview"
# Getting Azure Security Center feedback
curl --request GET \
--url "$sec" \
--header "authorization: Bearer $token" \
--header 'cache-control: no-cache' | json_pp

This will return all recommendations as JSON-formatted text. Piping the output through json_pp makes it much easier to parse as JSON, and from there each recommendation can be turned into a service ticket so the service desk can follow up on every recommendation that is returned.

Microsoft Azure and VMware? Not so fast!

In August this year, VMware and Amazon announced that VMware Cloud on AWS was available (at least from the Oregon region), which essentially means VMware-based infrastructure running on AWS hardware. This was also the most attended session at VMworld this year, so the interest is quite huge!

The path to cloud for many businesses needs to start with something familiar, and not a complex beast like AWS, so for many this makes sense. It is important to note here that this is a fully managed service; that is to say, VMware will install, manage and maintain the underlying ESXi, vSAN, vCenter and NSX infrastructure. Routine operations like patching or hardware failure remediation will be taken care of by VMware as part of the service. Customers will have delegated permissions to things like vCenter and will be able to use vCenter to perform administrative tasks, but there are some actions, like patching, which VMware will handle for you as part of the service. This means that VMware takes care of the core infrastructure in partnership with AWS.

Also during VMworld this year, VMware and IBM announced a partnership and released a new product called HCX, which I've blogged more about here –> http://msandbu.org/more-info-on-vmware-hcx/ and which also allows for seamless DR options.

So for VMware, partnering up with AWS, together with the long-time partnership with IBM, makes sense. It provides customers with the ability to use the market-leading hypervisor (according to Gartner: "The market remains dominated by VMware, however, Microsoft has worked its way in as a mainstream contender for enterprise use.") in a cloud scenario.

Earlier this week, Microsoft also had some big news to announce; it seems like they want a piece of the cake…
https://azure.microsoft.com/nb-no/blog/transforming-your-vmware-environment-with-microsoft-azure/
Host VMware infrastructure with VMware virtualization on Azure. Most workloads can be migrated to Azure easily using the above services; however, there may be specific VMware workloads that are initially more challenging to migrate to the cloud. For these workloads, you may need the option to run the VMware stack on Azure as an intermediate step. Today, we’re excited to announce the preview of VMware virtualization on Azure, a bare-metal solution that runs the full VMware stack on Azure hardware, co-located with other Azure services. We are delivering this offering in partnership with premier VMware-certified partners. General availability is expected in the coming year. Please contact your Microsoft sales representative if you’d like to participate in this preview.  Hosting the VMware stack in public cloud doesn’t offer the same cost savings and agility of using cloud-native services, but this option provides you additional flexibility on your path to Azure.

So this means that we will be able to provision VMware infrastructure on Azure as well, but note that this is not done together with VMware, and VMware issued a statement saying that "VMware does not recommend and will not support customers running on the Azure announced partner offering." (https://www.theregister.co.uk/2017/11/23/vmware_refuses_to_support_vmware_on_azure)

There are a couple of things to consider on this part.

* If this is just running VMware infrastructure in an Azure datacenter, and it follows the VMware HCL and does everything by the book, how can VMware deny support?
* How will the partnership work? Will the partner be managing everything inside Azure, as VMware is doing with AWS?
* We know that this is entirely different from the VMware Cloud on AWS setup.

I would bet that Microsoft would not have announced anything like this if they hadn't figured out things like support and management, so stay tuned.

Cisco Umbrella – What is it?

I've just been introduced to Cisco Umbrella; even though I had heard the name before, I hadn't actually tried it until now. Umbrella comes from the OpenDNS business purchase that Cisco did a while back, and is essentially a service to secure traffic by proxying DNS requests. In essence, you set up clients to use the public Umbrella DNS servers, which are 208.67.222.222 & 208.67.220.220, and then have a set of policies which define what end-users are allowed to access or not.

When you access your favorite website or newspaper online, your computer will do 20+ DNS requests for different third-party ads or other content which needs to be rendered inside the browser session and which you don't actually see. What if one of these domains actually contains malware or some form of bitcoin-mining JS code? That is kind of hard to know. There have of course been traditional ways to handle and secure web traffic, such as using a forward web proxy where all traffic is forwarded through a network appliance, but this doesn't scale to the same degree and has some implications for remote workers. It might also place a bottleneck on your proxy, since all layer 7 traffic is tunneled through it. Umbrella works at a smarter level, since it only checks the DNS requests a client makes and ensures that the domain does not fall into a category that is blocked by policy. If there is a domain that Umbrella finds suspicious, it will do a more in-depth analysis of the content it provides.

Umbrella can either be deployed using Umbrella virtual appliances, used as conditional DNS forwarders on your network; the virtual appliances record the internal IP address information of DNS requests for use in reports, and the VAs also provide more granular control.

Or you can just point your DNS servers to the Umbrella DNS servers, or use the lightweight client which can be installed on endpoints and protects remote workers.
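
A quick way to verify that a client actually resolves through the Umbrella resolvers, assuming dig is available, is simply to query them directly; a domain in a blocked category should come back with an Umbrella block-page address instead of its real one (internetbadguys.com is, to my knowledge, the OpenDNS/Umbrella test domain):

#Resolve a normal domain through the Umbrella resolvers
dig @208.67.222.222 example.com +short
#A domain in a blocked category should return a block-page IP instead
dig @208.67.222.222 internetbadguys.com +short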

So what about when malware authors use hardcoded downloads that point to an IP address instead of a DNS name? Umbrella also has IP Layer Enforcement, which works at the IP level to detect suspicious addresses. The Umbrella roaming client retrieves a list of suspicious IP addresses from Umbrella Cloud Services, and automatically checks for new IP addresses several times an hour from the Umbrella API. Note, though, that most services sit in different tiers of Umbrella (http://bit.ly/2B2hh2k).

The UI is pretty slick and simple to configure, where we can define block and allow lists and also just specify categories of domains that should be allowed or blocked. For instance, it blocks malware-based domains, based on a list maintained by Cisco.

So when an end-user browses to an external website which is blocked by Umbrella, they will get this 302 redirect message instead. This is because the domain is blocked and the DNS request routes the end-user to a Cisco block page instead.

Umbrella is a really interesting product which can enforce a lot of security on endpoints without a "hit" to the end-user experience. However, you need to be aware that Umbrella is not intended to enforce data loss prevention policies, which address compliance concerns due to accidental disclosure of company or customer data, and it is not intended to completely replace a firewall, which is designed to secure both internal and external network connections.

Microsoft Azure Reserved instances and pitfalls

Microsoft recently released Reserved Instances back to Microsoft Azure (yes, it was there before, but was pulled and is now back), which can provide a huge discount on running virtual machines in Azure which are static in nature. With Reserved Instances you commit to a certain amount of compute capacity for either 1 year or 3 years upfront. So how big is the difference for a single virtual machine? A single virtual machine running D4 v3 in West Europe without any discount will cost about $175 a month.

A price example from the Microsoft Azure price calculator; it does not reflect EA prices.

Using a 3-year reservation it will only cost $76 (almost a 56% discount); with Windows it will cost $211 (meaning only a 32% lower cost), which shows that we get some discount on Windows as well, but if we run with Hybrid Use Benefit it will cost the same $76. So if we are running static workloads in Azure, meaning virtual machines that run 24/7 and are not being powered on/off, it sounds like a best practice to enable RI for those instances. This feature should not be enabled for virtual machines that are powered off during the night, because with RI you need to pay for the virtual machine regardless of whether it is running or not. This gives you some predictability when it comes to compute cost, but it does not apply to other services such as storage/bandwidth and so on; a quick back-of-the-envelope calculation is sketched below.
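
Here is the rough math behind those percentages, using the example list prices above (prices vary by region and agreement, so treat the numbers as illustrative only):

#Pay-as-you-go vs 3-year reserved, D4 v3 Linux, per month (example prices)
payg=175
ri=76
echo "scale=1; 100 - ($ri * 100 / $payg)" | bc
#=> roughly 56 percent saved, but the RI is billed whether the VM is running or not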

However, there are some limitations to RI as it stands today.

* It is only available for Pay-as-you-go and EA agreements (no CSP or Open support; CSP is coming Q1 2018)
* RI only applies to VMs, VM Scale Sets, and other services that spin up VMs in a customer subscription, such as Azure Batch in customer subscription mode
* Azure RIs are available for all VM families other than A-series, Av2-series and G-series (and also VM series in preview, such as the B-series)
* For Enterprise Agreement (EA) customers, Azure Monetary Commitment can be used to purchase Azure Reserved VM Instances. In scenarios where EA customers have used all of their monetary commitment, RIs can still be purchased, and those purchases will be invoiced on their next overage bill. For customers using pay-as-you-go Azure.com, at the time of purchase the credit card on file will be charged for the full upfront payment of the Azure Reserved Instances.
* RIs are scoped to an Azure region and instance type (there is no option to choose an amount of vCPU and RAM; you need to choose an instance type such as D2_v2)

You can enable Reserved Instances by going into the Microsoft Azure portal, opening the Reserved Instances panel, and from there selecting the number of instances and the size type and accepting.

It is a shame, however, that Microsoft still ties RI to a specific set of virtual machine instances. GCP, for instance, allows us to apply committed use discounts to the aggregate number of vCPUs or amount of memory within a region, so there is no need to define a number of instances. This also allows us to be more flexible when it comes to the number of virtual machines we need for static workloads. GCP also lets us continue to use the pay-as-you-go model, since the committed use discounts are applied to our bill every month.

Comparison between Horizon Cloud and Citrix Cloud on Azure

Over the last couple of weeks I have been working with VMware Horizon Cloud for Microsoft Azure, testing the bits and pieces of the platform, and I have especially been looking at how it compares against Citrix Cloud in general. Therefore I decided to write this blog post to shed some light on how it differs in terms of deployment and operations and how to get it up and running. You can review the requirements for a Horizon Cloud for Azure deployment here –> https://docs.vmware.com/en/VMware-Horizon-Cloud-Service/services/com.vmware.hconmsazure.getstarted.doc/GUID-DC011997-CE9E-4B38-9C4F-57104226218C.html#GUID-DC011997-CE9E-4B38-9C4F-57104226218C

One thing I want to highlight is that moving VDI to the cloud does not bring any real value unless it is done for the proper reasons; in most cases the public cloud is still more expensive than running it on local infrastructure. The most common use-case is when you can benefit from the automatic scalability that cloud provides, such as companies where the number of users fluctuates, going from 10 to 100 users during working hours (7 AM – 5 PM), and where you only need to pay for what you use in terms of infrastructure cost and licensing.

The architecture is quite simple. Like Citrix Cloud, it requires that we have an existing Azure subscription with an existing Active Directory virtual machine running and a virtual network defined. After you have set up the connection, it will deploy a Horizon Cloud node (node manager), which acts as the hub between the Horizon Cloud control plane and your servers and Active Directory.

It also provides a simple update mechanism: when a new version is available, the node will automatically upgrade itself and the Unified Access Gateway running in parallel, and configuration information and system state are copied from the running SmartNode and Unified Access Gateways to the new ones. After the configuration information is copied and the checks are completed, the new SmartNode and Unified Access Gateways become active.

Architecture illustration of the node's resource groups, VMs, and subnets

To begin with let’s take a closer look at some of the capabilities that are included in the initial release of Horizon Cloud on Microsoft Azure.

* Application & Session Desktop Delivery
Ability to publish and manage RDS-hosted applications and desktops on Microsoft Azure while leveraging on-premises and cloud resources (VDI is not available yet; that is coming later)
* Hybrid Architecture
Support for both Horizon Cloud with on-premises infrastructure and Horizon Cloud on Microsoft Azure, in a single solution.
* User Experience & Access
Identity-based end-user catalog access via VMware Workspace ONE
Secure remote access for end users with integrated VMware Unified Access Gateway
Support for Blast Extreme, Blast Extreme Adaptive Transport (BEAT) protocol.
* Power Management
Ability to track and manage Microsoft Azure capacity consumption to keep costs low, allowing for scaling based upon sessions or schedule.
* Easy Deployment
Automated deployment of Horizon Cloud service components. Integration with the Microsoft Azure Marketplace to allow importing a Windows Server image on which the necessary agents get automatically applied.
* Simplified Management
Horizon Cloud is always maintained at the latest versions. Under-five-minute, self-scheduled upgrades for components on Microsoft Azure via blue-green upgrades.
Unified Access Gateway deployed automatically in Microsoft Azure.
* Pricing
Horizon Cloud Apps
Named User – $8/month
Concurrent User – $13/month

One of the first things that struck me was the price model they have for cloud, which is named user or concurrent user. If we are thinking about a global organization where task workers are roaming across different regions, concurrent user would make a lot more sense, also combined with the pay-as-you-go model of the cloud. For comparison, XenApp Essentials from Citrix costs $12/month for each named user.
Another detail is that VMware chose to do automatic deployment of their Unified Access Gateway as a virtual appliance directly to Microsoft Azure, while with Citrix you would need to deploy this on your own, or use the NGaaS service from Citrix. With the NGaaS service, however, all traffic is routed through Citrix Cloud POPs, whereas the Unified Access Gateway provides direct communication from the endpoint to the applications.

Another thing is that, when setting up agents in Azure, VMware has a limited set of virtual machine instances that they support, which are Standard_D2_v2, Standard_D3_v2, Standard_D4_v2 & Standard_NV6; I'm not sure why they only have this list, since Citrix Cloud supports all available instance types on Azure. Also, one thing to note with the NV series: with this release, GPU is supported for use only in Microsoft Windows Server 2012 R2, due to a driver limitation in the Horizon agent in Microsoft Windows Server 2016.
Setting up Horizon Cloud against Azure, we need to create an application (service principal) in our Azure AD tenant, and this application needs to have Contributor rights on the Azure subscription.
NOTE: It is important that the sign-on URL is http://localhost:8000 or else the wizard will fail.

Create App Registration screen with values for Hzn-Cloud-Principal

All this work on setting up the service principal should really be automated; Citrix Cloud, for instance, uses an Azure AD account to create the service account for you, so you don't need to gather info like the App ID, Directory ID and so on. It can, however, be scripted, as sketched below.
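
A rough sketch of what that scripted setup could look like with the Azure CLI (the name and subscription ID are placeholders, and the sign-on URL mentioned above may still need to be adjusted on the app registration afterwards):

#Create a service principal with Contributor rights on the subscription
az ad sp create-for-rbac \
  --name "Hzn-Cloud-Principal" \
  --role Contributor \
  --scopes "/subscriptions/<subscription-id>"
#The appId, tenant and password in the output are the values the Horizon Cloud wizard asks for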

The initial wizard also requires us to have a pre-created vNET. The wizard will automatically create the subnets within the vNET (Management, Desktop and DMZ). It will also handle the deployment of the access gateway.
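
If you want to pre-create that vNET from the CLI instead of the portal, something like this is enough (resource group, names, region and address space are just examples); the wizard carves out the Management, Desktop and DMZ subnets itself:

#Create a resource group and a vNET with room for the Horizon subnets
az group create --name horizon-rg --location westeurope
az network vnet create \
  --resource-group horizon-rg \
  --name horizon-vnet \
  --address-prefix 10.10.0.0/16 \
  --location westeurope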

The wizard will also automatically deploy a Unified Access Gateway, which will be accessible behind an Azure load balancer and equipped with a certificate. The only piece we need to fix is the public DNS record.

If you have a fresh account, it will also validate the quota setup for the Azure account, both to check the certificate and the user quota and to make sure that the subnets are not already defined.
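
You can check the regional quota yourself up front as well; for example (the region is just an example):

#Show current vCPU usage against the quota for the region
az vm list-usage --location westeurope --output table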

After you are done with the initial wizard, it will start to provision a jumpbox server in the Azure account and start downloading agents and other VHD files. After the jumpbox server is up and running, it will start to set up the node manager. The jumpbox will then self-destruct once the node manager is up and running, and is only provisioned/used when there is an update or when a new node manager is being built.

After the node manager is up and has successfully connected back to the control plane (Horizon Cloud), you just need to complete the wizard and set up integration with Active Directory.

After you have integrated Horizon Cloud with Active Directory, you will need to reauthenticate to VMware Cloud, and after logging in again you will also need to authenticate against the Active Directory that the node manager is integrated with.

After you are authenticated, you need to create an image which will be used to deploy your applications. You can either bring your own image or import a VM from the marketplace.

  • Horizon will essentially create a VM using an image from the Azure Marketplace (either 2012 R2 or 2016) and preinstall the agent and so on, which we can then convert to an image.
  • After the desktop from the marketplace has been created, we can go ahead and convert it to an image once we have made our adjustments to it. This makes it easy to create a master image while doing only a small piece of the image setup ourselves.
  • After that I need to create a farm based upon the image, where I have the same list of supported machine models. I also specify what kind of protocol, domain and client type I want to use. Further down I also specify the logon idle timeout value (before a session is kicked out).
  • Next I specify the update/maintenance sequence, where it will do automatic draining of each server; as a best practice for virtual machine maintenance, the VMs should be restarted from time to time to clear out cached resources or any memory leaks from third-party applications in the VM. I can also specify what the servers should do during the maintenance window, such as restart or rebuild.
  • After I have specified the number of VMs, it will start to provision the farm based upon the image and machine instance type in Azure.
  • And last but not least, do an assignment of a desktop to a set of users.

One thing I noticed, and love, is the dashboard showing issues directly related to Azure, such as quota management, since most subscriptions in Azure have a soft quota which needs to be increased.

Summary:
From the first impression, I do love the work that VMware has done with Azure in terms of integration. It supports many of the Azure features:
* Using an Azure AD service principal for authenticating with Azure and also checking the storage quota
* Using Managed Disks for the VMs provisioned in the farms
* Power management for virtual machines using the underlying ARM API
* Automatic starting of another node in a farm if one suddenly goes down

They also provide simple deployment of the Unified Access Gateway, and certificate management can be done using the Horizon Cloud HTML5 portal, which makes it easy to manage the remote access. Now, I enjoy working with NetScaler, but Citrix should do something similar to get simple deployment of remote access, where they just deploy a VPX instance directly to Azure.

A couple of things I would like to see in a future setup:
* Support for encrypted disks in Azure
* Support for other machine models and instance types in Azure
* Being able to define my own resource groups
* An OMS module for monitoring (yes please!)
* The ability to specify disk size when using managed disks

Looking forward to seeing this develop moving forward!

More info on VMware HCX

After looking into the blog post announcements on VMware HCX after VMworld, I decided to get a bit more info on what HCX actually is. This blog post will try to summarize what it is and what it can do. HCX is not a single product, but a combination of multiple VMware products which will be available as a single solution.
HCX is also a cloud product which is delivered together with HCX providers such as IBM or OVH.

So what can HCX provide? It essentially works as an extension, or bridge, between your existing infrastructure and an HCX provider. This allows, for instance, for:

  • Disaster Recovery 
  • Hybrid Cloud 
  • Migration to newer platforms

On the HCX provider side we have a VMware Cloud Foundation setup. Cloud Foundation is based on VMware vSphere, vSAN, NSX and SDDC Manager, where the last part automates and orchestrates the entire deployment process on the provider's end. Using NSX on the provider's end opens up a new way to do software-defined networking, where all traffic is wrapped in VXLAN. On the client's or customer's end we only need to deploy a single virtual machine (the HCX client), which runs on your existing VMware infrastructure. The HCX client will be backwards compatible with versions as old as ESX 5.1, and allows for management from the existing VMware console.

HCX will also come with WAN optimization and will allow us to connect our existing DC over regular internet, or we can use a direct connection to the cloud provider. Regardless, all traffic will be encrypted using AES 256-bit encryption.

So HCX will provide secure live vMotion: HCX proxies vMotion, resulting in a secure, zero-downtime live migration to the cloud over the HCX interconnect fabric described above. It will also provide built-in business continuity: HCX provides DRaaS to enable business continuity while migrating/moving applications, and will allow customers to define RPO/RTO as low as 5 minutes for VMs.

Some questions that I am left with (which I'm guessing will be answered when it is released):

1: How will the HCX client provide redundancy on the customer side? Can we set up multiple HCX clients which can load balance the traffic?
2: How can we handle disaster recovery when it comes to a layer 2 network failure?
3: How does it integrate with older versions of VMware where we don't have the web-based console?
4: Since it doesn't require us to have NSX on the customer side, and we pay for the license as part of the cloud offering, what kind of functionality will we get?

When will it be available?
November 2017 (later this month), so I am looking forward to testing this on IBM Bluemix especially. IBM is also consolidating both of their platforms (Bluemix and SoftLayer) into a single platform from a management perspective, so it should be available from the different IBM SoftLayer locations pretty soon –> http://www.softlayer.com/data-centers