
Welcome to Citrix User Group Norway Spring 2017!

As part of the local Citrix User Group board, I have been involved in planning the agenda for the last couple of months, and I can finally say we have a pretty good agenda for the summer conference. This post gives you some of the highlights of the agenda so far.

The conference takes place 6th–8th June at Strømstad Spa Hotel in Strømstad, Sweden.


The conference starts on the 6th with our annual masterclasses, which this year have been split into four parts: XenServer, Unidesk, XenMobile and NetScaler SD-WAN. This year we are fortunate to have James Bulpin from the Citrix XenServer team and Ron Oglesby from Unidesk (now Director of Advanced Tech at Citrix) host the masterclasses on XenServer and Unidesk for us!

Then, during the next couple of days, we will have great in-depth sessions on Microsoft Azure with Citrix, Workspace Environment Manager, PowerShell and Citrix, a Workspace Environment Manager deep dive, NetScaler and identity integrations, what's new in XenDesktop, and of course Geekspeak and great social interaction!

More information on the agenda, speakers and sign up here –> http://bit.ly/2bbH40B

Citrix OMS monitoring Tech Preview

So Citrix recently released a Tech Preview which allows monitoring of your Citrix infrastructure using Microsoft OMS (Operations Management Suite). Now, I've been a fan of OMS for quite some time, and I try to use it as much as possible. OMS comes in a free tier which allows uploading 500 MB of data each day; it can also be used to gather logs from Azure resources and the actions performed there, and it has a wide range of support when it comes to different agents.


Now, this module from Citrix requires an existing Citrix infrastructure, and we need to install an additional agent on the DDC, which will communicate with OMS and deliver information such as logon times directly.

NOTE: This service is not available for CSP subscriptions yet, and there are some license requirements to be able to use this OMS module.

License requirements: Licensing of Citrix management solutions for OMS depends on the licensing of Citrix XenApp and XenDesktop. For Citrix management solutions for OMS to operate properly and remain fully functional, its licensing must be covered. Covered licensing means that the following conditions are fulfilled:
* The XenApp and XenDesktop Site that you plan to monitor uses a Platinum XenApp and XenDesktop license within the maintenance program: Subscription Advantage or Software Maintenance.
* The Subscription Advantage Date (of the Platinum XenApp and XenDesktop license) falls after the Subscription Advantage Eligibility date that is embedded into the binary files of the current version of the Citrix Agent for OMS for XenApp and XenDesktop.

Now, there are two things you need for this. First, you need to download the OMS agent from Citrix, which can be found here –>

https://www.citrix.com/downloads/xenapp-and-xendesktop/betas-and-tech-previews/Citrix-OMS-solution-technology-preview.html

Next, we need to install the OMS solutions in Azure, which can be found in the Azure Marketplace. We need to configure both of these modules, since they deliver different metrics and information.


When we click on the OMS portal we get access to the Log Analytics portal, which is going to be a pretty empty shell to begin with.


Go into Settings –> Connected Sources and find the Workspace ID and Primary Key, which we will use when we install the agent on the workers.
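If you prefer PowerShell over clicking through the portal, the same values can be fetched with the AzureRM.OperationalInsights module; this is a minimal sketch, where the workspace and resource group names are examples you would replace with your own:

# Log in and select the subscription holding the OMS workspace
Login-AzureRmAccount

# Fetch the Workspace ID and the Primary Key used by the Citrix Agent setup
$ws = Get-AzureRmOperationalInsightsWorkspace -ResourceGroupName "oms-rg" -Name "citrix-oms"
$ws.CustomerId   # Workspace ID
(Get-AzureRmOperationalInsightsWorkspaceSharedKeys -ResourceGroupName "oms-rg" -Name "citrix-oms").PrimarySharedKey   # Primary Key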


Now, when you enter this information in the Citrix Agent setup and click Test Connection, the check should go green and report OK!

There is also a support tool included to verify that everything is set up properly. It is located under the install path of the OMS agent as an executable called support.exe, which can be run with the following parameters:

support.exe /user usernameindomain /domain localdomain /checkprereq

It will generate a support view and show if anything is not in place (licenses, for instance).


Now, if you do not get any error messages, you will see that information starts being pushed to OMS.


What is your backup strategy for cloud-based IaaS deployments?

When moving to public cloud, I often find that the people I talk with misunderstand what services they actually get when they buy a service in, for instance, Azure, AWS or GCP. That is, of course, something I often catch myself doing as well, not actually reading the different service descriptions. One common misconception is that redundancy == backup. All vendors provide built-in redundancy for their services, for instance with x copies of each storage object.

Now, redundancy allows our VMs to continue to run in case of a physical drive failure, for instance, but it does not give us the ability to get back old data if we managed to overwrite or delete a file which we really needed.

And this is exactly the issue when running virtual machines with any of the large cloud providers: they provide the redundancy, but backup of the data is completely our own responsibility.

So what options do we have?

Amazon Web Services
When deploying virtual machines on Amazon Web Services, the virtual hard drives are placed on a service called EBS (Elastic Block Store), a block-based storage solution which can be used for structured data, such as databases, or unstructured data, such as files in a file system on the volume.

Amazon EBS provides the ability to create snapshots (backups) of any Amazon EBS volume. It takes a copy of the volume and places it in Amazon S3, where it is stored redundantly in multiple Availability Zones. The first snapshot is a full copy of the volume; ongoing snapshots store incremental block-level changes only.
Now there are some issues with this solution.

Since this is a pure snapshot solution of an EBS volume that does not utilize VSS (Volume Shadow Copy Service), it is, to be blunt, not a proper backup service: it only provides simple restore capabilities, limited to a full restore of the snapshot, with little granularity.
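For illustration, creating and restoring such a snapshot with the AWS Tools for PowerShell is roughly a one-liner each; a minimal sketch, where the volume and snapshot IDs are placeholders:

# Create a point-in-time snapshot of an EBS volume (stored redundantly in S3)
New-EC2Snapshot -VolumeId "vol-0123456789abcdef0" -Description "Nightly snapshot of data volume"

# Restoring means creating a brand-new volume from the snapshot and attaching it to an instance
New-EC2Volume -SnapshotId "snap-0123456789abcdef0" -AvailabilityZone "eu-west-1a"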

Google Cloud Compute
Google has pretty much the same approach as AWS. Google also provides snapshots of persistent disks running in GCP; however, GCP has one small advantage in that it is able to do VSS-based disk snapshots directly from a persistent disk.

But again, it suffers from the same downsides as AWS: it is still only a snapshot solution, with limited restore options, no file-level granularity, and no ability to define a backup schedule or retention policy.
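As an illustration, taking a snapshot of a persistent disk with the gcloud CLI looks roughly like this (disk, zone and snapshot names are placeholders); any scheduling and retention around it is still up to you:

# Snapshot a persistent disk; on Windows instances the --guest-flush flag requests a VSS-consistent snapshot
gcloud compute disks snapshot my-data-disk --zone europe-west1-b --snapshot-names my-data-disk-snap1 --guest-flush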

Microsoft Azure
Microsoft has gotten a lot further in this space with its backup offerings. Microsoft is the only vendor of the three which has backup and disaster recovery solutions as finished services. Azure Backup provides backup capabilities for on-premises workloads, which could be clients, physical servers or virtualized workloads.

But they actually have a pure backup solution for virtual machines in Azure, where we have the ability to define a backup schedule and retention policy, and which allows for a more granular level of restore, for instance item-level restore –> https://azure.microsoft.com/nb-no/blog/instant-file-recovery-from-cloud-backups-using-azure-backup/

The backup service is billed per protected instance plus storage cost, and allows for fully application-consistent backup of virtual machines running in Azure (both Windows and Linux).
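As a sketch of how enabling this looks with the AzureRM PowerShell module (the vault, policy, VM and resource group names here are examples):

# Point the session at the Recovery Services vault holding the backup configuration
$vault = Get-AzureRmRecoveryServicesVault -Name "backup-vault"
Set-AzureRmRecoveryServicesVaultContext -Vault $vault

# Pick a backup policy (schedule + retention) and protect the VM with it
$policy = Get-AzureRmRecoveryServicesBackupProtectionPolicy -Name "DefaultPolicy"
Enable-AzureRmRecoveryServicesBackupProtection -Policy $policy -Name "myvm01" -ResourceGroupName "prod-rg"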

It is still missing some capabilities which I feel should be there:
* Simple file-level restore directly in the Azure portal, for a native experience when finding files and folders
* Better notification options! Many of the services in Azure have webhooks or Azure Automation jobs which can be triggered in case of failure; this should include Azure Backup as well.
* File-level restore for certain applications; Microsoft should have better integration options with, for instance, Active Directory running in-guest, to restore an AD database directly from within the Azure portal
* Integration with Office 365! Today we need third-party tools to back up Office 365; this post focuses mostly on IaaS, but I still needed to mention it.
* Storage policies to move backup data to cold tiers

But compared to the other two, Microsoft is a long way ahead when it comes to a good, native backup solution for cloud workloads.

In-guest VM Agent – Veeam
Now, all three vendors have one thing in common regardless of how good their backup/snapshot solution is: the service is contained within their own platform. I can, for instance, use the Azure Backup Windows agent on instances running in GCP or AWS, but that agent is pretty limited in terms of capabilities.

Veeam, however, which so far has been best known for its virtualization availability suite, is now well under way with agent support for Windows. Since you cannot get direct access to the underlying hypervisor at any cloud vendor, you are limited to agent-based backup solutions (or VSS snapshots combined with storage API reads). The Veeam agent is far from limited in terms of capabilities: it will have full application-aware processing for Microsoft Exchange, Active Directory, SQL Server, SharePoint, Oracle and file servers.

It will also be able to integrate with a Cloud Connect service provider to do centralized backup of all customers running in the cloud (GCP, Azure, AWS) to a specific region where the Cloud Connect repository is placed. It will also provide guest file indexing to make it easier to restore file items directly from within the UI, and you can even combine it with regular Veeam Backup & Replication to restore VMs directly to Azure, providing a kind of instant-on capability.

Veeam has published a summary of the features contained in the different editions of Veeam Agent for Microsoft Windows.

Summary
Now, this summarizes the backup solutions of the different cloud vendors and what they provide in terms of backup services. One thing is certain: none of the cloud providers delivers backup by default, and it is your responsibility to provide a backup solution for your infrastructure.

Security aspects when moving to the public cloud – PaaS

So, this is a follow-up on my previous blog post on security aspects in the public cloud –> http://msandbu.org/security-aspects-when-moving-to-the-public-cloud-iaas/ In this blog post I will focus more on what security aspects you need to think about when considering platform services.

Platform services come in different shapes and colours, but in essence you have a vendor which delivers a service to us as consumers, and we are responsible for the data and/or the application running on the service.

PaaS consists of a range of different services, for instance:
* Web Applications / Mobile Applications
* Database services
* IoT services
* Big Data Analytics
* Microservices / Container Services

So what do we need to consider?

1: Secure and protect the data
When using a platform service such as IoT services or Big Data analytics, the most important factor is protecting the data which is being generated or processed. That data might contain business-critical information, or it might be your entire business value, so it is crucial that external users do not gain access to it.

And of course there is the other factor to consider: internal users might try to download the information and resell it to the competition. So, if possible, also encrypt the data where it is stored, so that users cannot read the information directly outside of the platform.

Lastly, you might also need a number of copies to ensure that the data is protected, since most platform services provide data redundancy but not backup. Many platform services, like SQL databases, do have a backup service included, but you have to enable it to get copies of your data.

It is important to remember that not all services include backup, such as web services or web apps. In that case, the software repository hosting the code for the web application is the crucial piece to have a backup of. For instance, products like BackHub can be used to back up GitHub repositories –> https://backhub.co/ and VSTS has its own built-in backup options for specific repositories.

2: Ensure encrypted and secure use of APIs

When working with the different PaaS services, always ensure that you are using secure API access to the different services. Most PaaS services are encrypted by default using TLS/SSL, but many of them use an API key or authentication key to access the service, so make sure that this key is not directly exposed, or use some form of secrets-management product such as Vault from HashiCorp –> https://www.vaultproject.io/ or Azure Key Vault or AWS Key Management Service, for instance.
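As a small sketch of that pattern with Azure Key Vault (the vault and secret names are made up for the example), the key is stored once and fetched at runtime instead of living in code or config files:

# Store the API key once as a secret in the vault
$secret = ConvertTo-SecureString "my-api-key-value" -AsPlainText -Force
Set-AzureKeyVaultSecret -VaultName "app-vault" -Name "PaymentApiKey" -SecretValue $secret

# The application, running with an identity granted 'get' rights, retrieves it at runtime
(Get-AzureKeyVaultSecret -VaultName "app-vault" -Name "PaymentApiKey").SecretValueText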

3: Verify application security
Now, when it comes to hosting PaaS services such as web applications, more of the responsibility for the service is left up to us. The vendor handles the physical security and the guest OS security, but it is our job to ensure that there are no application exploits in our web application which would allow attackers to access, for instance, the SQL backend database. We could, for instance, be hosting an eCommerce website where our entire customer base is stored in a backend database; if someone were to gain access to that database, the damage could be irreparable.

Now, many of the cloud providers offer security tools which can assist in uncovering security exploits, such as Azure Security Center, and most of the vendors also provide application-level firewalls which can help a great deal in blocking/detecting the OWASP Top 10 attacks. Make sure to check what the vendors provide in terms of these services.

4: Role-based access within the platform services

Having clear role-based access control within the platform is crucial, both to ensure that job responsibilities are kept in place and to ensure that no one has access to something they should not have. Most platform vendors, such as Microsoft, Google and AWS, have done a great deal to ensure that we can define granular role-based access to the different services. For instance, we can give a particular user only read rights to certain services, but full rights to a particular service such as database services.
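In Azure, for instance, such an assignment is a couple of PowerShell one-liners; a minimal sketch, where the user, roles and scopes are examples:

# Read-only on the whole subscription...
New-AzureRmRoleAssignment -SignInName "kari@contoso.com" -RoleDefinitionName "Reader" -Scope "/subscriptions/<subscription-id>"
# ...but full rights on the SQL servers in one resource group
New-AzureRmRoleAssignment -SignInName "kari@contoso.com" -RoleDefinitionName "SQL Server Contributor" -Scope "/subscriptions/<subscription-id>/resourceGroups/prod-rg"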

In that regard, identity lifecycle is an important aspect: ensure that when an employee leaves the company, his/her rights are automatically revoked. This makes sure that they cannot go to a rival company, log in and fetch information.

5: Identity policies and monitoring

If someone gets access to your account, which could be a global admin user with access to all your services in Amazon, Azure or GCP, that person could destroy an entire business. Always ensure that you use MFA or some form of two-factor authentication for your cloud users, and set a strict password policy. Also monitor your user activity: many of the providers have tools to monitor user logins, actions and so on. You can also often restrict login access to a certain IP address range, to ensure that only known IP addresses are allowed to log in.

(Screenshot: user activity on a single VM in Azure, where events are gathered and logged for analytics purposes.)

So, this sums up some of the different security aspects one should consider when starting to use PaaS services at a cloud provider.

Windows Server Containers with Overlay networking for Swarm

With today's release of the overlay network driver for Windows Server, it is now possible to run Windows Server in Docker swarm mode and have cross-host communication between containers. This has been available for Windows 10 for some time with the Creators Update. For those who aren't aware of the inner workings of Windows and Docker Swarm, I want to elaborate on this in this blog post.

Docker Swarm overlay mode allows for benefits such as:
* You can attach multiple services to the same network.
* By default, service discovery assigns a virtual IP address (VIP) and DNS entry to each service in the swarm, making it available by its service name to containers on the same network.

Now, with the introduction of overlay networking in the Windows 10 Creators Update, Microsoft introduced overlay networks built on capabilities from the Hyper-V switch and the Azure extension called the Virtual Filtering Platform (VFP). This makes it possible to connect container endpoints running on separate hosts to the same, isolated network.

NOTE: As a simple illustration of the setup, you might have a four-node cluster with two services on different overlay networks, but it is possible to have multiple container services use the same overlay network. When a container needs to communicate with another host, the traffic is encapsulated by the Host Network Service and forwarded to the VFP extension on the Hyper-V switch, and across to the other host.


There is one thing to note, however: Windows Server and Windows 10 only support DNS round-robin endpoint service publishing, not routing mesh, which is the default option in Docker Swarm. Routing mesh is a layer 4 load-balancing capability where a swarm load balancer takes care of the internal load balancing, and the service is made available on all nodes in the cluster through the ingress load-balancing solution in swarm. It also makes use of an ingress overlay network to publish the VIP.

Now, since Windows only supports DNS round-robin, how will that affect load balancing? We can configure the service to use DNS round-robin directly, without a VIP, by setting --endpoint-mode dnsrr when we create the service; that is the only option available as of now, but I'll get back to that in a bit.

After you have installed the update linked below, we can go ahead and install Docker on the servers. In this example I have two Windows servers running as container hosts.

Install-Module -Name DockerMsftProvider -Repository PSGallery -Force

Install-Package -Name docker -ProviderName DockerMsftProvider

Restart-Computer -Force

Next we need to set up Docker Swarm, which requires some firewall openings on the hosts for inter-node communication (a PowerShell sketch for these rules follows the list):

* TCP port 2377 for cluster management communications
* TCP and UDP port 7946 for communication among nodes
* TCP and UDP port 4789 for overlay network traffic
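A minimal PowerShell sketch for opening these ports on each host (the rule names are just examples):

New-NetFirewallRule -DisplayName "Swarm cluster mgmt" -Direction Inbound -Protocol TCP -LocalPort 2377 -Action Allow
New-NetFirewallRule -DisplayName "Swarm node comms TCP" -Direction Inbound -Protocol TCP -LocalPort 7946 -Action Allow
New-NetFirewallRule -DisplayName "Swarm node comms UDP" -Direction Inbound -Protocol UDP -LocalPort 7946 -Action Allow
New-NetFirewallRule -DisplayName "Swarm overlay TCP" -Direction Inbound -Protocol TCP -LocalPort 4789 -Action Allow
New-NetFirewallRule -DisplayName "Swarm overlay UDP" -Direction Inbound -Protocol UDP -LocalPort 4789 -Action Allow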

From one of the hosts (which will be the swarm manager), run the following command, inserting the host's IP address:

docker swarm init --advertise-addr=<HOSTIPADDRESS> --listen-addr <HOSTIPADDRESS>:2377

After you run this command, you will get a join command to run on the nodes you want to add to the swarm.
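The join command looks something like this (the token and manager address below are placeholders for the values the init command prints):

docker swarm join --token SWMTKN-1-<token> <MANAGERIPADDRESS>:2377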


By typing docker node ls on the swarm manager, you will see the list of nodes in the swarm cluster.


Now that we have a cluster we need to create the overlay network.

docker network create --driver=overlay <NETWORKNAME>


Finally, we create a service attached to the overlay network, using DNS round-robin endpoint mode:

docker service create --name=win_s1 --replicas=x --endpoint-mode dnsrr --network=overlaynetwork


More info here –> https://blogs.technet.microsoft.com/virtualization/2017/04/18/ws2016-overlay-network-driver/

Download here –> https://support.microsoft.com/en-us/help/4015217/windows-10-update-kb4015217

Status of GPU support with the cloud providers AWS, GCP and Azure

So, as part of the new role I have at my new company, I've expanded my horizon to include AWS and GCP as well as Azure. Because of my background with Citrix & VMware, I also get a lot of questions about GPU support as part of that.

Now, I have previously blogged a bit about the N-series in Azure: http://msandbu.org/xendesktop-7-11-released-and-testing-with-n-series-in-azure-using-dda/
The N-series is great, but it has some drawbacks, so I decided to write a short summary of the different cloud vendors, their support for GPU instances, and what kind of GPU series they offer.

Microsoft Azure:
Azure supports GPUs with its N-series. The N-series uses the DDA feature in Hyper-V on Windows Server 2016, which is in essence GPU pass-through. Since Microsoft has locked the GPU to the N-series, it is only available on a limited set of instance sizes. The N-series has been GA since December 2016, but has one notable issue: disk I/O. As of now, the N-series includes an SSD drive which is only available as the temp disk (D:\ drive) and has no support for Premium Storage, which leaves us with regular HDD data disks limited to 500 IOPS. N-series instances are equipped with NVIDIA cards, either the NC (K80 cards) or the NV (M60 cards). Certain instance sizes come with InfiniBand, which also allows for pure HPC workloads, for instance. Of course, Azure comes with per-minute billing, which lets us easily spin up instances for short-lived workloads.

Google Cloud Platform:
Google also offers GPUs, using the same pass-through model as Azure. Today Google offers the K80 cards, and the GPU feature is still in beta, but they have also promised support for the AMD FirePro S9300 and NVIDIA Tesla P100 shortly. Google, on the other hand, allows you to attach a GPU to any type of instance, which offers more flexibility, and they offer per-minute billing as well. Because of this more flexible approach, you can for instance easily scale from 1 to 4 cards using their CLI tool. Google also allows us to use SSD-based storage attached to the machine instance, which lets us combine GPU with high-end IOPS on storage. However, the feature is still in beta; the announcement came out in February.
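As an illustration of that flexibility, attaching GPUs at instance-creation time with the beta CLI looks roughly like this (instance name, zone and machine type are examples):

# Attach two K80s to a regular n1 instance; GPU instances must allow termination on host maintenance
gcloud beta compute instances create gpu-instance --machine-type n1-standard-8 --zone us-east1-d --accelerator type=nvidia-tesla-k80,count=2 --maintenance-policy TERMINATE --restart-on-failure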

Amazon Web Services:
Amazon has two different offerings when it comes to GPU. The P2 series uses GPU pass-through and can have up to 16 K80 cards (NVIDIA) on a set of predefined instances. On AWS you can also have SSD-based storage as part of the configuration, or provisioned-IOPS storage. You can also use the GPUDirect capability, which in essence uses RDMA-based technology for low-latency, high-speed DMA transfers to copy data between the memories of two GPUs on the same system/PCIe bus. Amazon Web Services also recently announced Elastic GPU, now in preview, which allows us to attach a virtual GPU with a set amount of virtual GPU memory to any type of EC2 instance. Since this is a virtual GPU, it might have some limited capabilities when it comes to OpenGL and DirectX, but AWS promises that it will have good OpenGL and DirectX support.

So, what about EUC workloads? A GPU is pretty worthless if you do not have a product which supports and works with the platform. Citrix has provisioning support for both Amazon and Azure and can therefore leverage the different GPU instance types directly. Citrix also supports Windows Server 2016, which can benefit the most from the DDA feature in Azure for the N-series. Citrix does not have any direct support for GCP; even though the software can easily be installed, we do not have NetScaler directly available, which leaves us without a lot of the benefits for remote workers. Amazon has its own EUC offerings with AppStream, using an HTML5 protocol, and Workspaces, which uses Teradici; my guess is that Amazon will provide the option to boost Workspaces with Elastic GPU when it becomes available. However, I can assure you that Microsoft RDS works on all platforms, and with the updates in Windows Server 2016 when it comes to GPU use, it does provide a decent user experience.

The verdict?
Amazon is leading the race, and Google is not far behind. Microsoft needs better options when it comes to more flexible GPU support and higher IOPS, for instance Premium Storage disks and SSD on the C:\ drive. And also: RemoteFX vGPU support for Azure would provide a good, and maybe cheap, option to deliver GPU VDI workloads on Azure.

I’m headed to Citrix Synergy and speaking!

So, for the first time in my life I'm actually headed to Citrix Synergy! I have always been present at Synergy the last couple of years using the live streaming options, but this time I'll actually be there and join a bunch of my fellow CTPs and other people I've been waiting to meet for some time, to have a good time. Besides being there, I'll be doing some other things as well.

1: I'm having a fireside chat on Thursday – SYN414: Access and authentication options in a Citrix environment – which is a casual 30-minute walkthrough of the different access and authentication options in Citrix, ranging from NGaaS, Azure AD, Google OAuth, Optimal Gateway Routing, HTML5 and more.

I have no idea what a fireside chat is, so since I'm from Norway I'll bring my own firewood; we have plenty of it here.

You can read more about it here –>  https://citrix.g2planet.com/citrixsynergy2017/public_session_view.php?agenda_session_id=191&conference=synergy

2: I'll be part of the Hot Topics Roundtables on Tuesday, where I'll be at the table talking about cloud services together with a whole bunch of smart people. You can see more of the topics and the people here –> https://citrix.g2planet.com/citrixsynergy2017/public_session_view.php?agenda_session_id=246&conference=synergy

So if you are there, drop by and say hi!