Monthly Archives: February 2017

NetScaler Autoscaling

So I had a case appear earlier today where we needed to scale out a NetScaler deployment within a container environment. More specifically, we needed a good way for NetScaler to automatically add new entries to a load balanced vserver when a container orchestration environment scaled out a stack with new containers.

In most cases a container orchestration platform already has a built-in load balancing mechanism, such as HAproxy, which is used to load balance between the different containers within a host. Typically you have multiple container hosts, and therefore multiple HAproxy instances, each with its own IP address and exposed port representing a service.

Now we could of course put each HAproxy IP behind a round robin DNS entry, but that would not give us proper health monitoring, so if a host went down, the service would suddenly be unavailable for some of the users connecting to it.

So we often need a new load balancing layer above the HAproxy tier to provide load balancing between the different HAproxy instances with proper health monitoring. The issue is that we need to insert the new container instances when they get provisioned so they are automatically load balanced.

Now regular NetScaler has no built-in integration with container orchestration environments; this is only supported when using, for instance, MAS or NITROX, which have Service Registration / Discovery integration with different container environments.

However, there is an option to set up DNS-based autoscaling in NetScaler, which populates service members in a load balanced virtual server based on the DNS entries that are part of the same DNS name.

And since many container environments manage their own DNS namespace, this can be used to publish a service using the same FQDN.
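Since the whole approach hinges on every HAproxy instance being published under one FQDN, a quick way to check what NetScaler would see is to resolve the record yourself. A minimal sketch in Python (the FQDN used here is just a local placeholder, not from an actual deployment):

```python
import socket

def resolve_ips(fqdn):
    """Return the unique IPv4 addresses behind an A record,
    i.e. the server list a DNS-based autoscale group would see."""
    infos = socket.getaddrinfo(fqdn, None, socket.AF_INET, socket.SOCK_STREAM)
    return sorted({info[4][0] for info in infos})

# Example: inspect the addresses published under a service FQDN
# (using localhost as a stand-in for the container platform's record)
print(resolve_ips("localhost"))
```

In a real deployment you would point this at the service FQDN the container platform manages, and each scale-out event would show up as an extra address in the list.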

From a NetScaler perspective, first we need to define a new service group and set the AutoScale mode to DNS.


Then we want to add a service group member: select Server Based, then click on the + sign.


Define the FQDN of the host name (A record) you want the autoscale setup to use.


Now after you have specified the FQDN and clicked OK, you will see all the servers attached to that A record on the DNS server the NetScaler is pointing to.
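For reference, the equivalent setup from the NetScaler CLI looks roughly like this (the service group name, server name, FQDN and vserver IP below are all hypothetical examples, not from an actual deployment):

```
add serviceGroup sg_containers HTTP -autoScale DNS
add server srv_containers web.service.local
bind serviceGroup sg_containers srv_containers 80
add lb vserver vs_containers HTTP 10.0.0.100 80
bind lb vserver vs_containers sg_containers
```

NetScaler then periodically re-resolves the A record and adds or removes service group members as the container platform scales the stack in or out.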


and voila!

Getting started with GDPR and Cloud Providers

Now I had a session at Hackcon this week about security in the cloud, and one of the important aspects of it is the guidelines and regulations that all cloud providers need to follow. One of these regulations is GDPR, which takes effect in May 2018. There is a lot of information in the regulation, but I wanted to summarize the highlights.

* The regulation applies if the data controller or processor (the organization), or the data subject, is based in the EU. Furthermore, the regulation also applies to organizations based outside the European Union if they process personal data of EU residents, which many of the current cloud providers do.
* It gives more power back to us consumers in terms of the rights we have towards the provider.
* It describes in more detail how we as consumers can get insight into how a provider handles our data and uses our information.
* It makes it easier for us to “get deleted” or be “forgotten” at a provider like Google or Microsoft.
* It allows us to ask a provider to move our information to another provider.
* If data is to be collected or used, it must be consented to, and that consent can also be withdrawn at any time.

If a data breach happens, the provider needs to notify the supervisory authority within 72 hours, and also notify the affected individuals if impact is determined.

Now why would businesses spend the time to comply with this regulation? Well, they can't afford not to. If a business doesn't have proper documentation to comply with the regulation, or has a serious data breach, it can be fined up to €20,000,000 or 4% of the annual worldwide turnover of the preceding financial year (in the case of an enterprise), whichever is greater.
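The “whichever is greater” clause means the cap grows with company size; a small illustration (the turnover figures are hypothetical):

```python
def gdpr_max_fine(annual_turnover_eur):
    """Upper bound of a GDPR fine: EUR 20 million or 4% of the
    annual worldwide turnover, whichever is greater."""
    return max(20_000_000, 0.04 * annual_turnover_eur)

# A company with EUR 100M turnover: the flat EUR 20M cap applies
print(gdpr_max_fine(100_000_000))    # 20000000
# A company with EUR 1B turnover: the 4% rule kicks in -> EUR 40M
print(gdpr_max_fine(1_000_000_000))  # 40000000.0
```

So for any business with more than €500 million in annual turnover, the percentage rule is the one that bites.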

You can read the entire text of the regulation here –>

Now many of the cloud providers are already getting ready for GDPR, or are already done, since it requires a lot of changes to the infrastructure and tools they have in place. Of course, getting GDPR approved or certified requires a lot of investment and money, and it is also a good indication that a vendor has invested in a high level of security (they couldn't afford otherwise if a data breach happened, for instance, because of lacking routines).

GDPR will also most likely separate the larger cloud providers from the small ones, since the small ones might not have the money to invest in following the GDPR guidelines, which might put them in the shadow of the large providers that have invested technically and have guidelines in place.

So with now close to one year to go before it takes effect, where are the cloud providers in terms of GDPR?

Microsoft has already written a lot of information and guidelines on using their services under GDPR –> and on a technical side note, Microsoft has stated that their services will be GDPR compliant by May 2018.

AWS is also a long way towards getting compliant with GDPR –> but AWS does not have as much information available on GDPR as Microsoft has.

But regardless of cloud provider, it is important to understand how the responsibility is shared and who is responsible if something happens in a public cloud setup.

AWS Shared Responsibility

So let's look at the shared responsibility model from AWS (which is the same as for other cloud providers). Say we as a customer have a web service running on top of AWS/Azure/GCP and use their services to host web services for our customers, for instance an e-commerce website. In this case we might handle multiple end customers with sensitive information.

If a malicious attacker managed to get through our web service and access the sensitive information we store there, we as the customer would be responsible for the data breach and therefore subject to the fines that come with GDPR. In this case the cloud provider is of course responsible for all the physical aspects, which are therefore not our responsibility. If we were to manage this ourselves, we would be responsible for the entire stack and therefore have a higher level of responsibility.

So to summarize: whether you are looking at using cloud services or not, see how the vendor is approaching GDPR, which will be an important aspect moving forward. If you are using cloud services, think about what your responsibility is, and if you are hosting services for customers, see how GDPR affects your services and what countermeasures you need to take.

Is Citrix headed in the right direction?

There has been a lot of buzz around Citrix lately with the sale of the GoTo brand, their statement that they are going back to the “core” of their products, and the statement that their main strategy is cloud first. There have been a lot of new features / products announced or added over the last 6 months. They now even have a roadmap showing when new releases will be coming to the market: the XenApp 2017 release schedule.

* XenApp Essentials (Azure-only desktop/app delivery)
* Windows 10 on Azure (Azure only Windows 10 VDI)
* Buying Norskale (user environment management)
* Buying Unidesk (application layering)
* Integration between NetScaler and Intune / AzureAD
* Multiple new releases 7.12 & 7.13
* Write-back cache for Azure MCS deployment
* New adaptive transport protocol EDT (better suited to cloud scenarios where latency is often higher)
* Authentication support for AzureAD
* Hybrid Use benefits with Azure
* Smart Tools to enable integration against Azure, AWS (Smart Scale)
* SD-WAN announced for Azure
* NetScaler Gateway-as-a-service
* Announced Managed Citrix Receiver for Intune
* NetScaler MAS Hybrid Management

So when Citrix announced their strategic partnership with Microsoft, I did not expect it to be at this scale. With multiple acquisitions to help them strengthen their core product portfolio, for instance Unidesk, which supports both on-premises deployments and Azure, we can clearly see which vendor Citrix is betting on.


And it of course opens up a lot of options when it comes to designing solutions running on Microsoft Azure. For instance, utilizing SD-WAN capabilities to ensure optimal routing paths for ICA sessions, and then using Adaptive Transport over UDP to get optimal connections in cloud scenarios where latency is often higher than average. Adding to the mix that Citrix can now provision Azure VMs using MCS with write-back cache, one can only guess how Unidesk is going to be added as well, to have a single consistent way of handling the OS, applications and user profile settings.

This can of course be set up in a fully self-managed environment, or you can hand more of the control over to Citrix, for instance by using XenApp Essentials or Citrix Cloud in general to leave some of the management to Citrix.

I do believe that Citrix is moving down the right path; there is a lot of innovation happening now, along with further enhancements to existing products. Even though it might be a bit foggy at the moment, with a lot of renaming, product mixing and acquisitions, I think we just need some time for the fog to clear so we can see it properly.

Also, maybe Citrix should look into the Cloud Access Security Broker space as well? Since one of their core features is delivering secure apps and desktops, and since many applications are now moving away from regular Windows-based applications towards web-based applications, having a way to securely manage web applications using rules, in addition to Windows desktops, would be a smart move for their cloud strategy going ahead.

Huge leap into 2017! CTP, Vanguard and vExpert

So far, 2017 has started out with a bang, and it has been a crazy start. A couple of weeks back, I was honored to be one of seven new people worldwide to be awarded CTP (Citrix Technology Professional).


Now this is a great personal achievement for me, since I've been trying to get this for the last three years, and I've spent a lot of time reaching that goal.
A couple of weeks back I was also renewed as a Veeam Vanguard, which now makes this my third year in the program. Vanguard is one of my favorite programs and allows me to interact closely with the product group.
And last week I was notified that I was renewed as a VMware vExpert for my second year. Last year I was also one of the few individuals who were part of the more exclusive vExpert program for VMware NSX. Funny thing that they call it a vExpert even though I don't feel like an expert at all.

A couple of weeks ago I was also fortunate enough to have a session at NICCONF and talk about Desktop as a Service, where I discussed news and updates from Microsoft, Citrix, VMware and other vendors like Teradici, Workspot and Amazon WorkSpaces.


And next week I'm having a session at the yearly conference here in Norway called Hackcon, where I will be talking about security aspects of moving to the public cloud. So a lot is happening in 2017, and it's still only February, but I am fortunate to be part of all these different communities, working with many different things every day and still learning new stuff like on the first day I started working in IT.

Stay humble, Stay curious!