Monthly Archives: May 2013

Load-balancing Exchange 2013 on Citrix Netscaler

So I've gotten this question a lot over the last couple of days, and I see it in the search term statistics on the blog. So is it possible to load balance Exchange 2013 on NetScaler? Yes!
Now, Microsoft usually has a list of "certified" load balancers that can be used with Exchange, but one hasn't been published for Exchange 2013 yet.
You can see the one for Exchange 2010 here:
http://technet.microsoft.com/en-us/exchange/gg176682

Now, the problem with load balancing Exchange 2010 on an HLB (Hardware Load Balancer) was that you had to do it at L7 and use persistence. Why? Because of the way Exchange 2010 operated: when a user connected to OWA or another Exchange protocol, the session was bound to that particular CAS server for the duration of the connection. (The CAS rendered the mailbox, and if the connection moved to another CAS the user would have to re-authenticate.)
You can see the old documentation for Exchange 2010 and NetScaler here:
http://www.citrix.com/content/dam/citrix/en_us/documents/products/netscalerexchange2010.pdf

With Exchange 2013, the roles and how they function have changed. First of all, we now have only two roles: the Mailbox role and the Client Access Server (CAS) role. The CAS role now acts only as a proxy, forwarding traffic to a Mailbox server and handling the protocol redirect logic.
These changes make it easier to set up load balancing, because we now have the option to load balance at L4 and are not dependent on session persistence. We just need to define a VIP, a SNIP and a service (plus maybe a certificate for SSL offload purposes).
You can read more about it here:
http://technet.microsoft.com/en-us/library/jj898588(v=exchg.150).aspx

Here are the ports and services used by Exchange 2013:

Port 443: Autodiscover (AS), Exchange ActiveSync (EAS), Exchange Control Panel (ECP), Offline Address Book (OAB), Outlook Anywhere (OA), Outlook Web App (OWA)
Port 110 and 995: POP3
Port 143 and 993: IMAP

A note though: SSL offloading is not supported on Exchange 2013 yet…

Citrix does not have a wizard you can go through to set this up, so you need to fill in all the blanks yourself. :)

Here is a simple setup for load balancing OWA on a NetScaler VPX.

First I define the servers I need to add to the server list.

Then I create a service (in my case I have OWA set up on port 80, which is not recommended) and bind a monitor to it.

Then I create a virtual server and attach the service I just created to it to set up the load balancing.

And voila!
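For reference, the same setup can also be done from the NetScaler CLI. This is only a rough sketch of the steps above; the server names, service names and IP addresses are made up, so adjust them to your own environment:

add server srv-exch01 10.0.0.11
add server srv-exch02 10.0.0.12
add service svc-owa-01 srv-exch01 HTTP 80
add service svc-owa-02 srv-exch02 HTTP 80
bind service svc-owa-01 -monitorName http
bind service svc-owa-02 -monitorName http
add lb vserver vs-owa HTTP 10.0.0.100 80
bind lb vserver vs-owa svc-owa-01
bind lb vserver vs-owa svc-owa-02

(The 10.0.0.x addresses are placeholders, and the http monitor is one of the built-in monitors.)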

Now, if I needed to set up NetScaler for other Exchange services such as ActiveSync, SMTP and so on, I would use Content Switching to send each user to the correct endpoint on the server, instead of having one virtual server for each service; a rough CLI sketch of that follows below.
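This is roughly what a Content Switching setup could look like on the CLI. The names are placeholders, vs-owa and vs-eas are assumed to be load-balancing virtual servers you have already created, and the exact policy/binding syntax can vary a bit between firmware versions, so treat this as a sketch only:

add cs vserver cs-exchange HTTP 10.0.0.100 80
add cs policy pol-owa -rule "HTTP.REQ.URL.CONTAINS(\"/owa\")"
add cs policy pol-eas -rule "HTTP.REQ.URL.CONTAINS(\"/Microsoft-Server-ActiveSync\")"
bind cs vserver cs-exchange -policyName pol-owa -targetLBVserver vs-owa -priority 100
bind cs vserver cs-exchange -policyName pol-eas -targetLBVserver vs-eas -priority 110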

This setup will also work with SSL offload (once it is supported); just add a public certificate and use SSL on port 443 on the virtual server.

Managing DDoS with Citrix Netscaler

Most of the "attacks" on IT services these days are DDoS (Distributed Denial of Service) attacks, which are basically a flood of traffic aimed at a particular network service.
Think of it this way: if a person says hi to you, you say hi back. This is much like a regular TCP handshake (SYN, SYN-ACK, ACK). What happens if a whole crowd yells hi at you at the same time, multiple times each second?

Well, first of all the network is going to get flooded with a lot of bogus traffic, and your services might have trouble responding to it. When the group Anonymous was active, they targeted large companies like PayPal and Visa with their DDoS attacks.
Regular people could join in as well, using a tool called LOIC (Low Orbit Ion Cannon), which allows for several types of DDoS attack (TCP, UDP or HTTP) that have different effects on their targets.
So back to the topic: how can companies protect themselves against these types of attacks?
In many cases the flood simply jams the network itself and does not affect the backend servers much; there is not much you can do in those cases.

In cases where incoming bandwidth is not the bottleneck, the backend servers take the hit instead.
I've done some testing against a SharePoint site running on IIS on Windows Server 2012 (in a VM) to see how much a single computer running LOIC (on the same LAN, for that matter) can affect a web server.

LOIC is pretty simple: enter an IP address, choose the method of attack, and you are good to go.

Now if you open Wireshark on the target you can see that the network is being spammed with TCP packets (which contain the payload in the message).

How does this affect performance on the server?
I have done some testing with all the attack types.

HTTP attack (which uses HTTP GET requests):
uses the most CPU on the endpoint (upwards of 90%), and it consumes more bandwidth than the other attacks since the server has to respond to every request.

TCP attack: generates more network traffic, but has a minor impact on the CPU on the backend.

UDP attack: uses the most bandwidth since it does not try to handshake like regular TCP, so the network gets pounded even harder (you can see on the receiving side that it is around 250 Mbps).
And because of the heavy load on the NIC, the CPU also has to make some extra effort.

So what other types of DDoS attacks are out there?
We have the typical SYN flood attack, where an attacker uses multiple spoofed IP addresses and floods a server with SYN packets,
leaving the server stuck with half-open connections.

We also have Smurf attacks, which send spoofed ICMP ping requests to a broadcast address with the reply-to address set to the target.

And then we have other L7 DDoS attacks like Slowloris.

So how can we use NetScaler to mitigate these types of attacks?

SYN flood protection (TCP):
This protection feature is enabled by default. Instead of keeping half-open connections with the client, the NetScaler appliance responds with a SYN cookie, so it does not waste memory on half-open connections; it only allocates resources when it receives the final ACK packet.

Surge Protection:

Can be used to define how many TCP connections a server can manage before it starts dropping requests (this is typical behavior during a DDoS attack and may leave regular users unable to log in to a service).
You can enable/disable this option for each service (see the CLI sketch below).

You can also define thresholds for each service in terms of bandwidth and/or number of clients.
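As far as I can tell the per-service switch is just a flag on the service, so on the CLI it would look something like this (svc-web-01 is a placeholder service name, and check the documentation for your firmware for the exact threshold parameters):

set service svc-web-01 -sp ON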

SureConnect
Allows you to define an alternative or custom web page to show users when the network is saturated and the back-end servers are unable to process requests.

HTTP DoS Protection
Regular clients using browsers like Firefox, IE or Chrome understand HTML, JavaScript and cookies; attack tools like LOIC and other HTTP DDoS tools cannot parse this type of data.
So you can require that HTTP clients connecting to a service answer a JavaScript challenge that is sent along with the HTTP response.

This can also be done on a per-service level.

With the parameters I just set, if there are more than 200 clients in the queue, 100% of the subsequent requests will be sent a JS challenge; if a client responds with the correct cookie it is treated as a valid client.
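On the CLI side this corresponds to an HTTP DoS policy bound to the service (the HTTP DoS protection feature also has to be enabled on the appliance first). Treat the names and numbers below as placeholders that mirror the example above:

add dos policy pol-httpdos -qDepth 200
bind service svc-web-01 -policyName pol-httpdos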
And last but not least, in the case of UDP traffic: if you don't have any services that use UDP, you should just block it using ACLs.

add ns acl restrict DENY -protocol UDP (or restrict it by ports)
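Depending on the firmware version, ACL changes may also need to be applied before they take effect:

apply ns acls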

In the case of Smurf attacks, this is something you should handle at the router level (if you have Cisco gear):

no ip directed-broadcast

Citrix Synergy 2013: What happened?

Most of the people I know attended Synergy this year, either physically or virtually.
I could not attend either way, so I was stuck watching the Twitter feed and keeping the scroll wheel in motion. So what happened at this year's Synergy and the events leading up to it? The big announcement was XenDesktop 7, plus some other things I'm going to summarize below.

Citrix XenDesktop 7
(Project Excalibur) removes the old XenApp (IMA) architecture and uses the FMA architecture; think of it as XenDesktop with XenApp support.
* Supports Windows Server 2012 and Windows 8
* Integrated vGPU solution from NVIDIA
* Enhancements for Microsoft Lync
* Citrix Streaming Profiler is gone (Use Microsoft App-v instead)
* Introduction of EdgeSight in Citrix Director (which can get data from regular EdgeSight and from HDX Insight, which is coming for NetScaler)
* AppDNA is part of XenDesktop (In some editions)
* Improved Director
* Storefront 2.0
* Multitouch gestures (HDX mobile)
* HDX 3D pro with OpenGL
* CloudBridge (formerly Branch Repeater) is included in some editions

You can read more about the different features of XenDesktop 7 here:
http://www.citrix.com/content/dam/citrix/en_us/documents/products/introducing-xendesktop-built-on-avalon-platform.pdf
You can also get some news here about Project Merlin (which is the next major release after XenDesktop 7):
http://www.citrix.com/tv/#videos/8392

Citrix NetScaler
It was announced as the fastest growing product in the company, and no wonder!
It comes in a new version (release 10.1), which brings the following:
* Support for SPDY
* Support for MSSQL 2012 Datastream
* Support for load balancing TFTP
* More AppExpert for Exchange OWA
* Netscaler HDX Insight (Coming) which integrates with Director
* Cloud integration (Integrate Netscaler with Cloudplatform)
* Offload DNSSEC to Netscaler
* Changes in the GUI
* User monitor for Storefront
* Easier to make changes to the WI

(I might also mention that Branch Repeater has been renamed CloudBridge and is also a feature in NetScaler Platinum.)

NetScaler Gateway (codename Dara) was also released in preview; it is the next generation of Access Gateway, with an improved setup wizard.
http://t.co/09HB2VbT4a

NetScaler SDX has been opened up so third-party vendors can run their platforms on the SDX appliance (Palo Alto and Trend Micro are some of the partners that already have a solution available).

Citrix CloudPortal Services Manager
This product received little attention during Synergy (or none at all), but I'm going to mention it anyway.
A couple of days before Synergy, Citrix released Services Manager version 11, which includes:
* Support for Windows Server 2012
* Support for Exchange 2013 multitenant
* Support for workflows and approvals
We can expect more from this product and its integrations in a while.

Citrix ShareFile

Now this is something Citrix is pushing hard these days, and they have added numerous features to it.
* StorageZone connector to Azure
* Ability to connect ShareFile to SharePoint
* Ability to edit documents directly from the client (before it was just read-only)
* XenMobile Integration

Citrix VDI-in-a-box 5.3
* Same support for VDA/HDX as XenDesktop 7
* Support for Windows Server 2012 VHDX
* Better support for SSO
* Universal Print service

Citrix XenMobile
The early build of XenMobile was just a little polish on top of the former Zenprise console; with this new release a lot has changed.
* New console
* Integration with GoToAssist (allows the help desk to connect to users' mobile devices)
* View and Edit documents directly on the phone (ShareFile)
* Worx Mobile Apps
* Changed editions
http://www.citrix.com/products/xenmobile/features/editions.html

Other announcements?

Citrix DesktopPlayer for Mac, which allows you to run your personal VMs directly on your Mac (synced with your on-premises solution).
Citrix XenApp 6.5 Feature Pack 2, which will bring some new features (performance improvements and Lync support, but no Windows Server 2012 support).
Citrix Receiver for Windows Phone is underway as well.

Most of these new products will be available in June, so stay tuned. :)

Azure and PowerShell

I've been working with Azure and PowerShell for the last couple of weeks, so I thought I would share some of the scripts I have found handy.
First of all, you need to make sure you have installed the Azure PowerShell cmdlets and connected them to your account: https://msandbu.wordpress.com/2013/01/09/managing-windows-azure-via-windows-powershell/
You can read my previous post to get started; the scripts follow later in this post.
Also make sure you visit the documentation on Microsoft's site:
http://msdn.microsoft.com/en-us/library/windowsazure/jj152841.aspx

I also recommend that you take a look at Michael Washam's blog:
http://michaelwasham.com/

Get Datacenter Location
Get-AzureLocation

Shows which datacenter locations are available and which services each of them offers.
List the image names available for quick VM creation
Get-AzureVMImage | ft Imagename

Create a quick VM (before you do this you need to select a storage account: Get-AzureStorageAccount | Select StorageAccountName, or set one with Set-AzureStorageAccount -StorageAccountName)

New-AzureQuickVM -Windows -ServiceName konge -Name msandbutest2222222 -ImageName fb83b3509582419d99629ce476bcb5c8__Microsoft-SQL-Server-2012SP1-Web-CY13SU04-SQL11-SP1-CU3-11.0.3350.0 -Location "West Europe" -Password SupermanUpandAtom

This will create a VM in the cloud service konge (reachable at konge.cloudapp.net) with the VM name msandbutest2222222, using the default image for SQL Server 2012, located in West Europe and with the password SupermanUpandAtom.
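To check that the VM actually got provisioned, you can query it afterwards (same service and VM names as in the example above):

# Shows the VM's status; it should report ReadyRole once provisioning is done
Get-AzureVM -ServiceName konge -Name msandbutest2222222 | Select Name, InstanceStatus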

Add an endpoint to a VM

Get-AzureVM -ServiceName "konge" -Name "msandbu222222" | Add-AzureEndpoint -Name "HttpIn" -Protocol "tcp" -PublicPort 80 -LocalPort 8080 | Update-AzureVM

VM Batch Creation
(First we define a config for a single VM and create it from that, or we can create a whole batch.)

New-AzureVMConfig -Name $vm1 -InstanceSize Medium -ImageName $img |
Add-AzureProvisioningConfig -Windows -Password $pwd |
Add-AzureDataDisk -CreateNew -DiskLabel 'data' -DiskSizeInGB 10 -LUN 0 |
Add-AzureEndpoint -Name 'web' -PublicPort 80 -LocalPort 80 -Protocol tcp |
New-AzureVM -ServiceName $newSvc -Location $location

(Now we can either use the whole config here to create a new VM directly, or we can define multiple variables for batch provisioning with a defined instance size.)

$vm1 = New-AzureVMConfig -Name 'myvm1' -InstanceSize 'Small' -ImageName $img | Add-AzureProvisioningConfig -Windows -Password $pwd
$vm2 = New-AzureVMConfig -Name 'myvm2' -InstanceSize 'Small' -ImageName $img | Add-AzureProvisioningConfig -Windows -Password $pwd
$vm3 = New-AzureVMConfig -Name 'myvm3' -InstanceSize 'Small' -ImageName $img | Add-AzureProvisioningConfig -Windows -Password $pwd

New-AzureVM -CreateService -ServiceName $cloudSvcName -VMs $vm1,$vm2,$vm3 -Location $dc

Or we can use an array to create multiple VMs:

$vmcount = 5
$vms = @()
for($i = 0; $i -lt $vmcount; $i++)
{
    $vmn = 'myvm' + $i
    $vms += New-AzureVMConfig -Name $vmn -InstanceSize 'Small' -ImageName $img |
        Add-AzureProvisioningConfig -Windows -Password $pwd |
        Add-AzureDataDisk -CreateNew -DiskLabel 'data' -DiskSizeInGB 10 -LUN 0 |
        Add-AzureDataDisk -CreateNew -DiskLabel 'logs' -DiskSizeInGB 10 -LUN 1
}

New-AzureVM -ServiceName $cloudSvcName -VMs $vms -Location $dc


VM Provisioning config setup
New-AzureVMConfig -Name "MyDomainVM" -InstanceSize Small -ImageName $img |
Add-AzureProvisioningConfig -WindowsDomain -Password $Password -ResetPasswordOnFirstLogon -JoinDomain "test.local" -Domain "test" -DomainUserName "domainadminuser" -DomainPassword "domainPassword" -MachineObjectOU 'OU=AzureVMs,DC=test,DC=no' |
New-AzureVM -ServiceName $svcName

(Note that the domain join here requires that the VM's DNS server can locate the domain controller.)

Or we can also define the DNS server settings ourselves:

Deploy a new VM and join it to the domain

#Specify DC's DNS IP (10.4.3.1)

$myDNS = New-AzureDNS -Name 'testDC13' -IPAddress '10.4.3.1'

# Operating System Image to Use

$image = 'MSFT__Sql-Server-11EVAL-11.0.2215.0-08022012-en-us-30GB.vhd'

$service = 'myazuresvcindomainM1'

$AG = 'YourAffinityGroup'

$vnet = 'YourVirtualNetwork'

$pwd = 'p@$$w0rd'

$size = 'Small'

#VM Configuration

$vmname = 'MyTestVM1'

$MyVM1 = New-AzureVMConfig -Name $vmname -InstanceSize $size -ImageName $image | Add-AzureProvisioningConfig -WindowsDomain -Password $pwd -Domain 'corp' -DomainPassword 'p@$$w0rd' -DomainUserName 'Administrator' -JoinDomain 'test.local' | Set-AzureSubnet -SubnetNames 'SubnetName'

New-AzureVM -ServiceName $service -AffinityGroup $AG -VMs $MyVM1 -DnsSettings $myDNS -VNetName $vnet


Add a new data disk to an existing virtual machine (note that a data disk has a 1 TB size limit)

Get-AzureVM -ServiceName 'myvm1' | Add-AzureDataDisk -CreateNew -DiskSizeInGB 10 -DiskLabel 'myddisk' -LUN 1 | Update-AzureVM

Get the RDP file for a VM

Get-AzureRemoteDesktopFile -ServiceName "myservice" -Name "MyVM-01_IN_0" -Launch


To be updated…..

Managing Azure with Linux

Microsoft has done a lot of work on Azure, particularly on the management side. I have previously written about how to manage Microsoft Azure via PowerShell on Windows;
this post is going to show how to manage it using Linux (in this case the latest release of Ubuntu).

 

First, we need to install some prerequisites. Open a terminal and install Node.js:

 

sudo apt-get update

sudo apt-get install python-software-properties python g++ make
sudo add-apt-repository ppa:chris-lea/node.js
sudo apt-get update
sudo apt-get install nodejs

After that is done, you can install the azure-cli

 

sudo npm install azure-cli -g

 

Now that this is done you can run the azure command from the terminal.

All commands take the form "azure <command>"; you can use azure help to get a list of the available commands. In order to actually do something against our Azure account we need to download our publish settings.
To get them we have to run the command

azure account download

This command will send you to a website where you need to log in, and a publish settings file will be generated.

Now we have to import the publish file. Run the command

azure account import filename

Now that we have that in place, we are free to play around. Let's start by creating a VM from one of the images in the Azure store,
for instance Windows Server 2008 R2. We start by listing the images:

azure vm image list (this will show all the images available from the Azure store)

Next we should have an affinity group to bind the VM to. In my case I already had a group in place; if you need to create one, just run the command

azure account affinity-group create

If not, we can just specify a location during creation. So let's create a VM from the 2008 R2 image with the command

azure vm create "nameofvm" "imagename" "username" --location "West US"

and then you need to specify a password during the creation.

We can now see in the management portal that the VM is running.

If we use the command azure vm list we can see all the VMs.

Now, I created an endpoint for this computer in the management portal, but you can also do it with the command:
azure vm endpoint create "vmname" 3389 3389 (this will create an endpoint that is publicly accessible on port 3389, which is the RDP port)

Then I can fire up rdesktop to my Azure server.

Now that's great, I'm all set: I have RDP available and CLI-based management. So what about Linux VMs?
Linux is mostly managed using SSH, and in order to use SSH against Azure we need to create a digital certificate.
So using the openssl tool we create a certificate file that we need to upload to Azure.

Run the command

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout myPrivateKey.key -out myCert.pem

(The .pem file needs to be uploaded to Azure, and we use the private key to authenticate.)

Run chmod 600 on the key file to tighten its permissions (for safety reasons).
Now we can create a Linux VM either from the management portal or using the CLI.

If we go with the CLI approach we use the same command as before, but add -e 22 (to enable SSH on port 22) and -t followed by the cert file, as in the example below.
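Putting that together, the command ends up looking something like this (the image name, VM name, user and location are placeholders, and -e/-t are the short forms of the --ssh and --ssh-cert options in the azure-cli version of the time, so double-check against azure vm create --help):

azure vm create "mylinuxvm" "imagename" "username" --location "West US" -e 22 -t myCert.pem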

With the management portal we have an option to upload the certificate file directly.

After the VM is provisioned and running you can use any SSH client to authenticate against it (just remember to specify the key file):

ssh -l "username" -i mykeyfile -p portnr dnsname

And there we go, SSH is available as well.
It is a bit concerning that Azure works with rdesktop out of the box (don't get me wrong, that's convenient), because it means that NLA is not enabled by default, and over the last year there have been quite a few security holes in the RDP protocol on hosts where NLA was not enabled.

Dell Integration with SCCM 2012


A lot of organizations out there are using some sort of Dell hardware, either regular clients (laptops etc.) or servers. What many do not know is that Dell has a bunch of integration packs that you can use directly in System Center. I thought I would give you a quick walkthrough of what these integrations can do and what else you can do in general with Dell and System Center 2012 SP1.

During MMS, Dell promised that they would release the integration pack for Configuration Manager soon, and it was recently released.
It is free and can be downloaded from Dell here: http://dell.to/15baoKx

This pack gives us
* Warranty Status
* Out-of-band management
* OMCI
* CCTK (Which is Dell’s solution to BIOS configuration)
* New Task sequences and reports.

And we also have the Server Deployment Pack:
http://dell.to/161KDvM

It can be used to:
* Create task sequences
* Import drivers from ISO images
* Create RAID setups for Dell servers

After you have installed these "add-ons" on the Configuration Manager server you will get some new views in the console.
Under Packages you now have the option to create a PowerEdge Server Deployment.

The Deployment Toolkit Configuration Wizard allows you to integrate a DTK package into a custom image.
The two client integration packs offer an import function for existing configurations created with those tools.
For instance, CCTK can be used to create a client package that defines which options client computers should get, for instance setting BIOS passwords, enabling TPM, and so on.
You can see more about it here:
http://en.community.dell.com/techcenter/extras/m/white_papers/20209083.aspx

If we check on the OS deployment pane we have a lot of options for Server deployments.

When creating a PE Server deployment template you can automatically create much of the config.

If you are unsure of how to create the needed XML files, just click View on the sample files; they are pretty intuitive.

The client integration pack also comes with an Intel AMT plugin, which can be used to create USB drives with an AMT configuration (for instance for deploying CA files in order to set it up).

There is also another integration pack for servers, which can be used to communicate with the OpenManage products:

http://en.community.dell.com/dell-blogs/software/b/software/archive/2012/04/02/dellopenmanageintegrationsuiteformicrosoftsystemcenter.aspx
Note that this does not work with Service Pack 1 of System Center.

The Lifecycle Controller integration, for instance, is not Service Pack 1 ready yet either:
http://www.dell.com/support/drivers/us/en/04/DriverDetails?driverId=G9KT7

Other info:
SCVMM 2012 and Dell EqualLogic integration:
http://dell.to/18eoaLo
Leveraging PowerShell and Dell CIM:
http://dell.to/ZMmEJt
Management Pack for SC Operations Manager:
http://dell.to/11E2nre

Atlantis ILIO

So I recently attended a technical training on Atlantis ILIO, and I had only a minor clue of how the stuff worked beforehand. Atlantis has been in the market for a couple of years and has already won a lot of awards at both VMworld and Citrix Synergy. Since I didn't fully understand it myself, I thought I would spend this post explaining how Atlantis works.

Their main products are Atlantis ILIO Persistent VDI and Atlantis ILIO Diskless VDI (plus a couple of other products, which I will come back to later).
The entire idea behind Atlantis is using RAM as storage for VDI environments. Sounds like a good idea, right?
In a traditional VDI environment, I would have a pretty decent SAN where I would store all my virtual desktops, a virtualization host, and some redundant network equipment in between.

So I would need a good setup across the desktops running on the virtualization hosts, the network and the SAN in the backend. If I deploy a VDI environment on really high-performance virtualization hosts and a really high-speed network, but on a slow SAN solution, I'm screwed.
The desktops would most likely behave like an old-fashioned computer running on a 7200 RPM disk. If you remember those running on PATA cables, they can deliver around 75-100 IOPS. Most users today are used to SSDs in their laptops, and they expect that a centralized environment with expensive equipment should be at least as quick as their regular computer (because if it is slower than what they are used to, they will switch back to their regular computers).

Now, an SSD drive has MUCH higher IOPS than a regular drive since it has no spinning platters. My SSD drive can deliver about 6700 IOPS (measured with IOmeter on 4KB blocks), so which of them delivers the best performance, my SSD or the 7200 RPM disk? :)

So back to Atlantis: what it does is use RAM as primary storage for VDI, meaning that it exposes RAM on the virtualization host as a storage unit for the virtual desktops. (RAM is volatile, meaning that data is erased when the system is turned off, but I'll come back to how Atlantis handles this later.) Atlantis is a virtual appliance which runs on the virtualization host (VMware, XenServer or Hyper-V) and you give it as much RAM as possible for use as storage.

So when we set up Atlantis, we define how we want to expose the RAM disk to the hypervisor (it is accessible via either iSCSI or NFS), connect this "storage" to the hypervisor, and then start creating virtual desktops on the newly created datastore. Atlantis also has a couple of features like inline deduplication and compression, which reduce the actual RAM usage to a minimum. (Since a lot of the data in most virtual desktop environments is duplicated (OS, data, apps), we can save up to maybe 80%.)

Brian Madden has done a quick test to show how much storage a virtual desktop actually uses:
(On a per-desktop basis, this means that each VM is using 28GB on average (40GB allocated) of virtual storage, but is consuming only 1.5GB of physical RAM per desktop.)

Now we can increase the density of virtual desktops on a hypervisor, and they will have greatly improved IOPS since they all run in RAM. You can look at a test here to see the difference with and without ILIO:
http://vimeo.com/34231558
It also means that it is easy to "move" to Atlantis in an existing environment, since it is just a piece of software: create the appliance VM with the right amount of RAM, create new desktops on the new datastore, and you are ready to go.

A quick calculation of how many virtual desktops you can have on one host:

Virtual Desktops Supported = (512 GB in the host - 6 GB reserved for Atlantis ILIO - 2 GB reserved for the hypervisor) / (2 GB per virtual desktop + 0.6 GB RAM disk allocation per desktop) = 504 / 2.6 ≈ 193 desktops

This is with a 40 GB master image for Windows 7.

Now, do not think that you no longer need storage (you still do!). The Atlantis VM and the hypervisor still need to be placed somewhere, and if you are doing persistent VMs you need to deploy Atlantis in another mode.
In the earlier releases Atlantis did not do persistent desktops, only stateless, but they now have their own release for persistent desktops where they have sorted out the placement and HA of the VMs.

In that case (with persistent desktops) we would need a replication host in between that syncs all the persistent data from the other ILIO instances and delivers it to our SAN solution. This data is compressed and deduplicated, so it does not use much storage.
We can also use this solution with XenApp and PVS; in that case, you would redirect the write cache to the RAM disk.
Now we are talking cool stuff! :)

So far we have only been talking about VDI, but what if we could use this for other workloads, like SQL databases, Exchange or file servers?
Well, it's coming:
http://www.atlantiscomputing.com/products/flexcloud

By the way, you can read a reference guide for a huge Atlantis implementation here:
http://www.atlantiscomputing.com/downloads/10kSeat_Diskless_Reference_Implementation.pdf