Monthly Archives: May 2014

Veeam B&R 7: a list of issues and solutions

I’ve been working with Veeam for a while now, and I’ve seen that in most cases when a backup job (or a SureBackup job) fails, it is most often not Veeam’s fault.

Veeam is a powerful product, but it depends on a lot of external features in order to do its job right. For instance, to back up from a VMware host, you need a VMware license in place that allows Veeam to access the VMware VADP APIs.
If not, Veeam can’t back up your virtual machines running on VMware.

Also, in order to do incremental backups properly, Veeam depends on CBT (Changed Block Tracking) working properly on the hypervisor. The real purpose of this blog post is mostly for my own part: keeping a list of problems/errors that I come across in Veeam and what the fix is for each of them.

Now in most cases, when running jobs, the job indicator will give a good pinpoint of what the problem is. If not, look into the Veeam logs, which are located under C:\ProgramData\Veeam\Logs (ProgramData is a hidden folder). It is also possible to generate support logs directly from the Veeam console –>

Issue nr 1# Cannot use CBT when running backup jobs
Cannot use CBT: Soap fault. A specified parameter was not correct. deviceKeyDetail: '<InvalidArgumentFault xmlns="urn:internalvim25" xsi:type="InvalidArgument"><invalidProperty>deviceKey</invalidProperty></InvalidArgumentFault>', endpoint: ''

If CBT is for some reason not available and not being used, Veeam falls back to its own filter. Veeam will then process the entire VM, compare the blocks of the VM with the blocks in the backup on its own to see which ones have changed, and copy only the changed blocks to the repository. This makes processing time a lot longer. In order to fix this you need to reset CBT on the guest VM. This can be done by following the instructions here –> and one for Hyper-V CBT
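Conceptually, the fallback works like hash-based block comparison. Here is a toy Python sketch of that idea (this is not Veeam’s actual filter; the block size and hashing scheme are purely illustrative assumptions):

```python
import hashlib

BLOCK_SIZE = 4  # toy block size for the example; real backup products use much larger blocks


def changed_blocks(current: bytes, previous: bytes) -> list:
    """Return indices of fixed-size blocks that differ between two disk images.

    Every block of both images has to be read and hashed, which is why this
    fallback is so much slower than CBT, where the hypervisor already knows
    which blocks changed.
    """
    changed = []
    n = max(len(current), len(previous)) // BLOCK_SIZE + 1
    for i in range(n):
        a = current[i * BLOCK_SIZE:(i + 1) * BLOCK_SIZE]
        b = previous[i * BLOCK_SIZE:(i + 1) * BLOCK_SIZE]
        if hashlib.sha256(a).digest() != hashlib.sha256(b).digest():
            changed.append(i)
    return changed


# Only the middle block differs, so only block index 1 would be copied:
print(changed_blocks(b"aaaabbbbcccc", b"aaaaXXXXcccc"))  # → [1]
```

The point of the sketch is simply that without CBT, the full image must be scanned to find one changed block, whereas with CBT the scan is skipped entirely.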

Issue nr 2# SureBackup jobs fail with error code 10061 when running application tests. This most likely happens when a firewall is configured on the guest VM which only allows specific VMs. I have also seen this when a guest VM is in a restarting state. If you do not have a guest VM firewall active, doing a restart of the guest VM and then a new backup should allow the SureBackup job to run successfully.

Issue nr 3# WAN accelerator fails to install. This might happen if a previous Veeam install has failed on the server. When you try to install the WAN accelerator, the setup just stops for no apparent reason. Something sets the install path of the WAN cache folder to the wrong drive. You need to go into the registry of the VM and change the default paths as seen here –>

Issue nr 4# Backup of guest VMs running on a Hyper-V server with Windows Server 2012 R2 Update 1 fails. This is a known issue from Microsoft which requires an update from Microsoft –>

Issue nr 5# Application-aware image processing skipped on a Microsoft Hyper-V server. This can of course have many possible causes; in most cases it is Integration Services. A list of the different causes and solutions is available here –>

Issue nr 6# Logs not getting truncated on Exchange/SQL guest VMs. This requires application-aware image processing and defining that the backup job should truncate logs –>

Issue nr 7# Backup of vCenter servers –>

Issue nr 8# Backup using Hyper-V and Dell Equallogic VSS –>

Issue nr 9# Incredibly slow backup over the network with no load on the servers. Make sure that all network switches are running full-duplex.

Issue nr 10# Win32 error: the network path was not found. When doing application-aware image processing, Veeam needs to access the VM using the admin share with the credentials defined in the backup job. (For VMware, if the VM does not have network access, VMware VIX is used.) It is possible to change the priority of these protocols –>

Software defined storage and delivering performance

I had no idea what title to use for this post, since it is more of a look at different solutions that I find interesting at the moment.

The last couple of years have shown huge growth in both converged solutions and software-defined X solutions (where the X can stand for different types of hardware layers, such as storage, networking, etc.).

With this huge growth there are a lot of new “players in the field”. This post aims to show some of these new players, what their capabilities are, and most importantly where they fit in. Now, I work mostly with Citrix/Microsoft products, and as such there is often a discussion of VDI (meaning stateless/persistent/RDSH/remote app functionality).

A couple of years ago, when deploying a VDI solution, you needed a clustered virtual infrastructure running on a SAN, and the VMs were constricted to the throughput of the SAN.

Now, traditional SANs mostly run with spindle drives, since they are cheap and offer huge capacity. For instance, a PS6110E array can house up to 24x 3.5” 7,200 RPM disks, which adds up to as much as 96 TB of data.

If you think about it, regular spindle disks deliver roughly 120 IOPS each (depending on buffers, latency and spindle speed), and we should also have some kind of RAID set running on the array for redundancy across disks. Using 24 drives with RAID 6 and double parity (not really a good example, but just to prove a point) gives us a total of 2,380 IOPS, which is lower than the SSD drive in my laptop. Of course, most arrays come with buffers and caches in different forms and flavors, so my calculation is not 100% accurate. Another issue with a regular SAN deployment is that you depend on a solid networking infrastructure; any latency there also affects the speed of the virtual machines. So in summary:

* regular SANs are built for storage space, not for speed
* SANs also in most cases need their own back-end networking infrastructure
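The back-of-envelope calculation above can be sketched in a few lines. Note that the 70/30 read/write mix and the RAID 6 write penalty of 6 below are textbook assumptions, not measurements from a real array, so the result will differ from the rough figure above; both are just illustrations of the same point:

```python
def effective_iops(drives: int, iops_per_drive: int,
                   read_ratio: float, write_penalty: int) -> float:
    """Estimate front-end IOPS for a parity RAID set.

    Each front-end write costs `write_penalty` back-end operations
    (commonly quoted as 6 for RAID 6), so the raw spindle pool is
    discounted by the workload's read/write mix.
    """
    raw = drives * iops_per_drive
    return raw / (read_ratio + (1 - read_ratio) * write_penalty)


# 24 x 7,200 RPM drives at ~120 IOPS each, assumed 70/30 read/write mix, RAID 6:
print(round(effective_iops(24, 120, 0.70, 6)))  # → 1152
```

A 100% read workload would get the full 2,880 raw IOPS, which shows how hard parity RAID punishes write-heavy workloads like VDI.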

And based upon these two “issues”, many new companies have found their starting ground. One thing I need to cover first is that both Microsoft and VMware have created their own ways of dealing with them. Microsoft’s solution is Storage Spaces with SMB 3.0. Storage Spaces is a kind of software RAID running on top of the operating system, with features such as deduplication and storage tiering, which allows data to be moved between fast SSDs and regular HDDs depending on whether the data is hot or not. Storage Spaces can use either JBOD SAS or internal disks depending on the setup you want. And with SMB 3.0 we get features such as multichannel and RDMA. Together these make it easier for us to build our own “SAN” using our regular networking infrastructure. Note that this still requires a solid network, but it allows us to create a low-cost SAN with solid performance.

VMware has chosen a different approach with its VSAN technology. Instead of having the storage layer on the “other” side of the network, they built the storage layer right into the hypervisor.

This means the storage layer sits on the physical machine running the hypervisor, so we don’t have to think about the network for the virtual machines’ performance (even though a good networking infrastructure is still important for the VMs to replicate across different hosts for availability).

Now, with VSAN you need to fulfill some requirements in order to get started; since this solution runs locally on each server, you need, for instance, an SSD drive just for the caching part. You can read more about the requirements here –>

So it’s fun to see that:
* Microsoft still has the storage layer outside of the host, but dramatically improves the networking protocol and adds storage features on the file server.
* VMware moves the storage layer on top of the hypervisor to bring the data closer to the compute roles.

Now, based on these ideas, there are multiple vendors whose solutions are in essence built on the same principles.

First off we have Atlantis ILIO, which is a virtual appliance that runs on top of the hypervisor. I’ve written about Atlantis before, but in essence it creates a RAM disk on each host and can use the SAN for persistent data (after the data has been compressed and deduplicated, leaving a very small footprint). This solution allows virtual machines to run completely in RAM, meaning each VM has access to huge amounts of IOPS. Since Atlantis runs on top of each hypervisor, it sits as close to the compute layer as possible and does not depend on a high-end SAN infrastructure for persistence.
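To see why deduplication leaves such a small footprint for VDI (where cloned desktops share mostly identical blocks), here is a minimal content-addressed store sketch in Python. This is just a toy illustration of the general technique and has nothing to do with Atlantis’ actual implementation:

```python
import hashlib
import zlib


class DedupStore:
    """Toy content-addressed block store: identical blocks are kept once, compressed."""

    def __init__(self):
        self.blocks = {}  # sha256 digest -> compressed block payload

    def write(self, block: bytes) -> str:
        """Store a block (if not already present) and return its content key."""
        key = hashlib.sha256(block).hexdigest()
        if key not in self.blocks:
            self.blocks[key] = zlib.compress(block)
        return key

    def footprint(self) -> int:
        """Bytes actually consumed on the persistence tier."""
        return sum(len(b) for b in self.blocks.values())


store = DedupStore()
# Ten identical 4 KB blocks, e.g. zeroed pages from cloned desktops:
for _ in range(10):
    store.write(b"\x00" * 4096)

print(len(store.blocks))  # → 1  (40 KB written, one compressed block stored)
```

Ten writes of the same block cost one compressed copy; scale that up to hundreds of near-identical Windows images and the savings become dramatic.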

Atlantis has also recently released a new product called USX, a more complete software-defined storage solution which allows you to create pools of storage containing both local drives and/or SAN/NAS (and not just a place to dump persistent data for VDI).

Secondly we have Nutanix, which unlike the others is not a pure software-based approach; they deliver a hardware+software platform with a kind of Lego-based approach, where you buy a node with compute and storage locally and add more nodes to scale out. With Nutanix there are controller VMs running on each node which are used for redundancy and availability. So in essence, Nutanix resembles VSAN a lot, since the storage is local to the hypervisor and there is logic for redundancy/availability.

And we also have PernixData with its FVP product, which caches and accelerates reads and writes to the back-end storage. Reads and writes are stored in the aggregated cache (which consists of either a flash drive such as Fusion-io or SSD drives local to each node), which offloads IO traffic from the back-end SAN.



Now, there are also a bunch of other vendors, which I will cover in time. Gunnar Berger from Gartner also made a blog post showing the cost of VDI on different storage vendors. But most importantly, this post is meant to raise awareness of some of the different products and vendors out there that allow you to think differently. You don’t always need to invest in a new SAN or buy expensive hardware to get the performance you need. There are a bunch of cool products out there just waiting for a test drive :)

RemoteFX and vGPU 2012R2 requirements

Now, there has been a lot of speculation about RemoteFX with the latest 2012 R2 release. RemoteFX is a set of different features; one of these is the so-called vGPU.

vGPU is a feature which allows us to share GPU hardware between virtual machines. One thing that is important for those who wish to use the vGPU feature of RemoteFX with 2012 R2 is that it is ONLY supported on client OSes, meaning it only supports Windows 7/8/8.1 Enterprise editions running as guest VMs on a 2012 R2 server. This means you cannot run an RDSH server and use the vGPU feature.

Microsoft has made a list of the different RemoteFX features and the compatibility matrix here –>

It is also important to remember that you can only use RemoteFX adapters on a Generation 1 virtual machine (they are not available on Generation 2). You can read more about the configuration and setup here –>

Microsoft has also made a list of different GPUs which make good candidates for RemoteFX vGPU.

RemoteFX only supports DirectX hardware acceleration; OpenGL support is a feature under consideration. If you are interested in learning how much vRAM is added to VMs using RemoteFX, you can read more here –>

If you are having performance issues, make sure that you have the latest drivers from the GPU vendor.

Citrix Synergy 2014 day 1 summary

So Citrix just had the first keynote of their annual Synergy conference, and there are some exciting new features coming this year. For those who haven’t seen or read about the updates, here is a quick list.

* Citrix Workspace Suite (a bundle which combines XenDesktop Platinum and XenMobile Enterprise), available now.

* Citrix Receiver X1 (a new receiver which combines MDX and HDX technology into one and the same receiver).

* Google Receiver (a new HTML5 receiver is coming for Chromebooks, which will have USB redirection and the like).

* Citrix XenMobile 9 (with support for things like Windows Phone 8 and so on, more here –>) and new Worx apps such as WorxNotes, WorxEdit and WorxDesktop (which is a GoToMyPC-style app).

* NetScaler 10.5, which has more features related to mobile traffic, and with it MobileStream (a blog post about 10.5 is coming later, when I am allowed to write it :)). A beta is available now.

* Citrix Workspace Services (a cloud platform to deliver DaaS and virtual apps), which is cloud agnostic. Azure is one of the options, but you also have Amazon, SoftLayer and so on, which allows you to create services on any type of cloud provider. You can read more about this from Brad Anderson at Microsoft here –>
A tech preview is coming in the second half of 2014, but you can sign up for it when available here –> and a bit more info here –>

* Updates for ShareFile with new connectors! (makes it easier to connect to personal file storage providers such as OneDrive/Dropbox etc.)

So far a good day 1; looking forward to the day 2 keynote.

Azure Multifactor authentication and Netscaler AAA vServer

Microsoft has done a great job adding features to its cloud platform over the last year, one of which is Azure MFA (Multi-Factor Authentication), which allows a user to log in with his/her username and password plus a second factor, which might be a PIN code, a one-time PIN or something else.

Now, just to show how we can use Azure MFA with non-Windows services, I decided to give it a try with a Citrix NetScaler AAA vServer. Here is an overview of what the service looks like.

Azure MFA requires a local server component which proxies authentication attempts between the client and the authentication server. In my case I use the MFA component as a RADIUS server which proxies RADIUS connections to the AD domain and adds the two-factor component on top.
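Since the MFA server speaks plain RADIUS to the NetScaler, the shared secret configured on both sides is what protects the password on the wire, and a mismatched secret will make every authentication fail. A minimal Python sketch of the RFC 2865 User-Password hiding (single 16-byte chunk for brevity; real packets also carry a full header and attribute list, and the secret value below is made up) shows how the secret is involved:

```python
import hashlib
import os


def hide_password(password: bytes, secret: bytes, authenticator: bytes) -> bytes:
    """RFC 2865 User-Password obfuscation, single 16-byte chunk only."""
    padded = password.ljust(16, b"\x00")          # pad to the 16-byte chunk size
    digest = hashlib.md5(secret + authenticator).digest()
    return bytes(p ^ d for p, d in zip(padded, digest))


def recover_password(hidden: bytes, secret: bytes, authenticator: bytes) -> bytes:
    """The server reverses the XOR using the same secret and Request Authenticator."""
    digest = hashlib.md5(secret + authenticator).digest()
    return bytes(h ^ d for h, d in zip(hidden, digest)).rstrip(b"\x00")


secret = b"netscaler-shared-key"  # hypothetical; must match the key configured on both sides
authenticator = os.urandom(16)    # per-request Request Authenticator from the packet header

hidden = hide_password(b"P@ssw0rd", secret, authenticator)
assert recover_password(hidden, secret, authenticator) == b"P@ssw0rd"
```

If the server uses a different secret, the XOR reversal produces garbage instead of the password, which is exactly the silent failure you see when the RADIUS keys on the NetScaler and the MFA server don’t match.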


The NetScaler AAA vServer can be used to proxy authentication attempts to back-end services such as Exchange, RDWeb and so on. This is also the type used when logging into a NetScaler Gateway session.

For the purpose of this demonstration, I set up a load-balanced web service consisting of two web servers. The web servers themselves have no authentication providers, so I needed to create an AAA vServer on the NetScaler to which users will be redirected in order to authenticate before they can see the web content.


So, a simple load-balanced service, and then I added an AAA vServer to it.


Note that aaa.test.local is an internal service on the NetScaler (make sure that DNS is in place and a nameserver is added to the NetScaler). In order to create the AAA vServer, go into Security –> AAA –> Virtual Servers and choose create new.

There we need to create a new server, make sure that the domain name is correct, and add a trusted certificate.


Then, under Authentication, we need to define an authentication server. This can be set up to forward authentication attempts to RADIUS, LDAP, LOCAL, SAML and so on. Since we want to use Azure MFA here, we use RADIUS.

In my case I created an authentication policy with the expression ns_true, which means that all users going through the NetScaler will receive this policy.


My authentication policy looks like this. The authentication server here is the server which is going to get the Azure MFA service installed (I also predefined a secret key). It is also important that the time-out is set to 60 seconds; this grants enough time for the authentication to finish.


Remember, certificates are important here! If the client does not trust the certificate, you will get HTTP 500 error messages.

After this is done we can start setting up Azure MFA. First off, make sure that you have some sort of DirSync solution in place so that you can bind a local user to a user in Azure AD. If you do not have this, just google DirSync + Azure and you’ll get a ton of blog posts on the subject :)

In my case I didn’t have DirSync set up, so I created a new local UPN which resembled the username@domain in Azure, so that the MFA service managed to bind the local user to the Azure user.

First you need an Azure AD domain.


Then choose to create a new multi-factor auth provider.


After you have created the provider, mark it and choose Manage. From there you can download the software.


Download the software and make sure that you have a server to install it on. When installing the server components, you are asked to enter a username and password for authentication; this user can be generated from the Azure portal.


You are also asked to join a group; this is the same group you created when setting up the multi-factor authentication provider in Azure.

During the installation wizard you are asked whether to use the quick setup; this lets the wizard configure RADIUS automatically.


You are then asked to enter the IP address of the RADIUS client, which is the NetScaler NSIP.


After you are done here, finish the wizard and start the MFA application. First, make sure that the RADIUS client info is correct.


Then go into Target. Since we want the MFA server to proxy connections between the RADIUS client and the AD domain, choose Windows domain as the target.


Then go into Directory Integration and choose either Active Directory, or a specific LDAP config if you need to use another AD username and password.


Next, go into Users and choose which users are enabled for two-factor authentication. In my case I only want one. Here I can define which type of two-factor authentication I want to use for my user.
If I choose phone call with PIN, I get an auto-generated phone call where I can enter my PIN code directly.


I have also added my phone number so the service can reach me with an OTP. After all this is set up, I can try to log in to my service.


I log in with my username and password, and voila! I get this text message on my phone.


After I reply with the verification code, I am successfully authenticated to the service.