Monthly Archives: August 2016

Backing up Office365 email with Veeam Backup for Office365

So a couple of days back I published that a lot has been going on at Veeam lately, and among the news that was published was support for Office365 Exchange backup! And as a Vanguard I've been lucky enough to be given access to the beta of the tool.

NOTE: This represents a beta and might not reflect the final product, and to simplify the process I ran this tool on my Windows 10 device, which shows this tool can also be leveraged for SMB clients without needing a dedicated server to do Office365 backup.

Now the tool is pretty straightforward and consists of two parts: the backup tool and the Exchange Explorer, which is used to do restores against a backup file.

When connecting to Office365 with the tool you need a username which has the Organization Management role in Exchange Online (or Global Administrator in Office365, which also inherits that role).


Then it connects directly to Exchange Web Services and creates a role which it will leverage for future backup jobs.


After the connection is done, it will appear with the organization name in the console on the left side, and now we can start to create backup jobs. I have the option to back up the entire organization, meaning all users with a mailbox, or I can choose individual mailboxes.


NOTE: I have an option to exclude certain folders from the backup job, but this is a setting which applies to the entire product itself and not to individual users or organizations.

This setting is defined in the global settings pane of Office365 backup. Also, by default the tool will back up to a newly created folder on the C:\ drive and will store backups for 3 years.


So it does the backup job and processes all the mailboxes using Exchange Web Services.


NOTE: This was an example trial which I recently set up, and I didn't have any large data examples to back up. After a job has finished I can right-click on my organization and do a restore to a point in time.


And when I start an explore job it will automatically start the Exchange Explorer, which has had support for restoring email to Office365 for some time already.

Now another cool thing is that I can also set up multiple Office365 organizations in the console, as long as I have the right permissions.


You can find more info on the Veeam offering here –>

N-series testing in Microsoft Azure with NVIDIA

So I’ve been fortunate enough to be able to test-drive the N-series virtual machine instances in Microsoft Azure. For those who are not aware, the N-series is a GPU-enabled series of virtual machines, which can either be set up with an M60 (which is the NV-series) or a K80 (which is the NC-series).

You can find the sizing and the specs here –>

But as of now in the preview I only have access to the entry sizes, which are the NV6 and the NC6 (basically 1x M60 or 1x K80). Since I'm part of the public preview, my subscription has been enabled for access, so just by going into the Azure Portal I have access to the N-series from the gallery.


The virtual machine instances are only available using ARM. And as of now, the N-series is supported on Windows Server 2012 R2, Windows Server 2016 TP5 or Ubuntu 16.04 LTS. This feature is using Discrete Device Assignment, which allows for a PCI passthrough mode from the hypervisor to the virtual machine instance.

Now after the virtual machine is spun up we need to add a driver to it. This is because it still deploys the same virtual machine image from the gallery.




So after the installation is complete, it should look like this under the device manager.


I just got word from the PM that the NVIDIA drivers will also come as a VM extension, which will allow for an easier driver installation after the VM has been provisioned. Also, the driver installation includes nvidia-smi (the NVIDIA System Management Interface program), which allows us to show the GPU status.


So far so good. The only issue for us in EMEA as of now is the placement of the GPU instances, which is only in the South Central US datacenter. That is about 140–160 ms of latency, and if we combine this with some jitter because of congestion, well, it is not going to be a viable solution for customers here yet! But I'm guessing we have only seen the start of GPU instances in Azure, and that they are going to appear in datacenters elsewhere as well.

NOTE: Just got word from the PM that the N-series is also coming to the EMEA datacenters as well, before the end of the year.

Now I would also like to get more monitoring metrics available in Performance Monitor, so I could leverage OMS to do performance monitoring on the GPU instance directly from OMS. Note that NV comes with a perfmon counter installed, but NC does not yet.

NOTE: The drivers for NV are custom as of now but will be standard when GA, and the drivers for NC are standard and can be downloaded from NVIDIA today.

If you are part of the preview and are having issues, head out to the forum –>

and don’t forget to follow this guy on Twitter –> who is the PM for the N-series.

So you want to use the N-series to deliver high-end graphics to your users? Well, there are some limitations at the moment… First off you have the latency issue to think about, depending on where you are located. Also, using the drivers will give about 90–95% of bare-metal performance.

We could set up a Citrix infrastructure and leverage the PCI passthrough support that it has for XenServer and ESXi; since this is a typical PCI passthrough feature that is being leveraged, it should work on paper…


But as of now it seems like the VDA does not properly detect the GPU (it seems to be confused by the first adapter, which is tagged “Microsoft Hyper-V”). Since this is a Windows Server 2016 feature, I'm guessing that it should be supported from Citrix when they come with Windows Server 2016 support.

My other guess is that we will be able to use Windows Server 2012 R2 and 2016 as a guest OS when running the vNext version of XenDesktop; time will tell. But when the support comes for the N-series from Citrix, we can either leverage Citrix Cloud and the NetScaler Gateway Service (which is a Windows-based NetScaler component) or a NetScaler appliance. We can also, from Citrix Cloud, now leverage the Azure Resource Manager support to do power management against our instances as well.

So what other options do we have? If we run native RDP on Windows Server 2012 we are out of luck, since the RDP engine will not be able to leverage the GPU properly; it is only with RemoteFX vGPU that it can use the GPU properly.

NOTE: When setting up any virtual machine in Azure with RDP, if you want to leverage RDP with full effect you need to open UDP as well as TCP on the network security group (RDP using TCP is added by default).
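As a sketch of what that looks like with the Azure CLI (the resource group and NSG names below are placeholders for your own), adding an inbound UDP 3389 rule could look like this:

```shell
# Allow RDP over UDP (3389) in addition to the default TCP rule.
# "myRG" and "myNSG" are placeholder names for your resource group and NSG.
az network nsg rule create \
  --resource-group myRG \
  --nsg-name myNSG \
  --name Allow-RDP-UDP \
  --protocol Udp \
  --destination-port-range 3389 \
  --direction Inbound \
  --access Allow \
  --priority 1010
```

The same rule can of course also be added from the portal under the network security group's inbound rules.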


So I’ve set up a sample 2016 TP5 with the M60 card and did a benchmark test. Of course the issue isn't whether the GPU can process everything properly: notice the benchmark was at about 99–120 FPS for the entire benchmarking session. The issue at the moment is that the protocol has issues with transporting the changes down to the client.


It is also important to note that running applications that require a lot of rendering, which means a lot of changes need to be transferred down to the client, will generate a lot of bandwidth usage.

Not that this is going to be accurate for designer workloads, but it is something you need to think about.
So while the N-series GPU has a cost per minute, you will also need to think about the bandwidth usage (traffic that goes out of Azure’s datacenters).

The next big thing from Veeam! Veeam Availability Platform

So Veeam has been busy lately and today they showed us what they have been working on with their platform! With the upcoming release of Veeam Availability Suite 9.5 (which I’ve blogged about previously here –>

Veeam B&R (or the Availability Suite) has always been aimed at virtualized environments. In order to become more agile with the move towards clouds, and with the introduction of Cloud Connect, Direct Restore to Azure and endpoint protection, it gets difficult to manage at a larger scale. Having orchestrated disaster recovery options as well makes management complex. Veeam has talked about the Orchestrator product, which will fit nicely into the puzzle; the final piece is Veeam Availability Console, which will allow for centralized management of remote offices, clouds and virtual environments.


Veeam has also figured out that agents are needed to ensure compatibility and full-featured backup across all platforms (virtual, physical, cloud).

So starting with Veeam 9.5 you can now have agents installed on any supported Windows or Linux operating system.

There are three different agents that can be used, depending on the required features, where for instance Server and Workstation are paid editions. If you now look at the features, we can do direct restore to Hyper-V and Microsoft Azure (both ASM and ARM), and we can do integration with Veeam Cloud Connect, meaning that we can do direct backups from clients and servers directly to a service provider or cloud endpoint. And since we can use the agents on a physical server, we can actually use them to do P2V or V2V from one platform to another cloud platform.


Another cool feature that Veeam has integrated is advanced use of the ReFS filesystem. For those that are not aware of ReFS, it is a new filesystem which came with Windows Server 2012, aimed at replacing NTFS while continuing to support the application compatibility level of NTFS. It has enhanced resiliency, file corruption detection, and use of integrity streams to ensure file integrity.


ReFS on Windows Server 2016 implements block cloning by remapping logical clusters (that is, physical locations on a volume) from the source region to the destination region. It then uses an allocate-on-write mechanism to ensure isolation between those regions. The source and destination regions may be in the same, or different, files. This allows for a low-cost metadata operation, rather than actually reading and writing the underlying file data directly, drastically decreasing the time to do, for instance, full synthetic backups. It also saves cost since it does not need to rewrite the data; it only copies the metadata. So think of the possibilities if we are using a Storage Spaces 2016 ReFS repository volume with, for instance, the improved deduplication feature as well. We can also use it in conjunction with Storage Spaces Direct (which is a shared-nothing storage cluster option) to ensure resiliency; this requires ReFS-formatted volumes from Microsoft.

NOTE: In order to use this feature you need to have a Windows Server 2016-formatted ReFS volume, and it requires a 2016 cluster level if used in a Storage Spaces Direct setup. The ReFS block cloning is also being used for Windows Server 2016 features such as VHDX acceleration and snapshot merging acceleration.

Another piece of the puzzle, which is aimed at service providers, is the ability to get chargeback reports directly from Veeam ONE.


And yeah, 9.5 is coming already in October, when Windows Server 2016 is scheduled to launch.
The different agents are scheduled to be released in December 2016 for Windows (Linux in November), and Veeam Availability Console is scheduled for Q1 2017.

Now the final piece of the puzzle, Office365.

When using Office365 it is your responsibility to control the data inside it; Microsoft does not take any backup of Office365. While you have different features that you can configure inside Office365 to ensure versioning, and retention policies defined in Exchange, if data is deleted, it's gone for good.


Therefore Veeam is also releasing Veeam Backup for Office365, which will allow for backup and restore of individual objects within mailboxes (aimed at Exchange Online as part of Office365). This feature is aimed at a Q4 release and will have a direct integration with Office365.

So I'm really looking forward to release 9.5, which will come with full support for Windows Server 2016, more support for Azure and Office 2016, greater integration with ReFS and the block cloning API, and of course the Orchestrator to more easily control disaster recovery, support for agents (already planning to implement an Azure AD server with the endpoint using DSC…) and of course enhancements to the backup process itself with processing engine enhancements + full VM restore acceleration tech.

NetScaler CPX Express released!

Up until now, I’ve been writing about the NetScaler CPX, but it actually comes in two editions, where one is a licensed edition and the other is a free edition. NetScaler CPX is a containerized version of the NetScaler which can run as a container on a container host such as Ubuntu, for instance. The CPX is a headless version of the NetScaler, which means that it does not come with any form of UI, so in order to configure and set up load balancing you can either use the CLI or use the REST API. The CPX supports many of the features which are included in the VPX or MPX, such as

  • Application availability
  • L4 load balancing and L7 content switching
  • SSL Offloading
  • IPv6 protocol translation
  • Application security
  • L7 rewrite and responder
  • Simple manageability
  • Web logging
  • AppFlow

We also talked a bit about the different versions of the CPX, where we have one licensed edition and one free edition. The NetScaler CPX Express is a developer version of CPX that is free and unlicensed. It supports up to 20 Mbps and 250 SSL connections. CPX Express supports most of the feature set of CPX with the exception of TCP Optimization and L7 DDoS, but still allows you to test and, for instance, load balance applications directly using the NITRO APIs.
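As a minimal sketch of driving CPX through the NITRO REST API with curl (the container IP here is a placeholder for your own lab CPX, and the default nsroot credentials are assumed):

```shell
# Create a load-balanced vserver via NITRO (172.17.0.2 is a placeholder CPX IP)
curl -u nsroot:nsroot -H "Content-Type: application/json" \
  -X POST http://172.17.0.2/nitro/v1/config/lbvserver \
  -d '{"lbvserver":{"name":"lb_web","servicetype":"HTTP","ipv46":"172.17.0.10","port":80}}'

# Add a backend service (placeholder IP)
curl -u nsroot:nsroot -H "Content-Type: application/json" \
  -X POST http://172.17.0.2/nitro/v1/config/service \
  -d '{"service":{"name":"svc_web1","ip":"172.17.0.20","servicetype":"HTTP","port":80}}'

# Bind the service to the vserver
curl -u nsroot:nsroot -H "Content-Type: application/json" \
  -X POST http://172.17.0.2/nitro/v1/config/lbvserver_service_binding \
  -d '{"lbvserver_service_binding":{"name":"lb_web","servicename":"svc_web1"}}'
```

The same objects can be created from the CLI inside the container; NITRO is just the scriptable way of doing it.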

The NetScaler CPX Express comes with the following capacity:

  • Throughput: 20 Mbps
  • SSL Connections: 250
  • SSL Throughput: 10 Mbps

Up until now the CPX has only been available for those that have had access to a MyCitrix account, but now Citrix has made CPX Express available for all, and you can download it for FREE here –>

And if you want more information on how to get started with CPX on a Docker host you can read more information here –>

Leveraging OMS Network performance monitor to detect network loss

So a new interesting feature came up in my workspace (Network performance monitor) which allows us to monitor network performance between different endpoints. You can read more about it here –>

In short, we install the OMS agent on two nodes, and the agents will then be used to monitor and probe between the two nodes; it then creates a baseline of the health and can also be used to create a network topology map.

NOTE: At this time, this feature is only available for Windows agents, and if you have the OMS agent installed you need to run this PowerShell script to set up a listener on port 8084 and configure some registry keys this feature uses –>
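If the script cannot open the port for you (for instance on a locked-down server), the listener port mentioned above can be allowed through the Windows firewall manually. A sketch, assuming the default Windows Firewall and the rule name as a placeholder:

```shell
rem Allow the OMS Network Performance Monitor probe port (run elevated).
rem "OMS NPM probe" is just a placeholder rule name.
netsh advfirewall firewall add rule name="OMS NPM probe" dir=in action=allow protocol=TCP localport=8084
```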

After the PowerShell script has run, it takes about 2 – 5 minutes before the network mapping appears in the OMS workspace

In my case I only have two agents installed on the same virtual network spanning across multiple hosts using VMware NSX, so it's the same layer 2 network, but across multiple layer 3 networks. So after you have defined the subnet, given it a network name, attached the subnet and clicked the SAVE button, you are good to go.


By default the network performance monitor includes a default monitor which doesn’t do much; it just checks for sudden changes (network loss etc.), but we can add our own rules to particular networks, such as looking at specific network loss or latency.


Now if we go back to the workspace portal, we play the waiting game…


And we have confirmed!


So to determine if this monitoring worked properly, I added 50 ms of latency using a Windows tool on one of my agents, which meant that it would increase the latency between the two agents. And voila! It detected the latency and triggered an alert!


I can also see the baseline change on that particular subnet


So what if we change the latency back to zero but change the packet loss to 15% ?

And here we go!


I see this feature as an excellent addition to the other features in OMS, and with the upcoming release of Wire Data as well, this tool will allow for some great insight; it just needs some support for common flow protocols!

Differentiate TCP profile settings between endpoints using AppQoE

So a new feature that was introduced in 11.1, which I for some reason overlooked, was the ability to configure separate TCP profiles using AppQoE actions. In essence this allows us to configure different AppQoE policies using different TCP profiles based upon which endpoint is connecting, using expressions.

Now by default when a client connects to a virtual server, the client and the vserver will communicate using the TCP profile which is defined on the virtual server, or using the default TCP profile if nothing is attached to the virtual server.


Now with 11.1, as mentioned, we can bind different AppQoE policies to a virtual server with different expressions, each of which will then have a TCP profile bound to it.


This makes it much easier for us to adjust TCP traffic based upon where the clients come from, user-agent types etc. To configure this we first need to enable the AppQoE feature. Next we need to create two different policies, where one is aimed at local connections coming from the LAN.


Here I use the profile internal_apps, and then I bind it to an AppQoE policy.


Next I create a similar policy aimed at Android devices,


then I use another TCP profile which is bound to that AppQoE policy


and lastly I bind the policies to the load balanced virtual server.
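Put together, the steps above could look roughly like this from the NetScaler CLI (the profile, policy and vserver names are the examples used in this post, and the match expressions are placeholders for your own logic):

```shell
# Enable the feature and create a TCP profile for internal clients
enable ns feature AppQoE
add ns tcpProfile internal_apps -WS ENABLED -SACK ENABLED -nagle ENABLED

# AppQoE action/policy pair for LAN clients (subnet is a placeholder)
add appqoe action act_internal -tcpprofile internal_apps
add appqoe policy pol_internal -rule "CLIENT.IP.SRC.IN_SUBNET(10.0.0.0/8)" -action act_internal

# A second pair aimed at Android devices, with another TCP profile
add ns tcpProfile mobile_apps -WS ENABLED -SACK ENABLED
add appqoe action act_android -tcpprofile mobile_apps
add appqoe policy pol_android -rule "HTTP.REQ.HEADER(\"User-Agent\").CONTAINS(\"Android\")" -action act_android

# Bind both policies to the load balanced virtual server
bind lb vserver lb_web -policyName pol_internal -priority 10
bind lb vserver lb_web -policyName pol_android -priority 20
```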


So now when a client connects to this virtual server, depending on what the device is and where it is coming from, it will get the matching AppQoE policy applied. It is important to note that even if we are defining a separate TCP profile for each device, some of the TCP profile parameters, such as Window Scaling, SACK etc., are evaluated during the initial connection, before the AppQoE policies are evaluated. Therefore they will not be changed even if we bind another TCP profile using AppQoE where we have other TCP settings for WS and SACK defined. It is the vServer TCP profile that decides whether WS and SACK will be enabled or not.

Increasing Microsoft Edge performance using TCP Fast Open on NetScaler

TCP Fast Open (TFO) is a TCP mechanism that enables speedy data exchange between a client and a server during TCP’s initial handshake. By using the TFO mechanism, you can reduce an application’s network latency by the time required for one full round trip, which significantly reduces the delay experienced in short TCP transfers.

So how does it work? This picture describes it a lot better!


It is important, however, to note that we need to have a supported client and a supported server to make this feature work. This feature was introduced in NetScaler 11.1 and just needs some configuration to work properly.

This can be done by adjusting a TCP Profile with the TCP Fast Open value


We can also define how long the TCP cookie should be valid; by default this is set to zero (which is defined in the TCP parameters on the NetScaler).
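From the NetScaler CLI the two settings above map to a TCP profile parameter and a global TCP parameter; a sketch, where the profile and vserver names are placeholders:

```shell
# Enable TCP Fast Open on a TCP profile and bind it to the vserver
add ns tcpProfile tfo_profile -fastOpen ENABLED
set lb vserver lb_web -tcpProfileName tfo_profile

# Cookie validity timeout (in seconds) is a global TCP parameter
set ns tcpParam -fastOpenCookieTimeout 3600
```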


After this setting is configured we need to enable TCP fast open for Microsoft Edge. Note that this feature is not enabled by default. Microsoft wrote a blog about TCP fast open earlier this year –>

But not everything is well documented in the blog post! First off, you need to have build 1607 to get support for TCP Fast Open in the Windows kernel. If you have TCP Fast Open you can see it enabled by using this command:

netsh interface tcp show global (you will see TCP Fast Open in the output); if you do not see it present you need to update your Windows 10.


To enable TCP Fast Open in Edge you need to open Microsoft Edge (build 14352 or higher) and type about:flags in the address bar.


Then scroll down and enable TCP fast open, then restart the browser.


Next we need to test that it is working! By default in Microsoft Edge it ONLY WORKS UNDER HTTPS/TLS; it makes sense, but it is not documented.

Here we can see from WireShark the client request going to the web-server
( = Windows 10 client, = NetScaler Virtual Server)


And here I can see the NetScaler responding with the Cookie


And here we can see that the client uses the TCP Fast Open cookie for the second request.


So voila! Will this small change improve web performance? Not yet! There are still a number of ISPs which block the TCP Fast Open cookie option in TCP (ref: which means that it falls back to regular TCP and then triggers a TCP retransmission.

But for those that have TCP Fast Open enabled on their web servers, as seen here, implementing TCP Fast Open will allow for faster download of websites.


Exam 70-744 Securing Windows Server 2016

This is a new Microsoft exam which is currently under development, but it has a lot of interesting points which Microsoft should have focused on a long time ago in terms of training: security!

The exam is of course based upon a training course –>

But the exam itself focuses on securing network traffic, DNSSEC, Microsoft ATA, breach detection using Sysinternals tools, Credential Guard, JEA (Just Enough Administration), AppLocker, use of OMS, BitLocker, containers and so on.

You can read more about the exam here –> ( I expect some beta coming soon)

And oh, a study guide is in the making!

The future of RemoteApp and Windows 10 VDI from Azure with Citrix Cloud

So today Citrix announced some pretty big news: moving ahead, Microsoft will be stepping down development of their Azure RemoteApp feature in Azure, and Citrix will be taking over with a new feature called XenApp Express (or RemoteApp 2.0), which appears to be a further development of their Citrix Cloud offering.

Microsoft will continue to support existing Azure RemoteApp customers on the service through August 31st, 2017, which is when the service will be wound down, and note: New purchases of Azure RemoteApp will end as of October 1st, 2016.

If you want to sign up for the upcoming tech preview you can do so here –>

Now even though RemoteApp has had its limitations, there have been a couple of good things about it which make it attractive for SMBs.

  • 1: User-based cost (meaning that we can purchase the service per user and not worry about all the other service costs in Azure)
  • 2: Azure Active Directory integration (meaning that customers without an on-premises Active Directory could easily set up RemoteApp using their existing Office365 users)
  • 3: Master image creation based upon a running virtual machine in Azure.

Now this new feature, XenApp Express, is developed in conjunction with Windows 10 on Azure using Citrix Cloud as well, so I expect there is a lot of development focus here at the moment.

Now at this time there is not a lot of information available on XenApp Express; I will update the blog post when I have more information. Here is the official statement from Citrix –>

Is low latency enough? Optimizing TCP for optimal ICA traffic

So I decided to write this blogpost to actually show the effects of TCP optimization on Citrix.
I’ve been stating that you should always change the TCP profile for NetScaler Gateway because the default is BAD.

So I therefore decided to do a bit of research to see what kind of effect a simple TCP profile has on a NetScaler Gateway vServer: a simple setup where I had two NetScaler Gateway virtual servers, where one has the default TCP profile and the other has the nstcp_xa_xd profile. I also used NetScaler Insight to follow the latency between the clients and the server, both using the same version of NetScaler and the same version of Citrix Receiver to connect to both sites.
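For reference, switching a NetScaler Gateway virtual server from the default profile to the built-in XA/XD-optimized profile is a one-liner from the CLI (the vserver name is a placeholder):

```shell
# Bind the built-in ICA-optimized TCP profile to the Gateway vserver
set vpn vserver gw_optimized -tcpProfileName nstcp_default_XA_XD_profile
```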

So before we begin, we also have the latency issue.

Latency is the time it takes to go end-to-end; in this case, between the endpoint and the VDA agent.

It should be noted that my connection has about 8–10 ms of WAN latency, so is that low enough to ensure a good user experience? The problem is not always the latency, but a bunch of different issues (jitter, WiFi overhead, packet loss & retransmissions, congestion etc.).

So to start out with, I did a file transfer test, which allows me to see how stable the connection is and how it fluctuates during the transfer.

First test: file transfer between endpoint and RDSH host, a small file of 300 MB, using a Citrix Receiver session.

File transfer from the default TCP profile

As you can see it spikes up and down during the entire file transfer; this is because the profile is lacking certain TCP properties like window scaling and SACK.

File transfer from the Optimized TCP profile virtual server

As you can see it has a much better transfer rate, even though it stays at about the same bandwidth; this is because of the limits on the broadband connection in my lab environment.

Information from Insight (using AppFlow, which can only do 1-minute granularity): as you can see, the optimized server gains higher bandwidth a lot faster because of the TCP window scaling configured. Since I transferred such a small file, it was quick to go down again.


So what if we try to add some additional packet loss to the mix? What if we have an additional 5% packet loss and try the same scenario? The optimized TCP profile has, for instance, SACK, DSACK and FACK, which make it easier to resume a TCP stream in the event of packet loss, since it does not need to retransmit all packets which have been lost.

Optimized TCP Profile (TCP Window Scaling, Nagle, SACK, DSACK etc)

Still going pretty stable during the file transfer.

Default TCP Profile

Well, this isn’t going very well. Because of the packet loss, it takes a lot more time for the default TCP profile to resume the TCP stream, and in the event of packet loss the sending end has to retransmit all dropped packets.


From Insight we can see spikes using the default TCP profile, while the optimized TCP profile manages to keep a steady flow. It is also important to note that Insight doesn’t look at packet loss; it only looks at the bandwidth from the VDA agent.


Westwood+ defined in the TCP profile. (TCP Westwood+ is a sender-side only modification of the TCP Reno protocol stack that optimizes the performance of TCP congestion control over both wireline and wireless networks.) Note that the default nstcp profile and nstcp_xa_xd use the New Reno congestion algorithm.



Westwood+ defined in the TCP profile  + TCP Hystart

TCP Hystart is a new TCP profile parameter in 11.1. It is a slow-start algorithm that dynamically determines a safe point at which to terminate slow start (ssthresh), enabling a transition to congestion avoidance without heavy packet losses. If congestion is detected, Hystart enters a congestion avoidance phase. Enabling it gives you better throughput in high-speed networks with high packet loss. This algorithm helps maintain close to maximum bandwidth while processing transactions.
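A custom profile combining the settings discussed here could be sketched like this from the CLI (the profile and vserver names are examples, and the parameter set is an illustration rather than a tuning recommendation):

```shell
# Optimized profile: window scaling, SACK, Nagle, Westwood+ flavor and Hystart
add ns tcpProfile tcp_ica_opt -WS ENABLED -SACK ENABLED -nagle ENABLED \
    -flavor Westwood -hystart ENABLED

# Bind it to the Gateway virtual server used in the test
set vpn vserver gw_optimized -tcpProfileName tcp_ica_opt
```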

TCP Profile:

Notice that using Westwood + Hystart in the optimized TCP profile on the NetScaler Gateway virtual server, I get higher throughput than I initially did on the virtual server, and have a steadier stream of TCP segments (notice that this is still with 5% packet loss).

Westwood+ defined in the TCP profile  + TCP Hystart


Only optimized TCP profile

So this has been a little introduction into how to optimize a virtual server for ICA traffic. In the next articles I will take a closer look at latency and how it affects the TCP traffic flow, how we can configure the VDA optimal TCP receive window, and how to configure Windows Server 2016/2012 to ensure better performance.