Monthly Archives: October 2015

Office365 on Terminal Servers done right

So this is a blogpost based upon a session I had at the NIC conference, where I spoke about how to optimize the delivery of Office365 in a VDI/RDSH environment.

There are multiple things we need to think and worry about. Might seem a bit negative, but that is not the idea, just being realistic :)

So this blogpost will cover the following subjects

  • Federation and sync
  • Installing and managing updates
  • Optimizing Office ProPlus for VDI/RDS
  • Office ProPlus optimal delivery
  • Shared Computer Support
  • Skype for Business
  • Outlook
  • OneDrive
  • Troubleshooting and general tips for tuning
  • Remote display protocols and when to use which.

So what is the main issue with using Terminal Servers and Office365? The Distance….

This is the headline from a blogpost on Citrix blogs about XenApp best practices.


So how do we fix this when we have our clients on one side, the infrastructure on another, and Office365 in a different region, separated by long distances, while still trying to deliver the best experience for the end user? In some cases we need to compromise, because that should be our end goal: deliver the best user experience.


User Access

First off: do we need federation, or is plain password sync enough? Password sync is easy and simple to set up and does not require any extra infrastructure. We can configure password hash sync, which lets Azure AD handle the authentication process. The problem with doing this is that we lose a lot of the capabilities we might rely on in an on-premises solution:

  • Audit policies
  • Existing MFA (If we use Azure AD as authentication point we need to use Azure MFA)
  • Delegated Access via Intune
  • Lockdown and password changes (since a change must be synced to Azure AD before it takes effect)

NOTE: Since I am above average interested in NetScaler I wanted to include another note here. For those that don't know: NetScaler with AAA can in essence replace ADFS, since NetScaler now supports acting as a SAML iDP. An important limitation to note is that NetScaler does not support the Single Logout profile or the Identity Provider Discovery profile from the SAML specification. We can also use NetScaler Unified Gateway with SSO to Office365 using SAML. The setup guide can be found here

NOTE: We can also use VMware Identity Manager as a replacement to deliver SSO.

Using ADFS gives a lot of advantages that password hash sync does not:

  • True SSO (While password hash gives Same Sign-on)
  • If we have Audit policies in place
  • Disabled users are locked out immediately, instead of waiting up to 3 hours until the Azure AD Connect sync engine starts replicating (and about 5 minutes for password changes)
  • If we have on-premises two-factor authentication we can most likely integrate it with ADFS, which is not possible with password hash sync alone
  • Other security policies, like time of the day restrictions and so on.
  • Some licensing stuff requires federation

So to sum it up: please use federation.

Initial Office configuration setup

Secondly, the Office suite from Office365 uses something called Click-to-Run, which is in essence an App-V-wrapped Office package from Microsoft. It allows for easy updates directly from Microsoft, instead of dabbling with the MSI installer.

In order to customize this installer we need to use the Office deployment toolkit which basically allows us to customize the deployment using an XML file.

The deployment tool has three switches that we can use.

setup.exe /download configuration.xml

setup.exe /configure configuration.xml

setup.exe /packager configuration.xml

NOTE: Using /packager creates an App-V package of Office365 Click-to-Run and requires a clean VM, just as when sequencing an App-V package. The package can then be distributed using existing App-V infrastructure or other tools. But remember to enable scripting on the App-V client, and do not alter the package using the sequencing tool; that is not supported.

The download switch downloads Office based upon the configuration file. Here we can specify bit edition, version number, which Office applications to include, update path, and so on. The configuration XML file looks like this:


<Configuration>

  <Add OfficeClientEdition="64" Branch="Current">

    <Product ID="O365ProPlusRetail">

      <Language ID="en-us" />

    </Product>

  </Add>

  <Updates Enabled="TRUE" Branch="Business" UpdatePath="\\server1\office365" TargetVersion="16.0.6366.2036" />

  <Display Level="None" AcceptEULA="TRUE" />

</Configuration>


Now if you are like me and don't remember all the different XML parameters, you can use this site to customize your own XML file –>

When you are done configuring the XML file you can choose the export button to have the XML file downloaded.

If we have specified a specific Office version in configuration.xml, it will be downloaded to a separate folder and stored locally when we run setup.exe /download configuration.xml.

NOTE: The different build numbers are available here –>

When we are done with the download of the Click-to-Run installer, we can change the configuration file to reflect the path of the Office download:

<Configuration> <Add SourcePath="\\share\office" OfficeClientEdition="32" Branch="Business">

Then we run setup.exe /configure configuration.xml, and the install is done from that path.

Deployment of Office

The main deployment is done by running setup.exe /configure configuration.xml on the RDSH host and waiting for the installation to complete.

Shared Computer Support

<Display Level="None" AcceptEULA="True" /> 
<Property Name="SharedComputerLicensing" Value="1" />

In the configuration file we need to remember to enable SharedComputerLicensing, or else we get this error message.
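To tie the pieces together, a minimal complete configuration.xml for an RDSH golden image could look like the following sketch (the share path, language and exclusions are placeholders based on the fragments in this post; adjust them to your environment):

```xml
<Configuration>
  <!-- 32-bit is usually the safer choice on session hosts -->
  <Add SourcePath="\\server1\office365" OfficeClientEdition="32" Branch="Business">
    <Product ID="O365ProPlusRetail">
      <Language ID="en-us" />
      <!-- Leave the OneDrive sync client out of RDSH/VDI images -->
      <ExcludeApp ID="Groove" />
    </Product>
  </Add>
  <Updates Enabled="TRUE" Branch="Business" UpdatePath="\\server1\office365" />
  <Display Level="None" AcceptEULA="TRUE" />
  <!-- Required on RDSH/VDI so multiple users can share one activation -->
  <Property Name="SharedComputerLicensing" Value="1" />
</Configuration>
```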


If you forgot, you can also enable it using this registry key (just store it as a .reg file):

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Office\15.0\ClickToRun\Configuration]
"InstallationPath"="C:\\Program Files\\Microsoft Office 15"
"SharedComputerLicensing"="1"

Now we are actually done with the golden image setup. Don't start any of the applications yet if you want to use it for an image. Also make sure that there are no licenses installed on the host, which can be checked using this tool:

cd 'C:\Program Files (x86)\Microsoft Office\Office15'
cscript.exe .\OSPP.VBS /dstatus


This should be blank!

Another issue is that when a user starts an Office app for the first time, he/she needs to authenticate once; a token is then stored locally in the %localappdata%\Microsoft\Office\15.0\Licensing folder, and it expires within a couple of days if the user is not active on that terminal server. Think about it: if we have a large farm with many servers, that might well be the case, and if a user is redirected to another server he/she will need to authenticate again. If the user keeps landing on the same server, the token is refreshed automatically.
NOTE: This requires Internet access to work.

It is also important to remember that the shared computer support token is bound to the machine, so we cannot roam that token between computers using any profile management tool.

But a nice thing is that if we have ADFS set up, Office can activate automatically against Office365; this is enabled by default. So no pesky logon screens.

We just need to add the ADFS domain to the trusted sites zone in Internet Explorer and define this setting as well:

Automatic logon only in Intranet Zone


This basically resolves the token issue with shared computer support :)
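If you would rather push the two Internet Explorer settings out centrally than click through the UI, the same result can be had with a registry file via the Site to Zone Assignment mechanism. A hedged sketch, where adfs.yourdomain.com is a placeholder for your own ADFS endpoint:

```
Windows Registry Editor Version 5.00

; Put the ADFS endpoint in the Local intranet zone (zone 1)
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains\yourdomain.com\adfs]
"https"=dword:00000001

; 1A00 = logon behaviour for the Local intranet zone;
; 0 = automatic logon with current user name and password
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings\Zones\1]
"1A00"=dword:00000000
```

In a domain these are normally set with the "Site to Zone Assignment List" and zone security Group Policy settings instead of raw registry edits.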

Optimizing Skype for Business

So in regards to Skype for Business, what options do we have to deliver a good user experience? There are six options I want to explore:

  • VDI plugin
  • Native RDP with UDP
  • Native PCoIP
  • Native ICA (w or without audio over UDP)
  • Local app access
  • HDX Optimization Pack 2.0

Now the issue with the first one (which is a Microsoft plugin) is that it does not support Office365; it requires on-premises Lync/Skype. Another issue is that you cannot use the VDI plugin and the optimization pack at the same time, so if users are on the VDI plugin and you want to switch to the optimization pack, you need to remove the VDI plugin first.

ICA uses the TCP protocol and works with most endpoints, since everything basically runs directly on the server/VDI. The issue here is that we get no server offloading, so if we have 100 users running a video conference we might have a problem :) If the other options are not available, try to set up HDX RealTime with audio over UDP for better audio performance. Both RDP and PCoIP use UDP for audio/video and therefore do not require any specific customization.

But the problem with all of these is that they create a tromboning effect, consume more bandwidth, and eat up resources on the session host.


Local App Access from Citrix might be a viable option, which in essence means that a local application is drawn into the Receiver session, but this requires that the end user has Lync/Skype installed locally. It also requires Platinum licenses, which not everyone has, and it only supports Windows endpoints…

The last and most important piece is the HDX Optimization Pack, which offloads the server by running the HDX media engine on the end-user device.

And the optimization pack supports Office365 with both federated and cloud-only users. It also supports the latest clients (Skype for Business) and can work in conjunction with NetScaler Gateway and a Lync Edge server for on-premises deployments. This means we can get Mac/Linux/Windows users using server offloading, and with the latest release it also supports Office Click-to-Run and works with the native Skype UI.

So using this feature we can offload CPU/memory (and eventually GPU) from the RDSH/VDI instances back to the client, and audio/video traffic goes directly to the endpoint instead of into the remote session.


Here is a simple test showing the difference between running Skype for Business on a terminal server with and without HDX Optimization Pack 2.0:


Here is a complete blogpost on setting up HDX Optimization Pack 2.0

Next up we have Outlook, which for many is quite the headache… mostly because of the OST files that are dropped in the %localappdata% folder for each user. Office ProPlus has a setting called Fast Access, which means Outlook will in most cases try to contact Office365 directly; but if the latency becomes too high, the connection drops and it searches through the OST files instead.

Optimizing Outlook

Now this is the big elephant in the room and causes the most headaches. Outlook against Office365 can be set up in two modes: cached mode or online mode. Online mode uses direct access to Office365, but users lose features like instant search. In order to deliver a good user experience we need to compromise. The general guideline is to configure cached mode with 3 months of mail, and to store the OST file (which contains the emails, calendar, etc., and is typically 60-80% of the size of the mailbox) on a network share. These OST files are by default created in the local appdata profile, and streaming profile management solutions typically aren't a good fit for the OST file.

It is important to note that Microsoft supports having OST files on a network share, IF there is adequate bandwidth and low latency… and only if there is one OST file per user and the users have Outlook 2010 SP1 or newer.

NOTE: We can also use alternatives such as FSLogix or Unidesk to handle the profile management in a better way.

I'll come back to the configuration part later in the policy section. It is also important to use Outlook 2013 SP1 or later, which gives us MAPI over HTTP instead of RPC over HTTP, and does not consume as much bandwidth.


In regards to OneDrive, try to exclude it from RDSH/VDI instances, since the sync engine basically doesn't work very well there, and now that each user has 1 TB of storage space it will flood the storage quicker than anything else if users are allowed to use it. Also, there are no central management capabilities, and network shares are not supported.

There are some changes in the upcoming unified client, in terms of deployment and management but still not a good solution.

You can remove it from the Office365 deployment by adding this in the configuration file:

<ExcludeApp ID="Groove" />

Optimization and group policy tuning

Now something that should be noted is that before installing Office365 Click-to-Run you should optimize the RDSH session hosts or the VDI instances. A blogpost published by Citrix noted a 20% improvement in performance after some simple RDSH optimization was done.

Both VMware and Citrix have free tools for RDSH/VDI optimization which should be looked at before doing anything else.

Now the rest is mostly Group Policy tuning. First we need to download the ADMX templates from Microsoft (either 2013 or 2016) and add them to the central store.

We can then use Group Policy to manage the specific applications and how they behave. Another thing to think about is using the Target Version group policy to pin which specific build we want to be on, so we don't get a new build each time Microsoft rolls out a new version, because from experience I can tell you that some new builds include new bugs –>


Now the most important policies are stored in the computer configuration.

Computer Configuration –> Policies –> Administrative Templates –> Microsoft Office 2013 –> Updates

Here there are a few settings we should change to manage updates.

  • Enable Automatic Updates
  • Enable Automatic Upgrades
  • Hide Option to enable or disable updates
  • Update Path
  • Update Deadline
  • Target Version

These control how we do updates. We can enable automatic updates without an update path or target version, which will essentially make Office auto-update to the latest version from Microsoft. Or we can specify an update path (a network share where we have downloaded a specific version), specify a target version, enable automatic updates, and define a deadline, for a specific OU for instance. This triggers an update using a scheduled task that is added with Office, and when the deadline is approaching, Office has built-in triggers to notify end users of the deployment. So using these policies we can have multiple deployments for specific users/computers: some on the latest version and some pinned to a specific version.
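Under the hood the Office 2013 ADMX update policies write values to the officeupdate policy key, so you can verify (or script) the staged-update scenario above with a .reg file. A hedged sketch; the share and build number are placeholders for your own staging location:

```
Windows Registry Editor Version 5.00

; Registry equivalents of the update policies above
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\office\15.0\common\officeupdate]
; Enable automatic updates, but hide the enable/disable option from users
"enableautomaticupdates"=dword:00000001
"hideenabledisableupdates"=dword:00000001
; Pull updates from our own staged share, pinned to a specific build
"updatepath"="\\\\server1\\office365"
"updatetargetversion"="15.0.4745.1002"
```

Checking this key on a session host is also a quick way to confirm which update policy actually applied.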

The next part is for Remote Desktop Services only, to make sure that we have an optimized setup if we are using pure RDS. NOTE: Do not touch these if everything is working as intended.

Computer Policies –> Administrative Templates –> Windows Components –> Remote Desktop Services –> Remote Desktop Session Host –> Remote Session Environment

  • Limit maximum color depth (set to 16 bits; less data across the wire)
  • Configure compression for RemoteFX data (set to bandwidth optimized)
  • Configure RemoteFX Adaptive Graphics (set to bandwidth optimized)

Next there are more Office specific policies to make sure that we disable all the stuff we don’t need.

User Configuration –> Administrative Templates –> Microsoft Office 2013 –> Miscellaneous

  • Do not use hardware graphics acceleration
  • Disable Office animations
  • Disable Office backgrounds
  • Disable the Office start screen
  • Suppress the recommended settings dialog

User Configuration –> Administrative Templates –> Microsoft Office 2013 –> Global Options –> Customize

  • Menu animations (disabled!)

Next is under

User Configuration –> Administrative Templates –> Microsoft Office 2013 –> First Run

  • Disable First Run Movie
  • Disable Office First Run Movie on application boot

User Configuration –> Administrative Templates –> Microsoft Office 2013 –> Subscription Activation

  • Automatically activate Office with federated organization credentials

Last but not least, define Cached mode for Outlook

User Configuration –> Administrative Templates –> Microsoft Outlook 2013 –> Account Settings –> Exchange –> Cached Exchange Modes

  • Cached Exchange Mode (File | Cached Exchange Mode)
  • Cached Exchange Mode Sync Settings (3 months)

Then specify the location of the OST files, which of course should be somewhere else:

User Configuration –> Administrative Templates –> Microsoft Outlook 2013 –> Miscellaneous –> PST Settings

  • Default location for OST files (change this to a network share)
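Under the hood this policy writes a ForceOSTPath value for Outlook 2013. If you need to set it outside of Group Policy, a hedged PowerShell sketch (the share name is a placeholder; pick a low-latency file share, as noted earlier):

```powershell
# Point Outlook 2013 at a per-user OST folder on a file share.
# ForceOSTPath must be an expandable string so %username% resolves per user.
$key = "HKCU:\Software\Policies\Microsoft\Office\15.0\Outlook"
New-Item -Path $key -Force | Out-Null
New-ItemProperty -Path $key -Name "ForceOSTPath" -PropertyType ExpandString `
    -Value "\\fileserver\ostfiles\%username%" -Force | Out-Null
```

Note that the target folder must already exist and the user needs write access, or Outlook falls back to creating the OST locally.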

Network and bandwidth tips

Something that you need to be aware of is the bandwidth usage of Office in a terminal server environment.

Average latency to Office365 is 50-70 ms.

  • 2000 "heavy" users using online mode in Outlook: about 20 Mbps at peak

  • 2000 "heavy" users using cached mode in Outlook: about 10 Mbps at peak

  • 2000 "heavy" users doing audio calls in Lync: about 110 Mbps at peak

  • 2000 "heavy" users working in Office over RDP: about 180 Mbps at peak

Which means that using, for instance, the HDX Optimization Pack for 2000 users might "remove" 110 Mbps of bandwidth usage.

Microsoft also has an application called the Office365 Client Analyzer, which can give us a baseline of how our network performs against Office365, covering DNS, latency to Office365 and such. And DNS is quite important with Office365, because Microsoft uses proximity-based load balancing, and if your DNS server is located elsewhere than your clients, you might be sent in the wrong direction. The client analyzer can give you that information.


(We could however buy ExpressRoute from Microsoft, which would give us low-latency connections directly into their datacenters, but this is only suitable for LARGER enterprises, since it costs a HIGH amount of $$.)


ExpressRoute also lets larger enterprises overcome a basic limitation of the TCP stack: one external NAT address can support only about 4,000 concurrent connections, and Outlook alone consumes about 4 concurrent connections per user, with Lync adding some as well.

Microsoft recommends that in an online scenario the clients have no more than 110 ms latency to Office365; in my case I have about 60-70 ms. Combine that with some packet loss or an adjusted MTU and, well, you get the picture :)

Using Outlook online mode, we should have a maximum latency of 110 ms; above that the user experience declines. Another thing is that online mode disables instant search. We can use the Exchange traffic Excel calculator from Microsoft to calculate the bandwidth requirements.

Some rules of thumb: do some calculations! Use the bandwidth calculators for Lync/Exchange, which might point you in the right direction. We can also use WAN accelerators (with caching), which might lighten the burden on the bandwidth usage. You also need to think about the bandwidth usage if you have automatic updates enabled in your environment.

Troubleshooting tips

As the last part of this LOOONG post, I have some general tips on using Office in a virtual environment. This is just going to be a long list of different tips:

  • For Hyper-V deployments, check VMQ and latest NIC drivers
  • 32-bit Office C2R typically works better than 64-bit
  • Antivirus? Make exceptions!
  • Remove Office products that you don't need from the configuration, since they add extra traffic when doing downloads and more stuff on the virtual machines
  • If you don't use Lync, disable the audio service!
  • If using RDSH, check the Group Policy settings I recommended above
  • If using Citrix or VMware, make sure to tune the policies for an optimal experience, and use the RDSH/VDI optimization tools from the different vendors
  • If Outlook is sluggish, check that you have adequate storage I/O to the network share (NO, HIGH BANDWIDTH IS NOT ENOUGH IF IT IS STORED ON A SIMPLE RAID WITH 10k disks)
  • If all else fails with Outlook, disable MAPI over HTTP. In some cases where getting new mail takes a long time, try disabling this; it used to be a known error

Remote display protocols

Last but not least, I want to mention this briefly: if you are setting up a new solution and thinking about choosing one vendor over the other, the first things to consider are

  • Endpoint requirements (Thin clients, Windows, Mac, Linux)
  • Requirements in terms of GPU, mobile workers, etc.

Now we have done some tests which show that Citrix has the best features across the different sub-protocols:

  • ThinWire (best across high-latency lines; using TCP it works at over 1800 ms latency)
  • Framehawk (works well on lines with 20% packet loss)

While PCoIP performs a bit better than RDP. I have another blogpost on the subject here –>

Enabling file-level restore on Nutanix in NOS 4.5.1

Ever since I heard that this feature was included in 4.5 I was eager to give it a spin, but since it is still in Tech Preview it was difficult to find any documentation on the feature; the only mentions were in the release notes and some blog posts I found. Since it was TP, I guessed that the feature would not appear in PRISM, so I had to dig into the CLI.

I noticed that there was a command for FLR under virtualmachine, list-flr-snapshots. When I ran the command:

ncli> virtualmachine list-flr-snapshots vm-id=00051d07-74fe-2635-0000-00000000698a::5035e717-b404-916d-72d4-a8750120c633
Error: Nutanix Guest Tools are not enabled for this VM.

So again, where can I find the Guest Tools to enable this? In the CLI :)

ncli> nutanix-guest-tools enable vm-id=00051d07-74fe-2635-0000-00000000698a::5035e717-b404-916d-72d4-a8750120c633

    VM Id                     : 00051d07-74fe-2635-0000-00000000698a::5035e717-b404-916d-72d4-a8750120c633
    Nutanix Guest Tools En… : true
    File Level Restore        : false

I saw that the file level restore option was disabled so I needed to enable it for a particular machine, which was in a protection domain.

ncli> nutanix-guest-tools enable-applications vm-id=00051d07-74fe-2635-0000-00000000698a::5035e717-b404-916d-72d4-a8750120c633 application-names=”File Level Restore”

    VM Id                     : 00051d07-74fe-2635-0000-00000000698a::5035e717-b404-916d-72d4-a8750120c633
    Nutanix Guest Tools En… : true
    File Level Restore        : true

Then I needed to mount the guest tools to the VM

ncli> nutanix-guest-tools mount vm-id=00051d07-74fe-2635-0000-00000000698a::5035e717-b404-916d-72d4-a8750120c633
Successfully mounted Nutanix Guest Tools.

This in essence will mount an ISO on the VM's CD/DVD-ROM drive. My first mistake was not having Java installed.


After installing Java I could continue with the configuration. In the Nutanix Guest Tools CLI I can now list and mount my snapshots.

Using the commands

flr ls-snaps (to list snapshots)

flr attach-disk disk-label=labelname snapshot-id=idname


Then I can use a regular file explorer to get to my original content as it was at the time of the snapshot.

Pin-to-SSD in Nutanix NOS 4.5

So earlier today I was looking at the Pin to SSD video from Andre Leibovici, shown here –>, and figured I wanted to give this feature a spin…

But no matter where I looked, I didn't find the feature within Prism; luckily, thanks to the Twitter gods, I got in touch with the Nutanix Bible author himself.


So off I went exploring the CLI, and found that under ncli virtual-disk there was an option called update-pinning, which has three required attributes: id, tier-name and pinned-space.

In order to get the ID of the virtual disk we want to pin to SSD, we use the virtual-disk list command. To get the names of the different tiers we can use the list tier command.

So when we have what we need, we can use the command:

virtual-disk update-pinning id=idofthevdisk tier-name=idofthessdtier pinned-space=amountofGBtopintossd
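Putting the steps together, a full session might look like this sketch (the disk ID and size are placeholders taken from your own output; SSD-SATA is a typical SSD tier name, but verify it with the tier list on your cluster):

```
# Find the ID of the virtual disk we want to pin
ncli> virtual-disk list

# Pin 50 GB of that disk to the SSD tier (pinned-space is in GB)
ncli> virtual-disk update-pinning id=<vdisk-id> tier-name=SSD-SATA pinned-space=50

# Verify: the disk should now report pinned space
ncli> virtual-disk list
```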


If we now run virtual-disk list we can see that it has pinned space, but note that this is not visible in PRISM.

Getting Started With Nutanix and PowerShell

Now that I have my hands on some Nutanix hardware it was about time to play a little bit with the features that are available on the platform. All of the stuff we do in PRISM is linked to the REST API, Nutanix also has a PowerShell cmdlets which also leverages the REST API.

Downloading the Nutanix cmdlets can be done from within PRISM

In order to connect to a cluster, use the following command line.

NOTE: for security reasons we should store the password as a secure string, by declaring these variables before connecting.

$user = "your prism user"

$password = read-host "Please enter the prism user password:" -AsSecureString

connect-ntnxcluster -Server <cluster ip> -UserName $user -Password $password -AcceptInvalidSSLCerts (the last switch only if you are using the self-signed certificate)

After we have connected we can use other commands such as



Running get-command -module NutanixCmdletsPSSnapin will list all cmdlets available in the snapin. Most of the cmdlets have the same input requirements as the REST API.
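If you want to narrow that list down, standard PowerShell filtering works fine against the snapin. A small sketch (the filter string is just an illustration; it assumes the Nutanix snapin is loaded):

```powershell
# List only the VM-related cmdlets in the Nutanix snapin
Get-Command -Module NutanixCmdletsPSSnapin | Where-Object { $_.Name -like "*VM*" }

# And read the built-in help for one of them
Get-Help Get-NTNXVM -Full
```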

But not all cmdlets are properly documented, so during the course of the week I found out that there was one line of code that was crucial.

Get-ntnxalert | resolve-ntnxalert


And for instance, if someone has read my blogpost on setting up Nutanix monitoring using Operations Manager, we can also use PowerShell to set up the SNMP config using these simple commands:

add-ntnxsnmptransport -protocol "udp" -port "161" | add-ntnxsnmpuser -username username -authtype SHA -authkey password -privtype AES -privkey password

BTW: here is a reference poster for all the PowerShell cmdlets for Nutanix.

Setting up Operations Manager for Nutanix

Nutanix has a management pack available for several monitoring solutions, such as SolarWinds, Nagios and of course Operations Manager, which allows us to monitor the hardware / DSF / hypervisor directly from Operations Manager. Combining this with the service management capabilities of Operations Manager is a killer combo. The setup is pretty simple when run from a management server.



After the management packs are installed, new monitoring panes should appear within the console.


Now the management pack uses a combination of SNMP and the REST API. First off, we configure the SNMP properties for the management pack, which can be done in PRISM under Settings // SNMP.


From there we need to enable SNMP and set up a v3 user profile which OpsMgr will use to authenticate and encrypt traffic.


And lastly, define a transport rule, which is UDP on port 161.


Next we can run a discovery wizard from within Operations Manager to search for the CVM machines.


Next we need to add each device and create a specific SNMP user that we can use to contact the Nutanix CVM.




Eventually the devices will appear under the discovered devices pane, which means that we can contact them using SNMP.


If we now head back to the monitoring pane we can see that the devices appear healthy.


Next is to add monitoring to the cluster. This uses the REST API to communicate with the cluster IP.



Now we should add both a PRISM account and an IPMI account (note that I have excluded the IPMI part, since I had some minor issues with the IPMI connection on my nodes at the time).


Eventually the nodes will appear in the monitoring pane, and we can extract performance information from the cluster as well.


If we go into the health explorer of a CVM, we can see all the different monitoring objects it checks.


Note: If you upgrade NOS you may need to rerun the cluster monitoring wizard.

New job! Systems Engineer at Exclusive Networks (BigTec)

So I have been on a job hunt for some time now, and I'm quite picky about what job to take, because of a lot of personal stuff happening which has put a lot of strain on me, and because moving two hours away from Oslo to the middle of nowhere in Norway makes things much more difficult from a job perspective.

Even so, I have now started at Exclusive Networks (BigTec) as a Systems Engineer.

So what will I be doing there? BigTec, which is the area I will be focusing on, is a part of Exclusive Networks, a value-add distributor focusing on datacenter change.

Well, from a technical perspective I will be focusing on the different vendors which are part of the BigTec portfolio, such as Nutanix, vArmour, VMTurbo, Silver Peak and Arista.


So this is not my regular milk and butter… I have been focusing on Microsoft-related technology for like forever, but for my part it will be a good thing to expand my horizon to new products and other aspects of IT (this is most likely going to affect my blogposts going forward as well, you have been warned!), moving more towards pure datacenter-related technologies and security as well.

If you want to know more about what we are doing, head on over to our website

Configuration Manager and Easy Servicing

Now with the later releases of Configuration Manager, Microsoft has introduced something called Easy Servicing, which allows updates to be automatically installed within an environment. There have already been some major updates to Configuration Manager, such as TP3, Build 1509 and Build 1510.

For those that have tried Easy Servicing and have had issues with running the installation, here are some tips. (This applies to going from the 1509 to the 1510 build.)

If you have the update available in the console but are unable to run the 1510 update, and you find the following error message in HMAN.log:

*** EXEC spCMUSetUpdatePackageState N'db316362-77fc-46c9-9984-1baeb20615f4', 262146, N'', N'15.10.2015 05:17:50'   SMS_HIERARCHY_MANAGER     10/15/2015 2:37:59 PM  1032 (0x0408)

*** [42000][8114][Microsoft][ODBC Driver 11 for SQL Server][SQL Server]Error converting data type nvarchar to datetime. : spCMUSetUpdatePackageState SMS_HIERARCHY_MANAGER     10/15/2015 2:37:59 PM         1032 (0x0408)

…run the following SQL query against the site database:

EXEC spCMUSetUpdatePackageState N'DB316362-77FC-46C9-9984-1BAEB20615F4', 262146, N''

Also some others from the TechNet site:

  1. Symptom: The Configuration Manager console displays the rule that failed.

    Solution: Fix the Prerequisite Check rule error. For example, if you do not have the Windows 10 ADK installed and the associated prerequisite rule fails, install the Windows 10 ADK. Then, re-run <ConfigMgr_installation_folder>\EasySetupPayload\dcd17922-2c96-4bd7-b72d-e9159582cdf2\SMSSETUP\BIN\X64\prereqchk.exe on the site server. Once the check completes without an error, Version 1509 for Technical Preview will automatically restart.

  2. The installation for Version 1509 for Technical Preview stops unexpectedly.

    Symptom: The Configuration Manager console displays that the Version 1509 for Technical Preview installation has failed and Configuration Manager console no longer shows the update as available for installation. This might occur if a Configuration Manager service has stopped.

    Solution: Identify the error in the CMUpdate.log file and fix the issue, if possible. Then, make sure the Configuration Manager services are running, such as SMS_EXECUTIVE, SMS_SITE_COMPONENT_MANAGER, CONFIGURATION_MANAGER_UPDATE. Then, re-run <ConfigMgr_installation_folder>\EasySetupPayload\dcd17922-2c96-4bd7-b72d-e9159582cdf2\SMSSETUP\BIN\X64\prereqchk.exe on the site server. Once the check completes without an error, Version 1509 for Technical Preview will automatically restart.

If you are having issues that the updates are not showing in the console, you can try to restart the SMS_DMP_DOWNLOADER component, which should trigger the download. You can follow the dmpdownloader.log file.

After a while the update should be available in the console.


Which we can now right-click and choose Check Prerequisites. In the right corner we can click and see the status of the update.


Oh, and if you have trouble with the setup not working and are getting an error message like this:

You need to have the SQL Server 2012 Native Client installed, which can be found here –>

Then you can follow the cmupdate.log for further messages and status around the update happening in the background.

Eventually the update is ready and we can install it.


And voila!


So now that the update is complete, what is new?

Windows 10 servicing plans!


These allow us to select deployment rings for a specific collection.


Yay! Still waiting for the Office365 parts to appear.

Optimizing a crappy web application using Citrix Netscaler

So I have had the pleasure of optimizing a crappy web application over the last couple of days. This particular web application had the following properties:

  • Bound to port 8080
  • 401-based authentication enabled
  • URL absolutes
  • A lot of jibber in the code
  • Default page hardcoded to a specific URL (which we do not want as the first page exposed externally)

So when first looking at this setup my first thought was……

But we wanted to set up this application using the AAA module to get forms-based authentication, redirect the main page to another URL, remove unnecessary code and make sure that the URL absolutes are taken care of.

The first thing we needed to do was handle authentication and SSO against the application (after setting up the basic load balancing against the internal services).

1: Set up an AAA vServer and bind it to an LDAP policy (simple AD authentication)

2: Set up an Authentication Profile (which is used to handle the auth session and different authentication levels). It is important that we enter a domain name, which will be bound to the session.
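The two steps above can be sketched in the NetScaler CLI. All names, IPs and credentials here are hypothetical examples, and exact parameter names can vary between NetScaler firmware versions, so treat this as a rough outline rather than a copy-paste config:

```shell
# Hypothetical names/IPs -- adjust to your environment
# LDAP action and policy for simple AD authentication
add authentication ldapAction ldap-act -serverIP 10.0.0.20 -serverPort 389 \
    -ldapBase "dc=example,dc=local" -ldapBindDn "svc-ns@example.local" \
    -ldapBindDnPassword secret -ldapLoginName sAMAccountName
add authentication ldapPolicy ldap-pol ns_true ldap-act

# AAA vServer with the LDAP policy bound
add authentication vserver aaa-vs SSL 10.0.0.10 443
bind authentication vserver aaa-vs -policy ldap-pol

# Authentication profile carrying the domain that is bound to the session
add authentication authnProfile auth-prof -authnVsName aaa-vs \
    -AuthenticationDomain example.local
```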


When adding the AAA vServer to the LB vServer it is important not to choose both Authentication Virtual Server and Authentication Profile (if both are set, the vServer will default to the Virtual Server and bypass the profile where the domain info is set).


And then set it to Form Based Authentication as well; this will give the end user a NetScaler-based login page.

Next we had to manage SSO logout for the application: when a user terminated the session within the application, we wanted them to be redirected back to the login page.
This can be done using a traffic policy with Initiate Logout set. First, set up an expression which triggers when the user clicks the logout URL; in this application the logout URL was logout-currentuser.

My expression looked like this: HTTP.REQ.URL.CONTAINS(“logout-currentuser”). Make sure that the Initiate Logout option is enabled.
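As a rough CLI sketch of the same thing (hypothetical names; check the exact traffic-action parameters against your firmware version):

```shell
# Traffic action that tears down the AAA session when triggered
add tm trafficAction logout-act -initiateLogout ON

# Fire the action when the application's logout URL is requested
add tm trafficPolicy logout-pol "HTTP.REQ.URL.CONTAINS(\"logout-currentuser\")" logout-act

# Bind to the load-balancing vServer fronting the application
bind lb vserver app-lb -policyName logout-pol -priority 100
```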


Next we needed the default URL to be redirected to another page. The simplest way to handle this was a responder policy: since we know that the default URL was /config1, we could use the responder to redirect it to another custom page.

So we can use the expression HTTP.REQ.URL.CONTAINS(“/config1”) and then set up an action to redirect users to the URL we want.
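In CLI form this could look roughly like the following (the target URL and all object names are made-up examples):

```shell
# Redirect requests for the hardcoded default page to a custom landing page
add responder action redir-act redirect "\"https://app.example.com/landing\""
add responder policy redir-pol "HTTP.REQ.URL.CONTAINS(\"/config1\")" redir-act
bind lb vserver app-lb -policyName redir-pol -priority 100 \
    -gotoPriorityExpression END -type REQUEST
```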


Now, another thing we were struggling with was that the application's absolute URLs were redirecting the user requests to internal URLs, which made the connection fail. In order to change this we needed to use a URL transformation policy.
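A minimal URL transformation sketch, assuming the internal hostname is internal.example.local and the external name is app.example.com (both hypothetical):

```shell
# Transform profile with one rule: rewrite internal absolute URLs
# to the external name on responses, and back again on requests
add transform profile url-trans
add transform action url-trans-act url-trans 100
set transform action url-trans-act \
    -reqUrlFrom "https://app.example.com/(.*)" -reqUrlInto "http://internal.example.local/$1" \
    -resUrlFrom "http://internal.example.local/(.*)" -resUrlInto "https://app.example.com/$1"

# Policy referencing the profile, bound to the LB vServer
add transform policy url-trans-pol true url-trans
bind lb vserver app-lb -policyName url-trans-pol -priority 100 -type REQUEST
```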


Now everything was almost complete, but one piece was missing… When a user logged on to the application it worked fine: the redirect to the custom page worked, the URL transform rules were working and AAA signout was working… But if a user pressed F5, custom files outside the web app were not loading (screenshot taken from Chrome).


WTF? So I took a long coffee break and still didn’t quite comprehend what was happening… Then it was time to go deep-dive.


After some troubleshooting I found out that using CTRL+F5 in the browser made the page refresh and load as I wanted it to. When comparing the different requests I saw this.


The only difference between the requests using F5 (refresh) and CTRL+F5 was that the HTTP header Cache-Control was set to no-cache in the request header.

So what I needed to do was use something to set Cache-Control to no-store (which basically tells the browser not to cache any content; since this was a quite sensitive application, that was fine).

So using a rewrite action I could insert a new HTTP header:

Cache-Control: no-store
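A sketch of that rewrite in CLI form (again with hypothetical object names):

```shell
# Insert a Cache-Control: no-store header into every response
add rewrite action cache-ctl-act insert_http_header Cache-Control "\"no-store\""
add rewrite policy cache-ctl-pol true cache-ctl-act

# Bound on the response side of the LB vServer
bind lb vserver app-lb -policyName cache-ctl-pol -priority 100 -type RESPONSE
```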


and then bound it to the response side of the vServer. Now when I looked at new requests going to the virtual server, I could see that the response contained the no-store HTTP header.


End result, application working as intended!

Deep dive Framehawk (From a networking perspective)

Well, Citrix has released Framehawk with support for both enterprise WLAN and remote access using NetScaler. In order to set up Framehawk for remote access you basically need to do one thing: enable DTLS (and, of course, rebind the SSL certificate). DTLS is a variant of TLS that runs over UDP, which basically means that Framehawk is a UDP protocol. This is unlike RemoteFX, where Microsoft uses both TCP and UDP in a remote session: UDP for graphics and TCP for keystrokes and such.

So what does a Framehawk connection look like?


Externally, a client uses a DTLS connection to the NetScaler, and the NetScaler then uses a UDP connection to the VDA in the backend. The DTLS connection has its own sequence numbers, which are used for keeping track of connections.


There are some issues that you need to be aware of before setting up Framehawk.

Another important note is that Framehawk will not work properly over a VPN connection, since most VPN solutions will wrap packets inside a TCP layer or a GRE tunnel, which means that the UDP connection will not function as intended.


Now, Framehawk is not designed for low-bandwidth connections; it requires more bandwidth than Thinwire. So why is that?

“For optimal performance, our initial recommendation is a base of 4 or 5 Mbps plus about 150 Kbps per concurrent user. But having said that, you will likely find that Framehawk greatly outperforms Thinwire on a 2 Mbps VSAT satellite connection because of the combination of packet loss and high latency.”

The reason is that TCP will retransmit packets which are dropped, while UDP is a much simpler protocol without connection setup delays, flow control, and retransmission. In order to ensure that all mouse clicks and keystrokes are successfully delivered, Framehawk requires more bandwidth, since UDP is stateless and there is no guarantee that packets are successfully delivered. I believe that the Framehawk component of Citrix Receiver has its own “click” tracker which ensures that clicks are successfully delivered, and ensuring that requires more bandwidth.

Comments from Citrix: 

1) While SSL VPNs or any other TCP-based tunnelling like SSH redirection will definitely cause performance problems for the Framehawk protocol, anything that works at the IP layer like GRE or IKE/IPsec will work well with it. We’ve designed the protocol to maintain headroom for two extra layers of encapsulation, so you can even multiple-wrap it without effect. Do keep in mind that right now it won’t do well with any more layers, since that can cause fragmentation of the enclosed UDP packets, which will affect performance on some networks.

2) While it’s based entirely on UDP, the Framehawk protocol does have the ability to send some or all data in a fully reliable and sequenced format. That’s what we’re using for the keyboard/mouse/touch input channels. Anything that has to pass from the client to the server in a reliable way (such as keystrokes, mouse clicks and touch up/down events) will always do so inside of the protocol. You should never see loss of these events on the server, even at extremely high loss.

And one last comment for anyone else reading this: the Framehawk protocol is specifically designed for improving the user experience on networks with a high BDP (bandwidth delay product) and random loss. In normal LAN/MAN/WAN networks with either no or predominantly congestive loss and low BDP, Framehawk will basically start acting like TCP and start throttling itself if it does run into congestion. At some point, however, the techniques it uses have a minimal amount of bandwidth (which is hard to quantify since we composite multiple workloads on different parts of the screen). In those cases other techniques would be needed, like Thinwire Advanced. As we move down the road with our integration into existing Citrix products and start leveraging our network optimizations with bandwidth-optimized protocols like Thinwire and Thinwire Advanced, expect that to just get better!

New Azure backup “agent”

Today I was notified of a new Azure Backup agent which was released on Azure and in the Download Center. Until recently, Microsoft did not have support for backing up on-premises SharePoint, SQL, Exchange and Hyper-V, and Azure Backup was limited to files and folders. Now if we go into the Azure portal we can see that they have updated the feature set in the backup vault.


Now this points to a download called Azure Backup which was released yesterday. This new feature allows for disk-to-cloud backup of on-premises Exchange, SQL, SharePoint and Hyper-V, yay!


During the setup we can see that this is essentially a rebranded DPM setup, which supports most workloads but does not include tape support; it is most likely aimed at replacing DPM with tape and moving customers to DPM with a cloud tier instead.


As we can see, the Azure Backup wizard is basically DPM; it also installs SQL Server 2014.


The wizard will also set up an integration with a backup vault using a vault credential, which can be downloaded from the Azure website.


And voila! The end product. So instead of reinventing the wheel, Microsoft basically rebranded DPM as an Azure product. Does this mean the end of System Center DPM? Time will tell when an official blog post comes out.