Monthly Archives: February 2016

Goliath IT Analytics for NetScaler

Working with NetScaler on an almost daily basis, I usually have a good idea when something is not working as it should, or when something is plainly wrong.

It might also be that end-users are experiencing issues with a particular web service, or that their Citrix connection stops working. All these problems need to be addressed quickly, and therefore we need a tool that gives us insight into them.

Now Citrix has its own product called NetScaler Insight, which is bundled with NetScaler and gives us some of this capability. It offers Web Insight, which shows web traffic information, and HDX Insight, which shows detailed information about ICA sessions.

The problem with Insight is that it has licensing restrictions, which have stopped me from doing many implementations with it.

The NetScaler license you have affects how Insight behaves and how long data is stored in its database. For instance, with a NetScaler Standard license you only get real-time insight, and even with a Platinum license you can only store data for about 30 days. You also need Platinum to get HDX Insight at all.

License/Duration | 5 min | 1 hour | 1 day | 1 week | 1 month
Standard         | No    | No     | No    | No     | No
Enterprise       | Yes   | Yes    | No    | No     | No
Platinum         | Yes   | Yes    | Yes   | Yes    | Yes

Another thing to remember is the data it collects. For instance, Web Insight does not report any HTTP error codes (400, 500 and so on), so it can only give us statistics on how our website is doing, how many users it has and so on. Since the NetScaler is often the bridge between the end-users and the services it delivers, I would like an easy overview of all the different errors occurring in my infrastructure.

So this is where Goliath IT Analytics for NetScaler comes in. Like NetScaler Insight, it uses the standard AppFlow protocol and acts as an AppFlow collector, gathering data from the different NetScalers and storing it in a MySQL database within the appliance. This also gives us a lot of flexibility, since we can do what we want with the data as long as we are a bit familiar with MySQL.
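For reference, here is a rough sketch of how a NetScaler could be pointed at an external AppFlow collector through the NITRO REST API from PowerShell. The addresses, object names and the assumption that the NITRO resource names mirror the CLI objects (appflowcollector and so on) are mine, not taken from Goliath's documentation, so treat it as illustrative only.

$ns   = "http://192.168.1.10/nitro/v1/config"                      # NSIP of the NetScaler (example value)
$auth = @{ "X-NITRO-USER" = "nsroot"; "X-NITRO-PASS" = "nsroot" }  # replace with real credentials

# Enable the AppFlow feature on the appliance
Invoke-RestMethod -Method Post -Uri "$ns/nsfeature?action=enable" -Headers $auth -ContentType "application/json" -Body (@{ nsfeature = @{ feature = @("AppFlow") } } | ConvertTo-Json -Depth 5)

# Register the collector appliance (standard IPFIX/AppFlow port 4739 assumed)
Invoke-RestMethod -Method Post -Uri "$ns/appflowcollector" -Headers $auth -ContentType "application/json" -Body (@{ appflowcollector = @{ name = "col_goliath"; ipaddress = "192.168.1.50"; port = 4739 } } | ConvertTo-Json -Depth 5)

# An AppFlow action and policy bound to the relevant vServers then completes the export path.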

image

Goliath runs as a virtual appliance on any hypervisor (VMware, XenServer or Hyper-V), and one of its main features is that you can store data for pretty much as long as you want. This gives us the ability to measure the results of optimization changes. For instance, we can compare the RTT and average latency of NetScaler Gateway sessions for the last 30 days, and then compare month to month after we have adjusted the TCP settings on the NetScalers.

From the main dashboard, I can also easily see which URLs are accessed most often. This view can also give me a good indication of whether something is trying to brute-force a login on a particular URL.

image_thumb4

I can also get a quick overview of what kind of traffic is coming in, for instance what the server activity looks like and which NetScalers have active connections, as well as the total concurrent transactions happening in real time.

image_thumb6

Another important aspect is the ability to give us error alerts and reporting. Let us say that we host an e-commerce website which is published using NetScaler. If a customer is not able to purchase their items on our site because of an error or because of high response times, they will go and buy them somewhere else. Therefore it is crucial that we can quickly get hold of that information and fix the root cause behind it.

So if a user gets a 404 error from a particular web server, it will trigger an exception in AppFlow, be sent to the AppFlow collector, and show up in the main dashboard.

image

Now, for instance, I can see from the dashboard that a 404 error has been triggered.

image_thumb8

To find the root cause of this error, I can go into the reporting pane, choose the Web: Status Code Summary report, and from there click on the different error codes that have been gathered by AppFlow.

image_thumb10

If we then click on the (404 – Not Found) error message, we get even more detailed information: which NetScaler this concerns, which VIP (virtual server) received the error, and which backend servers were throwing the 404.

image_thumb12

If I scroll down I also see which URL the end-users were trying to access when they got the 404, and which IP addresses they came from.

image_thumb15

Now that I have this information, I can easily share it with my web developers so they can fix the issue quickly; the share button generates a static URL which I can forward to them.

From the reporting pane I also have access to more historical data. By default it is set to the last hour, but I can change it to a specific time range if I want to.

image_thumb17

This gives us broader historical data, for instance trends showing when, how and from where traffic is hitting our services. That is valuable insight if we are running an e-commerce website, and it allows for better planning. There is no limitation on the reporting time frame, so you can go back a year or more.

Using Goliath IT Analytics for NetScaler makes sense if you are using NetScaler for web services and need a clear overview of your traffic, with the benefit of error-code tracking and longer retention times.

One of the remaining issues is that you still require NetScaler Platinum if you want AppFlow for HDX Insight, for instance. Maybe Citrix should consider bundling this with their Platinum license, like they did with Comtrade, to make the license more worthwhile for customers?

Veeam 9 Scale-out backup repositories

So what did backup look like in v8? A backup job was attached to a single repository, and the problem was that the repository would get low on space from time to time. We could either clean up, try to expand the space, or do even more damage. Even though we might have multiple repositories available, we would need to move the backup data from one repo to another and then update the database, using for instance this KB: https://www.veeam.com/kb1729

image

Luckily this has changed in v9 with the introduction of the awesome Scale-out backup repositories.
This lets us group a mix of different repositories (extents) together, pooling the capacity of all our repositories into a single "pool".

To set up a scale-out repo, at least one regular backup repository must already exist on the backup server; then we can go and create one, and choose Add.

image

By default the Use per-VM backup files setting is enabled, which means Veeam writes a per-VM backup file chain instead of placing one large backup file for the whole job on a single extent.

image

We can also allow a full backup to be performed if a required extent (the one hosting the existing incremental backup files) is offline. We also have the option to define a placement policy, based either on data locality or on performance.

With the data locality policy, Veeam places all dependent backup files on the same extent, i.e. the incremental backups together with their full backup file. If we choose the performance policy, we can define which extents should hold the full backups and which should hold the incrementals.

image

In my case I have two repositories, one fast and one slow. I want all my full backups to be placed on the fast repository and the incrementals on the slow repository. The backup jobs all point to the same scale-out repository, but the repository handles the placement of the data.
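As a hedged sketch, the same setup can also be scripted with the Veeam PowerShell snap-in. The cmdlet and parameter names below are as I recall them from the v9 PowerShell reference, and the repository names are placeholders, so verify them against your own installation.

Add-PSSnapin VeeamPSSnapin

# The two existing extents: one fast repository for fulls, one slow one for incrementals
$fast = Get-VBRBackupRepository -Name "Repo-Fast"
$slow = Get-VBRBackupRepository -Name "Repo-Slow"

# Create the scale-out repository with the performance placement policy
Add-VBRScaleOutBackupRepository -Name "SOBR-01" -Extent $fast, $slow -PolicyType Performance

# Which extent holds full versus incremental files is then assigned per extent in the repository settings.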

image

image

So what happens if we need to do maintenance, or need to move the data off a repository that is being retired?

First we can put the extent into maintenance mode; once it is in maintenance mode we get the option to Evacuate backups. When evacuating, we cannot choose which extents should receive the data being moved. If we have multiple extents and the data locality policy enabled, Veeam will try to honor that policy, and the same goes if we have defined separate extents for incremental and full backups. If we evacuate an extent that holds only full backups, Veeam will try to move that data to another extent designated for full backups.

image

Note however that there are some limitations to the scale-out backup repository, depending on license and job type:

  • The scale-out backup repository functionality is available only in Enterprise and Enterprise Plus editions of Veeam Backup & Replication.
  • The scale-out backup repository cannot be used as a target for configuration backup jobs, replication jobs or endpoint backup jobs.
  • You cannot add a backup repository as an extent to the scale-out backup repository if any job of unsupported type is targeted at this backup repository or if the backup repository contains data produced by jobs of unsupported types (for example, replica metadata). To add such backup repository as an extent, you must first target unsupported jobs to another backup repository and remove the job data from the backup repository.
  • You cannot use a scale-out backup repository as a cloud repository. You cannot add a cloud repository as an extent to the scale-out backup repository.
  • If a backup repository is added as an extent to the scale-out backup repository, you cannot use it as a regular backup repository.
  • You cannot add a scale-out backup repository as an extent to another scale-out backup repository.
  • You cannot add a backup repository as an extent if this backup repository is already added as an extent to another scale-out backup repository.
  • You cannot add a backup repository on which some activity is being performed (for example, a backup job or restore task) as an extent to the scale-out backup repository.
  • If you use Enterprise Edition of Veeam Backup & Replication, you can create 1 scale-out backup repository with 3 extents for this scale-out backup repository. Enterprise Plus Edition has no limitations on the number of scale-out backup repositories or extents.

NetScaler and basic functions, status of vServers and ICMP ARP operations

When setting up a new NetScaler and migrating virtual servers from an old one, it is quite common to forget to disable or shut down the older vServers. NetScaler has options to disable different network behaviors, so in this post I want to explain what each option does.

In a layer 2 network, the ARP protocol (in IPv4 networks) is responsible for mapping IP addresses to MAC addresses. So if we have a vServer on 192.168.105.200 and we ping it from a host on the same subnet, the host will issue an ARP request to get the MAC address for that IP.

image

So say we have an enabled vServer running on that IP on port 80. If that IP is not in the ARP table, what happens when we open a network connection (a browser to that IP on that port)? The client will:

  • Send an ARP broadcast
  • Get the MAC address for that IP
  • Initiate a TCP connection to port 80 and send the HTTP request

The next time we open a connection, that MAC address will most likely already be in the host's ARP cache, so no new ARP request is required.
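While testing which box actually answers for a VIP, the Windows ARP cache can be inspected and flushed from an elevated prompt; the VIP address below is just the example used above.

arp -a | findstr 192.168.105.200          # show the cached MAC for the VIP, if any
netsh interface ip delete arpcache        # flush the dynamic ARP entries
ping 192.168.105.200                      # forces a fresh ARP broadcast before the echo request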

So let us say we want to set up a new NetScaler to replace the old one, using the same IP address. We can just disable the old vServer, right?

image

Wrong. What happens is that the IP will still be in use and respond to ARP, but the service running on port 80 will not be accessible.

Here we can see that NetScaler one and two respond at the same time, even though the vServer is disabled.

image

So what if we disable ARP on the VIP on the older NetScaler?

image

Yay! Now only one NetScaler will respond (if the ARP cache has been cleaned up; on Windows it takes about 2 minutes before a dynamic ARP entry is cleared out). So if you want to take an old vServer out of service: first disable the vServer itself, then disable ARP and ICMP on the VIP as well, which will stop it from communicating at all.

Or what I recommend is that you define the response parameters of the VIP.

image

When we set these to ONE_VSERVER, the VIP will only respond to ARP and ICMP if at least one vServer attached to it is in state UP. If we then disable a vServer for maintenance, ARP and ICMP are automatically disabled on the VIP as well, which makes a lot more sense when doing maintenance: if an address responds to ICMP but the service itself is down, people tend to start troubleshooting pretty fast.
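For completeness, here is a hedged sketch of setting those response parameters through the NITRO REST API from PowerShell, assuming the nsip resource accepts the same ONE_VSERVER values as the GUI/CLI; the NSIP, VIP and credentials are example values only.

$ns   = "http://192.168.105.2/nitro/v1/config"                     # NSIP (example)
$auth = @{ "X-NITRO-USER" = "nsroot"; "X-NITRO-PASS" = "nsroot" }

# Only answer ARP/ICMP for the VIP while at least one bound vServer is UP
$body = @{ nsip = @{ ipaddress = "192.168.105.200"; arpresponse = "ONE_VSERVER"; icmpresponse = "ONE_VSERVER" } } | ConvertTo-Json -Depth 5

Invoke-RestMethod -Method Put -Uri "$ns/nsip" -Headers $auth -ContentType "application/json" -Body $body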

Storage Wars–HCI edition

image

There is a lot of fuss these days around hyperconverged infrastructure, software-defined storage and so on, especially since VMware announced VSAN 6.2 earlier this week, which triggered a lot of good old brawling on social media. VMware clearly stated that they are the market leader in the HCI market; whether that is true or not I don't know. So I decided to write this post to clear up some of the confusion about what HCI actually is, what the different vendors deliver in terms of features, and how their architectures differ. Hopefully someone out there is as confused as I was in the beginning.

I have been working with this for quite some time now, so in this post I have decided to focus on three different vendors, looking at their features and what their architectures look like.

  • VMware
  • Nutanix
  • Microsoft

PS: Things change and features get updated, so if something is wrong or missing, let me know!

The term hyper-converged actually comes from converged infrastructure, where vendors started to provide a pre-configured bundle of software and hardware in a single chassis. This was meant to minimize the compatibility issues we had with the traditional way of building infrastructure, and of course to make it easy to set up a new fabric. With hyperconverged we integrate these components even further, so that they cannot be broken down into separate components. By using software-defined storage we can deliver high-performance, highly available storage capacity to our infrastructure without the need for particular/special hardware. So instead of the traditional three-tier architecture, which was the common case in converged systems, we have servers that combine compute and storage, with software on top which aggregates the storage between multiple nodes to create a cluster.

So in conclusion of this part, you cannot get hyperconverged without using some sort of software-defined storage solution.

Now back to the vendors. We have Microsoft and VMware, which are still doing a tug of war with their releases, but their software-defined storage options have one thing in common: they live in the kernel. VMware was the first of the two to release a fully hyperconverged solution, and today they released version 6.2, which adds a lot of new features. Microsoft, on the other hand, is playing it safe; with Windows Server 2016 they are releasing a new version of Storage Spaces which now has a hyperconverged deployment option. Believe it or not, Microsoft has had a lot of success with the Storage Spaces feature, since it has been a pretty cheap setup, and they have combined it with some much-needed improvements to the SMB protocol. So let us first look at VSAN 6.2 and Windows Server 2016 Storage Spaces Direct, which both have "in-kernel" ways of delivering HCI.

VMware VSAN 6.2

Deployment types: Hybrid (using SSD and spinning disks) or All-flash
Protocol support: Uses its own proprietary protocol within the cluster
License required: Either a Hybrid or an All-Flash VSAN license
Supported workloads: Virtual machine storage
Hypervisor support: ESXi
Minimum nodes in a cluster: 2 (with a third node as witness, see https://blogs.vmware.com/virtualblocks/2015/09/11/vmware-virtual-san-robo-edition/)
Hardware supported: VSAN Ready Nodes, EVO:RAIL and Build Your Own based on the HCL –> http://www.vmware.com/resources/compatibility/search.php?deviceCategory=vsan
Disk requirements: At least one SSD and one HDD
Deduplication support: Yes, starting from 6.2, near-line (only within an all-flash configuration)
Compression support: Yes, starting from 6.2, near-line (only within an all-flash configuration)
Resiliency factor: Fault Tolerance Method (FTM) RAID-1 mirroring; RAID 5/6 added in the 6.2 release
Disk scrubbing: Yes, as of the 6.2 release
Storage QoS: Yes, as of the 6.2 release (based upon a 32 KB block size ratio); can be attached to virtual machines or datastores
Read cache: 0.4% of host memory is used for read cache on the host where the VMs are located
Data locality: Sort of; it does not do client-side local read cache
Network infrastructure needed: 1 Gb or 10 Gb Ethernet (10 Gb only for all-flash), multicast enabled
Maximum number of nodes: 64 nodes per cluster

An important thing to remember is that VMware VSAN stores data as objects. For instance, if we create a virtual machine on a Virtual SAN datastore, VSAN creates an object for each virtual disk, snapshot and so on. It also creates a container object that stores all the metadata files of the virtual machine. The availability factor can be configured per object. These objects are stored on one or multiple magnetic disks and hosts, and VSAN can access these objects remotely for both reads and writes. VSAN does not have a pure data locality model like some others do: a machine can be running on one host while its objects are stored on another, which gives consistent performance if we for instance migrate a virtual machine from one host to another. VSAN also has the ability to read from multiple mirror copies at the same time to distribute the I/O equally.

VSAN also has the concept of stripe width, since in many cases we may need to stripe an object across multiple disks. The largest component size in VSAN is 255 GB, so if we have a VMDK of 1 TB, VSAN needs to split that VMDK into four components. The maximum stripe width is 12. The SSD in VSAN acts as a read cache and a write buffer.

image

Windows Server 2016 Storage Spaces Direct*

*Still only in Tech Preview 4

Deployment types: Hybrid (using SSD and spinning disks) or All-flash
Protocol support: SMB 3
License required: Windows Server 2016 Datacenter
Supported workloads: Virtual machine storage, SQL databases, general purpose file server support
Hypervisor support: Hyper-V
Hardware supported: Storage Spaces HCL (not published yet for Windows Server 2016)
Deduplication support: Yes, but still only for a limited set of workloads (VDI etc.)
Compression support: No
Minimum nodes in a cluster: 2 (using some form of witness to maintain quorum)
Resiliency factor: Two-way mirror, three-way mirror and dual parity
Disk scrubbing: Yes, part of chkdsk
Storage QoS: Yes, can be attached to virtual machines or shares
Read cache: CSV read cache (which is part of the RAM on the host), depending on deployment type; in hybrid mode the SSD is the read & write cache and is therefore not used for persistent storage
Data locality: No
Network infrastructure needed: RDMA-enabled network adapters, including iWARP and RoCE
Maximum number of nodes: 12 nodes per cluster as of TP4
You can read more about what goes on under the hood of Storage Spaces Direct here –> http://blogs.technet.com/b/clausjor/archive/2015/11/19/storage-spaces-direct-under-the-hood-with-the-software-storage-bus.aspx
Hardware info: http://blogs.technet.com/b/clausjor/archive/2015/11/23/hardware-options-for-evaluating-storage-spaces-direct-in-technical-preview-4.aspx

An important thing to remember here is that we have an SMB file share created on top of a CSV volume. With Storage Spaces Direct, Microsoft leverages multiple features of the SMB 3 protocol, such as SMB Direct and SMB Multichannel. Another thing to keep in mind is that since there is no data locality here, Microsoft depends on RDMA-based networking to read and write data from other hosts in the network with low overhead, much lower than TCP-based networking. Unlike VMware, Microsoft uses extents to spread data across nodes; these are by default 1 GB each.

image

Now, in terms of differences between the two, the first is the way they manage reads and writes of their objects. VMware has a distributed read cache, while Microsoft requires RDMA but can read/write with very low overhead and latency from different hosts. Microsoft does not have per-virtual-machine policies that define how resilient a virtual machine is; instead this is set on the share (i.e. the virtual disk), which defines the redundancy level. There are still things that are not yet documented about the Storage Spaces Direct solution.

So let us take a closer look at Nutanix.

Nutanix

Deployment types: Hybrid (using SSD and spinning disks) or All-flash
Protocol support: SMB 3, NFS, iSCSI
Editions: http://www.nutanix.com/products/software-editions/
Supported workloads: Virtual machine storage, general purpose file services (Tech Preview)
Hypervisor support: ESXi, Hyper-V, Acropolis (custom CentOS KVM build)
Hardware supported: Nutanix uses Supermicro general purpose hardware for their own models, but they have OEM deals with Dell and Lenovo
Deduplication support: Yes, both inline and post-process (cluster based)
Compression support: Yes, both inline and post-process
Resiliency factor: RF2, RF3 and erasure coding
Storage QoS: No, equal share
Read cache: Unified Cache (consists of RAM and SSD from the CVM)
Data locality: Yes, reads and writes are aimed at the local host the compute resources are running on
Network infrastructure needed: 1 Gb or 10 Gb Ethernet (10 Gb only for all-flash)
Maximum number of nodes: ?? (not sure if there is any maximum here)

The objects in Nutanix are broken down into vDisks, which are composed of multiple extents.

Source: http://nutanixbible.com/

Unlike Microsoft, Nutanix operates with an extent size of 1 MB, and the I/O path is in most cases local to the physical host.

image

When a virtual machine does a write operation, it writes to a part of the SSD on the physical host called the OpLog. Depending on the resiliency factor, the OpLog then replicates the data to the OpLogs of other nodes to achieve the replication factor defined in the cluster. Reads are served from the Unified Cache, which consists of RAM and SSD from the Controller VM the machine runs on. If the data is not available in the cache, it can be fetched from the extent store or from another node in the cluster.

Source: http://nutanixbible.com/

All three vendors have different ways of achieving this. Of VMware and Microsoft, which both have their solution in-kernel, Microsoft focuses on RDMA-based technology to provide a low-latency, high-bandwidth backbone, which might work to their advantage when doing a lot of reads from other hosts in the network (when the traffic becomes unbalanced).

VMware and Nutanix, on the other hand, only require a regular 1/10 Gb Ethernet network. Nutanix uses data locality, and with new hardware becoming faster and faster that might work to their advantage, since the internal data buses in a host can deliver more and more throughput. The issue that might occur, which VMware pointed out in their VSAN whitepaper as the reason they did not design VSAN around data locality, is that doing a lot of vMotion would require a lot of data to be moved between hosts to re-establish data locality.

So what is the right call? I don't know, but boy, these are fun times to be in IT!

NB: Thanks to Christian Mohn for some clarity on VSAN! (vNinja.net)

Free eBook on Optimizing Citrix NetScaler and services

So, at last, it is here!

This is something I have been working on for some time now, and my intention is that this is just the beginning of something bigger… (hopefully).

For a couple of years now I have been writing for Packt Publishing and have authored some books on NetScaler, which has been fun and a good learning experience. The problem is that these projects take a lot of time, and these days releases are becoming more and more frequent, for NetScaler as well as the underlying infrastructure, which makes it cumbersome to keep content up to date.

This is the first step in an attempt to create a full (free) NetScaler eBook. For the moment I have decided to focus on optimizing NetScaler traffic features. Hopefully other people will tag along as well, since there are so many bright minds in this community!

So what's included in this initial release?
CPU Sizing
Memory Sizing
NIC Teaming and LACP
VLAN tagging
Jumbo Frames
NetScaler deployment in Azure
NetScaler Packet flow
TCP Profiles
VPX SSL limitations
SSL Profiles
Mobilestream
Compression
Caching
Front-end optimization
HTTP/2 and SPDY
Tuning for ICA traffic

I would also like to thank my reviewers, who did the job of reading through it and giving me good feedback (and of course correcting my grammar). A special thanks to Carl Stalhood (http://carlstalhood.com, https://twitter.com/cstalhood), a Citrix CTP who also contributed a lot of content to this eBook.

Also to my other reviewers as well!

Carl Behrent https://twitter.com/cb_24

Dave Brett https://twitter.com/dbretty  (http://bretty.me.uk)

How do I get it?
Sign up with your email in the contact form below, and I'll send you a PDF copy after the book is finished editing sometime during the weekend. I wanted to get this blog post out before the weekend to gauge the interest.

The reason I want an email address is that it makes it easier for me to send an update when a new major version is available. I also want some statistics on how many are actually using it, to decide whether I should continue with this project or not. The email addresses will not be used for anything else, so no newsletters or selling info to the mafia…

Feedback and how to contribute?
Please send any feedback/corrections/suggestions to my email address msandbu@gmail.com, and if you want to contribute to this eBook please mail me as well! I'm not an expert by any means, so any good ideas should be included so they can be shared with others.


Getting started with Web based server management tools in Azure

Yesterday, Microsoft released a public preview of some tools that Jeffrey Snover showed off at Microsoft Ignite last year, which is in essence Server Manager from within the Azure portal.

In its first release this tool is aimed at managing Windows Server 2016 servers; it can manage both Azure virtual machines and machines on-prem. Some of its capabilities:

  • View and change system configuration
  • View performance across various resources and manage processes and services
  • Manage devices attached to the server
  • View event logs
  • View the list of installed roles and features
  • Use a PowerShell console to manage and automate


Source: http://blogs.technet.com/b/nanoserver/archive/2016/02/09/server-management-tools-is-now-live.aspx

What we do is deploy a Server Management Tools gateway, through which we manage our virtual machines (remember that the gateway needs to have an internet connection).

NOTE: If you want to deploy the gateway feature on a 2012 server, you need to have WMF 5 installed, which you can fetch here –> WMF 5.0: https://www.microsoft.com/en-us/download/details.aspx?id=48729

So when we want to deploy –> Go into Azure –> New –> Server Management Tools –> Marketplace image

Then we need to define the machine we want to connect to (internal address, IPv4, IPv6 or FQDN). For the first run we also need to create a gateway. If we want to add multiple servers to manage, we run this wizard again but choose the existing gateway instead.
image

After we have created the instance, we need to download the gateway binaries and install them in our environment.

image

Then run the downloaded installer within the environment. It is also important that if we want to manage non-domain-joined machines, we need to configure a few settings such as trusted hosts, for example:

winrm set winrm/config/client @{TrustedHosts="10.0.0.5"}

REG ADD HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System /v LocalAccountTokenFilterPolicy /t REG_DWORD /d 1

NETSH advfirewall firewall add rule name="WinRM 5985" protocol=TCP dir=in localport=5985 action=allow (if you want to specify the firewall rule)
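The same prerequisites can also be set with plain PowerShell instead of the commands above; the IP address is just the example target used here.

# Allow WinRM connections to the non-domain machine
Set-Item WSMan:\localhost\Client\TrustedHosts -Value "10.0.0.5" -Force

# Disable remote UAC token filtering for local accounts
New-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" -Name LocalAccountTokenFilterPolicy -PropertyType DWord -Value 1 -Force

# Open the WinRM HTTP port
New-NetFirewallRule -DisplayName "WinRM 5985" -Direction Inbound -Protocol TCP -LocalPort 5985 -Action Allow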

After the firewall rules are in place, we need to specify credentials.

image

After that is done we can now manage the machine from within Azure.

image

A better explanation on Framehawk

After I spoke with Stephen Vilke (one of the brains behind Framehawk) the other day, he elaborated on what Framehawk actually is and what it isn't.

You know, I had been caught up thinking about Framehawk as a simple display protocol and ran a bunch of comparisons between it and ThinWire, and also against PCoIP and RDP… and my conclusion was simply:
it uses a lot more bandwidth, CPU and memory than the other display protocols.

But think about it: why do people implement ThinWire Plus, for instance? To SAVE bandwidth, because they have limited bandwidth capacity and want to get more users onto their platform. Why do people implement CloudBridge to optimize traffic? Simple: to SAVE bandwidth.

When thinking about Framehawk now, I have this simple scenario.

ThinWire is, simply put, the Toyota Prius: it's cheap, moves you from A to B and gives an OK driver experience. And since it is cheap, we can get many of them.

Framehawk, on the other hand, is a freaking truck with jet engines! It's not cheap, but it plows through everything at ridiculous speed (translated: it works even when we have a lot of packet loss), and it gives one hell of a driver experience. So the end goal is actually to increase productivity, since the apps feel faster and every click gets a response, every time!

So Framehawk is not about saving anything; on the contrary, it uses the bandwidth it can get in order to give the end-user a much better experience in these mobile days, where we face much more packet loss than before, when latency and bandwidth limits were the bigger problem. Another thing to consider is that a better user experience in spite of these network issues might allow our users to be even more productive, which in the end means more money for our business.

Another thing to remember is that other protocols focus on moving the 0s and 1s across the wire and adapting the content along the way. For instance, if we start scrolling in a document over a lossy connection, such a protocol will spend a lot of time trying to repair every packet on the wire as it goes, even though the end-user just scrolled down two pages and wants to read one particular page. Why spend bandwidth sending the entire scrolling sequence down the wire?

All the end-user wants to see is the page after scrolling down. They don't care whether every packet gets pushed down the wire; they just want to see the end result, and that is basically what Framehawk focuses on.

To quote a good post from Citrix blogs:

A “perfect bytestream” is a computer thing, not a human one. Humans wants to know “are we there yet, and did I like the view out the window?”, not “are we there yet, and did every packet arrive properly?” 🙂

Since the introduction of Framehawk, there are still a few features I would like to see so the product matures a bit:

  • Support for Netscaler Gateway HA
  • Recalibration during connections
  • Support for AppFlow

Other than that, the latest NetScaler build (11.64) introduced support for Unified Gateway, so there is more stuff for the NetScaler team to fix.

Hopefully this gives a good explanation of what Framehawk is and what it isn’t.

VMware Horizon 7 announced

Earlier today I saw on a couple of blog posts that VMware was going to announce Horizon 7 later today. When I read the posts, I was blown away by the features coming in this release.

So what’s included in the upcoming release?

  • Project Fargo (VMFork), which is in essence the ability to clone a running VM on the fly: just-in-time desktops. Master image updates are as simple as updating the parent virtual machine; a user automatically gets an updated desktop at next login. It is kind of like what we have with App Volumes and the delivery of AppStacks, but taken to a whole new level. You can read more about it here –> http://www.yellow-bricks.com/2014/10/07/project-fargo-aka-vmfork-what-is-it/ It is important to remember this is not like linked clones; the virtual machines are all running and are updated on the fly, so no Composer! Of course this is going to put more strain on the backend storage. Also important: this does not support NFS as of now.

image

  • New Horizon Clients version 4 (new clients for Windows, Mac, Linux, Android and iOS) with increased performance over WAN, etc.; also the required version if we want to use the new display protocol.
  • Updated Identity Manager (part of the stack; provides the authentication mechanism across the entire infrastructure using SAML).
  • Smart Policies (customization of desktops and user identity in a running session): application blocking, PCoIP policies and such.
  • URL Content Redirection (allows a URL opened within a remote session to be redirected to a local browser running on the endpoint).
  • AMD graphics support for vSGA
  • Intel vDGA support with Intel Xeon E3
  • Improved printing experience (reducing bandwidth and increasing the speed of printing)
  • Blast Extreme (new remote display protocol optimized for mobile users), which apparently has much lower bandwidth requirements than PCoIP. It is also optimized for NVIDIA GRID. In terms of WAN performance, PCoIP has not been anywhere near what Citrix can deliver with ThinWire or Framehawk, so I believe it is a good call for VMware to move ahead with their own display protocol that does more calibration on the fly.

It is going to be interesting to see how the new remote display protocol compares to PCoIP and the others on the market, and my guess is that Blast will be a lot more bandwidth friendly. It also looks like they are investing more in the different aspects of the protocol itself.

PCoIP & Blast Extreme: Feature Parity
Source: http://www.vladan.fr/vmware-horizon-7-details-announced

Some other new stuff in this release is support for Horizon Air Hybrid Mode, which in essence moves the control plane into the cloud (similar to what Citrix is doing with their Workspace Cloud).

We can also look at the earlier announcement of App Volumes 3.0, which fits perfectly into this mix in terms of flexible application delivery. Of course this is not without compromises on some features, but it looks like VMware is becoming a provider of a unified stack. I just hope that they can integrate some of the management components so it feels a bit more like one integrated stack.

But it seems like VMware has been quite busy with this release. Combined with NSX and micro-segmentation it also becomes another complete story in terms of delivering a secure desktop to any device. I just hope that the display protocol is as good as they say; I'll believe it when I see it.

Sources:

http://vthoughtsofit.blogspot.no/
http://www.vladan.fr/vmware-horizon-7-details-announced/

Application virtualization vs Application layering

So this is a blog post mostly about the session I had at NIC this year, where I talked about different technologies from the app-virtualization and app-layering landscape and discussed the pros and cons of using these types of products. These days a lot of businesses are virtualizing their applications. In some cases it makes sense, but there is also a new technology appearing in the landscape, application layering, so this post is about showing the difference. Since this is a pretty long subject I'm not going to cover everything in the same post, and no, it's not a VS battle…

So where are we today? We have our VM template, which is used to deploy virtual machines, using PXE or some built-in hypervisor deployment tool like vCenter or SCVMM.

image

We use that to deploy virtual machines, and if we need to update the VM template we have to start it and deploy patches to it. Simple. However, we need other tools like System Center or WSUS to keep the other machines up to date, because there is no link between the VM template and the machines we have already provisioned. Another thing is application installation, where for many years we have been using Group Policy/scripts/deployment tools/System Center to install applications on the virtual machines we deploy. Or we could pre-install these applications in the VM template (golden image) and save ourselves some trouble. Installing multiple applications on a machine also means we need ways to update those applications, typically using an MSI update or System Center to replace the existing software with a new version. Installing all these applications on a machine also means they write registry entries, files to the drive and maybe even extra components which the software depends on.

We have been doing this for years, so what are the issues?

  • Big golden image (by pre-installing many applications in the golden image we get longer deployment times, applications we don't need, and a slower VM template)
  • Patch management (how are we going to manage patching applications across 200 – 500 virtual machines?)
  • Application compatibility (some applications might require different, non-compatible versions of the Visual C++ runtime, for instance)
  • Application security (some applications do weird shit; what can we do about those?)
  • Application lifecycle management (how can we easily add and replace existing applications? We might also need different versions of the same application)
  • Software rot, registry bloating (you know there is a reason why there are registry cleaners, right?)

So what about Application virtualization?

image

Using application virtualization, applications are isolated within their own "bubble", which includes a virtual file system, a virtual registry and other required components such as DLL libraries. Since each application is isolated, they are not allowed to communicate with each other. In some cases we can define that an application may read/write to the underlying machine. We also have flexible delivery methods, either cached mode, where the package is stored on the local machine, or streaming. This gives us:

  • No bloated file system / registry
  • No application conflicts
  • Added application security
  • No applications installed on the underlying OS
  • Multiple runtime environments
  • Easier app customization
  • Easier update management

In the presentation I focused on two products: ThinApp and App-V 5.

App-V

App-V requires an agent installed on each host (with some limitations on supported operating systems), but it is flexible in terms of using caching or streaming (called Shared Content Store).

We can manage it either with the full App-V infrastructure or standalone using PowerShell cmdlets. We can also integrate it with System Center and even Citrix.
However, in many cases we need to keep older versions of Internet Explorer running, for instance when upgrading to another operating system, and App-V does not support this. We have also seen that App-V adds extra I/O traffic on the host compared to other app-virt solutions.
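As a minimal standalone example (the package path and name are placeholders), publishing a package globally and switching the client to Shared Content Store mode looks roughly like this:

Import-Module AppvClient

# Stream packages instead of caching them locally (Shared Content Store mode)
Set-AppvClientConfiguration -SharedContentStoreMode 1

# Add and publish a package to all users on the host
Add-AppvClientPackage -Path "\\server\appv\ExampleApp.appv" | Publish-AppvClientPackage -Global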

image

VMware ThinApp

VMware ThinApp is a bit different: it does not have any infrastructure. You basically have the capturing agent, which you use to create an app-virt package, and the resulting package is self-contained. When you run an application, the ThinApp runtime is actually running beneath the package we captured. A ThinApp package can be created as an MSI or EXE file, which allows for easy deployment using existing tools like System Center or other deployment software. Most of the logic of a ThinApp package is stored in the package.ini file.

However, if we want some form of management we need a Horizon setup, and there is no PowerShell support; to get that we would need to develop our own cmdlets using the SDK. Since there is no infrastructure, we don't have any usage tracking feature either. We do, however, have a handy update feature called AppSync, which is configured in each package's .ini file.

image

Both of these solutions use a sequencing VM to do the packaging/capturing process. ThinApp, however, supports a larger number of operating systems.

Now what about application layering? It is important to remember that app virtualization runs in user space, and therefore there are restrictions on what it can handle (antivirus, VPN, boot components, kernel components, drivers and so on).

Application layering is a bit different: it basically uses a filter driver to merge different virtual disks, which together make up the virtual machine.

image

So we have the Windows operating system, which might be its own layer (master image), and then different layers running on top: application layers, which are read-only (VHD disks for instance), and possibly a personalization layer, which is a read/write layer.

With application layering, applications behave just as they would when installed directly into the operating system, since it is pretty much a merge of different VHD disks.

Since the applications aren't isolated, the "capturing process" is much simpler, unlike app-virt where the sequencing part can take a long time. We can then add different applications to different layers, and for instance distribute read/write activity across layers stored in different places. This allows for simpler application lifecycle management: if we have multiple virtual machines using the same application layer, we can just update the main application layer and the virtual machines will get the new application.

In the application layering space there are four products/vendors I focused on:

  • Unidesk
  • VMware App Volumes
  • Citrix AppDisks
  • Liquidware Labs

NOTE: There are multiple vendors in this space as well.

Unidesk

Now Unidesk is the clear leader in this space, since they have support for multiple hypervisors and even Azure! They can also do OS-layering as part of the setup.

image

They can layer pretty much everything, since they are integrated into the hypervisor stack, so it is not entirely correct to call them an application layering vendor. They are a layering vendor, period.
NOTE: The only thing I found they cannot layer is cake. On the downside, they have a Silverlight-based console and they don't have instant app access like some of the others do. But there is a new version around the corner.

Citrix AppDisks

Then we have Citrix AppDisks, which is going to be released in the upcoming version 7.8. The good thing with AppDisks is that it is integrated into the Citrix Studio console. AppDisks is for applications only; Citrix has other solutions for the OS layer (MCS or PVS), both of which AppDisks will support. They also have PvD for a writeable personal layer, and Profile Management as well, which makes Citrix a good all-round solution.

image

As of now AppDisks supports XenServer and VMware, and, surprise, you need an existing Citrix infrastructure. So no RDS/View. AppDisks also has no instant app delivery, and it is only for virtual machines.

VMware App Volumes

VMware is also on the list with App Volumes, which is agent based and runs on top of the operating system. The good thing about this is that it has instant application delivery, and it can also work on physical devices since it is agent based. However, you should be aware of the requirements for using App Volumes on a physical device (it should be non-persistent and have constant network access; I smell Mirage).

image

It has a simple HTML5-based management console. It only does hypervisor integration with ESX and vCenter, but it can be used in RDS/Citrix environments: just install the agents, do some management, and you are good to go.

Now we have seen some of the different technologies out there; to summarize, I would state the following.

Liquidware Labs – ProfileUnity FlexApp

This is a new addition to the list. Liquidware Labs ProfileUnity is primarily a UEM solution and therefore not an original part of this comparison, but in addition to UEM it also provides application layering capabilities. ProfileUnity is agent based and therefore does not depend on reboots to attach a layer to a session. It can deliver two different types of layers: DIA (Directly Installed Applications) and UIA (User Installed Applications). DIA layers can be captured from a packaging console.

image

ProfileUnity has a simple and clean HTML5 management console. It can do hypervisor integration with VMware to deliver DIA layers as FlexDisks, and it also has a micro-isolation feature which handles conflicts between different layers.

Application virtualization should be used for the following:

  • Application isolation
  • Multiple runtime versions
  • Streaming to non-persistent machines
  • Application compatibility

Application layering should be used for the following:

  • Application lifecycle management
  • Image management
  • Profile management (if supported by vendors)
  • Applications which require drivers / boot stuff

UPDATE: 15/03/2016: Added Liquidware Labs FlexApp

And lastly, what is coming in the future? Most likely container-based applications, which have their own networking stack, stricter security requirements and run isolated from each other. Here we have providers such as Turbo.net, which were delivering container-based applications before Microsoft announced container support for Windows.

Office365 on Terminal server done right

So this is a blog post based upon a session I had at the NIC conference, where I spoke about how to optimize the delivery of Office365 in a VDI/RSDH environment.

There is a lot of stuff we need to think/worry about. It might seem a bit negative, but that is not the idea; I'm just being realistic.

So this blog post will cover the following subjects:

  • Federation and sync
  • Installing and managing updates
  • Optimizing Office ProPlus for VDI/RDS
  • Office ProPlus optimal delivery
  • Shared Computer Support
  • Skype for Business
  • Outlook
  • OneDrive
  • Troubleshooting and general tips for tuning
  • Remote display protocols and when to use which

So what is the main issue with using Terminal Servers and Office365? The Distance….

This is the headline of a blog post on the Citrix blogs about XenApp best practices:

image_thumb5

So how do we fix this when we have our clients on one side, the infrastructure somewhere else and Office365 in a different region, separated by long distances, and still deliver the best experience for the end-user? In some cases we need to compromise to be able to deliver the best user experience, because that should be our end goal: deliver the best user experience.

image_thumb1

User Access

First of all: do we need federation, or is plain password sync enough? Password sync is easy and simple to set up and does not require any extra infrastructure. We can configure password hash sync, which lets Azure AD handle the authentication process. The problem with doing this is that we lose a lot of the things we might use in an on-premises solution:

  • Audit policies
  • Existing MFA (If we use Azure AD as authentication point we need to use Azure MFA)
  • Delegated Access via Intune
  • Lockdown and password changes (since changes need to be synced to Azure AD before they take effect)

NOTE: Since I am more than averagely interested in NetScaler, I wanted to include another note here. For those that don't know, NetScaler with AAA can in essence replace ADFS, since NetScaler now supports acting as a SAML IdP. Some important limitations: NetScaler does not support the Single Logout profile or the Identity Provider Discovery profile from the SAML profiles. We can also use NetScaler Unified Gateway with SSO to Office365 using SAML. The setup guide can be found here:

https://msandbu.wordpress.com/2015/04/01/netscaler-and-office365-saml-idp-setup/

NOTE: We can also use VMware Identity Manager as a replacement to deliver SSO.

Using ADFS gives a lot of advantages that password hash sync does not:

  • True SSO (while password hash sync gives Same Sign-On)
  • If we have audit policies in place
  • Disabled users are locked out immediately, instead of waiting up to 3 hours for the Azure AD Connect sync engine to replicate the change (and 5 minutes for password changes)
  • If we have on-premises two-factor authentication, we can most likely integrate it with ADFS, but not if we only have password hash sync
  • Other security policies, like time-of-day restrictions and so on
  • Some licensing stuff requires federation

So to sum it up: please use federation.

Initial Office configuration setup

Secondly, the Office suite from Office365 uses something called Click-to-Run, which is kind of an App-V-wrapped Office package from Microsoft that allows for easy updates directly from Microsoft instead of dabbling with the MSI installer.

In order to customize this installer we need to use the Office Deployment Tool, which basically allows us to customize the deployment using an XML file.

The deployment tool has three switches that we can use.

setup.exe /download configuration.xml

setup.exe /configure configuration.xml

setup.exe /packager configuration.xml

NOTE: Using /packager creates an App-V package of Office365 Click-to-Run and requires a clean VM, just as when doing regular App-V sequencing. The package can then be distributed using an existing App-V infrastructure or other tools. But remember to enable scripting on the App-V client, and do not alter the package with the sequencer; that is not supported.
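Enabling scripting on the App-V client is a one-liner with the client PowerShell module:

Import-Module AppvClient
Set-AppvClientConfiguration -EnablePackageScripts 1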

The download switch downloads Office based upon the configuration file; here we can specify bitness, version number, which Office applications to include, update path and so on. The configuration XML file looks like this:

<Configuration>
  <Add OfficeClientEdition="64" Branch="Current">
    <Product ID="O365ProPlusRetail">
      <Language ID="en-us"/>
    </Product>
  </Add>
  <Updates Enabled="TRUE" Branch="Business" UpdatePath="\\server1\office365" TargetVersion="16.0.6366.2036"/>
  <Display Level="None" AcceptEULA="TRUE"/>
</Configuration>

Now if you are like me and don’t remember all the different XML parameters you can use this site to customize your own XML file –> http://officedev.github.io/Office-IT-Pro-Deployment-Scripts/XmlEditor.html

When you are done configuring the XML file you can choose the export button to have the XML file downloaded.

If we have specified a specific Office version in configuration.xml, it will be downloaded to a separate folder and stored locally when we run setup.exe /download configuration.xml.

NOTE: The different build numbers are available here –> http://support2.microsoft.com/gp/office-2013-365-update?

When we are done downloading the Click-to-Run installer, we can change the configuration file to reflect the path of the Office download:

<Configuration> <Add SourcePath="\\share\office" OfficeClientEdition="32" Branch="Business">

This path is used when we run the setup.exe /configure configuration.xml part.

Deployment of Office

The main deployment is done by running setup.exe /configure configuration.xml on the RSDH host and waiting for the installation to complete.

Shared Computer Support

<Display Level="None" AcceptEULA="True" /> 
<Property Name="SharedComputerLicensing" Value="1" />

In the configuration file we need to remember to enable the SharedComputerLicensing setting (as shown above), or else we get this error message:

image_thumb11

If you forgot, you can also enable it using this registry key (just save it as a .reg file):

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Office\15.0\ClickToRun\Configuration]
"InstallationPath"="C:\\Program Files\\Microsoft Office 15"
"SharedComputerLicensing"="1"

Now we are actually done with the golden image setup. Don't start any of the applications yet if you want to use this as an image. Also make sure that there are no licenses installed on the host, which can be checked using this tool:

cd 'C:\Program Files (x86)\Microsoft Office\Office15'
cscript.exe .\OSPP.VBS /dstatus

image_thumb31

This should be blank!
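If /dstatus does list a key, it can be removed before sealing the image; XXXXX below stands for the last five characters of the product key shown in the /dstatus output.

cscript.exe .\OSPP.VBS /unpkey:XXXXX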

Another issue is that when a user starts an Office app for the first time, he/she needs to authenticate once; a token is then stored locally in the %localappdata%\Microsoft\Office\15.0\Licensing folder and will expire within a couple of days if the user is not active on that terminal server. Think about it: in a large farm with many servers that might well be the case, and if a user is redirected to another server he/she will need to authenticate again. If the user keeps hitting the same server, the token refreshes automatically.
NOTE: This requires internet access to work.

It is also important to remember that the shared computer support token is bound to the machine, so we cannot roam that token between computers using any profile management tool.

A nice thing is that if we have ADFS set up, Office can activate automatically against Office365; this is enabled by default. So no pesky logon screens.

We just need to add the ADFS site to the trusted sites in Internet Explorer and define this setting as well:

Automatic logon only in Intranet Zone

image

This basically allows us to resolve the token issue with shared computer support.

Optimizing Skype for Business

So in regards to Skype for Business, what options do we have to deliver a good user experience? There are a few options I want to explore:

  • VDI plugin
  • Native RDP with UDP
  • Native PCoIP
  • Native ICA (with or without audio over UDP)
  • Local app access
  • HDX Optimization Pack 2.0

The issue with the first one (which is a Microsoft plugin) is that it does not support Office365; it requires on-premises Lync/Skype. Another issue is that you cannot use the VDI plugin and the Optimization Pack at the same time, so if users are using the VDI plugin and you want to switch to the Optimization Pack, you need to remove the VDI plugin first.

ICA uses the TCP protocol and works with most endpoints, since everything basically runs directly on the server/VDI, but the issue is that we get no server offloading. So if we have 100 users running a video conference, we might have an issue. If the other options are not available, try to set up HDX RealTime using audio over UDP for better audio performance. Both RDP and PCoIP use UDP for audio/video and therefore do not require any other specific customization.

The problem with all of these is that they create a tromboning effect, consume more bandwidth and eat up resources on the session host.

image_thumb7

Local App Access from Citrix might be a viable option, which in essence means that a locally installed application is drawn into the Receiver session, but this requires that the end user has Lync/Skype installed locally. It also requires Platinum licenses, so not everyone has that, and it only supports Windows endpoints…

The last and most important piece is the HDX Optimization Pack, which offloads the server by running the HDX media engine on the end-user device.

The Optimization Pack supports Office365 with federated and cloud-only users. It also supports the latest clients (Skype for Business) and works in conjunction with NetScaler Gateway and the Lync Edge server for on-premises deployments. This means we can get Mac/Linux/Windows users using server offloading, and with the latest release it also supports Office Click-to-Run and works with the native Skype UI.

So using this feature we can offload CPU/memory and eventually GPU usage from the RDSH/VDI instances back to the client, and audio/video traffic goes directly between the endpoints instead of through the remote session.

image_thumb51

Here is a simple test showing the difference between running Skype for Business on a terminal server with and without HDX Optimization Pack 2.0:

image

Here is a complete blogpost on setting up HDX Optimization Pack 2.0 https://msandbu.wordpress.com/2016/01/02/citrix-hdx-optimization-pack-2-0/

Next up in this part we also have Outlook, which for many is quite the headache… and that is mostly because of the OST files that are dropped in the %localappdata% folder for each user. Office ProPlus has a setting called fast access, which means that Outlook will in most cases try to contact Office365 directly, but if the latency becomes too high, the connection will drop and Outlook will search through the OST files instead.

Optimizing Outlook

Now this is the big elephant in the room and causes the most headaches. Outlook against Office365 can be set up in two modes: Cached mode or Online mode. Online mode uses direct access to Office365, but users lose features like instant search. In order to deliver a good user experience we need to compromise; the general guideline here is to configure Cached mode with a 3-month sync window and store the OST file (which contains the emails, calendar, etc. and is typically 60-80% of the size of the mailbox) on a network share. These OST files are by default created in the local appdata profile, and streaming profile management solutions typically aren't a good fit for OST files.

It is important to note that Microsoft supports having OST files on a network share, IF there is adequate bandwidth and low latency… and only if there is one OST file per user and the users have Outlook 2010 SP1 or later.

NOTE: We can use alternatives such as FSLogix or Unidesk to handle the profile management in a better way.

I'll come back to the configuration part later in the policy section. It is also important to use Outlook 2013 SP1 or later, which gives us MAPI over HTTP instead of RPC over HTTP and does not consume as much bandwidth.

OneDrive

In regards to OneDrive, try to exclude it from RDSH/VDI instances, since the sync engine basically doesn't work very well there, and now that each user has 1 TB of storage space it will flood the storage quicker than anything else if users are allowed to use it. Also, there are no central management capabilities and network shares are not supported.

There are some changes in the upcoming unified client in terms of deployment and management, but it is still not a good solution.

You can remove it from the Office365 deployment by adding this to the configuration file:

<ExcludeApp ID="Groove" />
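For context, here is a minimal sketch of what a click-to-run configuration file could look like with OneDrive (Groove) excluded and Shared Computer Support enabled; the source path, edition and language are just examples.

<Configuration>
  <Add SourcePath="\\fileserver\office365" OfficeClientEdition="32">
    <Product ID="O365ProPlusRetail">
      <Language ID="en-us" />
      <ExcludeApp ID="Groove" />
    </Product>
  </Add>
  <Display Level="None" AcceptEULA="TRUE" />
  <Property Name="SharedComputerLicensing" Value="1" />
</Configuration>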

Optimization and group policy tuning

Something that should be noted is that before installing Office365 click-to-run, you should optimize the RDSH session hosts or the VDI instances. A blog post published by Citrix noted a 20% performance improvement after some simple RDSH optimization was done.

Both VMware and Citrix have free tools for RDSH/VDI optimization which should be looked at before doing anything else.

The rest is mostly Group Policy tuning. First we need to download the Office ADMX templates from Microsoft (either 2013 or 2016) and then add them to the central store.
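As a rough sketch, copying the templates to the central store looks something like this (the domain name and download folder are just examples):

REM Copy the Office ADMX templates and the English language files to the Group Policy central store
copy /y C:\Temp\admx\*.admx \\contoso.com\SYSVOL\contoso.com\Policies\PolicyDefinitions\
copy /y C:\Temp\admx\en-us\*.adml \\contoso.com\SYSVOL\contoso.com\Policies\PolicyDefinitions\en-us\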

We can then use Group Policy to manage the specific applications and how they behave. Another thing to think about is using the Target Version group policy to control which specific build we want to be on, so we don't get a new build each time Microsoft rolls out a new version, because from experience I can tell that some new builds introduce new bugs –> https://msandbu.wordpress.com/2015/03/09/trouble-with-office365-shared-computer-support-on-february-and-december-builds/

image

Now the most important policies are stored in the computer configuration.

Computer Configuration –> Policies –> Administrative Templates –> Microsoft Office 2013 –> Updates

Here there are a few settings we should change to manage updates.

  • Enable Automatic Updates
  • Enable Automatic Upgrades
  • Hide Option to enable or disable updates
  • Update Path
  • Update Deadline
  • Target Version

These control how we do updates. We can enable automatic updates without an update path and a target version, which will essentially make Office auto-update to the latest version from Microsoft. Or we can specify an update path (a network share where we have downloaded a specific version), specify a target version, enable automatic updates and define a deadline for a specific OU, for instance. This triggers the update using a scheduled task which is added with Office, and when the deadline approaches, Office has built-in triggers to notify end users of the deployment. Using these policies we can have multiple deployments for specific users/computers, some on the latest version and some on a specific version; see the sketch below for how such an update share can be populated.
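As a hedged example, the update share can be populated with the Office Deployment Tool by running setup.exe /download against a configuration file along these lines; the share path, edition and version number are placeholders I made up.

<Configuration>
  <!-- Downloads a specific Office365 ProPlus build to the share used as the Update Path -->
  <Add SourcePath="\\fileserver\office365\builds" OfficeClientEdition="32" Version="15.0.4753.1003">
    <Product ID="O365ProPlusRetail">
      <Language ID="en-us" />
    </Product>
  </Add>
</Configuration>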

The next part is for Remote Desktop Services only, to make sure that we have an optimized setup if we are using pure RDS. NOTE: Do not touch these if everything is working as intended.

Computer Policies –> Administrative Templates –> Windows Components –> Remote Desktop Services –> Remote Desktop Session Host –> Remote Session Environment

  • Limit maximum color depth (set to 16 bits, less data across the wire)
  • Configure compression for RemoteFX data (set to bandwidth optimized)
  • Configure RemoteFX Adaptive Graphics (set to bandwidth optimized)

Next there are more Office specific policies to make sure that we disable all the stuff we don’t need.

User Configuration –> Administrative Templates –> Microsoft Office 2013 –> Miscellaneous

  • Do not use hardware graphics acceleration (see the registry sketch after this list)
  • Disable Office animations
  • Disable Office backgrounds
  • Disable the Office start screen
  • Suppress the recommended settings dialog
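For reference, the first two of these map to registry values under the Office policy key, which can be handy when scripting a golden image. This is a sketch assuming Office 2013 (15.0) and that my mapping of the policies is correct:

REM Disable hardware graphics acceleration and Office animations (Office 2013 policy keys)
reg add "HKCU\Software\Policies\Microsoft\Office\15.0\Common\Graphics" /v DisableHardwareAcceleration /t REG_DWORD /d 1 /f
reg add "HKCU\Software\Policies\Microsoft\Office\15.0\Common\Graphics" /v DisableAnimations /t REG_DWORD /d 1 /f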

User Configuration –> Administrative Templates –> Microsoft Office 2013 –> Global Options –> Customize

  • Menu animations (disabled!)

The next settings are under

User Configuration –> Administrative Templates –> Microsoft Office 2013 –> First Run

  • Disable First Run Movie
  • Disable Office First Run Movie on application boot

User Configuration –> Administrative Templates –> Microsoft Office 2013 –> Subscription Activation

  • Automatically activate Office with federated organization credentials

Last but not least, define Cached mode for Outlook

User Configuration –> Administrative Templates –> Microsoft Outlook 2013 –> Account Settings –> Exchange –> Cached Exchange Mode

  • Cached Exchange Mode (File | Cached Exchange Mode)
  • Cached Exchange Mode Sync Settings (3 months)

Then specify the location of the OST files, which of course should be somewhere other than the local profile:

User Configuration –> Administrative Templates –> Microsoft Outlook 2013 –> Miscellaneous –> PST Settings

  • Default location for OST files (change this to a network share; see the registry sketch below)
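If you prefer to script it, these Outlook settings map to registry values roughly like this; a sketch assuming the standard 15.0 policy keys, and the share path is just an example.

REM Enable Cached Exchange Mode with a 3 month sync window (Outlook 2013 policy keys)
reg add "HKCU\Software\Policies\Microsoft\Office\15.0\Outlook\Cached Mode" /v Enable /t REG_DWORD /d 1 /f
reg add "HKCU\Software\Policies\Microsoft\Office\15.0\Outlook\Cached Mode" /v SyncWindowSetting /t REG_DWORD /d 3 /f
REM Redirect the OST file away from the local profile (run from a .cmd file so %%username%% is stored literally)
reg add "HKCU\Software\Policies\Microsoft\Office\15.0\Outlook" /v ForceOSTPath /t REG_EXPAND_SZ /d "\\fileserver\ost$\%%username%%" /f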

Network and bandwidth tips

Something you need to be aware of is the bandwidth usage of Office in a terminal server environment.

These numbers assume an average latency to Office365 of 50–70 ms:

• 2000 "heavy" users using Online mode in Outlook: about 20 Mbps at peak
• 2000 "heavy" users using Cached mode in Outlook: about 10 Mbps at peak
• 2000 "heavy" users using audio calls in Lync: about 110 Mbps at peak
• 2000 "heavy" users working in Office over RDP: about 180 Mbps at peak

Which means that using, for instance, the HDX Optimization Pack for 2000 users might "remove" 110 Mbps of bandwidth usage.

Microsoft also has an application called the Office365 client analyzer, which can give us a baseline of how our network performs against Office365, measuring things like DNS resolution and latency. DNS is quite important with Office365, because Microsoft uses proximity-based load balancing, and if your DNS server is located somewhere other than your clients, you might be sent in the wrong direction. The client analyzer can give you that information.

image_thumb3
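For a quick and dirty check you can also test DNS and latency yourself from a session host; outlook.office365.com is just one of the relevant endpoints.

REM Check which datacenter DNS points you at, and the round-trip time to it
nslookup outlook.office365.com
ping outlook.office365.com -n 10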

(We could, however, buy ExpressRoute from Microsoft, which would give us low-latency connections directly into their datacenters, but this is only suitable for LARGER enterprises, since it costs a HIGH amount of $$.)

image

But this is for the larger enterprises, and it also helps them get around the limitation that a single external NAT address only supports about 4,000 concurrent connections (given that Outlook consumes about 4 concurrent connections and Lync a few as well).

Microsoft recommends that in an Online mode scenario, clients should not have more than 110 ms latency to Office365, and in my case I have about 60–70 ms. If we combine that with some packet loss or an adjusted MTU, well, you get the picture.

So using Outlook Online mode we should have a maximum latency of 110 ms; above that the user experience declines. Another thing is that Online mode disables instant search. We can use the Exchange bandwidth calculator (an Excel sheet) from Microsoft to calculate the bandwidth requirements.

Some rules of thumb: do the calculations! Use the bandwidth calculators for Lync/Exchange, which might point you in the right direction. We can also use WAN accelerators (with caching), which might lighten the burden on the bandwidth usage. You also need to think about the bandwidth usage if you have automatic updates enabled in your environment.

Troubleshooting tips

As the last part of this LOOONG post I have some general tips on using Office in a virtual environment. This is just going to be a long list of different tips:

  • For Hyper-V deployments, check VMQ and that you have the latest NIC drivers
  • 32-bit Office C2R typically works better than 64-bit
  • Antivirus? Make exceptions!
  • Remove the Office products you don't need from the configuration, since they add extra traffic when doing downloads and more stuff installed on the virtual machines
  • If you don't use Lync or the audio service, disable the Windows Audio service!
  • If using RDSH, check the group policy settings I recommended above
  • If using Citrix or VMware, make sure to tune the policies for an optimal experience and use the RDSH/VDI optimization tools from the different vendors
  • If Outlook is sluggish, check that you have adequate storage I/O to the network share (no, high bandwidth is not enough if the OST files are stored on a simple RAID with 10k disks)
  • If all else fails with Outlook, disable MAPI over HTTP; in some cases where receiving new mail takes a long time, disabling it helps (it used to be a known error, see the snippet after this list)
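Disabling MAPI over HTTP is done per user with a registry value, roughly like this (test it before rolling it out broadly):

REM Force Outlook back to RPC over HTTP by disabling MAPI over HTTP for this user
reg add "HKCU\Software\Microsoft\Exchange" /v MapiHttpDisabled /t REG_DWORD /d 1 /f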

Remote display protocols

Last but not least, I want to mention this briefly: if you are setting up a new solution and are thinking about choosing one vendor over the other, the first things to consider are:

  • Endpoint requirements (Thin clients, Windows, Mac, Linux)
  • Requirements in terms of GPU, mobile workers, etc.

Now, we have done some tests, which show that Citrix has the best features across its different sub-protocols:

  • ThinWire (best across high-latency lines; using TCP it works at over 1800 ms latency)
  • Framehawk (works well on lines with 20% packet loss)

PCoIP performs a bit better than RDP; I have another blog post on the subject here –> https://msandbu.wordpress.com/2015/11/06/putting-thinwire-and-framehawk-to-the-test/