Is low latency enough? Optimizing TCP for optimal ICA traffic

So I decided to write this blog post to actually show the effects of TCP optimization on Citrix, since I have been stating for a while that you should always change the TCP profile for NetScaler Gateway because the default one is bad.

So therefore I did a bit of research to see what kind of effect a simple TCP profile change has on a NetScaler Gateway vServer. The setup was simple: two NetScaler Gateway virtual servers, one with the default TCP profile and the other with the nstcp_xa_xd profile. I also used NetScaler Insight to follow the latency between the clients and the server, with the same NetScaler version and the same Citrix Receiver version used to connect to both sites.
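For reference, the profile assignment part of such a setup can be done from the NetScaler CLI roughly like this (a minimal sketch: the virtual server names gw_default and gw_optimized are hypothetical, and the built-in XA/XD profile is named nstcp_default_XA_XD_profile on recent builds):

    # leave one Gateway vServer on the default profile
    > set vpn vserver gw_default -tcpProfileName nstcp_default_profile
    # and point the other at the XA/XD-tuned profile
    > set vpn vserver gw_optimized -tcpProfileName nstcp_default_XA_XD_profile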

But before we begin, we also have the latency issue to consider.

Latency is the time it takes for traffic to travel end-to-end, which in this case means between the endpoint and the VDA agent.

It should be noted that my connection has about 8–10 ms of WAN latency, so is it low enough to ensure a good user experience? The problem is not always the latency itself, but a bunch of different issues (jitter, Wi-Fi overhead, packet loss and retransmissions, congestion, etc.).

So to start out with, I did a file transfer test, which allows me to see how stable the connection is and how it fluctuates during the transfer.

First test: file transfer between the endpoint and an RDSH host, a small file (300 MB), using a Citrix Receiver session.

File transfer from the default TCP profile
[Image: file transfer throughput on the default TCP profile]

As you can see it spikes up and down during the entire file transfer; this is because the profile is lacking certain TCP properties like window scaling, SACK and Nagle.

File transfer from the Optimized TCP profile virtual server
[Image: file transfer throughput on the optimized TCP profile]

As you can see it has a much better transfer rate, even though it tops out at about the same bandwidth; that is because of the limits of the broadband connection in my lab environment.

Information from Insight (using AppFlow, which can only do one-minute granularity): as you can see, the optimized server gains higher bandwidth a lot faster because of the TCP window scaling configured. And since I transferred such a small file, it was quick to drop back down again.

[Image: Insight bandwidth comparison of the two virtual servers]

So what if we add some packet loss to the mix? What if we have an additional 5% packet loss and try the same scenario? The optimized TCP profile has, for instance, SACK, D-SACK and FACK enabled, which make it easier to resume a TCP stream in the event of packet loss, since the sender does not need to retransmit every packet sent after the one that was lost.
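These options all map to parameters on a NetScaler tcpProfile. A sketch of how a profile along those lines can be built from the CLI (the profile name tcp_ica_opt is one I made up for illustration):

    # window scaling, selective ACK and Nagle on a custom profile
    > add ns tcpProfile tcp_ica_opt -WS ENABLED -WSVal 8 -SACK ENABLED -nagle ENABLED -maxBurst 30
    # D-SACK and forward ACK help recover faster when packets are dropped
    > set ns tcpProfile tcp_ica_opt -dsack ENABLED -fack ENABLED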

Optimized TCP Profile (TCP Window Scaling, Nagle, SACK, DSACK etc)
[Image: file transfer on the optimized TCP profile with 5% packet loss]

Still going pretty stable during the file transfer.

Default TCP Profile

Well, this isn't going very well. Because of the packet loss it takes a lot more time for the default TCP profile to resume the TCP stream, and since the profile lacks selective acknowledgements, the sending end has to retransmit much more data than was actually dropped.

[Image: file transfer on the default TCP profile with 5% packet loss]

From Insight we can see spikes using the default TCP profile, while the optimized TCP profile manages to keep a steady flow. Note also that Insight doesn't look at packet loss; it only looks at the bandwidth reported from the VDA agent.

[Image: Insight bandwidth graphs with 5% packet loss]

Westwood+ defined in the TCP profile (TCP Westwood+ is a sender-side-only modification of the TCP Reno protocol stack that optimizes the performance of TCP congestion control over both wireline and wireless networks. Note that the default NSTCP profile and nstcp_xa_xd both use the New Reno congestion algorithm.)
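Changing the congestion algorithm is a single parameter on the tcpProfile (shown here against my hypothetical tcp_ica_opt profile from earlier):

    # switch congestion control from the default New Reno to Westwood
    > set ns tcpProfile tcp_ica_opt -flavor Westwood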

[Images: file transfer results with Westwood+ under 5% packet loss]

Westwood+ defined in the TCP profile + TCP Hystart

TCP Hystart is a new TCP profile parameter in NetScaler 11.1. It is a slow-start algorithm that dynamically determines a safe point at which to terminate slow start (ssthresh), which enables a transition to congestion avoidance without heavy packet losses. If congestion is detected, Hystart enters a congestion avoidance phase. Enabling it gives you better throughput in high-speed networks with high packet loss, and helps maintain close to maximum bandwidth while processing transactions.
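On 11.1 it can be enabled on the same profile (a sketch, same assumptions as above):

    # Hystart: exit slow start before heavy loss occurs (new in 11.1)
    > set ns tcpProfile tcp_ica_opt -hystart ENABLED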

TCP Profile:
[Image: TCP profile settings with Westwood and Hystart enabled]

Notice that using Westwood+ and Hystart on top of the optimized TCP profile on the NetScaler Gateway virtual server, I get higher throughput than I initially did on the virtual server, and a more steady stream of TCP segments. (Note that this is still with 5% packet loss.)

Westwood+ defined in the TCP profile + TCP Hystart

[Image: file transfer with Westwood+ and Hystart]

Only the optimized TCP profile
[Image: file transfer with the optimized TCP profile only]
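To sum up the pieces used in the final test, the whole tuned profile could be expressed roughly like this (a sketch with my hypothetical names, not an exact dump of my lab config):

    # window scaling, SACK/D-SACK/FACK, Nagle, plus Westwood and Hystart
    > add ns tcpProfile tcp_ica_opt -WS ENABLED -WSVal 8 -SACK ENABLED -nagle ENABLED -maxBurst 30
    > set ns tcpProfile tcp_ica_opt -dsack ENABLED -fack ENABLED -flavor Westwood -hystart ENABLED
    # attach it to the Gateway virtual server
    > set vpn vserver gw_optimized -tcpProfileName tcp_ica_opt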

So this has been a little introduction to how to optimize a virtual server for ICA traffic. In the next articles I will take a closer look at latency and how it affects the TCP traffic flow, how we can configure the VDA optimal TCP receive window, and how to configure Windows Server 2016/2012 to ensure better performance.

2 thoughts on “Is low latency enough? Optimizing TCP for optimal ICA traffic”

  1. Very nice, Marius. Understanding how all these components work either together or against each other is critical to optimizing the overall environment.

  2. Nick Rintalan

    Great stuff – I’m always telling folks to enable these TCP profiles to improve ICA traffic flow but I still feel like so many folks are unaware that these profiles even exist!

    -Nick
