Monthly Archives: February 2016

Enabling use of SPDY with AVINetworks

With more and more of the web moving away from the traditional HTTP/1.1 and towards HTTP/2 and/or SPDY, more and more web servers now support HTTP/2 or SPDY by default. For instance, LinkedIn now uses SPDY, and most web browsers support this feature. SPDY offers the following benefits over HTTP/1.1:

  • Multiplexed requests: There is no limit to the number of requests that can be issued concurrently over a single SPDY connection.
  • Prioritized requests: Clients can request certain resources to be delivered first. This avoids the problem of congesting the network channel with non-critical resources when a high-priority request is pending.
  • Compressed headers: Clients today send a significant amount of redundant data in the form of HTTP headers. Because a single web page may require 50 or 100 subrequests, this data is significant.
  • Server pushed streams: Server Push enables content to be pushed from servers to clients without a request.

So remember that SPDY only replaces the way the data is written to the network; the HTTP semantics stay the same. So how do we configure this on AVI?

Go into the virtual service, click Edit on the Application Profile

image

Then go into Acceleration and into Front-end optimization, then click “Enable SPDY 3.1”

image

After that we can save the configuration. Now the simplest way to check if SPDY 3.1 is enabled is by using the SPDY/HTTP2 detector add-on for either Firefox or Google Chrome, where a lightning icon will appear in the URL bar.

And as we can see from the addon, this site is now SPDY 3.1 enabled.

image

This will improve the performance between the client and the service engine, but the service engine will still communicate with the backend server using regular HTTP 1.1

Now in the analytics we can easily see the improvements when we changed from HTTP 1.1 to SPDY

HTTP/1.1

image

SPDY 3.1

image

Azure Stack networking overview and use of BGP and VXLAN

After dabbling with Azure Stack for some time since the preview came out, there has been one thing bugging me, which is the networking flow. Hence I decided to create an overview of the network topology, how things are connected and how the traffic flows.

It is important to remember that Azure Stack uses many of the networking features in Windows Server 2016, including SLB, BGP, VXLAN and so on.
Most of the management machines in the Azure Stack POC are placed on the vEthernet 1001 connection on the Hyper-V host and are connected to the vSwitch CCI_External.
The management machines are located on the 192.168.100.0/24 scope.
Now with this updated chart, we can see that each tenant has its own /32 BGP route which is attached to the MuxVM, which acts as a load balancer.

image

When traffic leaves the client IP it is encapsulated using VXLAN (which runs over UDP) and sent to the MuxVM using its provider address, in my case 192.168.233.21, which is part of the PAHostvNIC. From the MuxVM the traffic is forwarded to the BGPVM, then out through the NATVM and on to the outside world.

image

On the other hand we have the NATVM and the CLIENTVM, which are placed on the 192.168.200.0/24 scope. The 192.168.200.0/24 network can communicate via the BGPVM, which has a two-armed configuration and acts as the gateway between the 192.168.100.0/24 network and the 192.168.200.0/24 network. Now the funny thing is that the NATVM just acts as a gateway for the external network in; it has RRAS installed, and since it is directly connected to both networks it allows access from the outside. The BGPVM also has RRAS installed, but we cannot see that using the RRAS console, we need to look at it in PowerShell, and the BGPVM (as stated) has a BGP peering set up with the MuxVM. The MuxVM acts as a load balancer and uses BGP to advertise each VIP to the BGPVM as a /32 route.
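
If you want to poke at this yourself, the BGP configuration can be inspected with the standard RemoteAccess BGP cmdlets on the BGPVM. A minimal sketch (assuming you are logged on to the BGPVM and the RemoteAccess module is available):

# Run on the BGPVM - requires the RemoteAccess/BGP PowerShell module
Get-BgpRouter                     # Local BGP router configuration (ASN, BGP identifier)
Get-BgpPeer                       # Configured peers, e.g. the MuxVM, and their connectivity state
Get-BgpRouteInformation           # Learned routes, including the /32 VIP routes advertised by the Mux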

So for instance, if we open a connection from the ClientVM to Portal.Azurestack.local (which has an IP of 192.168.133.74), the traffic flow will go like this:

ClientVM –> NATVM –> BGPVM –> (BGP ROUTE PEER) –> MuxVM –> PortalVM

Now remember that the configuration of BGP and LB and the host is done by the network controller

SLB infrastructure
For a virtual switch to be compatible with SLB, you must use Hyper-V Virtual Switch Manager or Windows PowerShell to create the switch, and the Azure Virtual Filtering Platform (VFP) extension must be enabled on that virtual switch.
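
As a rough sketch of what that looks like in PowerShell (the switch and NIC names here are placeholders, and the exact VFP extension name may differ between builds, so verify it with Get-VMSwitchExtension first):

# Create an external vSwitch bound to a physical NIC (names are examples)
New-VMSwitch -Name "CCI_External" -NetAdapterName "Ethernet 2" -AllowManagementOS $true

# List the extensions available on the switch, then enable the VFP forwarding extension
Get-VMSwitchExtension -VMSwitchName "CCI_External" | Select-Object Name, Enabled
Enable-VMSwitchExtension -VMSwitchName "CCI_External" -Name "Microsoft Azure VFP Switch Extension"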

So for those that are looking into Windows Server 2016: look into the networking stack of 2016, it's bloody HUGE!

The case of the HTTP traffic not working on NetScaler

So today I was asked to help troubleshoot an issue where a customer had set up a new pair of NetScaler MPXs using LACP and different VLANs. After the initial setup they started with a basic load balancing setup for StoreFront 3.1, and stuff was not working.

The service was up as it should be, meaning that the SNIP traffic to the backend server was working. But opening an HTTP connection to the service didn't do anything. Even the networking tools in IE and Chrome didn't see anything. I tried a ping, and that was working fine.

Then my first thought was: WTF? Why isn't HTTP traffic working when ICMP is…

So like any good IT-guy we started at the bottom of the chain (Physical and data link layer)

Was the networking working as it should?
VLAN configured properly? Check
LACP configured properly? Check
Routing properly configured? (MAC-based forwarding didn't work either)
After looking over the configuration, we noticed that outgoing traffic was going to one MAC address while the response came from another MAC address, which might have pinpointed the issue, but that was a false positive since it was just HSRP doing its thing…

Now since ping was working we just wanted to verify that all the other parts of the network were working as they should, and of course we didn't see any firewall issues either. Now comes the interesting part: we set up Wireshark on an RDS server to see if the HTTP traffic was actually going back and forth to the NetScaler.

And it was, we saw the HTTP traffic going back and forth as it should, so looks like the network was working as it should.

The problem was now that the HTTP packets were coming back to the client but were NOT appearing in the browser, and this seemed to happen to all clients that tried to connect.

So what is happening now?? Then I got the information that they have an HTTP proxy solution in place, which of course could have been the culprit, but that was not the case….

Now last night I actually saw a blog from Citrix here –> https://www.citrix.com/blogs/2016/02/24/announcing-storefront-3-5/

image

AHA, bingo! One quick look at the event log on the StoreFront server and we saw this.

clip_image001

So we went into the SSL parameter settings of the service group and saw this; after we disabled TLSv1.2, it worked!

clip_image002

So the solution was actually in the first lines of the blog post… MPX and StoreFront 3.1. The funny thing is that we actually had another similar case with SDX and VPXs, where a consultant had upgraded their VPX appliances from 10.5.55 to 10.5.59. What happened was that the VPXs started to communicate with the older backend servers using TLS 1.2, which caused the same problems. After upgrading the VPXs to the latest 11.64 build they were able to DISABLE TLS 1.2 on the backend SSL parameters and things started to work again.
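
For reference, the same change can also be scripted through the NITRO REST API. A minimal sketch in PowerShell (the address, credentials and service group name are placeholders, and it assumes the sslservicegroup resource with a tls12 attribute, so double-check against the NITRO documentation on your build):

# Log in to NITRO and keep the session cookie (address and credentials are examples)
$login = ConvertTo-JSON @{ "login" = @{ "username" = "nsroot"; "password" = "nsroot" } }
Invoke-RestMethod -uri "http://192.168.1.10/nitro/v1/config/login" -body $login -SessionVariable NSSession `
-Headers @{"Content-Type"="application/vnd.com.citrix.netscaler.login+json"} -Method POST

# Disable TLS 1.2 towards the backend on the SSL service group (name is an example)
$body = ConvertTo-JSON @{ "sslservicegroup" = @{ "servicegroupname" = "svcgrp_storefront"; "tls12" = "DISABLED" } }
Invoke-RestMethod -uri "http://192.168.1.10/nitro/v1/config/sslservicegroup" -body $body -WebSession $NSSession `
-Headers @{"Content-Type"="application/vnd.com.citrix.netscaler.sslservicegroup+json"} -Method PUT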

What is Microsoft doing with RDS and GPU in 2016? and what are VMware and Citrix doing?

So this post was initially labeled Server 2016, but then I realized I had forgotten an important part of it, which I'll come back to later.

This year, Microsoft is most likely releasing Windows Server 2016 and with it a huge number of new features like Containers, Nano, SDN and so on.

But what about RDS? Well, Microsoft is actually doing a bunch there:

  • RemoteFX vGPU support for GEN2 virtual machines
  • RemoteFX vGPU support for RDS server
  • RemoteFX vGPU with OpenGL support
  • Personal Session Desktops (allows for an RDSH host per user)
  • AVC 444 mode (http://bit.ly/1SCRnIL)
  • Enhancements to RDP 10 protocol (Less bandwidth consuming)
  • Clientless experience (HTML5 support is now in tech preview for Azure RemoteApp, and will most likely be ported to on-premises deployments as well)
  • Discrete Device Assignment (which in essence will be GPU passthrough; see the sketch below) http://bit.ly/1SULnLD
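
As a teaser of what DDA looks like in the Technical Preview, here is a rough sketch of assigning a GPU to a VM with the new Hyper-V PowerShell cmdlets (the device filter, location path and VM name are examples only; check the DDA guidance for any memory-mapped IO settings your particular card needs):

# Find the GPU and its location path on the Hyper-V host (filter and names are examples)
$gpu = Get-PnpDevice | Where-Object { $_.Class -eq "Display" -and $_.FriendlyName -like "*NVIDIA*" } | Select-Object -First 1
$location = (Get-PnpDeviceProperty -InstanceId $gpu.InstanceId -KeyName "DEVPKEY_Device_LocationPaths").Data[0]

# Disable the device on the host, dismount it and hand it to the VM
Disable-PnpDevice -InstanceId $gpu.InstanceId -Confirm:$false
Dismount-VMHostAssignableDevice -LocationPath $location -Force
Add-VMAssignableDevice -LocationPath $location -VMName "GPU-VM"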

So there is a lot happening in terms of GPU enhancements, performance increases in the protocol and, of course, hardware offloading of the encoder.

Another important piece is the support coming to Azure with the N-series, which is DDA (GPU passthrough) in Azure. This will allow us to set up a virtual machine with a dedicated GPU, running at a per-hour price when we need it! In some cases it can also be combined with an RDMA backbone where we need high compute capacity for deep learning. The N-series will be powered by NVIDIA K80 & M60 cards.

So is RDS still the way to go for a full-scale deployment? It can be. RDS has come from a dark place to become a good enough solution (even though it has its limitations) and the protocol itself has gotten a lot better (even though I miss a lot of tuning capabilities for the protocol itself).

Now VMware and Citrix are also doing their thing, with a lot of heavy hitting on both sides, and this again gives us a lot of new features since both companies are investing a lot in their EUC stacks.

The interesting part is that Citrix is not putting all its eggs in one basket, now adding support for Azure as well (on top of the existing support for ESXi, Amazon, Hyper-V and so on), meaning that when Microsoft releases the N-series, Citrix can easily integrate with it to deliver GPU using its own stack, which has a lot of advantages over RDS. Horizon with GPU, on the other hand, is limited to running on ESXi.

VMware on the other hand is focusing on a deep partnership with NVIDIA and moving ahead with Horizon Air Hybrid (which will be a kind of Citrix Workspace Cloud-like setup), and VMware is also doing A LOT on their stack:

  • AppVolumes
  • JIT desktops
  • User Environment Manager

Now 2016 is going to be an interesting year to see how these companies are going to evolve and how they are going to drive the partners moving forward.

Monitoring Syslog from OMS with non-OMS agents

So this weekend I was tasked with setting up OMS syslog monitoring against Linux targets that are not supported by the OMS agent. The currently supported distributions for the OMS Linux agent are the following:

Amazon Linux 2012.09 –> 2015.09 (x86/x64)
CentOS Linux 5,6, and 7 (x86/x64)
Oracle Linux 5,6, and 7 (x86/x64)
Red Hat Enterprise Linux Server 5,6 and 7 (x86/x64)
Debian GNU/Linux 6, 7, and 8 (x86/x64)
Ubuntu 12.04 LTS, 14.04 LTS, 15.04, 15.10 (x86/x64)
SUSE Linux Enterprise Server 11 and 12 (x86/x64)

Now since many have network devices which run none of these operating systems, I needed to set up something that would allow me to receive the syslog events from other devices and then forward them to OMS. What I came up with was setting up a syslog collector on an operating system supported by the OMS agent, so I set up an Ubuntu 14.04 virtual machine to be used as the syslog collector.

image  

The simplest way was to use the built-in rsyslog service on Ubuntu and configure it for remote collection; by default it is only used for local logging and does not accept remote syslog messages.

Now as mentioned this requires a simple machine running Ubuntu 14.04 or 15.04. From the terminal we need to configure rsyslog.conf, which is located under the /etc folder.

From there you need to edit the file, which can be done using vim or vi. In the conf file you need to remove the # in front of the ModLoad and UDPServerRun lines, which will allow the syslog daemon to receive messages from remote hosts.

# provides UDP syslog reception
$ModLoad imudp
$UDPServerRun 514

Next you need to add these lines before the GLOBAL DIRECTIVES part of the config file.

$template RemoteLogs,"/var/log/%HOSTNAME%/%PROGRAMNAME%.log"
*.*  ?RemoteLogs

This is used for the syslog daemon to create syslog files under /var/log where all the log files will be named after the remote host that forwards information.

After this is configured you need to restart the rsyslog service:

sudo /etc/init.d/rsyslog restart

image

Now we should see that the syslog folder will be populated under the folder of the host name.
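
A quick way to verify the collector from a Windows box is to fire a test message at it over UDP 514. A small PowerShell sketch (the collector IP is an example, and the payload is a hand-built syslog-style message using facility local0 / severity info, i.e. PRI 134):

# Send a test syslog message to the rsyslog collector
$collector = "192.168.1.50"                       # IP of the Ubuntu collector (example)
$message   = "<134>$(hostname) test: hello from PowerShell"
$bytes     = [System.Text.Encoding]::ASCII.GetBytes($message)
$udp       = New-Object System.Net.Sockets.UdpClient
$udp.Send($bytes, $bytes.Length, $collector, 514) | Out-Null
$udp.Close()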

After this is done you need to install the OMS agent using the following commands

$> wget https://github.com/Microsoft/OMS-Agent-for-Linux/releases/download/v1.1.0-28/omsagent-1.1.0-28.universal.x64.sh

$> chmod +x ./omsagent-1.1.0-28.universal.x64.sh

$> md5sum ./omsagent-1.1.0-28.universal.x64.sh

$> ./omsagent-1.1.0-28.universal.x64.sh --upgrade -w <YOUR OMS WORKSPACE ID> -s <YOUR OMS WORKSPACE PRIMARY KEY>

After the OMS agent is installed and configured, we need to enable syslog collection from within OMS.

image

Then we can go into Log Search, open the Syslog view and drill into the different entries.

image

So in this case I just configured regular Syslog setup from a Cisco ASA and a Citrix NetScaler to forward to the Ubuntu server.

NetScaler and traffic flow explained

So after reading through a lot of forum posts and blog posts on the subject, and also getting a lot of feedback on my previous eBook, I decided to write a blog post on the subject. Let us consider the following: we have a NetScaler in two-arm mode, with a service located in the DMZ and the backend servers on another subnet.

So some important things to think about:

  • The VIP is NOT a packet-generating IP by default; it holds the load balancing logic (methods, persistency and so on).
  • Internally, when traffic hits a VIP the NetScaler will look up the servers in the bound service group (or services) to see if it is directly connected at L2 to the backend servers; if not, it will look at the routing table to see if it can reach that network.
  • When adding a SNIP to the NetScaler, it will create a DIRECT ROUTE to that particular layer 2 network (in the case of SNIP 192.168.1.110, it will add a direct route to 192.168.1.0/24 with the gateway set to 192.168.1.110). The SNIP provides the route lookup capability, which the NetScaler commonly uses when returning traffic.
  • Most of the monitors that are attached to a service use the SNIP as the source IP.

image

So when a client accesses a VIP, all traffic will be directed to the VIP, where the destination MAC will point to interface 1. Remember that on a NetScaler an IP address is not directly bound to an interface unless specifically configured.

The NetScaler has an internal table of the servers that are attached and will then use the closest SNIP to talk with the backend server.

Now the problem with the example above is that it will not work with the default settings: since a VIP cannot generate outgoing packets on its own, the traffic flow will stop.

It is important that we have an IP address (a SNIP) on the same L2 network as the VIP, because the SNIP is a packet-generating IP. So if we look at the example above again:

image

This time we have a SNIP on the network where the VIP is located, meaning that we have a SNIP for each L2 network. So in this case the traffic flow will work like so.

Client –> VIP –> NetScaler –> SNIP (Closest L2 IP) –> Server, and when the NetScaler now responds back to the client

Server –> SNIP –> NetScaler (Session Table) –> VIP (Check if SNIP is present) –> Client

So even though we set up a SNIP on the same network where the VIP is located, it is never used as the source when going back to the client; it is just needed to give the NetScaler a packet-generating IP on that network.

Mac based forwarding

Another option that we have, which many use, is MAC-based forwarding (MBF).

Q. What is MBF?

A: MBF alters the way the NetScaler appliance routes the server replies back to clients. MBF caches the MAC address of the uplink router that forwarded the client request to the appliance. When a reply is received, it is passed through to the same router that sent the client request without going through any route lookup. If MBF is disabled, then the return path is determined by a route lookup, or is sent to the default route if no specific route exists.

This essentially turns off the route lookup for return traffic. If we use this feature, we do NOT require a SNIP on the network where the VIP is located, because the packet-generating capability then sits on the interface itself. Features like static routes will still work, though, because when the NetScaler initiates a connection it uses the route and ARP tables for the lookup.
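
For reference, MBF is just a NetScaler mode, so it can also be toggled through the NITRO API. A minimal sketch (the address is a placeholder, the session handling is assumed to already be in place as shown in the NetScaler and PowerShell cmdlets post further down, and the nsmode enable action is taken from the NITRO documentation):

# Enable MAC-based forwarding through NITRO (reusing an existing $NSSession)
$body = ConvertTo-JSON -Depth 5 @{ "nsmode" = @{ "mode" = @("MBF") } }
Invoke-RestMethod -uri "http://10.0.0.1/nitro/v1/config/nsmode?action=enable" -body $body -WebSession $NSSession `
-Headers @{"Content-Type"="application/vnd.com.citrix.netscaler.nsmode+json"} -Method POST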

Different backend subnets and monitors

What about the SNIPs if we want to communicate with server 2, which is located on another network? Since we already have a SNIP in the default backend subnet, we can just add a new static route:

Network to access: 192.168.0.0
Netmask: 255.255.255.0
Gateway I need to talk to to reach that network: 192.168.1.1

This presumes that Router 1 (192.168.1.1) has knowledge of the second router on the network. Another option, if we do not want that SNIP traffic traversing between the different networks, is to add another SNIP directly to that subnet, for instance using a VLAN or by directly attaching an interface.
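
If you prefer to script it, the same static route can be added through NITRO. A small sketch using the values from the example above (address and session handling are placeholders, as in the other NITRO snippets):

# Add a static route so the NetScaler can reach the remote backend network
$body = ConvertTo-JSON @{ "route" = @{ "network" = "192.168.0.0"; "netmask" = "255.255.255.0"; "gateway" = "192.168.1.1" } }
Invoke-RestMethod -uri "http://10.0.0.1/nitro/v1/config/route" -body $body -WebSession $NSSession `
-Headers @{"Content-Type"="application/vnd.com.citrix.netscaler.route+json"} -Method POST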

What if I have multiple SNIPs connected to the same subnet?

The NetScaler would use the SNIPs in a round-robin fashion to communicate with the resources. If you want to dedicate a SNIP to a particular service, you should create a net profile, which allows us to restrict traffic to a particular subnet from a specific SNIP.

We can also attach a net profile to a VIP, which allows us to separate SNIP traffic for customer 1 from customer 2, for instance. If we have two net profiles containing SNIP 1 and SNIP 2 and we bind them to VIP 1 and VIP 2, we can distinguish which source the traffic to the same backend server is coming from:

Client —> VIP 1 (netprofile) –> SNIP 1 –> Server1

Client –> VIP 2 (netprofile) –> SNIP 2 –> Server1
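
Scripted with NITRO, creating a net profile and binding it to a load balancing vServer might look roughly like this (names and addresses are examples; the netprofile resource and the netprofile attribute on lbvserver are taken from the NITRO documentation, so verify them on your build):

# Create a net profile tied to a specific SNIP
$body = ConvertTo-JSON @{ "netprofile" = @{ "name" = "np_customer1"; "srcip" = "192.168.1.111" } }
Invoke-RestMethod -uri "http://10.0.0.1/nitro/v1/config/netprofile" -body $body -WebSession $NSSession `
-Headers @{"Content-Type"="application/vnd.com.citrix.netscaler.netprofile+json"} -Method POST

# Bind the net profile to VIP 1 so its backend traffic is sourced from that SNIP
$body = ConvertTo-JSON @{ "lbvserver" = @{ "name" = "vip1"; "netprofile" = "np_customer1" } }
Invoke-RestMethod -uri "http://10.0.0.1/nitro/v1/config/lbvserver" -body $body -WebSession $NSSession `
-Headers @{"Content-Type"="application/vnd.com.citrix.netscaler.lbvserver+json"} -Method PUT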

  • If there is no net profile on the virtual server or the service/service group, NetScaler uses the default method.
  • If there is a net profile only on the service/service group, NetScaler uses that net profile.
  • If there is a net profile only on the virtual server, NetScaler uses the net profile.
  • If there is a net profile both on the virtual server and service/service group, NetScaler uses the net profile bound to the service/service group.

We can also bind a particular SNIP to a monitor (via a net profile), which in essence means that SNIP will only be used for monitoring purposes. This allows us to even more granularly control which SNIPs are used for monitoring and which SNIPs are used for direct backend communication.

  • If there is a net profile bound to the monitor, NetScaler uses the net profile of the monitor. It ignores the net profiles bound to the virtual server or service/service group.
  • If there is no net profile bound to the monitor,
    • If there is a net profile on the service/service group, NetScaler uses the net profile of the service/service group.
    • If there is no net profile even on the service/service group, NetScaler uses the default method of selecting a source IP.

We also have a feature called policy-based routing (PBR), which allows us to specify how particular types of traffic should be routed.

So for instance, if we want to route web traffic via a gateway (since we have an application firewall there) while regular monitoring traffic uses the direct connection, we can specify a PBR (policy-based route) that sends HTTP traffic destined for a particular network via the gateway (firewall), thereby eliminating unnecessary traffic going through the firewall.

NOTE: PBRs are evaluated before regular routes, so traffic that does not match any PBR is processed using the regular routing table.

NetScaler and PowerShell cmdlets

Now this is something I have been planning a post on for some time, ever since I started working with the C# library to do NITRO API calls against the NetScaler. I was planning, and had started on, a PowerShell module for NetScaler, but someone beat me to it, so there is no reason to reinvent the wheel anymore :)

Someone at Citrix (Santiago Cardenas) has already created a REST API-based PowerShell module, which is available on GitHub here –> https://github.com/santiagocardenas/netscaler-configuration

The scripts contain many of the basic features, but I'll give you a recipe that will allow you to create your own extensions to them. When using the REST API, we have built-in documentation available on the NetScaler itself, under the Downloads page.

image

From the download you have an index.html file which will show you the different tasks

image
There are two main categories, configuration and statistics, and from there I can drill down into a specific feature. So for instance, let us look at the Gateway vServer (which in the documentation is located under SSL VPN; it is the same object).

So if we want to set up a Gateway vServer, what do we need to specify? From there we choose vpnvserver, which is the base object.
image
We get all the attributes that can be configured from the vpnvserver object.

name, servicetype, ipv46, range, port

Now it's a long list, but if you scroll down the documentation page you can see a specific example, for instance if you wish to add a vServer (the attributes in red are the ones that are required).

image

Now using the REST API we need to use a POST request, which will push the settings we specify using PowerShell. The GitHub PowerShell cmdlets have already taken care of this, so the commands are built up like this:

function GatewayvServer ($GatewayFQDN, $VIP) {
# Make sure the SSL VPN feature is enabled (helper function from the same script collection)
EnableFeatures SSLVPN

# Build the vpnvserver object with the attributes we want to set
$body = @{
    "vpnvserver"=@{
        "name"="$GatewayFQDN";
        "servicetype"="SSL";
        "ipv46"="$VIP";
        "port"="443";
        "icaonly"="ON";
        "tcpprofilename"="nstcp_default_XA_XD_profile";
        }
    }
$body = ConvertTo-JSON $body
# POST the object to the NITRO configuration endpoint (10.0.0.1 is the NSIP of the NetScaler)
Invoke-RestMethod -uri "http://10.0.0.1/nitro/v1/config/vpnvserver?action=add" -body $body -WebSession $NSSession `
-Headers @{"Content-Type"="application/vnd.com.citrix.netscaler.vpnvserver+json"} -Method POST
}

The function name is what we use when calling it from PowerShell, and the variables are the parameters we can specify behind the cmdlet. All the specific attributes are part of a variable called $body, which is then converted to JSON and added to the HTTP body. The 10.0.0.1 is the NSIP address of the NetScaler.
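
The $NSSession referenced in these functions is created by the module's login function. A stripped-down sketch of that pattern (the function name, address and credentials here are just placeholders):

function Login-NetScaler ($NSIP, $Username, $Password) {
# Authenticate against NITRO and store the session cookie for later calls
$body = ConvertTo-JSON @{
    "login"=@{
        "username"="$Username";
        "password"="$Password"
        }
    }
Invoke-RestMethod -uri "http://$NSIP/nitro/v1/config/login" -body $body -SessionVariable NSSession `
-Headers @{"Content-Type"="application/vnd.com.citrix.netscaler.login+json"} -Method POST
# Make the session available to the other functions in the script
$Script:NSSession = $local:NSSession
}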

Now what if we want to create a function that gets information about a particular vServer? We can see from the documentation that there is a "get" example.

image

So an example PowerShell function could look like this:

function Get-GatewayvServer {
# Prompt for the name of the Gateway vServer we want to look up
$gateway = Read-Host -Prompt "Type vServer name"
# Reuse the NetScaler session established by the login function and GET the object
Invoke-RestMethod -uri "http://10.0.0.1/nitro/v1/config/vpnvserver/$gateway" -WebSession $NSSession -Method GET
}

As we can see from the URI, all we need to specify is the hostname of the NetScaler and the name of that particular VPN vServer, using the GET HTTP method. So if you are unsure of the URI you can just open up a browser and connect to that particular URI.

image

So the PowerShell cmdlets from Santiago Cardenas can be used as a starting point; adding your own PowerShell functions is pretty easy once you look at which attributes and URIs are being used. So start scripting!

A Closer look at AVI networks

Now since I work a lot with NetScaler and spend too much time on social media these days, I am bound to come across other products that spark my interest.

This is where AVINetworks popped up into my view. (Well was kinda hard not to notice it)

So what do they do? They deliver an ADC (or, even better, a cloud delivery platform) product which is software-only and aimed at next-generation services (containers, microservices), which look to be their main focus.

Their architecture is pretty simple: an AVI Controller handles the monitoring, analytics and management of the different Service Engines, which actually deliver all the load balancing features. Using the controller we define a load balanced service, and the AVI Controller (if it has access) will deploy a Service Engine to serve that service to the end users. Note that using the connectors or the CLI it is easy to automate deployment of new services, even from a development standpoint.

As of now they state any cloud, but it is limited to VMware ESX, OpenStack and Amazon Web Services. Their product seemed interesting, so I decided to give it a try in our ESXi environment.

The setup is a simple OVA template which deploys the AVI Controller (it can be downloaded from here –> http://kb.avinetworks.com/try/).

After the deployment is done you get to the main dashboard

image

So let's set up a new virtual service: a simple IIS load balancing VIP using the default port, HTTP profile and TCP profile.
image

Note that I can create a custom TCP profile with custom TCP parameters, and I can enable front-end optimization, caching and X-Forwarded-For rules under the application profile.

Now I need to create a server pool, which consists of the port, load balancing rules and persistency, and can also use AutoScale rules.

image

After I have added my servers and defined the virtual network it should attach to, I can go ahead with the service creation. From here I can add HTTP rules.

image

Under Rules I can define different HTTP request policies to modify header and so on.

image

Next I define the analytics part and activate real-time metrics. This is something that I think separates an ADC from a plain load balancer: the insight!

image

Then the advanced part, here I can define performance limitations and weight and so on.

image

When I am done with the configuration I click Save, then I get to this dashboard, Hello gorgeous!

image

What is happening in the background now is that the AVI Controller is deploying a Service Engine OVA template to my ESX hosts.

image

It is connected to my internal VM network, and when the Service Engine is done deploying, the health score is set to 100.

image

Now when I start to generate some traffic against the VIP I can see in real time what is going on, and how long the application itself takes to respond to each request.
This is valuable insight! I can see that my internal network is not the bottleneck, and neither is the client, but the application itself is spending too much time. I can also see the number of connections and the amount of throughput being generated.

image

If I go into security I can see whether I have any ongoing attacks and what level of security I have in my network. I need to get some more details on what kinds of attacks will be detected in this overview.

image

Just for the fun of it, I used LOIC to spam the VIP with HTTP GET requests to see if I could trigger something. It didn't, but when I looked into the log I could see that I get all the information I want from within the dashboard.

image

I can basically filter on anything I want. Now if I go back to the dashboard, I can see the flow between the Service Engine, the VIP and the server pool it is attached to.

image

Another cool feature is the ability to scale out or scale in if needed. Let us say that we can see the Service Engine is becoming a bottleneck; then we can just go into the service and choose scale out.

image

When we go back to the dashboard now we can see that we have two service engines servicing this VIP

image

Now the cool thing is that we can set AVI to autoscale if needed. Let's say one of the Service Engines is becoming a bottleneck; it will trigger a CPU alert which will then create another Service Engine (IF the AVI Controller has write access to the virtual environment).

In terms of load balancing between multiple Service Engines, it uses GARP on the primary Service Engine, where most of the traffic will be processed. Excess traffic is then forwarded at layer 2 to the MAC of the secondary SE, which changes the source IP address of the connection and thereby bypasses the primary SE on the way back to the client.

So far I like what I see. This is a different approach from the traditional ADC delivery method where everything sits in a single appliance, so stay tuned for more!

Windows Server 2016 and introducing AVC 444 mode

With the upcoming release, Microsoft has been busy adding a lot of new features to the RDS platform, like RemoteFX vGPU for GEN2 VMs and for server OS, GPU passthrough, server VDI and so on. But out of the blue came this: https://blogs.msdn.microsoft.com/rds/2016/01/11/remote-desktop-protocol-rdp-10-avch-264-improvements-in-windows-10-and-windows-server-2016-technical-preview/

Rachel Berry at NVIDIA also blogged about the limitations of H.264 and 4:2:0 chroma subsampling –> https://virtuallyvisual.wordpress.com/2016/02/18/microsoft-rdp-end-client-h-264-444-hardware-decode-support-on-existing-decoders-420-elegant-and-cool/. Anyhow, I was curious: just how does this affect my RDP performance?

The setup was pretty simple: I could use it either with RemoteFX vGPU or with a plain virtual machine running 2016 TP4. I also needed a Windows 10 client running build 1511 or higher.

There are basically two policies that we can define

image

Note that for the "Prefer AVC hardware encoding" policy, if you are using RemoteFX vGPU, the policy should be set on the Hyper-V host and not on the RDS server itself.

image

After these settings are enabled and active, the next time you connect to an RDP session you will see that it triggers an event in the event log under Applications and Services Logs -> Microsoft -> Windows -> RemoteDesktopServices-RdpCoreTS. You just need to reconnect after setting the policy.

image

Profile: 2048 (Yep we are good)
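
If you want to check for that event without clicking through Event Viewer, a quick PowerShell query against the RdpCoreTS operational log could look like this (the message filter is just an example; adjust it to the event text on your build):

# Look for recent RdpCoreTS events that mention the negotiated AVC/H.264 profile
Get-WinEvent -LogName "Microsoft-Windows-RemoteDesktopServices-RdpCoreTS/Operational" -MaxEvents 200 |
    Where-Object { $_.Message -match "Profile" } |
    Select-Object TimeCreated, Id, Message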

Now I did a couple of simple tests to look at the bandwidth usage (and also to test the differences between RDS 2012 and 2016). Both setups ran the same policies; the only difference was that one had AVC 444 enabled and the other did not.

This was a simple test of video playback; the first line is RDS 2012 R2 and the one below is 2016 TP4 (almost 20 MB lower bandwidth usage), wow!

image

Then I ran the same test again and got similar results for 2012 R2, but this time 2016 was running with AVC 444 mode enabled, which, as my initial guess suggested, used a bit more bandwidth.

image

I also did a couple more tests, but the conclusion was almost the same: using AVC 444 for video consumes about 10% more bandwidth. I will need to test more to see how much better the graphics actually are.

NetScaler eBook and work in progress and new project!

After being overwhelmed with requests (600+) and feedback (40+) on my previous project on NetScaler optimizing and tuning, both from people in the community and from Citrix themselves, I have decided to go ahead with a new project, which is basically creating a new "ebook", this time focusing on the most requested feature… NetScaler Gateway.

I have decided to split the new project into different segments of the book.

  • NetScaler Gateway
    • RDP Proxy
    • ICA Proxy
      • Best practices
      • Multi-domain scenarios (single domain, multiple domains within the same forest, multiple domains with trust, multiple domains without trust)
      • Deployment types (2 arm, 1 arm, double hop)
      • Optimal Gateway
      • Framehawk
      • Audio over DTLS
      • SSL VPN
      • Traffic profiles
      • Portal Customization
    • Bookmarks
    • Endpoint analysis and VPN
      • Pre-auth
      • OPSWAT
      • Smart Access
    • Unified Gateway
    • AAA
      • MFA
      • Kerberos auth
    • Troubleshooting
      • Policies
      • Connection
    • Security
      • SSL/TLS settings
      • PCI-DSS
    • AppFlow
      • HDX Insight

If there is anything I have missed, please leave me some feedback, and of course help is always appreciated. My goal is to start each part with an overview of the feature and what each option does, and then create a to-do guide for configuring it. For instance:

AppFlow generates…..yada…yada, best practices include…yada…yada…yada

Then the to-do guide will go something like this.

To setup AppFlow go into System –> AppFlow… do step 1, 2 and 3.

Agree/Disagree?

My plan was also to integrate the XenMobile and ShareFile components into the mix, but I will require assistance from someone else since those products are not my strong suit.

Oh, and based upon the feedback I have gathered so far on the first eBook, there will be an update to it as well, containing typo fixes, enhanced sections and some new stuff.