Monthly Archives: July 2015

How Nutanix works with Hyper-V and SMB 3.0

In my previous blog post I discussed software-defined options using Hyper-V, and how Windows Server is getting a lot of good built-in capabilities but still lacks a proper scale-out solution with performance, something that is also coming in Windows Server 2016.

Now one of the vendors I mentioned which has a proper scale-out SDS solution for Hyper-V with support for SMB 3 is Nutanix, which is the subject of this blog post, where I will describe how it works for SMB-based storage. Before I head on over to that, I want to talk a little bit about SMB 3, some of its native capabilities, and why they do not work for a proper HCI setup.

With SMB 3.0 Microsoft introduced two great new features, SMB Direct and SMB Multichannel, which are aimed at higher throughput and lower latency.

SMB Multichannel (leverages multiple TCP connections across multiple CPU cores using RSS)

SMB Direct (allows RDMA-based network transfers, bypassing the TCP stack and moving data directly from memory to memory, which gives low-overhead, low-latency connections)

Now both these features give us better NIC utilization, but they are aimed at a traditional configuration where storage is still a separate resource from compute. My guess is that when we deploy a Storage Spaces Direct cluster on Windows Server 2016 in an HCI deployment, these features will be disabled.
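As a quick sanity check on a plain Windows Server SMB setup (not Nutanix-specific), you can verify from PowerShell whether Multichannel and RDMA are actually in play. These are the standard SMB client cmdlets, run on a host with an active connection to a share; this is just a diagnostic sketch:

```shell
# List active SMB connections and whether they span multiple channels
Get-SmbMultichannelConnection

# Show which client NICs are RSS- and RDMA-capable
# (Multichannel leverages RSS, SMB Direct requires RDMA)
Get-SmbClientNetworkInterface | Select-Object FriendlyName, RssCapable, RdmaCapable

# Multichannel can also be disabled on the client side if needed
Set-SmbClientConfiguration -EnableMultiChannel $false -Force
```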

So how does Nutanix work with SMB 3?


First off, it is important to understand the underlying structure of the Nutanix OS. The local storage in the Nutanix nodes of a cluster is added to a unified pool of storage which is part of the Nutanix distributed filesystem. On top of this we create containers, which have their own settings like compression, dedup, and replication factor, which defines the number of copies of data within a container. The reason for these copies is fault tolerance in case of a node or disk failure. So in essence you can think of this as a DAG (Database Availability Group), but for virtual machines.

So for SMB we can have shares, which are represented as containers, which again are created on top of a Nutanix cluster and then presented to the Hyper-V hosts for VM placement.

It is also important to remember that even though we have a distributed file system across different nodes, data is always served locally to a node (the reason for this is so that the network does not become a point of congestion). Nutanix has a special role called the Curator (which runs on the CVM), which is responsible for moving hot data as local to the VM as possible. So if we for instance migrate a VM from host 1 to host 2, the CVM on host 1 might still contain the VM data; reads and writes from host 2 will then go to the CVM on host 1, while the CVM on host 2 starts caching the data locally.

Now since this architecture leverages data locality, there is no need for features like SMB Direct and SMB Multichannel, so these features are not required in a Nutanix deployment for Hyper-V. It does, however, support SMB Transparent Failover, which allows for continuously available file shares.

Now I haven’t yet explained how this architecture handles I/O; that is where the magic happens. Stay tuned.

Software defined Storage options for Hyper-V

As I see Hyper-V gaining more and more traction, I also see that we are in need of better storage solutions around it. Microsoft introduced Storage Spaces in 2012, along with features like dedup. The problem with the deduplication feature is that it was mostly aimed at VDI environments (for Hyper-V) and not traditional servers, and it was limited to one thread; in Windows Server 2016 it is expanded with support for backup workloads.

Storage Spaces was also enhanced with tiering in 2012 R2, which gives the ability to add SSD disks and move data between tiers in a Storage Spaces setup, and also gives us a write-back cache for random writes. In the upcoming Windows Server 2016, we know that we will have the option to do Storage Spaces Direct (meaning locally attached disks on server nodes working as a stretched cluster, just like VSAN and so on), which can act either as a Scale-Out File Server cluster or as a hyper-converged solution combining SMB and Hyper-V on the same nodes. This gives an architectural advantage, since it allows us to scale much more simply (up to the supported number of nodes, which is set to 32).

Microsoft also introduced the SMB 3.0 protocol, which allows for scale-out communications with features such as:

  • SMB Multichannel (which allows the use of multiple network connections at the same time)
  • SMB Direct (which gives low-latency connections over RDMA)
  • Usage for SQL and Hyper-V over SMB

So SMB gives us good fault-tolerance and high-throughput options, and with RDMA it gives us low-latency connections, but it is still limited to the disks and controllers behind the SMB file servers. Using SMB with regular network cards is still TCP (which has about 5-8% overhead if not configured properly), which in most cases will perform slower than localized virtual machines on individual hosts. So what about other options, and using memory as a tier?

Here are some numbers to chew on (from Jeff Dean) about speed, where memory is part of the equation.

L1 cache reference                             0.5 ns
Branch mispredict                              5 ns
L2 cache reference                             7 ns
Mutex lock/unlock                            100 ns (25)
Main memory reference                        100 ns
Compress 1K bytes with Zippy              10,000 ns (3,000)
Send 2K bytes over 1 Gbps network         20,000 ns
Read 1 MB sequentially from memory       250,000 ns
Round trip within same datacenter        500,000 ns
Disk seek                             10,000,000 ns
Read 1 MB sequentially from network   10,000,000 ns

Microsoft also introduced something called CSV cache (available from Server 2012), which allows us to allocate system memory as a write-through cache. The CSV cache provides caching of read-only unbuffered I/O, which in essence makes it work well with Hyper-V clusters and Scale-Out File Servers using CSV.
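For reference, the CSV cache is enabled cluster-wide with a single property. A minimal sketch, assuming a Windows Server 2012 R2 cluster (on 2012 the property is named SharedVolumeBlockCacheSizeInMB instead, and each CSV must additionally be enabled for caching):

```shell
# Check the current CSV block cache size in MB (0 = disabled)
(Get-Cluster).BlockCacheSize

# Allocate 512 MB of RAM per node as CSV read cache
(Get-Cluster).BlockCacheSize = 512
```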

The problem with CSV cache is that it does not work with:

  • Tiered Storage Space with heat map tracking enabled
  • Deduplicated files using in-box Windows Server Data Deduplication feature (Note:  Data will instead be cached by the dedup cache) 
  • ReFS volume with integrity streams enabled (Note:  NTFS is the recommended file system as the backend for virtual machine VHDs in production deployments)

This means that we cannot get the best of both worlds, where we could combine memory, SSD, and HDD in the same storage pool.

Another thing is that Microsoft does not offer inline dedup for storage traffic; their dedup engine runs as a background (post-process) task.

With Windows Server 2016, I would say that Microsoft is moving towards a feature set which gives their customers the basics of what they need in the software-defined storage space:

  • Hyper-converged (Storage Spaces Direct)
  • Tiering capabilities
  • Enhanced deduplication
  • High throughput on SMB
  • Low cost

So for those that require more performance, features, and so on for Hyper-V, what options are there?

For VMware there is already a long list of different vendors that deliver storage optimization / SDS / HCI solutions:

  • Atlantis
  • PernixData
  • Nexenta
  • Nutanix
  • SimpliVity
  • VSAN
  • DataCore

Both Atlantis and SimpliVity have stated that they will support Hyper-V “soon”. Atlantis does have support for Hyper-V in their ILIO product, but not for USX.

As of now, only Nutanix and DataCore have full support for Hyper-V and SMB 3.0. Both of them offer more flexibility in terms of features and better performance with the use of memory as a tier, and that is just the basics. Tune in as I explore these features throughout the next blog posts and show how they differ from the built-in features in Microsoft’s stack.

NOTE: The vendors in the list are the ones I know about; I didn’t do a very long check, so if someone knows about others, please let me know.

New award – Veeam Vanguard

Received some good news today (which I have known for quite some time), but it is only now that I am allowed to talk about it :)

I have been quite active regarding Veeam on my blog and in much of my work, since I am a Veeam instructor and a general evangelist for their products, so I was quite thrilled when Veeam announced a new community award called Veeam Vanguard and that I was one of the awardees!

And now I join the ranks of other skilled IT pros in the community, such as Thomas Maurer, Rasmus Haslund, and fellow Norwegian Christian Mohn.

Thanks to Veeam!

More info on the Vanguard page here —

Getting started with Microsoft Advanced Threat Analytics

This is something I have been meaning to try out for a while, since the preview release at Ignite. Advanced Threat Analytics is new software from Microsoft (which comes from a purchase Microsoft made a while back), and it focuses on some of the more common security problems in a Windows environment, such as golden tickets, pass-the-hash, abnormal user behavior, and so on.

Now Microsoft ATA has a pretty simple architecture: it consists of two components plus a MongoDB database where the data is stored. The two components:

The ATA Center performs the following functions:

  • Manages ATA Gateway configuration settings

  • Receives data from ATA Gateways

  • Detects suspicious activities using behavioral machine learning engines

  • Supports multiple ATA Gateways

  • Runs the ATA Management console

  • Optional: The ATA Center can be configured to send emails or send events to your Security Information and Event Management (SIEM) system when a suspicious activity is detected.

The ATA Gateway performs the following functions:

  • Captures and inspects domain controller network traffic via port mirroring

  • Receive events from SIEM or Syslog server

  • Retrieves data about users and computers from the domain

  • Performs resolution of network entities (users and computers)

  • Transfers relevant data to the ATA Center

  • Monitors multiple domain controllers from a single ATA Gateway

These roles can be deployed on two different virtual machines or on the same VM. It is really important during setup of the ATA Center to define that communication happens on a separate IP for Center communication and for the management console, especially if you install both components on the same server.

ATA Center Configuration

Now the Gateway needs to be able to see the DC (or Global Catalog) traffic using port mirroring, which can be done either in a physical environment with SPAN or RSPAN, or we can set up port mirroring in a virtualized fashion.

I have my demo environment running on Hyper-V, which allows me to easily set up port mirroring. The first thing I need to do is configure the NIC on my DC as a port mirroring source.


Then I need to add another NIC to my Gateway VM and configure it with destination mirroring mode.


I also need to enable the NDIS capture filter on the vSwitch.
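For reference, the same three steps can be scripted with the Hyper-V PowerShell module. The VM names (DC01, ATAGW), NIC name, and switch name below are placeholders for my lab, so adjust accordingly:

```shell
# 1. Mirror all traffic from the domain controller's NIC
Set-VMNetworkAdapter -VMName DC01 -PortMirroring Source

# 2. Add a capture NIC to the ATA Gateway VM and mark it as the mirror destination
Add-VMNetworkAdapter -VMName ATAGW -Name Capture -SwitchName vSwitch
Set-VMNetworkAdapter -VMName ATAGW -Name Capture -PortMirroring Destination

# 3. Enable the NDIS capture extension on the virtual switch
Enable-VMSwitchExtension -VMSwitchName vSwitch -Name "Microsoft NDIS Capture"
```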


Before the initial setup note that there are some limitations in the preview…

Make sure that KB2919355 has been installed!

Only enter domain controllers from the domain that is being monitored. If you enter a domain controller from another domain, this will cause database corruption and you will need to redeploy the ATA Center and Gateways from scratch!

After you have deployed both components, all you need to do is define the domain controller and the capture NIC in the management console.


Now after this is done, we can verify that it has connectivity by checking the dashboard and searching for a user.


Now by default ATA takes about two weeks before it can establish a baseline of what regular activity looks like, but it has some default alerts which we can trigger to make sure that it works as it should. For instance, we can simulate a DNS reconnaissance attack.


A simple nslookup with the ls parameter will then trigger an alert in the console.
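The nslookup session looks something like this (the DC name and domain are placeholders for my lab; the ls command attempts a zone transfer against the DC, which is the behavior ATA flags as DNS reconnaissance):

```shell
nslookup
> server dc01.contoso.local
> ls -d contoso.local
```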


Since this is still a preview it has some limitations; as of right now it cannot detect PtH, so stay tuned for more about this when the full release comes.

Citrix Netscaler and support for next generation web traffic protocols like SPDY & HTTP/2

Now with the ever-growing pace of internet traffic, we are faced with one challenge: an old protocol which is over 15 years old now and in no shape to continue this race. Yes, the one I am talking about is the HTTP protocol.

Now over the years, Google has done a great job trying to improve this way of communicating with its own protocol called SPDY, which uses prioritization and multiplexing, and where headers are compressed using GZIP or Deflate. You can read more about SPDY here –> 

Now on the other hand you have the HTTP/2 protocol, which the IETF has worked on, and which Google has said will replace their own SPDY protocol.

You can read more about the RFC on the HTTP/2 protocol here –> but in essence it is much the same as SPDY, since the initial draft of HTTP/2 was based upon SPDY. Another important thing to note is that communication with HTTP/2 is based upon a binary format, since this is much easier to compress, while traditional HTTP/1.1 is based upon human-readable text. The people over at HTTP Watch did a comparison between traditional HTTP, HTTP/2, and SPDY, and we can see that these new protocols work a lot more efficiently.

So what else is needed? We need a web server that supports HTTP/2 or SPDY, and we need web clients that support these protocols.

As we can see, most web servers already support HTTP/2, with Windows getting it in Windows Server 2016 and the new version of IIS, and most web browsers support HTTP/2 as well, such as Chrome, Opera, Firefox, Internet Explorer, and lastly Microsoft Edge.

But for instance Firefox only supports HTTP/2 over TLS 1.2, meaning that even if the NetScaler can use HTTP/2 over plain HTTP, it will not work with most of the web browsers.


So how do I test that this stuff works? The simplest thing is to download an add-on for Chrome called HTTP/2 and SPDY indicator, which basically shows which sites are enabled for HTTP/2, SPDY and so on. (This extension is available for Firefox as well.)

So whenever we are on a site which has HTTP/2 enabled, the icon will appear as such:


We can also look at the internal table within Chrome by typing chrome://net-internals/#spdy in the address bar.

If this does not work in your Chrome version, you need to enable SPDY4/HTTP2 within Chrome, which can be done using the chrome://flags/#enable-spdy4 flag.

To set this up on the NetScaler, we have to create or alter an HTTP profile; note that this is only available from version 11 and upwards.


Then choose enable under the checkbox for HTTP/2. If SPDY is also enabled, the following preference order is used when communicating with a vServer that has the HTTP profile bound:

  • HTTP/2 (if enabled in the HTTP profile)
  • SPDY (if enabled in the HTTP profile)
  • HTTP/1.1
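If you prefer the CLI over the GUI, the equivalent would be something like the following sketch. The profile and vserver names are placeholders, and the exact flags should be verified against your NetScaler 11 build:

```shell
# Create an HTTP profile with HTTP/2 (and optionally SPDY) enabled
add ns httpProfile http2_prof -http2 ENABLED -spdy ENABLED

# Bind the profile to an existing SSL load-balancing vserver
set lb vserver vs_web_ssl -httpProfileName http2_prof
```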

Now in most cases the backend servers are still using HTTP/1.1. In that case the NetScaler works as a proxy: it decodes the traffic from the clients to HTTP/1.1 and retransmits the data to the backend servers.

It is, however, important to note that running HTTP/2 on a VPX is not supported, and hence the clients will fall back to SPDY, which is supported on a VPX.

However, there are some requirements worth noting on VPX for SPDY as well:

Troubleshooting for SPDY

If SPDY sessions are not enabled even after performing the required steps, check the following conditions:

  • If the client is using a Chrome browser, SPDY might not work in some scenarios because Chrome sometimes does not initiate TLS handshake.
  • If there is a forward-proxy between the client and the NetScaler appliance, and the forward-proxy doesn’t support SPDY, SPDY sessions might not be enabled.
  • NetScaler does not support NPN over TLS 1.1/1.2. To use SPDY, the client should disable TLS1.1/1.2 in the browser.
  • Similarly, if the client wants to use SPDY, SSL2/3 must be disabled on the browser.