Monthly Archives: September 2015

Getting started with PernixData Architect

So how well does your application perform? This is the question that everyone wants to know the answer to; it's kinda like the meaning of life (from an IT-pro perspective).

In order to understand how an application performs we can of course look at memory and CPU usage, but the question that remains unanswered in most cases is: WTF is it doing with the storage system? Enter PernixData Architect.

Now I have previously written about PernixData and FVP here: https://msandbu.wordpress.com/2015/03/20/pernixdata-fvp-what-does-it-actually-do/ Architect uses the same agent on the ESXi host, but instead of doing acceleration it does data analysis.

Now many companies that do data acceleration or sell storage often tell you how many IOPS their product can do with a specific block size. Using Architect we can actually see what kind of block sizes our virtual machines are generating, along with latency and throughput.

The installation process is pretty much the same as for FVP: install the host extension, then install the management server, which runs on top of SQL Server.

(NOTE: I have a pretty small lab at the moment, running nested ESXi on VMware Workstation, so I only have a few virtual machines.)

Have to say that the UI is quite an improvement.

image

We can use Architect and FVP within the same UI (just switch between them in the dashboard menu at the top), and we can drill down into different metrics.

image

This allows us to see latency/IOPS and what kind of workloads are running. We can also see the block size breakdown with the average IOPS per block size.

image

Now eventually, when this is integrated with the cloud solution that PernixData is building, it will give awesome insight into how we can configure our storage properly and what we can expect from our storage system. We will also eventually gain some knowledge on how to properly configure applications and how they operate at the storage level.

Last week at Commaxx, hello new possibilities

For the last three years I have been working at a VAD (Value Add Distributor) in Norway called Commaxx, where my focus has been on Veeam/Microsoft/Citrix, and also Dell for the last year or so. I have worked on pre-sales, consulting and training among other things, which has given me a real career boost. The last year or so I have also been doing a lot of work around Azure and System Center, which has been fun.

So now I am entering my final week at Commaxx. I have had some great years there, but due to personal reasons I will be changing jobs (really soon), and my focus from a technical point of view will lie elsewhere. Not that I won't be working with Microsoft/Citrix/Veeam/Dell, but I will try to embrace even more and become more technical. One of the key things I have realized by now is that I still know so little about IT, and there are so many things I want to know more about. I think this quote sums it up:

As I see it I am not an expert in anything; if anything I am more of a generalist. But maybe that allows me to see the big picture in a lot of things. Besides, there are no experts in IT, just those that understand things a bit better than others :)

Stay tuned!

Bug in NetScaler 11, PFX lost after reboot

So I recently became aware that NetScaler 11 has a serious bug: after a NetScaler is rebooted, the PFX certificate is removed from the appliance, even if the config is saved. http://discussions.citrix.com/topic/367451-server-certificate-lost-after-reboot-ns-110/

I decided to do a little digging in my lab environment. I imported a new PFX certificate with its private key and attached it to an LB vServer, as shown here.

I saved the config, and I can also see that the certificate is listed here under SSL – Certificates.

image

After a reboot, I can clearly see that the certificate is not installed, even though it is still on the file system.

image

image

The weird thing is that, comparing the saved config with the running config, the only things missing are these two lines, which actually install the certificate.

image

Now the fix is pretty simple, as Carl listed on the forum: convert the PFX to PEM format and it seems to work. Still, I have forwarded this to Citrix to get some clarity…
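For reference, the conversion can be done with OpenSSL along these lines (the file names are just examples, and -nodes leaves the exported key unencrypted, so handle it with care):

# Extract the certificate from the PFX bundle
openssl pkcs12 -in mycert.pfx -clcerts -nokeys -out mycert.pem

# Extract the private key from the PFX bundle
openssl pkcs12 -in mycert.pfx -nocerts -nodes -out mykey.pem

Then install the resulting PEM certificate and key pair on the NetScaler as usual.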

Network capabilities with Windows Server 2016

Now with the upcoming release of Windows Server 2016, too many have been caught up with the support for Docker, Nano Server and Storage Spaces Direct, and too many are missing out on the big investment Microsoft is making in WS2016, namely the networking stack!

This is also going to be a big part of Azure Stack when Microsoft releases it, since most of the Azure functionality in regards to networking is being ported to Windows Server 2016.

So what is actually new? So far all we have are the TP3 bits, so this is what is included. Note that most of these features are only available from PowerShell and are part of the Network Controller stack.

  • Software Load Balancer (SLB) and Network Address Translation (NAT). The north-south and east-west layer 4 load balancer and NAT enhances throughput by supporting Direct Server Return, with which the return network traffic can bypass the Load Balancing multiplexer.

  • Datacenter Firewall. This distributed firewall provides granular access control lists (ACLs), enabling you to apply firewall policies at the VM interface level or at the subnet level.

  • Gateways. You can use gateways for bridging traffic between virtual networks and non-virtualized networks; specifically, you can deploy site-to-site VPN gateways, forwarding gateways, and Generic Routing Encapsulation (GRE) gateways. In addition, M+N redundancy of gateways is supported.

  • Converged Network Interface Card (NIC). The converged NIC allows you to use a single network adapter for management, Remote Direct Memory Access (RDMA)-enabled storage, and tenant traffic. This reduces the capital expenditures that are associated with each server in your datacenter, because you need fewer network adapters to manage different types of traffic per server.

  • Packet Direct. Packet Direct provides a high network traffic throughput and low-latency packet processing infrastructure.

  • Switch Embedded Teaming (SET). SET is a NIC Teaming solution that is integrated in the Hyper-V Virtual Switch. SET allows the teaming of up to eight physical NICs into a single SET team, which improves availability and provides failover. In Windows Server 2016 Technical Preview, you can create SET teams that are restricted to the use of Server Message Block (SMB) and RDMA (see the sketch after this list).

  • Network monitoring. With network monitoring, network devices that you specify can be discovered, and you can monitor device health and status.

  • Network Controller. Network Controller provides a scalable, centralized, programmable point of automation to manage, configure, monitor, and troubleshoot virtual and physical network infrastructure in your datacenter. For more information, see Network Controller.

  • Flexible encapsulation technologies. These technologies operate at the data plane, and support both Virtual Extensible LAN (VxLAN) and Network Virtualization Generic Routing Encapsulation (NVGRE). For more information, see GRE Tunneling in Windows Server Technical Preview.

  • Hyper-V Virtual Switch. The Hyper-V Virtual Switch runs on Hyper-V hosts, and allows you to create distributed switching and routing, and a policy enforcement layer that is aligned and compatible with Microsoft Azure.
    image
    I think this will allow us to create L2 connections directly with virtual networks in Azure.

  • Standardized Protocols. Network Controller uses Representational State Transfer (REST) on its northbound interface with JavaScript Object Notation (JSON) payloads. The Network Controller southbound interface uses Open vSwitch Database Management Protocol (OVSDB).
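As mentioned in the SET bullet above, here is a minimal sketch of creating a SET team in TP3 from PowerShell (the switch and adapter names are just examples from my lab):

# Create a Hyper-V virtual switch with Switch Embedded Teaming across two physical NICs
New-VMSwitch -Name "SETswitch" -NetAdapterName "NIC1","NIC2" -EnableEmbeddedTeaming $true

Since the teaming is embedded in the virtual switch itself, there is no separate LBFO team to create first.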

Also, with the current investment in the OMI stack and the support for PowerShell DSC, we can easily extend this support to the physical network as well. And since the Network Controller uses JSON for management, we can expect to be able to use the same Resource Manager capabilities that are used in Azure once Azure Stack becomes available.

What is Microsoft Azure IaaS missing at this point?

Well, this might be a bit of a misleading blog post; it is not aimed at criticizing Azure, but is merely a post which looks at what I feel Microsoft Azure IaaS is missing at this point. Now even though Microsoft is doing a lot of development work on Azure, much of it is focused on Azure AD (no wonder, since they have something like 18 billion auths each week), but there is still work to be done on the IaaS part.

Lately we have seen the introduction of:

  • Azure Resource Manager https://msandbu.wordpress.com/2015/05/22/getting-started-with-azure-resource-manager-and-visual-studio/
  • Azure DNS https://msandbu.wordpress.com/2015/05/08/taking-azure-dns-preview-for-a-spin/
  • Containers on Azure http://azure.microsoft.com/en-us/blog/containers-docker-windows-and-trends/
  • Premium Storage and such https://msandbu.wordpress.com/2014/12/17/windows-azure-and-storage-performance/ and https://msandbu.wordpress.com/2015/01/08/azure-g-series-released-and-tested/

So what is missing?

  • Live Migration of virtual machines when doing maintenance on hosts: The concept of setting up an Availability Set (meaning setting up 2x of each virtual machine role) is not very sexy when trying to persuade SMB customers to move their servers to Azure. And in some cases, like RDS session hosts, the roles are stateful, which might be a bit of a pain if one host suddenly reboots.
  • 99.9% SLA on single virtual machine instances (again, see point 1). While this used to be an option, it was quietly removed during 2013… While some of the competition has an SLA for running single virtual machine instances/roles, Microsoft does not. Or maybe offer a customizable maintenance window.
  • Better integration of on-premises management. While VMM now has an option to integrate with Azure, it is missing some features to make it better, such as deployment from Azure https://technet.microsoft.com/en-us/library/mt125377.aspx
  • Scratch the old portal and be done with the new one! Today some features are only available in the old portal, such as Azure AD, while other features are only available in the new portal. This is just confusing. I suggest they finish porting the old features into the new portal and then start creating new features/capabilities there.
  • Better use of compute. For instance, being able to customize virtual machine sizes. I know that having pre-defined sizes gives better resource planning, but in some cases customers might need just 2 vCPUs and 8 GB of RAM, and paying that small extra for 4 vCPUs (while not needed) should not be necessary.
  • Fewer limitations on network capabilities. While it has improved, there are still some limitations which in fact limit network appliances on Azure (such as NetScaler, which can only operate with 1 vNIC in Azure; yes, I know that having multiple vNICs is supported, but it is random which does not work very well with network appliances). The same goes for the ability to set static MAC addresses, since a lot of network appliances use MAC-based licensing.
  • Central management of backup. While the Backup Vault contains a lot of information and some of the capabilities are still in preview, I would love to have a single view which shows all backup jobs. Also, give Azure Backup some capabilities to jump onto Exchange, SQL and Hyper-V, and include support for the DS-series!
  • IaaS V2 VMs are quite the improvement, moving away from the use of Cloud Services, but there are a lot of limitations here towards the other Azure services. For instance, they do not support the Azure Backup service, and there are no plans for a migration option from V1 to V2 VMs.
  • Azure DNS: give it a web interface! While PowerShell is fun and makes things a lot easier, sometimes I like to look at DNS zones from a GUI.
  • Support for BGP on VPN gateways (which would allow for failover between different VPN tunnels). The same goes for providing support for multi-site static VPN connections.
  • IPv6 support!
  • Support for Gen2 and the VHDX format. Microsoft is pushing Generation 2 virtual machines and the new VHDX format, so Azure should support this as well. This would make things a lot easier in a hybrid scenario and make it a lot easier to move resources back and forth.
  • Azure RemoteApp: while it is a simple and good product, there are some things I miss, such as full desktop access (most of our customers want full desktop access). Also, remove the limitation of a 20-user minimum; this is a huge deal breaker for SMB customers in this region.
  • Console access to virtual machines. In some cases where RDP might not be available for some reason, we should have an option to get into the console of the virtual machine.

Now what is the solution to getting all this added to Azure? Us, of course!

The best way to get Microsoft's attention to add new features and capabilities to Azure is by posting feedback on this site or by voting up already existing posts: http://feedback.azure.com/forums/34192--general-feedback

Much of the newly added capability originates from this forum.

Getting started with PowerShell management with Arista

In 2012 Microsoft introduced Open Management Infrastructure (OMI), which allows for standards-based management across different platforms. Microsoft is now working with Cisco and Arista to port OMI to their network switches. And with the latest version of PowerShell DSC we can also use DSC against OMI servers running on these switches; stay tuned for more about that.

But this is a blog post on how to get started with PowerShell management of Arista switches. We can download a trial from Arista's website to run in a virtual environment.

After setup we need to configure a management IP, define the port parameters for the CIM provider, deploy an ACL, and then save the configuration:

configure
interface management 1
ip address 192.168.200.66/24

exit

management cim-provider
no shutdown
http 5985
https 5986

exit

aaa root secret Password1

ip access-list OMI
10 permit tcp 192.168.200.0/24 any eq 5985 5986

exit

control-plane
ip access-group OMI in

copy running-config startup-config

Now that the appliance is available, we need to connect to it using New-CimSession:

# Since the computer does not trust the switch certificate we need to skip the CA and CN checks
$nossl = New-CimSessionOption -SkipCACheck -SkipCNCheck -UseSsl

# Switch credentials
$password = ConvertTo-SecureString "Password1" -AsPlainText -Force
$credentials = New-Object System.Management.Automation.PSCredential( "root", $password )

# Create a session to the switch
$switch = "192.168.0.10"
$session = New-CimSession -ComputerName $switch -Port 5986 -Authentication Basic `
        -Credential $credentials -SessionOption $nossl

Now with WMF 5.0 we can use the included NetworkSwitchManager module to manage the switches natively, without knowing the different CIM classes.

For instance, we can use Get-NetworkSwitchFeature or Get-NetworkSwitchEthernetPort.
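Against the session we created above, that looks like this (the cmdlets just need the CIM session):

# List the features the switch exposes through OMI
Get-NetworkSwitchFeature -CimSession $session

# List the Ethernet ports and their state
Get-NetworkSwitchEthernetPort -CimSession $session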

image

For instance, we can define trunk ports and VLAN access:
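A hedged sketch of what that can look like (the port numbers are just examples from my lab, and the parameter names are as I recall them from the WMF 5.0 preview module, so treat this as a sketch):

# Put port 5 into trunk mode
Set-NetworkSwitchPortMode -CimSession $session -PortNumber 5 -TrunkMode

# Put port 6 into access mode
Set-NetworkSwitchPortMode -CimSession $session -PortNumber 6 -AccessMode

# Persist the running configuration on the switch
Save-NetworkSwitchConfiguration -CimSession $session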

image

And as we can see from the running configuration, the parameters are set.

image

Still, there is a lot missing from the NetworkSwitchManager module, hence we need to use the built-in CIM classes for much of the management. Stay tuned for more.

Setting up Storage Spaces Direct on Windows Server 2016 TP3

This is a step-by-step guide on how to set up a minimal Storage Spaces Direct cluster on virtual machines running on VMware Workstation. It is also meant to enlighten people a bit about the functionality Microsoft is coming with and what it is lacking at the moment.

An important thing to remember about Storage Spaces Direct is that it is Microsoft's first step into hyperconverged infrastructure, since it allows us to set up servers using locally attached storage and create a cluster on top, kinda like VSAN and Nutanix, but not quite there yet. On top of the cluster functionality it uses Storage Spaces to create a pool and carve out different vDisks to store virtual machines. Storage Spaces Direct is not at the same level as VSAN and Nutanix, but it can also be used for general file server usage.

Clustering_Calabria_Hyperconverged

This setup is running on VMware Workstation 11, with two virtual machines for the Scale-Out File Server and one domain controller.
The two Scale-Out File Servers each have 4 virtual hard drives and 2 NICs attached.

It is important that the hard drives are SATA-based.

image

After setting up the virtual machines, install the File Server role and the Failover Clustering feature:

Install-WindowsFeature -Name File-Services, Failover-Clustering -IncludeManagementTools

Then just create a failover cluster using Failover Cluster Manager or PowerShell:

New-Cluster -Name hvcl -Node hv02,hv01 -NoStorage

After the cluster setup is complete we need to define that this is going to be a Storage Spaces Direct cluster:

Enable-ClusterStorageSpacesDirect

image

Then do a validation test to make sure that the Storage Spaces Direct cluster should work as intended :)
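The validation can also be kicked off from PowerShell (node names from above):

# Run cluster validation against both nodes
Test-Cluster -Node hv02,hv01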

image

Now you might get a warning that the disks on both nodes have the same identifier. In that case you need to shut down one of the VMs and change the SATA disk identifier.

image

Then define cluster network usage

image

The Storage Spaces replication network will be set to Cluster Only usage. Now that we have a bunch of disks available, we need to create a disk pool. This can be done either using Failover Cluster Manager or using PowerShell.
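If you go the PowerShell route, a hedged sketch looks like this (the pool name is just an example, and the exact storage subsystem friendly name depends on your cluster, so check it with Get-StorageSubSystem first):

# Create a pool from all disks that are eligible for pooling
New-StoragePool -StorageSubSystemFriendlyName "Clustered Storage Spaces*" -FriendlyName "S2DPool" -PhysicalDisks (Get-PhysicalDisk -CanPool $true)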

image

 

Either way, you should disable the write-back cache on a Storage Spaces Direct pool, which can be done after creation using Set-StoragePool -FriendlyName "nameofpool" -WriteCacheSizeDefault 0

image

Now we can create a new virtual disk and then configure settings like storage resiliency and such.
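Alternatively, this step can be sketched in PowerShell with New-Volume, which creates the vDisk, formats it and hands it to the cluster in one go (the names, size and resiliency here are just examples):

# Create a mirrored vDisk from the pool and format it with ReFS for CSV use
New-Volume -StoragePoolFriendlyName "S2DPool" -FriendlyName "vDisk01" -FileSystem CSVFS_ReFS -Size 100GB -PhysicalDiskRedundancy 1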

image

Then we are done with the vDisk

image

Now when going through the virtual disk partition setup, make sure that you set the file system to ReFS.

image

Now we can see the default values of the Storage Spaces Direct vDisk.

image

Now I can create a CSV volume of that vDisk

image

After we have created the CSV, we need to add the Scale-Out File Server role as a clustered role.

image

Next we need to add a file share to expose the SMB file share to external applications such as Hyper-V.

image

image

image

And we are done!

We can now access the Storage Spaces Direct cluster using the client access point we defined. During a file transfer we can see which endpoint is being used on the receiving end; in this case it is the host 192.168.0.30 which is getting a file transfer from 192.168.0.1, and it will then replicate the data to 10.0.0.2 across the cluster network.

image

The SMB client uses DNS to do an initial request to the SMB file server, and then they agree upon the dialect to use in the process. (This is from Message Analyzer.)

image

Now, what is it missing?

Data locality! I have not seen any indication that Hyper-V clusters running on top of Storage Spaces Direct in a hyperconverged deployment have the ability to "place" or run virtual machines on the node their storage lives on. This will create a fragmentation of storage/compute which is not really good. Maybe this is going to be implemented in the final version of Windows Server 2016, but the SMB protocol does not have any built-in mechanisms that handle this. Nutanix, for instance, has this built in, since the controller will see whether the VM is running locally or not and will start replicating bits and bytes until the processing is running locally.

Setting up HTTP/2 support on IIS server 2016 & Citrix Storefront

With the slow demise of HTTP/1.1, there is a new kid on the block: HTTP/2, which I have blogged about earlier from a NetScaler point of view: https://msandbu.wordpress.com/2015/07/03/citrix-netscaler-and-support-for-next-generation-web-traffic-protocols-like-spdy-http2/

In the upcoming server release from Microsoft, IIS will for the first time support HTTP/2, and it is enabled by default from TP3 (all we need is a certificate to enable HTTP/2). So if I fire up a plain HTTP connection to an IIS server on 2016 it will use regular HTTP, which can be seen using the developer tools in Internet Explorer.

image

Now, if I want to set up support for HTTP/2 on older versions, it needs to be enabled from the registry at the moment, using the following registry key.

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\HTTP\Parameters

Here we need to create a new DWORD value named DuoEnabled and set its value to 1.
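The same change can be made from an elevated PowerShell prompt (a sketch; the value is read when the HTTP service starts, so restart it or reboot afterwards):

# Create the DuoEnabled DWORD under the HTTP service parameters
New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\HTTP\Parameters" -Name DuoEnabled -PropertyType DWord -Value 1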

image

Then we need to add a certificate, since HTTP/2 by default requires TLS in order to function. This can be done by, for instance, adding just a self-signed certificate to the website binding.
NOTE: TLS is not formally required by the standard, but it has been adopted as a requirement by the different web server vendors as well as browser vendors.

image

Then restart the IIS service.

Now we can again make a connection to the IIS website, with the developer tools open in IE, and we can see that it is connecting using HTTP/2.

image

Now I can also verify that this works flawlessly for Citrix Storefront as well

image

Just by moving to HTTP/2, performance looks to have improved quite a lot: the login page went from 200 ms to about 40 to 50 ms load time, and the general feel of the site is much smoother.

NOTE: I have sent an email to Citrix to ask if this is supported or if there will be an upgrade in the future to support this properly.

NOTE: You can see more about the implementation of HTTP/2 on IIS on this GitHub page –> https://github.com/MSOpenTech/http2-katana

Virtual Machine backup in Azure using Veeam Endpoint

A while back I blogged about Veeam Endpoint https://msandbu.wordpress.com/2014/12/01/veeam-endpoint-backup-a-new-free-backup-solution-for-computers-and-physical-servers/ While it is aimed at physical computers/servers, it has another purpose that I just discovered.

In Azure, Microsoft currently has a preview feature called Azure VM Backup, which in essence is an image-based backup of virtual machines in Azure. Since this currently has a lot of limitations, I figured: what other options do we have?

While some people do Windows Server Backup directly to another Azure VM disk, I figured why not give Veeam a try with a data disk and use it in conjunction with Azure Files. The idea is that we can use Veeam Endpoint to back up to a data disk (which is attached to an individual VM) and then create a task to move the backup to an SMB file share; in case the virtual machine crashes or is unavailable, we still have the backup on the SMB file share, which makes it accessible to all other virtual machines within that storage account. NOTE: Doing Veeam backup directly to SMB file shares does not work. A sketch of such a copy task is shown below.
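A hedged sketch of the copy task (the paths are just examples; schedule it with Task Scheduler to run after the backup window):

# Mirror the Veeam backup folder on the data disk to the mapped SMB file share
robocopy F:\VeeamBackup \\storageaccount.file.core.windows.net\sampleshare\backup /MIR /R:2 /W:5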

So we create a virtual machine in Azure and then use the portal to attach an empty data disk to the virtual machine.

image

This new disk is going to be the repository for Veeam Endpoint within the VM.

Azure Files is a feature, currently in preview and available for each storage account, which exposes SMB file shares. In order to use it we must first create a file share using PowerShell:

$FSContext = New-AzureStorageContext -StorageAccountName storageaccount -StorageAccountKey storageaccountkey

$FS = New-AzureStorageShare sampleshare -Context $FSContext

New-AzureStorageDirectory -Share $FS -Path sampledir

After we have created the file share we need to map the network path inside the Azure virtual machine. First we should use cmdkey to store the username and password for the SMB file share, so that it can reconnect after a reboot:

cmdkey /add:storageaccountpost.file.core.windows.net /user:useraccount /pass:<storage Access Key>

And then use: net use z: \\storageaccount.file.core.windows.net\sampleshare

image

After the network drive is mapped, we can install Veeam Endpoint.

image

Now, Veeam Endpoint is a free backup solution; it can integrate with existing Veeam infrastructure, such as repositories, for a more centralized backup solution. It also has some limitations regarding application-aware processing, but it works well with traditional VMs in Azure.

After setup is complete we can configure our backup schedule.

image

image

image

Then I run the backup job and make sure that it runs correctly. Note that best practice is not to store any applications or data on the C: drive; I also got VSS error messages while backing up data on C:, so you should have another data disk where you store applications and files if necessary.

Now after the backup is complete we have our backup files on a data disk that is attached to a virtual machine. We have two options here in case we need to restore data on another virtual machine.

1: We can run the restore wizard on another virtual machine against the copied backup files on the SMB file share.

image

2: Detach and reattach the virtual disk to another virtual machine. This is cumbersome to do if we have multiple virtual hard drives.

image

Attaching a virtual disk is done on the fly, and when we run the restore wizard from Veeam, it will automatically detect the backup volume and give us the list of restore points available on the drive.

image

Note that the file recovery wizard does not give us an option to restore directly back to the same volume, so we can only copy data out from a backup file.

image

Well, there you have it: using Veeam Endpoint to protect a virtual machine in Azure using a data drive. After giving it a couple of test runs I can tell it is working as intended and gives a lot better functionality than the built-in Windows Server Backup. If you want, you can also set it up with Veeam FastSCP for Azure, allowing you to download files from Azure VMs to an on-premises setup.

Nvidia GRID 2.0 at VMworld 2015

Among all the new updates announced at VMworld, Nvidia made one of their own: they announced that the GRID 2.0 architecture is going to be released on September 15th. http://nvidianews.nvidia.com/news/nvidia-grid-2-0-launches-with-broad-industry-support

This is a huge improvement and opens up a lot of opportunities. The GRID 2.0 architecture is built upon the latest Maxwell GPU architecture and comes in two forms: the M60, which uses a traditional form factor, and the M6, which is aimed at blade servers.

For instance, this means that we can deploy a Dell M630 (which is a 13th-generation Dell blade server) combined with the Tesla M6 card. Also, with the support for Linux on both VMware Horizon and Citrix XenDesktop, this will hopefully enable more use of GPUs in Linux-based workloads.

grid 2.0 2x

Features:

  • Doubled user density: NVIDIA GRID 2.0 doubles user density over the previous version, introduced last year, allowing up to 128 users per server. This enables enterprises to scale more cost effectively, expanding service to more employees at a lower cost per user.
  • Doubled application performance: Using the latest version of NVIDIA’s award-winning Maxwell™ GPU architecture, NVIDIA GRID 2.0 delivers twice the application performance as before — exceeding the performance of many native clients.
  • Blade server support: Enterprises can now run GRID-enabled virtual desktops on blade servers — not simply rack servers — from leading blade server providers.
  • Linux support: No longer limited to the Windows operating system, NVIDIA GRID 2.0 now enables enterprises in industries that depend on Linux applications and workflows to take advantage of graphics-accelerated virtualization.
  • Now there is a little gotcha here… GRID 2.0 requires a software license from Nvidia. You can read more about it on Thomas Poppelgaard's blog here –> http://bit.ly/1NWbwWC

    Also important to remember that Citrix announced Framehawk support through NetScaler a few weeks back; combine this with vGPU and you get a really good desktop experience.