Monthly Archives: July 2013

Setup Netscaler for XenDesktop 7 and AppController 2.8

This is going to be a long one :)
I've always wanted to document this myself but never had the time, so I figured why not kill two birds with one stone and blog it as well, since many are probably wondering about the same thing.

This is a typical deployment for many, right? You have your internal XA/XD farm tied to a StoreFront web server, and for remote access you have NetScaler Gateway/Access Gateway.

Depending on the setup you might have a NetScaler in the DMZ behind a NAT firewall, directly connected to the internet from the DMZ, or in a double-hop network with multiple DMZ zones and firewalls.

So how do we tie them together?
First, I suggest you read my previous post regarding the XenDesktop 7 with StoreFront and AppController deployment:
https://msandbu.wordpress.com/2013/06/26/xendesktop-7-setup-and-appcontroller-setup/

Let's head over to our NetScaler deployment. We can start by checking our network configuration.

There are different types of IP addresses within the NetScaler: a VIP (Virtual IP), which is typically tied to a load-balanced service; a SNIP (Subnet IP), which is used to initiate connections to the back-end servers (XenDesktop controllers, StoreFront, etc.); and an NSIP (NetScaler IP), which is used for management.

So for a user the connection will look like this.

User –> VIP –> SNIP –> XenDesktop (Servers)

A typical deployment is to have a NetScaler with two interfaces, one into the DMZ and one into the back-end server network. (In my case I have all interfaces connected to the same subnet.)

image
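If you want to verify or add these addresses from the NetScaler CLI, a minimal sketch could look like this (the SNIP address and netmask are placeholders from my lab):

show ns ip
add ns ip 192.168.1.60 255.255.255.0 -type SNIP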

Next we can add authentication.
Go into Netscaler Gateway –> Policies –> Authentication –> LDAP –> Add

image

For the named expression I choose General and True and click Add.
(What does this do? It specifies that IF the traffic is going through the NS appliance, then this policy should be applied.)

Then give it a name, choose New Server, and enter the information for your AD server. After you have entered the info, press "Retrieve Attributes".
Remember that this retrieval uses the IP address of the machine you are running the browser on.
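For reference, roughly the same LDAP policy can be created from the NetScaler CLI; a minimal sketch, where the server IP, base DN, bind account and object names are placeholders from my lab:

add authentication ldapAction ldap_act -serverIP 192.168.1.10 -ldapBase "dc=msandbu,dc=local" -ldapBindDn "ldapbind@msandbu.local" -ldapBindDnPassword Password1 -ldapLoginName sAMAccountName
add authentication ldapPolicy ldap_pol ns_true ldap_act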

If you are having trouble with authentication, fire up a console to the NetScaler appliance, type shell, then cd /tmp, and run the command cat aaad.debug.
This will display real-time information about authentication attempts.
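A minimal sketch of that debugging session, run from the NetScaler console (or an SSH session):

shell
cd /tmp
cat aaad.debug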

After that is done, add a DNS server.

image

Now let's add a certificate. For this purpose I have an Enterprise Root CA on Windows Server 2012, which I used to create a web server certificate containing the host name of the Access Gateway (nsgw.msandbu.local in my case), and I chose to export it as a PFX file including the private key (you will need the private key!). In production you should use a third-party CA to issue the certificate to you.

You can upload the PFX file under Traffic Management –> SSL –> Manage Certificates.

image

After this is done, open the NetScaler console and extract the certificate and the key from the PFX.
This can be done by running openssl from the NetScaler shell:

openssl pkcs12 -in publicAndprivate.pfx -nocerts -out privateKey.pem (extracts the private key)
openssl pkcs12 -in publicAndprivate.pfx -clcerts -nokeys -out publicCert.pem (extracts the certificate)
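One optional extra step (my own assumption, not part of the original walkthrough): if the exported private key ends up passphrase-protected, you can strip the passphrase before installing it on the NetScaler, for example:

openssl rsa -in privateKey.pem -out privateKeyNoPass.pem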

After that is done you can install the certificate
image

Next we create a virtual server under NetScaler Gateway and associate it with an IP address.
Since we just want ICA proxy and no VPN (SmartAccess solution), we can choose Basic mode.
Under Protocol choose SSL. (After this is done the virtual server will show as down unless you have a valid certificate installed.)

image
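For reference, roughly the same can be done from the NetScaler CLI; a minimal sketch, where the certificate/key file names, the virtual server name and the VIP address are placeholders from my lab:

add ssl certKey nsgw-cert -cert publicCert.pem -key privateKey.pem
add vpn vserver nsgw-vserver SSL 192.168.1.100 443
bind ssl vserver nsgw-vserver -certkeyName nsgw-cert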

Go into the Authentication tab (check Enable Authentication), and under Primary Authentication Policies choose Insert Policy. (By default, the one we created earlier will appear.)

Now if you wish to have two-factor authentication you can add another Primary authentication policy.

image

After this is done, head over to Policies. We need to add a Session Policy, and here as well we use ns_true as the expression. Give it a name and press Create New Request Profile.

image

Here we enter the information about the back-end StoreFront servers. (NOTE: I already have one stored there because I created it earlier.)

Now there are a couple of options here we need to define.
First under Published Applications.
image
1: We have to enable ICA Proxy; this tunnels the ICA traffic over port 443 back to the user.
2: Web Interface Address: this has to be the StoreFront web address.
3: Single Sign-on Domain should be your local AD domain. (Don't enter anything here if you have multiple domains.)

Next, under Client Experience –>
enable Single Sign-on to Web Applications using the primary credentials; this allows the NetScaler Gateway to authenticate to the StoreFront site.

image

We have to define that the NS should use SSO to the StoreFront web address using the primary authentication mechanism, which is AD in my case.

Last but not least, Security, so we can allow users to actually get in (the Default Authorization Action setting).

image
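For reference, a rough CLI equivalent of this session profile and policy (a sketch under my lab's assumptions; the profile and policy names, the StoreFront URL and the domain are placeholders):

add vpn sessionAction storefront_prof -transparentInterception OFF -defaultAuthorizationAction ALLOW -SSO ON -icaProxy ON -wihome "https://storefront.msandbu.local/Citrix/StoreWeb" -ntDomain msandbu.local
add vpn sessionPolicy storefront_pol ns_true storefront_prof
bind vpn vserver nsgw-vserver -policy storefront_pol -priority 100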

You should also set the TCP profile for this virtual server to nstcp_default_xa_xd_profile. (This profile works best for internal usage and high-bandwidth networks.)

image

Then we also have to add the STA (the XenDesktop controllers in my case). Go back to Published Applications.

Click Add and enter the URL of the XD controller. After you save and refresh the page, it will show up like mine does here.

image
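The CLI equivalent would be something along these lines (the virtual server name and controller URL are placeholders from my lab):

bind vpn vserver nsgw-vserver -staServer "http://xd7controller.msandbu.local"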

Remember to save the config!
After that is done, we head over to StoreFront.

Now there are a couple of things we need to fix there. First we need to add an authentication method for users coming from NetScaler (pass-through from NetScaler Gateway).

image

This will allow StoreFront to authenticate users coming from NetScaler (i.e. to pass the credentials forward).

Next we have to go to Stores –> Enable Remote Access –> choose Add NetScaler Appliance –>

image

Here, enter the info regarding your NetScaler.
The SNIP here is the one you entered earlier on the NetScaler; StoreFront uses this to validate that incoming connections come from a trusted host.
The Callback URL points to the internal IP address of the NetScaler Gateway.

image

Then you set it up as "No VPN tunnel" and choose which Gateway appliance to use.
You also have to add the STAs here.

image

And last but not least, beacons.
Beacons are used to identify whether the end-user is coming from an internal or an external connection.
For instance, you can use a publicly accessible website as the external beacon and a website that is ONLY available internally as the internal beacon.

This is what decides whether the ICA file the end-user receives will route the connection via ICA proxy (through the Gateway) or as a plain ICA connection straight to the server.

image

In this case, since it's a demo environment, everything is on the same network, but I could remove the nsgw address as an external beacon and just have www.citrix.com and another external site.

Now, since AppController is already connected to the StoreFront service, we don't need to do anything else in order to view apps deployed from AppController.

NOTE: There are a couple of extra things to configure if you are going to deploy, for instance, Worx apps from AppController and use the micro-VPN (mVPN) solution for iOS and Android.

You will need to enable a couple of things here.
* Split-tunneling
* Clientless Access URL Encoding = Clear

image

You also need to enable Secure Browsing

After this is done, we can open up our virtual IP URL.
In my case it is https://nsgw.msandbu.local

I log in with my username and password and start a desktop connection. (For the purpose of this demonstration I have also added a web link from AppController that points to yammer.com.)

image

image

If you open Resource Monitor you can see that the traffic is tunneled over port 443.
image

And if we open Resource Monitor on the desktop I just launched, I can see that the server talks to the SNIP (which ends in 60.114 in my lab) over the session reliability port (2598).
image

Invalid user for SSO Citrix AppController

I got this weird error when connecting to AppController today with my admin user. It happened for both external and internal users.

The problem? I had done a restore of my admin user in AD, and in the process lost a couple of attributes on the user that AppController needs.

1: Make sure that your users have First name, Last name and E-mail defined in AD.
2: Refresh the AD connection from within AppController.

image

How easy is it deploying XenClient ? Quick walkthrough

Today's generation of IT users is embracing BYOD as much as possible, where one user has multiple devices to work with. This isn't so easy to achieve with a corporate laptop.

Imagine a new employee comes on board and is given a new corporate laptop. This new employee demands flexibility and the ability to personalize the laptop, which is not so easy when the laptop is installed with a corporate image and various security settings to maintain the integrity of the laptop.

So you get a not-so-happy user. The solution?
XenClient!

XenClient is a type 1 hypervisor (like XenServer and Hyper-V), but it is used on client computers, not on servers. The main idea is to have different virtual machines running on a laptop, for instance one personal VM which the user can use at home and one corporate VM which he or she uses at work.
This way the user gets the BYOD experience he/she wants.

Remember that XenClient has an HCL which you should check before you deploy this product:
http://citrix.com/ready/en/search?search%5Bq%5D=Xenclient

The architecture and deployment of this solution is pretty simple, and this is what I want to show.

The architecture consists of a management server and a Synchronizer server (which must be installed on top of a hypervisor: 2008 R2, or 2012 with the Project Thunder beta release).

The Synchronizer is the one responsible for deploying and maintaining the VDI images.
For the purpose of this demo I deployed Project Thunder (which supports Hyper-V Server 2012) and ran XenClient on top of VMware Player.

You can download this from MyCitrix (requires a login).
After installation, open the console and configure the network switch.

image

Check the other settings as well and make sure that the UNC path for VMs is correct.

image

Now in my case I'm going to create a VM running Server 2012 which I want to deploy to my users; it doesn't make a lot of sense, but that's the only ISO I had stored here :)

If you want to import ISO files into XenClient you have to place them in this folder
image

Then go into Software Library –> ISO Media and import the file.

image

Next we go into Virtual Machines, choose Create New, add the ISO file and enter the parameters.

image

Next I choose the usage mode.
Since this is going to be my personal VM and I don't need image updates, I'm going with Custom.
In many cases I would use PVD with the corporate image, since then I only have one image that I need to update.

image

Next I define which hardware this VM should have.

image

Next I define policies for this VM, for instance the backup schedule, whether to allow USB devices, etc.

image

Then XenClient will start the VM on Hyper-V

image

Then I can finish the customization and deploy it to my users.

image

Then it runs the deployment job

image

Then you can see that the VM is deployed

image

I now have a VM and a local user which I will use to connect to the Synchronizer server.
(Something I found a bit odd: I had a self-signed certificate on the server, but the client connected anyway.)

image

Then I have to enter information to connect to the Sync-server

image

Now, as an IT admin I could set this up for another user and download his VM, but in my case I have my own Windows Server VM I wish to use, so I enter my own user :)

image

Now XenClient will start to download the VM which is deployed to my user

image

I can now see in the XenClient management console that my computer is connected to the server.

image

Now, if someone steals the laptop I can choose to kill it in the XenClient console; the client refreshes and all I see is this screen. And remember that you should set up a policy that requires your clients to contact the sync server, for instance, at least once a week.

image

image

image

Now remember that XenClient is part of XenDesktop Enterprise or Platinum, so when you buy XenDesktop you get this as well.
Something I suggest Citrix does is to "integrate" XenDesktop with XenClient: for instance, if I have my VM running on XenDesktop and wish to sync it to my XenClient laptop as well for working on the road, this is something they should work on :)

Load balancing Application Catalog for Configuration Manager

A customer recently asked me: "Can I configure load balancing for my Application Catalog service in Configuration Manager? Since it runs on Silverlight I'm unsure how it will work…"

Sure you can!
The Application Catalog in Configuration Manager consists of two components: the Application Catalog web service point and the Application Catalog website point.

image

Now, when you install these you have the option to configure which ports they should run on. In my case I chose port 80 (since I want my load balancer to handle the SSL traffic).

First I make sure that the catalog is working.
Open a web browser to http://applicationcatalogserver/CMApplicationCatalog
From here I have to enter my username and password (since I'm using Chrome).

image

The Application Catalog server is the one hosting the Silverlight XAP module on the web server; the Silverlight module in turn contacts the web service point in order to generate the list of software the user has access to.

image

The Silverlight module is located in "ClientBin".
The Content folder contains images, CSS files and JS, and can be targeted for caching (if you have that option on your load balancer).

Now, in my case I have a NetScaler VPX that I'm going to use, so here is a quick run-through.

1: Add the servers (which have the Application Catalog roles installed).
image
2: Add the service you want to set up (and add a monitor, HTTP in this case).
image
3: Create a virtual server, choose SSL and add a certificate. (Note: if you choose SSL and don't add a certificate, the virtual server will stay down.)
image
4: Add persistence (in my case I chose client IP) and choose the LB method; a CLI sketch of these four steps follows after the screenshot below.
image
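For reference, a rough NetScaler CLI equivalent of those four steps might look like this (a sketch only; the server names, IP addresses and certificate name are placeholders from my lab, not values from the product documentation):

add server appcat01 192.168.1.21
add server appcat02 192.168.1.22
add service svc_appcat01 appcat01 HTTP 80
add service svc_appcat02 appcat02 HTTP 80
bind service svc_appcat01 -monitorName http
bind service svc_appcat02 -monitorName http
add lb vserver vs_appcatalog SSL 192.168.1.200 443 -persistenceType SOURCEIP -lbMethod LEASTCONNECTION
bind lb vserver vs_appcatalog svc_appcat01
bind lb vserver vs_appcatalog svc_appcat02
bind ssl vserver vs_appcatalog -certkeyName appcat-cert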

After this is done, check that the virtual server is up and open the same URL over https.

image

And it worked.
One last thing is to change the default URL in the client agent settings.
Here you have to specify a URL and enter the whole path for the Application Catalog.

image

After that is done, update the policy on a client and check for yourself.
You can open Software Center to see that the policy is active.
NOTE: It is important that the value for the URL is
https://servername:port/CMApplicationCatalog/ or else the URL won't redirect.

Or you can do a redirect at the load balancer :)

XenDesktop 7 beta exams

A couple of weeks ago Citrix released beta versions of their upcoming XenDesktop 7 exams (which are, at the moment, free for the taking for a limited time on pearson-vue.com).
There are three exams:

http://training.citrix.com/cms/index.php/certification-links/new-certs/#beta-exams

1Y1-200  Managing Citrix XenDesktop 7 Solutions Beta Exam
which is equivalent to the CCA on XA/XD

1Y1-300 Deploying Citrix XenDesktop 7 Solutions Beta Exam
which is equivalent to the CCEE on XA

1Y1-400 Designing Citrix XenDesktop 7 Solutions Beta Exam
which is equivalent to the CCIA

And for each exam, Citrix has designed a prep guide:

http://training.citrix.com/mod/ctxmodviewer/view.php?mod=resource&id=1140 (1Y1-400)
http://training.citrix.com/mod/ctxmodviewer/view.php?mod=resource&id=1138 (1Y1-300)
http://training.citrix.com/mod/ctxmodviewer/view.php?mod=resource&id=1139 (1Y1-200)

I have been fortunate enough to try all three exams so far, and the study guides give you good pointers on where to start looking.
Citrix has already documented a lot regarding XenDesktop 7 on the eDocs site:
http://support.citrix.com/proddocs/topic/xendesktop/cds-xd-7landing-page.html

The exams might also include questions about NetScaler, PVS 7, StoreFront 2 and XenServer 6.2, so there is plenty of stuff to read up on :)

Some of the other components are documented here on eDocs (StoreFront, PVS, etc.):
http://support.citrix.com/proddocs/topic/technologies/lic-library-node-wrapper.html

Good luck to those who give it a go!

System Center Management Pack for VMM Fabric Dashboard 2012 R2 released

Today Microsoft released a management pack (or rather, a dashboard view) for the VMM 2012 R2 fabric, which can be found here –> http://www.microsoft.com/en-us/download/details.aspx?id=39635

If you want to install this, there are a couple of prerequisites that you need to take care of (which are not stated on the download page).
First you need to install some additional management packs in SCOM:

  • Windows Server Internet Information Services 2003
  • Management packs that are required by the management pack for Windows Server 2008 Internet Information Services 7:
    • Windows Server 2008 Operating System (Discovery)
    • Windows Server Operating System Library
  • Windows Server 2008 Internet Information Services 7
  • Windows Server Internet Information Services Library
  • SQL Server Core Library

Then you have to connect VMM 2012 R2 with Operations Manager 2012 R2, which can be done under Settings –> System Center Settings –>

image

After that is done, SCVMM will import some additional management packs into SCOM (For VMM monitoring) and then you can import the downloaded dashboard.

You will find the dashboard under Monitoring –> System Center Virtual Machine Manager –> Cloud Health Dashboard

image

And from here I can, for instance, view the Fabric Health Dashboard, which gives me more detailed info on my virtual infrastructure.

image

The Network Node part monitors the physical network (so you would have to enable network monitoring in SCOM in order to get that information).
The purpose of this dashboard is simply to give you a quick answer to the question: what is the health of my cloud?

Managing Ubuntu Clients with Configuration Manager

Microsoft recently released a preview of System Center 2012 R2 and with it, they released a new version of the additional clients for Configuration Manager.
You can download them from here –> http://www.microsoft.com/en-us/download/details.aspx?id=39360

The pack includes clients for:

  • AIX Version 7.1, 6.1, 5.3
  • Solaris Version 11 (SPARC and x86) , 10 (SPARC and x86), 9 (SPARC)
  • HP-UX Version 11iv2 (PA-RISC and IA64) , 11iv3 (PA-RISC and IA64)
  • RHEL Version 6 (x64 and x86) , 5 (x64 and x86), 4 (x64 and x86)
  • SLES Version 11 (x64 and x86), 10 (x64 and x86), 9 (x86)
  • CentOS Version 6 (x64 and x86), 5 (x64 and x86)
  • Debian Version 6 (x64 and x86), 5 (x64 and x86)
  • Ubuntu Version 12.04 LTS (x64 and x86), 10.04 LTS (x64 and x86)
  • Oracle Linux 6 (x64 and x86), 5 (x64 and x86)
  • Mac OS X 10.6 (Snow Leopard)
  • Mac OS X 10.7 (Lion)
  • Mac OS X 10.8 (Mountain Lion)

For my part I see more and more people using Macs in the enterprise, but at my former job we had a lot of RHEL and Ubuntu users as well, so I wanted to show how we can manage these types of clients in the enterprise.

Now, in order to set up a client we need to download two files to the Ubuntu computer:
the ccm-Universal package and the install file.

After the files are downloaded, open a terminal and run the following command from the download folder.

NOTE: Be sure that the Linux client can resolve the ConfigMgr servers with nslookup.
You might need to alter the resolv.conf file to point to another DNS server.
You might also need to define a domain name in order to use the FQDN
(domainname AD.fqdn from the terminal).
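A quick sketch of what that check could look like (the DNS server IP and the domain are placeholders from my lab, not values from the official documentation):

# point the client at a DNS server that knows about the site server
echo "nameserver 192.168.1.10" | sudo tee /etc/resolv.conf
# add the AD DNS suffix so the site server's FQDN resolves
echo "search msandbu.local" | sudo tee -a /etc/resolv.conf
# verify that the site server now resolves
nslookup configmgr.msandbu.local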

./install -mp <computer> -sitecode <sitecode> <property #1> <property #2> <client installation package>

NOTE: You have to make the install file executable first by running chmod +x install from the terminal.

So in my case: ./install -mp configmgr.msandbu.local -sitecode TST ccm-Universal-x86.tar

image

image

After this is done you can review the log file at /var/opt/microsoft/scxcm.log.
NOTE: If you run the installation again, you will be asked whether you wish to overwrite the existing installation (useful if you entered the wrong info during setup). If you wish to uninstall it completely, you can run the command /opt/microsoft/configmgr/bin/uninstall
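For example (the paths are the ones mentioned above; sudo is my assumption depending on how you are logged in):

# follow the client log in real time
sudo tail -f /var/opt/microsoft/scxcm.log
# remove the client completely
sudo /opt/microsoft/configmgr/bin/uninstall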

Note: from CU1 the Linux clients also support a fallback status point (FSP), which you can specify during installation with -fsp fsppoint.fqdn

Azure and IOPS performance

Many are thinking about moving to the cloud, or at least looking at the possibilities, and some might wonder: can my workload function in the cloud? Or perhaps you are thinking of deploying Citrix in the cloud?
For many, failed Citrix implementations are tied to the SAN, or rather to the IOPS the SAN can deliver.

So I took Azure for a test-drive to see how well the storage there handles IO.
For those who are unfamiliar with IaaS on Azure, the VMs have 2 disks by default.
1: The OS disk, which is the familiar system disk (this is labeled C:).
2: The temporary disk (the swap/page file is placed here; this disk is not persistent).
3: Data disks (which are not included when you create a VM; you have to add them afterwards), and depending on which VM instance size you create there is a limit on how many data disks you can attach.
So far I have tested a small VM in Azure and wanted to determine whether this VM is slower than an on-premise VM.
In theory the VM should be limited to a maximum of 500 IOPS per data disk:
http://msdn.microsoft.com/en-us/library/windowsazure/dn197896.aspx

And remember, if you have issues with limited capacity, check which VM instance size you are running; each size has a network bandwidth limit.

All of the disks in Azure use a 512-byte sector size by default. For comparison, I ran the same IOmeter tests on my personal laptop, which has a top-notch Samsung SSD.

First I did a 100% read IO test using a 512-byte transfer size in IOmeter.
The system disk on the VM in Azure actually performed quite well, which might have something to do with the cache on the system disk.
image

But again, it is nothing compared to the SSD drive in my computer.
image

Now I try the temp disk.
image

I also attached an empty data disk with no cache and ran the same 512-byte 100% read IO test.
image

Ouch, this is similar to what a 10K SAS disk delivers in terms of performance. Let's try another data disk, enable read and write cache, and run the same read test.

image

Clearly the cache has its perks; read/write cache is enabled by default for the operating system disk.

Using the read cache will improve sequential IO read performance when the reads ‘hit’ the cache. Sequential reads will hit the cache if:

  1. The data has been read before.
  2. The cache is large enough to hold all of the data.

Now, it looks like I did not push the limit enough; it seems the cache is about 2 GB in size, since after running HD Tune I saw a decline in performance after 2 GB.
This is the regular data disk without cache:

image

This is the data disk with cache (the horizontal axis is in GB):

image

I also did some IOPS testing from HD Tune as well. First, the data drive without cache:

image

Random IO here is equal to what we got in IOmeter; let's try the cached data disk.

image

Now, this is my laptop's SSD drive (pictured below), and as you can see the cached data drive has better performance since it is using RAM as cache.

image

Now let's run some file benchmark testing with 500 MB files.
The first one is my SSD drive:

image

This is the data drive with cache.

image

The regular data drive.

image

Now for a twist: I removed the data disk without cache and added another data disk with full cache. I then took the two data disks with cache, converted them to a RAID 0 volume, and ran the same tests again.

image

Somewhat different results: when I get to 1 MB transfer sizes and random access I get a huge bandwidth increase, but at the smaller block sizes I see no performance increase.

And in regular file transfer benchmarking I actually see a decline in performance.

image

Stay tuned for more! Going forward I'm going to set up a XenDesktop architecture on it and see how it behaves in real life.

Monitoring Exchange 2013 with Operations Manager 2012

Microsoft just released a management pack for Exchange 2013, and it couldn't arrive fast enough :) The installation is as simple as pie: download it and import it via Management Packs.

You can download the management pack from here –>

This management pack requires the same prerequisites as the previous ones:

  • You have one of the following versions of System Center Operations Manager deployed in your organization:
    • System Center Operations Manager 2012 RTM or later
    • System Center Operations Manager 2007 R2 or later
  • You have already deployed SCOM agents to your Exchange Servers.
  • The SCOM agents on your Exchange Servers are running under the local system account.
  • The SCOM agents on your Exchange Servers are configured to act as a proxy and discover managed objects on other computers.
  • Your user account is a member of the Operations Manager Administrators role.

Now, in a lab environment I always enable the proxy setting for all agents. (Not recommended for production because of the security risks.)

But if you want to enable this for all agents, open the SCOM PowerShell and run the command:

Get-SCOMAgent | where {$_.ProxyingEnabled.Value -eq $False} | Enable-SCOMAgentProxy

This will enable agent proxy for all agents that do not already have it enabled :)

The management pack gives a clean overview of the infrastructure. (In my lab environment I only have one server, which holds both the CAS and Mailbox roles.)

image

Now if I enter health explorer I see all the different objects that are being monitored.

image

And the list goes on :)
So far my impression is that the Microsoft Exchange team has been good at providing knowledge sources for each monitor.

The Server Health view shows all the different components and which servers they are installed on.
image

You can see more documentation regarding the management pack here –> http://technet.microsoft.com/en-us/library/dn195908(v=exchg.150).aspx

Azure and price comparison with On-premise

Several customers have come to me in the last couple of months asking: "How can Azure be more affordable than an on-premise solution? I mean, a virtual machine in Azure costs more than one I can run in our own datacenter." My answer back to the customer has always been: "Have you thought about the SAN? The power usage? The internet connection? Hardware failure? Licensing? Rental of datacenter space, and so on?"

I also see a lot of forum posts regarding the same thing, so I thought I would write a post on how to do a price comparison between an on-premise solution and running IaaS on Azure.

Now, in my research I had to set some prerequisites:
* A new company that needs to set up a datacenter starts by renting rack space at a colocation center.
* The pricing is based upon prices from some Norwegian companies.
* The new company needs to set up a new IaaS solution based on Hyper-V and failover clustering.
* The company bases its hardware on Dell (virtualization hosts, networking and storage) with the regular 3-year support, so in the extreme case they would need to replace their hardware every 3 years.
* The company will also need good internet access to this private cloud for the end-users running applications against it.
* The operating system mostly used will be Windows Server 2012 (therefore I'm going to base the licensing on Windows Server 2012 Datacenter).
* One person will have to be in charge of the hardware part-time, or this can be outsourced to the colocation company.
* The datacenter needs to have good physical security measures in place.

So let us start with Azure. The pricing here is based on the Azure pricing calculator, and since this is a company that knows how many VMs it needs, we will set up a pre-paid 12-month plan.
Let's start with something small: our company has to host some web applications running on 20 different servers, and these will run on medium VMs in Azure (a medium VM consists of 2 shared cores and 3.5 GB of RAM, so a total of 40 shared cores and 70 GB of RAM).

You can read more about the different options here –> http://msdn.microsoft.com/en-us/library/windowsazure/dn197896.aspx (This comes to $2,678.40 a month, and a couple of factors are included in that number.)
That makes roughly $32,140 a year for 20 virtual machines running non-stop in Azure.

UPDATE 01/04/2014: Since Microsoft has reduced VM pricing since this article was written, the price has now been lowered from $32,140 to $20,184 a year for 20 VMs running non-stop.

* All the hardware is managed by Microsoft (this means UPS, power, networking, storage, etc.)
* Physical security is controlled by Microsoft
* Internet access is included
* The Windows Server 2012 license and CALs are included as part of the pay-per-hour fee
* High availability (the data is replicated three times inside the same datacenter, and Azure keeps the VMs available)

So how much would this cost on-premise for a company ?

* Renting rack space: in my case I found a colocation company that offers this, https://www.mywebhost.no/produkter/colocation/
So let's say I wanted to rent an entire rack; that would cost me around $1,137 a month, which gives me UPS, physical security, and dedicated internet access to the rack, but not power. (So rack rental is $1,137 a month.)

Hardware: I would need at least 2 physical servers set up in a failover cluster. The cluster would use an iSCSI-based SAN solution. A Dell R720 server (with 40 GB of RAM and 2x Intel Xeon with 8 cores each) costs about $6,000 (which includes 3 years of support), so for two servers that's $12,000. As for the SAN, I cannot get prices from Dell since I would need to be a Dell partner, so I can only estimate around $4,000 there. Since iSCSI runs over regular Ethernet I need a managed switch where I can configure VLANs; I found a managed gigabit switch from Dell for around $1,500. So in total the hardware (not including cables, etc.) comes to around $12,000 + $4,000 + $1,500 = $17,500. (NOTE: this cost can be divided by 3, since the support lasts for 3 years and there will be no more investments in hardware in that timeframe.)

And for power, I have found that the regular price per kWh is around $0.05 here in Norway (in June). The Dell R710 uses about 258 W under heavy load and the switch uses about 30 W under load, so two servers plus the switch come to roughly 546 W. If this infrastructure runs 24/7 that equals about 13 kWh a day, or roughly 4,780 kWh over a full year (365 days), which at $0.05/kWh comes to around $237 a year for power usage (under full load, of course). Source: http://essa.us.dell.com/DellStarOnline/DCCP.aspx

* Software costs for licensing. In this case, since we have 20 virtual machines running in a cluster, we could either use 10 Standard licenses or two Datacenter licenses. I am basing the prices on OPEN licensing:
https://mspartner.microsoft.com/en/us/Pages/Licensing/Downloads/open-license-no-level-estimated-retail-price-list.aspx
A Datacenter 2-processor license costs $4,810.00 without SA, and we would need 2 licenses (one for each host), so that totals $9,620. (When a new release comes out I would need to buy the new license, or I could buy a license with SA and then get the new release included.)

UPDATE 01/04/14: Since Microsoft has raised the price for Windows Server 2012 R2, the Datacenter license goes up from $4,810 to $6,156 without SA. User CALs are the same, so they do not require an update. Total in licenses for three years: $15,712.

And the software the business is running requires users to authenticate to AD (which requires CALs). I'm going with User CALs (they cost around $34 each), so for 100 users that comes to $3,400 as well.

So licenses in total = $13,020 (at the original prices).

Now, one part is missing: we need someone to manage this infrastructure (hardware, hypervisor level and the failover cluster). Since this is just a small installation I'm guessing we need a regular employee spending 10% of his full-time job on it. I'm using a regular yearly salary from the Norwegian market:
http://www.studenttorget.no/index.php?artikkelid=2300
An IT consultant earns an average of $71,178 a year, so 10% of that equals about $7,117 a year.

So, the totals over 3 years for the on-premise solution:
* Renting rack space (external network connection, physical location, fire protection, etc.)
$1,137 a month ($13,644 for one year, $40,932 for three years)

* Power usage
$237 a year ($711 for three years)

* Hardware
$17,500 for three years

* Licenses
$13,020 for three years ($15,712 with the updated 2012 R2 prices above)

* Man hours
$7,117 a year ($21,351 for three years)

Total: $96,206 for three years for the on-premise solution (using the updated license prices).
For Azure the total for three years is $60,552 (update 01/04/14, based on the reduced VM prices).
That makes a difference of $35,654.

Another factor to consider is that if you are an academic or educational customer you get the license cost reduced by about 90%, but Azure would still be the cheaper option.

Now some factors I did not consider.

* Azure replicates data three times inside the same datacenter to ensure high availability; this is not included in the on-premise solution I priced (and would make the on-premise solution a lot more expensive, for instance by having a cold standby server with replicated VMs).

* Azure includes VPN functionality which I can set up either site-to-site or point-to-site; on-premise this would require me to buy a hardware-based VPN solution or use a Windows server as a VPN server, plus a public IP address and firewall configuration.

* The pricing used for the SAN is not really accurate (I would very much like some input here! :) )

* OS licensing: the calculations are based on OPEN pricing, and there are discounts and rebate offerings I'm not aware of. For instance, SPLA and EDU have bigger discount programs and therefore lower licensing costs (EDU can subtract around 70% of the license cost).

* Azure gives better IOPS per virtual machine than the on-premise solution, depending on the SAN we choose (and therefore a better end-user experience).

* Azure also offers load balancing capabilities.

* The on-premise solution requires additional man-power to get started (setting up and deploying servers, installing the hypervisor, patching, etc.), i.e. a start-up cost.

* Scaling up on demand is just the click of a button in Azure, and if you no longer need 20 virtual machines running you can just stop the machines and you will no longer be charged for them.

* In your on-premise datacenter you might still have enough capacity to run more than 20 machines (and you have already covered the cost of that capacity), but in Azure you will need to pay for each extra machine.

* Both options would need someone to manage AD, IIS and backend solutions.

So even though there is about a $20,000 difference in the case I just described, Azure will ultimately give you an easier and cheaper deployment. Azure also has advanced capabilities, like replication, HA, LB and VPN, which always cost extra to implement on-premise.

But I would really like your feedback on this article; is there anything I've missed?

 

UPDATE: I also did a comparison between Azure and Amazon EC2 instances to see if there was a major difference between the two, starting with Windows virtual machines.
Amazon EC2 instance m3.medium: 1 virtual core at 3 GHz, 3.75 GB RAM, 1x 4 GB SSD, running 20 instances full-time.

Azure medium virtual machine: 2 cores at 1.6 GHz, 3.5 GB of RAM, running 20 instances full-time.

The calculation looks like this for Windows virtual machines (both include 100 GB of bandwidth):
Azure: $20,256
Amazon: $25,836

The calculation for Linux virtual machines:
Azure: $13,488
Amazon: $15,012

NOTE: that in Azure I choose a 12 month pre-paid plan and therefore got a good rebate. This was not an option that I found in the Amazon Price calculator.