Monthly Archives: December 2014

Study guide 70-695

Well, almost the last post; one more to go. And this one might be interesting for many, since it is not that well known yet. A couple of months ago Microsoft launched the beta of the MCSE: Enterprise Devices and Apps certification.
This new certification title is aimed at Configuration Manager specialists with knowledge of MAP, USMT, MDT, Intune, RemoteApp and VDI. It consists of two exams:
Exam 70-695 Deploying Windows Devices and Enterprise Apps
Exam 70-696 Managing Enterprise Devices and Apps

This is the study guide for exam 70-695, which covers Configuration Manager, MDT, MAP, USMT, ACT and also a lot around image building and so on. The one for 70-696 will come early in 2015.
Assess the computing environment
Configure and implement the Microsoft Assessment and Planning (MAP) Toolkit, assess Configuration Manager reports, integrate MAP with Microsoft System Center 2012 Configuration Manager, determine network load capacity
MAP and System Center Configuration Manager
Implementing MAP

Plan and implement user state migration
Design considerations, including determining which user data and settings to preserve, hard-link versus remote storage, mitigation plan for non-migrated applications, and wipe-and-load migration versus side-by-side migration; estimate migration store size; secure migrated data; create a User State Migration Tool (USMT) package
USMT howto
Windows Upgrade with USMT
USMT scenarios
Create USMT package
Estimate migration store size
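Estimating the store size in practice starts with ScanState's /p switch, plus some back-of-the-envelope arithmetic. As a rough sketch (the ratios below are illustrative assumptions, not USMT output), a hard-link store on the same volume costs almost nothing, while a compressed remote store is roughly the user data times a compression ratio:

```python
# Back-of-the-envelope USMT store sizing. The ratios here are
# illustrative assumptions; the authoritative estimate comes from
# running "scanstate /p:<file>" against the actual machine.

def estimate_store_gb(user_data_gb, scenario, compression_ratio=0.75):
    if scenario == "hardlink":
        # A hard-link store keeps data in place on the same volume;
        # only catalog/link overhead is stored (assumed ~1% here).
        return round(user_data_gb * 0.01, 2)
    # Remote/compressed store: roughly data size times compression ratio.
    return round(user_data_gb * compression_ratio, 2)

print(estimate_store_gb(40, "remote"))    # 30.0 GB needed on the migration share
print(estimate_store_gb(40, "hardlink"))  # 0.4 GB of overhead on the local volume
```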

Configure the deployment infrastructure
Configure Windows Deployment Services (WDS), install and configure Microsoft Deployment Toolkit (MDT), identify network services that support deployments, select Configuration Manager distribution points, support BitLocker
Configure WDS
Configure MDT
PXE and distribution points
BitLocker and deployment

Configure and manage activation
Configure KMS, MAK, and Active Directory–based activation; identify the appropriate activation tool
What are KMS and MAK?
Configure MAK
Configure KMS
Active Directory based Activation
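For reference, the client-side tooling for both activation models is slmgr.vbs. A minimal command sketch (the KMS host name and the product key are placeholders):

```
:: MAK: install the key on the machine and activate against Microsoft
slmgr /ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
slmgr /ato

:: KMS client: point the client at a KMS host (placeholder name) and activate
slmgr /skms kms01.contoso.com:1688
slmgr /ato

:: Check the current activation status
slmgr /dlv
```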

Implement a Lite Touch deployment (18%)
Install and configure WDS
Configure unicast/multicast, add images to WDS, configure scheduling, restrict who can receive images
Multicast WDS
Adding images

Configure MDT
Configure deployment shares, manage the driver pool, configure task sequences, configure customsettings.ini
Creating a deployment share
Manage drivers
Configure task sequence MDT
Configure customsettings.ini
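To make the customsettings.ini item concrete, here is a minimal sketch of a rules file (the server name and the selection of properties are examples; your deployment share will differ):

```ini
[Settings]
Priority=Default

[Default]
OSInstall=Y
DeploymentType=NEWCOMPUTER
SkipCapture=YES
SkipAdminPassword=YES
SkipBDDWelcome=YES
TimeZoneName=W. Europe Standard Time
; Example only: point logs at a share on a hypothetical server
SLShare=\\MDT01\Logs$
```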

Create and manage answer files
Identify the appropriate location for answer files, identify the required number of answer files, identify the appropriate setup phase for answer files, configure answer file settings, create autounattend.xml answer files
Create answer files
Create autounattend

Implement a Zero Touch deployment (20%)
Configure Configuration Manager for OSD
Configure deployment packages and applications, configure task sequences, manage the driver pool, manage boot and deployment images
Managing drivers in Configuration Manager
Managing Boot images
Deployment images
Deploy task sequences
Configure Application for OSD

Configure distribution points
Configure unicast/multicast, configure PXE, configure deployments to distribution points and distribution point groups
Configure distribution point for PXE
Multicast Configuration Manager
Distribution Point group

Configure MDT and Configuration Manager integration
Use MDT-specific task sequences; create MDT boot images; create custom task sequences, using MDT components
Integrate MDT with Configuration Manager
Create Custom MDT boot images
Create a custom task sequence

Create and maintain desktop images (21%)
Plan images
Design considerations, including thin, thick, and hybrid images, WDS image types, image format (VHD or WIM), number of images based on operating system or hardware platform, drivers, and operating features
Thin or thick
WDS image types
Number of images

Capture images
Prepare the operating system for capture, create capture images using WDS, capture an image to an existing or new WIM file, capture an operating system image using Configuration Manager
Capture using MDT
Creating Capture image WDS
Capture image Configuration Manager

Maintain images
Update images using DISM; apply updates, drivers, settings, and files to online and offline images; apply service packs to images; manage embedded applications
Update images using DISM
Modify boot image with drivers
Offline servicing images
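The offline-servicing workflow in the links above boils down to mount, service, commit. A command sketch with DISM (all paths are placeholders):

```
dism /Mount-Image /ImageFile:C:\images\install.wim /Index:1 /MountDir:C:\mount
dism /Image:C:\mount /Add-Driver /Driver:C:\drivers /Recurse
dism /Image:C:\mount /Add-Package /PackagePath:C:\updates
dism /Unmount-Image /MountDir:C:\mount /Commit
```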

Prepare and deploy the application environment (20%)
Plan for and implement application compatibility and remediation
Planning considerations, including RDS, VDI, Client Hyper-V, and 32 bit versus 64 bit; plan for application version co-existence; use the Application Compatibility Toolkit (ACT); deploy compatibility fixes

RDS, VDI or Hyper-V
Application Compatibility Toolkit

Deploy Office 2013 by using MSI
Customize deployment, manage Office 2013 activation, manage Office 2013 settings, integrate Lite Touch deployment, re-arm Office 2013, provide slipstream updates
Customize and manage Office 2013 deployment
Activating Office 2013
Rearm Office 2013
Slipstream Office 2013 updates

Deploy Office 2013 by using click-to-run (C2R)
Configure licensing, customize deployment, configure updates, monitor usage by using the Telemetry Dashboard

Licensing C2R
Telemetry dashboard

Study guide 70-534

This might just be my last post of 2014. I have already gotten several requests to write this study guide, so I wanted to share it before the end of the year. Exam 70-534 is another Azure exam, focused more on the architecting side of Azure. The exam is currently in beta but is scheduled to become available in Q1 2015.
So as usual I'm going to list the curriculum in the post and add URLs to the different subjects beneath. More information about the exam can be found here –
Design Microsoft Azure infrastructure and networking (15–20%)
Describe how Azure uses Global Foundation Services (GFS) datacenters
Understand Azure datacenter architecture, regional availability, and high availability

Azure Data center
Links to regional availability and high availability cloud services
Azure regions

Design Azure virtual networks, networking services, DNS, DHCP, and IP addressing configuration
Extend on-premises Active Directory, deploy Active Directory, define static IP reservations, understand ACLs and Network Security Groups

Guidelines for deploying Active Directory in Azure
Azure DNS
Extending Active Directory to Azure
Defining static IP reservations Azure
Azure ACLs
Network security group
Design Azure Compute
Design Azure virtual machines (VMs) and VM architecture for IaaS and PaaS; understand availability sets, fault domains, and update domains in Azure; differentiate between machine classifications

Azure availability sets
Machine Classifications
Azure VM config
Comparison between PaaS and IaaS
Describe Azure virtual private network (VPN) and ExpressRoute architecture and design
Describe Azure point-to-site (P2S) and site-to-site (S2S) VPN, understand the architectural differences between Azure VPN and ExpressRoute*

Point to site VPN azure
Site to site VPN azure
Secure Cross-premises Connectivity
Describe Azure services
Understand, at a high level, Azure load balancing options, including Traffic Manager, Azure Media Services, CDN, Azure Active Directory (Azure AD), Azure Cache, Multi-Factor Authentication, and Service Bus

Azure load balancing
Azure CDN
What is Azure AD
Azure Cache
Azure Multi-Factor authentication
Azure Service Bus

Secure resources (15–20%)
Secure resources by using managed identities
Describe the differences between Active Directory on-premises and Azure AD, programmatically access Azure AD using Graph API, secure access to resources from Azure AD applications using OAuth and OpenID Connect

Difference between Azure AD and on-premises AD
Use Graph API to query Azure
Secure Access to Resources with Azure AD

Secure resources by using hybrid identities
Use SAML claims to authenticate to on-premises resources, describe DirSync synchronization, implement federated identities using Azure Access Control service (ACS) and Active Directory Federation Services (ADFS)

Azure SAML claims

Secure resources by using identity providers
Provide access to resources using identity providers, such as Microsoft account, Facebook, Google, and Yahoo

Secure resources with Google, Facebook
Access Control Service
Security guidance
Identify an appropriate data security solution
Use the appropriate Access Control List (ACL), identify security requirements for data in transit and data at rest
Design a role-based access control strategy
Secure resource scopes, such as the ability to create VMs and websites

Role based access control
Secure resource scopes

Design an application storage and data access strategy (15–20%)

Design data storage
Design storage options for data, including Table Storage, SQL Database, DocumentDB, Blob Storage, MongoDB, and MySQL; design security options for SQL Database or Azure Storage; identify the appropriate VM type and size for a solution

Azure Storage Scalability
Security Options for Azure Storage
VM types and sizes
Azure table Storage
Azure SQL storage

Design applications that use Mobile Services
Create Azure Mobile Services, consume Mobile Services from cross-platform clients, integrate offline sync capabilities into an application, extend Mobile Services using custom code, implement Mobile Services using Microsoft .NET or Node.js, secure Mobile Services using Azure AD

Using Offline Sync
Create Mobile Services
Registering with Azure AD and Mobile Services
Design applications that use notifications
Implement push notification services in Mobile Services, send push notifications to all subscribers, specific subscribers, or a segment of subscribers

Get Started with Push notification

Design applications that use a web API
Implement a custom web API, scale using Azure Websites, offload long-running applications using WebJobs, secure a web API using Azure AD

Secure a Web Api using Azure AD
Use WebJobs
Implement custom web api
How to scale azure websites

Design a data access strategy for hybrid applications
Connect to on-premises data from Azure applications using Service Bus Relay, BizTalk Hybrid Connections, or the VPN capability of Websites, identify constraints for connectivity with VPN, identify options for joining VMs to domains or cloud services

Service bus relay
BizTalk Hybrid Connection
VPN websites
VPN Azure AD
Design a media solution
Describe Media Services, understand key components of Media Services, including streaming capabilities, video on-demand capabilities, and monitoring services

Azure Media Services
Live streaming
Monitoring Media Service
Video on demand

Design an advanced application (15–20%)
Create compute-intensive applications
Design high-performance computing (HPC) and other compute-intensive applications using Azure Services

Design HPC in Azure
HPC Batch in Azure
HPC capabilities in Azure

Create long-running applications
Implement worker roles for scalable processing, design stateless components to accommodate scale
Scalable Processing

Worker roles in Azure

Select the appropriate storage option
Use a queue-centric pattern for development, select the appropriate storage for performance, identify storage options for cloud services and hybrid scenarios with compute on-premises and storage on Azure, differentiate between cloud services and VMs interacting with storage service and SQL Database

Storage Best practice
Storage Options*
Azure Storage premium
SQL Hybrid
Designing Hybrid Scenarios Azure

Integrate Azure services in a solution
Identify the appropriate use of machine learning, big data, Media Services, and search services

Machine learning document
Big data
Search Services Azure
Media Services Azure

Design websites (15–20%)
Design websites for scalability and performance
Globally scale websites, create websites using Visual Studio, debug websites, understand supported languages, differentiate between websites to VMs and cloud services

Globally scale websites
Azure Websites and Visual Studio
Azure websites, Cloud Services and VMs
Troubleshooting Azure Websites
Remote Debugging Azure Websites with Visual Studio
Deploy websites
Implement Azure Site Extensions, create packages, hosting plans, deployment slots, resource groups, publishing options, Web Deploy, and FTP locations and settings

Site Extensions
Create Packages
Web Hosting plans Azure
Deploying web sites
FTP websites
Staged deployment websites
Design websites for business continuity
Scale up and scale out using Azure Websites and SQL Database, configure data replication patterns, update websites with minimal downtime, back up and restore data, design for disaster recovery, deploy websites to multiple regions for high availability, design the data tier

SQL replication Azure
Azure Websites backup
Global web presence Azure
Design for disaster recovery Azure

Design a management, monitoring, and business continuity strategy (15–20%)
Evaluate hybrid and Azure-hosted architectures for Microsoft System Center deployment
Understand, at an architectural level, which components are supported in Azure; describe design considerations for managing Azure resources with System Center; understand which scenarios would dictate a hybrid scenario

Managing Hybrid Clouds with System Center
Supported workloads Azure
Azure Hybrid scenarios
Design a monitoring strategy
Identify the Microsoft products and services for monitoring Azure solutions; understand the capabilities of System Center for monitoring an Azure solution; understand built-in Azure capabilities; identify third-party monitoring tools, including open source; describe use cases for Operations Manager, Global Service Monitor, and Application Insights; describe the use cases for Windows Software Update Services (WSUS), Configuration Manager, and custom solutions; describe the Azure architecture constructs, such as availability groups and update domains, and how they impact a patching strategy

Global Service Monitor
Monitoring Azure with System Center
Application Insights
Open Source monitoring
Azure monitoring

Describe Azure business continuity/disaster recovery (BC/DR) capabilities
Understand the architectural capabilities of BC/DR, describe Hyper-V Replica and Azure Site Recovery (ASR), describe use cases for Hyper-V Replica and ASR

Site Recovery
Hyper-V replica to Azure
Getting started with Azure Site Recovery
Use cases Azure Site recovery

Design a disaster recovery strategy
Design and deploy Azure Backup and other Microsoft backup solutions for Azure, understand use cases when StorSimple and System Center Data Protection Manager would be appropriate

Configuring Azure Backup
Azure and Data Protection Manager

Design Azure Automation and PowerShell workflows
Create a PowerShell script specific to Azure
Describe the use cases for Azure Automation configuration
Understand when to use Azure Automation, Chef, Puppet, PowerShell, or Desired State Configuration (DSC)
Azure Automation
Azure runbooks
Azure PowerShell
Using Chef or Puppet
Chef or Puppet Azure
Azure desired state configuration

Deep dive: Microsoft and Dell's Cloud Platform System

On October 20th this year, Microsoft and Dell announced their new platform, Cloud Platform System (oddly enough, just two months after VMware announced EVO:RAIL and EVO:RACK).

This platform is aimed at customers that want a fully functional private cloud offering, or even hosting providers that want a fully tested, validated platform.

CPS is an offering that starts within a single rack and can be scaled out to four racks. The platform consists of Dell hardware and the Microsoft software stack: Hyper-V, System Center and Azure Pack.

So within a rack we have the following hardware:

  • 512 cores across 32 servers (each with a dual socket Intel Ivy Bridge, E5-2650v2 CPU)
  • 8 TB of RAM with 256 GB per server
  • 282 TB of usable storage
  • 1360 Gb/s of internal rack connectivity
  • 560 Gb/s of inter-rack connectivity
  • Up to 60 Gb/s connectivity to the external world
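These figures add up if you do the math. A quick sanity check (the 8-core count for the E5-2650 v2 is per Intel's spec):

```python
# Sanity-check the per-rack figures: 32 dual-socket servers with
# 8-core E5-2650 v2 CPUs and 256 GB of RAM each.
servers = 32
sockets_per_server = 2
cores_per_socket = 8
ram_per_server_gb = 256

total_cores = servers * sockets_per_server * cores_per_socket
total_ram_tb = servers * ram_per_server_gb / 1024

print(total_cores)   # 512, matching the advertised core count
print(total_ram_tb)  # 8.0 TB, matching the advertised RAM
```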

So what kind of real hardware is behind this rack?

5 x Force 10 – S4810P
1 x Force 10 – S55

Compute Scale Unit (32 x Hyper-V hosts)
Dell PowerEdge C6220ii – 4 nodes per 2U
Dual-socket Intel Ivy Bridge (E5-2650 v2 @ 2.6 GHz)
256 GB memory
2 x 10 GbE Mellanox NICs (LBFO team, NVGRE offload) (tenant traffic)
2 x 10 GbE Chelsio (iWARP/RDMA) (SMB 3.0-based shared storage)
1 x local 200 GB SSD (boot/paging)

Storage Scale Unit (4 x file servers, 4 x JBODs)
Dell PowerEdge R620v2 servers (4 servers for Scale-Out File Server)
Dual socket Intel IvyBridge (E5-2650v2 @ 2.6GHz)
2 x LSI 9207-8E SAS Controllers (shared storage)
2 x 10 GbE Chelsio T520 (iWARP/RDMA)
PowerVault MD3060e JBODs (48 HDD, 12 SSD)
4 TB HDDs and 800 GB SSDs

A single rack can support up to 2,000 VMs (2 vCPU, 1.75 GB RAM and 50 GB disk each). And as with VMware's EVO:RAIL offering, this platform comes fully integrated; all you have to do is connect network and power and you are good to go.

On the storage side, CPS uses Storage Spaces with tiering: a 3-way mirror for tenant workloads and dual parity for backup. Note also that the backup pool is configured at 126 terabytes and is set up to use deduplication, while tenant workloads get 156 TB.

As for performance, Microsoft has calculated how much IOPS we can get from a CPS system.

Total IOPS, 100% read (chart)

Total IOPS, 70% read / 30% write (chart)


These tests were run on 14 different servers, targeting the storage.

Each Hyper-V node accesses the SMB-based JBOD storage using RDMA, which gives low-latency traffic. All storage traffic is load balanced using ECMP via the Force10 switches, where we have 2 x 10 GbE in LACP. There is also a flat layer 3 network between the switches, so there are no STP issues, and with ECMP we have redundant links between all the hosts.

Storage Spaces configuration

3 x pools (2 x tenant, 1 x backup)

VM storage:

16x enclosure aware, 3-copy mirror @ ~9.5TB

Automatic tiered storage, write-back cache

Backup storage:

16x enclosure aware, dual parity @ 7.5TB

SSD for logs only

24TB of HDD and 3.2TB of SSD capacity left unused for automatic rebuild

Available space

Tenant: 156 terabytes

Backup: 126 terabytes (+ deduplication)

Note that the interleave in the Storage Spaces setup is set at 64 KB, and the number of columns is 4 for the tenant pool.
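Those two settings determine how much data a full-stripe write carries, and the 3-copy mirror explains the usable-versus-raw ratio. A small arithmetic sketch (the 468 TB raw figure is an illustrative number, not from the spec sheet):

```python
# Full-stripe size: number of columns times the interleave.
columns = 4
interleave_kb = 64
full_stripe_kb = columns * interleave_kb
print(full_stripe_kb)  # 256 KB written per full stripe

# A 3-copy mirror stores every byte three times, so usable = raw / 3.
# For example, 468 TB raw (illustrative) would yield the 156 TB tenant figure.
mirror_copies = 3
print(468 / mirror_copies)  # 156.0
```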



If you want to know how much this beauty costs: about $1,635,600, which I haven't found anywhere except in this performance article (and note that this does NOT include Microsoft software at all).

An F5 VIPRION 2200 chassis is also included as a load balancer module and is set up and integrated with System Center.

So these are some of the capabilities of the Cloud Platform System. If you order a rack of this, everything will be fully integrated, installed and configured before it arrives. Note that it does not give you a nice web portal like EVO:RACK does, but uses the familiar System Center tools and Hyper-V. Every included component is set up clustered and highly available on the shared storage. The only components not on the shared storage are the three AD domain controllers, which run locally on three different physical servers.

Within the first rack we also have the management cluster, which consists of 6 nodes (these are not needed in racks 2–4) and includes services like System Center, SQL Server and so on.


So it is going to be really interesting to see what the response is to this type of pre-configured converged infrastructure (I know VCE has been on the market for some time, so it will be extra interesting), and also how this platform will evolve over time, now with vNext capabilities and with Dell in a major launch cycle with the FX series and so on.

Software-defined storage and converged infrastructure: one size fits all?

It has been quite a year. As I look back at it now at the end of the year, I see that a lot of new, interesting technology has emerged. One of the hot topics that has been in my head for quite some time is SD (software-defined) everything and converged infrastructure/hardware. There has been a lot of discussion about these terms: should we embrace them with open arms, are they for particular use cases, or is it just a trend that will pass?

Let’s look back at some of the largest software vendors in the world, what have they done within these areas the last couple of years.

VMware launched VSAN (software-defined storage), and they also launched EVO:RAIL and EVO:RACK, new converged infrastructure offerings that have been adopted by partners such as Dell, HP and Cisco. VMware has also done a lot of work in the field of SDN (software-defined networking) with their NSX features. On the other hand we have Microsoft, which has done a lot with their Storage Spaces features, to be expanded in vNext with shared-nothing scale-out storage (kind of like VSAN), and with the work they have done on NVGRE (software-defined networking). Microsoft also announced a finished converged infrastructure, the Cloud Platform System, in partnership with Dell.

Now, both vendors are working actively on their software-defined stacks because they see the limitations of traditional virtualization architecture. Think of a regular datacenter with a dedicated SAN and FC switches for the hosts running some hypervisor. Over the last couple of years, the size of SANs has grown but their speed has not, while CPUs and RAM keep getting more efficient and faster; the same goes for chipsets, controllers and cache sizes. Many SAN vendors are adding SSDs or flash cards to cope with the lack of speed in their solutions. The problem with SANs, and with the traditional way of setting up infrastructure, is that the components are connected through a network, which itself adds latency (not a huge amount): on 10 GbE it is 5–50 microseconds, compared with about 20–100 microseconds for a local bus. The way the different protocols interact to ensure data integrity also adds latency. Therefore VMware, and other vendors in the market like Nutanix, created solutions where you have storage as close to the compute resource as possible, to get away from the complexity and speed issues that traditional infrastructure can have.

We also have other vendors like Atlantis, PernixData and Nexenta (all software-defined storage) and VCE (converged infrastructure), each with their own way of reaching the goal. With all these vendors delivering different solutions, we have a huge list to choose from, and it is all about the use case. All of these vendors offer more than just increased throughput, and the fact that they have all emerged in the last five years shows that there is a market for them and that customers have different demands.

So is the traditional way of storing data going away; are SANs going to die? I don't think so. They are going to evolve with the software-defined approach; since more features are moving into the main component (the hypervisor), they need to! Many SANs still have good use cases; the common ones are

* Backup
* Archiving

What else?

* Web hosting (web workloads are mostly CPU and memory hoggers, and in many cases customers running huge web platforms have an HLB as well)
* General file servers (they need storage space and in most cases have no huge demand for speed)
* General virtual machine hosting (depending on whether the customer is expected to grow at huge scale or needs some serious horsepower); you can set up a pretty cost-effective Hyper-V hosting solution with SAS JBODs and Storage Spaces. For a hosting partner it is important to deliver in the most cost-efficient way.

I could probably mention 10 more, but I'm not here to bash these different solutions. We should embrace these new vendors, since they give us IT people more to choose from, more options that might be a better fit for OUR use case. I simply love it, and there is no one solution that fits all! Going forward in 2015 we are going to see VMware and Microsoft add more into their stacks as well, which means the other vendors will need to step up their game.




Dell vWorkspace 8.5 review

A cool year so far to be working in IT! A lot of new tech has been released and I have a lot of catching up to do. One of these is vWorkspace 8.5, which was released a couple of weeks ago.

For those who aren't aware of what it is, you can read some of my previous posts about vWorkspace or see a bit from this Dell FAQ:

Now, vWorkspace has been an underdog compared to the competition like VMware View and Citrix XenDesktop, but with this release they have closed some of the gap.

So what’s included and what’s new in this release?

* Foglight for Virtualization
Foglight is a monitoring tool which allows us to monitor our entire virtual infrastructure, whether it is running on VMware or Hyper-V.

It also integrates directly with vWorkspace, which allows us to drill down to specific user sessions (the screenshot is from the user experience monitor in Foglight). Foglight can also be extended to monitor solutions like Exchange, which requires a new cartridge (add-on).

* Wyse Streaming Manager 7.2 (in the Premium edition)
This is a streaming technology, kind of like Citrix Provisioning Server, which allows us to stream OS images directly to a server or a client. New in this release is support for application layering features as well, some of the features that Unidesk and CloudVolumes from VMware deliver. I will write more about WSM in a future post.

Also in the latest version of vWorkspace, Dell added an HTML5 receiver which allows for clientless access to vWorkspace. And for those unaware, vWorkspace already supports Linux VDI using the open source FreeRDS RDP server (something that VMware and Citrix are currently working on).


They also have a pretty nifty setup for deploying connectors, and support most mobile platforms with their own clients. And of course many of the Wyse ThinOS clients are certified for vWorkspace as well.

vWorkspace can also integrate with VMware and Hyper-V / System Center for provisioning and management of VDI / RDSH.

Also with 8.5 came support for the following platforms.

Microsoft SQL Server 2014
Microsoft System Center Virtual Machine Manager 2012 R2
Microsoft App-V 5 SP2
VMware vCenter 5.5
Cisco IP Communicator

Now, vWorkspace supports the traditional ways of delivering VDI / RDSH, and also has extended capabilities for Hyper-V with something called the Hyper-V Catalyst Components.

HyperCache: provides read IOPS (Input/Output Operations Per Second) savings and improves virtual desktop performance through selective RAM caching of parent VHDs. This is achieved through the following:

Read requests to the parent VHD are directed to the parent VHD cache.
Data that is not in the cache is obtained from disk and then copied into the parent VHD cache.
This provides a faster virtual desktop experience, as child VMs requesting the same data find it in the parent VHD cache.
Requests are served this way until the parent VHD cache is full. The default size is 800 MB, but it can be changed through the Hyper-V virtualization host properties.
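The mechanism described above is a classic read-through cache. A toy sketch in Python (illustrative only, not Dell's implementation; the block IDs and capacity are made-up values):

```python
# Illustrative read-through RAM cache over a parent VHD, following the
# HyperCache description above. Not Dell's actual code.
class ParentVhdCache:
    def __init__(self, backing, capacity_blocks):
        self.backing = backing            # stands in for the parent VHD on disk
        self.capacity = capacity_blocks   # in vWorkspace the default cache is 800 MB
        self.cache = {}
        self.hits = 0
        self.misses = 0

    def read(self, block_id):
        if block_id in self.cache:        # served from RAM: the fast path
            self.hits += 1
            return self.cache[block_id]
        self.misses += 1
        data = self.backing[block_id]     # fetched from "disk"
        if len(self.cache) < self.capacity:
            self.cache[block_id] = data   # populate until the cache is full
        return data

vhd = {0: b"bootmgr", 1: b"ntoskrnl"}     # made-up parent VHD blocks
cache = ParentVhdCache(vhd, capacity_blocks=1024)
cache.read(0)   # miss: read from disk, then cached
cache.read(0)   # hit: served from the parent VHD cache
cache.read(1)   # miss
print(cache.hits, cache.misses)  # 1 2
```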

HyperDeploy: manages parent VHD deployment to relevant Hyper-V hosts and enables instant cloning of Hyper-V virtual computers. HyperDeploy uses the following techniques to minimize the time used to deploy a virtual computer.

Smart copying that copies only the parent VHD data that is needed to the Hyper-V hosts.
Instant provisioning allows the child VHDs to be cloned while the parent VHD is still being copied to the Hyper-V host.
Copy status is displayed on the Parent VHDs tab to allow for monitoring of the progress and completion.

So I'm looking forward to seeing whether vWorkspace can match the competition in the time ahead!

New stuff for Intune

Microsoft has been busy shipping numerous updates to Intune lately. The latest batch came last week; you can see the updates here –>

  • Ability to restrict access to Exchange Online email based upon device enrollment and compliance policies
  • Management of Office mobile apps (Word, Excel, PowerPoint) for iOS devices, including ability to restrict actions such as copy, cut, and paste outside of the managed app ecosystem
  • Ability to extend application protection to existing line-of-business apps using the Intune App Wrapping Tool for iOS
  • Managed Browser app for Android devices that controls actions that users can perform, including allow/deny access to specific websites. Managed Browser app for iOS devices currently pending store approval
  • PDF Viewer, AV Player, and Image Viewer apps for Android devices that help users securely view corporate content
  • Bulk enrollment of iOS devices using Apple Configurator
  • Ability to create configuration files using Apple Configurator and import these files into Intune to set custom iOS policies
  • Lockdown of Windows Phone 8.1 devices with Assigned Access mode using OMA-URI settings
  • Ability to set additional policies on Windows Phone 8.1 devices using OMA-URI settings

Now, one of the cool features is the managed browser app. This allows us to manage how content is opened and displayed from the app. The browser policy can do one of two things:

  • Allow the managed browser to open only the URLs listed below – Specify a list of URLs that the managed browser can open.
  • Block the managed browser from opening the URLs listed below – Specify a list of URLs that the managed browser will be blocked from opening.

So we define the URLs a user can open. (NOTE: you can see what kind of URL policy prefixes can be used here –>

The application itself is available from Google Play, but in order to use it in conjunction with Intune policies we need to deploy the application from Intune itself. Besides the managed browser, Microsoft also released some other applications like the Intune PDF Viewer, Intune AV Player and Intune Image Viewer, which users can download from Google Play. So when a user opens a PDF link from the managed browser, it will automatically open in the Intune PDF Viewer (where we can define settings like blocking copy/paste, screenshots, etc.).

To set this up we need to deploy the package to our users so they can install it from the company portal. NOTE: Don't deploy it right away; we need to create some policies first.


So when setting up policies we have a lot of new policy features we can define for our devices.


The Managed Browser policy is just the allow/deny list. Then we have the mobile application management policy, where we can define how the apps integrate and what users can do when content is displayed.


When we are done creating the policies, we can deploy them. Unlike other policies, these need to be deployed as part of the software deployment and not directly to users or groups. So when setting up the browser deployment, we can add the policies.


Now we can head on over to the mobile device! First off, I need to sync my mobile device policy.


Then I install the managed browser app and the other components I need from the company portal.


Now I am ready to use managed browser. When I open a URL that is on the deny list I get this error message.


When I open a URL that is on the allow list, it works like a regular browser. But when I download a PDF file, you can see there is a loading bar underneath the URL bar; this is because the managed browser downloads the PDF internally in the app, and then


we are switched over to the Intune PDF viewer.


So again, a lot of new stuff arriving in Intune. Looking forward to the next chapter!

Problems with Netscaler and Hyper-V NIC teaming ICA error 1110

Had a customer case where they had trouble with ICA sessions being terminated when connecting via Netscaler. They had a regular MPX pair set up in HA, which serviced XenApp servers located on a cluster of Hyper-V hosts. These hosts were running Windows Server NIC teaming in switch independent Dynamic mode.

The Citrix sessions were terminated with a failure status of 1110. What they also noticed was that when the Netscaler connected to the XenApp host, it used the MAC address of the XenApp virtual machine, but when the traffic returned to the Netscaler, the MAC address had changed from that of the XenApp host to the MAC address of the Hyper-V host.

This made the Netscaler drop the traffic, and the ICA session was terminated.

From the NIC teaming deployment guide (

Dynamic mode has this as a “side effect”:

3.11    MAC address use and management
In switch independent / address hash configuration the team will use the MAC address of the primary team member (one selected from the initial set of team members) on outbound traffic.  MAC addresses get used differently depending on the configuration and load distribution algorithm selected.  This can, in unusual circumstances, cause a MAC address conflict.

This is because, in the world of Ethernet, an endpoint can only have one MAC address, and with switch independent teaming there can only be one active physical adapter that accepts inbound traffic.

Therefore, if you are having issues with Netscaler and Hyper-V NIC teaming, you should change the load balancing mode from Dynamic to Hyper-V Port, because then NIC teaming will not do any source MAC address replacement.
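This can be changed on the Hyper-V hosts without recreating the team. A minimal sketch, assuming the team is named “Team1” (the team name here is illustrative):

```powershell
# Check the current load balancing algorithm (Dynamic in this case)
Get-NetLbfoTeam -Name "Team1" | Select-Object Name, LoadBalancingAlgorithm

# Switch to Hyper-V Port so outbound traffic keeps the VM's own source MAC
Set-NetLbfoTeam -Name "Team1" -LoadBalancingAlgorithm HyperVPort
```

The change takes effect on the team immediately, so test it in a maintenance window.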



But note that Hyper-V Port load balancing has its own issues, which you can read about in the LB document.

Windows Azure and Storage performance

For those that have been working with Azure for some time, there are some challenges with delivering enough performance for a certain application or workload.
For those that are not aware, Microsoft has put limits on the max IOPS of disks attached to a virtual machine in Azure. But note these are max limits, not a guarantee that you get 500 IOPS on each data disk.

Virtual machine instance limits per data disk:
  • Basic: 300 IOPS (8 KB)
  • Standard: 500 IOPS / 60 MBps (8 KB)

There is also a cap of 20,000 IOPS per storage account.
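A quick back-of-the-envelope check of what those limits mean in practice, assuming the Standard-tier 500 IOPS per-disk cap and the 20,000 IOPS per-account cap above:

```python
PER_DISK_IOPS = 500        # Standard-tier cap per data disk
ACCOUNT_IOPS_CAP = 20000   # cap per storage account

# Aggregate ceiling for a 14-disk striped pool (the test further down)
pool_iops = 14 * PER_DISK_IOPS
print(pool_iops)           # 7000, comfortably under the account cap

# How many Standard data disks before one account becomes the bottleneck
max_disks = ACCOUNT_IOPS_CAP // PER_DISK_IOPS
print(max_disks)           # 40
```

So the per-account cap only starts to matter once you spread 40 or more fully loaded Standard disks across a single storage account.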

In order to go “past” the limit, Microsoft has mentioned from time to time to use Storage Spaces (which is basically a software RAID solution introduced with Server 2012) in order to spread the IO load between different data disks, which is a supported solution: “physical disks use Azure Blob storage, which has certain performance limitations. However, creating a storage space on top of a striped set of such physical disks lets you work around these limitations to some extent.”

Therefore I decided to do a test using an A4 virtual machine with 14 added data disks, creating a software pool with a striped volume to see how it performed. NOTE that this used a regular Storage Spaces setup, which by default uses a chunk size of 256 KB and a column count of 8 disks.

I set up all the disks in a single pool and created a simple striped volume to spread the IO workload across the disks (not recommended, as there is no redundancy!). Note that these tests were done in the West Europe datacenter. When creating the virtual disk, I needed to define the number of columns across the disks.

Get-StoragePool -FriendlyName test | New-VirtualDisk -FriendlyName "test" -ResiliencySettingName Simple -UseMaximumSize -NumberOfColumns 14

Also, I did not set any read/write cache on the data disks. I used HD Tune Pro, since it delivers a nice GUI chart as well as IOPS numbers.
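For completeness, a sketch of the full pool setup, assuming the 14 data disks are already attached and show up as poolable (the pool name matches the command above; everything else is illustrative):

```powershell
# Grab all disks that are eligible for pooling (the 14 attached data disks)
$disks = Get-PhysicalDisk -CanPool $true

# Create the pool on the default storage subsystem
New-StoragePool -FriendlyName test `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
    -PhysicalDisks $disks

# Simple (striped) virtual disk across all 14 columns, then bring it online
Get-StoragePool -FriendlyName test |
    New-VirtualDisk -FriendlyName test -ResiliencySettingName Simple `
        -UseMaximumSize -NumberOfColumns 14 |
    Initialize-Disk -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS
```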

For comparison this is my local machine with an SSD drive (READ) using Random Access testing.


This is from the Storage Space (simple virtual disk across 14 columns with 256 KB chunks) (READ)


This is from the D: drive in Azure (note that this is not a D-instance with SSD)


This is from the C: drive in Azure (which by default has caching enabled for read/write)


Then a regular benchmark test, writing a 500 MB file to the virtual volume on the disk.


Then against the D drive


Then for C: which has read/write cache activated I get some spikes because of the cache.


This is for a regular data disk which is not in a storage pool. (I just deleted the pool and used one of the disks.)

This is a regular benchmark test.


Random Access test


In conclusion, we can see that a Storage Spaces setup in Azure is only a few percent faster than a single data disk. The problem with using Storage Spaces in Azure is the access time/latency of these disks, which becomes a bottleneck when setting up a storage pool.

Now luckily, Microsoft is coming with a Premium storage account, which allows up to 5,000 IOPS per data disk (more like regular SSD performance). This should make Azure a more viable platform for delivering applications that are IO intensive.

Can I run my workload in Azure?

This is a typical question I get quite often, so I decided to write a blog post to get the facts straight and talk a bit about the cons of running workloads in Azure, and why in some cases it is not possible or not the best option. Note that there are many good use cases for Azure; I just want to make people aware of some of the limitations that are there.

So first off, what is Azure running on? The entire Azure platform runs on top of Windows Server with a modified Hyper-V 2012 installed, and since Azure is a PaaS/IaaS platform, Microsoft manages all of the hardware and the hypervisor layer. Azure was originally built on 2008 R2 and now runs on 2012; it still only supports VHD disks.

Virtual machine sizes in Azure are predefined, meaning that I cannot size a VM based on what I need; I can only use what Microsoft has predefined (which you can see here –> ). The VM instance size also defines how many data disks we can attach to the VM.

(For instance, A4 has a maximum of 16 data disks (1 TB each), meaning that we get a total of 16 TB of storage attached to a virtual machine. This is of course with the use of Storage Spaces.)

Microsoft Azure at the moment supports three Windows Server OS versions (2008 R2, 2012, and 2012 R2).

Microsoft also has a list of supported Windows Server workloads in Microsoft Azure –>

Also note that Azure VMs mostly use AMD Opteron-based CPUs, which have lower performance than regular Intel Xeon-based CPUs.

So what issues have I seen with Azure when designing workloads?

1: Laws and regulations (for instance, in Norway we have a lot of strict rules about storing data in the cloud, so be sure to verify whether the type of data you are storing can be placed outside of the country)

2: Need for speed (Azure uses JBOD disks and has a cap of 500 IOPS per data disk; if you need more disk speed, we need to use Storage Spaces and deploy different RAID sets, but that only gives a theoretical speed of up to 8,000 IOPS, which is about one SSD's worth. Also note the AMD-based CPUs; if you have an application that is really CPU intensive, you might find Azure inadequate. NOTE: Azure is coming with a new C-class built on Premium Storage, which offers over 30,000 IOPS and comes with an Intel-based CPU.)

3: Graphics-accelerated applications/VMs (since we cannot make changes to the hardware, and Microsoft does not offer instances with hardware graphics installed, this still requires on-premises hardware)

4: Unsupported products (if we, for instance, want to run Lync or Exchange Server, these products are not supported running in Azure and therefore require an on-premises solution)

5: Specific hardware requirements (if you have a VM or application that requires specific hardware such as COM ports or USB attachments)

6: Requires near-zero downtime at the VM level (fault tolerance). Azure has an SLA for virtual machines, but it requires two or more VMs in an availability set; this allows Azure to take down one instance at a time when doing maintenance or when a fault happens in a physical rack/switch/power supply. There is no SLA for single virtual machine instances, and when maintenance happens, Microsoft will shut down your VM; there are no live migrations in place.
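As a side note, placing an existing VM into an availability set can be sketched with the classic (Service Management) Azure PowerShell cmdlets; the service, VM, and set names here are illustrative:

```powershell
# Add an existing classic VM to an availability set, so Azure can
# service one fault/update domain at a time instead of the whole app
Get-AzureVM -ServiceName "mycloudservice" -Name "web1" |
    Set-AzureAvailabilitySet -AvailabilitySetName "web-avset" |
    Update-AzureVM
```

Every VM that should be covered by the SLA needs to be added to the same set.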

7: Very network-intensive workloads (again, there are bandwidth limits on the different virtual machine instances, and if you require a S2S VPN, there is also a cap of 80 Mbps on the regular gateway. There is also the ExpressRoute option, which allows for a dedicated MPLS VPN to an Azure datacenter. Also important to remember is the latency between your users and the Azure datacenters. This can give you a quick indication of how high the latency is between you and Azure –>

8: Applications that require shared storage in the back end (for instance, clustering in many cases requires shared back-end storage, which is not possible in Azure since a disk is bound to a single virtual machine)

9: Products that come as finished appliances (unless the partner has their product listed in the Azure Marketplace)

10: Requires both IPv4 and IPv6 (IPv6 is not supported in Azure)

11: Network-based solutions (IPS/IDS) (since we are not allowed to deploy or change the vSwitch our solution runs on in Azure, we are unable to, for instance, set up RSPAN, which would allow us to use IDS technology in Azure)

These are some cases where Azure might not be a good fit. I will try to update this list; if anyone has comments or things they want me to add, please comment or send me an email.

Upcoming events and stuff

There's a lot happening lately, and therefore it has been a bit quiet here on the blog. But here's a quick update on what's happening!

I just recently got confirmation that I am presenting at the NIC conference in February (the largest IT event for IT pros in Scandinavia) ( Here I will be presenting two (maybe three) sessions:

* Setting up and deploying Microsoft Azure RemoteApp
* Delivering high-end graphics using Citrix, Microsoft and VMware

One session will be primarily focused on Microsoft Azure RemoteApp, where I will show how to set up RemoteApp in both cloud and hybrid deployments and talk a little bit about its use cases. The second session will focus on delivering high-end graphics and 3D applications using RemoteFX (on vNext Windows Server), HDX, and PCoIP, demoing a bit of how it works, pros and cons, VDI or RDS, and endpoints. My main objective is to talk about how to deliver applications and desktops from the cloud and on-premises.

On the other end, I have just signed a contract with Packt Publishing to write another book on Netscaler, “Mastering Netscaler VPX”, which will be kind of a follow-up to my existing book.

It will go more in depth on the different subjects and also focus on the 10.5 features.

I am also involved with a community project I started: a free eBook about Microsoft Azure IaaS, with some very skilled Norwegians helping me write it. It takes some time, since Microsoft is always adding new content that needs to be covered in the eBook as well.

So a lot is happening! More blog posts coming around Azure and Cloudbridge.