Monthly Archives: January 2013

Microsoft hybrid cloud solutions

Many people are looking to move some of their services to the cloud, whether that means applications, virtual machines, or just data.
The key factors behind this approach are:
* Agility
* Availability
* Economy

Agility is the ability to quickly expand and scale up when a service or application needs more processing power to meet customer demand.
Cloud solutions also give (in many cases) better availability than regular on-premise solutions can match.
And of course there are the cost savings it may bring in many cases. All the large service providers (Google, Microsoft and Amazon) have huge data centers specially equipped to keep power consumption down and cooling as efficient as possible, at a much lower price than your own data center can manage.

However, the move to this kind of cloud solution comes at a price, so what do you lose when converting to a public cloud solution?
* Control (Where is your data stored? Who has access to it? In many cases, the public cloud provider documents this.)
* Security (How is the data stored? In many cases I still need to take backups of the solution in case I do something wrong.)
* Customization (When you use, for instance, Office 365 you have limited options for customization, for instance with third-party vendors.)

Therefore many look at the hybrid cloud approach, which gives you the benefits of both worlds!
Microsoft has many approaches to the cloud.

Azure (IaaS)

Let’s take some scenarios:

Mobile Device Management and Data from the cloud. (Configuration Manager and Intune)
Microsoft recently announced a new version of Intune (Wave D) which allows for mobile device management of Apple iOS, Android, Windows Phone 8 and Windows RT. If you want this functionality in Configuration Manager, you have to “connect” your Configuration Manager site with Intune. This allows you to manage your mobile devices via Intune from within Configuration Manager.
It also allows Microsoft to build additional functionality into Intune: you get more features without having to update Configuration Manager to get them.
If you are skeptical about letting clients fetch data from an internet-accessible distribution point located in your datacenter, you now have the option to create a distribution point in Azure (so roaming clients can get data from the DP in Azure).

Backup to the cloud (Data Protection Manager and Windows Server Backup)
Storage is cheap nowadays, but you still do not want all of your backups stored in the same datacenter. To close this gap, you can now attach Data Protection Manager and Windows Server Backup to a Windows Azure BLOB store.
I will come back to how this can be put to use in case of disaster recovery.

Hybrid Collaboration solution
Many organizations have already implemented Microsoft Exchange to some degree. They see the value of moving it to Microsoft's cloud, but do not wish to put the users with the most critical data there.
With Office 365 and Exchange, you have the ability to put many of your users in the cloud and the rest on-premise (the same goes for SharePoint and Lync) and manage it all from the same solution. If you put ADFS in there as well, you can maintain the SSO experience that users expect, even towards the cloud.

IaaS and System Center
In many cases you need more capacity and your data center does not have the space for more blades (just one example). With the IaaS offering in Windows Azure you can extend your datacenter to the cloud and use it to create the new virtual machines you need. With Azure Virtual Network, you can establish a VPN connection between your local datacenter and Windows Azure, so from the customer's point of view the solution you provide looks like it is coming from your datacenter. You can manage these computers/servers like any other server; for instance, I can install a SCOM agent on my servers and monitor them as usual. (Remember that this traffic goes via the VPN tunnel, so expect some bandwidth usage; depending on the VPN solution, there is usually a limit on how much VPN traffic it can handle.)
You can also use App Controller to manage these computers (and move them up and down between Azure and your datacenter). If you wish, you can also use the GSM (Global Service Monitoring) module for Operations Manager, which lets you set up a monitoring solution where Microsoft monitors your service from one or more of their datacenters around the world. Look at my previous post here ->


Microsoft has already created some integration between Orchestrator and Office 365 and Azure. Look at my previous post ->

They also come with PowerShell modules, which allow you to create automation-ready runbooks in Orchestrator to rapidly deploy new users (Office 365) and virtual machines (Azure), and these runbooks can be offered to users via the Self-Service Portal in System Center.
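As a rough illustration of what such a runbook step might invoke, here is a hedged PowerShell sketch using the Office 365 (MSOnline) module from that era; the UPN, display name and license SKU below are made-up example values, not values from any real tenant:

```powershell
# Sketch: provision an Office 365 user, as a runbook activity might do.
# Assumes the MSOnline module is installed; all values are examples.
Import-Module MSOnline
Connect-MsolService -Credential (Get-Credential)   # tenant admin credentials

# Hypothetical sample values - in a runbook these would come from Initialize Data
New-MsolUser -UserPrincipalName "jdoe@contoso.onmicrosoft.com" `
             -DisplayName "John Doe" `
             -UsageLocation "NO" `
             -LicenseAssignment "contoso:ENTERPRISEPACK"
```

The same pattern applies on the Azure IaaS side, where the Windows Azure PowerShell module exposes cmdlets such as New-AzureQuickVM that a runbook can call to spin up virtual machines.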

Hopefully these scenarios have given you some insight into how you can deploy your hybrid solution. I believe we have only seen the beginning of what is to come. These are indeed exciting times, and with the eventual release of a System Center 2015 we can look forward to even more integrated solutions and more options regarding where to host your services.

Application Approval workflow with SCCM and SCSM

Ever wished that you could redirect the application approval requests that appear in the SCCM console into an e-mail, or to a manager (who does not otherwise spend much time inside the SCCM console)?
This is possible with the Application Approval Workflow accelerator from Microsoft ->

AAW uses System Center 2012 – Orchestrator between Configuration Manager and Service Manager to synchronize Configuration Manager Applications, leverage Service Manager Workflows, and post the approval status back to Configuration Manager. We created wizards in Service Manager to configure custom service request template-matching criteria. User and application properties received in the approval request from Configuration Manager are used to select a service request template containing an approver list and activities that best fit your business process.

Key features:

* Sync Configuration Manager application data into the Service Manager database.
* Monitor and transport Configuration Manager Application Catalog requests requiring approval to Service Manager and open a service request.
* Return the completed approval workflow status to Configuration Manager for handling.
* Allow administrators to define and maintain application selection criteria for specific applications or application groups and specific users or user groups.
* Track service application requests and view Application Catalog contents in Service Manager.

Veeam backup and replication under the hood

NOTE: This post is just a quick Google translation of my previous post, written in Norwegian. :)

Today there is a wealth of options when it comes to backup. The focus has largely shifted from backing up only physical machines to backing up physical machines, virtual machines, and the applications running on them (like SQL, mail, intranet and business applications). Backup solutions face higher demands in terms of handling large amounts of data, while they should also be easy to use and quick to restore data from.

There are many different suppliers of backup software on the market, to name a few:
* Acronis
* Microsoft DPM
* Dell AppAssure
* Symantec Backup Exec 2012

Then there is Veeam:

What separates Veeam from the other vendors is that they focus only on the virtual layer; you get a tailored solution geared entirely towards virtual infrastructure.

Veeam has also recently launched a new version of its main product, Backup and Replication 6.5, which introduces new functionality and support for new products, including Windows Server 2012 and VMware vSphere 5.1, which means they were first out with support for these new products. For those not so familiar with Veeam, you can read a bit more about them here ->

Veeam has the following software in its portfolio:
Veeam Backup and Replication (the main Veeam product, used for backup and replication of virtual machines (VMware and Hyper-V) plus much more. It also has its own tools for backup and recovery of
Exchange, AD and SQL. It sounds like a regular backup product, but it has some features that make it unique compared to the competition; I will get into those later.) You can read more about the product here
Veeam ONE (a complete monitoring tool for Hyper-V and VMware, with built-in reporting tools)
You can read more about it here
Veeam Management Pack (a Management Pack for System Center Operations Manager that provides full monitoring of your VMware infrastructure from Operations Manager)
Previously called the nWorks Management Pack. You can read more about it here

In addition, they also have other products:

Veeam Backup Free Edition (a minimal version of Backup and Replication, mostly used to make copies of virtual machines and compress them via VeeamZIP)
Veeam ONE Free Edition (a minimal version of Veeam ONE with some restrictions on how long it can store data)

Backup and Replication consists of three components:

Proxy server: the component that does the job of extracting data from the server being backed up and placing it in a repository.
Backup server: the administration server; here you define and run the backup jobs you want. All job data and statistics are stored in a SQL database connected to the server.
Repository: where the backup data is stored.

So it is mainly the proxy server that goes in, retrieves data from the source server, and passes it on to the backup repository. If you notice that a backup job takes too long, you can easily add more proxy servers (the proxy server's work is very CPU intensive).
I'll show how to define different proxy servers for different jobs in a later post.
When it comes to revealing bottlenecks, there are four things to look for. NB: Veeam shows you where the bottleneck is; the weakest link dictates the speed of the rest of the components in the chain.


1: Hyper-V host (is there a lot of disk read/write activity?)
2: Proxy server (is the CPU maxed out?)
3: Network (has it reached its maximum bandwidth?)
4: Target repository (is there a lot of disk read/write activity?)

Other components:

Enterprise Backup Server: lets you manage multiple backup servers and also allows you to search through backups for individual files.
Backup Search: uses MOSS Integration Services on a Microsoft Search Server to make searching through the data faster.

The architecture of Hyper-V

By default a proxy server is installed on the Hyper-V host. If you want to take the load off the Hyper-V host, you must set up a separate server as an off-host data proxy.
(This requires a server with Hyper-V installed, due to VSS, and it should run the same Hyper-V version as the host it backs up.)

The architecture of VMware


VMware, for its part, has nothing installed on the host; you must set up a separate Windows server running as the backup proxy (this server should have access to the same storage as the VMware host).
This server can also be a virtual machine running on VMware, but that requires the server to have HotAdd access to the VMs on the datastore.

Supported systems:



Microsoft Hyper-V Server 2008 R2 SP1

Microsoft Windows Server 2008 R2 SP1 with Hyper-V

Microsoft Windows Server 2012 with Hyper-V

vSphere 5.0, 5.1


Management server (not required)

If you want to back up via VMM, the VMM console must be installed on the backup server:

Microsoft System Center Virtual Machine Manager 2008 R2 SP1

Microsoft System Center Virtual Machine Manager 2012


You can read more about the recommendations regarding hardware and supported systems here ->

The installation of Veeam also requires:
.NET Framework 4.0
A SQL Server, either locally or on another server.

If you have neither, the Veeam installer will install both (using SQL Server 2008 R2 Express).

Installation is very simple and streamlined


Enter the license file you have received.

Management Console (the backup server with its components)

Catalog Service (responsible for indexing the VM guest OS files)

PowerShell snap-in (provides PowerShell commands that can be used to automate backup activities via script)

If you do not have any SQL servers available, select the local setup (this will install SQL Server 2008 R2 Express).

The application supports most versions of MSSQL

• Microsoft SQL Server 2005 (Full and Express Edition)

• Microsoft SQL Server 2008 (Full and Express Edition)

• Microsoft SQL Server 2008 R2 (Full and Express Edition)

• Microsoft SQL Server 2012 (Full and Express Edition)

Here you must specify a user that has full permissions on the database. The same user will automatically be granted the “Log on as a service” privilege on the server.

It is therefore advisable to use a dedicated service account with no more privileges than it needs.

Then, just click Next and then install.

B & R can now be launched from the desktop or the Start menu.

Before we start adding Hyper-V servers and configuring backups, it is important to go through the server's setup and the configuration options available there.


The console is mainly divided into 5 tabs.

Backup & Replication (here you define backup and replication jobs and see all the backups you have set up)

Virtual Machines (lists all virtual machines known to Veeam)

Files (lists the files on the physical host)

Backup Infrastructure (here you define which servers are proxy servers, which server is the repository, and which servers are managed by Veeam)

History (lists all jobs that have been run through Veeam)

In addition, you get an extra menu when you click the Session Tools button at the top left. Here you can access the PowerShell module and options to set

user access, define traffic throttling, back up the configuration, and set up notifications (SNMP and e-mail). I will come back to PowerShell with some examples you can use later.
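To give a small taste of the snap-in mentioned above, here is a hedged sketch; the job name is a made-up example, so substitute one from your own server:

```powershell
# Sketch: start an existing backup job via the Veeam PowerShell snap-in.
# "Nightly Hyper-V" is a hypothetical job name.
Add-PSSnapin VeeamPSSnapin

# List all configured jobs
Get-VBRJob | Select-Object Name, JobType

# Kick off one specific job
$job = Get-VBRJob -Name "Nightly Hyper-V"
Start-VBRJob -Job $job
```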

Under the “Help” menu, you can view the license tied to the Veeam server and change the license.

When we add a new virtualization host, we go to Virtual Machines and right-click on

Microsoft Hyper-V to select Add Server.

Add the IP address of the host

Now we have the choice of linking it to Virtual Machine Manager or to a standalone / clustered Hyper-V host. In this case I am going to choose Hyper-V host,
but if you want to use it against VMM, you must have the VMM console installed on the backup server.

At the same time, the PowerShell execution policy on the VMM server must be set to RemoteSigned so that the Veeam server can run PowerShell commands on the VMM host:
Set-ExecutionPolicy RemoteSigned

Once we have selected Hyper-V and continue, we are asked to add a user (this account must have local administrator rights on the Hyper-V server).

Then you are shown the components that will be installed on the Hyper-V server and the number of concurrent jobs the Hyper-V host can run.
Hyper-V hosts with multi-core processors can handle multiple simultaneous tasks. For example, for a 4-core CPU it is recommended to specify a maximum of two simultaneous tasks,
and for an 8-core CPU, four simultaneous tasks. Nevertheless, when you define the number of concurrent tasks, you should keep the network traffic flow in your virtual infrastructure in mind.
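The rule of thumb above (roughly one task per two cores) can be sketched like this; treat it as an illustration of the guideline, not an official Veeam formula:

```powershell
# Sketch: derive a suggested max concurrent task count from the core count.
# The cores/2 ratio is the rule of thumb from the text, not a Veeam-defined value.
$cores = (Get-WmiObject Win32_Processor |
          Measure-Object -Property NumberOfCores -Sum).Sum
$suggestedTasks = [Math]::Max(1, [Math]::Floor($cores / 2))
"Cores: $cores -> suggested concurrent tasks: $suggestedTasks"
```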

The transport component is responsible for moving data from the Hyper-V server to a repository (this is the proxy component).
Hyper-V Integration is responsible for handling VSS communication with the VMs; the service also installs a driver that handles changed block tracking for Hyper-V.

When it is finished installing, you will get a list of all VMs that are attached to the current Hyper-V server.

If you get any problems with the installation, check the event log (Application) or the log folder of Veeam.

Also check that the services on the Hyper-V server are running properly.

Once you have verified that it is operational, we can add a backup job.

Go to Backup and Replication -> Right click on jobs and select Backup

Give the job a descriptive name and a good description.

Select Add from the list on the right and select the VM (or alternatively the VM host) to be backed up.
(Under Exclusions you can select VMs that should not be backed up, or specific disks that Veeam should skip.)

Under Storage you decide which proxy will be used to retrieve the backup and which repository the VM backups will be stored in.

Off-host (the proxy job runs on its own dedicated server with the transport role installed)

On-host (the Hyper-V server runs the backup job itself and ships the backup directly to a repository)

You can also add failover (if an off-host proxy does not respond, an on-host proxy takes over the job).

I would nevertheless recommend adding a dedicated off-host proxy server to relieve the Hyper-V server.

This is what happens when we start a backup job this way:


1. Veeam Backup & Replication triggers a snapshot of the required volume on the production Hyper-V host.

2. The created snapshot is detached from the production server and mounted on the off-host backup proxy.

3. The Veeam agent running on the backup proxy uses the mounted volume snapshot to extract the VM data; the VMs are processed on the proxy server and copied to the repository.

4. When the backup is complete, the snapshot is dismounted from the backup proxy.

If we click the Advanced button, we get a number of choices.


* Reversed Incremental

(An ordinary incremental backup means taking a copy of all the data that has changed since the last backup.)
You take a full backup once a week, and then a backup of the changes made each day. In this scheme, if you want to restore the state from Friday, the restore needs:

Sunday (full) + Monday + Tuesday + Wednesday + Thursday + Friday.

With reversed incremental, the changes are injected into the .VBK file so that it always reflects the latest state of the VM. Veeam also creates a reversed
incremental backup file (.VRB) containing the data blocks that were replaced in the full backup file.
The latest restore point is therefore always a full backup, and it is updated after each backup cycle. With reversed incremental backup you can immediately
restore a VM to the latest state without additional processing, because the latest restore point is a full backup file.
If you need to restore a VM to a specific point in time, Veeam Backup & Replication applies the necessary .VRB files to the .VBK file to reach the desired restore point.

* Enable Synthetic Full
Veeam creates a new full backup from the backup data already stored, instead of reading everything from production again.

* Perform Active Full
Veeam reads the VMs from production storage to produce a completely new full backup.


Enable inline data deduplication:
Veeam will deduplicate data as it is moved to the backup repository (note that this requires more CPU).

• No compression is recommended if you are using storage devices with hardware compression and deduplication to store backups.
• Dedupe-friendly compression is an optimized compression level with very low CPU usage. It is recommended if the backup proxy does not meet the minimum system requirements and you do not want to load it heavily.
• Optimal compression is the recommended compression level; it gives the best compromise between the size of the backup file and the duration of the backup.
• Extreme compression gives the smallest backup file but reduces backup performance. We recommend running the backup proxies on computers with modern multi-core processors (6 cores recommended) if you intend to use this compression level.

Storage Optimizations:
• Local target. This option is recommended if you plan to use SAN, DAS or local storage as the target. SAN handles larger blocks of data and can therefore process large amounts of data at a time. This option provides the fastest backup but reduces deduplication: the larger the data block, the lower the chance of finding an identical block.
• LAN target. This option is recommended for NAS and on-site replication. It gives better dedupe and reduces the size of an incremental backup file.
• WAN target. This option is recommended if you plan to use a WAN for offsite backup. Veeam uses small data blocks, which involves significant processing overhead but results in maximum dedupe and the smallest incremental files.

Deduplication compares data block by block to see if there are identical blocks; instead of saving the redundant data, Veeam stores a reference to the existing block.

Compression looks within a single file to see if it contains runs of duplicate data. For example, a JPG image that contains the values (red pixel, red pixel, red pixel, red pixel ... red pixel) does not need to store each pixel; it can be compressed

by instead saying (red pixel x 249). Compressing such files can therefore save large amounts of space.
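The “red pixel x 249” idea is essentially run-length encoding. Here is a tiny illustrative sketch (not how JPG actually compresses; real image codecs use far more sophisticated techniques):

```powershell
# Sketch: naive run-length encoding, illustrating the "red pixel x 249" idea.
function Compress-RunLength([string]$data) {
    $result = @()
    $i = 0
    while ($i -lt $data.Length) {
        # Count how many times the current character repeats
        $run = 1
        while ($i + $run -lt $data.Length -and $data[$i + $run] -eq $data[$i]) {
            $run++
        }
        $result += "$($data[$i])x$run"
        $i += $run
    }
    $result -join ','
}

Compress-RunLength "RRRRRGGB"   # -> Rx5,Gx2,Bx1
```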




Here we can set up notifications to an e-mail address or via SNMP when a job completes, fails or raises warnings.


* Enable Hyper-V guest quiescence (if you for one reason or another cannot use VSS for backing up the VMs, check this; basically, Veeam will then pause the VMs while the backup runs)
* Take crash consistent backup (if you want the VMs to stay online while the backup is taken, with “Enable Hyper-V guest quiescence” enabled, tick this)

The difference between a regular backup (usually called an application-consistent backup) and a crash-consistent backup is, in a nutshell, this: a crash-consistent backup creates a snapshot that, when restored, produces a virtual machine similar to one that suddenly had its power turned off.

The snapshot returns the virtual machine's operating system to its pre-crash condition, but does little to preserve the consistency of open files or transactional databases residing on it. An application-consistent backup, on the other hand, ensures that all database transactions are completed and all disk transactions are flushed before the snapshot is taken. This ensures the data integrity of open files and databases in each backup.

* Changed Block Tracking

Changed block tracking drastically improves the resources used and the time spent on incremental backups. With CBT, the hypervisor keeps track of what has changed since last time, so it becomes easy for Veeam to extract only the changed data blocks.


* Integrity Checks
Checks the integrity of a backup so that you are not left with corrupt data in a repository.

* File Selective Image Processing
Here you determine whether to back up the swap file of a VM. Swap files on Windows machines are very dynamic and change often, even if the VM is otherwise idle. If you do not want to back them up, enable this option.

* Synthetic Full
Here you determine how long to keep a VM's backups after the VM has been deleted from your infrastructure.

* Post Job Activity
Optionally you can specify a script to run after the job, for example to write the backup to tape.

After we have finished there, we go back to the job setup.

* Enable application-aware image processing
As I mentioned earlier this will perform an application-aware backup.

* Enable guest file system indexing
If you want to index files in the guest VM during backup, check this. Veeam will perform file indexing, enabling you to search for VM guest OS files via the Veeam Backup Enterprise Manager web interface.

Next, create a schedule for when the backup should run.

Once that is done, you can run the backup job.

If you view the details of the job, you will see where the bottleneck is and how much data is processed.

In this case, it is the Hyper-V server that is under too much load for the job to run smoothly.

At the same time, I have enabled ordinary deduplication and compression, which saves a lot of space on the repository.

Runbook to automate computer rejoin to domain

Ever had the issue where a user (mostly working remotely) comes into the office, tries to log in with their computer, and cannot log on?
Many organizations have a policy and a script that removes computer accounts from the domain when they have not authenticated for a while (let's say 60 days).
So what happens ?

1: The user has trouble logging in (because the computer account is deleted)
2: The user contacts helpdesk
3: The helpdesk most likely needs to get hold of the computer and manually join it to the domain again.
4: Or needs to find someone else who has access to join the computer to the domain.

This is a bit time consuming, so what can we do to automate the process?
There are several approaches. The recipe I am about to describe is far from a security best practice; it is just to show you how it can be done. In addition, there are loads of different ways to achieve this.
Looking at this recipe, you will also see that I have entered the computer name manually; it is not fetched from another activity.

The recipe has some prerequisites that need to be met in order for it to work:
1: A local user on the client computer that you can use to run the script
2: The firewall opened on the client computer so we can access the admin$ shares
3: The computer has its IP configuration in place
4: The script is able to reach the computer by hostname
Now my simple runbook looks like this.

What it does is:
1: Create a folder C:\temp\script on the client computer
2: Copy my PowerShell script over from a network share
3: Run the PowerShell script from that folder (which joins the domain, waits 10 seconds, then restarts)

Now in order to have this automated you should place an “Initialize data” activity where you can enter the computer name which is then sent through the workflow.

1: Create Folder activity (needs to run as the local user account; under Details you define where the folder should be created, for instance C:\temp\script)

2: Copy File (copies the script from a network share into the newly created folder)
3: Run Program (which is based upon PsExec)

This runs the script that was copied over to the folder.

In addition, remember to set the run-as context to the local user under the Security pane.
The script is pretty simple: all it does is store some variables such as domain, username and password, join the domain,

and then run a restart command afterwards.

The script is:

$domain = "domain.domain"
$password = "password" | ConvertTo-SecureString -AsPlainText -Force
$username = "$domain\administrator"
$credential = New-Object System.Management.Automation.PSCredential($username, $password)
Add-Computer -DomainName $domain -Credential $credential

You should add a shutdown /r /t 10 at the end so the runbook has time to report back to Orchestrator before the computer restarts.
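Putting the pieces together, the whole client-side script might look like the sketch below; the domain name and account are placeholder values, and storing a plaintext password like this is exactly the security compromise discussed above:

```powershell
# Sketch: rejoin the domain, then restart with a 10-second delay so
# Orchestrator receives the runbook reply before the machine goes down.
# domain.domain and the administrator account are placeholder values.
$domain     = "domain.domain"
$password   = "password" | ConvertTo-SecureString -AsPlainText -Force
$username   = "$domain\administrator"
$credential = New-Object System.Management.Automation.PSCredential($username, $password)

Add-Computer -DomainName $domain -Credential $credential

# Delayed restart, giving the Run Program activity time to
# return its status to Orchestrator.
shutdown /r /t 10
```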

Now what could we do to enhance this runbook ?

1: An activity to delete the folder and files on the client computer (they contain a password and should not stay on the computer)
2: If we have a unique local user and password per computer, have the runbook fetch the username and password from a text file.
3: Generate a new random password for the domain-join account each time the runbook is run, then update the script.
4: Get the information from AD (I'm pretty sure the failed logons show up in the event logs on the DCs, and the process can be triggered from there)
5: Or from the SCOM ACS module, when SCOM creates an alert showing computers with these errors.
6: Give the user a notification that the process is happening and that they should save their current work.
7: Expose the process in a self-service portal (though this would let users run the task on any computer).
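For enhancement 3, a password could be generated with something like the sketch below; the character set and length are arbitrary example choices, and how the new password is written back into the script and the local account is left out:

```powershell
# Sketch: generate a random password for the domain-join account.
# Length and character set are arbitrary example choices.
function New-RandomPassword([int]$length = 16) {
    $chars = [char[]]'abcdefghijkmnpqrstuvwxyzABCDEFGHJKLMNPQRSTUVWXYZ23456789!#%&'
    -join (1..$length | ForEach-Object { Get-Random -InputObject $chars })
}

$newPassword = New-RandomPassword
# The runbook would then update the local account and the script with
# $newPassword, e.g. via ([adsi]"WinNT://$computer/$user").SetPassword(...)
```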

So in the end we might have something like this.

I'm going to dig deeper into this in the next couple of posts.

Baselines and auto remediation SCCM2012

With baselines in ConfigMgr 2012, you have the ability to check whether a client is compliant with the rules that you, the IT pro, set in your environment.
This could, for instance, be whether clients have the latest version of Java installed (I'm going to show how you can check for this later on).
You have multiple options for what you can check, it could be

* Registry
* File check
* Active Directory Query
* SQL and WQL query
* Assembly
* Script (PowerShell, JScript and VBScript)

But not every option supports auto-remediation (meaning that we can, for instance, run another script if a warning is issued).
There are other options as well: if we have configured a connection with SCSM, we can have it automatically open an incident ticket with the helpdesk for further investigation.
More on that later.
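To give an idea of what a script-based check can look like, here is a hedged sketch of a detection script for the Java example; the registry key is the standard JRE location, but treat the whole thing as an illustration of the pattern (ConfigMgr compares the script's output against a value you define in the CI rule):

```powershell
# Sketch: detection script for a script-based Configuration Item.
# Outputs the installed JRE version; ConfigMgr compares this value
# against the expected value defined in the CI rule.
$key = 'HKLM:\SOFTWARE\JavaSoft\Java Runtime Environment'
if (Test-Path $key) {
    (Get-ItemProperty $key).CurrentVersion
} else {
    'NotInstalled'
}
```

A matching remediation script would then reinstall or upgrade Java, or you can handle that with a separate deployment to a collection of non-compliant clients.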

Now, a baseline in ConfigMgr consists of one or more Configuration Items. For instance, we can have a baseline that checks multiple configurations:

* What version of antivirus is the computer running?
* Does the computer have the latest Windows updates?
* Does the computer have the latest firmware installed?

All of these Configuration items make up a baseline (let’s call it corporate laptops )

Let me start by showing an easy baseline consisting of two Configuration Items: one checks the version of Internet Explorer, and the other checks that the CCM agent is using the right Management Point. (If the values ConfigMgr finds are not the same as the ones we define, it will throw an alert.)
You can find Compliance Settings under the Assets and Compliance menu.

NOTE: The User Data and Profiles settings is new from SP1

We start by right-clicking on Configuration Items and choose create new.
Here we enter the necessary information

Next we define which platforms we want this CI to run on. (If you don't want HUGE amounts of irrelevant data, you should only pick the OSes on which you need this to run.)

Click Next -> here we define what we are actually looking for.

From here we choose “New”. Now we are going to look for
Type = File, and we can browse from a regular desktop computer; in my case I am going to look on my SCCM server.

Here I specify that the file “iexplore.exe” must have file version = 10.00 for the client to be compliant.
If we now press OK, we get back to the previous menu. Close this and go back to the ConfigMgr console, since we are now going to create a new CI from scratch.

Now we can add a new CI which does a registry check.

This will check the registry to verify that the client has configmgr.demo.local as its FSP.

Now that we are done creating the two CIs, we can save them, go back to the ConfigMgr console, and create a baseline.
We right-click on Baselines and choose Create New ->

From here, add the CIs we created earlier. We could also create multiple baselines; for instance, three baselines
where one is for laptops, one is for security compliance, and one is for the CRM system version (a baseline deployed to HR users with laptops).
Now that we have added the CIs, press OK and go back to the console; next we have to deploy this baseline to some clients.

Right-click on the baseline and press Deploy. From here we define whether we want the baseline deployed to users or computers, and when we want it to run.

We could also push this to SCOM if we wish to get some sort of alert there.
So now we just press OK. After the baseline is deployed, it might take some time before it appears on the clients (you can force it by running a policy update on the clients).
To view the baselines assigned to a client, open Control Panel and the Configuration Manager applet -> Configurations.

Here we can see that the baseline is non-compliant, and we can view an HTML report to see why.

Now, what if we want an auto-remediation policy to trigger?
Instead of getting the alert and having helpdesk follow up to fix it, we can make Configuration Manager fix it itself.

As I stated earlier, Configuration Manager can only remediate some CI rule types.

Remediate noncompliant rules when supported – Select this option if you want Configuration Manager to automatically remediate noncompliant rules. Configuration Manager can automatically remediate the following rule types:

Registry value – The registry value is remediated if it is noncompliant, and created if it does not exist.
Script (by automatically running a remediation script).
WQL Query

Now we can alter the deployment, since in some cases we want a baseline only to report noncompliance, and in other cases we want the same baseline to auto-remediate on specific clients.
So all we need to do is open the baseline deployment and alter this setting

Remediate noncompliant rules when supported

In other cases where you cannot auto-remediate, for instance if you have a baseline that checks for Java versions, you can create a dynamic collection of noncompliant clients that installs the latest version of Java.
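For the rule types that do support remediation, the script variant works by running a remediation script when the detection reports noncompliance. A sketch of what such a script could look like for the FSP registry check above; note that the registry path here is purely illustrative, not the actual location ConfigMgr stores this setting:

```powershell
# Hypothetical remediation script for a registry-value CI rule.
# $key is an illustrative path for demonstration only.
$key      = 'HKLM:\SOFTWARE\Demo\CCMSettings'
$name     = 'FSP'
$expected = 'configmgr.demo.local'

$current = (Get-ItemProperty -Path $key -Name $name -ErrorAction SilentlyContinue).$name
if ($current -ne $expected) {
    # Create the key if it does not exist, then set the value back to the compliant state
    if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }
    Set-ItemProperty -Path $key -Name $name -Value $expected
}
```

This mirrors what the built-in registry-value remediation does automatically: the value is corrected if noncompliant and created if missing.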

Trouble with ConfigMgr SP1 client

If you have had some issue with installing SP1 client on computer with the error message
Installation error 0x800b0101: System Center 2012 Configuration Manager Service Pack 1 client

Couldn’t verify ‘C:\WINDOWS\ccmsetup\MicrosoftPolicyPlatformSetup.msi’ authenticode signature. Return code 0x800b0101

There is now a hotfix available from Microsoft to download →

Managing Windows Embedded with ConfigMgr SP1

With Configuration Manager Service Pack 1 you now have the ability to manage Windows Embedded devices.
You had this ability with previous versions as well, but it is now more integrated into the console, and ConfigMgr is now “aware” of the write filter enabled on the devices.

A write filter redirects all writes intended for the disk to RAM or a disk cache.
I am going to show you how write filters affect the system and how you can deploy the ConfigMgr agent to a system with Windows Embedded.

I start by installing a Windows Embedded system.

Here I chose Build an Image; now I could pick a template or choose components from a list. Here I chose Application Compatibility.

And with Application Compatibility, the write filter is enabled. We can see that by opening cmd as admin and running ewfmgr.exe c:

We can see that the Enhanced Write Filter is enabled on volume C: and the overlay is placed in RAM, which means that all temporary writes go to RAM and are purged at reboot.
Now we could run ewfmgr c: -disable, but since that change is itself written to the overlay, it would be purged at reboot.
If I create files on the local hard drive like so.

And if I reboot now they will be purged.

Now, in order to install the ConfigMgr agent, I would have to disable the Enhanced Write Filter, restart, and then install the agent.
To shut down EWF on a Windows Embedded system, you need to run the command ewfmgr.exe c: -commitanddisable, because the disable command has to be committed to the system in order for it to take effect.
So when we reboot now we can proceed with the installation.

If we run ewfmgr c: we can now see that the write filter is disabled.
So now we can run a regular agent install.
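The manual sequence above can be sketched end to end as follows (run elevated on the embedded client; the ccmsetup parameters are assumptions for my demo site, not universal values):

```powershell
# Sketch of the manual EWF workflow for installing the ConfigMgr agent.
ewfmgr.exe c: -commitanddisable   # commit pending state and disable the write filter
Restart-Computer                  # reboot so the disable takes effect

# ...after the reboot, install the agent (site server and site code are demo assumptions):
.\ccmsetup.exe /mp:configmgr.demo.local SMSSITECODE=PS1

# Once the install has finished:
ewfmgr.exe c: -enable             # re-enable the write filter
Restart-Computer
```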

Microsoft recommends using the File-Based Write Filter, since it lets you set exceptions for files that should still retain data.
In that case, set exceptions for these files and registry keys so the client can maintain its CCM data.
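With FBWF, those exceptions are added from an elevated prompt; a sketch, assuming the default client install paths:

```powershell
# Sketch: exclude the ConfigMgr client folders from the File-Based Write Filter
# so CCM state persists across reboots (paths assume a default client install).
fbwfmgr.exe /addexclusion c: \Windows\CCM
fbwfmgr.exe /addexclusion c: \Windows\ccmcache
fbwfmgr.exe /addexclusion c: \Windows\ccmsetup
```

A reboot is needed before new exclusions take effect.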




After the agent installation is complete we can again enable the write filter.

Now we do not have to run the commitanddisable command, since the enable command goes straight to the system and not the overlay, but we still need a reboot.
We can now see the agent appearing in ConfigMgr as it should.

Now there are not many options for a Windows Embedded device; the magic happens when you deploy an application to a WES collection.
ConfigMgr will automatically handle the write filter, disabling and enabling it by default during maintenance windows. If we look at the deployment wizard you will get this option:
“Commit changes at deadline or during a maintenance window”. Remember to check this if you want to deploy to WES systems.

Now I set the deadline to now and ran a policy update on the client. Note that with Windows Embedded clients you can only deploy to computers, not users (and you are not able to make an application “available”).
We can see that the client wants to restart.

When it restarts you can see that it is in Service Mode. When we deploy an application through ConfigMgr, the device enters Service Mode, where only administrators are able to log in to the terminal.

Now there isn’t much documentation on this subject yet online.

But the capabilities ConfigMgr has for Windows Embedded cover software deployment, Endpoint Protection, software updates, and task sequences.

Ready for the Cloud? Find out with MAP

Now this post is not intended as a business-decision type of post, but more a technical look at whether your infrastructure is ready for the next level.
MAP (Microsoft Assessment and Planning Toolkit) is a tool that inventories your infrastructure, generates reports based on the data, and checks it against best practices to see if it is a match.
Microsoft recently released a new version of this tool (version 8); you can download it here →

Now the tool itself is pretty simple; it comes with a SQL Server Express instance where it stores all the data it gathers. So the first time you start it, you have to create a local database.

Now, as you can see here, you have many options for what you can check.
For instance, if we go to the Cloud option it can check whether you can move your infrastructure to Azure or your users to Office 365.
And if you choose to run performance data against your virtual hosts, you can get a report on what kind of Azure machine type your VMs correspond to.

Now if we go to the Desktop pane we can see if our computers are ready to transition to Windows 8 and Office 2013

Now you can use either WMI discovery or SCCM to get this information, and if I run a report against Windows 8 readiness it will generate a report showing which clients need hardware upgrades in order to meet the requirements.
In my case I don’t need to do anything, since I already have Windows 8.
But I’ll show you an example of a report

And the coolest part of this tool is the usage tracking bit. It allows you, for instance, to monitor how many users are actually using Lync Enterprise and how many are using Standard; with this data you can compare against the list of who has actually been given the Enterprise option. (It does this by connecting to the monitoring service in Lync.) The same goes for SQL: you can use it to monitor how many users are connected to a SQL Server within a date range.
The best part about this program is that it is free!
Many have tried it, and it gives a clear picture of your infrastructure.

Managing Windows Azure via Windows PowerShell

Windows Azure’s IaaS solution, which allows you to create and run your own virtual machines in Windows Azure, is now in a preview phase.
This gives you the whole spectrum of as-a-service solutions in Windows Azure.
IaaS means that you are responsible for the virtual machines; Microsoft handles the rest of the physical layers (which includes networking, storage, hardware, and the hypervisor).

If you look at the left side, with on-premise (which is a private cloud setup) you would manage the entire spectrum, but in this case with Azure we only manage the blue parts (IaaS).
But back to the point of this post: right now you can manage your Azure setup with the web portal, with System Center if you have it set up, or with PowerShell.
Microsoft has made a good number of cmdlets available to use against Azure, which allows more tasks to be automated.

In order to set up PowerShell against Azure, a couple of components are needed.

1: Download the PowerShell Azure cmdlets module.
You can download it from this link →
or for other OSes →


After it is installed, you have to set the execution policy in PowerShell to RemoteSigned.
You have to open PowerShell as administrator before running this command →
Set-ExecutionPolicy RemoteSigned

Now we can import the Azure module in PowerShell.
Import-Module “C:\Program Files (x86)\Microsoft SDKs\Windows Azure\PowerShell\Azure\Azure.psd1”

But before we continue, we have to upload a management certificate to the management portal.

We need to create a self-signed certificate on the local computer, which we are going to upload to the management portal (this allows authentication with Azure).
To create it we need the makecert tool, which is part of Visual Studio.

We run this command in order to create the certificate

makecert -sky exchange -r -n “CN=<CertificateName>” -pe -a sha1 -len 2048 -ss My “<CertificateName>.cer”

This certificate stores itself in the local user’s Personal certificate store on the computer.

Right-click it, choose Export →, and choose a folder to store it in.
After this, open the Azure management portal and go to Settings → Management Certificates →

And from here upload the certificate that you just created.
After it is uploaded it should appear like this

After this is complete, open the PowerShell session again and run the command Get-AzurePublishSettingsFile. This opens a browser window to Azure that downloads a small configuration file, which contains your subscription details and tells the PowerShell module where to connect.

Download the file; after that you have to import it into the PowerShell session.
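The import is done with the matching cmdlet; the file path here is just an example of where your browser may have saved it:

```powershell
# Import the downloaded .publishsettings file (path is an example)
Import-AzurePublishSettingsFile "C:\Users\demo\Downloads\mysubscription.publishsettings"
```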

After this is done, we can start to use PowerShell against Azure. If we run the command
Get-Command -Module Azure we will list all the commands available to use against Azure

And for instance we can start and stop VMs running there →
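For example, listing and then stopping and starting a VM could look like this (the cloud service and VM names are placeholders from my setup):

```powershell
# List all VMs in the current subscription
Get-AzureVM

# Stop and start a specific VM (service and VM names are placeholders)
Stop-AzureVM  -ServiceName "mycloudservice" -Name "vm01"
Start-AzureVM -ServiceName "mycloudservice" -Name "vm01"
```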

Will come back with more in a later post.