Monthly Archives: February 2014

Configuration Manager with App-V and SCS

I have been encountering some cases lately regarding the use of App-V and Shared Content Store deployed from Configuration Manager. There are a couple of things worth taking note of here.

* Shared Content Store is a client configuration!

This can be enabled during deployment of the App-V client by using this command:

appv_client_setup.exe /SHAREDCONTENTSTOREMODE=1 /q

Or using PowerShell

Set-AppvClientConfiguration -SharedContentStoreMode 1

The Shared Content Store is very useful in RDSH / XenApp and VDI implementations since it does not store App-V packages in the local cache. Instead of caching frequently accessed packages, it uses NTFS pointers that point to a network share.

And when using App-V with Citrix PVS, this is the way to go!

The problem arises when using Configuration Manager to deliver the packages to the App-V clients. You have two ways to deliver App-V packages: either by streaming the content from the distribution point into the cache, or by downloading the packages from the distribution point and then running them.

image

The important thing to note here is that the deployment type must be "Stream content from distribution point" and not "Download content from distribution point"! Since the Configuration Manager agent handles the App-V delivery, the download option will ignore the SCS setting in the App-V client.

Internet Explorer 10 and WebCache with Roaming profiles

I recently worked on an issue with Internet Explorer 10, its WebCache and roaming profiles. The trouble with IE10 is that it stores its cache in the local AppData folder (the same goes for IE11); with IE9 we do not have this issue.

This folder –> %LocalAppData%\Microsoft\Windows\WebCache

Here it stores the frequently accessed sites, passwords and so on.
The problem for regular roaming users is that the %LocalAppData% folder is not part of the roaming profile and will always stay on the server the user accessed. So the problem occurs when a user accesses a new server and again has to build up their IE10 resources from scratch.

One solution is to create a group policy which alters the default behaviour so that local AppData is allowed to be part of the roaming profile.

[HKEY_CURRENT_USER\Software\Microsoft\Windows NT\CurrentVersion\Winlogon]

"ExcludeProfileDirs"="AppData\Local;AppData\LocalLow;$Recycle.Bin"

Change to:

[HKEY_CURRENT_USER\Software\Microsoft\Windows NT\CurrentVersion\Winlogon]

"ExcludeProfileDirs"="AppData\LocalLow;$Recycle.Bin"

This allows the local AppData folder to be synchronized with roaming profiles as well. This has its advantages since it is easy to set up and easy to change with Group Policy, but it syncs everything inside the folder, so you might need to keep an eye on the size of the roaming profiles. You can also use the deduplication feature in Windows Server 2012 to save some space.
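To make the exclude-list behaviour concrete, here is a small Python sketch (a hypothetical helper for illustration, not anything Windows ships) of how a profile-relative path is matched against the ExcludeProfileDirs value:

```python
# Sketch: decide whether a profile-relative folder roams, given the
# semicolon-separated ExcludeProfileDirs value (illustrative helper only).
def roams(relative_path, exclude_profile_dirs):
    normalized = relative_path.replace("/", "\\").lower()
    for excluded in exclude_profile_dirs.split(";"):
        excluded = excluded.strip().lower()
        if excluded and (normalized == excluded
                         or normalized.startswith(excluded + "\\")):
            return False  # excluded: stays local on each server
    return True  # synchronized with the roaming profile

default = r"AppData\Local;AppData\LocalLow;$Recycle.Bin"
trimmed = r"AppData\LocalLow;$Recycle.Bin"
webcache = r"AppData\Local\Microsoft\Windows\WebCache"
print(roams(webcache, default))  # False: WebCache is left behind
print(roams(webcache, trimmed))  # True: WebCache now roams
```

With the default value the whole of AppData\Local, WebCache included, stays on each server; removing AppData\Local from the list is exactly what makes it roam.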

Another solution is to create a symlink; here we can do it for a particular folder in order to reduce the size of the roaming profile.

mklink /D "C:\Users\user\AppData\Local\Microsoft\Internet Explorer" \\roamingprofilefolder

This needs to be run as administrator in order to take effect, so if you have RES for instance, you can run this at logon to set up the link.

Using data dedupliation with Configuration Manager

Microsoft recently published a blog post on how we can use data deduplication with Configuration Manager: http://blogs.technet.com/b/configmgrteam/archive/2014/02/18/configuration-manager-distribution-points-and-windows-server-2012-data-deduplication.aspx

The reason you would use data deduplication is to save space: it works on chunks within files, allowing us to remove redundant chunks across the files on a volume.

So instead of storing a file with the chunks A B C D E F F F, we would just store A B C D E F plus references to the single F chunk, in simple terms. I have written about how to use data deduplication in one of my previous posts; it shows how to set it up and trigger a schedule using PowerShell: http://msandbu.wordpress.com/2012/09/12/windows-server-2012-storage-redefined-part-1/
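In Python terms, a minimal sketch of the idea (fixed-size chunks purely for illustration; Windows deduplication actually uses variable-size chunking):

```python
# Sketch: fixed-size chunk deduplication. The file becomes a list of
# references into a store of unique chunks.
def dedupe(data, chunk_size=4):
    store = {}   # unique chunk -> chunk id
    refs = []    # file contents as a sequence of chunk ids
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        if chunk not in store:
            store[chunk] = len(store)
        refs.append(store[chunk])
    return store, refs

data = b"ABCDEFFF" * 100          # 800 bytes of highly redundant data
store, refs = dedupe(data)
unique_bytes = sum(len(c) for c in store)
inverse = {i: c for c, i in store.items()}
assert b"".join(inverse[r] for r in refs) == data   # rehydrates losslessly
print(f"{len(data)} bytes stored as {unique_bytes} bytes of unique chunks")
```

The redundant data collapses to two unique chunks; the real feature adds compression and garbage collection on top, but the space saving comes from exactly this kind of reference sharing.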

In terms of using it with Configuration Manager, there are a couple of things you should note. Data deduplication is supported on a distribution point, but not on the package source files. This means we can use deduplication on the volumes where the content library is located. That allows us to reclaim a good amount of storage on our distribution points, but it requires that the server is running Windows Server 2012 or 2012 R2.

Introducing Atlantis ILIO USX

So Atlantis recently held a webinar where they went through the different aspects of their new product, USX.

USX is an extension of their previous product ILIO, with more storage features that allow them to tier and combine different forms of storage, with the additional benefit of having RAM included in the mix.

The webinar was last week and has already been published on YouTube –>

I suggest you check this out if you are interested in Software Defined Storage!

Dell VRTX first impressions, and news on Windows Storage Spaces

I am seeing a lot of questions about this, both out in the market and on social media, so I just had to write a post about it 🙂

Times change, and so does technology. Over the last year there has been a lot of focus on converged infrastructure, which in essence means combining multiple components into a single solution. Traditional solutions are usually based on 3 main components: servers, networking and storage.

Drawing1

So if you are setting up a virtualization environment, you depend on these 3 components. All the big vendors are doing a lot here to put solutions in place; for example, Storage Spaces in Windows Server means you are not as dependent on an expensive SAN to get good performance for running a virtual environment.

At the same time, as networking becomes more and more virtualized, you will start to manage it directly from the virtualization platform instead of on the device itself. But even with all this technology in place, you still depend on the 3 hardware components being there, and this is where converged infrastructure comes in.

Dell has designed a new converged solution called PowerEdge VRTX, aimed at larger customers that need local infrastructure in branch offices, or as a complete virtual datacenter for small to medium businesses. It can actually sit under a desk, since it can be delivered with wheels. You might think it would be noisy, but no, it is as quiet as a regular desktop or laptop. What is unique about the VRTX is that it combines the 3 components (servers, networking and storage) in one chassis.

No compromise on scalable performance

As an example, it can consist of 4x M520 servers (specific VRTX M520 servers, since the VRTX editions have their own firmware) with 12x 3.5" disks. All the servers in the chassis share a midplane, so every server can access the storage and networking. The storage is presented to the servers as SAS disks, and via the VRTX management console we define which server node has access to which storage devices. It also comes with 3 full-length PCIe slots, so once it becomes supported we can, for example, insert an Nvidia GRID card. In addition, the VRTX comes with 5 half-length PCIe slots. A total of 8 PCIe slots extends the use cases: you can attach external storage if you outgrow the VRTX, as well as various GPUs and network cards.

There are several possibilities here, depending on what it will be used for.
We can, for example, use it as a scale-out file server with SMB 3.0, with two nodes as scale-out file servers and two nodes as Hyper-V servers. Then we have a complete virtual environment running on a single solution. With 2012 R2 we can also use SSD disks as part of the solution to get Storage Tiering functionality (see my earlier article –> https://msandbu.wordpress.com/2013/09/18/storage-tiering-for-scale-out-file-server-jbod-sas/) for even better storage performance.

Another option is to deliver VDI solutions: with an Nvidia GRID graphics card installed, for example, it is entirely possible to set up a complete VDI solution with storage and the ability to deliver "heavy" graphical applications via either Citrix or Microsoft RemoteFX. So it will be exciting to follow this going forward, and to see how it plays together with Microsoft!

I also see that Dell and Microsoft have entered into closer cooperation, both on storage and on delivering VDI solutions. Dell recently announced hardware support for Storage Spaces in Windows Server with their PowerVault solution, which makes it easier to set up inexpensive storage for Hyper-V virtualization environments. You can read more in the Microsoft announcement here –> http://blogs.technet.com/b/server-cloud/archive/2014/02/11/today-dell-announced-hardware-support-for-storage-spaces-in-windows-server.aspx

If you have any questions, just leave a comment! 🙂

Atlantis releases USX

So this is hot news! For those who aren't familiar with Atlantis, it is a company that appeared 4 years ago and has focused on one thing: RAM as primary storage.

They have had the ILIO product around for a couple of years now, aimed at the VDI and XenApp market.

ILIO is a virtual appliance which runs on top of either Hyper-V or VMware ESX, where it takes physical RAM from the host and presents it out as storage to the hypervisor (either as iSCSI or NFS). On top of this it can do inline deduplication and compression, to mention a few features. This allows for a great performance improvement and offloads the back-end SAN.

So that was a quick overview of ILIO. With USX, Atlantis takes it a bit further. They describe it as Software Defined Storage, meaning that we will be able to mix and match a bunch of different storage vendors and drives, pool them into a single resource, and have the ability to run from RAM as well.

From Atlantis’s own website

Atlantis ILIO USX (Unified Software-defined Storage)—the industry’s first in-memory software-defined storage solution, enabling IT to support up to 5x more VMs on the existing storage resources they already have today, dropping the cost of storage by up to 50%.

Atlantis ILIO USX gives IT the flexibility to get more out of existing storage investments, and creates new software-defined storage hybrid arrays, hyper-converged systems, and all-flash arrays by aggregating and pooling the SSDs, SAS, flash, and RAM for any number of servers.

Atlantis ILIO USX eliminates the inefficiencies of storage silos, that have been created to support specific applications, by unifying all storage types into a highly-optimized pool of storage resources that are made available to all applications. Policy-based control then optimizes capacity, availability and performance based on application needs, resulting in lower storage costs and better VM performance.

source: http://www.atlantiscomputing.com/solutions/overview

They have also added a technical FAQ which shows a bit about the flexibility this is going to deliver.

Q. What type of application workloads does Atlantis ILIO USX support?
A. Atlantis ILIO USX was designed from the ground up to support any server workload and has a range of storage volume types that provide optimal capacity reduction, performance and availability for the target application. It is suitable for a wide variety of application workloads from MS-SQL to Exchange and big data workloads such as Hadoop.

Q. What type of storage can be used?
A. Any storage that can be presented to the hypervisor can be pooled and optimized by Atlantis ILIO USX. This includes SAN, NAS, flash arrays and local DAS including SATA, SAS, flash, SSD and RAM.

Q. Do you provide High Availability (HA)?
A. Yes. Atlantis ILIO USX provides integrated HA and data protection for the Atlantis ILIO USX storage volume. There is no single point of failure. Atlantis ILIO USX HA has no reliance on external HA functionality provided by the hypervisor layer. Customers can still use the hypervisor’s HA and DRS functionality to provide VM level protection.

Q. Can Atlantis ILIO USX create a hyperconverged infrastructure (storage and compute combined)?
A. Yes. Atlantis ILIO USX can pool local server resources such as SAS, Flash, SSD and RAM to create a hyper-converged infrastructure.

Q. Does Atlantis ILIO USX support VMware vSAN?
A. Yes, Atlantis ILIO USX can pool VMware vSAN along with any other types of storage and provide optimization that improves performance and reduces capacity utilization for vSAN.

Q. Can I pool storage between public and private clouds?
A. Yes, this has been tested with Amazon Storage Gateway, EC2 and S3. For solution details, please contact Atlantis Computing.

Q. What hypervisors are supported?
A. Atlantis ILIO USX has been designed to be hypervisor agnostic. The initial release will support VMware vSphere 5.x or later. Additional hypervisor platform support will be delivered in a future release. Atlantis ILIO Desktop Virtualization products are already available on VMware vSphere, Microsoft Hyper-V and Citrix XenServer.

Q. Are all writes committed to physical storage?
A. Atlantis ILIO USX has the option to configure a storage volume to provide different levels of protection. All storage volume types have the capability of guaranteeing that writes are committed down to physical storage before being acknowledged back to the application layer.
Q. What are the minimum requirements

 

You can read more about it here –> http://www.atlantiscomputing.com/downloads/Atlantis_ILIO_USX_Solution_Brief.pdf

Automating Citrix Netscaler with PowerShell

This is something I have been wanting to do for some time now, and now that I am doing a lot of research for my upcoming book, the subject popped up in my head… how can we automate the setup of a Citrix Netscaler?

Citrix Netscaler has the NITRO protocol, which is in essence a REST interface, meaning that we have an API to communicate with on the Netscaler. We can also build custom applications using C# and Java, since the NITRO SDK comes with common libraries for both.

You can download the Netscaler SDK for each build from mycitrix.com.
Link to the latest SDK –> http://www.citrix.com/downloads/netscaler-adc/sdks/netscaler-sdk-release-101.html
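Since NITRO is plain REST under the hood, you are not forced to go through the SDK at all; any HTTP client will do. Here is a small Python sketch that only builds the NITRO login request (the endpoint path and body shape follow the NITRO REST documentation; the NSIP and credentials are the same lab values used below, and the sketch deliberately does not send anything):

```python
import json

# Sketch: build the NITRO REST login request by hand, without the SDK.
# POST /nitro/v1/config/login with {"login": {"username", "password"}}.
def nitro_login_request(nsip, username, password):
    url = f"http://{nsip}/nitro/v1/config/login"
    headers = {"Content-Type": "application/json"}
    body = json.dumps({"login": {"username": username, "password": password}})
    return url, headers, body

url, headers, body = nitro_login_request("192.168.88.3", "nsroot", "nsroot")
print(url)  # http://192.168.88.3/nitro/v1/config/login
```

You could feed the result to any HTTP library; the SDK route below performs the same handshake for you and wraps the rest of the API in typed classes.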

Extract the Csharp tar file and browse into the lib folder. Here we have to import the two library files.

$path1 = Resolve-Path Newtonsoft.Json.dll
[System.Reflection.Assembly]::LoadFile($path1)
$path = Resolve-Path nitro.dll
[System.Reflection.Assembly]::LoadFile($path)

After we have imported the library files we can start a connection to the Netscaler. First off, we can either hard-code the variables NSIP, username and password, or use the Read-Host cmdlet. In this example the NSIP of the Netscaler is set to 192.168.88.3 and the username and password are the default nsroot 🙂 As you can see, security is my top priority 🙂

$nsip = "192.168.88.3"
$user = "nsroot"
$pass = "nsroot"

$nitrosession = New-Object com.citrix.netscaler.nitro.service.nitro_service($nsip,"http")
$session = $nitrosession.login($user,$pass)

This object is the one that contains the common services for the Netscaler, for instance:

  • Login / Logout
  • Save Config
  • Restart
  • Enable / Disable features

If we wanted to do a restart, for instance, we would use the same object. Here are some examples to save the config and restart:

$session = $nitrosession.save_config()

$session = $nitrosession.reboot($true)

Since the object is already loaded, we can just run the commands directly. These are just a few examples (refer to the SDK documentation for info about all the classes).
So what are some of the basic configurations we need to do on a Netscaler? First off, we need to change the default hostname:

$hostname = New-Object com.citrix.netscaler.nitro.resource.config.ns.nshostname
$hostname.hostname = "NSpowershell";
$ret_value=[com.citrix.netscaler.nitro.resource.config.ns.nshostname]::update($nitrosession,$hostname)

Next we should also add a DNS server to the Netscaler so it can do hostname lookups.

$dns = New-Object com.citrix.netscaler.nitro.resource.config.dns.dnsnameserver
$dns.ip = "192.168.88.10";
$ret_value=[com.citrix.netscaler.nitro.resource.config.dns.dnsnameserver]::add($nitrosession,$dns)

And then, if we want to do load balancing, we first need to add a server or two that we want to load-balance.

$server1 = New-Object com.citrix.netscaler.nitro.resource.config.basic.server
$server1.name = "Powershell";
$server1.ipaddress = "192.168.88.100";
$ret_value=[com.citrix.netscaler.nitro.resource.config.basic.server]::add($nitrosession,$server1)

Next we need to bind that server to a service.

$service1 = New-Object com.citrix.netscaler.nitro.resource.config.basic.service
$service1.name = "IIS";
$service1.servicetype = "HTTP";
$service1.monitor_name_svc = "http";
$service1.port = "80";
$service1.servername = "Powershell";
$ret_value=[com.citrix.netscaler.nitro.resource.config.basic.service]::add($nitrosession,$service1)

And lastly create a load balanced vServer and do a service to vServer binding.

$lbvserver1 = New-Object com.citrix.netscaler.nitro.resource.config.lb.lbvserver
$lbvserver1.name = "lbvip_sample";
$lbvserver1.servicetype = "HTTP";
$lbvserver1.port = "8080";
$lbvserver1.ipv46 = "192.168.88.25";
$lbvserver1.lbmethod = "ROUNDROBIN";
$ret_value=[com.citrix.netscaler.nitro.resource.config.lb.lbvserver]::add($nitrosession,$lbvserver1)

$lb_to_service = New-Object com.citrix.netscaler.nitro.resource.config.lb.lbvserver_service_binding
$lb_to_service.name = "lbvip_sample";
$lb_to_service.servicename = "IIS";
$ret_value=[com.citrix.netscaler.nitro.resource.config.lb.lbvserver_service_binding]::add($nitrosession,$lb_to_service)

And of course, lastly, remember to save the Netscaler config using the save_config() method shown earlier.

So there you have it, some example Netscaler/PowerShell commands! I am just getting started here myself, so I will return when I have some more useful commands, and I am going to make a custom setup script as well 🙂

Configuration Manager support center

Yesterday, Microsoft released a beta version of a new support tool for Configuration Manager called Configuration Manager Support Center, which at the moment can be downloaded from Microsoft Connect –> http://connect.microsoft.com/ConfigurationManagervnext/Downloads/DownloadDetails.aspx?DownloadID=52192

This tool supports every OS that Configuration Manager 2012 R2 supports. It requires that you install the tool on a system that has the Configuration Manager client installed, or you will get messages that it cannot connect. It also requires .NET 4.5 in order to run.

The tool gives us a good overview of the logs, WMI, certificates and registry of a client, so first we have to start a collection of the different components.

image

It also gives a good overview of basic troubleshooting stuff

image

And it has a built-in CMTrace as well 🙂

image

So a good tool to have in your toolset. Remember, it is still in beta, so give it a try!

Cross-platform monitoring with System Center Operations Manager

First off, this is a looong post 🙂

This is a subject I actually presented at the NIC conference in Norway in January: how we can use Operations Manager to monitor workloads other than Microsoft / Windows. Most enterprises have a lot of different platforms, such as Linux, VMware, Citrix, Cisco and Microsoft, and of course many are looking towards cloud solutions such as Amazon and Azure.

So I am going to show briefly, for each topic, how we can use Operations Manager to monitor all of these solutions.

Now, by itself, Operations Manager has an extensive list of monitoring options for Microsoft workloads, such as:

* Exchange
* SharePoint
* System Center
* Lync
* Active Directory

You can find a comprehensive list of Management Packs available for Operations Manager here –> http://social.technet.microsoft.com/wiki/contents/articles/16174.microsoft-management-packs.aspx

And of course there is support for network devices and some Unix/Linux distros.

The list of supported network devices is here –> http://www.microsoft.com/en-us/download/details.aspx?id=26831 Note that Operations Manager uses SNMP and ICMP for monitoring network devices.

For Unix/Linux-based systems there is a newly added management pack –> http://www.microsoft.com/en-us/download/details.aspx?id=29696
It supports CentOS, SUSE Linux, Red Hat, Solaris, Ubuntu and so on.

Now, all the options I've listed so far are built-in capabilities. Operations Manager works with agents (except for network devices): you have an agent installed, you import a management pack which contains the logic, such as rules, alerts, views and reports, and you start getting notifications.

So when monitoring for instance Hyper-V, we need an agent installed on our Hyper-V hosts plus the Hyper-V management pack. There is also a VMM management pack which gives us a more detailed overview of our Hyper-V / cloud infrastructure.
Hyper-V

image

VMM

image

Monitoring Citrix Netscaler

For network devices, we need the SNMP service installed on our management server. This can be done using Server Manager or the PowerShell command:

Install-WindowsFeature SNMP-Service

After that is done, we configure the service to allow SNMP packets from the relevant hosts.

image

After this is done, we have to make some changes on the network device. If we want to monitor a Citrix Netscaler, for instance, we first need to download the Netscaler management pack from Citrix. If we have a Netscaler running in our environment, there is a download pane in the GUI.

image

And download the management pack

image

Then import the management pack into SCOM, which can be done under Administration –> Management Packs –> Import.

Then we have to add some SNMP configuration to the Netscaler to allow it to communicate with SCOM. This can be done using the CLI:

image

The community string is used for authentication against the SCOM server. Next we need to run a network discovery rule.

Make sure that the default Run As account here uses the same community string we entered on the Netscaler.

image

Then, under Devices, enter the IP address, choose SNMP version 1 / 2 and bind the Run As account.

image

After we have run the discovery, the Netscaler device appears in our infrastructure under Network Devices.

image

Monitoring XenDesktop

Monitoring XenDesktop 7.x requires a management pack from a Citrix partner called ComTrade, who make management packs for most of the Citrix products. The setup is pretty basic: install the agent they provide on the XenDesktop Delivery Controller and on the management server, and add a license.

image

Import the management packs for XenDesktop. We also have to define the agent installed on the XenDesktop Delivery Controller as a proxy; this allows it to fetch data outside of its own object.

And voila, we have a custom view for XenDesktop which gives us a good overview of the site, and we can also see how many sessions are on the site.

image

As part of the transition to the cloud, many are looking at a hybrid cloud solution combining on-premises infrastructure with a public cloud provider, but one of the problems that appears is monitoring the services running at the cloud provider.

Monitoring XenServer

Again, since this is a Citrix product, it requires a management pack from ComTrade. XenServer runs on its own customized dom0 operating system, so we cannot use the regular Unix/Linux management pack to monitor it. On the other hand, using the management pack from ComTrade gives us the total overview.

In order to monitor XenServer we need a regular server running as a proxy agent. This server will run the XenServer management proxy, which connects to the XenServer pool, gathers data and reports back to the management server.

First we again need to set up a connection to the pool from the proxy agent.

image

Then enter a license (or else the agent will not forward any information at all)

and voila!
image

 

Monitoring Azure

Monitoring services in Azure is not as easy as it seems. We can use a site-to-site VPN and have an agent installed on all VMs running there, or set up a gateway server, but this only covers the virtual machines and not the other role types.

Luckily, Microsoft has created a management pack that we can use to monitor Azure services directly from Operations Manager. You can find it here –> http://www.microsoft.com/en-us/download/details.aspx?id=38414

After importing the management pack we get a new pane under Administration called Windows Azure, where we have to configure Operations Manager against the Azure account we wish to monitor.

Here we have to enter a subscription ID and a management certificate for our account.

After we are done here, we can go to Authoring and set up Azure monitoring. Since it does not start monitoring objects in Azure by default, we have to define which objects it should monitor.

Here we can monitor our cloud services, subscription, virtual machines and storage containers. After we have configured what we want it to monitor, it will start generating alerts.

image

Monitoring Amazon Web Services

Amazon has done a good job with its management pack for Amazon Web Services (which can be downloaded from here –> https://aws.amazon.com/windows/system-center/).

It contains good information and gives a good overview of most of your infrastructure running in Amazon.

To set up monitoring, import the management pack, go into the Authoring pane and run the Amazon Web Services template under Management Pack Objects. Here we need to define a watcher node (which will be used to communicate with Amazon) and define a Run As account.
The Run As account should be in the form of an Access Key ID and Secret Access Key using Basic Authentication.

After we have that set up, it will start gathering info and monitoring objects as they appear.

image

Monitoring Unix/Linux agents

Monitoring Unix/Linux requires that we import the management pack for monitoring Unix/Linux, which can be found here –> http://www.microsoft.com/en-us/download/details.aspx?id=29696

Now, in my case I want to monitor Ubuntu, so I need to use the Universal Linux MP, since Ubuntu does not have its own management pack. After I've imported that, I have to set up two accounts under Administration –> Unix/Linux Accounts.

One for agent maintenance and one for monitoring. Both of these have to be bound to a profile. (You can see more about the accounts which need to be defined here –> http://technet.microsoft.com/en-us/library/hh287150.aspx)

After that we have to set up a discovery (note that the Linux server needs to be entered with a DNS name).

image

Monitoring VMware

Monitoring VMware from Operations Manager requires a management pack from Veeam.
The management pack requires some extra components installed on a server which has an Operations Manager agent installed. This server is used to communicate with vCenter and get info from the VMware environment.

These components are web services which allow the communication flow:

• Veeam VMware Collector

• Veeam Virtualization Extensions Services

• Veeam Virtualization Extensions UI

(These components can be installed on the same server)

After these components have been installed, we have to set up the connection to vCenter from the Virtualization Extensions web UI.

image

After this is done we will start to get information into Operations Manager.

image

 

There are also other management packs on Microsoft Pinpoint covering third-party products we can monitor from Operations Manager.
Many third-party vendors do not have their management pack available on Pinpoint, so contact your vendor if you are unsure whether they have one. Note that this post is just to show the possibilities we have with Operations Manager; too many management packs will in many cases slow down your setup, and they require a lot of tuning before they work the way you want 🙂