Q: What’s your Windows template approach?

November 7th, 2010 by jason

Once upon a time, I was a Windows Server administrator.  Most of my focus was on Windows Server deployment and management.  VMware virtualization was a strong interest, but my Windows responsibilities dwarfed the amount of time I spent with VMware.  One place where these roads intersect is Windows templates.  Because a large part of my job was managing the Windows environment, I spent time maintaining “the perfect Windows template”.  The following were the ingredients I incorporated:

Applications
- Adobe Acrobat Reader
- Advanced Find & Replace
- Beyond Compare
- Diskeeper
- MS Network Monitor
- MS Resource Kits
- NTSEC Tools
- Latest MS RDP Client
- Symantec Anti-Virus CE
- MS UPHClean
- VMware Tools
- Windows Admin Pack
- Windows Support Tools
- Winzip Pro
- Sysinternals Suite
- Windows Command Console
- BGINFO
- CMDHERE
- Windows Perf Advisor
- MPS Reports
- GPMC
- SNMP

Tweaks
- Remote Desktop enabled
- Remote Assistance disabled
- Pagefile
- Complete memory dump
- DIRCMD=/O environment variable
- PATH tweaks
- taskmgr.exe in startup, run minimized
- SNMP
- Desktop preferences
- Network icon in System Tray
- Taskbar preferences
- C: 12GB, D: 6GB
- Display hardware acceleration set to Full*
* = if necessary    
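
Most of these tweaks lend themselves to scripting rather than hand configuration each time the template is rebuilt.  As a rough sketch of the idea (my own illustration, not part of the build process above), the Python snippet below applies two of them, enabling Remote Desktop and setting the machine-wide DIRCMD=/O environment variable, by writing the standard registry values.  It assumes it runs elevated inside the guest.

    import winreg

    # Enable Remote Desktop connections (fDenyTSConnections = 0).
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                        r"SYSTEM\CurrentControlSet\Control\Terminal Server",
                        0, winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "fDenyTSConnections", 0, winreg.REG_DWORD, 0)

    # Set a machine-wide DIRCMD=/O environment variable so "dir" output is sorted.
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                        r"SYSTEM\CurrentControlSet\Control\Session Manager\Environment",
                        0, winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "DIRCMD", 0, winreg.REG_SZ, "/O")

On older guests the module was named _winreg rather than winreg, but the registry paths and value names are the same.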

 

VMware virtualization is now, and has been for going on two years, my main focus.  By title, I’m no longer a Windows Server administrator, and I don’t care to spend a lot of time worrying about what’s in my templates.  I don’t have to worry about keeping several applications up to date.  In what I do now, it’s actually more important to consistently work with as generic a Windows template as possible.  This ensures that projects I’m working on from the virtualization side aren’t skewed by any of the 30+ changes listed above.  Issues would inevitably appear, and each time I’d counterproductively have to work through the lists above as possible culprits.  As such, I now take a minimalist approach to Windows templates as follows:

Applications
- VMware Tools

Tweaks
- C: 20GB
- VMXNET3 vNIC
- Activate Windows
- wddm_video driver*
- Disk alignment
- Display hardware acceleration set to Full*
* = if necessary    
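
Deployment from a generic template like this also lends itself to scripting.  What follows is a minimal sketch of my own using pyVmomi, the Python bindings for the vSphere API; the vCenter address, credentials, template name, and cluster name are hypothetical placeholders, and in practice a guest customization spec would be attached to handle sysprep, naming, and IP settings.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    def find_by_name(content, vimtype, name):
        # Walk the vCenter inventory for the first object of this type with a matching name.
        view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
        for obj in view.view:
            if obj.name == name:
                return obj
        return None

    si = SmartConnect(host="vcenter.example.com", user="administrator", pwd="secret",
                      sslContext=ssl._create_unverified_context())
    try:
        content  = si.RetrieveContent()
        template = find_by_name(content, vim.VirtualMachine, "w2k8r2-minimal")
        cluster  = find_by_name(content, vim.ClusterComputeResource, "Prod-Cluster")

        spec = vim.vm.CloneSpec(
            location=vim.vm.RelocateSpec(pool=cluster.resourcePool),
            powerOn=True,
            template=False,  # the result is a running VM, not another template
        )
        # Deploy next to the template; a guest customization spec (CloneSpec.customization)
        # could handle sysprep, computer naming, and IP settings during the clone.
        task = template.CloneVM_Task(folder=template.parent, name="new-server-01", spec=spec)
        print("Clone task submitted:", task.info.key)
    finally:
        Disconnect(si)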

 

In large virtualized environments, templates may end up in various repositories due to network segmentation, firewalls, storage placement, and so on.  As beneficial as templates are, keeping them up to date can become a significant chore, and the time spent doing so eats away at the time savings they provide.  Deployment consistency is key to reducing support and incident costs, so keeping templates in distributed locations consistent is not only a chore, it is of paramount importance.  If this is the scenario you’re fighting, automated template and/or storage replication is needed.  Another option is to get away from templates altogether and adopt a scripted installation, another tried-and-true approach which provides automation and consistency without the hassle of maintaining templates.  The hassle isn’t eliminated completely, though; it’s shifted into other areas such as maintaining PXE boot services, PXE images, and post-build/application installation scripts.  I’ve seen large organizations go the scripted route in lieu of templates.  One reason could simply be that scripted virtual builds are strategically consistent with the organization’s scripted physical builds.  Another could be the burden of maintaining templates, as I discussed earlier.  Is this a hint that templates don’t scale in large distributed environments?
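
To make the replication idea concrete, here is a rough sketch (again my own illustration, using assumed site, cluster, datastore, and template names) that keeps per-site copies of a master template in sync by re-cloning it to each site’s cluster and template datastore.  Run on a schedule, something like this keeps the distributed copies from drifting apart.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Hypothetical site map: site label -> (cluster name, template datastore name)
    SITES = {
        "siteA": ("SiteA-Cluster", "siteA-templates"),
        "siteB": ("SiteB-Cluster", "siteB-templates"),
    }

    def find_by_name(content, vimtype, name):
        view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
        for obj in view.view:
            if obj.name == name:
                return obj
        return None

    si = SmartConnect(host="vcenter.example.com", user="administrator", pwd="secret",
                      sslContext=ssl._create_unverified_context())
    try:
        content = si.RetrieveContent()
        master  = find_by_name(content, vim.VirtualMachine, "w2k8r2-minimal")
        for site, (cluster_name, ds_name) in SITES.items():
            cluster = find_by_name(content, vim.ClusterComputeResource, cluster_name)
            ds      = find_by_name(content, vim.Datastore, ds_name)
            spec = vim.vm.CloneSpec(
                location=vim.vm.RelocateSpec(pool=cluster.resourcePool, datastore=ds),
                powerOn=False,
                template=True,  # keep the copy marked as a template at the remote site
            )
            master.CloneVM_Task(folder=master.parent,
                                name="w2k8r2-minimal-%s" % site, spec=spec)
    finally:
        Disconnect(si)

In practice the previous copy at each site would be removed or renamed first and the clones would land in per-site folders, but it shows the shape of the approach.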

Do you use templates?  If so, how does your approach compare to what I’ve written about?


Comments

  1. CianoKuraz says:

    Hi, nice post
    I personally use templates. I found them to be the best solution for our production datacenter. We grow by 20-25 VM servers per month; the VMs are Windows 2008 R2 and Red Hat 5 (they make up 90% of the OSes deployed). I find templates very easy to manage, easy to update with UM, and easy to deploy. My initial setup is almost the same as yours, although with Windows 2008 R2 I use a C: drive of 40 GB (I already know that with 20 GB, just one minute after the deploy the user will call me to expand the disk :), I think this is a common issue). Just one more thing: I usually use thick disks (due to some organizational constraints) and rely on deduplication on the storage side.
    bye

  2. Tim Oudin says:

    Very similar to your current situation, I am attempting to maintain as few templates as possible while still having the resources available for my team. Unfortunately that means a minimum of 6 Windows and about 10 Linux templates at each site. We have started exploring, and have implemented, Microsoft Deployment Toolkit as a solution for deployment of Windows images. We’re still in the early stages of working through the feasibility of MDT versus a template strategy, though the huge upside is that MDT has absolutely no ties whatsoever to an ESX host!

  3. Ed Grigson says:

    Interesting post. I found myself thinking about this at VMworld this year when everyone was talking cloud and I was thinking ‘but we still have plenty of post deployment tasks which aren’t automated’…

    Simplicity is the main driver for us, as we don’t have a large team to maintain templates. We started off with three template stores (one at each primary site) but quickly found that version control was too time consuming, so we’ve reverted to a single template store. Luckily our bandwidth is OK, so we can clone to a remote site without a huge time penalty.

    Like you, we now keep our templates relatively clean. Some software agents don’t play well with templates (WSUS, Backup Exec and some AV agents need reinstalling after sysprep for Windows servers) and we prefer to avoid too many specialised templates in case we need to update the base configuration. Because of this, however, there are more post-template tasks to complete, which we need to script. Given this I can see the appeal of a totally scripted approach: if we’re going to maintain scripts, why maintain templates too?

  4. Brandon says:

    No PVSCSI on your Windows templates? There previously were issues with low I/O VMs, but that is resolved as of 4.1. Both PVSCSI and VMXNET3 are also compatible with FT as of 4.1, just in case that is a concern.

    As for the OS itself, on 2003 I also tend to kill off all the accessories and minimize things I find useless on a server, much like Windows 2008 is by default. +1 for BGINFO; it is worth tweaking that thing for your environment’s “unique qualities” for sure. I always made sure to display the manufacturer so one could quickly differentiate a VM from a physical server when hitting it via RDP.

    As for moving away from templates completely, I don’t know if that is a good idea either. There are so many VM settings that need to be kept up with… boot delay set to at least 5000, BIOS settings to disable the unnecessary COM ports, and such. I remove the floppy controller too. Plus, if you follow VMware’s security guide there are a plethora of advanced options that are a pain to hand-jam every time. I guess a mix of both would be a good idea: a template for everything that remains “static”, but trying to keep up with Windows updates, VMware Tools updates, etc. can become a huge chore if the environment is big enough. Not to mention if you have a diverse set of OSes in the environment.

  5. jason says:

    @Brandon
    PVSCSI on an as-needed basis. Limitations are pointed out in VMware KB Article 1010398.
    I also wrote up a blog post on this back in March: http://www.boche.net/blog/index.php/2010/03/25/configuring-disks-to-use-vmware-paravirtual-scsi-pvscsi-adapters/

    As for detailed settings in the VM shell config, such as disabling devices or setting a boot delay, those can still be maintained with VM templates, without the template containing an installed OS.

  6. Brandon says:

    Interesting thoughts on not using it by default. I certainly have, and the CPU utilization across the host(s) dropped very noticeably. You did say that an ROI study should be performed, but it is hard to quantify the overall host CPU utilization drop due to paravirtualization when added up per VM. It’s hard to quantify even when doing it wholesale and then doing a comparison, but it definitely was there. I wish I had solid numbers, I really do, but I don’t, so maybe it is all just meaningless speculation.

    Most of the limitations listed in the KB article, *** for us ***, were not a big deal. The limitations on the Linux guests would make it a bit cumbersome, so I agree that if you wanted to have one base template, then PVSCSI should probably be configured post-deployment in that case. The only other thing I can think of with one template is that you have to be sure to modify the options on the VM for the proper OS post-deployment. Otherwise you might have VMware Tools deployment issues later.

  7. Nick says:

    Keeping it simple works best for me. Personally I keep one VM of each OS flavor running off the domain for clean deployment to the internal network or DMZ. Only minimal OS tweaks (most noted in your post) and only VMware Tools installed. With them running I can keep them current with patch deployments, and I clone to template with date stamps every few months. The same VM is also used as a source for hardware images. When the hardware image is deployed we have scripts to remove the tools and inject hardware-specific drivers.
    Once we moved to this style of template management, everything became much easier to maintain.