Posts Tagged ‘Documentation’

Setup for Microsoft cluster service

April 1st, 2009

Setting up a Microsoft cluster on VMware used to be a fairly straightforward task with a very minimal set of considerations. Over time, the support documentation has evolved into something that looks like it was written by the U.S. Internal Revenue Service. I was an accountant in my previous life and I remember Alternative Minimum Tax code that was easier to follow than what we have today: a 50-page PDF representing VMware’s requirements for MSCS support. Even with that, I’m not sure Microsoft supports MSCS on VMware. The Microsoft SVVP program supports explicit versions and configurations of Windows 2000/2003/2008 on ESX 3.5 Update 2 and 3, and ESXi 3.5 Update 3, but no mention is made of clustering. I could not find a definitive answer on the Microsoft SVVP program site other than the following disclaimer:

For more information about Microsoft’s policies for supporting software running in non-Microsoft hardware virtualization software please refer to http://support.microsoft.com/?kbid=897615. In addition, refer to http://support.microsoft.com/kb/957006/ to find more information about Microsoft’s support policies for its applications running in virtual environments.

At any rate, here are some highlights of MSCS setup on VMware Virtual Infrastructure, and by the way, all of this information is fair game for the VMware VCP exam.

Prerequisites for Cluster in a Box

To set up a cluster in a box, you must have:

* ESX Server host, one of the following:
  * ESX Server 3 – An ESX Server host with a physical network adapter for the service console. If the clustered virtual machines need to connect with external hosts, then an additional network adapter is highly recommended.
  * ESX Server 3i – An ESX Server host with a physical network adapter for the VMkernel. If the clustered virtual machines need to connect with external hosts, a separate network adapter is recommended.
* A local SCSI controller. If you plan to use a VMFS volume that exists on a SAN, you need an FC HBA (QLogic or Emulex).

You can set up shared storage for a cluster in a box either by using a virtual disk or by using a remote raw device mapping (RDM) LUN in virtual compatibility mode (non-pass-through RDM).

When you set up the virtual machine, you need to configure:

* Two virtual network adapters.
* A hard disk that is shared between the two virtual machines (quorum disk). A sample .vmx sketch follows this list.
* Optionally, additional hard disks for data that are shared between the two virtual machines if your setup requires it. When you create hard disks, as described in this document, the system creates the associated virtual SCSI controllers.
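For illustration only, here is a minimal sketch of what the shared quorum disk boils down to in each node's .vmx file for a cluster in a box. The controller number and datastore path are assumptions for the example, and the VI Client procedure in the official guide is the supported route:

    scsi1.present = "true"
    scsi1.virtualDev = "lsilogic"
    scsi1.sharedBus = "virtual"
    scsi1:0.present = "true"
    scsi1:0.fileName = "/vmfs/volumes/san-vmfs/mscs/quorum.vmdk"

Both virtual machines point their second SCSI controller at the same quorum.vmdk; the "virtual" bus sharing mode is what lets two VMs on the same host open the disk concurrently (clustering across boxes uses "physical" instead).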

Prerequisites for Clustering Across Boxes

The prerequisites for clustering across boxes are similar to those for cluster in a box. You must have:

* ESX Server host. VMware recommends three network adapters per host for public network connections. The minimum configuration is:
  * ESX Server 3 – An ESX Server host configured with at least two physical network adapters dedicated to the cluster, one for the public and one for the private network, and one network adapter dedicated to the service console.
  * ESX Server 3i – An ESX Server host configured with at least two physical network adapters dedicated to the cluster, one for the public and one for the private network, and one network adapter dedicated to the VMkernel.
* Shared storage must be on an FC SAN.
* You must use an RDM in physical or virtual compatibility mode (pass-through RDM or non-pass-through RDM). You cannot use virtual disks for shared storage.

Prerequisites for Standby Host Clustering

The prerequisites for standby host clustering are similar to those for clustering across boxes. You must have:

* ESX Server host. VMware recommends three network adapters per host for public network connections. The minimum configuration is:
  * ESX Server 3 – An ESX Server host configured with at least two physical network adapters dedicated to the cluster, one for the public and one for the private network, and one network adapter dedicated to the service console.
  * ESX Server 3i – An ESX Server host configured with at least two physical network adapters dedicated to the cluster, one for the public and one for the private network, and one network adapter dedicated to the VMkernel.
* You must use RDMs in physical compatibility mode (pass-through RDM). You cannot use virtual disks or RDMs in virtual compatibility mode (non-pass-through RDM) for shared storage.
* You cannot have multiple paths from the ESX Server host to the storage.
* Running third-party multipathing software is not supported. Because of this limitation, VMware strongly recommends that there be only a single physical path from the native Windows host to the storage array in a configuration of standby host clustering with a native Windows host. The ESX Server host automatically uses native ESX Server multipathing, which can result in multiple paths to shared storage.
* Use the STORport Miniport driver for the FC HBA (QLogic or Emulex) in the physical Windows machine.

Shared storage option                               Cluster in a Box   Cluster Across Boxes   Standby Host Clustering
Virtual disks                                       Yes                No                     No
Pass-through RDM (physical compatibility mode)      No                 Yes                    Yes
Non-pass-through RDM (virtual compatibility mode)   Yes                Yes                    No

Caveats, Restrictions, and Recommendations

This section summarizes caveats, restrictions, and recommendations for using MSCS in a VMware Infrastructure environment.

* VMware only supports third-party cluster software that is specifically listed as supported in the hardware compatibility guides. For the latest updates to VMware support for Microsoft operating system versions for MSCS, or for any other hardware-specific support information, see the Storage/SAN Compatibility Guide for ESX Server 3.5 and ESX Server 3i.
* Each virtual machine has five PCI slots available by default. A cluster uses four of these slots (two network adapters and two SCSI host bus adapters), leaving one PCI slot for a third network adapter (or other device), if needed.
* VMware virtual machines currently emulate only SCSI-2 reservations and do not support applications using SCSI-3 persistent reservations.
* Use the LSILogic virtual SCSI adapter.
* Use Windows Server 2003 SP2 (32-bit or 64-bit) or Windows 2000 Server SP4. VMware recommends Windows Server 2003.
* Use two-node clustering.
* Clustering is not supported on iSCSI or NFS disks.
* NIC teaming is not supported with clustering.
* The boot disk of the ESX Server host should be on local storage.
* Mixed HBA environments (QLogic and Emulex) on the same host are not supported.
* Mixed environments using both ESX Server 2.5 and ESX Server 3.x are not supported.
* Clustered virtual machines cannot be part of VMware clusters (DRS or HA).
* You cannot use migration with VMotion on virtual machines that run cluster software.
* Set the I/O time-out to 60 seconds or more by modifying HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Disk\TimeOutValue. The system might reset this I/O time-out value if you recreate a cluster; you must reset the value in that case. (A PowerShell sketch of this edit follows the list.)
* Use the eagerzeroedthick format when you create disks for clustered virtual machines. By default, the VI Client or vmkfstools create disks in zeroedthick format. You can convert a disk to eagerzeroedthick format by importing, cloning, or inflating the disk. Disks deployed from a template are also in eagerzeroedthick format. (An example vmkfstools command follows the list.)
* Add disks before networking, as explained in the VMware Knowledge Base article at http://kb.vmware.com/kb/1513.
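To make those last two tweaks concrete, here are a couple of hedged examples. First, the disk I/O time-out can be set inside each Windows cluster node with a short PowerShell sketch of the registry edit described above:

    # Set the guest's disk I/O time-out to 60 seconds (creates the value if it is absent).
    $key = "HKLM:\System\CurrentControlSet\Services\Disk"
    Set-ItemProperty -Path $key -Name TimeOutValue -Value 60 -Type DWord

Second, a new eagerzeroedthick disk can be created with vmkfstools from the ESX service console; the size and datastore path below are hypothetical:

    vmkfstools -c 10g -d eagerzeroedthick -a lsilogic /vmfs/volumes/san-vmfs/mscs/quorum.vmdk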

phew!

VMware raises the bar on CPU consolidation ratio support

April 1st, 2009

VMware has updated its Configuration Maximums support document (one of my favorite documents in the VMware document library). Most notable is the increase in the number of supported virtual CPUs per core:

  • Previously, ESX and ESXi 3.5 Update 2 and earlier supported 8 virtual CPUs per core and, in special cases, 11 virtual CPUs per core if the workloads were VDI
  • The new version of the document shows ESX and ESXi 3.5 Update 3 and later support 20 virtual CPUs per core across the board – with no special circumstances for VDI workloads

One thing to note, however, is that the total number of virtual CPUs per host and the total number of virtual machines per host did not change. They remain at 192 and 170 respectively.

So we’re not increasing the total number of VMs an ESX or ESXi host will support. VMware is saying it can support the same number of VMs and vCPUs on fewer physical CPU cores. This may be due to more powerful CPUs entering the market (such as the Intel Nehalem). Or maybe VMware is addressing customers who have traditionally light CPU workloads and need to reach higher CPU consolidation ratios. Or maybe it has something to do with blade servers or Cisco’s UCS (or Project California). At any rate, VMware is encouraging us to virtualize more with less. Maybe it’s an economy thing. Who knows. It’s good for us though, since VMware still licenses by the socket and not the core. With single-vCPU VMs, an 8-core box (dual quads or quad duals) can now power 8 x 20 = 160 VMs, just under the 170 VMs per host ceiling.

While we’re on the subject, is anyone coming close to 170 VMs per host? What’s the most impressive consolidation ratio you’ve seen? I’d like to hear about it. As in the Citrix world, I don’t think the question is “Do we have the hardware today to handle it?” – the answer is yes. It’s more about the exposure of running 170 VMs on a single host and whether we want to go down that road.

VI Toolkit Quick Reference Guide

March 14th, 2009

Virtu-Al (Alan Renouf) has posted a great two-page cheat sheet for the VMware VI Toolkit version 1.5.

This gem of a document is similar to VI3 card created by Forbes Guthrie over at vReference.com. Excellent job gentlemen!

While you’re at Virtu-Al’s site, check out all the sample code and scripts.  Chances are you could implement one or more of these puppies in your environment to configure ESX or ESXi.  Scripting is definitely one of the ways to become more efficient and agile, and it’s a great way to ensure consistency across your environment.  PowerShell and the VI Toolkit are where it’s at.  I think they’re going to be around for a long time.
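To give you a taste of the flavor, here's a trivial VI Toolkit sketch of the kind of one-liner the reference card covers. The VirtualCenter server name is made up for the example:

    # Connect to VirtualCenter and dump a quick VM inventory to CSV.
    Connect-VIServer -Server vc01
    Get-VM | Select-Object Name, PowerState, NumCpu, MemoryMB |
        Sort-Object Name |
        Export-Csv C:\vm-inventory.csv -NoTypeInformation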

Microsoft Performance Monitor tweaks

February 17th, 2009

Today I discovered the workarounds to a few issues in Microsoft Performance Monitor that have bugged me for quite a while (read: years).

Issue 1: Vertical lines are displayed in the Sysmon tool that obscure the graph view

[screenshot: Performance Monitor chart view obscured by vertical line indicators]

Cause: This behavior occurs when there are more than 100 data points to be displayed in chart view.

Resolution: Microsoft KB article 283110

To enable or disable this behavior:

  1. Start Regedit.exe.
  2. Navigate to the following key: HKEY_CURRENT_USER\Software\Microsoft\SystemMonitor
  3. On the Edit menu, click New, and then click DWORD Value.
  4. Type the following value in the Name box: DisplaySingleLogSampleValue
  5. Set the value to 1 if you do not want to view the vertical line indicators, or set the value to 0, which is the default setting, to display the vertical indicators.
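If you would rather script the change than click through regedit, here's an equivalent PowerShell sketch (set the value back to 0 to restore the default):

    # Hide the vertical line indicators in Performance Monitor's chart view.
    $key = "HKCU:\Software\Microsoft\SystemMonitor"
    if (-not (Test-Path $key)) { New-Item -Path $key | Out-Null }
    Set-ItemProperty -Path $key -Name DisplaySingleLogSampleValue -Value 1 -Type DWord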

Result:

[screenshot: chart view with the vertical line indicators hidden]

Issue 2: When looking at large numbers in Performance Monitor (Windows XP), comma separators are not displayed, making large numbers difficult to interpret.

[screenshot: counter values displayed without thousands separators]

Cause: Microsoft

Resolution: Microsoft KB article 300884

Follow these steps, and then quit Registry Editor:

  1. Click Start, click Run, type regedit, and then click OK.
  2. Locate and then click the following key in the registry: HKEY_CURRENT_USER\Software\Microsoft\SystemMonitor
  3. On the Edit menu, point to New, and then click DWORD Value.
  4. Type DisplayThousandsSeparator, and then press ENTER.
  5. On the Edit menu, click Modify.
  6. Type 1, and then click OK.
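The same approach can be scripted; here's a PowerShell sketch of this tweak:

    # Display thousands separators in Performance Monitor counter values.
    $key = "HKCU:\Software\Microsoft\SystemMonitor"
    if (-not (Test-Path $key)) { New-Item -Path $key | Out-Null }
    Set-ItemProperty -Path $key -Name DisplayThousandsSeparator -Value 1 -Type DWord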

Result:

[screenshot: counter values displayed with thousands separators]

Extra credit:  Check out Microsoft KB article 281884 for one additional tweak that deals with viewing PIDs in Performance Monitor counters.

Virtualization Wiki launched

February 11th, 2009

Rynardt Spies, proprietor of the VirtualVCP blog, has launched VI-Pedia, the Virtualization Open Wiki.

It looks like Rynardt has already begun populating the Wiki with links to VMware’s HCL information.  I think the following information, which I posted over at the Petri IT Knowledgebase, would also prove useful on the Wiki:

Community-Supported Hardware/Software for VMware Infrastructure
http://www.vmware.com/resources/communitysupport/
In 2007, VMware began maintaining a web page of non-HCL hardware that works with VMware ESX. This is a list of hardware and software components that have been reported to work with VMware Infrastructure, either by the community or by the individual vendors themselves. Great for people trying to build a cheap lab out of dubious or whitebox hardware. If your hardware is not on the official VMware HCL, check this list to see if someone has reported that your particular piece of hardware works with ESX.

Additional Resources for Community-Supported Hardware/Software for VMware Infrastructure
http://www.vm-help.com/
http://www.vm-help.com/Whitebox_HCL.php
http://ultimatewhitebox.com/
http://www.vmweekly.com/articles/hardware_recommendations_to_build_cheap_esx_server/1/
http://www.mikedipetrillo.com/mikedvirtualization/2008/10/building-a-500-vmware-esxi-host.html

Thank you for putting this together Rynardt!

VMGURU to release 4 chapters of VI3 book today

February 10th, 2009

Scott Herold of VMGuru.com, co-author of the book VMware Infrastructure 3: Advanced Technical Design Guide and Advanced Operations Guide, has announced the release of four of the book’s chapters in PDF format today.

I read the previous version of this book a few years ago and I’m in the middle of reading the current version.  I HIGHLY recommend this book.  It is worth its weight in gold. The fact that the authors are going to begin giving it away for free to the virtualization community is baffling to me, yet at the same time it is a symbol of their generosity and commitment to providing the community with top-notch technical and operations detail on VMware virtual infrastructure.

Generally speaking, many technical authors don’t make a pile of money writing books.  Be sure to thank the authors Ron Oglesby, Scott Herold, and Mike Laverick for their hard work and generosity.

More information about this book can be found here and here.  Stay tuned to VMGuru.com for the official release of these chapters which should happen sometime today.

Three VirtualCenter security tips Windows administrators should know

January 15th, 2009

Good morning!  I’d like to take the opportunity to talk a bit about something that has been somewhat of a rock in my shoe as a seasoned Windows administrator from the NT 3.5 era:  The VirtualCenter (vCenter Server, VirtualCenter Management Server, VCMS, VC, etc.) security model, or more accurately, its unfamiliar mechanics that can catch Windows administrators off guard and leave them scratching their heads.

Tip #1: The VCMS security model revolves around privileges, roles, and objects.  Privileges (there are more than 100 of them) define rights, roles are collections of privileges, and roles are assigned to objects, which are entities in the virtual infrastructure, as shown in the diagram borrowed below:

[diagram: privileges, roles, and objects in the VirtualCenter security model]

Windows administrators will be used to the concept of assigning NTFS permissions to files, folders, and other objects in Active Directory.  It is very common for Windows objects to contain more than one Access Control Entry (ACE), which can be a group (such as “Accounting”, “Marketing”, etc.) or an explicit user (such as “Bob”, “Sally”, etc.).  The same holds true for assigning roles to objects in VC.

In some instances, which are not uncommon at all, a user may be granted permission to an object by way of more than one ACE.  For example, if both the Accounting and Marketing groups were assigned rights, and Sally was a member of both of those groups, Sally would have rights to the object through both of those groups.  Using this same example, if the two ACEs defined different permissions to an object, the end result is cumulative, so long as neither ACE contains “deny”, which is special:  Sally would have the combined set of permissions.  The same holds true in VC.

Let’s take the above example a step further.  In addition to the two groups Sally is a member of being ACL’d to an object, now let’s say Sally’s user account object itself is an explicit ACE in the ACL list.  In the Windows world, Sally’s rights are still cumulative, combining the three ACEs.  This is where the fork in the road lies in the VirtualCenter security model.  Roles explicitly assigned to a user object trump all other assigned or inherited permissions on the same object.  If the explicit ACE defines fewer permissions, the effective result is that Sally will have fewer permissions than her group membership would have provided.  If the explicit ACE defines more permissions, the effective result is that Sally will have more permissions than her group membership would have provided.  This is where Windows-based VC administrators will be dumbfounded when a user suddenly calls with tales of things grayed out in VirtualCenter, not enough permissions, etc.  Of course, the flip side of the coin is a junior administrator suddenly finding themselves with cool new options in VC.  “Let’s see what this datastore button does”

Moral of the story from a real world perspective:  Assigning explicit permissions to user accounts in VC without careful planning will yield somewhat unpredictable results when inheritance is enabled (which is typical).  To take this to extremes, assigning explicit permissions to user accounts in VC, especially where inheritance in the VC hierarchy is involved, is a security and uptime risk when a user accidentally ends up with the wrong permissions.  For security and consistency purposes, I would avoid assigning permissions explicitly to user accounts unless you have a very clear understanding of the impacts both now and down the road.
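If you want to hunt down explicit user assignments in your own environment, the VI Toolkit can reach them through the API's AuthorizationManager. Here's a rough sketch, assuming an existing Connect-VIServer session, not a polished audit tool:

    # List permissions assigned directly to users (Group = false),
    # the assignments most likely to cause the surprises described above.
    $si      = Get-View ServiceInstance
    $authMgr = Get-View $si.Content.AuthorizationManager
    $authMgr.RetrieveAllPermissions() |
        Where-Object { -not $_.Group } |
        Select-Object Principal, RoleId, Propagate,
            @{Name="Entity"; Expression={ (Get-View $_.Entity).Name }}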

Tip #2: Beware the use of the built-in role Virtual Machine Administrator.  Its name is misleading and the permissions it grants are downright scary, not much different from the built-in Administrator role.  For instance, the Virtual Machine Administrator role can modify VC and ESX host licensing, has complete control over the VC folder structure, has complete control over Datacenter objects, has complete control over datastores (short of file management), can remove networks, and has complete control over inventory items such as hosts and clusters.  The list goes on and on.  I have three words:  What The Hell?!  I don’t know – the way my brain works is that those permissions stretch well beyond the boundaries of what I would delegate to a Virtual Machine Administrator.

Moral of the story from a real world perspective:  Use the Virtual Machine Administrator role with extreme caution.  There is little disparity between the Administrator role and the Virtual Machine Administrator role, minus some items for Update Manager and changing VC permissions themselves. Therefore, any user who has the Virtual Machine Administrator role is practically an administrator.  The Virtual Machine Administrator role should not be used unless you have delegations that fit this role precisely.  Another option would be to clone the role and strip out some of the permissions with datacenter-wide impact.

Tip #3: Audit your effective VirtualCenter permissions on a regular basis, especially if you have a large implementation with many administrators “having their hands in the cookie jar” so to speak.  If you use groups to assign roles in VC, that means you should be auditing those groups as well (above and beyond virtualization conversations, administrative-level groups should be audited anyway as a best practice).  This whitepaper has a nice Perl script for dumping VirtualCenter roles and permissions using the VMware Infrastructure Perl Toolkit.  Use of the script will automate the auditing process quite a bit and help transform a lengthy, mundane task into a quicker one.  While you’re at it, it wouldn’t be a bad idea to periodically check tasks and events to see who is doing what.  There should be no surprises there.

Moral of the story from a real world perspective:  Audit your VirtualCenter roles and permissions.  When an unexpected datacenter disaster is caused by users with elevated privileges, one of the first questions asked in the post-mortem meeting will be what your audit process is.  Have a good answer prepared.  Even better, avoid the disaster and downtime through the due diligence of auditing your virtual infrastructure security.
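If Perl isn't your thing, a similar audit can be roughed out with the VI Toolkit. Here's a sketch, again assuming a connected session as in the Tip #1 example, that dumps every permission with its role name to a CSV file:

    # Dump all VirtualCenter permissions, resolving role IDs to role names.
    $si      = Get-View ServiceInstance
    $authMgr = Get-View $si.Content.AuthorizationManager
    $roles   = @{}
    $authMgr.RoleList | ForEach-Object { $roles[$_.RoleId] = $_.Name }
    $authMgr.RetrieveAllPermissions() |
        Select-Object Principal, Group, Propagate,
            @{Name="Role";   Expression={ $roles[$_.RoleId] }},
            @{Name="Entity"; Expression={ (Get-View $_.Entity).Name }} |
        Export-Csv C:\vc-permission-audit.csv -NoTypeInformation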

For more information about VirtualCenter security, check out this great white paper or download the .pdf version from this link.  Some of the information I posted above was gathered from this document.  The white paper was written by Charu Chaubal, a technical marketing manager at VMware with a Ph.D. in numerical modeling of complex fluids, with contributions from Doug Clark and Karl Rummelhart.

If VirtualCenter security talk really gets your juices flowing, you should check out Virtualization Security Round Table, a new podcast launching today from well-known and respected VMTN community member/moderator and book author Edward Haletky.  It is sure to be good!