VMware documentation library updates

April 2nd, 2009 by jason

Quick note:  In case you missed it (like I did), VMware has updated most of their VMware Infrastructure 3 documentation.  If you’re a documentation junkie (like me), you’ll want to re-download all of VMware’s VI3 documentation.  About 75% of the documents have new file names as well.

http://www.vmware.com/support/pubs/vi_pages/vi_pubs_35u2.html

Setup for Microsoft Cluster Service

April 1st, 2009 by jason

Setting up a Microsoft cluster on VMware used to be a fairly straightforward task with a very minimal set of considerations. Over time, the support documentation has evolved into something that looks like it was written by the U.S. Internal Revenue Service. I was an accountant in my previous life, and I remember Alternative Minimum Tax code that was easier to follow than what we have today: a 50-page PDF representing VMware’s requirements for MSCS support. Even with that, I’m not sure Microsoft supports MSCS on VMware. The Microsoft SVVP program supports explicit versions and configurations of Windows 2000/2003/2008 on ESX 3.5 Update 2 and 3, and ESXi 3.5 Update 3, but no mention is made regarding clustering. I could not find a definitive answer on the Microsoft SVVP program site other than the following disclaimer:

For more information about Microsoft’s policies for supporting software running in non-Microsoft hardware virtualization software please refer to http://support.microsoft.com/?kbid=897615. In addition, refer to http://support.microsoft.com/kb/957006/ to find more information about Microsoft’s support policies for its applications running in virtual environments.

At any rate, here are some highlights of MSCS setup on VMware Virtual Infrastructure, and by the way, all of this information is fair game for the VMware VCP exam.

Prerequisites for Cluster in a Box

To set up a cluster in a box, you must have:

* ESX Server host, one of the following:
  * ESX Server 3 – An ESX Server host with a physical network adapter for the service console. If the clustered virtual machines need to connect with external hosts, then an additional network adapter is highly recommended.
  * ESX Server 3i – An ESX Server host with a physical network adapter for the VMkernel. If the clustered virtual machines need to connect with external hosts, a separate network adapter is recommended.
* A local SCSI controller. If you plan to use a VMFS volume that exists on a SAN, you need an FC HBA (QLogic or Emulex).

You can set up shared storage for a cluster in a box either by using a virtual disk or by using a remote raw device mapping (RDM) LUN in virtual compatibility mode (non-pass-through RDM).

When you set up the virtual machine, you need to configure:

* Two virtual network adapters.
* A hard disk that is shared between the two virtual machines (quorum disk).
* Optionally, additional hard disks for data that are shared between the two virtual machines if your setup requires it. When you create hard disks, as described in this document, the system creates the associated virtual SCSI controllers.

Prerequisites for Clustering Across Boxes

The prerequisites for clustering across boxes are similar to those for cluster in a box. You must have:

* ESX Server host. VMware recommends three network adapters per host for public network connections. The minimum configuration is:
  * ESX Server 3 – An ESX Server host configured with at least two physical network adapters dedicated to the cluster, one for the public and one for the private network, and one network adapter dedicated to the service console.
  * ESX Server 3i – An ESX Server host configured with at least two physical network adapters dedicated to the cluster, one for the public and one for the private network, and one network adapter dedicated to the VMkernel.
* Shared storage must be on an FC SAN.
* You must use an RDM in physical or virtual compatibility mode (pass-through RDM or non-pass-through RDM). You cannot use virtual disks for shared storage.

Prerequisites for Standby Host Clustering

The prerequisites for standby host clustering are similar to those for clustering across boxes. You must have:

* ESX Server host. VMware recommends three network adapters per host for public network connections. The minimum configuration is:
  * ESX Server 3 – An ESX Server host configured with at least two physical network adapters dedicated to the cluster, one for the public and one for the private network, and one network adapter dedicated to the service console.
  * ESX Server 3i – An ESX Server host configured with at least two physical network adapters dedicated to the cluster, one for the public and one for the private network, and one network adapter dedicated to the VMkernel.
* You must use RDMs in physical compatibility mode (pass-through RDM). You cannot use virtual disk or RDM in virtual compatibility mode (non-pass-through RDM) for shared storage.
* You cannot have multiple paths from the ESX Server host to the storage.
* Running third-party multipathing software is not supported. Because of this limitation, VMware strongly recommends that there only be a single physical path from the native Windows host to the storage array in a configuration of standby-host clustering with a native Windows host. The ESX Server host automatically uses native ESX Server multipathing, which can result in multiple paths to shared storage.
* Use the STORport Miniport driver for the FC HBA (QLogic or Emulex) in the physical Windows machine.

Shared storage                                      Cluster in a Box   Cluster Across Boxes   Standby Host Clustering
Virtual disks                                       Yes                No                     No
Pass-through RDM (physical compatibility mode)      No                 Yes                    Yes
Non-pass-through RDM (virtual compatibility mode)   Yes                Yes                    No

Caveats, Restrictions, and Recommendations

This section summarizes caveats, restrictions, and recommendations for using MSCS in a VMware Infrastructure environment.

* VMware only supports third-party cluster software that is specifically listed as supported in the hardware compatibility guides. For the latest updates to VMware support for Microsoft operating system versions for MSCS, or for any other hardware-specific support information, see the Storage/SAN Compatibility Guide for ESX Server 3.5 and ESX Server 3i.
* Each virtual machine has five PCI slots available by default. A cluster uses four of these slots (two network adapters and two SCSI host bus adapters), leaving one PCI slot for a third network adapter (or other device), if needed.
* VMware virtual machines currently emulate only SCSI-2 reservations and do not support applications using SCSI-3 persistent reservations.
* Use the LSILogic virtual SCSI adapter.
* Use Windows Server 2003 SP2 (32-bit or 64-bit) or Windows 2000 Server SP4. VMware recommends Windows Server 2003.
* Use two-node clustering.
* Clustering is not supported on iSCSI or NFS disks.
* NIC teaming is not supported with clustering.
* The boot disk of the ESX Server host should be on local storage.
* Mixed HBA environments (QLogic and Emulex) on the same host are not supported.
* Mixed environments using both ESX Server 2.5 and ESX Server 3.x are not supported.
* Clustered virtual machines cannot be part of VMware clusters (DRS or HA).
* You cannot use migration with VMotion on virtual machines that run cluster software.
* Set the I/O time-out to 60 seconds or more by modifying HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Disk\TimeOutValue. The system might reset this I/O time-out value if you recreate a cluster. You must reset the value in that case.
* Use the eagerzeroedthick format when you create disks for clustered virtual machines. By default, the VI Client or vmkfstools create disks in zeroedthick format. You can convert a disk to eagerzeroedthick format by importing, cloning, or inflating the disk. Disks deployed from a template are also in eagerzeroedthick format.
* Add disks before networking, as explained in the VMware Knowledge Base article at http://kb.vmware.com/kb/1513.
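The eagerzeroedthick requirement above is the one that trips people up most often. As a rough sketch of how the shared disks could be created from the service console with vmkfstools (the datastore path, file names, and sizes here are hypothetical; check the vmkfstools help on your build, since options vary slightly between ESX versions):

```shell
# Create a new 1 GB quorum disk directly in eagerzeroedthick format.
# (By default, the VI Client and vmkfstools create zeroedthick disks.)
vmkfstools -c 1G -d eagerzeroedthick /vmfs/volumes/SharedVMFS/cluster1/quorum.vmdk

# Convert an existing zeroedthick data disk by cloning it into the
# required format, then attach the clone to the clustered VMs.
vmkfstools -i /vmfs/volumes/SharedVMFS/cluster1/data.vmdk \
           -d eagerzeroedthick /vmfs/volumes/SharedVMFS/cluster1/data-ezt.vmdk
```

Inflating the disk in place is the third conversion route the document mentions; cloning as shown has the advantage of leaving the original disk untouched until you've verified the copy.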

phew!

VMware raises the bar on CPU consolidation ratio support

April 1st, 2009 by jason

VMware has updated its Configuration Maximums support document (one of my favorite documents in the VMware document library). Most notable is the increase in the number of supported virtual CPUs per core:

  • Previously, ESX and ESXi 3.5 Update 2 and earlier supported 8 virtual CPUs per core and in special cases, 11 virtual CPUs per core if the workloads were VDI
  • The new version of the document shows ESX and ESXi 3.5 Update 3 and later support 20 virtual CPUs per core across the board – with no special circumstances for VDI workloads

One thing to note, however, is that the total number of virtual CPUs per host and the total number of virtual machines per host did not change. They remain at 192 and 170, respectively.

So we’re not increasing the total number of VMs an ESX or ESXi host will support. VMware is saying they can support the same number of VMs and vCPUs on fewer physical CPU cores. This may be due to more powerful CPUs entering the market (such as the Intel Nehalem). Or maybe VMware is addressing customers who have traditionally light CPU workloads and need to reach higher CPU consolidation ratios. Or maybe it has something to do with blade servers or Cisco’s UCS (or Project California). At any rate, VMware is encouraging the virtualization of more with less. Maybe it’s an economy thing. Who knows. It’s good for us though, since VMware still licenses by the socket and not the core. We can power 160 VMs with an 8-core box (dual quads or quad duals).
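The per-core and per-host limits interact, so it's easy to sanity-check where the bottleneck falls for a given box. A quick back-of-the-envelope sketch (the function name is mine; the limits are the ones quoted above from the Configuration Maximums document):

```python
# Supported maximums for ESX/ESXi 3.5 Update 3 and later,
# per the updated Configuration Maximums document.
VCPUS_PER_CORE = 20
VCPUS_PER_HOST = 192
VMS_PER_HOST = 170

def max_single_vcpu_vms(cores):
    """Upper bound on single-vCPU VMs for a host with the given core count."""
    return min(cores * VCPUS_PER_CORE, VCPUS_PER_HOST, VMS_PER_HOST)

# An 8-core box tops out at the per-core limit: 8 * 20 = 160 VMs.
print(max_single_vcpu_vms(8))   # -> 160

# From about 10 cores up, the per-host cap of 170 VMs kicks in first,
# so more cores stop buying you additional single-vCPU VMs.
print(max_single_vcpu_vms(16))  # -> 170
```

In other words, the 20:1 ratio only matters on small hosts; on anything with 10 or more cores you hit the 170-VM (or 192-vCPU) host ceiling first.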

While we’re on the subject, is anyone coming close to 170 VMs per host? What’s the most impressive consolidation ratio you’ve seen? I’d like to hear about it. As in the Citrix world, I don’t think it’s a matter of “Do we have the hardware today to handle it?” – the answer is yes. It’s more about the exposure of 170 VMs on a single host and whether we want to go down that road.

VMware Tools “Not Running”

March 31st, 2009 by jason

I ran into a disturbing problem this evening in the lab. While in the Virtual Infrastructure Client (VIC), I attempted to perform a graceful shutdown on a VM by right-clicking it and choosing Shut Down Guest. Unfortunately, the graceful shutdown and restart options were grayed out, which is a good indicator that the VMware Tools are not installed or not running. I logged into the VM and, strangely enough, the VMware Tools were installed and the VMware Tools service was running. Even stranger, when I went back to the VIC, the VMware Tools status now showed “Tools OK”.

It was then that I noticed VMware Tools status was showing “Not Running” for a whole slew of other VMs which I knew had tools installed.

A quick search uncovered a recently updated VMware KB article, 1008709 “VMware Tools status shows as not running after running VMware Consolidated Backup“. Mind you, I’m not running VCB in the lab (thank God and Veeam); however, the description in the KB article mostly matched my situation.

During the normal VMware Consolidated Backup (VCB) operation, the VMware Tools status changes from OK to Not Running for some time during the initial snapshot operation, but it returns to OK after the VCB operation completes.

However, on hosts installed with the patch bundle ESX350-200901401-SG, the VMware Tools status on the virtual machines may stay as Not Running even after the VCB operation completes.

Although the KB article specifically ties the problem to VCB, the problem is not limited to VCB in my experience. Other applications that perform snapshots can cause the behavior, such as the product I’m using: Veeam Backup 3.0. The root cause stems from a January 2009 VMware patch: ESX350-200901401-SG.

There are a few workarounds, the second of which I discovered on my own:

  1. Restart the mgmt-vmware service immediately after the backup job is done. This changes the Tools status to OK. You can write a cron job to do it periodically.
  2. Log in and log out, or log out if you are already logged in, from the virtual machine. This changes the Tools status to OK if it was showing as Not running.
  3. Use VCBMounter to look for the virtual machine name or UUID rather than the virtual machine IP. The virtual machine IP only works when the Tools status is OK, but the virtual machine name and UUID work even if the Tools status shows as Not running.

After reading the KB article, I ran a service mgmt-vmware restart and, after about a minute, the VMware Tools status for all my VMs changed from “Not Running” to “Tools OK”. The host and all of its VMs briefly disconnected as well, but don’t worry; they’ll come back on their own shortly.
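If you want to automate the first workaround rather than run the restart by hand, a crontab entry on the service console along these lines would do it (the schedule here is purely illustrative; pick a time after your backup window ends):

```shell
# /etc/crontab entry on the ESX service console (hypothetical schedule):
# restart the management agents at 5:30 AM daily, after the nightly
# Veeam/VCB jobs complete, so the Tools status gets refreshed.
30 5 * * * root /sbin/service mgmt-vmware restart
```

Keep in mind the brief host/VM disconnect described above happens on every restart, so schedule it for a quiet period.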

Until VMware releases a permanent fix, it sounds like I can expect this behavior daily after each Veeam backup completes.

By the way, if you’re running VCB, this condition will cause future VCB backups to fail if the VCBMounter is set to look for the virtual machine IP rather than virtual machine name or UUID. Nobody likes failed backups so please make sure you get this sorted out in your environment if the problem exists.

VMware ESX/ESXi 3.5 Update 4 released

March 30th, 2009 by jason

Today VMware released Update 4 for its flagship bare-metal ESX and free ESXi products. The build number has been incremented to 153875.

Release notes include:

What’s New

Notes:

  1. Not all combinations of VirtualCenter and ESX Server versions are supported and not all of these highlighted features are available unless you are using VirtualCenter 2.5 Update 4 with ESX Server 3.5 Update 4. See the ESX Server, VirtualCenter, and VMware Infrastructure Client Compatibility Matrixes for more information on compatibility. (ESX 3.5u4 is not compatible with versions of VirtualCenter prior to version 2.5u2)
  2. This version of ESX Server requires a VMware Tools upgrade.

The following information provides highlights of some of the enhancements available in this release of VMware ESX Server:

Expanded Support for Enhanced vmxnet Adapter – This version of ESX Server includes an updated version of the VMXNET driver (VMXNET enhanced) for the following guest operating systems:

  • Microsoft Windows Server 2003, Standard Edition (32-bit)
  • Microsoft Windows Server 2003, Standard Edition (64-bit)
  • Microsoft Windows Server 2003, Web Edition
  • Microsoft Windows Small Business Server 2003
  • Microsoft Windows XP Professional (32-bit)

The new VMXNET version improves virtual machine networking performance and requires a VMware Tools upgrade.

Enablement of Intel Xeon Processor 5500 Series – Support for the Xeon processor 5500 series has been added. Support includes Enhanced VMotion capabilities. For additional information on previous processor families supported by Enhanced VMotion, see Enhanced VMotion Compatibility (EVC) processor support (KB 1003212).

QLogic Fibre Channel Adapter Driver Update – The driver and firmware for the QLogic Fibre Channel adapters have been updated to versions 7.08-vm66 and 4.04.06, respectively. This release provides interoperability fixes for QLogic Management Tools for FC Adapters and enhanced NPIV support.

Emulex Fibre Channel Adapter Driver Update – The driver for Emulex Fibre Channel Adapters has been upgraded to version 7.4.0.40. This release provides support for the HBAnyware 4.0 Emulex management suite.

LSI megaraid_sas and mptscsi Storage Controller Driver Update – The drivers for LSI megaraid_sas and mptscsi storage controllers have been updated to versions 3.19vmw and 2.6.48.18 vmw, respectively. The upgrade improves performance and enhances event handling capabilities for these two drivers.

Newly Supported Guest Operating Systems – Support for the following guest operating systems has been added specifically for this release:

  • SUSE Linux Enterprise Server 11 (32-bit and 64-bit)
  • SUSE Linux Enterprise Desktop 11 (32-bit and 64-bit)
  • Ubuntu 8.10 Desktop Edition and Server Edition (32-bit and 64-bit)
  • Windows Preinstallation Environment 2.0 (32-bit and 64-bit)

For more complete information about supported guests included in this release, see the Guest Operating System Installation Guide: http://www.vmware.com/pdf/GuestOS_guide.pdf.

Furthermore, pre-built kernel modules (PBMs) were added in this release for the following guests:

  • Ubuntu 8.10
  • Ubuntu 8.04.2

Newly Supported Management Agents – Refer to VMware ESX Server Supported Hardware Lifecycle Management Agents for the most up-to-date information on supported management agents.

Newly Supported I/O Devices – In-box support for the following on-board processors, I/O devices, and storage subsystems:

SAS Controllers and SATA Controllers: The following controllers are newly supported:

  • PMC 8011 (for SAS and SATA drives)
  • Intel ICH9
  • Intel ICH10
  • CERC 6/I SATA/SAS Integrated RAID Controller (for SAS and SATA drives)
  • HP Smart Array P700m Controller

Notes:
    1. Some limitations apply in terms of support for SATA controllers. For more information, see SATA Controller Support in ESX 3.5 (KB 1008673).
    2. Storing VMFS datastores on native SATA drives is not supported.

Network Cards: The following are newly supported network interface cards:

  • HP NC375i Integrated Quad Port Multifunction Gigabit Server Adapter
  • HP NC362i Integrated Dual port Gigabit Server Adapter
  • Intel 82598EB 10 Gigabit AT Network Connection
  • HP NC360m Dual 1 Gigabit/NC364m Quad 1 Gigabit
  • Intel Gigabit CT Desktop Adapter
  • Intel 82574L Gigabit Network Connection
  • Intel 10 Gigabit XF SR Dual Port Server Adapter
  • Intel 10 Gigabit XF SR Server Adapter
  • Intel 10 Gigabit XF LR Server Adapter
  • Intel 10 Gigabit CX4 Dual Port Server Adapter
  • Intel 10 Gigabit AF DA Dual Port Server Adapter
  • Intel 10 Gigabit AT Server Adapter
  • Intel 82598EB 10 Gigabit AT CX4 Network Connection
  • NetXtreme BCM5722 Gigabit Ethernet
  • NetXtreme BCM5755 Gigabit Ethernet
  • NetXtreme BCM5755M Gigabit Ethernet
  • NetXtreme BCM5756 Gigabit Ethernet

Expanded Support: The E1000 Intel network interface card (NIC) is now available for NetWare 5 and NetWare 6 guest operating systems.

Onboard Management Processors:

  • IBM system management processor (iBMC)

Storage Arrays:

  • SUN StorageTek 2530 SAS Array
  • Sun Storage 6580 Array
  • Sun Storage 6780 Array

Twitter explained in 267 seconds

March 29th, 2009 by jason

I was the guy on the left until last fall when John Troyer showed me how useful and powerful this tool can be during the VMworld 2008 virtualization conference. Properly used, it’s a real time professional networking and knowledge sharing tool, commonly called a microblog.

Thanks to the internet, the delivery of information to the masses can be ranked as follows in order of most timely to least timely:

  1. Twitter
  2. RSS feeds via blog posts and news articles
  3. Email
  4. Traditional mail

Notice I left out instant messaging (IM). IM from a technology perspective is as timely as Twitter except it differs significantly in one facet:

  • Tweets (messages in Twitter) are multicasted to hundreds, thousands, or millions of people instantly.
  • Instant messages are exchanged in one-on-one conversations. It could take days or weeks for that information to travel to the volume of people that Twitter has the ability to reach instantly. Not only that, but think about how broken the message will become after it is repeated by dozens or hundreds of people, like that old childhood game “Telephone to Norway”. An IM that originally started with “The sky is blue” may eventually end up as “Jesus had a 24 inch LCD”.

Although Twitter can be used with a web browser, getting the most out of it involves a combination of things like following the right people, using 3rd party Twitter clients like TweetDeck, setting up searches to refine incoming tweets only to what you want to see, etc. These are the things that will really help narrow the scope and define its intended use through customization.

But if you use Twitter merely for being a social butterfly, then yeah, it’s pretty much like how the guy on the left describes it. Not that there’s anything wrong with that…

Each person makes Twitter what they want it to be.  With that in mind, it’s not so easy to stereotype its use.

Thanks for the link to the video William Lam.

GuessMyOS plugin released

March 29th, 2009 by jason

Andrew the magnificent (vExpert Andrew Kutz of Hyper9) has unleashed a new plugin for the VMware Virtual Infrastructure Client called “GuessMyOS“.

System Requirements:

  • Microsoft Windows Installer 3.1
  • Microsoft .NET 3.5 (might as well install the SP1 version while you’re at it)
  • VMware Virtual Infrastructure Client

Andrew is the plugin master. Now that he is officially and fully commissioned by Hyper9 to crank out cool stuff (instead of coding in his spare time), expect neato tools at a more consistent pace. I highly advise following his H9Labs RSS feed to stay up to date with his latest works:

http://community.hyper9.com/blogs/h9labs/rss.aspx

Oh. What does it do? Remember VMware GSX Server and the web MUI, where VMs were graphically represented by guest OS thumbnails? That’s what it does, but now for ESX and ESXi. One thing you’ll notice is that in the Hosts and Clusters view, it displays the thumbnail in the left column but not in the main window pane on the right side of the screen. Same behavior in the Virtual Machines and Templates view. Maybe in the next version. Thanks a lot, Andrew, and keep up the great work! I can absolutely say that we live in a better VMware world with you in it.
