Archive for September, 2009

Align Datastore Names With VM Names Using Storage vMotion

September 30th, 2009

Does it bug you when the registered names of your VMs do not match the folder and file names on the datastore? It can be difficult to identify VMs when browsing the datastore if the folder and file names do not match the VM name. Or maybe the VM names generally match what’s on the datastore but there are some case sensitivity discrepancies. I, for one, am uncomfortable with these situations. While fixing the problem by bringing the datastore folder/file names into alignment with the VM name isn’t impossible, the process is painful when done manually and requires an outage of the VM.

Here’s a simple trick I’m sure many may already be aware of. I remember hearing about it quite a while ago (I don’t remember where) but had forgotten about it until today. Let VMware Storage VMotion take care of the problem for you. During the Storage VMotion process, the destination folder/file names are synchronized with the name of the VM on the fly with no outage.

For example, let’s say we create a VM with a registered name of “old_name”. The datastore backing folder is named “old_name” and the files inside are also prefixed with “old_name” (.vmdk, .vmx, etc.).

Now we go ahead and change the name of the VM to “new_name” in vCenter. The datastore backing folder and files still have the “old_name” and now obviously don’t match the registered VM name.

To bring the datastore backing folder and file names back in synchronization with the registered VM, we can perform a Storage VMotion. In doing so, the backing folder and files will be dynamically renamed as they land on the new datastore. In this case, they will be renamed to “new_name”.

This solution is a heck of a lot easier than powering down the VM and renaming all the files, as well as modifying the corresponding metadata in some of the files.
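For those who prefer scripting it, here’s a minimal PowerCLI sketch of the rename-then-migrate sequence. The vCenter name and destination datastore below are hypothetical placeholders, so adjust for your environment:

  # Rename the VM in the vCenter inventory, then Storage VMotion it so the
  # destination folder and files pick up the new name on the fly
  Connect-VIServer -Server vcenter.example.com
  Get-VM -Name "old_name" | Set-VM -Name "new_name" -Confirm:$false
  Get-VM -Name "new_name" | Move-VM -Datastore (Get-Datastore "Datastore02")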

Update 9/27/11: As reported by Gary below and validated in my lab, this trick no longer works in vSphere 5.0 with respect to file names within the folder. As an example, after renaming the VM in vCenter inventory and then subsequently Storage vMotioning the VM, the destination folder name will match the VM, however the .vmx and .vmdk files inside will not. This is unfortunate as I have used this trick many times.

Update 11/7/12: Over a year later, vSphere 5.1 is shipping and this feature is still disabled. VMware KB Article 2008877 has not been updated since the launch of vSphere 5.1. If I were a customer, I’d be upset. As an avid user of the product, I’m upset as much about the carelessness and complacency of VMware as I am about the disabling of the feature.

Update 12/21/12: Duncan Epping reports Storage vMotion file renaming is back in vSphere 5.0 Update 2.  Read more about that here.  This is a wonderful birthday present for me.

Update 1/25/13: Duncan Epping further clarifies that Storage vMotion file renaming in vSphere 5.0 Update 2 requires an advanced setting to be added in vCenter (add the key “provisioning.relocate.enableRename” with value “true” and click “Add”). Read more about that here. Duncan further hints that Storage vMotion file renaming may be coming to vSphere 5.1 Update 1. No promises of course and this is all just speculation.

Update 4/30/13: Duncan’s prophecy was realized late last week when VMware released vSphere 5.1 Update 1, which restores Storage vMotion file renaming. As pointed out by Cormac here and similar to the update above, an advanced setting in vCenter is required (add the key “provisioning.relocate.enableRename” with value “true” and click “Add”).
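For the PowerCLI-inclined, the same advanced setting can likely be added from the command line. Treat the following as a sketch only, run against an existing vCenter connection; some PowerCLI versions expose vCenter Server settings with a config. prefix, so verify the key name in your environment:

  # Sketch: add the vCenter advanced setting that re-enables Storage vMotion file renaming
  New-AdvancedSetting -Entity $global:DefaultVIServer -Name "provisioning.relocate.enableRename" -Value "true" -Confirm:$false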

8 New ESX(i) 4.0 Patches Released; 7 Critical

September 25th, 2009

Eight new patches have been released for ESX(i) 4.0 (6 for ESX, 2 for ESXi).  Previous versions of ESX(i) are not impacted.

7 of the 8 patches are rated critical and should be evaluated quickly for application in your virtual infrastructure.
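Before scheduling remediation, it can help to see what’s already installed. The following is only a rough PowerCLI sketch with a hypothetical vCenter name; Get-VMHostPatch behavior varies between versions and connection types, so verify it in your own environment:

  # Rough sketch: list patches already installed on each ESX host managed by vCenter
  Connect-VIServer -Server vcenter.example.com
  foreach ($esxHost in Get-VMHost) {
      Write-Host "Patches installed on $($esxHost.Name):"
      Get-VMHostPatch -VMHost $esxHost
  }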

ID: ESX400-200909401-BG Impact: Critical Release date: 2009-09-24 Products: esx 4.0.0 Updates vmx and vmkernel64
This patch fixes some key issues such as:
* Guest operating system shows high memory usage on Nehalem-based systems, which might trigger memory alarms in vCenter.
* The monitor or vmkernel fails when running certain guest operating systems with a 32-bit monitor running in binary translation mode.

See http://kb.vmware.com/kb/1014019 for details

NOTE: Cisco Nexus 1000v customers using VMware Update Manager to patch ESX 4.0 should add an additional patch download URL as described in KB 1013134

ID: ESX400-200909402-BG Impact: Critical Release date: 2009-09-24 Products: esx 4.0.0 Updates VMware Tools
This patch includes the following fixes:
* Updated VMware SVGA and mouse device drivers for supported Linux guest operating systems that use Xorg 7.5.
* PBMs for Debian 5.0.1.
* PBMs for SUSE Linux Enterprise 11 VMI kernel.

See http://kb.vmware.com/kb/1014020 for details

NOTE: Cisco Nexus 1000v customers using VMware Update Manager to patch ESX 4.0 should add an additional patch download URL as described in KB 1013134

ID: ESX400-200909403-BG Impact: Critical Release date: 2009-09-24 Products: esx 4.0.0 Updates bnx2x
This patch fixes the following issues:
* Virtual machines experience a network outage when they run with older versions of VMware Tools (ESX 3.0.x).
* A network outage is experienced if the MTU value is changed on a Broadcom NetXtreme II 10 Gb NIC.
* Unloading the driver causes a host reboot.

See http://kb.vmware.com/kb/1014021 for details

NOTE: Cisco Nexus 1000v customers using VMware Update Manager to patch ESX 4.0 should add an additional patch download URL as described in KB 1013134

ID: ESX400-200909404-BG Impact: Critical Release date: 2009-09-24 Products: esx 4.0.0 Updates ixgbe
This patch fixes the following issue:
* A vSphere ESX host that has NIC teaming configured with the ixgbe driver for the physical NICs might fail if one of the physical NICs goes down.

See http://kb.vmware.com/kb/1014022 for more details

NOTE: Cisco Nexus 1000v customers using VMware Update Manager to patch ESX 4.0 should add an additional patch download URL as described in KB 1013134

ID: ESX400-200909405-BG Impact: HostGeneral Release date: 2009-09-24 Products: esx 4.0.0 Updates perftools
This patch fixes the following issue:
* The esxtop utility might quit with the error message “VMEsxtop_GrpStatsInit() failed” when attempting to monitor network status on ESX.

See http://kb.vmware.com/kb/1014023 for more details

NOTE: Cisco Nexus 1000v customers using VMware Update Manager to patch ESX 4.0 should add an additional patch download URL as described in KB 1013134

ID: ESX400-200909406-BG Impact: Critical Release date: 2009-09-24 Products: esx 4.0.0 Updates hpsa
This patch fixes the following issue:
* A virtual machine might fail after the storage port controller is reset on ESX hosts that have the HPSA driver connected to a SAS array.
* Hosts cannot detect more than two HPSA controllers due to the limited driver heap size.

See http://kb.vmware.com/kb/1014024 for more details

NOTE: Cisco Nexus 1000v customers using VMware Update Manager to patch ESX 4.0 should add an additional patch download URL as described in KB 1013134

ID: ESXi400-200909401-BG Impact: Critical Release date: 2009-09-24 Products: embeddedEsx 4.0.0 Updates Firmware
This patch fixes some key issues such as:
* Guest operating system shows high memory usage on Nehalem-based systems, which might trigger memory alarms in vCenter.
* The monitor or vmkernel fails when running certain guest operating systems with a 32-bit monitor running in binary translation mode.
See http://kb.vmware.com/kb/1014026 for details

NOTE: Cisco Nexus 1000v customers using VMware Update Manager to patch ESXi 4.0 should add an additional patch download URL as described in KB 1013134

ID: ESXi400-200909402-BG Impact: Critical Release date: 2009-09-24 Products: embeddedEsx 4.0.0 Updates Tools
This patch includes the following fixes:
* Updated VMware SVGA and mouse device drivers for supported Linux guest operating systems that use Xorg 7.5.
* PBMs for Debian 5.0.1.
* PBMs for SUSE Linux Enterprise 11 VMI kernel.

See http://kb.vmware.com/kb/1014027 for details

NOTE: Cisco Nexus 1000v customers using VMware Update Manager to patch ESXi 4.0 should add an additional patch download URL as described in KB 1013134

Lab Manager 4 and vDS

September 19th, 2009

VMware Lab Manager 4 enables new functionality: fenced configurations can now span ESX(i) hosts by leveraging vNetwork Distributed Switch (vDS) technology, a new feature in VMware vSphere. Before getting overly excited, remember that vDS is a VMware Enterprise Plus feature and is found only in vSphere. Without vSphere and VMware’s top tier license, vDS cannot be implemented, and thus you wouldn’t be able to enable fenced Lab Manager 4 configurations to span hosts.

Host Spanning is enabled by default when a Lab Manager 4 host is prepared as indicated by the green check marks below:

When Host Spanning is enabled, an unmanageable Lab Manager service VM is pinned to each host where Host Spanning is enabled. This Lab Manager service VM cannot be powered down, suspended, VMotioned, etc.:

One unfortunate side effect of this new Host Spanning technology is that an ESX(i) host will not enter maintenance mode while Host Spanning is enabled. For those new to Lab Manager 4, the cause may not be obvious and it can lead to much frustration. The unmanageable Lab Manager service VM pinned to each Host Spanning enabled host is a running VM, and a running VM will prevent a host from entering maintenance mode. Maintenance mode will hang at the infamous 2% complete status:

The resolution is to first cancel the maintenance mode request. Then, manually disable host spanning in the Lab Manager host configuration property sheet by unchecking the box. Notice the highlighted message in pink telling us that Host Spanning must be disabled in order for the host to enter standby or maintenance mode. Unpreparing the host will also accomplish the goal of removing the service VM but this is much more drastic and should only be done if no other Lab Manager VMs are running on the host:

After reconfiguring the Lab Manager 4 host as described above, vSphere Client Recent Tasks shows the service VM is powered off and then removed by the Lab Manager service account:

At this time, invoke the maintenance mode request and the host will now be able to migrate all VMs off and successfully enter maintenance mode.
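For reference, the final maintenance mode step itself is routine once the service VM is gone. A quick PowerCLI sketch (host name hypothetical) follows:

  # After Host Spanning has been disabled and the Lab Manager service VM removed,
  # evacuate the host and place it into maintenance mode as usual
  Get-VMHost -Name "esx01.example.com" |
      Set-VMHost -State Maintenance -Evacuate -Confirm:$false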

While Lab Manager 4 Host Spanning is a step in the right direction for more flexible load distribution across hosts in a Lab Manager 4 cluster, I find the process for entering maintenance mode counterintuitive, cumbersome, and, at the beginning when I didn’t know what was going on, frustrating. Unsuccessful maintenance mode attempts have always been somewhat mysterious in the past because vCenter Server doesn’t give us much information to pinpoint what’s preventing the maintenance mode. This situation now adds another element to the complexity. VMware should have enough intelligence to disable Host Spanning for us in the event of a maintenance mode request, or at the very least, tell us to shut it off, since it is conveniently and secretly enabled by default during host preparation. Of course, all of this information is available in the Lab Manager documentation, but who reads that, right? :)

After enabling FT on a VM – subtleties to expect

September 16th, 2009

While using VMware vSphere, you may encounter a situation where you cannot edit the memory resource settings (shares, reservations, and limits) for a particular VM on the Resources tab. The memory resource settings section will be completely grayed out. In addition, a label will clearly state “Memory resources-cannot edit” as shown below:

In this particular instance, the underlying cause for this condition is that VMware Fault Tolerance (FT) has been enabled on the FT “primary” VM. The fact that the memory resource settings cannot be modified is by design and is a means to help guarantee the FT “secondary” VM stays in vLockstep with the primary. What has actually happened is that when FT was enabled on the VM, a memory reservation was set equal to the amount of memory configured for the VM. A full memory reservation eliminates the VMkernel swap file for a VM in all cases, not just for FT-enabled VMs.
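You can also see the reservation change from PowerCLI rather than the grayed-out Resources tab. The following is only a sketch, assuming an existing vCenter connection; it compares each VM’s configured memory with its reservation, and an FT-enabled VM will show the two values matching:

  # Sketch: compare configured memory with the memory reservation for each VM
  foreach ($vm in Get-VM) {
      $res = Get-VMResourceConfiguration -VM $vm
      "{0}: {1} MB configured, {2} MB reserved" -f $vm.Name, $vm.MemoryMB, $res.MemReservationMB
  }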

What other subtle changes can you expect when you enable VMware Fault Tolerance (FT) on a VM?

DRS will be disabled for the FT-enabled primary VM, although it may be VMotioned manually in cases where maintenance needs to be performed on the ESX(i) host. FT secondaries may also be migrated by right-clicking the FT primary VM and choosing the Fault Tolerance menu item “Migrate Secondary”:

Thin provisioned disks will be converted to a Thick type:

An FT “secondary” VM will be created on another host in the cluster, consuming CPU and memory on that host. It will share VM storage with the FT “primary”. VM networking is disabled on the FT “secondary” to eliminate the obvious problem of a duplicate machine on the network; however, considerable host-based network traffic will be generated for two purposes:

  1. Initial creation of the FT “secondary” – dedicated VMotion network is used
  2. Continuous FT logging traffic – dedicated FT logging network is used
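On the networking side, the FT logging traffic rides on a VMkernel port flagged for FT logging. A hedged PowerCLI sketch of marking an existing VMkernel port for that purpose follows; the host and port names are hypothetical:

  # Sketch: flag an existing VMkernel port for FT logging traffic
  Get-VMHost -Name "esx01.example.com" |
      Get-VMHostNetworkAdapter -VMKernel -Name "vmk1" |
      Set-VMHostNetworkAdapter -FaultToleranceLoggingEnabled:$true -Confirm:$false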

If the hardware MMU feature exists in the host CPUs (AMD RVI/Intel EPT), the feature is disabled for the VM. This will force a power off of the VM before FT can be enabled.

Storage vMotion will be disabled for the FT enabled VM.

The hypervisor may slow down execution of the FT “primary” VM if the FT “secondary” is not able to keep pace with the FT “primary” using vLockstep technology.

Snapshotting functionality will be disabled. Furthermore and maybe more importantly, backups requiring snapshot technology won’t work.

Virtual hardware that is not compatible with FT will be disabled (e.g. USB, sound).

vSMP (multiple vCPUs) in the VM is not supported; FT can’t be enabled.

Physical RDMs in the VM are not supported; FT can’t be enabled.
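A rough PowerCLI pre-flight check for those last two constraints might look like the following sketch, which simply flags VMs with multiple vCPUs or physical mode RDMs:

  # Sketch: flag VMs that would block FT enablement due to vSMP or physical RDMs
  foreach ($vm in Get-VM) {
      $physicalRdm = Get-HardDisk -VM $vm | Where-Object { $_.DiskType -eq "RawPhysical" }
      if ($vm.NumCpu -gt 1) { Write-Host "$($vm.Name): multiple vCPUs - FT not supported" }
      if ($physicalRdm) { Write-Host "$($vm.Name): physical RDM(s) - FT not supported" }
  }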

For more information on VMware Fault Tolerance, see VMware vSphere™ 4 Fault Tolerance: Architecture and Performance, the VMware vSphere Availability Guide, and Xtravirt’s Disaster Recovery and VMware vSphere 4.0 Fault Tolerance whitepaper.

Thank you Gabe and Brenda

September 14th, 2009

I’d like to take a moment to thank two people, Gabe and Brenda, for their new and continuing friendship. They hail from the Netherlands and the pair are two of the nicest, funniest, and most fun-loving people I’ve met. I was first introduced to them in person earlier this year in Cannes, France during the VMworld Europe 2009 virtualization conference. Gabe was attending as a VMware user and Brenda joined him to study conference attendees in their preferred habitat, as well as for some sightseeing. Being from the U.S., I was quite out of my element while traveling for the first time in France, but they made me feel welcome, teaching me some of the local customs as well as bits and pieces of the French language: “Merci beaucoup” – “Thank you very much” – a valuable phrase for a clueless tourist to individually thank each person for their assistance.

I met up with them again at VMworld 2009 in San Francisco, CA. This time they treated me, my wife, and my kids to a nice Italian dinner Thursday evening after the conclusion of the conference. In addition, they showered my children with authentic Dutch gifts. Gabe and Brenda, if you are reading this, we very much appreciated this – thank you! I hope one day we will meet again so that I can reciprocate. Chances are good, as I’ve mentally committed to attending at least one VMworld annually, expending whatever effort it takes to get there.

Where can you find this dynamic duo?

Brenda maintains a very interesting blog called Virtual Gipsy which offers an anthropologist’s perspective on a tight-knit virtualization community. Follow her on Twitter: @b_renda

Gabe runs an excellent virtualization blog called Gabe’s Virtual World and is particularly good with video editing. Follow him on Twitter: @gabvirtualworld

Saturday Grab Bag

September 12th, 2009

Here’s a collection of quick hits I’ve been meaning to get to. Individually, their content is a bit on the short side for the length I normally like to write so I thought I’d throw them together in a single post and see how it comes out.

Tasks and Events List Lengths

First up is the listing of Tasks and Events in the vSphere Client. Have you ever started troubleshooting an issue in the vSphere Client by looking at the Tasks or Events, only to find the chronological listing doesn’t go back far enough to the date or time you’re looking for? Not finding the entries you’re looking for in the vSphere Client usually means you need to open a PuTTY session and start sifting through logs in /var/log/ or /var/log/vmware/ in the Service Console. The reason for this is that the vSphere Client, by default, is configured to tail the last 100 entries in the Tasks or Events list. You can find this setting in your vSphere Client by choosing “Edit|Client Settings” and then choosing the “Lists” tab:

Simply increase the value from 100 to whatever you’d like, with 1,000 being the highest allowable value. Notice that when this number is increased, you will immediately see more history. In other words, you don’t necessarily have to wait for time to pass and more historical events to accumulate to see the additional rows of information. Also note that this is a vSphere Client setting which is retained client side and applies to both vCenter Server and ESX(i) host connections.
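If you need to reach further back than even 1,000 entries allow, PowerCLI can pull event history directly from vCenter. A quick sketch, assuming an existing Connect-VIServer session:

  # Sketch: retrieve a larger slice of event history than the vSphere Client list shows
  Get-VIEvent -MaxSamples 1000 |
      Sort-Object CreatedTime |
      Select-Object CreatedTime, FullFormattedMessage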

Collecting diagnostic information for VMware products

Like any offering from a software or hardware vendor, VMware products aren’t perfect. During your VMware experience, you may run into a problem which requires the intervention of VMware support. More often than not, VMware is going to ask you to generate a support bundle, which consists of a collection of diagnostic and configuration files and logs. Following this paragraph is a link to VMware KB1008524 which contains links to instructions for creating support bundles for various VMware products. Note that in some cases there are different methods for different versions of the same product. If you choose to create a VMware SR online, it is helpful to have created these log bundles in advance so you can attach them to the SR. If you’ve done VMware support long enough, you already know how to FTP log bundles to VMware after an SR number has been generated.

Collecting diagnostic information for VMware products
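For ESX(i) hosts specifically, PowerCLI can also generate the diagnostic bundles for you. The following is a sketch only; the vCenter name and destination path are hypothetical placeholders:

  # Sketch: generate a support bundle for each host managed by vCenter
  Connect-VIServer -Server vcenter.example.com
  foreach ($esxHost in Get-VMHost) {
      Get-Log -VMHost $esxHost -Bundle -DestinationPath "C:\Temp\SupportBundles"
  }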

New VMware Update Manager won’t download ESX(i) patches

Scenario: You’ve built a new VMware vCenter Server in addition to a new VMware Update Manager Server (VUM). After properly configuring Update Manager as well as the necessary internet, proxy, baseline, and scheduled task settings, VUM proceeds to download Windows, Linux, and application patches, but it won’t download ESX(i) host patches. As I found out by trench experience, the cause is that no ESX(i) hosts have been added to the vCenter Server and thus no hosts are being managed by VUM. You need to add at least one ESX(i) host to vCenter Server before VUM will be triggered to suck down all the host updates. One might then ask why guest patches are being downloaded. The only answer I have for the inconsistent behavior is that ESX(i) host patches are downloaded from VMware, while guest OS and application patches are downloaded from a completely different source, Shavlik. The mechanics behind the download processes obviously differ between the two.
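The fix is simply to get at least one host under vCenter management. A minimal PowerCLI sketch follows; the vCenter name, host, datacenter, and credentials are all hypothetical:

  # Sketch: add a single ESX(i) host to vCenter so VUM begins downloading host patches
  Connect-VIServer -Server vcenter.example.com
  Add-VMHost -Name "esx01.example.com" -Location (Get-Datacenter -Name "Lab") -User root -Password "VMware1!" -Force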

What vCenter Server is this ESX(i) host managed by?

Scenario: You administer a large VMware virtual infrastructure with many vCenter Servers. You need to manage or configure a host or cluster but haven’t the slightest idea what vCenter Server to connect to. You can easily find out by attempting a Virtual Infrastructure Client connection to the host in question. Shortly after providing the necessary host credentials, the IP address of the vCenter Server managing this host will be revealed:

Now, in theory, you could establish a Virtual Infrastructure Client connection to the IP address; however, I don’t like this because it dirties up the cached connection list with IP addresses, which are meaningless short of having them all memorized. I prefer to take it a step further by opening a Command Prompt and using the command ping -a <IP address> to reveal the name of the vCenter Server managing the host:

The command above reveals jarjar.boche.mcse as the vCenter Server which is managing the ESX(i) host I wanted to manage.

I’m sure a PowerShell expert will follow up with a script which makes this process easier, but this is a good example to follow if you don’t have PowerShell or the VI Toolkit (PowerCLI) installed.
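In that spirit, here is a hedged PowerShell/PowerCLI sketch that reads the managing vCenter Server’s IP from the host’s summary and resolves it to a name in one shot. The host name and credentials are hypothetical:

  # Sketch: connect directly to the host, read the IP of the vCenter Server managing it,
  # and resolve that IP to a DNS name (the equivalent of the ping -a step)
  Connect-VIServer -Server "esx01.boche.mcse" -User root -Password "password"
  $hostView = Get-VMHost -Name "esx01.boche.mcse" | Get-View
  [System.Net.Dns]::GetHostEntry($hostView.Summary.ManagementServerIp).HostName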

Top 3 New York Style Cheesecake Offerings

September 5th, 2009
  1. Timberlodge Steakhouse – Easily and consistently the best cheesecake I’ve ever had. Excellence from the whipped topping to the graham cracker crust. Ginormous portions also.
  2. Rainforest Cafe – Had this last night in San Francisco. It doesn’t come real close to Timberlodge cheesecake but it’s pretty good and will definitely do in a pinch.
  3. Fogo De Chao – Had this cheesecake Wednesday night after the Q3 Minneapolis VMUG. I don’t like the hard outer texture as much, however, once I dug in, I found it to be very delicious. It has a tasty graham cracker crust similar to Timberlodge cheesecake and the strawberries and whipped topping were great as well.
  4. Cheesecake Factory – One would think that by virtue of their name, they’d have the best. Not so. I keep going back expecting it will be better and it never is. It’s not a far-fetched idea that one day I’ll find a cheesecake that will push Cheesecake Factory into 4th place. At that point they should really feel ashamed. It’s official, Cheesecake Factory has now fallen to 4th place.