Posts Tagged ‘VMware Tools’

VMware Tools causes virtual machine snapshot with quiesce error

July 30th, 2016

Last week I was made aware of an issue a customer in the field was having with a data protection strategy using array-based snapshots which in turn leveraged VMware vSphere snapshots with VSS quiesce of Windows VMs. The problem began after installing VMware Tools version 10.0.0 build-3000743 (reported as version 10240 in the vSphere Web Client), which I believe is the version shipped in ESXi 6.0 Update 1b (reported as version 6.0.0, build 3380124 in the vSphere Web Client).

The issue is that creating a VMware virtual machine snapshot with VSS integration fails. The virtual machine disk configuration is simply two .vmdks on a VMFS-5 datastore, but I doubt the symptoms are limited to that configuration.

The failure message shown in the vSphere Web Client is “Cannot quiesce this virtual machine because VMware Tools is not currently available.”  The vmware.log file for the virtual machine also shows the following:

2016-07-29T19:26:47.378Z| vmx| I120: SnapshotVMX_TakeSnapshot start: ‘jgb’, deviceState=0, lazy=0, logging=0, quiesced=1, forceNative=0, tryNative=1, saveAllocMaps=0 cb=1DE2F730, cbData=32603710
2016-07-29T19:26:47.407Z| vmx| I120: DISKLIB-LIB_CREATE : DiskLibCreateCreateParam: vmfsSparse grain size is set to 1 for ‘/vmfs/volumes/51af837d-784bc8bc-0f43-e0db550a0c26/rmvm02/rmvm02-000001.
2016-07-29T19:26:47.408Z| vmx| I120: DISKLIB-LIB_CREATE : DiskLibCreateCreateParam: vmfsSparse grain size is set to 1 for ‘/vmfs/volumes/51af837d-784bc8bc-0f43-e0db550a0c26/rmvm02/rmvm02_1-00000
2016-07-29T19:26:47.408Z| vmx| I120: SNAPSHOT: SnapshotPrepareTakeDoneCB: Prepare phase complete (The operation completed successfully).
2016-07-29T19:26:56.292Z| vmx| I120: GuestRpcSendTimedOut: message to toolbox timed out.
2016-07-29T19:27:07.790Z| vcpu-0| I120: Tools: Tools heartbeat timeout.
2016-07-29T19:27:11.294Z| vmx| I120: GuestRpcSendTimedOut: message to toolbox timed out.
2016-07-29T19:27:17.417Z| vmx| I120: GuestRpcSendTimedOut: message to toolbox timed out.
2016-07-29T19:27:17.417Z| vmx| I120: Msg_Post: Warning
2016-07-29T19:27:17.417Z| vmx| I120: [msg.snapshot.quiesce.rpc_timeout] A timeout occurred while communicating with VMware Tools in the virtual machine.
2016-07-29T19:27:17.417Z| vmx| I120: —————————————-
2016-07-29T19:27:17.420Z| vmx| I120: Vigor_MessageRevoke: message ‘msg.snapshot.quiesce.rpc_timeout’ (seq 10949920) is revoked
2016-07-29T19:27:17.420Z| vmx| I120: ToolsBackup: changing quiesce state: IDLE -> DONE
2016-07-29T19:27:17.420Z| vmx| I120: SnapshotVMXTakeSnapshotComplete: Done with snapshot ‘jgb': 0
2016-07-29T19:27:17.420Z| vmx| I120: SnapshotVMXTakeSnapshotComplete: Snapshot 0 failed: Failed to quiesce the virtual machine (31).
2016-07-29T19:27:17.420Z| vmx| I120: VigorTransport_ServerSendResponse opID=ffd663ae-5b7b-49f5-9f1c-f2135ced62c0-95-ngc-ea-d6-adfa seq=12848: Completed Snapshot request.
2016-07-29T19:27:26.297Z| vmx| I120: GuestRpcSendTimedOut: message to toolbox timed out.
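
For reference, the same quiesced snapshot request can be driven programmatically. Below is a minimal pyVmomi sketch that mirrors the request shown in the log above; the vCenter hostname, credentials, and VM name are placeholders, and the unverified SSL context is for lab use only.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; do not skip verification in production
si = SmartConnect(host='vcenter.example.com', user='administrator@vsphere.local',
                  pwd='password', sslContext=ctx)

# Locate the VM by name with a container view.
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == 'rmvm02')
view.DestroyView()

# memory=False, quiesce=True mirrors the VSS-quiesced snapshot request above;
# with the faulty Tools build the task errors out with "Failed to quiesce the virtual machine".
task = vm.CreateSnapshot_Task(name='jgb', description='quiesced snapshot test',
                              memory=False, quiesce=True)
Disconnect(si)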

After some digging, I found that VMware had released VMware Tools version 10.0.9 on June 6, 2016. The release notes indicate the root cause has been identified and resolved:

Resolved Issues

Attempts to take a quiesced snapshot in a Windows Guest OS fails
Attempts to take a quiesced snapshot after booting a Windows Guest OS fails

After downloading and upgrading VMware Tools version 10.0.9 build-3917699 (reported as version 10249 in the vSphere Web Client), the customer’s problem was resolved. Since the faulty version of VMware Tools was embedded in the customer’s templates used to deploy virtual machines throughout the datacenter, there were a number of VMs needing their VMware Tools upgraded, as well as the templates themselves.
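
Finding every VM still carrying the faulty Tools build is easy to script. Here is a short pyVmomi sketch that reuses a ServiceInstance si connected as in the earlier example; the 10240 build number is the value reported in the vSphere Web Client per the post above.

from pyVmomi import vim

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
for vm in view.view:
    tools = vm.config.tools if vm.config else None
    if tools and tools.toolsVersion == 10240:
        # Candidates for an upgrade; vm.UpgradeTools_Task() could automate the fix.
        print(vm.name, vm.guest.toolsVersionStatus)
view.DestroyView()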

vMA 5.1 Patch 1 Released

April 5th, 2013

Expendable news item here only worthy of a Friday post.  For those who may have missed it, VMware has released an update to the vSphere Management Assistant (vMA) 5.1 appliance formally referred to as Patch 1.  This release is documented in VMware KB 2044135 and the updated appliance bits can be downloaded here.  Log in, choose the VMware vSphere link, then the Drivers & Tools tab.

Patch 1 bundles the following enhancements:

  • The base operating system is updated to SUSE Linux Enterprise Server 11 SP2 (12-Jan-2013).
  • JRE is updated to JRE 1.6.0_41, which includes several critical fixes.
  • VMware Tools is updated to 8.3.17 (build 870839).
  • A resxtop connection failure issue has been fixed.
    In vMA 5.1, resxtop SSL verification checks have been enabled. This might cause resxtop to fail when connecting to hosts and display an exception message similar to the following:
    HTTPS_CA_FILE or HTTPS_CA_DIR not set.
    This issue is fixed through this patch.

Update VMware Tools via Windows System Tray

May 31st, 2012

A Windows platform owner may inquire why he or she is unable to update an out-of-date VMware Tools installation using the VMware Tools applet in the system tray.  Clicking the Update Tools button either produces an error similar to Update Tools failed or does nothing at all.

Although the option to update VMware Tools is generally available via the system tray, the functionality is disabled by default in the VM shell.  The solution can be found in VMware KB 2007298, Updating VMware Tools fails with the error: Update Tools failed: edit the virtual machine’s .vmx file.

Shut down the virtual machine and add the following line to the virtual machine’s .vmx configuration file via Edit Settings | Options | General | Configuration Parameters:

isolation.tools.guestInitiatedUpgrade.disable = “FALSE”

Power on the virtual machine.  From this point forward, a VMware Tools update can be successfully performed from within the guest VM.
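
For more than a handful of VMs, hand-editing .vmx files doesn’t scale. The same change can be applied through the API; below is a hedged pyVmomi sketch assuming a vm object located as in the snapshot example earlier on this page, and a powered-off VM.

from pyVmomi import vim

opt = vim.option.OptionValue(key='isolation.tools.guestInitiatedUpgrade.disable',
                             value='FALSE')
spec = vim.vm.ConfigSpec(extraConfig=[opt])
task = vm.ReconfigVM_Task(spec=spec)
# Wait for the task to finish, then power the VM back on; the system tray
# Update Tools button should now function inside the guest.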

VMware Tools install – A general system error occurred: Internal error

June 16th, 2010

When you invoke the VMware Tools installation via the vSphere Client, you may encounter the error “A general system error occurred: Internal error”.

One thing to check is that the VM shell has the correct operating system selected for the guest operating system type.  For example, a setting of “Other (32-bit)” will cause the error since VMware cannot determine the correct version of the tools to install when the flavor of guest operating system (i.e., Windows or Linux) is unknown.

Other items to verify for this error are listed in VMware KB Article 1004718:

  • The virtual machine has a CD-ROM configured.
  • The windows.iso is present under the /vmimages/tools-iso/ folder.
  • The virtual machine is powered on.
  • The correct guest operating system is selected. For example, if the guest operating system is Windows 2000, ensure you have chosen Windows 2000 and not Other.
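
Most of these checks can be scripted. Here is a small pyVmomi sketch, again assuming a vm object obtained as in the earlier examples.

from pyVmomi import vim

print('Guest OS type:', vm.config.guestId, '-', vm.config.guestFullName)
print('Power state:', vm.runtime.powerState)
has_cdrom = any(isinstance(dev, vim.vm.device.VirtualCdrom)
                for dev in vm.config.hardware.device)
print('CD-ROM present:', has_cdrom)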

Windows 2008 R2 and Windows 7 on vSphere

March 28th, 2010

If you run Windows Server 2008 R2 or Windows 7 as a guest VM on vSphere, you may be aware that it was advised in VMware KB Article 1011709 that the SVGA driver should not be installed during VMware Tools installation.  If I recall correctly, this was due to a stability issue which was seen in specific, but not all, scenarios:

If you plan to use Windows 7 or Windows 2008 R2 as a guest operating system on ESX 4.0, do not use the SVGA drivers included with VMware Tools. Use the standard SVGA driver instead.

Since the SVGA driver is installed by default in a typical installation, it was necessary to perform a custom installation (or perhaps a scripted one) to exclude the SVGA driver for these guest OS types.  Alternatively, perform a typical VMware Tools installation and remove the SVGA driver from Device Manager afterwards.  What you ended up with, of course, was a VM using the Microsoft Windows supplied SVGA driver and not the VMware Tools version shown in the first screenshot.  The Microsoft supplied driver worked and provided stability as well; however, one side effect was that mouse movement via the VMware Remote Console felt a bit sluggish.

Beginning with ESX(i) 4.0 Update 1 (released 11/19/09), VMware changed the behavior and revised the above KB article in February, letting us know that they now package a new version of the SVGA driver in VMware Tools whose bits are copied during a typical installation but not actually enabled:

The most effective solution is to update to ESX 4.0 Update 1, which provides a new WDDM driver that is installed with VMware Tools and is fully supported. After VMware Tools upgrade you can find it in C:\Program Files\Common Files\VMware\Drivers\wddm_video.

After a typical VMware Tools installation, you’ll still see a standard SVGA driver installed.  Following the KB article, head to Windows Device Manager and update the driver to the bits located in C:\Program Files\Common Files\VMware\Drivers\wddm_video:

The result is that the new wddm driver, which ships with the newer version of VMware Tools, is installed.

After a reboot, the crisp and precise mouse movement I’ve become accustomed to over the years with VMware has returned.  The bummer here is that while the appropriate VMware SVGA drivers get installed in previous versions of Windows guest operating systems, Windows Server 2008 R2 and Windows 7 require manual installation steps, much like VMware Tools installation on Linux guest VMs.  Add to this the fact that the automated installation/upgrade of VMware Tools via VMware Update Manager (VUM) does not enable the wddm driver.  In short, getting the appropriate wddm driver installed for many VMs will require manual intervention or scripting.  One thing you can do is to get the wddm driver installed in your Windows Server 2008 R2 and Windows 7 VM templates.  This will ensure VMs deployed from the templates have the wddm driver installed and enabled.
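
For what it’s worth, the in-guest half of that scripting might look like the sketch below. It assumes the VMware Tools installer has already staged the driver files in the documented folder, and that pnputil (bundled with Windows Vista and later, which covers Windows 7 and 2008 R2) is acceptable for the driver injection; the .inf filename is discovered rather than hard coded since I haven’t verified it.

import glob
import subprocess

driver_dir = r'C:\Program Files\Common Files\VMware\Drivers\wddm_video'
for inf in glob.glob(driver_dir + r'\*.inf'):
    # Legacy pnputil syntax: -i installs the driver, -a adds it to the driver store.
    subprocess.run(['pnputil', '-i', '-a', inf], check=True)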

The wddm driver install method from VMware is helpful for the short term; however, it’s not a scalable and robust long term solution.  We need an automated solution from VMware to get the wddm driver installed, and it needs to be integrated with VUM.  I’m interested in finding out what happens with the next VMware Tools upgrade – will the wddm driver persist, or will the VMware Tools upgrade replace the wddm version with the standard version?  Stay tuned.

Update 11/6/10:  While working in the lab tonight, I noticed that with vSphere 4.1, the correct wddm video driver is installed as part of a standard VMware Tools installation on Windows 7 Ultimate x64 – no need to manually replace the Microsoft video driver with VMware’s wddm version as this is done automatically now.

Update 12/10/10: As a follow-up to these tests, I wanted to see what happens when the wddm driver is installed under ESX(i) 4.0 Update 1 and its corresponding VMware Tools, and then the VM is moved to an ESX(i) 4.1 cluster and the VMware Tools are upgraded.  Does the wddm driver remain intact, or will the 4.1 tools upgrade somehow change the driver?  During this test, I opted to use Windows 7 Ultimate 32-bit as the guest VM guinea pig.  A few discoveries were made, one of which was a surprise:

1.  Performing a standard installation of VMware Tools from ESXi 4.0 Update 1 on Windows 7 32-bit automatically installs the wddm driver, version 7.14.1.31, as shown below.  No manual steps forcing a second reboot were required to install this driver.  I wasn’t counting on this; I expected the Standard VGA Graphics Adapter driver to be installed as seen previously.  This is good.

2.  After moving the VM to a 4.1 cluster and performing the VMware Tools upgrade, the wddm driver was left intact; however, its version was upgraded to 7.14.1.40.  This is also good in that the tools upgrade doesn’t negatively impact the desired result of leveraging the wddm driver for best graphics performance.

More conclusive testing should be done with Windows 7 and Windows Server 2008 R2 64-bit to see if the results are the same.  I’ll save this for a future lab maybe.

VMware ESX Guest OS I/O Timeout Settings (for NetApp Storage Systems)

October 29th, 2009

You may already be aware that installing VMware Tools in a Windows VM configures a registry value which controls the I/O timeout for all Windows disks in the event of a short storage outage. This helps the guest operating system survive high latency or temporary outage conditions such as a SAN path failover or a network failure in Ethernet-based storage.  VMware Tools changes the Windows default value of 10 seconds for non-cluster nodes (20 seconds for cluster nodes) to 60 seconds (0x3C hex).

Did you know that disk I/O timeout is a configurable parameter in other guest operating systems as well? And why not? It makes sense that we would want every guest OS to be able to outlast a storage deficiency.

NetApp offers a document titled VMware ESX Guest OS I/O Timeout Settings for NetApp Storage Systems. It’s published as kb41511 and you’ll need a free NetApp NOW account to access the document. This white paper serves a few useful purposes:

  • Defines recommended disk I/O timeout settings for various guest operating systems on NetApp storage systems
  • Defines benchmark disk I/O timeout settings for various guest operating systems which could be used on any storage system, including local SCSI
  • In some cases provides scripts to make the necessary changes
  • Explains the methods to make the disk I/O timeout changes on the following guest operating systems:
    • RHEL4
    • RHEL5
    • SLES9
    • SLES10
    • Solaris 10
    • Windows

Now on the subject of disk I/O timeouts, understand that the above is to be used as a chance to extend the uptime of a VM during adverse storage conditions. As in life, there are no guarantees. A guest OS with high disk I/O activity may not be able to tolerate sustained read and/or write requests for the duration of the timeout value. Windows guests may freeze or BSOD. Linux guests may go read-only on their root volumes, which requires a reboot. Which brings me to the next point…

A larger timeout value isn’t necessarily better. In extending disk I/O timeout values, we’re applying virtual duct tape to an underlying storage issue which needs further investigation. Given the complex and wide variety of shared storage systems available to the datacenter today, storage issues can be caused by many variables including but not limited to disks (spindles), target controllers, fabric components such as fibre cables, SFP/GBICs, HBAs, fabric switches, zoning, network components such as copper cabling, NICs, network switches, routers, and firewalls. Also keep in mind that while the OS may survive the disk I/O interruption, application(s) running on the OS platform may not.  Applications themselves implement response timeout values, which are likely hard coded and not configurable by a platform or virtualization administrator.

Lastly, try to remember that if you go through the effort of increasing your disk I/O timeout values on Windows guests beyond 60 seconds, future installation of VMware Tools or other applications/updates may reset the disk I/O timeout back to 60 seconds.  What this means is that in medium to large environments, you’re going to need an automated method to deploy custom disk I/O timeout values at least for Windows guests.  For those with NetApp storage, NetApp pushes these standards firmly, along with other VMware best practices which I’ll save for a future blog article.
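
As a starting point for that automation, here is a minimal in-guest sketch. It assumes Python running with administrative rights inside the Windows guest; TimeOutValue under the Disk service key is the registry value VMware Tools manages, and 190 seconds is simply the example custom value from the update below.

import winreg

KEY_PATH = r'SYSTEM\CurrentControlSet\Services\Disk'
TIMEOUT_SECONDS = 190  # example custom value; 60 (0x3C) is the VMware Tools default

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                    winreg.KEY_READ | winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, 'TimeOutValue', 0, winreg.REG_DWORD, TIMEOUT_SECONDS)
    value, _ = winreg.QueryValueEx(key, 'TimeOutValue')
    print('Disk TimeOutValue is now %d seconds' % value)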

Update 4/28/10:  VMware Tools for vSphere installation doesn’t change the disk timeout setting if a custom value was previously set (e.g., 190 seconds)

Update 9/12/11:  See also VMware KB article 1009465 Increasing the disk timeout values for a Linux 2.6 virtual machine
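
For Linux guests, the change described in that KB boils down to writing the per-device SCSI timeout in sysfs. A rough sketch, assuming root privileges and sd* SCSI disks, is below; note the value does not persist across reboots without a udev rule or init script.

import glob

for path in glob.glob('/sys/block/sd*/device/timeout'):
    with open(path, 'w') as f:
        f.write('190')  # seconds; match whatever standard you deploy on Windows guests
    print(path, '-> 190 seconds')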

vSphere Virtual Machine Performance Counters Integration into Perfmon

July 8th, 2009

VMware introduced the VMware Descheduled Time Accounting Service as a new VMware Tools component in ESX 3.0. The goal was to account for inconsistent CPU cycles allocated to the guest VM by the VMkernel, so that standard performance monitoring tools within the guest VM could provide accurate statistics. Although the service was not installed or enabled by default with VMware Tools, and it never escaped the bonds of experimental support status, I found the service to be both stable and reliable, and it was a standard installation component in one of my production datacenters. One caveat was that the service only supported uniprocessor guest VMs having a single vCPU.

The VMware Descheduled Time Accounting Service was deprecated in VMware vSphere. More accurately, it was sort of replaced by a new vSphere feature called Virtual Machine Performance Counters (Integrated into Perfmon). To quote VMware:

“Virtual Machine Performance Counters Integration into Perfmon — vSphere 4.0 introduces the integration of virtual machine performance counters such as CPU and memory into Perfmon for Microsoft Windows guest operating systems when VMware Tools is installed. With this feature, virtual machine owners can do accurate performance analysis within the guest operating system. See the vSphere Client Online Help.”

The vSphere Client Online Help has this to say about Virtual Machine Performance:

“In a virtualized environment, physical resources are shared among multiple virtual machines. Some virtualization processes dynamically allocate available resources depending on the status, or utilization rates, of virtual machines in the environment. This can make obtaining accurate information about the resource utilization (CPU utilization, in particular) of individual virtual machines, or applications running within virtual machines, difficult. VMware now provides virtual machine-specific performance counter libraries for the Windows Performance utility. Application administrators can view accurate virtual machine resource utilization statistics from within the guest operating system’s Windows Performance utility.”

Did you notice the explicit statement about Perfmon? Perfmon is Microsoft Windows Performance Monitor, or perfmon.exe for short. Whereas the legacy VMware Descheduled Time Accounting Service supported both Windows and Linux guest VMs, its successor currently supports Perfmon, and thus Windows guest VMs, only. It seems we’ve gone backwards in functionality from a Linux guest VM perspective. Another pie in the face for shops with Linux guest VMs.

Rant…

I understand that Windows guest VMs are the low hanging fruit for software development and features, but VMware needs to make sure some love is spread through the land of Linux as well. Folks with Linux shops are still struggling with basic concepts such as Linux guest customization as well as flexibility and automation of VMware Tools installation in the Linux guest OS. If VMware is going to tout their support for Linux guest VMs, I’d like to see more of a commitment than what is currently being offered. There’s more to owning a virtualized infrastructure than powering on instances on top of a hypervisor. Building it is the easy part. Managing it can be much more difficult without the right tools. Flexibility and ease of use in the management tools are critical, especially as virtual infrastructures grow.

/Rant…

So, taking a look at a VMware vSphere Windows VM with current VMware Tools, I launched Perfmon. The installation of VMware Tools installs two new Performance Objects along with various associated counters:

  • VM Memory
    • Memory Active in MB
    • Memory Ballooned in MB
    • Memory Limit in MB
    • Memory Mapped in MB
    • Memory Overhead in MB
    • Memory Reservation in MB
    • Memory Shared in MB
    • Memory Shared Saved in MB
    • Memory Shares
    • Memory Swapped in MB
    • Memory Used in MB
  • VM Processor
    • % Processor Time
    • Effective VM Speed in MHz
    • Host processor speed in MHz
    • Limit in MHz
    • Reservation in MHz
    • Shares

Observing some of the counter names, it’s interesting to see that VMware has given us direct insight into the hypervisor resource configuration settings via Performance Monitor from inside the guest VM. While this may be useful for VI Administrators who manage both the VI as well as the guest operating systems, it may be a disservice to VI Administrators in environments where guest OS administration is delegated to another support group. The reason I say this is that some of these new counters disclose an “over commit” or “thin provisioning” of virtual hardware resources which I’d rather not reveal to other support groups. What they don’t know won’t hurt them. Revealing some of the tools in our bag of virtualization tricks may bring about difficult discussions we don’t really want to get into, or perhaps provoke the finger of blame to be perpetually pointed in our direction whenever a guest OS problem is encountered.

I’ve grabbed a few screen shots from my lab which show the disparity between native Perfmon metrics and the new vSphere Virtual Machine Performance Counters. In this example, I compare % Processor Time from Perfmon’s native Processor object against % Processor Time from the VM Processor object which was injected into the VM during the vSphere VMware Tools installation. It’s interesting to note, and you should be able to clearly see it in the graph, that the VM Processor % Processor Time is consistently double that of the native Processor % Processor Time counter. Consider this when you are providing performance information for a guest VM or one of its applications. If you choose the native Perfmon counter, you could be reporting performance data with a 100% margin of error, as shown in the case below. This is significant and, if used for capacity planning purposes, could lead to all sorts of problems.


One other important item to note is that you may recall I said towards the beginning that the legacy VMware Descheduled Time Accounting Service only supported uniprocessor VMs. The same appears to be true for the new vSphere Virtual Machine Performance Counters. In the lab I took a single CPU VM which had the vSphere Virtual Machine Performance Counters, and I adjusted the vCPU count to 4. After powering on with the new vCPU count, the vSphere Virtual Machine Performance Counters disappeared from the pulldown list. VMware needs to address this shortcoming. Performance statistics on vSMP VMs are just as important, if not more important, than performance statistics on uniprocessor VMs. vSMP VM resource utilization needs to be watched more closely for vSMP justification purposes.

So VMware, in summary, here is what needs work with vSphere Virtual Machine Performance Counters:

  1. Must support vSMP VMs
  2. Must support Linux VMs
  3. Support for Solaris VMs would also be nice
  4. More objects: VM Disk and VM Networking

Update: On Friday July 11th, 2009, I received the following email response from Praveen Kannan, a VMware Product Manager. Praveen has given me permission to reprint the response here. It is an encouraging read:

Hi Jason,

I read your recent blog post on the Perfmon integration in vSphere 4.0. I’m the product manager for the feature and wanted to reach out to you on your findings and feedback regarding the feature.

First off, thanks for the detailed post on the intricacies of the feature and the screenshots. I think this post would be very helpful to the community! Much appreciated…

1) note on vmdesched

We’ve deprecated vmdesched in vSphere 4.0 because it was primarily an experimental feature that we didn’t recommend putting in production. More importantly, vmdesched adds overhead to the guest and is not compatible with some of the newer kernels out there and so the Perfmon integration is our answer to improve on the current state and provide accurate CPU accounting to VM owners that can be deployed in production and is integrated well with VMware Tools for out-of-box functionality.

2) Linux support for accurate counters

The Perfmon integration in vSphere 4.0 leverages the guest SDK API to get to the accurate counters from the hypervisor and that is available on Linux GOS as well. All you need is to have the VMware Tools installed to get access to the guest SDK interface. We couldn’t provide something like Perfmon on Linux since there aren’t many broadly used tools/APIs that we can standardize on.

There are some discussions internally to solve the accounting issue on Linux guests in a much simplified manner but I can’t go into the specific details at this time. Rest assured, I can tell you that we are looking into the problem for Linux workloads.

On a side note, the Perfmon implementation exposes the two new counter groups through WMI (you can almost think of the Perfmon integration as a WMI provider that sits on top of the guest SDK interface and provides access to the counters). What this means is that any in-guest agent, benchmarking tool, reporting tool, etc. can quickly adapt to use these “accurate” counters using WMI.

So for Linux guests, you can refer to the guest SDK documentation on how someone can modify their Linux agents, tools etc. to talk to the “accurate” counters. The programming guide for vSphere guest SDK 4.0 is available at http://www.vmware.com/support/developer/guest-sdk/. The list of available perf counters is in Page 11 of the PDF (Accessor functions for VM data).

You can in fact use the older 3.5 version of the guest SDK API as well if you want to implement something that works with existing VI3 environments (yes, this SDK has been around for a while!). The only difference is that the vSphere version of the API has a few extra counters but you will get access to the important counters such as CPU utilization in the older API itself.

3) over commit, thin provisioning counters

Interesting feedback that I’ll take back to engineering :) This is something that we need to think about for sure

4) uni-processor Perfmon?

I’m really surprised with your observations after moving to 4 vCPUs. Not sure what’s going on but AFAIK, we report the _Total (aggregate) of all CPU utilization in one metric in the “VM Processor” counter group in Perfmon. What that means is regardless of how many CPUs in-guest, we do provide the _Total of CPU Utilization. You may have run into a bug. I’ll check with engineering on this anyway to confirm my understanding.

Just so you know we have a “standalone” version of the Perfmon tool that works with existing VI 3.5 environments. We’ve posted details about this experimental tool and the binaries on our performance blog here:

http://communities.vmware.com/blogs/drummonds/2009/06/18/using-perfmon-for-accurate-esx-performance-counters

The reason I mentioned the standalone version is because on my test box running 3.5 with the standalone version of Perfmon, I was able to see the _Total on a 2 vCPU VM. I haven’t yet tested your findings on a vSphere test box, but I’ll look into it…

So to help us investigate this, could you please do the following?

a. re-install VMware tools on a test Windows VM after switching to 4 vCPUs and check if the problem is reproducible

b. if you have the 3.5 version of VMware tools running on a VI3 setup, download the standalone version of the Perfmon tool and install it on a Windows VM and check if the 4-vCPU problem is observed. I haven’t tested the same standalone version of Perfmon on a vSphere 4.0 setup (with 4.0 version of the tools) but I wouldn’t be surprised if the standalone version does work. You may want to snapshot the VM before you attempt this though so you can rollback.

5) more counters such as disk and networking

Some background…our main focus in 4.0 was to solve the immediate customer pain-point, namely the CPU accounting issue inside the guest for VM owners. Also, what we heard is that VI admins didn’t want to give out VI client access to VM owners whenever they wanted to look at “accurate” counters for CPU utilization. In fact, the memory counters in Perfmon were sort of a bonus since it was already available in the guest SDK interface :)

Importantly, other counters when measured inside the guest such as Memory, Disk and Network don’t really suffer from accounting problems (i.e. they are accurate) as compared to CPU utilization numbers captured over a period of time (which may be accounted different due to the scheduling and de-scheduling the hypervisor does). So the numbers for Disk, Memory and Network when captured inside the Windows guest will be the same as the VI client.

However, I do recognize that as more and more customers start using this integration, there will soon be a need for providing disk and network counters as well. This is definitely on my radar to address in a future release.

Hope the information I provided helps in better understanding the Perfmon integration in vSphere 4.0 and also answer some of your questions in the blog post.

Looking forward to your findings with the 4 vCPU VMs. LMK if you have any questions in the interim.

P.S: Do feel free to use the information discussed here for your blog where you deem useful…

Have a good weekend…


Praveen Kannan
Product Manager
VMware, Inc.

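An editorial aside on Praveen’s Linux point: the guest SDK interface he references is exposed inside the guest by the vmGuestLib library that ships with VMware Tools. A rough ctypes sketch follows; the function names come from the Guest SDK programming guide, while the library path and the minimal error handling are assumptions.

import ctypes

lib = ctypes.CDLL('libvmGuestLib.so')  # shipped with VMware Tools in the Linux guest

handle = ctypes.c_void_p()
if lib.VMGuestLib_OpenHandle(ctypes.byref(handle)) != 0:
    raise RuntimeError('VMGuestLib_OpenHandle failed (is VMware Tools installed?)')

lib.VMGuestLib_UpdateInfo(handle)  # refresh the counter snapshot from the hypervisor

used_ms = ctypes.c_uint64()
lib.VMGuestLib_GetCpuUsedMs(handle, ctypes.byref(used_ms))
print('CPU used (ms), as accounted by the hypervisor:', used_ms.value)

lib.VMGuestLib_CloseHandle(handle)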

After some more investigation in another test VM, I replied to Praveen with the following information:

Praveen,

In my previous test, I had a 1 vCPU Windows Server 2003 VM. The VM Memory and VM Processor objects were listed in the pulldown list in perfmon. After upgrading the VM to 4 vCPUs, the VM Memory and VM Processor objects were no longer listed in the pulldown list in perfmon. So you see, the objects were not available, thus the counters (including _Total) were not available.

Today, I deployed a 1 vCPU Windows Server 2003 VM from a 1 vCPU template. When I ran perfmon, the VM Memory and VM Processor objects were missing (VMware Tools was up to date). I closed perfmon and reopened it. Then the two VM objects were there.

Then I upgraded the VM to a 4 vCPU VM. I ran perfmon and both the VM objects were there.

Following that, I encountered more problems. I was able to choose the VM Processor object, but the counters for the object were all missing. Definitely a bug somewhere with these. Please advise.