Posts Tagged ‘vSphere Client’

VMware vCenter as a vCloud Director vApp

February 27th, 2012

Snagit Capture

The way things work out, I tend to build a lot of vCenter Servers in the lab.  Or at least it feels like I do.  I need to test this.  A customer I’m meeting with specifically wants to see that.  I don’t want to taint or impact an existing vCenter Server which may already be dedicated to something else of more importance.  VMware Site Recovery Manager is a good example: each time I bring up an environment I need a pair of vCenter Servers which may or may not be available.  Whatever the reason, I’ve reached the point where I don’t need to experience the build process repeatedly.

The Idea

A while ago, I stood up a private cloud for the Technical Solutions/Technical Marketing group at Dell Compellent.  I saved some time by leveraging that cloud environment to quickly provision platforms I could install vCenter Server instances on.  vCenter Servers as vApps – a fantastic use case.  However, the vCenter installation process is lengthy enough that I wanted something more in terms of automated, cookie-cutter deployment which I didn’t have to spend a lot of time on.  What if I took one of the Windows Server 2008 R2 vApps from the vCD Organization Catalog, deployed it, bumped up the vCPU count and memory, installed the vSphere Client, vCenter Server, licenses, a local MS SQL Express database, and the Dell Compellent vSphere client plug-in (download|demo video), and then added that vApp back to the vCD Organization Catalog?  Perhaps not a configuration supported by VMware or Microsoft, but could I then deploy that vApp as future vCenter instances?  Better yet, build a vApp consisting of a pair of vCenter Servers for the SRM use case?  It sounded feasible.  My biggest concerns were things like vCenter and SQL Express surviving the name and IP address change as part of the vCD customization.


Although I ran into some unrelated customization issues which seemed to have something to do with vCD, Windows Server 2008 R2, and VMXNET3 vNICs (error message: “could not find network adapters as specified by guest customization. Log file is at c:\windows\temp\customize-guest.log.” I’ll save that for a future blog post if I’m able to root cause the problem), the Proof of Concept test results thus far have been successful.  After vCD customization, I was able to add vSphere 5 hosts and continue with normal operations from there.

Initially, I ran into one minor issue: hosts would fall into a Disconnected status approximately two minutes after being connected to the vCenter Server.  This turned out to be a Windows Firewall issue introduced during the customization process.  There were also some red areas under the vCenter Service Status which pointed to the old instance name (most fixes for that are documented well by Rick Vanover here, plus the vCenter Inventory Service cleanup in VMware KB 2009934).

The Conclusion

To The Cloud!  You don’t normally hear that from me on a regular basis, but in this case it fits.  A lengthy and increasingly cumbersome task was made more efficient with vCloud Director and vSphere 5.  Using the Linked Clone feature yields both of its native benefits: Fast Provisioning and Space Efficiency.  I’ll continue to leverage vCD for similar and new use cases where I can.  Lastly, this solution can also be implemented with VMware Lab Manager or simply as a vSphere template.  The caveats are that Lab Manager retires in a little over a year and a vSphere template won’t be as space efficient as a Linked Clone.

How to properly remove vSphere datastores

January 18th, 2012

Right-click on the datastore object and choose Delete, right? Wrong.

Following are two good VMware articles outlining the correct procedure for removing datastores in a vSphere environment:


Path Set for Dell Storage Forum 2012 London

January 11th, 2012

Snagit Capture

In just a few days, Dell Storage Forum 2012 kicks off at the Grange St Paul’s Hotel in London. I will be in attendance and I hope you will have the chance to join me, the rest of the Dell staff, and of course an array of storage customers, channel partners, enthusiasts, and analysts. At DSF your appetite will be satisfied with executive-led keynote sessions, breakout sessions delivered by technical experts, instructor-led training, and hands-on/self-paced labs covering Compellent Storage Center, Dell EqualLogic, and PowerVault storage.

This venue won’t be an exact carbon copy of past DSF events. Dell Storage will be showcasing an updated product roadmap and we’ll also see new product announcements. One of the announcements you’ll hear about is the availability of Compellent Storage Center 6.0. As a Technical Marketing Product Specialist who spends all of my time working on the VMware integration points, this is a release I’ve been looking forward to since starting my career at Dell Compellent in May of last year. This is a significant launch for Dell Compellent from an architectural perspective: SC 6.0 now leverages the 64-bit FreeBSD platform. The 64-bit architecture is the springboard for new features launched this week (such as multithreading opportunities and 12GB of memory per Series 40 controller) and will serve as a key enabler for future scalability, integration, and feature enhancements.

If you’re a current Dell Compellent customer with vSphere 4.1 or newer in your datacenter, you know that through SC 5.5.x we supported one VAAI primitive: Zero Blocks or Write Same. Storage Center 6.0 supports additional VMware vSphere VAAI primitives:

  • Copy Offload
  • Hardware Assisted Locking
  • Block Zeroing (still supported, of course)

On a side note, VMware also released a 4th VAAI primitive in vSphere 5 focusing on Thin Provisioning for block storage arrays.  However, shortly after the release, VMware pulled support on this primitive (applies to all storage vendors) to work out some kinks.  I wrote about that here.

VAAI excites me because of the performance and scalability gains it brings to the vSphere virtual datacenter, in addition to vSphere bolt-ons such as VMware View and vCloud Director.
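If you’d like to verify which primitives a given device actually reports as supported, ESXi 5 exposes this through esxcli.  A quick sketch (the naa device identifier below is a placeholder, not a real device):

```shell
# List VAAI support status for every storage device on an ESXi 5.x host.
# The output reports ATS (Hardware Assisted Locking), Clone (Copy Offload),
# Zero (Block Zeroing), and Delete (Thin Provisioning / UNMAP) status.
esxcli storage core device vaai status get

# Or narrow the output to a single device
# (this naa identifier is a placeholder):
esxcli storage core device vaai status get -d naa.60000000000000000000000000000001
```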

Snagit Capture

Snagit Capture

Compellent SC 6.0 VAAI support:

  • 41% faster block cloning operations on Eager Zeroed Thick and Lazy Zeroed Thick virtual disks
  • 98% faster Eager Zeroed Thick disk creation
  • Up to 100% reduction in Block Zeroing data traffic from host to storage
  • Offloaded operations result in significantly reduced copy traffic between host and storage
  • Offloaded operations result in reduction of ESX(i) host resource and storage fabric utilization

Find more details about VAAI at VMware KB 1021976 vStorage APIs for Array Integration FAQ.

This should be a really great week.  Personally, it will be my first Dell Compellent focused conference.  I do hope to see you there and look forward to some good discussions.  If you’re not able to attend in person, you can use these links to follow the action remotely:

Event Links:

Twitter/Social Media Links:

Other Links:

VMware View 5.0 VDI vHardware 8 vMotion Error

September 20th, 2011

General awareness/heads up blog post here on something I stumbled on with VMware View 5.0.  A few weeks ago while working with View 5.0 BETA in the lab, I ran into an issue where a Windows 7 virtual machine would not vMotion from one ESXi 5.0 host to another.  The resulting error in the vSphere Client was:

A general system error occurred: Failed to flush checkpoint data

I did a little searching and found similar symptoms in VMware KB 1011971, which speaks to an issue that can arise when Video RAM (VRAM) is greater than 30MB for a virtual machine. In my case it was greater than 30MB, but I could not adjust it because it was being managed by the View Connection Server.  At the same time, a VMware source on Twitter volunteered his assistance and quickly came up with some inside information on the issue.  He had me try adding the following line to /etc/vmware/config on the ESXi 5.0 hosts (no reboot required):

migrate.baseCptCacheSize = "16777216"

The fix worked and I was able to vMotion the Windows 7 VM back and forth between hosts.  The information was taken back to Engineering for a KB to be released.  That KB is now available: VMware KB 2005741, vMotion of a virtual machine fails with the error: “A general system error occurred: Failed to flush checkpoint data!”  The KB article lists the following background information and several workarounds:


Due to new features with Hardware Version 8 for the WDDM driver, the vMotion display graphics memory requirement has increased. The default pre-allocated buffer may be too small for certain virtual machines with higher resolutions. The buffer size is not automatically increased to account for the requirements of those new features if mks.enable3d is set to FALSE (the default).


To work around this issue, perform one of these options:

  • Change the resolution to a single screen of 1280×1024 or smaller before the vMotion.
  • Do not upgrade to Virtual Machine Hardware version 8.
  • Increase the base checkpoint cache size. Doubling it from its default 8MB to 16MB (16777216 bytes) should be enough for any single-display resolution. If you are using two displays at 1600×1200 each, increase the setting to 20MB (20971520 bytes). To increase the base checkpoint cache size:

    1. Power off the virtual machine.
    2. Click the virtual machine in the Inventory.
    3. On the Summary tab for that virtual machine, click Edit Settings.
    4. In the virtual machine Properties dialog box, click the Options tab.
    5. Under Advanced, select General and click Configuration Parameters.
    6. Click Add Row.
    7. In the new row, add migrate.baseCptCacheSize to the name column and add 16777216 to the value column.
    8. Click OK to save the change.

    Note: If you don’t want to power off your virtual machine to make this change, you can instead add the parameter to the /etc/vmware/config file on the target host. This applies the option to every VMX process spawned on that host, which happens when vMotion starts a virtual machine on the server.

  • Set mks.enable3d = TRUE for the virtual machine:
    1. Power off the virtual machine.
    2. Click the virtual machine in the Inventory.
    3. On the Summary tab for that virtual machine, click Edit Settings.
    4. In the virtual machine Properties dialog box, click the Options tab.
    5. Under Advanced, select General and click Configuration Parameters.
    6. Click Add Row.
    7. In the new row, add mks.enable3d to the name column and add True to the value column.
    8. Click OK to save the change.
Caution: This workaround increases the overhead memory reservation by 256MB. As such, it may have a negative impact on HA clusters with strict Admission Control. However, this memory is only used if a 3D application is active. If, for example, Aero Basic rather than Aero Glass is used as the window theme, most of the reservation is not used and the memory can be kept available for the ESX host. The reservation still affects HA Admission Control if large multi-monitor setups are used for the virtual machine and if the CPU is older than a Nehalem processor and does not have the SSE4.1 instruction set. In that case, using 3D is not recommended. The maximum recommended resolution for using 3D, regardless of CPU type and SSE4.1 support, is 1920×1200 with dual screens.
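For the host-level variant described in the note above, the edit boils down to appending one line to /etc/vmware/config.  Here is a minimal sketch: the scratch-file path and the idempotency guard are my own additions for illustration; the option name and byte values come from the KB.

```shell
# Sketch of the host-level workaround: append the checkpoint cache size
# option to the host config file, but only once. CONFIG points at a
# scratch file here; on an actual ESXi 5.0 host it would be
# /etc/vmware/config (no host reboot required).
CONFIG="${CONFIG:-/tmp/vmware-config-sketch}"
: > "$CONFIG"                                # start from a clean scratch copy

# The KB's sizes expressed in bytes
echo "16 MB = $((16 * 1024 * 1024)) bytes"   # 16777216
echo "20 MB = $((20 * 1024 * 1024)) bytes"   # 20971520

# Append the setting only if it is not already present (idempotent)
if ! grep -q '^migrate.baseCptCacheSize' "$CONFIG"; then
    echo 'migrate.baseCptCacheSize = "16777216"' >> "$CONFIG"
fi
grep 'migrate.baseCptCacheSize' "$CONFIG"
```

Because the append is guarded, re-running the snippet won’t stack duplicate entries in the file.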

The permanent fix for this issue did not make it into the recent View 5.0 GA release but I expect it will be included in a future release or patch.

Update 12/23/11: VMware released five (5) non-critical patches last week.  One of those patches is ESXi500-201112401-SG which permanently resolves the issues described above.  Full patch details below:

Summaries and Symptoms

This patch updates the esx-base VIB to resolve the following issues:

  • Updates the glibc third party library to resolve multiple security issues.
    The Common Vulnerabilities and Exposures project has assigned the names CVE-2010-0296, CVE-2011-0536, CVE-2011-1071, CVE-2011-1095, CVE-2011-1658, and CVE-2011-1659 to these issues.
  • When a hot spare disk that is added to a RAID group is accessed before the disk instance finishes initialization or if the disk is removed while an instance of it is being accessed, a race condition might occur causing the vSphere Client to not display information about the RAID controllers and the vSphere Client user interface might also not respond for a very long time.
  • vMotion fails with the A general system error occurred: Failed to flush checkpoint data! error message when:
    • The resolution of the virtual machine is higher than 1280×1024 (or lower when a second screen is in use)
    • The guest operating system is using the WDDM driver (Windows 7, Windows 2008 R2, Windows 2008, Windows Vista)
    • The virtual machine is using Virtual Machine Hardware version 8.
  • Creating host profiles of ESXi 5.0 hosts might fail when the host profile creation process is unable to resolve the hostname and IP address of the host via DNS lookup. An error message similar to the following is displayed:
    Call "HostProfileManager.CreateProfile" for object "HostProfileManager" on vCenter Server "<Server_Name>" failed.
    Error extracting indication configuration: [Errno -2] Name or service not known.
  • In vSphere 5.0, Thin Provisioning is enabled by default on devices that adhere to T10 standards. On such thin provisioned LUNs, vSphere issues SCSI UNMAP commands to help the storage arrays reclaim unused space. Sending UNMAP commands might cause performance issues with operations such as snapshot consolidation or storage vMotion.
    This patch resolves the issue by disabling the space reclamation feature, by default.
  • If a user subscribes to an ESXi server’s CIM indications from more than one client (for example, c1 and c2) and deletes the subscription from the first client (c1), the other clients (c2) might fail to receive any indication notifications from the host.
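As an aside (my note, not part of VMware’s patch text): the space reclamation behavior this patch disables by default corresponds to a host advanced option, which I believe is /VMFS3/EnableBlockDelete.  You can inspect it on a host with esxcli:

```shell
# Show the current value of the automatic space reclamation (SCSI UNMAP)
# advanced option on an ESXi 5.x host; after this patch the default
# should be 0 (disabled).
esxcli system settings advanced list -o /VMFS3/EnableBlockDelete
```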

This patch also provides you with the option of configuring the iSCSI initiator login timeout value for software iSCSI and dependent iSCSI adapters.
For example, to set the login timeout value to 10 seconds you can use commands similar to the following:

  • ~ # vmkiscsi-tool -W -a "login_timeout=10" vmhba37
  • ~ # esxcli iscsi adapter param set -A vmhba37 -k LoginTimeout -v 10

The default login timeout value is 5 seconds and the maximum value that you can set is 60 seconds.
We recommend that you change the login timeout value only if suggested by the storage vendor.

Rogue SRM 5.0 Shadow VM Icons

September 13th, 2011

Snagit Capture

One of the new features in VMware SRM 5.0 is Shadow VM Icons.  When VMs are protected at the primary site, these placeholder objects are automatically created in VM inventory at the secondary site.  It may seem like a trivial topic for discussion, but it is important to recognize that these placeholder objects represent datacenter capacity which will be needed and consumed on demand if and when the VMs are powered on during a planned migration or disaster recovery operation within SRM.  In previous versions of SRM, the placeholder VMs simply looked like powered-off virtual machines.  In SRM 5.0, these placeholder VMs get a facelift to provide better clarity about their disposition.  You can see what the Shadow VM Icons look like in the image to the right.

Each SRM Server maintains its own unique SQL database instance in order to track the current state of the environment.  It does a pretty good job of this.  However, at some point you may run into an instance where once-protected SRM VMs are no longer protected (by choice or design), yet they maintain the new Shadow VM Icon look, which can yield a false sense of protection.  If the VMs truly are not protected, they should have no relationship with SRM and thus should not be wearing the Shadow VM Icon.  I ran into this during an SRM upgrade.  I corrected the rogue icon by removing the VM from inventory and re-adding it.  This action is safe to quickly perform on running VMs.

Tech Support Mode Warnings Revisited In vSphere 5

September 2nd, 2011

A few months ago I authored a blog post titled Tech Support Mode Warnings.  It dealt with the yellow balloon warnings attached to a host object in vCenter when Local Tech Support Mode was enabled (as well as Remote Tech Support Mode via SSH).

Unsurprisingly, the warnings are back in vSphere 5, albeit with slightly changed messages.

Configuration Issues

ESXi Shell for the host has been enabled

SSH for the host has been enabled

Snagit Capture

In the previous blog post, I referenced VMware’s KB article which stated there was no way to hide the messages while the offending configuration was in place.  That may have been the official stance but it certainly wasn’t the case from a technical standpoint as there are a few workarounds to suppress the messages.

VMware has shown us a little love in vSphere 5.  Both messages can be suppressed by modifying an Advanced Setting on each host.  Even better, no host reboot or service restart is required.  In my testing, Maintenance Mode was also not required and the change could be performed with running VMs on the host.  If you’re wondering whether this is safe to perform in a running production environment, be sure to take a step back and consider not only the immediate impact of the task, but also the longer-term impact of the change, because by this point you’ve already enabled (or you’re thinking of enabling) the local ESXi Shell and/or remote SSH via the network.  Reference your security plan or hardening guidelines before proceeding.

Following is the tweak to suppress the warnings which I found in VMware KB 2003637:

Snagit Capture

Again, this is performed for each host during the time that it is built or after it is deployed.  In the figure above, the change is made via the vSphere Client, but it can also be scripted via command line with esxcfg-advcfg.
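The command-line form of the same change, per VMware KB 2003637, looks like this; the esxcli equivalent should also work on a vSphere 5 host, and setting the value back to 0 restores the warnings:

```shell
# Suppress the ESXi Shell / SSH enabled warnings on a vSphere 5 host by
# setting the advanced option referenced in KB 2003637 (no reboot required):
esxcfg-advcfg -s 1 /UserVars/SuppressShellWarning

# The equivalent esxcli form:
esxcli system settings advanced set -o /UserVars/SuppressShellWarning -i 1
```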

Somewhat related, in the same yellow balloon area you may also see a host warning message which states “This host currently has no management network redundancy” as shown below:

Snagit Capture

In a production environment, you’ll want to resolve the issue by adding network redundancy for the Management Network.  However, in a lab or test environment, a single Management Network uplink may be acceptable, but you’ll nonetheless want the warning message to disappear.  This warning is squelched by configuring an HA Advanced Option, das.ignoreRedundantNetWarning, with a value of true as shown below.  After that step is completed, Reconfigure for vSphere HA on the host and the warning will disappear.  The Reconfigure for HA step will need to be applied separately to each host with a non-redundant Management Network configuration.

Snagit Capture

Update 9/5/11: Duncan Epping has also written on this subject, back in July. Be sure to bookmark his blog, subscribe to his RSS feed, and follow him on Twitter.  He is a nice guy and very approachable.

Update 10/15/12: Added section for “No Management Network Redundancy” which I should have included to begin with.

Tech Support Mode Warnings

June 23rd, 2011

After enabling Local Tech Support Mode on an ESXi host via the DCUI (Direct Console User Interface), a yellow balloon styled warning will be displayed in the vSphere Client:

The Local Tech Support Mode for the host has been enabled

Likewise, if you’ve enabled Remote Tech Support Mode via SSH, you’ll see:

Remote Tech Support Mode (SSH) for the host has been enabled

Snagit Capture

KB Article 1016205 describes this condition as a security measure.  Adhering to the warnings would be a best practice for a production or high-risk environment.  However, for lab, development, or environments with adequate perimeter security, it may be desirable to have either or both modes enabled, but the warnings throughout the vSphere Client aren’t welcome.

The VMware KB article goes on to say that there is no way to eliminate the warnings while leaving Local or Remote Tech Support Mode enabled.

Disabling Remote Tech Support Mode (SSH) and Local Tech Support Mode is the only way to prevent this warning.

While there may not be an advanced configuration exposed, rebooting the host eliminates the conditional warnings.  It has also been reported in the VMware community forums that restarting the hostd service works as well, though as a side effect it will likely (and temporarily) disconnect the host from a vCenter Server:

/etc/init.d/hostd restart