Posts Tagged ‘Lab Manager’

vSphere 5.0 Update 1 and Related Product Launches

March 16th, 2012

VMware has unveiled point release updates to several of its products tied to the vSphere 5 virtual cloud datacenter platform, plus a few new product launches.

vCenter 5.0 Update 1 – Added support for new guest operating systems such as Windows 8, Ubuntu, and SLES 11 SP2, the usual resolved issues and bug fixes, plus some updates around vRAM licensing limits.  One other notable item – no compatibility at this time with vSphere Data Recovery (vDR) 2.0 according to the compatibility matrix.

ESXi 5.0 Update 1 – Added support for new AMD and Intel processors, Mac OS X Server Lion, updated chipset drivers, resolved issues and bug fixes.  One interesting point to be made here is that according to the compatibility matrix, vCenter 5.0 supports ESXi 5.0 Update 1.  I’m going to stick with the traditional route of always upgrading vCenter before upgrading hosts as a best-practice habit until something comes along to challenge that logic.

vCloud Director 1.5.1 – Added support for vSphere 5.0 Update 1 and vShield 5.0.1, plus RHEL 5 Update 7 as a supported server cell platform.  Enhancements were made around firewall rules, AMQP system notifications, log collection, chargeback retention, resolved issues, and added support for AES-256 encryption on Site-to-Site VPN tunnels (unfortunately no vSphere 5.0 Update 1 <-> vCloud Connector 1.5 support).  Oh yes, sometime over the past few months, VMware Marketing has quietly changed the acronym for vCloud Director from vCD to VCD.  We’ll just call that a new feature for 1.5.1 going forward.  I <3 the Marketing team.

Site Recovery Manager 5.0.1 – Added support for vSphere 5.0 Update 1 plus a “Forced Failover” feature which allows VM recovery in cases where storage arrays fail at the protected site – failures which, in the past, led to unmanageable VMs that could not be shut down, powered off, or unregistered.  Added IP customization for some Ubuntu platforms.  Many bug fixes, oh yes.  VMware also brought back an advanced feature which hasn’t been seen since SRM 4.1: a configurable option, storageProvider.hostRescanCnt, allowing repeated host scans during testing and recovery. This option was removed from SRM 5.0 but has been restored in the Advanced Settings menu in SRM 5.0.1 and can be particularly useful in troubleshooting a failed Recovery Plan. Right-click a site in the Sites view, select Advanced Settings, then select storageProvider. See KB 1008283.  Storage arrays certified on SRM 5.0 (i.e. Dell Compellent Storage Center) are automatically certified on SRM 5.0.1.

View 5.0.1 – Added support for vSphere 5.0 Update 1, new Connection Server, Agent, and Clients, fixed known issues.  Ahh.. let’s go back to that new clients bit.  New bundled Mac OS X client with support for PCoIP!  I don’t have a Mac so those who would admit to calling me a friend will have to let me know how sharp that v1.4 Mac client is.  As mentioned in earlier release notes, Ubuntu got plenty of love this week, including a new View PCoIP version 1.4 client for Ubuntu Linux.  I might just have to deploy an Ubuntu desktop somewhere to test this client.  But wait, there’s more.  New releases of the View client for Android and iPad tablets.  The Android client adds fixes for Ice Cream Sandwich devices, security updates, and updates for the Kindle Fire (I need to get this installed on my wife’s Fire).  The updated iPad client improves both connection times and external display support, but for the most part Apple fans are flipping out simply over something shiny and new.  Lastly, VMware created a one-stop-shop web portal for all client downloads.

vShield 5.0.1 – Again, added support for vSphere 5.0 Update 1, enhanced reporting and export options, new REST API calls, improved audit logs, simplified troubleshooting, improved vShield App policy management as well as HA enhancements, and enablement of Autodeploy through vShield VIB host modules downloadable from vShield Manager.

So… looking at the compatibility matrix with all of these new code drops, my lab upgrade order will look something like this:

1a. View 5.0 –> View 5.0.1

1b. vCD 1.5 –> VCD 1.5.1

1c. SRM 5.0 –> SRM 5.0.1

1d. vShield App/Edge/Endpoint 5.0 –> 5.0.1

1e. vDR 2.0 –> Go Fish

2. vSphere Client 5.0.1 (not really an upgrade; it installs in parallel with other versions)

3. vCenter Server 5.0 –> vCenter Server 5.0 Update 1

4. Update Manager 5.0 –> Update Manager 5.0 Update 1

5. ESXi 5.0 –> ESXi 5.0 Update 1
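For what it’s worth, the reasoning behind that ordering can be sketched as a simple dependency sort: upgrade everything that talks to vCenter first, then vCenter itself, then the layers beneath it.  The dependency edges below are my own reading of the compatibility matrix, not an official VMware list:

```python
# Sketch of the lab upgrade order as a dependency sort.  The edges are
# illustrative, taken from my reading of the compatibility matrix.
from graphlib import TopologicalSorter

# Each key lists the products that must be upgraded before it.
deps = {
    "vCenter Server 5.0 Update 1": {
        "View 5.0.1",
        "VCD 1.5.1",
        "SRM 5.0.1",
        "vShield 5.0.1",
        "vSphere Client 5.0.1",
    },
    "Update Manager 5.0 Update 1": {"vCenter Server 5.0 Update 1"},
    "ESXi 5.0 Update 1": {"Update Manager 5.0 Update 1"},
}

# static_order() yields products with no outstanding prerequisites first.
order = list(TopologicalSorter(deps).static_order())
print(" -> ".join(order))
```

The solution-level products (View, VCD, SRM, vShield) come out in no particular order relative to one another, which matches the 1a–1e grouping above.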

There are a lot of versions in play here, which weaves somewhat of a tangled web of compatibility touch points to identify before diving head first into upgrades.  I think VMware has done a great job this time around with releasing products that are, for the most part, compatible with other currently shipping products, which provides more flexibility in tactical approach and timelines.  Add to that, some time ago they migrated the two-dimensional .PDF-based compatibility matrix to an online portal which takes interactive input and customizes the results for the end user.  The only significant things missing in the vSphere 5.0U1 compatibility picture IMO are vCloud Connector, vDR, and, based on the results from the compatibility matrix portal, vCenter Operations (the output showed no compatibility with vSphere 5.x, which didn’t look right to me).  I’ve taken the liberty of creating a component compatibility visual roadmap including most of the popular and currently shipping products for vSphere 5.0 and above.  If you’ve got a significant amount of infrastructure to upgrade, this may help you get the upgrade order sorted out quickly.  One last thing – Lab Manager and ESX customers should pay attention to the Island of Misfit Toys.  In early 2013 the Lab Manager ride comes coasting to a stop.  Lab Manager and ESX customers should be formulating solid migration plans with an execution milestone coming soon.


VMware vCenter as a vCloud Director vApp

February 27th, 2012

The way things work out, I tend to build a lot of vCenter Servers in the lab.  Or at least it feels like I do.  I need to test this.  A customer I’m meeting with wants to specifically see that.  I don’t want to taint or impact an existing vCenter Server which may already be dedicated to something else of more importance.  VMware Site Recovery Manager is a good example.  Each time I bring up an environment I need a pair of vCenter Servers which may or may not be available.  Whatever the reason, I’ve reached the point where I don’t need to experience the build process repeatedly.

The Idea

A while ago, I stood up a private cloud for the Technical Solutions/Technical Marketing group at Dell Compellent.  I saved some time by leveraging that cloud environment to quickly provision platforms I could install vCenter Server instances on.  vCenter Servers as vApps – fantastic use case.  However, the vCenter installation process is lengthy enough that I wanted something more in terms of automated cookie cutter deployment which I didn’t have to spend a lot of time on.  What if I took one of the Windows Server 2008 R2 vApps from the vCD Organization Catalog, deployed it as a vApp, bumped up the vCPU and memory count, installed the vSphere Client, vCenter Server, licenses, a local MS SQL Express database, and the Dell Compellent vSphere client plug-in (download|demo video), and then added that vApp back to the vCD Organization Catalog?  Perhaps not such a supported configuration by VMware or Microsoft, but could I then deploy that vApp as future vCenter instances?  Better yet, build a vApp consisting of a pair of vCenter Servers for the SRM use case?  It sounded feasible.  My biggest concerns were things like vCenter and SQL Express surviving the name and IP address change as part of the vCD customization.
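Once a vApp like this sits in the Organization Catalog, deploying a fresh vCenter boils down to a single instantiateVAppTemplate call against the vCD REST API.  Here’s a rough Python sketch of building that request body; the vApp name, description, and template URL are made-up placeholders, and authentication and error handling are omitted:

```python
# Sketch: build the InstantiateVAppTemplateParams body used to deploy a
# stored vApp from a vCD Organization Catalog via the vCloud 1.5 REST API.
# The hostname and template ID below are hypothetical placeholders.
import xml.etree.ElementTree as ET

VCLOUD_NS = "http://www.vmware.com/vcloud/v1.5"

def instantiation_body(vapp_name, template_href, description=""):
    """Return the XML request body for instantiating a catalog template."""
    ET.register_namespace("", VCLOUD_NS)
    params = ET.Element(
        "{%s}InstantiateVAppTemplateParams" % VCLOUD_NS,
        {"name": vapp_name, "deploy": "true", "powerOn": "true"},
    )
    desc = ET.SubElement(params, "{%s}Description" % VCLOUD_NS)
    desc.text = description
    # Source points at the vApp template previously added to the catalog
    ET.SubElement(params, "{%s}Source" % VCLOUD_NS, {"href": template_href})
    return ET.tostring(params, encoding="unicode")

body = instantiation_body(
    "vcenter-srm-a",
    "https://vcd.lab.local/api/vAppTemplate/vappTemplate-1234",  # hypothetical
    "vCenter Server instance for SRM testing",
)
print(body)
```

The body would then be POSTed to the target vDC’s action/instantiateVAppTemplate URL with the matching vCloud Content-Type header.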


Although I ran into some unrelated customization issues which seemed to have something to do with vCD, Windows Server 2008 R2, and VMXNET3 vNICs (error message: “could not find network adapters as specified by guest customization. Log file is at c:\windows\temp\customize-guest.log.” I’ll save that for a future blog post if I’m able to root cause the problem), the Proof of Concept test results thus far have been successful.  After vCD customization, I was able to add vSphere 5 hosts and continue with normal operations from there.

Initially, I ran into one minor issue: hosts would fall into a disconnected status approximately two minutes after being connected to the vCenter Server.  This turned out to be a Windows Firewall issue which was introduced during the customization process.  Also, there were some red areas under the vCenter Service Status which pointed to the old instance name (most fixes for that documented well by Rick Vanover here, plus the vCenter Inventory Service cleanup at VMware KB 2009934).

The Conclusion

To The Cloud!  You don’t normally hear that from me on a regular basis but in this case it fits.  A lengthy and increasingly cumbersome task was made more efficient with vCloud Director and vSphere 5.  Using the Linked Clone feature yields both of its native benefits: Fast Provisioning and Space Efficiency.  I’ll continue to leverage vCD for similar and new use cases where I can.  Lastly, this solution can also be implemented with VMware Lab Manager or simply as a vSphere template.  The caveats being that Lab Manager retires in a little over a year and a vSphere template won’t be as space efficient as a Linked Clone.

Deploy ESX & ESXi With Hidden Lab Manager 4 Switch

February 17th, 2011

200 million years from now, divers off the west coast of the U.S. will make an incredible discovery.  Miles beneath the Pacific Ocean, in a location once known as the Moscone Center in San Francisco, evidence will emerge which reveals spectacular gatherings that once took place.  Humans from around the globe would assemble semi-annually to celebrate virtualization and cloud technologies from a company named VMware which made its mark throughout history as the undisputed and mostly uncontested leader in its space.  What this company did changed the way mankind did business forever.  Companies and consumers alike were provided with tremendous advantages, flexibility, and cost savings.

At these events, massive amounts of compute resources were harnessed to power “virtual laboratories”.  These laboratories (or labs as they were called for short) were dynamically provisioned on demand and at large scale by the attendees themselves.  Archaeologists in Miami, Florida and Ashburn, Virginia made similar discoveries and they believe that the three sites were somehow linked together for the twice a year event called “VMworld”.  Scientists estimate that the combined amount of resources would easily be able to support the deployment of 50,000+ “virtual machines” in just a few days.

How did they accomplish this?  Without a doubt, by automating.  The fossilized remains suggest they may have used one of their own development products called “Lab Manager” which was first introduced in the year 2006 A.D. and retired by vCloud Director just seven years later in 2013 according to the scriptures.  The Lab Manager product was a special use case tool on which many businesses with internal software development processes flourished, and a whole lot more when it morphed into vCD.  What wasn’t widely shared or known beyond the VMware staff was that it shipped with some special abilities that were locked and hidden.  Scientists believe these abilities assisted in the automated deployment of virtualized ESX and ESXi hosts within Lab Manager.  This was the key to automating the VMworld labs.  Details aren’t 100% complete but there’s enough information such that future researchers may be able to find or synthesize the missing DNA to recreate a functional replica of what once existed.

Disclaimer: What follows is not supported by VMware.  Before you get carried away with excitement, ask yourself if this is something you should be doing in your environment.

The Lab Manager 4 configuration is stored in a SQL Express database installed locally on the Lab Manager 4 server.  To unlock the virtualized ESX(i) support, a hidden switch must be flipped in the database.  Add a row to the “Config” table in the Lab Manager database:

Cat: settings
Name: EsxVmSupportEnabled
Value: 1

This can be accomplished by:

  1. granting a domain account the SysAdmin role using the SQL Server 2005 Surface Area Configuration tool inside the Lab Manager server
  2. and then executing the following query via Microsoft SQL Server Management Studio on a remote SQL 2005 server (or use OSQL locally if you know how that tool works):

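A hypothetical reconstruction of that query in T-SQL, based on the row values above – the database name is an assumption, so verify it against your own Lab Manager install:

```sql
-- Flip the hidden switch: add the EsxVmSupportEnabled row to the Config table.
-- The database name below is an assumption; adjust to match your install.
USE LabManager;

INSERT INTO Config (Cat, Name, Value)
VALUES ('settings', 'EsxVmSupportEnabled', '1');
```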

The next step is to Clear Cache via the Uber Administration Screen in the Lab Manager web interface (this screen is available with or without the above database hack).  How does one get to this uber-admin page?  Log into the Lab Manager web interface as an administrator and click the About hyperlink on the left-edge Support menu.  Once at the About page, use CTRL+U to access the uber-admin page.  Click the Clear Cache button:

SnagIt Capture

Next step.  By virtue of having installed and performed the initial configuration of Lab Manager at this point, it is assumed one has already prepared the Lab Manager hosts with the default Lab Manager Agent.  To facilitate the automated deployment of virtual ESX(i) hosts in Lab Manager, the special ESX-VM support specific Lab Manager agents need to be installed.  To do this, simply Disable your Lab Manager hosts, Unprepare each Lab Manager host, then Prepare again.  Because the hidden database switch was flipped in a previous step, Lab Manager will now install the ESX-VM support specific Lab Manager agent on each ESX(i) host.

The next two steps do not exploit a hidden feature, however, they do need to be followed for virtual ESX(i) deployment.  Navigate to Settings | Guest Customization.  Uncheck the box labeled Only Allow Publishing of Templates With a Version of VMware Tools That Supports Guest Customization.

SnagIt Capture

In the final step, Enterprise Plus customers making use of the vDS must disable host spanning on each Lab Manager host by unchecking the box Enable host for Host Spanning:

SnagIt Capture

Now that the required changes have been made to support virtual ESX(i) hosts in Lab Manager, the resulting changes can be seen within Lab Manager.

Create a new VM Template.  I’ll call this one ESXi 4.  Take a look at the new virtualized VMware ESX(i) guest OS types that are now available for templating and ultimately deployment:

SnagIt Capture

Immediately after creating the base template, select it and choose Properties.  Here we see several new fields for automating the deployment of virtual ESX(i) hosts: Licensing, credentials, shared storage connectivity, and vCenter configuration:

SnagIt Capture

For an ESX guest OS type, an additional field for configuring a VMkernel interface is made available:

SnagIt Capture

Finally, create a Configuration using one or more of the new virtual ESX(i) templates and take a look at the custom buttons that show up:  Configure vPod, Add ESX-VMs to External vCenter, Attach External NFS to ESX-VMs, and Attach External iSCSI to ESX-VMs.  These added functions could be used for manual provisioning post deployment, copying files, or for troubleshooting:

SnagIt Capture

This is enough to get started and experiment with.  Unfortunately, it’s not 100% complete.  What’s missing is a guest customization script which runs inside the virtual ESX(i) host post deployment and contains more of the automation needed to deploy unique and properly configured virtual ESX(i) hosts in Lab Manager.  Perhaps one day these scripts will be discovered and shared, or recreated.

The Future of VMware Lab Manager

September 12th, 2010

With the release of VMware vCloud Director 1.0 at VMworld 2010 San Francisco, what’s in store for VMware Lab Manager?  The future isn’t entirely clear for me.  I visualize two potential scenarios:

  1. Lab Manager development and product releases continue in parallel with VMware vCloud Director.  Although the two overlap in functionality in certain areas, they will co-exist on into the future in perfect harmony.
  2. VMware vCloud Director gains the features, popularity, pricing, and momentum needed to obsolete and sunset Lab Manager.

I’ve got no formal bit of information from VMware regarding the destiny of Lab Manager. In lieu of that, I’ve been keeping my ear to the rail trying to pick up clues from VMware body language.  Here are some of the items I’ve got in my notebook thus far:

Development Efforts:  First and foremost, what seems obvious to me is that VMware has all but stopped development of Lab Manager for well over a year now.  Major functionality hasn’t been introduced since the 3.x version.  Let’s take a look:

4.0 was released in July 2009, which provided compatibility with the recent launch of vSphere – that’s really it. I don’t count VMware’s half-baked attempt at integrating with the vDS, which they market as DPM for Lab Manager (one problem: the service VMs prevent successful host maintenance mode and, in turn, prevent DPM from working; this bug has existed for over a year with no attempt at a fix).  To further add, the Host Spanning network feature leverages the vDS and implies a requirement of Enterprise Plus licensing for the hosts.  This drives up the sticker price of an already costly development solution by some accounts.

4.0.1 was released in December 2009, again to provide compatibility with vSphere 4.0 Update 1. VMware markets this release as introducing compatibility with Windows 7 and 2008 R2 (which in and of itself is not a lie), but anyone who knows the products realizes the key enabler was vSphere 4.0 Update 1 and not Lab Manager 4.0.1 itself.

4.0.2 was released in July 2010 to provide compatibility with vSphere 4.1.  No new features to speak of other than what vSphere 4.1 brings to the table.


Are you noticing the pattern?  Development efforts are being put forth merely to keep up compatibility with the vSphere releases.  Lab Manager documentation hasn’t been updated since the 4.0 release – the 4.0.1 and 4.0.2 versions both point back to the 4.0 documentation, despite two code releases in the interim.  Further evidence there has been no recent feature development in the Lab Manager product itself.

This evidence seems to make it clear that VMware is positioning Lab Manager for retirement.  The logical replacement is vCloud Director.  I haven’t heard of large scale developer layoffs in Palo Alto so a conclusion could be drawn here that most developer effort was pulled from Lab Manager and put on vCloud Director 1.0 to get it out the door in Q3 2010.

Bug Fixes & Feature Requests:  This really ties into Development Efforts, but due to its weight, I thought it deserved a category of its own.  Lab Manager has acquired a significant following over the years by delivering on its promise of making software development more efficient and cost effective through automation.  Much like datacenter virtualization itself, a number of customers have become dependent on the technology.  As much as VMware has satisfied these customers by maintaining Lab Manager compatibility with vSphere, at the same time customers are getting the short end of the stick.  Customers continue to pay their SnS fees but the value add of SnS is diminishing as VMware development efforts slow to a crawl.  At one time, SnS would net you new features and bug fixes in addition to new versions of the software which provide compatibility with the host platforms.  Instead, the long list of customer feature requests (and great ideas, I might add) sits dead in a VMware Communities forum thread like this.  The number of bugs fixed in the last two releases of Lab Manager I can almost count on two hands.  And what about squashing these bugs: this, this, and this?  Almost nothing has changed since Steven Kishi (I believe) exited the role of Director of Product Management for VMware Lab Manager.

Again, this evidence seems to make it clear that VMware is sending Lab Manager off into the sunset.  Hello vCloud Director.

Marketing Efforts:  From my perspective, VMware hasn’t spent much time focusing on Lab Manager marketing.  By a show of customer or partner hands, who has seen a Lab Manager presentation from VMware in the last 6-12 months?  This ties strongly into the Development Efforts point made above.  Why market a product which seems to be well beyond its half-life?  Consistent with the last thought above, marketing has noticeably shifted almost entirely from Lab Manager to vCloud Director.

Chalk up another point for the theory which states Lab Manager will be consumed by vCloud Director.

Lack of Clear Communication:  About the only voice in my head (of which there are many) which reasons Lab Manager might be sticking around (other than a VMware announcement of a Lab Manager video tutorial series which has now gone stale) is the fact that VMware has not made it formally and publicly clear that Lab Manager is being retired or replaced by vCloud Director.  Although I’m making a positive point here for the going concern of Lab Manager, I think there is ultimately an expiration date on Lab Manager in the not so distant future.  If you understand the basics of vCloud Director, or if you have installed and configured it, you’ll notice similarities between it and Lab Manager.  But there is not 100% coverage of Lab Manager functionality and integration.  Until VMware can provide that seamless migration, they obviously aren’t going to pull the plug on Lab Manager.  Quite honestly, I think this is the most accurate depiction of where we’re sitting today.  VMware has a number of areas to address before vCloud Director can successfully replace Lab Manager.  Some are technical, such as getting that 100% gap coverage between the two products from a features standpoint.  Some are going to be political/marketing based.  Which customers are ready to replace a tried and true solution with a version 1.0 product?  Some may be cost based.  Will VMware take a 1:1 trade-in on Lab Manager for vCloud Director or will there be an uplift fee?  Will Enterprise Plus licensing be a requirement for future versions of vCloud Director?  vCloud Director 1.0 requires Enterprise Plus licensing according to the VMware product’s ‘buy’ page.  Some will be a hybrid.  For instance, existing Lab Manager customers rely on a MS SQL (Express) database.  vCloud Director 1.0 is back-ended with Oracle, a costly platform Lab Manager customers won’t necessarily already have in terms of infrastructure and staff.


In summary, this point is an indicator that both Lab Manager and vCloud Director will exist in parallel, however, the signs can’t be ignored that Lab Manager is coasting on fumes.  Its ongoing presence and customer base will require support and future compatibility upgrades from VMware.  Maintaining support on two technologies is more expensive for VMware than maintaining just one.  A larger risk for VMware and customers may be that development efforts for vSphere have to slow down to allow Lab Manager to keep pace.  Even worse, new technology doesn’t see the light of day in vSphere because it cannot be made backward compatible with Lab Manager.  Unless we see a burst in development or marketing for Lab Manager, we may be just a short while away from a formal announcement from VMware stating the retirement of Lab Manager along with the migration plan for Lab Manager customers to become vCloud Director customers.

What are your thoughts?  I’d like to hear some others weigh in.  Please be sure not to disclose any information which would violate an NDA.

Update 2/14/11: VMware has published a VMware vCenter Lab Manager Product Lifecycle FAQ for its current customers which fills in some blanks.  Particularly:

What is the future of the vCenter Lab Manager product line?

As customers continue to expand the use of virtualization both inside the datacenter and outside the firewall, we are focusing on delivering infrastructure solutions that can support these expanded scalability and security requirements. As a result, we have decided to discontinue additional major releases of vCenter Lab Manager. Lab Manager 4 will continue to be supported in line with our General Support Policy through May 1, 2013.

When is the current end-of-support date for vCenter Lab Manager 4?

For customers who are current on SnS, General Support has been extended to May 1, 2013.

Are vCenter Lab Manager customers eligible for offers to any new products?

To provide Lab Manager customers with the opportunity to leverage the scale and security of vCloud Director, customers who are active on SnS may exchange their existing licenses of Lab Manager to licenses of vCloud Director at no additional cost. This exchange program is entirely optional and may be exercised anytime during Lab Manager’s General Support period. This provides customers the freedom and flexibility to decide whether and when to implement a secure enterprise hybrid cloud.

The Primary License Administrator can file a Customer Service Request to request an exchange of licenses. For more information on the terms and conditions of the exchange, contact your VMware account manager.

Update 6/25/13: VMware notified its customers via email that support for Lab Manager 4.x has been extended:

June 2013

Dear VMware Valued Customers,

VMware is pleased to announce a 1-year extension to the support for VMware vCenter Lab Manager 4.x. As reference, the original end of support date for this product was May 1, 2013. The new official end of support date will be May 21, 2014. This new end of support date aligns with VMware vSphere 4.x (noted in the support lifecycle matrix below as VMware ESX/ESXi 4.x and vCenter Server 4.x) end of support. This new date also allows the vCenter Lab Manager customer base more time to both use the 4.x product and evaluate options for moving beyond vCenter Lab Manager in the near future.

Additional Support Materials:

New VMware vCenter Lab Manager Video Tutorial Series

July 8th, 2010

VMware has started a new Lab Manager video series and has kicked things off by posting three inaugural videos:

  1. Lab Manager Introduction and Product Overview
  2. Organizations within vCenter Lab Manager
  3. Workspaces within vCenter Lab Manager

VMware states that the next videos in the series will be:

  • Managing Users and Groups within vCenter Lab Manager
  • Networking within vCenter Lab Manager

The videos are authored by Graham Daly who works for VMware out of the Cork, Ireland office.  The videos are short at well under 10 minutes each and provide introductory level information on Lab Manager components and administrative containers.  If you haven’t used Lab Manager before, it’s enough to get you curious.

KB article 1020915 is going to act as a central location or “one-stop shop” for tutorial-style videos which will discuss and demonstrate the various topics and aspects of the Lab Manager product. As new videos become available, they will be added to the article.

I haven’t seen any books to date on the use of Lab Manager.  From a training and education standpoint, the Lab Manager installation guide and the Lab Manager user’s guide actually aren’t too bad.  Someone last night was looking for advice on Lab Manager training and I recommended printing these two .PDF documents out and sticking them in a 3-ring binder like I did.  You’ll be able to whip through them in a few hours as much of the content is repeated time and again in the user’s guide.  Beyond that, the best Lab Manager training is continuous use of the product.  As I stated last night, Lab Manager is a bit of a different animal, even for a VMware junkie (like me).

Boil down the complexity and black magic of the Lab Manager product by looking at it as a tiered application consisting of

  • virtual infrastructure (ESX(i) and vCenter, you know this already),
  • a web front end (that’s the Lab Manager server, which by the way runs great as a VM),
  • and a database (which also runs on the Lab Manager server and only on the Lab Manager server – yep, it’s local MS SQL Express, and yep, it has scaling and migration issues).

The Tomcat-on-Windows web interface is the front end where Lab Manager environments are built and managed.  The web interface sends tasks to the vCenter Server which in turn commands the ESX(i) hosts (i.e. build this VM, register it, power it on, make a snapshot, now clone it, etc.).  State information and other configuration items are stored in the database.  For obvious reasons, the database and vCenter always need to be on the same page.  When they get out of sync is where hell begins, but I’ll save that discussion for a distant blog post entitled “Lab Manager: fun to build and play with, no fun to troubleshoot”. It’s a lot like Citrix Presentation Server in that respect.

Lab Manager 4 Installation Fails With vSphere VMXNET 3 NIC

November 7th, 2009

A few of the networking requirements for installing a VMware Lab Manager server are:

  1. At least one network card
  2. A static TCP/IP configuration (no DHCP)

Failure to meet the above requirements will result in error message # 5014 during the “Valid NIC Requirement” prerequisite check.

Lab Manager servers make fine virtualization candidates, therefore, it makes sense to deploy them as VMs on existing VMware virtual infrastructure so that they can take advantage of all the benefits VMware brings into the datacenter.

I ran into a new issue installing Lab Manager 4 in a vSphere VM which I configured with a VMXNET 3 virtual NIC. Already aware of the networking requirements, I had configured the virtual NIC with a static TCP/IP address, subnet mask, default gateway, and DNS servers. However, I was surprised to find out that my installation was failing the Valid NIC Requirement prerequisite.

So I resorted to what any certified professional would in this situation: GOOGLE. A quick search revealed scarce results but thankfully one solution. In this VMTN forums thread, a short discussion reveals that the VMXNET 3 virtual NIC is unexpectedly not compatible with the Lab Manager 4 installation prerequisites check. VMTN user MLaskowski012 explains:

“Talked to support and they said they are seeing the same issues. I guess nobody tested LabManager4 with the new hardware. BUT I think I figured out the trick. In device manager under NIC / Advanced if you change the Speed / Duplex from Auto Negotiation 10GB to 1GB Full, run the pre-check it will pass. Then right after you finish the install you can switch back to Auto or 10Gb. Not sure if there are any issues pass that…”

Voila! The trick works. Thank you MLaskowski012 for doing the legwork on this one. Unfortunately, no KB article from VMware yet on this (that I could find), but once again, as it has millions of times in the past, the VMTN community has fulfilled one of its primary purposes: technical support for the community, by the community.

Update 11/8/09: Via lab testing, the same failure and workaround applies to Lab Manager 3 installations with a VMXNET 3 virtual network adapter as well.

Lab Manager 4 and vDS

September 19th, 2009

VMware Lab Manager 4 enables new functionality in that fenced configurations can now span ESX(i) hosts by leveraging vNetwork Distributed Switch (vDS) technology which is a new feature in VMware vSphere. Before getting overly excited, remember that vDS is a VMware Enterprise Plus feature only and it’s only found in vSphere. Without vSphere and VMware’s top tier license, vDS cannot be implemented and thus you wouldn’t be able to enable fenced Lab Manager 4 configurations to span hosts.

Host Spanning is enabled by default when a Lab Manager 4 host is prepared, as indicated by green check marks.

When Host Spanning is enabled, an unmanageable Lab Manager service VM is pinned to each participating host. This service VM cannot be powered down, suspended, VMotioned, etc.

One ill side effect of this new Host Spanning technology is that an ESX(i) host will not enter maintenance mode while Host Spanning is enabled. For those new to Lab Manager 4, the cause may not be so obvious and it can lead to much frustration. The unmanageable Lab Manager service VM pinned to the host counts as a running VM, and a running VM will prevent a host from entering maintenance mode. Maintenance mode will hang at the infamous 2% complete status.

The resolution is to first cancel the maintenance mode request. Then, manually disable Host Spanning in the Lab Manager host configuration property sheet by unchecking the box. Notice the highlighted message in pink telling us that Host Spanning must be disabled in order for the host to enter standby or maintenance mode. Unpreparing the host will also accomplish the goal of removing the service VM, but this is much more drastic and should only be done if no other Lab Manager VMs are running on the host.

After reconfiguring the Lab Manager 4 host as described above, vSphere Client Recent Tasks shows the service VM is powered off and then removed by the Lab Manager service account.

At this time, invoke the maintenance mode request and the host will now be able to migrate all VMs off and successfully enter maintenance mode.

While Lab Manager 4 Host Spanning is a step in the right direction for more flexible load distribution across hosts in a Lab Manager 4 cluster, I find the process for entering maintenance mode counter intuitive, cumbersome, and at the beginning when I didn’t know what was going on, frustrating. Unsuccessful maintenance mode attempts have always been somewhat mysterious in the past because vCenter Server doesn’t give us much information to pinpoint the problem as far as what’s preventing the maintenance mode. This situation now adds another element to the complexity. VMware should have enough intelligence to disable Host Spanning for us in the event of a maintenance mode request, or at the very least, tell us to shut it off since it is conveniently and secretly enabled by default during host preparation. Of course, all of this information is available in the Lab Manager documentation, but who reads that, right? 🙂