Posts Tagged ‘vCloud Director’

Expanding vCloud Director Transfer Server Storage

December 5th, 2011

Installing vCloud Director 1.5 can be like installing a VCR.  For the most part, you can get through it without reading the instructions.  However, there may be some advanced or obscure features (such as programming the clock or automatically recording a channel) which require knowledge you’ll only pick up by referring to the documentation.  Such is the case with vCD Transfer Server Storage.  Page 13 of the vCloud Director Installation and Configuration Guide discusses Transfer Server Storage as follows:

To provide temporary storage for uploads and downloads, an NFS or other shared storage volume must be accessible to all servers in a vCloud Director cluster. This volume must have write permission for root. Each host must mount this volume at $VCLOUD_HOME/data/transfer, typically /opt/vmware/vcloud-director/data/transfer. Uploads and downloads occupy this storage for a few hours to a day. Transferred images can be large, so allocate at least several hundred gigabytes to this volume.

This is the only VMware documentation I could find covering Transfer Server Storage.  A bit of additional information about Transfer Server Storage is revealed during the initial installation of the vCD cell, which basically states that at that point in time you should configure Transfer Server Storage to point to shared NFS storage for all vCD cells to use; if there is just a single cell, local cell storage may be used:

If you will be deploying a vCloud Director cluster you must mount the shared transfer server storage prior to running the configuration script.  If this is a single server deployment no shared storage is necessary.
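
As an aside, here is a minimal sketch of what mounting that shared volume might look like on each cell prior to running the configuration script.  The NFS server name and export path (nfs01:/export/vcd-transfer) are hypothetical placeholders; substitute your own:

  # Create the mount point and mount the shared NFS export on each vCD cell
  mkdir -p /opt/vmware/vcloud-director/data/transfer
  mount -t nfs nfs01:/export/vcd-transfer /opt/vmware/vcloud-director/data/transfer

  # Add a line like this to /etc/fstab so the mount persists across reboots
  nfs01:/export/vcd-transfer  /opt/vmware/vcloud-director/data/transfer  nfs  defaults  0 0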

Transfer Server Storage is used for uploading and downloading (exporting) vApps.  A vApp is one or more virtual machines with associated virtual disks.  Small vApps in .OVF format may consume 1GB or less depending on their contents.  Larger vApps could be several hundred GBs or beyond.  By default, Transfer Server Storage will draw capacity from /.  Lack of adequate Transfer Server Storage capacity will result in the inability to upload or download vApps (it could also imply you’re out of space on /).  Long story short, if you skipped the brief instructions on Transfer Server Storage during your build of a RHEL 5 vCD cell, at some point you may run short on Transfer Server Storage and, even worse, you could run / out of available capacity.
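
A quick sanity check for this scenario (a sketch assuming the default installation path) is to compare what the transfer directory is consuming against what remains on /:

  df -h /                                            # available capacity on the root filesystem
  du -sh /opt/vmware/vcloud-director/data/transfer   # capacity consumed by Transfer Server Storage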

I ran into just such a scenario in the lab and thought I’d just add a new virtual disk with adequate capacity, create a new mount point, and then adjust the contents of /etc/profile.d/vcloud.sh (export VCLOUD_HOME=/opt/vmware/vcloud-director) to point vCD to the added capacity.  I quickly found out this procedure does not work.  The vCD portal dies and won’t start again.  I did some searching and wound up at David Hill’s vCloud Director FAQ which confirms the transfer folder cannot be moved (Chris Colotti has also done some writing on Transfer Server Storage here in addition to related content I found on the vSpecialist blog).  However, we can add capacity to that folder by creating a new mount at that folder’s location.

I was running into difficulties trying to extend / so I collaborated with Bob Plankers (a Linux and Virtualization guru who authors the blog The Lone Sysadmin) to identify the right steps, in order, to get the job done properly for vCloud Director.  Bob spent his weekend time helping me out with great detail and for that I am thankful.  You rule Bob!

Again, consider the scenario: there is not enough Transfer Server Storage capacity, or Transfer Server Storage has consumed all available capacity on /.  The following steps will grow an existing vCloud Director cell virtual disk by 200GB and then extend the Transfer Server Storage by that amount.  The majority of the steps will be run via SSH, local console, or terminal (a consolidated command sketch follows the list):

  1. Verify rsync is installed. To verify, type rsync followed by enter. All vCD supported versions of RHEL 5 (Updates 4, 5, and 6) should already have rsync installed.  If a minimalist version of RHEL 5 was deployed without rsync, execute yum install rsync to install it (RHN registration required).
  2. Gracefully shut down the vCD Cell.
  3. Now would be a good time to capture a backup of the vCD cell as well as the vCD database if there is just a single cell deployed in the environment.
  4. Grow the vCD virtual disk by 200 GB.
  5. Power the vCD cell back on and at boot time go into single user mode by interrupting GRUB (press an arrow key to move the kernel selection).  Use 'a' to append boot parameters.  Append the word single to the end (use a space separator) and hit enter.
  6. Use # sudo fdisk /dev/sda to partition the new empty space:
    1. Enter ‘n’ (for new partition)
    2. Enter ‘p’ (for primary)
    3. Enter a partition number.  For a default installation of RHEL 5 Update 6, 1 and 2 will be in use so this new partition will likely be 3.
    4. First cylinder… it’ll offer a number, probably the first free cylinder on the disk. Hit enter, accept the default.
    5. Last cylinder… hit enter. It’ll offer you the last cylinder available. Use it all!
    6. Enter ‘x’ for expert mode.
    7. Enter ‘b’ to adjust the beginning sector of the partition.
    8. Enter the partition number (3 in this case).
    9. In this step align the partition to a multiple of 128.  It’ll ask for “new beginning of data” and have a default number. Take that default number and round it up to the nearest number that is evenly divisible by 128. So if the number is 401660, I take my calculator and divide it by 128 to get the result 3137.968. I round that up to 3138 then multiply by 128 again = 401664. That’s where I want my partition to start for good I/O performance, and I enter that.
    10. Now enter ‘w’ to write the changes to disk. It’ll likely complain that it cannot reread the partition table but this is safe to ignore.
  7. Reboot the vCD cell using shutdown -r now
  8. When the cell comes back up, we need to add that new space to the volume group.
    1. pvcreate /dev/sda3 to initialize it as a LVM volume. (If you used partition #4 then it would be /dev/sda4).
    2. vgextend VolGroup00 /dev/sda3 to grow the volume.
  9. Now create a filesystem:
    1. lvcreate --size 199G --name transfer_lv VolGroup00 to create a logical volume 199 GB in size named transfer_lv. Adjust the numbers as needed. Notice we cannot use the entire space available due to slight overhead.
    2. mke2fs -j -m 0 /dev/VolGroup00/transfer_lv to create an ext3 filesystem on that logical volume.  The -j parameter indicates journaled, which is ext3.  The -m 0 parameter tells the OS to reserve 0% of the space for the superuser for emergencies. Normally it reserves 5%, which is a complete waste of 5% of your virtual disk.
  10. Now we need to mount the filesystem somewhere where we can copy the contents of /opt/vmware/vcloud-director/data/transfer first.  mount /dev/VolGroup00/transfer_lv /mnt will mount it on /mnt which is a good temporary spot.
  11. Stop the vCloud Director cell service to close any open files or transactions in flight with service vmware-vcd stop.
  12. rsync -av /opt/vmware/vcloud-director/data/transfer/ /mnt to make an exact copy of what’s there. Mind the slashes, they’re important.
  13. Examine the contents of /mnt to be sure everything from /opt/vmware/vcloud-director/data/transfer was copied over properly.
  14. rm -rf /opt/vmware/vcloud-director/data/transfer/* to delete the file and directory contents in the old default location. If you mount over it, the data will still be there sucking up disk space but you won’t be able to see it (instead you’ll see lost+found). Make sure you have a good copy in /mnt!
  15. umount /mnt to unmount the temporary location.
  16. mount /dev/VolGroup00/transfer_lv /opt/vmware/vcloud-director/data/transfer (all one line) to mount it in the right spot.
  17. df -h to confirm the mount point is there and vCD data (potentially along with transient transfer storage files) is consuming some portion of it.
  18. To auto mount correctly on reboot:
    1. nano -w /etc/fstab to edit the filesystem mount file.
    2. At the very bottom add a new line (but no blank lines between) that looks like the rest, but with our new mount point. Use tab separation between the fields. It should look like this:
      /dev/VolGroup00/transfer_lv /opt/vmware/vcloud-director/data/transfer/ ext3 defaults 1 2
    3. Ctrl-X to quit, ‘y’ to save modified buffer, enter to accept the filename.
  19. At this time we can either start the vCD cell with service vmware-vcd start or reboot to ensure the new storage automatically mounts and the cell survives reboots. If after a reboot the vCD portal is unavailable, it’s probably due to a typo in fstab.
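
For reference, the commands from the list above condense into the following sketch.  It assumes the new partition landed on /dev/sda3 and the default VolGroup00 volume group; adjust device names, volume group, and sizes to match your environment:

  # After growing the virtual disk and creating the aligned partition with fdisk (steps 5-7)
  pvcreate /dev/sda3                                         # initialize the new partition for LVM
  vgextend VolGroup00 /dev/sda3                              # add it to the volume group
  lvcreate --size 199G --name transfer_lv VolGroup00         # carve out the new logical volume
  mke2fs -j -m 0 /dev/VolGroup00/transfer_lv                 # ext3 filesystem, 0% reserved blocks

  service vmware-vcd stop                                    # stop the cell before copying data
  mount /dev/VolGroup00/transfer_lv /mnt                     # temporary mount point
  rsync -av /opt/vmware/vcloud-director/data/transfer/ /mnt  # copy existing transfer data
  rm -rf /opt/vmware/vcloud-director/data/transfer/*         # clear the old location
  umount /mnt
  mount /dev/VolGroup00/transfer_lv /opt/vmware/vcloud-director/data/transfer

  # Append to /etc/fstab so the mount survives reboots:
  # /dev/VolGroup00/transfer_lv /opt/vmware/vcloud-director/data/transfer ext3 defaults 1 2

  service vmware-vcd start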

This procedure, albeit a bit lengthy and detailed, worked well and was the easiest solution for my particular scenario.  There are some other approaches which would work to solve this problem.  One of them would be almost identical to the above but instead of extending the virtual disk of the vCD cell, we could add a new virtual disk with the required capacity and then mount it up.  Another option would be to build a new vCloud Director server with adequate space and then decommission the first vCD server.  This wasn’t an option for me because the certificate key files for the first vCD server no longer existed.

vSphere 5 Configuration Maximums Updated For The Cloud

November 11th, 2011

A few nights ago, Chris Colotti and Dave Hill presented a vCloud Architecture Deep Dive brown bag session.  Among the tips I picked up in that session was a comment from Chris that my most favorite VMware document of all time had been updated within the last 6 weeks – vSphere 5 Configuration Maximums.  Basically what was added was the inclusion of vCloud Director configuration maximums:

Item | Maximum
Virtual machine count | 20,000
Powered-on virtual machine count | 10,000
Organizations | 10,000
Virtual machines per vApp | 64
vApps per organization | 500
Number of networks | 7,500
Hosts | 2,000
vCenter Servers | 25
Virtual Data Centers | 10,000
Datastores | 1,024
Catalogs | 1,000
Media | 1,000
Users | 10,000

If you’ve been following the progression of this document, you will have noticed that VMware has been adding more application layer components to it.  That is because VMware has broadened its cloud platform portfolio, which is fundamentally dependent on vSphere.  Chris mentioned this in his lecture, and I began noticing it a few years ago: vCenter now extends beyond just a tier 2 management application.  It has become a tier 1 cornerstone for other VMware and partner ecosystem cloud applications and infrastructure tools.  Be mindful of this during the design phase and do not neglect its resource and redundancy requirements as you scale your vCloud environment.

Enjoy.  And by the way, Chris has a Dell T310 Server with 20GB RAM for sale.  Check it out.

VMworld 2011 Recap at Nexus Information Systems 9/14

September 12th, 2011

Couldn’t make the big show? No problem!

Join me at Nexus Information Systems Sept. 14th as we recap VMworld 2011! VMworld 2011 took place August 28th – Sept 1st with over 170 unique Breakout Sessions and 30+ Hands On Lab topics offered across four days. We’ll be covering our thoughts on the direction of VMware virtualization, the buzz we observed from the VMware community, and highlights of ecosystem vendors (with a special message from Dell Compellent & others). We’ll cover some specifics on:

  • VMware vSphere 5.0
  • vCloud Director 1.5
  • View 5.0
  • SRM 5.0
  • Tech Previews – AppBlast & Octopus

 

Wednesday, September 14, 2011 from 11:00 AM to 1:00 PM (CT)

Nexus Information Systems
6103 Blue Circle Drive
Hopkins, MN 55343

Lunch will be served

Sign up today!


Virtualization Wars: Episode V – VMware Strikes Back

July 12th, 2011

At 9am PDT this morning, Paul Maritz and Steve Herrod take the stage to announce the next generation of the VMware virtualized datacenter.  Each new product and set of features is impressive in its own right.  Combine them and what you have is a major upgrade of VMware’s entire cloud infrastructure stack.  I’ll highlight the major announcements and some of the detail behind them.  In addition, the embargo and NDA surrounding the vSphere 5 private beta expires.  If you’re a frequent reader of blogs or the Twitter stream, you’re going to be bombarded with information at fire-hose-to-the-face pace, starting now.


 

vSphere 5.0 (ESXi 5.0 and vCenter 5.0)

At the heart of it all is a major new release of VMware’s type 1 hypervisor and management platform.  Increased scalability and new features make virtualizing those last remaining tier 1 applications quantifiable.



ESX and the Service Console are formally retired as of this release.  Going forward, we have just a single hypervisor to maintain and that is ESXi.  Non-Windows shops should find some happiness in a Linux-based vCenter appliance and a sophisticated web client front end.  While these components are not 100% fully featured yet in their debut, they come close.

Storage DRS is the long awaited complement to the CPU and memory based DRS introduced in VMware Virtual Infrastructure 3.  SDRS will coordinate initial placement of VM storage in addition to keeping datastore clusters balanced (space usage and latency thresholds including SIOC integration) with or without the use of SDRS affinity rules.  Similar to DRS clusters, SDRS enabled datastore clusters offer maintenance mode functionality which evacuates (Storage vMotion or cold migration) registered VMs and VMDKs (still no template migration support, c’mon VMware) off of a datastore which has been placed into maintenance mode.  VMware engineers recognize the value of flexibility, particularly when it comes to SDRS operations where thresholds can be altered and tuned on a scheduled basis.  For instance, IO patterns during the day when normal or peak production occurs may differ from nighttime IO patterns when guest-based backups and virus scans occur.  In that case, separate thresholds would be preferred so that SDRS doesn’t trigger based on inappropriate thresholds.

Profile-Driven Storage couples storage capabilities (VASA automated or manually user-defined) to VM storage profile requirements in an effort to meet guest and application SLAs.  The result is the classification of a datastore, from a guest VM viewpoint, as Compatible or Incompatible at the time of evaluating VM placement on storage.  Subsequently, the location of a VM can be automatically monitored to ensure profile compliance.


I mentioned VASA previously, which is a new acronym for vSphere Storage APIs for Storage Awareness.  This new API allows storage vendors to expose the topology, capabilities, and state of the physical device to vCenter Server management.  As mentioned earlier, this information can be used to automatically populate the capabilities attribute in Profile-Driven Storage.  It can also be leveraged by SDRS for optimized operations.

The optimal solution is to stack the functionality of SDRS and Profile-Driven Storage to reduce administrative burden while meeting application SLAs through automated efficiency and optimization.


If you look closely at all of the announcements being made, you’ll notice there is only one net-new release and that is the vSphere Storage Appliance (VSA).  Small to medium business (SMB) customers are the target market for the VSA.  These are customers who seek some of the enterprise features that vSphere offers like HA, vMotion, or DRS but lack the fibre channel SAN, iSCSI, or NFS shared storage requirement.  A VSA is deployed to each ESXi host which presents local RAID 1+0 host storage as NFS (no iSCSI or VAAI/SAAI support at GA release time).  Each VSA is managed by one and only one vCenter Server.  In addition, each VSA must reside on the same VLAN as the vCenter Server.  VSAs are managed by the VSA Manager which is a vCenter plugin available after the first VSA is installed.  Its function is to assist in deploying VSAs, automatically mount NFS exports to each host in the cluster, and provide monitoring and troubleshooting of the VSA cluster.


You’re probably familiar with the concept of a VSA but at this point you should start to notice the differences in VMware’s VSA: integration.  In addition, it’s a VMware supported configuration with “one throat to choke” as they say.  Another feature is resiliency.  The VSAs on each cluster node replicate with each other and if required will provide seamless fault tolerance in the event of a host node failure.  In such a case, a remaining node in the cluster will take over the role of presenting a replica of the datastore which went down.  Again, this process is seamless and is accomplished without any change in the IP configuration of VMkernel ports or NFS exports.  With this integration in place, it was a no-brainer for VMware to also implement maintenance mode for VSAs.  MM comes in two flavors: whole VSA cluster MM or single VSA node MM.

VMware’s VSA isn’t a freebie.  It will be licensed.  The figure below sums up the VSA value proposition:

[Figure: VSA value proposition]

High Availability (HA) has been enhanced dramatically.  Some may say the version shipping in vSphere 5 is a complete rewrite.  What was once foundational Legato AAM (Automated Availability Manager) has finally evolved to scale further with vSphere 5.  New features include the elimination of common issues such as DNS resolution dependencies, node communication over the storage network in addition to the management network, and enhanced failure detection.  Add to that IPv6 support, consolidated logging into one file per host, an enhanced UI, and an enhanced deployment mechanism (as if deployment wasn’t already easy enough, albeit sometimes error prone).

From an architecture standpoint, HA has changed dramatically.  HA has effectively gone from five (5) failover coordinator hosts to just one (1) in a Master/Slave model.  No more is there a concept of Primary/Secondary HA hosts; however, if you still want to think of it that way, it’s now one (1) primary host (the master) and all remaining hosts would be secondary (the slaves).  That said, I would consider it a personal favor if everyone would use the correct version specific terminology – less confusion when assumptions have to be made (not that I like assumptions either, but I digress).

The FDM (fault domain manager) Master does what you traditionally might expect: monitors and reacts to slave host & VM availability.  It also updates its inventory of the hosts in the cluster, and the protected VMs each time a VM power operation occurs.

Slave hosts have responsibilities as well.  They maintain a list of powered on VMs.  They monitor local VMs and forward significant state changes to the Master. They provide VM health monitoring and any other HA features which do not require central coordination.  They monitor the health of the Master and participate in the election process should the Master fail (the host with the most datastores and then the lexically highest moid [99>100] wins the election).

Another new feature in HA is the ability to leverage storage to facilitate the sharing of stateful heartbeat information (known as Heartbeat Datastores) if and when management network connectivity is lost.  By default, vCenter picks two datastores for backup HA communication.  The choices are based on how many hosts have connectivity to each datastore and whether the datastores reside on different arrays.  Of course, a vSphere administrator may manually choose the datastores to be used.  Hosts manipulate HA information on the datastore based on the datastore type.  On VMFS datastores, the Master reads the VMFS heartbeat region.  On NFS datastores, the Master monitors a heartbeat file that is periodically touched by the Slaves.  VM availability is reported by a file created by each Slave which lists the powered on VMs.  Multiple Master coordination is performed by using file locks on the datastore.

As discussed earlier, there are a number of GUI enhancements which were put in place to monitor and configure HA in vSphere 5.  I’m not going to go into each of those here as there are a number of them.  Surely there will be HA deep dives in the coming months.  Suffice it to say, they are all enhancements which stack to provide ease of HA management, troubleshooting, and resiliency.

Another significant advance in vSphere 5 is Auto Deploy which integrates with Image Builder, vCenter, and Host Profiles.  The idea here is centrally managed stateless hardware infrastructure.  ESXi host hardware PXE boots an image profile from the Auto Deploy server.  Unique host configuration is provided by an answer file or VMware Host Profiles (previously an Enterprise Plus feature).  Once booted, the host is added to the vCenter host inventory.  Statelessness is not a newly introduced concept, so the benefits are strikingly similar to, say, ESXi boot from SAN: no local boot disk (right-sized storage, increased storage performance across many spindles), scaling to support many hosts, and decoupling of the host image from the host hardware – statelessness defined.  It may take some time before I warm up to this feature.  Honestly, it’s another vCenter dependency, this one quite critical given the platform services it provides.

For a more thorough list of anticipated vSphere 5 “what’s new” features, take a look at this release from virtualization.info.

 

vCloud Director 1.5

Up next is a new release of vCloud Director version 1.5 which marks the first vCD update since the product became generally available on August 30th, 2010.  This release is packed with several new features.

Fast Provisioning is the space saving linked clone support missing in the GA release.  Linked clones can span multiple datastores and multiple vCenter Servers.  This feature will go a long way in bridging the parity gap between vCD and VMware’s sunsetting Lab Manager product.

3rd party distributed switch support means vCD can leverage virtualized edge switches such as the Cisco Nexus 1000V.

The new vCloud Messages feature connects vCD with existing AMQP based IT management tools such as CMDB, IPAM, and ticketing systems to provide updates on vCD workflow tasks.

vCD originally supported Oracle 10g std/ent Release 2 and 11g std/ent.  vCD now supports Microsoft SQL Server 2005 std/ent SP4 and SQL Server 2008 exp/std/ent 64-bit.  Oracle 11g R2 is now also supported.  Flexibility. Choice.

vCD 1.5 adds support for vSphere 5 including Auto Deploy and virtual hardware version 8 (32 vCPU and 1TB vRAM).  In this regard, VMware extends new vSphere 5 scalability limits to vCD workloads.  Boiled down: Any tier 1 app in the private/public cloud.

Last but not least, vCD integration with vShield IPSec VPN and 5-tuple firewall capability.

vShield 5.0

VMware’s message about vShield is that it has become a fundamental component in consolidated private cloud and multi-tenant VMware virtualized datacenters.  While traditional security infrastructure can take significant time and resources to implement, there’s an inherent efficiency in leveraging security features baked into and native to the underlying hypervisor.


There are no changes in vShield Endpoint, however, VMware has introduced static routing in vShield Edge (instead of NAT) for external connections and certificate-based VPN connectivity.

 

Site Recovery Manager 5.0

Another major announcement from VMware is the introduction of SRM 5.0.  SRM has already been quite successful in providing simple and reliable DR protection for the VMware virtualized datacenter.  Version 5 boasts several new features which enhance functionality.

Replication between sites can be achieved in a more granular per-VM (or even sub-VM) fashion, between different storage types, and it’s handled natively by vSphere Replication (vSR).  There is also more choice in seeding the initial full replica.  The result is a simplified RPO.


Another new feature in SRM is Planned Migration which facilitates the migration of protected VMs from site to site before a disaster actually occurs.  This could also be used in advance of datacenter maintenance.  Perhaps your policy is to run your business 50% of the time from the DR site.  The workflow assistance makes such migrations easier.  It’s a downtime avoidance mechanism which makes it useful in several cases.

Failback can be achieved once the VMs are re-protected at the recovery site and the replication flow is reversed.  It’s simply another push of the big button to go the opposite direction.

Feedback from customers has influenced UI enhancements. Unification of sites into one GUI is achieved without Linked Mode or multiple vSphere Client instances. Shadow VMs take on a new look at the recovery site. Improved reporting for audits.

Other miscellaneous notables are IPv6 support, a performance increase in guest VM IP customization, the ability to execute scripts inside the guest VM (in-guest callouts), new SOAP based APIs on the protected and recovery sides, and a dependency hierarchy for protected multi-tiered applications.

 

In summary, this is a magnificent day for all of VMware as they have indeed raised the bar with their market leading innovation.  Well done!

 

VMware product diagrams courtesy of VMware

Star Wars diagrams courtesy of Wookieepedia, the Star Wars Wiki

Watch VMware Raise the Bar on July 12th

July 11th, 2011

On Tuesday July 12th, VMware CEO Paul Maritz and CTO Steve Herrod are hosting a large campus and worldwide event where they plan to make announcements about the next generation of cloud infrastructure.

The event kicks off at 9am PDT and is formally titled “Raising the Bar, Part V”. You can watch it online by registering here.  The itinerary is as follows:

  • 9:00-9:45 Paul and Steve present – live online streaming
  • 10:00-12:00 five tracks of deep dive breakout sessions
  • 10:00-12:00 live Q&A with VMware cloud and virtualization experts
    • Eric Siebert
    • David Davis
    • Bob Plankers
    • Bill Hill

In addition, by attending live you also have the chance to win a free VMworld pass.  More details on that and how to win here.

I’m pretty excited both personally and for VMware.  This is going to be huge!

The Future of VMware Lab Manager

September 12th, 2010

With the release of VMware vCloud Director 1.0 at VMworld 2010 San Francisco, what’s in store for VMware Lab Manager?  The future isn’t entirely clear to me.  I visualize two potential scenarios:

  1. Lab Manager development and product releases continue in parallel with VMware vCloud Director.  Although the two overlap in functionality in certain areas, they will co-exist on into the future in perfect harmony.
  2. VMware vCloud Director gains the features, popularity, pricing, and momentum needed to obsolete and sunset Lab Manager.

I’ve got no formal bit of information from VMware regarding the destiny of Lab Manager. In lieu of that, I’ve been keeping my ear to the rail trying to pick up clues from VMware body language.  Here are some of the items I’ve got in my notebook thus far:

Development Efforts:  First and foremost, what seems obvious to me is that VMware has all but stopped development of Lab Manager for well over the past year.  Major functionality hasn’t been introduced since the 3.x versions.  Let’s take a look:

4.0 was released in July 2009 to provide compatibility with the recent launch of vSphere – that’s really it.  I don’t count VMware’s half-baked attempt at integrating with the vDS, which they market as DPM for Lab Manager (one problem: the service VMs prevent successful host maintenance mode and, in turn, prevent DPM from working; this bug has existed for over a year with no attempt at a fix).  To further add, the use of the Host Spanning network feature leverages the vDS and implies a requirement for Enterprise Plus licensing on the hosts.  This drives up the sticker price of an already costly development solution by some accounts.

4.0.1 was released in December 2009, again to provide compatibility with vSphere 4.0 Update 1.  VMware markets this release as introducing compatibility with Windows 7 and 2008 R2 (which in and of itself is not a lie), but anyone who knows the products realizes the key enabler was vSphere 4.0 Update 1 and not Lab Manager 4.0.1 itself.

4.0.2 was released in July 2010 to provide compatibility with vSphere 4.1.  No new features to speak of other than what vSphere 4.1 brings to the table.


Are you noticing the pattern?  Development effort is being put forth merely to maintain compatibility with the vSphere releases.  Lab Manager documentation hasn’t been updated since the 4.0 release; the 4.0.1 and 4.0.2 versions both point back to the 4.0 documentation, which means the documentation hasn’t been touched in over a year despite two Lab Manager code releases since then.  Further evidence there has been no recent feature development in the Lab Manager product itself.

This evidence seems to make it clear that VMware is positioning Lab Manager for retirement.  The logical replacement is vCloud Director.  I haven’t heard of large-scale developer layoffs in Palo Alto, so a conclusion could be drawn here that most developer effort was pulled from Lab Manager and put into vCloud Director 1.0 to get it out the door in Q3 2010.

Bug Fixes & Feature Requests:  This really ties into Development Efforts, but due to its weight, I thought it deserved a category of its own.  Lab Manager has acquired a significant following over the years by delivering on its promise of making software development more efficient and cost effective through automation.  Much like datacenter virtualization itself, a number of customers have become dependent on the technology.  As much as VMware has satisfied these customers by maintaining Lab Manager compatibility with vSphere, at the same time customers are getting the short end of the stick.  Customers continue to pay their SnS fees, but the value add of SnS is diminishing as VMware development efforts have slowed to a crawl.  At one time, SnS would net you new features and bug fixes in addition to new versions of the software which provide compatibility with the host platforms.  Instead, the long list of customer feature requests (and great ideas I might add) sits dead in a VMware Communities forum thread like this.  I can almost count on two hands the number of bugs fixed in the last two releases of Lab Manager.  And what about squashing these bugs: this, this, and this?  Almost nothing has changed since Steven Kishi (I believe) exited the role of Director of Product Management for VMware Lab Manager.

Again, this evidence seems to make it clear that VMware is sending Lab Manager off into the sunset.  Hello vCloud Director.

Marketing Efforts:  From my perspective, VMware hasn’t spent much time focusing on Lab Manager marketing.  By a show of customer or partner hands, who has seen a Lab Manager presentation from VMware in the last 6-12 months?  This ties strongly into the Development Efforts point made above.  Why market a product which seems to be well beyond its half-life?  Consistent with the last thought above, marketing has noticeably shifted almost entirely from Lab Manager to vCloud Director.

Chalk up another point for the theory which states Lab Manager will be consumed by vCloud Director.

Lack of Clear Communication:  About the only voice in my head (of which there are many) which reasons Lab Manager might be sticking around (other than a VMware announcement of a Lab Manager video tutorial series which has now gone stale) is the fact that VMware has not made it formally and publicly clear that Lab Manager is being retired or replaced by vCloud Director.  Although I’m making a positive point here for the going concern of Lab Manager, I think there is ultimately an expiration date on Lab Manager in the not so distant future.  If you understand the basics of vCloud Director or if you have installed and configured it, you’ll notice similarities between it and Lab Manager.  But there is not 100% coverage of Lab Manager functionality and integration.  Until VMware can provide that seamless migration, they obviously aren’t going to pull the plug on Lab Manager.  Quite honestly, I think this is the most accurate depiction of where we’re sitting today.  VMware has a number of areas to address before vCloud Director can successfully replace Lab Manager.  Some are technical, such as closing that feature gap between the two products.  Some are going to be political/marketing based.  Which customers are ready to replace a tried and true solution with a version 1.0 product?  Some may be cost based.  Will VMware take a 1:1 trade-in on Lab Manager for vCloud Director or will there be an uplift fee?  Will Enterprise Plus licensing be a requirement for future versions of vCloud Director?  vCloud Director 1.0 requires Enterprise Plus licensing according to the VMware product’s ‘buy’ page.  Some will be a hybrid.  For instance, existing Lab Manager customers rely on a MS SQL (Express) database.  vCloud Director 1.0 is backed by Oracle, a costly platform Lab Manager customers won’t necessarily already have in terms of infrastructure and staff.


In summary, this point is an indicator that both Lab Manager and vCloud Director will exist in parallel; however, the signs can’t be ignored that Lab Manager is coasting on fumes.  Its ongoing presence and customer base will require support and future compatibility upgrades from VMware.  Maintaining support for two technologies is more expensive for VMware than maintaining just one.  A larger risk for VMware and customers may be that development efforts for vSphere have to slow down to allow Lab Manager to keep pace.  Even worse, new technology doesn’t see the light of day in vSphere because it cannot be made backward compatible with Lab Manager.  Unless we see a burst in development or marketing for Lab Manager, we may be just a short while away from a formal announcement from VMware stating the retirement of Lab Manager along with the migration plan for Lab Manager customers to become vCloud Director customers.

What are your thoughts?  I’d like to hear some others weigh in.  Please be sure to not disclose any information which would violate an NDA agreement.

Update 2/14/11: VMware has published a VMware vCenter Lab Manager Product Lifecycle FAQ for its current customers which fills in some blanks.  Particularly:

What is the future of the vCenter Lab Manager product line?

As customers continue to expand the use of virtualization both inside the datacenter and outside the firewall, we are focusing on delivering infrastructure solutions that can support these expanded scalability and security requirements. As a result, we have decided to discontinue additional major releases of vCenter Lab Manager. Lab Manager 4 will continue to be supported in line with our General Support Policy through May 1, 2013.

When is the current end-of-support date for vCenter Lab Manager 4?

For customers who are current on SnS, General Support has been extended to May 1, 2013.

Are vCenter Lab Manager customers eligible for offers to any new products?

To provide Lab Manager customers with the opportunity to leverage the scale and security of vCloud Director, customers who are active on SnS may exchange their existing licenses of Lab Manager to licenses of vCloud Director at no additional cost. This exchange program is entirely optional and may be exercised anytime during Lab Manager’s General Support period. This provides customers the freedom and flexibility to decide whether and when to implement a secure enterprise hybrid cloud.

The Primary License Administrator can file a Customer Service Request to request an exchange of licenses. For more information on the terms and conditions of the exchange, contact your VMware account manager.

Update 6/25/13: VMware notified its customers via email that support for Lab Manager 4.x has been extended:

June 2013

Dear VMware Valued Customers,

VMware is pleased to announce a 1-year extension to the support for VMware vCenter Lab Manager 4.x. As reference, the original end of support date for this product was May 1, 2013. The new official end of support date will be May 21, 2014. This new end of support date aligns with VMware vSphere 4.x (noted in the support lifecycle matrix below as VMware ESX/ESXi 4.x and vCenter Server 4.x) end of support. This new date also allows the vCenter Lab Manager customer base more time to both use the 4.x product and evaluate options for moving beyond vCenter Lab Manager in the near future.

Additional Support Materials: