Lab Manager 4 and vDS

September 19th, 2009 by jason

VMware Lab Manager 4 introduces new functionality: fenced configurations can now span ESX(i) hosts by leveraging vNetwork Distributed Switch (vDS) technology, a new feature in VMware vSphere. Before getting overly excited, remember that vDS is an Enterprise Plus feature found only in vSphere. Without vSphere and VMware’s top-tier license, vDS cannot be implemented, and thus fenced Lab Manager 4 configurations cannot span hosts.

Host Spanning is enabled by default when a Lab Manager 4 host is prepared as indicated by the green check marks below:

When Host Spanning is enabled, an unmanageable Lab Manager service VM is pinned to each participating host. This service VM cannot be powered down, suspended, migrated with VMotion, etc.:

One ill side effect of this new Host Spanning technology is that an ESX(i) host will not enter maintenance mode while Host Spanning is enabled. For those new to Lab Manager 4, the cause may not be obvious, and it can lead to much frustration. The unmanageable Lab Manager service VM pinned to the host is a running VM that cannot be migrated, and a running VM prevents a host from entering maintenance mode. Maintenance mode will hang at the infamous 2% complete status:
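The blocking rule itself is simple and can be modeled in a few lines. Here is a minimal Python sketch — purely illustrative, with hypothetical VM names and attributes, not VMware’s API — of why the pinned service VM hangs the request:

```python
from dataclasses import dataclass

@dataclass
class VM:
    name: str
    powered_on: bool
    migratable: bool  # the pinned Lab Manager service VM is not

def maintenance_mode_blockers(vms):
    """A host enters maintenance mode only once every powered-on VM
    has been evacuated; a powered-on VM that cannot migrate blocks it."""
    return [vm for vm in vms if vm.powered_on and not vm.migratable]

# A normal VM migrates away, but the pinned service VM blocks the host.
vms = [VM("web01", True, True), VM("LM service VM", True, False)]
blockers = maintenance_mode_blockers(vms)
# blockers contains only the Lab Manager service VM
```

With no blockers the list is empty and evacuation can proceed; with Host Spanning enabled, the service VM is always in that list.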

The resolution is to first cancel the maintenance mode request. Then, manually disable Host Spanning in the Lab Manager host configuration property sheet by unchecking the box. Notice the message highlighted in pink telling us that Host Spanning must be disabled in order for the host to enter standby or maintenance mode. Unpreparing the host will also remove the service VM, but this is much more drastic and should only be done if no other Lab Manager VMs are running on the host:

After reconfiguring the Lab Manager 4 host as described above, vSphere Client Recent Tasks shows the service VM is powered off and then removed by the Lab Manager service account:

Now invoke the maintenance mode request again; the host will migrate all VMs off and successfully enter maintenance mode.
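The whole recovery sequence can be captured as one small helper. This is a sketch only — the `lm` and `vc` client objects and their method names are assumptions standing in for the Lab Manager and vSphere APIs, not real library calls — but it makes the required ordering explicit:

```python
def prepare_host_for_maintenance(lm, vc, host):
    """Recovery order from the steps above (order matters):
    cancel the hung request, disable Host Spanning so Lab Manager
    powers off and removes the pinned service VM, then re-issue
    the maintenance mode request."""
    vc.cancel_maintenance_request(host)  # step 1: clear the hung 2% task
    lm.disable_host_spanning(host)       # step 2: service VM is removed
    vc.enter_maintenance_mode(host)      # step 3: evacuation now succeeds
```

With real clients each call would block on the corresponding task completing; skipping step 2, or running step 3 before it, just reproduces the 2% hang.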

While Lab Manager 4 Host Spanning is a step in the right direction for more flexible load distribution across hosts in a Lab Manager 4 cluster, I find the process for entering maintenance mode counterintuitive, cumbersome, and, back when I didn’t know what was going on, frustrating. Unsuccessful maintenance mode attempts have always been somewhat mysterious because vCenter Server gives us little information to pinpoint what is preventing maintenance mode. This situation adds yet another element to the complexity. VMware should have enough intelligence to disable Host Spanning for us in the event of a maintenance mode request, or at the very least, tell us to shut it off, since it is conveniently and secretly enabled by default during host preparation. Of course, all of this information is available in the Lab Manager documentation, but who reads that, right? 🙂



  1. Alan says:

    I had no idea about this ‘feature’. I was very excited about using the distributed switch as I felt this would finally allow me to VMotion fenced configurations off a host to perform maintenance. I guess this only partially solves that problem. It doesn’t seem much more convenient compared to undeploy – save state for fenced configurations. At least when compared to paying a premium for Enterprise Plus.

  2. pvrajan says:

    You should probably take a look at VMLogix LabManager. They have also added support for host spanning of a fenced configuration – but don’t have any of the limitations mentioned above. Works with vSphere Standard/Advanced/Enterprise/Enterprise Plus as well, and “Enter Maintenance Mode” works seamlessly too.

    Check out their website for a feature comparison with VMware LM:

  3. mandren says:

    While this is a new feature for Lab Manager and other vendors, the Surgient Virtual Automation Platform has supported fenced environments spanning hosts for years. Not only does this work with any level of vSphere, but works on ESX3.5 and Hyper-V as well. Creating maintenance windows works just as well whether spanning hosts or not.

    Check it out:

  4. wayne says:

    Thanks. This is maddening!

    Before adding LM4, vMotion worked well. I ran into a frustrating situation when I wanted to test the dvs. I expected that I could move a VM in a fenced configuration from one host to another. My simple test would be to watch things continue to function during and after the migration.

    In short, I want to manually do in the LM fenced configuration what I would expect DRS to do automatically.

    If I have to disable host spanning before I can manually move a VM to a different host, won’t that disable the distributed fence????

  5. jason says:

    Before LM4, VMotion worked only with VMs on a physical (public) network; it would not work on configurations with private networks. LM4 fixes this by spanning private networks across all hosts with the vDS (distributed switch), in addition to allowing fenced configurations to span hosts.

    The pisser is that enabling host spanning creates a service VM on each host which will not migrate (VMotion). As a result, hosts with one of these service VMs cannot go into maintenance mode or benefit from DPM.

    It’s an unfortunate catch-22. VMware is lacking the obvious intelligence needed to kill the service VM when a call for Maintenance Mode or DPM is made.

    You are correct, disabling host spanning disables the spanning of private networks as well as spanning fenced configurations across hosts and we’re right back to LM3 functionality in that configurations with a private network cannot VMotion and fenced configurations must reside on a single host.

    One step forward, two steps back! 🙂

  6. Sandy says:

    Sounds good but I don’t run vSphere. Anyone got any recommendations for me? Cheers, Sandy