Posts Tagged ‘VirtualCenter’

VMware configuration maximums

December 9th, 2008

Configuration Maximums for Virtual Infrastructure 3 is one of my favorite VMware documents.  It’s a useful document for the VMware evangelist and any VMware VI administrator to have tacked up on the wall of their office as a quick reference.  It’s also handy for identifying platform comparison points of discussion or decision.

The document answers most of the “How many…” and “How much…” type questions about VMware Virtual Infrastructure capabilities (ESX hypervisor, VirtualCenter, guest VMs, etc.).  More than once I’ve used this document as the basis for interactive VMware trivia sessions at our local VMware User Group meetings.  It’s also one of the documents most often updated with each new VI release, so it’s a good one to keep tabs on.

The VI3 documentation page keeps us informed as to the date each document was last updated.  In addition, one of the RSS feeds I subscribe to is VMware, Inc.’s, which lets me know the moment any of the VI documents are updated (at which point I download the updated document for the document repository I maintain).  The Hardware Compatibility List (HCL) documents seem to update almost weekly, which is a good indicator that VMware engineers are hard at work in their labs certifying compatible hardware, thereby expanding the list of hardware we may run our VI on.

The virtualization hypervisors (I never thought about it, but is this the correct plural of hypervisor?) and management tools are evolving rapidly.  VMware, by far the most innovative company in the virtualization arena, must have teams of technical writers keeping product documentation up to date.  For me personally, accurate product documentation is of the utmost importance, and I hope VMware stays on top of it.  Vendor documentation is the gospel for a product: it defines what’s supported and what is not.  Keep yourself informed by reading the vendor documentation once in a while.  Even if you’re not into reading, at least know where the documentation is located for reference purposes.  I promise you the VMware configuration maximums document is an interesting/fun read.

P.S.  For those paying close attention, the scheduled server maintenance was completed this evening.  I am now going out to shovel the snow in the driveway for the 3rd time in 24 hours.

VMware Update Manager plugin failures

December 8th, 2008

Roger Lund posted several links on his blog which I was personally interested in because I have dealt with them in one way, shape, or form. One of them was a potential resolution to the issue where the VIC loses connectivity to VMware Update Manager and the VUM plugin unloads. The error message is “Your session with the VMware Update Manager Server is no longer valid. The VMware Update Manager Client plugin will be unloaded from the VI Client.”


This is an issue that I wouldn’t say I’m plagued with, however, it does pop up every few days and the easy fix is to simply re-enable the VUM plugin. It’s an inconvenience that I wanted to get to the bottom of some day when I had time, but thus far it hasn’t been a high priority. I had checked the VUM logs but was not able to determine anything conclusive.

At any rate, I was excited to see the link on Roger Lund’s blog pointing to VMware KB article 1007099 “Update Manager Client is randomly disabled”. The link discusses a potential solution of disabling anti-virus scanning of the VUM repository (where all the code and metadata is downloaded to). I performed this over the weekend by neutering Symantec Antivirus Corporate Edition and kept my fingers crossed.

Things were looking good until Sunday night when the VUM error popped up again. Oh well, back to the drawing board. If anyone has any other ideas, I’m all ears.

VMware product name changes

December 3rd, 2008

Quick update on a news item you may have already heard about. Remember those VMware product/component decoder rings you might have started working on after the VMworld 2008 announcements? It’s time for an update. VMware announced a handful of product name changes on Monday:

  1. VMware VirtualCenter is now VMware vCenter Server
  2. VMware vCenter is the family name for all management products
  3. VMware Lab Manager is now VMware vCenter Lab Manager (since it is in the management products family)
  4. The VMware vCenter prefix applies to the other products in the management products family as well
  5. VMware View is the family name for all VDI/VDM products
  6. VMware VDI is now VMware View
  7. VMware VDM is now VMware View Manager

I’m not really fond of name changes unless there is a good reason behind them. I’ll give VMware the benefit of the doubt that there was good reason to make these changes, although, not knowing exactly what VMware has up its sleeve, the timing is somewhat debatable. Couldn’t they have waited until the next generation of Virtual Infrastructure to align the products and components? Citrix did this with Presentation Server when it instantly re-branded the product to XenApp, and it confused a lot of people, especially the newcomers. I hope confusion among VMware customers is minimized. It’s going to take a little while for these new names to become second nature for me.

What do you think of the name changes? Feedback is always welcomed here.

Symantec declares VMware VMotion unsupported

November 18th, 2008

Bad news for VMware VI Enterprise customers everywhere. I just found out I have 110 unsupported production and development VMs in my datacenter. Symantec published Document ID 2008101607465248 on 10/15/08 removing VMware VMotion support from its Symantec Antivirus (SAV) and Symantec Endpoint Protection (SEP) products.

Impacted operating systems: all Windows operating systems.

Reported issues include but are not limited to:

  • Client communication problems
  • Symantec Endpoint Protection Manager (SEPM) communication issues
  • Content update failures
  • Policy update failures
  • Client data does not get entered into the database
  • Replication failures

This is of grave concern as many enterprise datacenters and VDI deployments are going to be impacted. My personal take is that someone jumped the gun in publishing a document with mysteriously vague detail, but we’ll have to wait and see what shakes out.

I hope that VMware can approach Symantec to get this resolved ASAP. It’s in everyone’s best interest.

Thank you vinternals for the heads up on this.

Update: Symantec has updated their support document, stating that the problems a few customers have seen may or may not be related to VMware and VMotion. Until further notice, Symantec is supporting their products on VMware with VMotion. If you experience an issue with Symantec products, please contact Symantec technical support. This confirms my opinion that someone at Symantec jumped the gun by issuing the 10/15/08 support document declaring VMware and VMotion unsupported. Everyone can breathe a sigh of relief now. Or at least I can.

Make VirtualCenter highly available with VMware Virtual Infrastructure

November 17th, 2008

A few days ago I posted some information on how to make VirtualCenter highly available with Microsoft Cluster Services.

Monday Night Football kickoff is coming up but I wanted to follow up quickly with another option (as suggested by Lane Leverett): Deploy the VirtualCenter Management Server (VCMS) on a Windows VM hosted on a VMware Virtual Infrastructure cluster. Why is this a good option? Here are a few reasons:

  1. It’s fully supported by VMware.
  2. You probably already have a VI cluster in your environment you can leverage. Hit the ground running without spending the time to set up MSCS.
  3. Removing MSCS removes a 3rd party infrastructure complexity and dependency which requires an advanced skill set to support.
  4. Removing MSCS removes at least one Windows Server license cost and also removes the need for the more expensive Windows Enterprise Server licensing and the special hardware needs required by MSCS.
  5. Green factor: Let VCMS leverage the use of VMware Distributed Power Management (DPM).

How does it work? It’s pretty simple. A virtualized VCMS shares the same advantages any other VM inherently has when running on a VMware cluster:

  1. Resource balancing of the four food groups (vProcessor, vRAM, vDisk, and vNIC) through VMware Distributed Resource Scheduler (DRS) technology
  2. Maximum uptime and quick recovery via VMware High Availability (HA) in the event of a VI host failure or isolation condition (yes, HA will still work if the VCMS is down. HA is a VI host agent)
  3. Maximum uptime and quick recovery via VMware High Availability (HA) in the event of a VMware Tools heartbeat failure (i.e., the guest OS croaks)
  4. Ability to perform host maintenance without downtime of the VCMS

A few things to watch out for (I’ve been there and done that, more than once):

  1. If you’re going to virtualize the VCMS, be sure you do so on a cluster with the necessary licensed options to support the benefits I outlined above (DRS, HA, etc.). This means VI Enterprise licensing is required (see the licensing/pricing chart on page 4 of this document). I don’t want to hide the fact that a premium is paid for VI Enterprise licensing, but as I pointed out above, if you’ve already paid for it, the bolt-ons are unlimited use, so get more use out of them.
  2. If your VCMS (and Update Manager) database is located on the VCMS, be sure to size your virtual hardware appropriately. Don’t go overboard, though. From a guest OS perspective, it’s easier to grant additional virtual resources from the four food groups than it is to retract them.
  3. If you have a power outage and your entire cluster goes down (and your VCMS along with it), it can be difficult to get things back on their feet while you don’t have the use of the VCMS, particularly if you’ve lost the use of other virtualized infrastructure components such as Microsoft Active Directory. Initially it’s going to be command line city, so brush up on your CLI. It really all depends on how bad the situation is once you get the VI hosts back up. One example I ran into: host A wouldn’t come back up, and host B wasn’t the registered owner of the VM I needed to bring up. This required running the vmware-cmd command to register the VM and bring it up on host B.
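
That last recovery scenario can be sketched with the ESX 3.x service console’s vmware-cmd tool. The datastore and VM names below are hypothetical examples, not from my environment:

```shell
# On the surviving host's service console, list the VMs currently
# registered on this host to confirm the VCMS VM is missing.
vmware-cmd -l

# Register the orphaned VM's .vmx file from shared storage, then power it on.
vmware-cmd -s register /vmfs/volumes/datastore1/vcms01/vcms01.vmx
vmware-cmd /vmfs/volumes/datastore1/vcms01/vcms01.vmx start
```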

Well, I missed the first few minutes of Monday Night Football, but everyone who reads (tolerates) my ramblings is totally worth it.

Go forth and virtualize!

Make VirtualCenter highly available with Microsoft Cluster Services

November 12th, 2008

When VirtualCenter was first introduced, many could make the argument that VC was simply a utility class service that provided centralized management for a virtual infrastructure. If the VirtualCenter Management Server (VCMS) was rebooted in the middle of the day or if the VC services were stopped for some reason, it wasn’t too big of a deal, provided the outage didn’t interrupt a key task such as a VMotion migration or a cloning process.

Times are changing. VirtualCenter is becoming a fairly critical component in the VI and high availability of VC and the VCMS is becoming increasingly important. Several factors have contributed to this evolution. To identify just a few:

  • Virtual infrastructures are growing rapidly in the datacenter. The need for a functioning centralized management platform increases exponentially.
  • Increased and more granular VC alerting capabilities are relied upon to keep administrators updated with timely information about the load and health of the VI.
  • The introduction of more granular role-based security extended Virtual Infrastructure Client or Web Access deployment to more users and groups in the organization, increasing dependence on VC and the visibility of its downtime.
  • The exposure of the VC API/SDK encouraged many new applications and tools to be written against VC. I’m talking about tools that provide important functions such as backup, reporting, automation, replication, capacity analysis, sVMotion, etc. Without VC running, these tools won’t work.
  • The introduction of plugins. Plugins are going to be the preferred bolt-on for most administrators because they snap into a unified management interface. The dependency on VC is obvious.
  • The introduction of new features native to VC functionality. DRS, HA, DPM, VCB, Update Manager, Consolidation, snapshot manager, FT, SRM, etc. Like the bullet above, all of these features require a healthy functioning VCMS.
  • The Virtual Datacenter OS was announced at VMworld 2008 and is comprised of the following essential components: Application vServices, Infrastructure vServices, Cloud vServices, and Management vServices. I don’t know about you, but to me those all sound like services that would need to be highly available. While it is not yet known exactly how existing VI components transform into the VDC-OS, we know the components are going to be integral to VMware’s vision and commitment to cloud computing which needs to be highly available, if not continuously available.

VirtualCenter has evolved from a cornerstone of ESX host management into the entire foundation on which the VI will be built. Try to imagine the impact on your environment if and when VirtualCenter is down, now and in the future. Dependencies you didn’t realize may have waltzed in.

A single VCMS design may be what you’re used to, but fortunately there exists a method by which VC may be made highly available on a multi-node Microsoft Cluster. This document, written by none other than my VI classroom training instructor Chris Skinner, explains how to cluster VirtualCenter 2.5.

If you’re on VirtualCenter 2.0.x, follow this version of the document instead.

Update:  Follow up post here.

VMware employee confirms DPM support in next release

November 10th, 2008

There’s been some recent excitement circulating the internet around a VMware Virtual Infrastructure feature called Distributed Power Management (DPM).  An impactful video demonstration of DPM was put together by VMware engineers two months ago and released on YouTube.  I’m sure you’ve seen it on the other blogs by now, but I’ve provided a copy below in case you have not.

DPM is currently in experimental status, however, Richard Garsthagen, a Senior Evangelist for VMware in EMEA (and a great conversationalist if you ever get the chance to have dinner with him), tells us in his blog that DPM “will be fully supported with the next release.”  What exactly does “next release” mean?  That’s a good question, but we can safely assume one of two things:  Update 4, or the next generation of Virtual Infrastructure which many, including myself, are unofficially calling VI4.

This is great news because DPM support is finally going to unlock additional potential for savings in the datacenter:

  • Kilowatt consumption for powering the VI goes down
  • Kilowatt consumption for cooling the VI goes down
  • Consolidation of VMs offers increased opportunity for VMware Content Based Page Sharing resulting in more effective use of physical RAM and increased consolidation ratios
  • Saving more of the environment means Green rating goes up (take a look at this great green calculator)

In the midst of all this excitement, we must not lose sight of the fact that a properly architected cluster should support a minimum of N+1 capacity.  The goal should not be to simply shut down as many hosts as possible in the name of efficiency and saving the environment.  This mindset will compromise uptime of VMs in the event of a host failure.  Leave enough room in the cluster for HA to perform its responsibility of powering on VMs on another available host.
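
To put some rough numbers behind that N+1 point, here’s a back-of-the-envelope sketch (the host count, capacity, and load figures are made-up illustrative values, and real DRS/DPM admission control is far more nuanced than this):

```python
import math

def hosts_dpm_can_power_off(total_hosts, host_capacity_ghz, vm_load_ghz, ha_spare_hosts=1):
    """Estimate how many hosts DPM could power off while the remaining
    hosts still carry the aggregate VM load plus N+ha_spare_hosts of
    failover headroom for HA. Purely illustrative arithmetic."""
    # Hosts needed just to carry the current VM demand.
    hosts_needed_for_load = math.ceil(vm_load_ghz / host_capacity_ghz)
    # Keep spare hosts' worth of capacity so HA can restart VMs after a host failure.
    hosts_required = hosts_needed_for_load + ha_spare_hosts
    return max(0, total_hosts - hosts_required)

# Example: 8 hosts of 24 GHz each, 90 GHz of aggregate VM demand, N+1 policy.
# 90 / 24 rounds up to 4 hosts for load, plus 1 for HA = 5 required,
# so at most 3 hosts are candidates for DPM power-off.
print(hosts_dpm_can_power_off(8, 24, 90))  # -> 3
```

The point of the `ha_spare_hosts` term is exactly the caution above: an aggressive policy that powers off every host not strictly needed for load would leave HA nowhere to restart VMs after a failure.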