Posts Tagged ‘VMware’

Disconnected VM templates

April 19th, 2009

I woke up this morning to two failed Veeam backups in my email inbox. The two VMs were both templates I had recently created.

I launched the Virtual Infrastructure Client to see if the VMs had an open snapshot, which can cause Veeam backup jobs to fail (it’s a VMware issue, not really a Veeam issue). No snapshots, but the problem was immediately obvious: the VMs were shown as “disconnected”. Typically a disconnected VM ties back to a disconnected host. Not in this case. A quick look at the ESX host that owned the VMs showed it was connected, online, and running powered-on VMs.

New territory. How to fix? Right clicking the VM showed no option to re-“Connect” it, and no option to remove it from inventory and re-register it. Hmm.

Solution: I placed the ESX host into maintenance mode, which migrated the running VMs off to a different host in the cluster. The only two VMs left were the two disconnected templates. I then right clicked the host and disconnected it. Immediately afterward, I right clicked the host and reconnected it. Both the host and the VM templates changed from a disconnected to a connected state. Of course, the final step was to take the host out of maintenance mode.

Update 11/12/10: Following is an entry from the vCalendar with a few more options to resolve this issue:

Got a disconnected template? Several solutions exist to resolve the problem:
- Disconnect and reconnect the ESX host which owns the template
- Restart the mgmt-vmware service in the ESX Service Console
- Restart the vCenter service

Cloud Camp Minneapolis

April 18th, 2009

Today I attended Cloud Camp Minneapolis from 9:00am to 3:30pm on the University of Minnesota East Bank campus. I think the event was a large success. Registration was SOLD OUT and it looked like there were somewhere between 100 and 150 attendees. I think it speaks well for the technology and the event organization when that many people will give up the majority of an absolutely gorgeous Saturday.

The event started with a continental-style breakfast where people mingled and socialized for an hour before the speaking agenda began. I ran into a few familiar faces and also met some new people. The coffee was strong and the bagels looked good.

After breakfast, we were ushered into the main auditorium. George Reese (pictured top left), cloud book author and event organizer from enStratus Networks, kicked things off by briefly introducing himself as well as the premier sponsors: VISI, enStratus, Microsoft, Mosso (The Rackspace Cloud), Aserver, and RightScale.

Shortly after, the Lightning Talks began. This is where each premier event sponsor was allowed just a few minutes to deliver their cloud speech, along with a little product marketing, while literally whipping through their slide deck. When I say just a few minutes, I mean it: I think all five vendors got up and delivered their presentations in a total of 15 minutes. If you’ve ever watched the television program “Mad Money”, it was like cloud talk and offerings during the lightning round. It was both an interesting and refreshing approach.

Next we had a lengthy group discussion on hot cloud topics which were in turn used to dynamically develop the afternoon breakout session topics. We touched on things such as security, mobility, legal and liability implications, small business, etc.

We broke for lunch where I had discussions with a few locals on phone, cable, and internet service providers (ISPs) in the state of Minnesota.

After lunch the large group broke up into the smaller breakout sessions mentioned previously. I attended two sessions: Mobility and SMB.

The mobility session had a good mix of service providers, application developers, and CEOs. The discussion jumped from topic to topic as people offered up their problems, questions, and philosophies orbiting cloud mobility and isolation. Not to my surprise, there was very little in the way of answers or solutions. That’s ok. I wasn’t expecting any. Frankly, I found comfort among large numbers of industry experts who, like me, didn’t have the answers and were just as perplexed about figuring out how this is all going to work out. Developers seemed the most concerned about the application layer (Applications as a Service), as discussions touched on APIs and applications in the cloud and their impact on development techniques as they apply to mobility. I got a sense of less concern over platform in the cloud, also known as Platform as a Service. One developer talked about his current experience using Amazon’s Elastic Compute Cloud (EC2). His direct benefits: he owns and supports nothing, and he pays only for what he uses. When he’s not using it, there’s essentially little or no cost. When he’s done, I imagine he saves what he needs and the rest is destroyed. There is no traditional decommissioning and writing off of assets. There is no hardware that needs to be disposed of properly.

The SMB session was another good mix of attendees, nearly the same as above but with more of a concentration on small business, as well as micro and nano business (phrases coined during the session for entities smaller than a small business). The general idea of this session was whether and how small businesses can benefit from cloud offerings. Talks began with the various ways to define a small business: by revenue? By headcount? By technology? There are examples of large manufacturing plants that have small technology footprints. Likewise, small operations can generate large amounts of revenue with the assistance of technology. Group members proposed that there exist many inefficiencies in small business, particularly in technology and infrastructure. This is where renting platforms, applications, services, and infrastructure from cloud providers could make sense for SMBs. Wouldn’t small businesses rather focus their time and energy on developing their products and services instead of being tied down by the technology they need to run their business? From a customer or partner credibility standpoint, does a business look more professional and better equipped running in a certified cloud datacenter, or a broom closet? What impacts will regulation and legislation have? Decisions about how to securely store and deliver customer information in a small business shouldn’t be taken lightly. There are consequences that could easily break the trust and financial backing that a small business or startup’s survivability relies on.

In all, I had a great time at Cloud Camp Minneapolis. If you had asked me six months ago what I knew about the cloud, I would have had nothing to say other than “I don’t get it”. I’ve gradually been warming up to the concept, and today Cloud Camp Minneapolis went a long way in delivering my first feeling of personal and professional accomplishment, in that I think I’m actually caught up and on the same page as many of my peers and the experts in the cloud community. However, I have to be honest in saying that I walked away somewhat disappointed and in disbelief that virtualization discussion was nearly non-existent. The last two VMworld virtualization conferences I attended, in Las Vegas and Cannes, were strongly focused on cloud computing and VMware’s Virtual Datacenter OS (VDC-OS). There was maybe one mention of VMware in one sentence and a brief reference to VDI. Microsoft was on site talking about Azure, with no mention of Hyper-V. No mention of XenServer, Virtual Iron, etc. I’ve been led to understand that virtualization is a key component of cloud infrastructure, applications, and mobility. I anticipated much of today’s discussions would revolve around virtualization. I couldn’t have been more wrong. After the event finished, I sent out a tweet re: no virtualization talk today. I received a response stating that virtualization is merely a widget, one small component among many in the cloud, and not really as integral as I’m being told by Paul Maritz of VMware. Maybe this is a case of Jason drinking too much VMware Kool-Aid for too long. The answers about the cloud are coming, slowly but surely. Hopefully Paul is right and VMware does have a significant role to play in their version of global cloud computing. I’d like to see it, realize it, and experience it.

Tolly Group releases another Citrix vs. VMware comparison

April 15th, 2009

A few months ago, The Tolly Group released a report comparing Citrix and VMware VDI solutions.

They’re at it again. Today, The Tolly Group released another comparison. Today’s report compares Citrix XenServer 5 and VMware ESX 3.5.0 Update 3 with Citrix XenApp as the workload.

Citrix Systems commissioned Tolly to evaluate the performance of Citrix XenApp when running on Citrix XenServer 5 and compare that with XenApp running on VMware ESX 3.5u3.

Testing focused on system scalability and user quality-of-experience. This test report was approved for publication by VMware. The VMware End User License Agreement (EULA) requires such approval.

The testing was conducted in accordance with Tolly Common RFP #1101, Virtual Server Performance.

Summary of Results:

* Citrix XenServer 5 outperforms VMware ESX 3.5 by 41% in user scalability tests.
* XenApp, running on XenServer, retains a consistent user experience as load is increased to 164 users.
* Virtualizing 32-bit XenApp gives IT administrators a viable approach to increasing total user density on physical servers, without the need to re-certify their existing applications and drivers for a 64-bit platform.
* Consolidating XenApp farms on XenServer results in data center reliability benefits and cost savings.

Click here to download the report. You will need to register for the report download.

New ESX(i) 3.5 security patch released; scenarios and installation notes

April 11th, 2009

On Friday April 10th, VMware released two patches, one for ESX and one for ESXi. Both address the same issue:

A critical vulnerability in the virtual machine display function might allow a guest operating system to run code on the host. The Common Vulnerabilities and Exposures Project has assigned the name CVE-2009-1244 to this issue.

Hackers must love vulnerabilities like this because they can get a lot of mileage out of essentially a single attack. The ability to execute code on an ESX host can impact all running VMs on that host.

Although proper virtualization promises isolation, the reality is that no hardware or software vendor is perfect and from time to time we’re going to see issues like this. Products are under constant attack from hackers (both good and bad) to find exploits. In virtualized environments, it’s important to remember that guest VMs and guest operating systems are no different than their physical counterparts in that they need to be properly protected from the network. That means adequate virus protection, spyware protection, firewalls, encryption, packet filtering, etc.

This vulnerability in VMware ESX and ESXi really requires a two-stage attack. In order to compromise the ESX or ESXi host, a guest VM must first be vulnerable to compromise on the network to provide the entry point. Once the guest VM is compromised, the next step is to get from the guest VM to the ESX(i) host. Hosts without the patch are vulnerable to this second stage, which we know from the description above allows arbitrary code to be executed on the host. If the host is patched, we maintain our guest isolation and the attack stops at the VM level. Unfortunately, the OS running in the guest VM is still compromised, again highlighting the need for adequate protection of the operating system and applications running in each VM.

The bottom line is this is an important update for your infrastructure. If your ESX or ESXi hosts are vulnerable, you’ll want to get this one tested and implemented as soon as possible.

I installed the updates today in the lab and discovered something interesting that is actually outlined in both of the KB articles above:

  • The ESXi version of the update requires a reboot. Using Update Manager, the patch process goes like this: Remediate -> Maintenance Mode -> VMotion VMs off -> Patch -> Reboot -> Exit Maintenance Mode. The duration of installation of the patch until exiting maintenance mode (including the reboot in between) took 12 minutes.
  • The ESX version of the update does not require a reboot. Using Update Manager, the patch process goes like this: Remediate -> Maintenance Mode -> VMotion VMs off -> Patch -> Exit Maintenance Mode. The duration of installation of the patch until exiting maintenance mode (with no reboot in between) took 1.5 minutes.

Given host reboot times, patching ESX hosts goes much more quickly than patching ESXi hosts. Reboot times on HP Proliant servers aren’t too bad, but I’ve been working with some powerful IBM servers lately and their reboot times are significantly longer than HP’s. Hopefully we’re not rebooting ESX hosts on a regular basis, so with that in mind reboot times aren’t a huge concern, but if you’ve got a large environment with a lot of hosts requiring reboots, the reboot times are going to be cumulative in most cases. Consider my environment above. A 6 node ESXi cluster is going to take 72 minutes to patch, not including VMotions. A 6 node ESX cluster is going to take 9 minutes to patch, not including VMotions. This may be something to really think about when weighing the decision of ESX versus ESXi for your environment.
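For what it’s worth, the cumulative math above can be sketched in a few lines. This is a rough model only: it assumes Update Manager remediates hosts serially and ignores VMotion time, and the per-host durations are just my lab measurements:

```python
# Rough estimate of cluster patch time, assuming serial remediation and
# ignoring VMotion time. Per-host durations are lab measurements, not
# guarantees; substitute your own.

def cluster_patch_minutes(hosts, minutes_per_host):
    """Cumulative patch time when hosts are remediated one at a time."""
    return hosts * minutes_per_host

print(cluster_patch_minutes(6, 12.0))  # 6-node ESXi cluster: 72.0 minutes
print(cluster_patch_minutes(6, 1.5))   # 6-node ESX cluster: 9.0 minutes
```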

Update: One more critical item to note is that although the ESX version of the patch requires no reboot, it does require three other patches to be installed first, at least one of which requires a reboot. If you already meet those prerequisites, no reboot will be required for ESX to install the new patch.

In closing, while we are on the subject of performing a lot of VMotions, take a look at a guest blog post from Simon Long called VMotion Performance. Simon shows us how to modify VirtualCenter (vCenter Server) to allow more simultaneous VMotions which will significantly cut down the amount of time spent patching ESX hosts in a cluster.
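To illustrate why more simultaneous VMotions matter for patch windows, here is a rough sketch. The VM count and per-migration time are hypothetical numbers of my choosing, and real VMotion scheduling is more dynamic than fixed batches:

```python
import math

# Hypothetical model: evacuating a host in fixed-size batches of VMotions.
# The VM count and per-migration time are made-up numbers for illustration.

def evacuation_minutes(vms, minutes_per_vmotion, concurrent):
    """Approximate evacuation time with a fixed number of concurrent VMotions."""
    return math.ceil(vms / concurrent) * minutes_per_vmotion

print(evacuation_minutes(16, 2.0, 2))  # 2 at a time: 16.0 minutes
print(evacuation_minutes(16, 2.0, 8))  # 8 at a time: 4.0 minutes
```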

VMware documentation library updates

April 2nd, 2009

Quick note:  In case you missed it (like I did), VMware has updated most of their VMware Infrastructure 3 documentation.  If you’re a documentation junkie (like me), you’ll want to re-download all of VMware’s VI3 documentation.  About 75% of the documents have new file names as well.

Setup for Microsoft cluster service

April 1st, 2009

Setting up a Microsoft cluster on VMware used to be a fairly straightforward task with a very minimal set of considerations. Over time, the support documentation has evolved into something that looks like it was written by the U.S. Internal Revenue Service. I was an accountant in my previous life and I remember Alternative Minimum Tax code that was easier to follow than what we have today: a 50-page PDF representing VMware’s requirements for MSCS support. Even with that, I’m not sure Microsoft supports MSCS on VMware. The Microsoft SVVP program supports explicit versions and configurations of Windows 2000/2003/2008 on ESX 3.5 Update 2 and 3, and ESXi 3.5 Update 3, but no mention is made regarding clustering. I could not find a definitive answer on the Microsoft SVVP program site other than the following disclaimer:

For more information about Microsoft’s policies for supporting software running in non-Microsoft hardware virtualization software, as well as Microsoft’s support policies for its applications running in virtual environments, refer to the links provided on the SVVP site.

At any rate, here are some highlights of MSCS setup on VMware Virtual Infrastructure, and by the way, all of this information is fair game for the VMware VCP exam.

Prerequisites for Cluster in a Box

To set up a cluster in a box, you must have:

* An ESX Server host, one of the following:
  * ESX Server 3 – An ESX Server host with a physical network adapter for the service console. If the clustered virtual machines need to connect with external hosts, an additional network adapter is highly recommended.
  * ESX Server 3i – An ESX Server host with a physical network adapter for the VMkernel. If the clustered virtual machines need to connect with external hosts, a separate network adapter is recommended.
* A local SCSI controller. If you plan to use a VMFS volume that exists on a SAN, you need an FC HBA (QLogic or Emulex).

You can set up shared storage for a cluster in a box either by using a virtual disk or by using a remote raw device mapping (RDM) LUN in virtual compatibility mode (non-pass-through RDM).

When you set up the virtual machine, you need to configure:

* Two virtual network adapters.
* A hard disk that is shared between the two virtual machines (quorum disk).
* Optionally, additional hard disks for data that are shared between the two virtual machines if your setup requires it. When you create hard disks, as described in this document, the system creates the associated virtual SCSI controllers.

Prerequisites for Clustering Across Boxes

The prerequisites for clustering across boxes are similar to those for cluster in a box. You must have:

* An ESX Server host. VMware recommends three network adapters per host for public network connections. The minimum configuration is:
  * ESX Server 3 – An ESX Server host configured with at least two physical network adapters dedicated to the cluster, one for the public and one for the private network, and one network adapter dedicated to the service console.
  * ESX Server 3i – An ESX Server host configured with at least two physical network adapters dedicated to the cluster, one for the public and one for the private network, and one network adapter dedicated to the VMkernel.
* Shared storage must be on an FC SAN.
* You must use an RDM in physical or virtual compatibility mode (pass-through RDM or non-pass-through RDM). You cannot use virtual disks for shared storage.

Prerequisites for Standby Host Clustering

The prerequisites for standby host clustering are similar to those for clustering across boxes. You must have:

* An ESX Server host. VMware recommends three network adapters per host for public network connections. The minimum configuration is:
  * ESX Server 3 – An ESX Server host configured with at least two physical network adapters dedicated to the cluster, one for the public and one for the private network, and one network adapter dedicated to the service console.
  * ESX Server 3i – An ESX Server host configured with at least two physical network adapters dedicated to the cluster, one for the public and one for the private network, and one network adapter dedicated to the VMkernel.
* You must use RDMs in physical compatibility mode (pass-through RDM). You cannot use virtual disks or RDMs in virtual compatibility mode (non-pass-through RDM) for shared storage.
* You cannot have multiple paths from the ESX Server host to the storage.
* Running third-party multipathing software is not supported. Because of this limitation, VMware strongly recommends that there be only a single physical path from the native Windows host to the storage array in a standby-host clustering configuration with a native Windows host. The ESX Server host automatically uses native ESX Server multipathing, which can result in multiple paths to shared storage.
* Use the STORport Miniport driver for the FC HBA (QLogic or Emulex) in the physical Windows machine.

Shared storage option                              Cluster in a Box   Cluster Across Boxes   Standby Host Clustering
Virtual disks                                      Yes                No                     No
Pass-through RDM (physical compatibility mode)     No                 Yes                    Yes
Non-pass-through RDM (virtual compatibility mode)  Yes                Yes                    No
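If it helps, the shared-storage support matrix above can be expressed as a simple lookup table. This is just my own encoding of the Yes/No values, with key names of my choosing:

```python
# Shared-storage support matrix for MSCS on VI3, encoded as a dictionary.
# The True/False values mirror the Yes/No matrix above; the key names are mine.

SUPPORT = {
    "virtual disks":        {"in_a_box": True,  "across_boxes": False, "standby_host": False},
    "pass_through_rdm":     {"in_a_box": False, "across_boxes": True,  "standby_host": True},
    "non_pass_through_rdm": {"in_a_box": True,  "across_boxes": True,  "standby_host": False},
}

def storage_supported(storage, scenario):
    """Return True if the storage type is supported for the given cluster scenario."""
    return SUPPORT[storage][scenario]

print(storage_supported("virtual disks", "in_a_box"))       # True
print(storage_supported("virtual disks", "across_boxes"))   # False
```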

Caveats, Restrictions, and Recommendations

This section summarizes caveats, restrictions, and recommendations for using MSCS in a VMware Infrastructure environment.

* VMware only supports third-party cluster software that is specifically listed as supported in the hardware compatibility guides. For the latest updates to VMware support for Microsoft operating system versions for MSCS, or for any other hardware-specific support information, see the Storage/SAN Compatibility Guide for ESX Server 3.5 and ESX Server 3i.
* Each virtual machine has five PCI slots available by default. A cluster uses four of these slots (two network adapters and two SCSI host bus adapters), leaving one PCI slot for a third network adapter (or other device), if needed.
* VMware virtual machines currently emulate only SCSI-2 reservations and do not support applications using SCSI-3 persistent reservations.
* Use the LSILogic virtual SCSI adapter.
* Use Windows Server 2003 SP2 (32 bit or 64 bit) or Windows 2000 Server SP4. VMware recommends Windows Server 2003.
* Use two-node clustering.
* Clustering is not supported on iSCSI or NFS disks.
* NIC teaming is not supported with clustering.
* The boot disk of the ESX Server host should be on local storage.
* Mixed HBA environments (QLogic and Emulex) on the same host are not supported.
* Mixed environments using both ESX Server 2.5 and ESX Server 3.x are not supported.
* Clustered virtual machines cannot be part of VMware clusters (DRS or HA).
* You cannot use migration with VMotion on virtual machines that run cluster software.
* Set the I/O time-out to 60 seconds or more by modifying the disk time-out value in the guest’s Windows registry. The system might reset this I/O time-out value if you recreate a cluster. You must reset the value in that case.
* Use the eagerzeroedthick format when you create disks for clustered virtual machines. By default, the VI Client or vmkfstools create disks in zeroedthick format. You can convert a disk to eagerzeroedthick format by importing, cloning, or inflating the disk. Disks deployed from a template are also in eagerzeroedthick format.
* Add disks before networking, as explained in the VMware Knowledge Base.

VMware raises the bar on CPU consolidation ratio support

April 1st, 2009

VMware has updated its Configuration Maximums support document (one of my favorite documents in the VMware document library). Most notable is the increase in the number of supported virtual CPUs per core:

  • Previously, ESX and ESXi 3.5 Update 2 and earlier supported 8 virtual CPUs per core and, in special cases, 11 virtual CPUs per core if the workloads were VDI
  • The new version of the document shows ESX and ESXi 3.5 Update 3 and later support 20 virtual CPUs per core across the board – with no special circumstances for VDI workloads

One thing to note, however, is that the total number of virtual CPUs per host and the total number of virtual machines per host did not change. They remain at 192 and 170, respectively.

So we’re not increasing the total number of VMs an ESX or ESXi host will support. VMware is saying it can support the same number of VMs and vCPUs on fewer physical CPU cores. This may be due to more powerful CPUs entering the market (such as the Intel Nehalem). Or maybe VMware is addressing customers who have traditionally light CPU workloads and need to reach higher CPU consolidation ratios. Or maybe it has something to do with blade servers or Cisco’s UCS (Project California). At any rate, VMware is encouraging us to virtualize more with less. Maybe it’s an economy thing. Who knows. It’s good for us though, since VMware still licenses by the socket and not the core. We can power 160 VMs with an 8 core box (dual quads or quad duals).
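The arithmetic behind that 160-VM figure is simple; here is a quick sketch using the limits from the Configuration Maximums document discussed above, assuming one vCPU per VM:

```python
# Max single-vCPU VMs per host under the updated VI3 limits:
# 20 vCPUs per core, capped at 192 vCPUs and 170 VMs per host.

def max_single_vcpu_vms(cores, vcpus_per_core=20, vcpu_cap=192, vm_cap=170):
    """Single-vCPU VMs that fit under the per-core and per-host caps."""
    return min(cores * vcpus_per_core, vcpu_cap, vm_cap)

print(max_single_vcpu_vms(8))   # 8-core box (dual quads or quad duals): 160
print(max_single_vcpu_vms(16))  # 16 cores: capped by the 170 VMs-per-host limit
```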

While we’re on the subject, is anyone coming close to 170 VMs per host? What’s the most impressive consolidation ratio you’ve seen? I’d like to hear about it. As in the Citrix world, I don’t think it’s a matter of “Do we have the hardware today to handle it?” (the answer is yes). It’s more about the exposure of 170 VMs on a single host, and whether we want to go down that road.