Archive for June, 2009

vSphere upgrade experience, day 1

June 24th, 2009

A few nights ago, I began the VI3 to vSphere upgrade in my home lab and I thought I would share a few experiences. This day 1 post will cover vSphere management tools (vCenter, Update Manager, etc.) and not the hypervisor itself (ESX or ESXi).

My VI3 environment has been through some wear and tear over the years, including a few unexpected power outages which could have caused corruption on the vCenter server or the back end databases. Although the part of me which desires peace of mind wanted to start “clean” with an empty database, I knew I had to go the upgrade route and maintain my existing data because, frankly, this is the method I will likely be using for most vSphere deployments.

I run a lot of what I would consider “production workloads” on my home lab, including domain controllers, messaging servers for registered domains, web servers, Citrix servers, this blog, etc. Failure is an option as well as a good learning experience (after all, this is a lab); however, a long-term outage of my production workloads is not an option. My vCenter server is virtualized on VMware Server 2.x, so I started out by shutting down its OS and taking a VMware snapshot. After the vCenter shutdown, I also captured a full backup of my SQL Server databases (both the vCenter database and the Update Manager database). I now have a solid backout plan which does not rely on crash-consistent data.
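Since the backout plan hinges on those database backups, it can be worth scripting them so they are repeatable before every maintenance window. Below is a minimal sketch that builds the T-SQL full-backup statements you would hand to sqlcmd; the database names (VCDB, VUMDB) and backup path are assumptions for illustration, not taken from my environment.

```python
# Sketch: build BACKUP DATABASE statements for the vCenter and
# Update Manager databases. Database names and paths are hypothetical.
def backup_statements(databases, backup_dir):
    """Return one T-SQL full-backup statement per database."""
    stmts = []
    for db in databases:
        stmts.append(
            f"BACKUP DATABASE [{db}] "
            f"TO DISK = N'{backup_dir}\\{db}.bak' "
            f"WITH INIT, NAME = N'{db} pre-vSphere-upgrade full backup';"
        )
    return stmts

for stmt in backup_statements(["VCDB", "VUMDB"], r"E:\SQLBackups"):
    print(stmt)
```

On the SQL server itself you would run each generated statement with something like `sqlcmd -Q "<statement>"` while the vCenter services are stopped.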

I powered the vCenter VM back up, then copied over the vCenter 4.0 .zip package and extracted it into a temp directory on the vCenter server. This was my first mistake. Not thinking clearly about my snapshotted VM, I had just inflated the VM’s delta file by a few GB. What I should have done was perform the vCenter copy and extraction before the snapshot. This is not the end of the world; after the installation of vCenter 4 and Update Manager, the snapshot would have grown by several hundred MB if not a few GB anyway. The .zip file and extracted contents were just a lot of extra non-contiguous I/O baggage.

I’m going to perform an upgrade of the databases, but I don’t care to actually “upgrade” vCenter and all of its components from 2.5 Update 4 to version 4.0. I’ve never had good luck with vCenter upgrades. My method, therefore, is a complete uninstall of vCenter and all components, then a new installation of vCenter which attaches to the existing database, which will in turn be upgraded. During the uninstall of vCenter, I typically find that the uninstall routine leaves bits and pieces behind in folder structures as well as the registry. I manually deleted the remaining pieces and rebooted the vCenter server for good measure and a clean start for the vCenter 4.0 installation. In retrospect, the manual deletion of leftover files and the uninstall of the vCenter license server will turn out to be my second and third mistakes, which I’ll talk about shortly.

After the reboot, I began the vCenter 4.0 installation. I also made sure my vCenter SQL account had DBO rights to the MSDB database, the vCenter database, and the Update Manager database; this is a new requirement during the installation of vSphere. I wasn’t too far into the installation when it failed and the installation routine rolled back. The error message was rather cryptic and I’m sorry I don’t have a screenshot, but it had to do with the installer’s inability to properly install and configure the local ADAM instance, which I believe is used for vCenter federation (linked vCenter servers). I quickly found a long thread on the VMTN forums which pointed me to the solution in VMware’s knowledge base. KB1010938 talks about NETWORK SERVICE NTFS directory permissions (READ) that are required on the root of the drive where vCenter is being installed. A quick check showed I lacked the necessary permissions. I resolved this quickly and re-ran the installation.

During the re-installation, I ran into my second problem (self-inflicted). Way back when, I had set up SSL certificates for VI3. The certificate files are required to be present during the database upgrade because the certs are tattooed to the database as well. During my “cleanup” process I spoke about above, I had deleted the SSL folder containing the certificate files VMware had left behind. It turns out they were left behind by design. Thankfully, when I performed the cleanup, all files and folders went to the recycle bin and I was able to quickly retrieve them. Without the certificate files, I would have been looking at a new database installation, which would have deleted all vCenter data including performance history.

After restoring the certificate files, I reran the installation a third time. The installation of vCenter Server and all of its components was successful. I was able to open the vSphere client and connect to the vCenter server. My hosts, VMs, and all data were present. All looked to be successful until I tried a VMotion. The ESX hosts which were still on VI3 were no longer licensed. Refer to my comment further up about uninstalling the license server being a mistake. vCenter 4.0 license keys do not license VI3 legacy hosts; a VI3 license server or host-based license keys must be plugged into vCenter 4.0 in order to properly license VI3 legacy hosts. I resolved the issue by re-installing the VI3 license server on some junk VM in the interim and then plugging the license server name into vCenter 4.0’s licensing configuration. Voila! The ESX3 and ESXi3 hosts are now licensed and everything is working properly. After feeling confident in the installation, I removed the vCenter snapshot.

This was enough change for one night. The ESX host upgrades (rebuilds) will come a few days later. If I uncover any gotchas during host installations, I’ll be sure to share, but I expect those to be fairly uneventful. I’ve installed a lot of ESX4/ESXi4 hosts during the vSphere beta and it’s straightforward, not unlike ESX3/ESXi3 installations. Most of the ~150 changes in vSphere will be evident in vCenter and the various components. There are a few enhancements in the hypervisor installation but nothing that hasn’t already been pointed out in various other blogs and installation videos.

In search of an application migration solution

June 22nd, 2009

I’m reaching out to software vendors and/or readers who might be aware of an end to end application migration solution or an application migration story outlining solutions, challenges, successes, etc. The solution should be software driven. The solution will seamlessly migrate applications, services, and daemons from one platform (Windows or Linux) to another platform in the same family. As an example, the solution would migrate applications and/or services from Windows 2000 Server to Windows Server 2003. On the Linux side, another example would be migrating applications and/or daemons from SLES 9.x to SLES 10.x.

As I stated, the solution would need to be as seamless and end to end as possible. Application and platform dependencies would need to be taken into consideration, and addressed or mitigated. For example, service packs, .DLLs, .NET framework, PERL, etc. on the Windows side. Kernel versioning, compiling, PERL, etc. on the Linux side.

I’m not exactly looking for professional services, however, I would be interested in hearing about your process and the tools you use. The solution need not be virtualization specific. The right tool would work with IBM, HP, Dell, whitebox, or virtual hardware.

If you are a vendor with a solution, or a reader with some application migration background, please drop me a line at or feel free to reply to this blog entry with a comment. Feedback, small or large, is welcome as usual.

Thank you in advance.

Virtual Infrastructure Client and the Windows Registry

June 17th, 2009

Hello gang. I apologize for the slowdown in blog post frequency, but I’ve been _insert lame excuse everyone has heard before here_. Truth be told, I am busy working on a project which I hope to have available to the virtualization community on or before VMworld 2009.

This post is a no brainer, maybe you’ve seen it before on another blog or maybe you’ve figured it out for yourself. For me, I can honestly say it was the latter, but with some minimal registry skills, it’s not so difficult.

In short, my Virtual Infrastructure Client (VIC) cached list of host connection entries (at the login prompt) had gotten much too polluted over time with many stale entries that I wanted to get rid of. This can happen over the course of time if you use your VIC to connect to many different vCenter servers or explicit hosts in various environments. Particularly, I would think this can happen quickly to consultants who travel from site to site supporting virtual infrastructures.

There is a way to manipulate the cached list you see in the pulldown box, and by manipulate, I don’t just mean delete. In addition to deleting entries, you can modify them (perhaps for a DNS suffix migration), re-order them (VMware doesn’t necessarily maintain this list in alphabetical order, or perhaps you’d like a custom sort order), or add new ones. For that last case, consider a scenario where you have a packaged VIC you want to roll out to a new VMware admin: instead of presenting the new admin, who has no knowledge of the environment, with a blank VIC, help them hit the ground running with a pre-populated list of vCenter servers or ESX hosts to connect to.

As the title of this post indicates, the cached entries are stored in the Windows registry and are tied to each individual user profile (HKU). You’ll find the comma delimited list of entries in the following registry key:

HKU\<User SID>\Software\VMware\VMware Infrastructure Client\Preferences\

The value name is RecentConnections and the type is REG_SZ
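If you’d rather script the cleanup than hand-edit the value, the string manipulation itself is trivial. Here’s a sketch in Python; the host names are made up, and on a real workstation you would read and write the RecentConnections value with the winreg module (Windows only) rather than hard-coding the string as I do here:

```python
# Sketch: clean up a RecentConnections-style comma-delimited list.
# Host names below are hypothetical examples.
def clean_connections(value, remove=(), sort=False):
    """Dedupe a comma-delimited connection list, optionally dropping
    stale entries and sorting what remains."""
    seen = []
    for entry in value.split(","):
        entry = entry.strip()
        if entry and entry not in seen and entry not in remove:
            seen.append(entry)
    return ",".join(sorted(seen) if sort else seen)

old = "vc1.lab.local,esx1.lab.local,vc1.lab.local,oldvc.lab.local"
new = clean_connections(old, remove={"oldvc.lab.local"}, sort=True)
print(new)  # esx1.lab.local,vc1.lab.local
```

The same function covers the re-order, modify, and pre-populate use cases: build whatever comma-delimited string you want and write it back to the value.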

There’s one more value nearby that sticks out like a sore thumb:

HKU\<User SID>\Software\VMware\VMware Infrastructure Client\Preferences\UI\SSLIgnore\

The value names vary by connection name or IP address and the type is REG_SZ. These represent each connection where you’ve checked the little box telling the VIC to ignore the SSL certificate warnings you will receive in an “out of the box” configuration. I can’t think of a great use case in which someone who has chosen to ignore SSL warnings would want to re-enable them, other than a situation where they’ve since enabled legitimate SSL.

To find out why disabling SSL warnings might not be such a great idea, see my previous blog post titled SSL Integration With VirtualCenter.

As they say in the ARMY, or at least on M*A*S*H… “That is all”.

ThinLaunch Software Announces the Immediate Availability of Thin Desktop 2.3.2

June 13th, 2009


(St. Paul, MN) ThinLaunch Software, LLC announces the immediate availability of Thin Desktop 2.3.2, which enhances the award-winning Thin Desktop application announced in August 2008. Thin Desktop 2.3.2 simplifies deployment and adoption of virtual desktop strategies by overcoming common barriers associated with the implementation of these strategies.

Thin Desktop enhances the overall value of virtualization by simplifying the deployment and implementation of virtual desktops at the user device. Thin Desktop replaces the local user interface, then locks down and monitors the user / client device. This allows the administrator to gain complete control over the client end point and the user experience. When compared to group policy methods, “registry hacks” and other similar approaches, Thin Desktop is far easier to implement, deploy and maintain. Unlike the implementation of a traditional Thin Client model, Thin Desktop requires no changes to the enterprise infrastructure and has no server footprint or management server.

When a PC or Thin Client is locked down using Thin Desktop, the typical shell / user interface is hidden from the user and replaced by the designated connection or application. At the same time, underlying capabilities allowed by the administrator can remain intact. No changes to the enterprise infrastructure are required and no additional tools or management functionality is needed.

The release of version 2.3.2 enhances deployment of Thin Desktop using industry standard methods, tools and architectures. An administrator can now deploy and implement Thin Desktop on any PC or Thin Client via standard unattended silent install capability and existing software distribution and imaging methods.

“Thin Desktop 2.3.2 is the result of feedback from a wide variety of customers with very diverse use cases and requirements. A common thread is the desire to adopt virtual desktop technologies while preserving investments in current hardware, infrastructure and skill sets – with a clear path for future hardware and virtualization options,” said ThinLaunch Software General Manager, Mike Cardinal. “Customer environments with both PC and Thin Client devices will coexist for the foreseeable future. Most users don’t care about the box connected to the monitor, keyboard and mouse – and administrators don’t want them to care.”

For additional information and an Evaluation Download of Thin Desktop, visit the website at

About ThinLaunch Software, LLC
ThinLaunch Software, LLC has developed Thin Desktop to enhance the value of client device assets. Established in May of 2007, ThinLaunch software is privately held and based in Eagan, MN, a suburb of St. Paul, MN.
ThinLaunch Software and Thin Desktop are registered trademarks of ThinLaunch Software, LLC. Additional trademarks and patents pending. Please visit the website at:

Tripwire Launches vWire, A Virtualization Management Solution

June 8th, 2009


vWire gives visibility and control to virtualization engineers to dramatically decrease complexity, downtime, and operational costs in virtual environments.

PORTLAND, Ore. – June 9, 2009 – Tripwire® today announced vWire™, the first virtualization management solution to integrate change and configuration awareness into Virtual Infrastructure (VI) management. Solving a real business need for virtualization administrators, vWire continuously monitors the state of virtual systems and correlates data with critical events to provide context and insight into potential issues, and then acts to prevent and resolve problems that cause downtime.

vWire was designed based on direct input from the virtualization community to introduce a greater level of control and visibility over virtual environments. The new offering from Tripwire reduces the time that virtual infrastructure managers spend troubleshooting system failures. With vWire, IT organizations can optimize their virtual infrastructures, decrease downtime and improve reliability, while reducing operating costs through automated monitoring and problem detection.

“Tripwire is working with VMware to provide customers with solutions and tools that offer a holistic view of their virtualized environments while improving operational compliance of these systems,” said Shekar Ayyar, vice president, infrastructure alliances, VMware. “Tripwire’s new vWire extends the robust management capabilities of VMware solutions with capabilities targeted at providing increased visibility and helping troubleshoot problems quickly.”

vWire is easy to set up and requires no special training. Once installed with sufficient licenses and credentials, vWire automatically starts recording and analyzing the status of the entire virtual infrastructure. vWire is composed of three major components:

  • Comprehensive Monitoring and Data Collection – vWire monitors all change, configuration and critical event data, providing a database of up-to-the-minute information that can be quickly accessed, searched, and filtered. vWire complements the availability features of the VMware platform by monitoring systems to help prevent or eliminate virtual system downtime.
  • Shared content – vWire data can be accessed with various combinations of filters, scripts and alerts, easily accessed through the vWire dashboard. Administrators can see at a glance the aggregated information that is relevant to ensuring the health of their virtual infrastructure. These capabilities ship with out-of-the box content, but can be easily customized. Users can also import shared content from the community, expanding the library of tools and solutions.
  • The vWire community – This online forum provides an opportunity for VI administrators, industry experts and product specialists to share content, expertise and solutions for issues involving the management of virtual systems.

According to Stephen Beaver, Tripwire Virtualization Evangelist and co-author of two books on virtualization (“Essential VMware ESX Server” and “Scripting VMware Power Tools: Automating Virtual Infrastructure Administration”), “virtualization is unique in that there can be thousands of unique configuration properties in a virtual infrastructure, and the number quickly goes up when you consider multiple objects. For example, a moderately small installation with two clusters, six ESX hosts, and 60 VMs will have 38,700 configuration properties for the clusters, hosts, and VMs. vWire’s configuration automation gives virtual administrators new visibility to effectively monitor and manage configuration, change and performance data across the virtual infrastructure – all from within VMware vCenter™ Server.”

vWire is available for immediate purchase and download at Also available at are two free, downloadable tools to help manage virtualized infrastructure: OpsCheck, which helps ensure systems are configured to support VMware VMotion™ by rapidly analyzing ESX 3.0, 3.5, and ESXi hypervisors; and ConfigCheck, which helps ensure VMware environments are properly configured, a recommended and essential first step when deploying and virtualizing additional servers.

About Tripwire, Inc.

Tripwire is the leader in data center compliance and infrastructure management solutions, building confidence for IT across both virtual and physical infrastructures. Tripwire Enterprise and vWire software help over 6,500 enterprises worldwide meet their configuration auditing, file integrity monitoring, virtual infrastructure management and change auditing needs for IT operations, security and compliance. Tripwire is headquartered in Portland, Ore. with offices worldwide. Tripwire can be found at,, and @vwire on Twitter.


©2009, Tripwire, Inc. Tripwire is a registered trademark of Tripwire, Inc. All other marks are property of their respective owners. All rights reserved.

VMware Update Manager, Updates, and New Builds

June 7th, 2009

This was somewhat of a strange post to get off the ground. I had a definite purpose at the beginning and I knew what I was going to write about, however, through some lab scenarios I unexpectedly took the scenic route in getting to the end.

In my mind, the topic started out as “Effective/Efficient Use of Update Manager For New Builds”.

Then, while working in the lab, the title changed to “Gosh, Update Manager Is Slow”.

A while later it morphed into “Cripes, What In The Heck Is Update Manager Doing?!”

Finally I had a revelation and the topic came full circle back to an appropriate title of “VMware Update Manager, Updates, and New Builds” which is what I more or less had in mind to begin with but as I said I picked up some information which I hadn’t recognized at the beginning.

“Effective/Efficient Use of Update Manager For New Builds”

So as I said, the idea of the post started out with a predefined purpose – a discussion about the use of Update Manager in host deployments. It really has more to do with host deployment methodology as a basis of discussion than it has to do with patch management. What I was going to highlight was that the deployment of an ESX host goes much quicker if you start out with the most current ESX .ISO allowed in your environment and then use VMware Update Manager to install the remaining patches to bring it up to current.

As an example, let’s say our current ESX platform standard is ESX 3.5.0 Update 4 with all patches up to today’s date of 6/6/09.

  • The most efficient deployment method would be to perform the initial installation of ESX using the ESX 3.5.0 Update 4 .ISO and then afterwards, use VMware Update Manager to install the remaining 15 patches through today’s date. Using Ultimate Deployment Appliance version 1.4, I can deploy ESX 3.5.0 with Update 4 in five minutes. The subsequent 15 patches using VMware Update Manager takes an additional 16 minutes, end to end including the reboot. That’s a total of less than 25 minutes to deploy a host with all patches.
  • Now let’s look at an alternative and much more time consuming method. Install ESX 3.5.0 using the original or even the Update 1 .ISO. Again, using UDA 1.4, this takes 5 minutes. Now we use Update Manager to remediate the ESX host to Update 4 plus the remaining 15 patches. If you used the original ESX .ISO, you’re looking at 149 updates. If you installed from the ESX 3.5.0 Update 1 .ISO, you’ve got 125 patches to install. This patching process takes nearly 90 minutes! Even on an IBM x3850M2 (one of the fastest hardware platforms available on the market today), the patch process is 75 minutes.

The numbers in the second bullet above speak to the deployment of one host. We always have more than one host in a high availability cluster and a typical environment might have 6, 12, or even 32 hosts in a cluster. Ideally we don’t want to be running hosts in a cluster on different patch levels for an extended duration. Suddenly we’re looking at a long day of work for a 6 node cluster (9.5 hours) and an entire weekend gone for a cluster of 12 hosts or more (18 hours +). The kicker is that this is still an automated deployment. Automation usually means efficiency right? Not in this case. Granted, there’s not a lot of manual labor involved here, but there is a lot of “hurry up and wait”.
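The cluster math above can be sketched in a few lines. The 95-minute per-host figure is my rough working estimate (the 5-minute scan plus the 84-minute remediation plus reboot and maintenance mode overhead), not a measured constant:

```python
# Sketch of the back-of-the-napkin math: serial host remediation in
# an N+1 cluster (only one host in maintenance mode at a time).
def cluster_remediation_hours(hosts, minutes_per_host=95):
    """Total wall-clock hours when hosts must be patched one at a time.
    95 min ~= 5 min scan + 84 min remediate + reboot slack."""
    return hosts * minutes_per_host / 60

for n in (6, 12, 32):
    print(f"{n:2d} hosts: {cluster_remediation_hours(n):.1f} hours")
```

Six hosts lands right at the 9.5 hours mentioned above, and 12 hosts at roughly 19 hours; host evacuations would only push these numbers higher.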

Now before anyone jumps in and recommends rebuilding all of the hosts concurrently, let’s just count that out as an option, because in this scenario we’re rebuilding an active cluster that can only afford one host outage at a time (N+1). I’m actually being generous with the time durations because I’m not even accounting for host evacuations, which, at the vCenter default of 2 at a time, can take a long time on densely populated clusters. It’s a real world scenario and if you don’t plan ahead for it, you may find out there is not enough time in a weekend to complete your upgrade.

Moral of this section: When deploying hosts, use the most recent .ISO possible which has all of the updates injected into it up to the release date of the .ISO.

“Gosh, Update Manager Is Slow”

I’ve heard some comments via word of mouth about how slow Update Manager is. Myself, I thought the comments were unfounded. I’ve never had major issues with Update Manager aside from a few nuisances I’ve learned to work around. Having managed ESX environments before the advent of Update Manager, I’m grateful for what Update Manager has brought to the table in lieu of manually populated and managed intranet update repositories. I never really noticed the Update Manager slowness because I was always deploying new host builds from the latest ESX .ISO as I described in the first bullet in the section above, and then applying the few incremental post deployment patches. Deploying the full boat of ESX patches using Update Manager has opened up my eyes as to how painfully slow it can be.

One interesting thing that I discovered in the lab was not only is the patch deployment process longer, the preceding scan process is as well. The interesting component is that both the scan and the remediate steps seem to scale in a linear fashion, whether that is actually true or just a coincidence, who knows. What I mean is that:

  • An ESX 3.5.0 Update 4 host took 1 minute to scan and 16 minutes to remediate
  • An ESX 3.5.0 Update 1 host took 5 minutes to scan and 84 minutes to remediate

So we’re wasting extra time in both of the remediation processes: The scan, and the remediate.
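A quick sanity check of the lab numbers shows what I mean by scaling in a linear fashion: the scan and the remediate steps both grew by roughly the same factor (about 5x) between the two hosts:

```python
# Lab-measured minutes from the two bullets above.
scan_u4, remediate_u4 = 1, 16   # ESX 3.5.0 Update 4 host
scan_u1, remediate_u1 = 5, 84   # ESX 3.5.0 Update 1 host

scan_factor = scan_u1 / scan_u4              # scan grew 5.0x
remediate_factor = remediate_u1 / remediate_u4  # remediate grew 5.25x
print(scan_factor, remediate_factor)
```

Whether the near-identical growth factors are a real property of Update Manager or a coincidence of this sample, the practical conclusion is the same: more outstanding patches means proportionally more time in both steps.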

Moral of this section: Update Manager or ESX patch installation or both is slow, but it doesn’t have to be. Same as the moral of the first section: Avoid this pitfall by using the most recent .ISO possible which has all of the updates injected into it up to the release date of the .ISO.

“Cripes, What In The Heck Is Update Manager Doing?!”

So then curiosity got the best of me and I took the lab experiment a little further. Of the 84 minutes spent remediating the ESX 3.5.0 Update 1 host above, how much of that time was spent installing Update 4, and how much was spent installing the 15 subsequent post-Update 4 patches? After all, I already know that remediating the 15 post-Update 4 patches by themselves takes only 16 minutes. Will the numbers jibe?

To find out, I deployed an ESX 3.5.0 Update 1 host and created a remediation baseline containing ONLY ESX 3.5.0 Update 4. Big sucker – 723MB, but because it’s just one giant service pack, perhaps it will install quicker than the sum of all its updates. Here’s where I was really wrong.

I remediated the host and expected to see one task in vCenter describing an installation process, and then a reboot. Instead, I saw a boatload of patches being installed.

Which brings me to the title of this section: “Cripes, What In The Heck Is Update Manager Doing?!” Did I apply the wrong baseline? Did Update Manager become self-aware like Skynet and decide to engineer its own creative solutions to datacenter problems?

It turns out Update 4 is not a patch or a service pack at all. In and of itself, it doesn’t even include binary RPM data. It’s metadata that points to all ESX 3.5.0 patches dated up to and including 3/30/09. Sure, you can download Update 4 as a 724MB offline installation package from the VMware download section, but mosey on over to their patch repository portal and you’ll see that the giant list of superseded and included updates in Update 4 is merely an 8.0K download. At first I thought that had to be a typo and I was about to drop John Troyer an email, but opening up that 8K file just for kicks was the eye opener for me. Take a look at the 8K file and you’ll see the metadata that tells Update Manager to go download many of the incremental patches leading up to 3/30/09. It’s the same concept with the 724MB offline installation package: it’s a .ZIP file. Open it up and you won’t find one large 724MB .RPM. Instead, you’ll find a directory structure containing many of the incremental updates leading up to 3/30/09.
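You can verify the structure of an offline installation package yourself with a few lines of Python. The sketch below builds a toy bundle in memory so it runs anywhere; the patch bundle names are made up for illustration, but pointing the same inspection code at the real Update 4 .ZIP shows the same shape – one directory per incremental patch rather than a single monolithic RPM:

```python
import io
import zipfile

# Build a toy "offline bundle" in memory. Real bundles contain one
# directory per incremental patch; these names are hypothetical.
toy = io.BytesIO()
with zipfile.ZipFile(toy, "w") as zf:
    for name in ("ESX350-200903201-UG/metadata.xml",
                 "ESX350-200903202-UG/metadata.xml"):
        zf.writestr(name, "<metadata/>")

# Inspect it the same way you would inspect the real download:
with zipfile.ZipFile(toy) as zf:
    bundles = sorted({n.split("/")[0] for n in zf.namelist()})
    print(bundles)  # one entry per incremental patch, no big RPM
```

Swap `toy` for `open("ESX350-Update04.zip", "rb")` (or whatever your downloaded file is named) to list the contents of the actual package.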

Moral of this section: Same as the moral of the first and second sections: Avoid wasting your valuable maintenance window time by avoiding as many incremental ESX patches as possible. Use the most recent .ISO possible which has all of the updates injected into it up to the release date of the .ISO when you deploy a host.

“VMware Update Manager, Updates, and New Builds”

Connect the dots and I think we’ve got a best practice in the making for host deployments using Update Manager. Existing and new host deployments aside, look at the implications of using Update Manager to deploy a major Update (in this discussion, Update 4). It’s actually 5 times faster to rebuild the host with the integrated Update 4 .ISO than it is to patch it with Update Manager. To me that’s bizarre but it is reality if you have automated host deployment methods. For medium to large environments, automated builds are absolutely required. There’s not enough time in the weekend to patch an 18 host cluster, let alone a 32 node cluster using Update Manager. Rebuild from an updated .ISO or span your host updates over several maintenance windows. The latter could get hairy and I definitely would not recommend it.

Great day today and I got a lot accomplished in the lab. Unfortunately, towards the end, a piece of lab hardware failed.

Replacement unit is already on the way from NewEgg. Thank you vWire for funding the replacement!