8 New ESX 3.5.0 Patches Released; 3 Critical

October 16th, 2009 by jason

Eight new patches have been released for ESX 3.5.0. Other versions of ESX, including vSphere and ESXi, are not impacted.

3 of the 8 patches are rated critical and should be evaluated quickly for application in your virtual infrastructure.

ID: ESX350-200910401-SG Impact: HostSecurity Release date: 2009-10-16 Products: esx 3.5.0 Updates VMkernel, Tools, hostd

This patch contains the following fixes and enhancements:

This patch updates the service console kernel version to kernel-2.4.21-58.EL. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the names CVE-2008-4210, CVE-2008-3275, CVE-2008-0598, CVE-2008-2136, CVE-2008-2812, CVE-2007-6063, and CVE-2008-3525 to the security issues fixed in kernel-2.4.21-58.EL.

This patch reduces the boot time of ESX hosts and should be applied when multiple ESX hosts detect LUNs used for Microsoft Cluster Service (MSCS).

Symptom: Error messages similar to the following might be logged in the /var/log/vmkernel log file of the service console:

Jul 24 14:34:24 VMEX3EQCH1100003 vmkernel: 165:15:48:57.500 cpu0:1033)WARNING: SCSI: 5519: Failing I/O due to too many reservation conflicts

Jul 24 14:34:24 VMEX3EQCH1100003 vmkernel: 165:15:48:57.500 cpu0:1033)WARNING: SCSI: 5615: status SCSI reservation conflict, rstatus 0xc0de01 for vmhba1:0:9. residual R 919, CR 0, ER 3

Jul 24 14:34:24 VMEX3EQCH1100003 vmkernel: 165:15:48:57.500 cpu0:1033)SCSI: 6608: Partition table read from device vmhba1:0:9 failed: SCSI reservation conflict (0xbad0022)

Any additional lines or customizations added by a user to the /etc/fstab file are deleted when VMware Tools is reinstalled or reconfigured. This issue occurs because, during uninstallation, VMware Tools restores the files that were backed up during installation.

After this patch is applied, any connection request to ESX 3.5 that uses a cipher suite with 56-bit encryption will be dropped. As a result, browsers that exclusively use cipher suites with 40-bit and 56-bit encryption cannot connect to ESX 3.5. Microsoft has made the Internet Explorer High Encryption Pack available for Internet Explorer 5.01 and earlier. Internet Explorer 5.5 and higher versions already use 128-bit encryption.

This patch contains a fix for a security vulnerability in the ISC third-party DHCP client. This vulnerability allows for code execution in the client by a remote DHCP server through a specially crafted subnet-mask option. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the name CVE-2009-0692 to this issue.

ID: ESX350-200910402-BG Impact: Critical Release date: 2009-10-16 Products: esx 3.5.0 Updates ESX Scripts

This patch must be installed together with ESX350-200910401-SG (KB 1013124) to resolve a boot-time issue. The patch reduces the boot time of ESX hosts and should be applied when multiple ESX hosts detect LUNs used for Microsoft Cluster Service (MSCS).

ID: ESX350-200910403-SG Impact: HostSecurity Release date: 2009-10-16 Products: esx 3.5.0 Updates Web Access

This patch updates the following:

WebAccess component Tomcat server to 5.5.27. This update addresses multiple security issues that exist in the earlier releases of the Tomcat server.

The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the names CVE-2008-1232, CVE-2008-1947, and CVE-2008-2370 to the issues addressed by Tomcat 5.5.27. For more information on these security vulnerabilities, refer to the Apache Tomcat 5.x Vulnerabilities page at http://tomcat.apache.org/security-5.html.

WebAccess component JRE to 1.5.0_18. This update addresses multiple security issues that existed in the previous versions of JRE.

The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the following names to the security issues fixed in JRE 1.5.0_17:

CVE-2008-2086, CVE-2008-5347, CVE-2008-5348, CVE-2008-5349, CVE-2008-5350, CVE-2008-5351, CVE-2008-5352, CVE-2008-5353, CVE-2008-5354, CVE-2008-5356, CVE-2008-5357, CVE-2008-5358, CVE-2008-5359, CVE-2008-5360, CVE-2008-5339, CVE-2008-5342, CVE-2008-5344, CVE-2008-5345, CVE-2008-5346, CVE-2008-5340, CVE-2008-5341, CVE-2008-5343, and CVE-2008-5355.

The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the following names to the security issues fixed in JRE 1.5.0_18:

CVE-2009-1093, CVE-2009-1094, CVE-2009-1095, CVE-2009-1096, CVE-2009-1097, CVE-2009-1098, CVE-2009-1099, CVE-2009-1100, CVE-2009-1101, CVE-2009-1102, CVE-2009-1103, CVE-2009-1104, CVE-2009-1105, CVE-2009-1106, and CVE-2009-1107.

ID: ESX350-200910404-SG Impact: HostSecurity Release date: 2009-10-16 Products: esx 3.5.0 Updates cim

After this patch is applied, any connection request to CIM port 5989 on ESX 3.5 that uses a cipher suite with 56-bit encryption will be dropped.

ID: ESX350-200910405-SG Impact: HostSecurity Release date: 2009-10-16 Products: esx 3.5.0 Updates mptscsi drivers

This patch updates the mptscsi driver to a version that is compatible with the service console version kernel-2.4.21-58.EL.

ID: ESX350-200910406-SG Impact: HostSecurity Release date: 2009-10-16 Products: esx 3.5.0 Updates Service Console DHCP Client

The service console package dhclient has been updated to version dhclient-3.0.1-10.2. This fixes a stack buffer overflow flaw in the ISC DHCP client and a flaw in the way the DHCP daemon init script handles temporary files. The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the names CVE-2009-0692 and CVE-2009-1893 to these issues.

ID: ESX350-200910408-BG Impact: Critical Release date: 2009-10-16 Products: esx 3.5.0 Updates VMkernel iSCSI driver

When ESX 3.5 hosts are connected to Adaptec Snap Server series or Dell NX series NAS appliances through the ESX software iSCSI initiator, sometimes the iSCSI LUNs are not detected by the ESX 3.5 hosts. The issue is caused by the way the software iSCSI driver detects an overflow condition. This patch fixes the issue.

ID: ESX350-200910409-BG Impact: Critical Release date: 2009-10-16 Products: esx 3.5.0 Updates Emulex FC driver

ESX 3.5 Update 4 hosts with Emulex HBAs might stop responding when accessed through vCenter Server. This Emulex driver patch fixes the issue.

Symptom: On ESX hosts, any application making an ioctl call into the Emulex driver might fail.

Virtualizing vCenter With vDS Catch-22

October 9th, 2009 by jason

I’ve typically been a fan of virtualizing the vCenter management server in most situations. VMware vCenter and Update Manager both make fine virtualization candidates as long as the underlying infrastructure for vCenter stays up. Loss of vCenter in a blackout situation can make things a bit of a hassle, but one can work through it with the right combination of patience and knowledge.

A few nights ago I had decided to migrate my vCenter VM to my vSphere virtual infrastructure. Because my vCenter VM was on a standalone VMware Server 2.0 box, I had to shut down the vCenter VM in order to cold migrate it to one of the ESX4 hosts directly, transfer the files to the SAN, upgrade virtual hardware, etc. Once the files were migrated to the vSphere infrastructure, it was time to configure the VM for the correct network and power it up. This is where I ran into the problem.

vCenter was shut down and unavailable; therefore, I had connected my vSphere client directly to the ESX4 host to which I had transferred the VM. When I tried to configure the vCenter VM to use the vNetwork Distributed Switch (vDS) port group I had set up for all VM traffic, it was unavailable in the dropdown list of networks. The vCenter server was powered down and thus the vDS Control Plane was unavailable, eliminating my view of vDS networks.

This is a dilemma. Without a network connection, the vCenter server cannot communicate with its back-end SQL database, which runs on a different box. That prevents the vCenter services from starting, and without vCenter I have no visibility into the vDS. Fortunately I have a fairly flat network in the lab with just a few subnets. I was able to create a temporary vSwitch and port group locally on the ESX4 host to give the vCenter VM the network connectivity it needed; once vCenter was back up, I could change the VM’s network from the local port group to the vDS port group on the fly.
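I did this through the vSphere client connected directly to the host, but the same idea can be scripted. Below is a minimal sketch using pyVmomi, connecting straight to the ESX host since vCenter is down; the host name, credentials, the vmnic1 uplink, and the vSwitchTemp/TempVMNetwork names are placeholders for illustration, not anything from my lab.

# Minimal sketch: create a temporary standard vSwitch and port group directly on
# an ESX host so a powered-off vCenter VM can regain connectivity while the vDS
# control plane is unavailable. All names and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab host with a self-signed certificate
si = SmartConnect(host="esx4-01.lab.local", user="root", pwd="password", sslContext=ctx)

# Connected directly to a host, the inventory is a single datacenter containing
# a single compute resource, which contains the host itself.
host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
net_sys = host.configManager.networkSystem

# Temporary standard vSwitch backed by a spare physical uplink (vmnic1 is assumed)
vswitch_spec = vim.host.VirtualSwitch.Specification(
    numPorts=64,
    bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic1"]))
net_sys.AddVirtualSwitch(vswitchName="vSwitchTemp", spec=vswitch_spec)

# Local port group the vCenter VM can be attached to from a host-direct client session
pg_spec = vim.host.PortGroup.Specification(
    name="TempVMNetwork", vlanId=0, vswitchName="vSwitchTemp",
    policy=vim.host.NetworkPolicy())
net_sys.AddPortGroup(portgrp=pg_spec)

Disconnect(si)

Once vCenter is back online and the VM has been moved onto the vDS port group, the temporary objects can be cleaned up with RemovePortGroup and RemoveVirtualSwitch on the same network system object.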

Once the vCenter server was back up, I further realized that vDS port groups still cannot be seen when the vSphere client is connected directly to an ESX4 host. The ability to configure a VM to use vDS networking requires both that the vCenter server be functional and that the vSphere client be connected to that vCenter server rather than to a managed host.

The situation I explained above is the catch-22: the temporary inability to configure VMs for vDS networking while the vCenter server is unavailable. One might call my situation a convergence of circumstances, but with an existing virtualized vCenter server that you’re looking to migrate to a vDS-integrated vSphere infrastructure, the scenario is very real. I’d like to note that all VMs which had been running on a vDS port continued to run without a network outage, as the vDS Data Plane is maintained on each host and remained intact.

SQL 2005 SP2 End of Support to Force Rapid vSphere Upgrade?

October 1st, 2009 by jason

The way I read it, the Microsoft Support Lifecycle for SQL Server 2005 tells me that SQL Server 2005 SP2 support ends on 12/15/2009. That’s about 10 weeks from today.

Why should you care? If you’re utilizing VMware vCenter Server 2.5 in your production datacenter, you’ve got about 10 weeks to upgrade to vSphere to stay within a VMware supported configuration. The VMware Virtual Infrastructure Compatibility Matrixes reveal on page 10 that vCenter 2.5 is only compatible with SQL Server 2005 up to Service Pack 2. SP3 is not supported.

To make the jump to SQL Server 2005 SP3 or SQL Server 2008 requires upgrading to vSphere to stay within a VMware supported configuration.
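If you’re not sure which service pack the vCenter database server is actually running, it’s worth checking before planning around the deadline. A minimal sketch using Python and pyodbc follows; the server name and driver string are placeholders, and the same SERVERPROPERTY query can of course be run directly in Management Studio.

# Minimal sketch: report the version and service pack level of the SQL Server
# instance backing vCenter. Server name and driver string are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=vcdb-sql.lab.local;DATABASE=master;Trusted_Connection=yes")
cursor = conn.cursor()
cursor.execute("SELECT SERVERPROPERTY('ProductVersion'), SERVERPROPERTY('ProductLevel')")
version, level = cursor.fetchone()
print("SQL Server version: %s (%s)" % (version, level))  # e.g. 9.00.3042.00 (SP2)
conn.close()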

I would venture to guess that a lot of VI customers are not ready for the jump to vSphere, especially those who wish to take advantage of the new features, given the design considerations that must be evaluated and planned before deployment. Not to mention the licensing considerations tied to the new features. While we’re on the subject of licensing, keep in mind that Enterprise licensing is retired in mid-December 2009. Keeping existing Enterprise features in the virtual infrastructure will require Enterprise Plus licensing after that retirement date.

With the SQL 2005 SP2 retirement date approaching, I’ll be looking for VMware to modify its support stance and support SQL Server 2005 SP3. A lot of customers are going to need this to stay within support.

Speaking of SQL Server 2008, beware a caveat that Orchestrator 4.0 is not supported on SQL 2008 (yet).

VCDX Design Exam: been there, done that!

October 1st, 2009 by jason

Borrowing a blog post title from my friend in virtualization Duncan, I passed the VCDX Design exam this morning with a score of 369. A passing mark of 300 out of a possible 500 is required. I had a lot of built-up anxiety for this exam for a few reasons:

  1. Duncan Epping (mentioned above) had said he thought the Design exam was more difficult than the Enterprise exam. He’s already VCDX certified and he’s a VMware genius.
  2. I was at a loss as far as what to study. The blueprint covered topics that I felt were vague from a formal training or studying perspective; it implies a requirement for real-world experience.

Therefore, my study method consisted of:

  1. 30 minutes looking over the VCDX Design blueprint
  2. 1 hour of brushing up on NPIV documentation
  3. 1 hour of reviewing virtualized Microsoft Cluster requirements
  4. A quick review of TCP/UDP ports used in VMware virtual infrastructure in the enterprise (including SQL, Oracle, SNMP, Syslog, AD, LDAP, NFS, iSCSI, etc.)
  5. Knowledge of vSphere must be thrown out. Candidates need to remember this is clearly a VI3 exam.
  6. 13 years broad IT experience, 8 years experience with VMware products, 5 years experience with ESX

Once in the exam room, I found it to be less difficult than the Enterprise exam (which felt more like a Red Hat exam than a VMware exam). I surmise Duncan’s experience was different because English is not his native language (although he speaks it exceptionally well) and there is a lot of reading and interpretation of data on this exam. There was also a decent share of short, to-the-point questions. While I admit I didn’t have the best score, I found many of the questions to be fairly simple and not what I expected on an advanced-level certification exam. Part way into the exam I felt fairly comfortable about passing, given the degree of difficulty I had experienced thus far and assuming it would continue through to the end.

The exam format is two parts:

  1. Part 1 consists of 51 multiple-choice/multiple-select questions. This section also contains several drag-and-drop style questions. One of the drag-and-drop questions was missing an obvious correct component and had a duplicate of another; I don’t believe this was intentional, and I commented on the question with the corrections needed.
  2. Part 2 consists of a Visio-like architecture design tool where you freehand place components for a customer design. There is an assload of reading, and the requirements and the actual design drawing are all presented poorly on one small screen – probably good practice for being in front of customers who either don’t know what they want or don’t easily convey what they want. I spent 27 minutes on the last design question and ran out of time before I was comfortable with the result, so I highly doubt my design was 100% accurate. Jon Hall, if you’re reading this, I’m curious to know what the grading scale is between the 51 questions and the final design.

So that’s it. I’m on to the VCDX Design application step once VMware invites me (I hear the design application involves a lot of lengthy documentation writing and takes about 2 solid weeks to complete – following the advice of other existing VCDXs on Twitter, the application is NOT an area to skimp on), and then the final defense step after that.

I’m an end user and not in front of customers daily. Consulting is solid experience to have for the VCDX process; I think the VCDX is designed for consultants, so consultants are set up well and have an inherent advantage. Wish me luck, I’ll need it.

Align Datastore Names With VM Names Using Storage vMotion

September 30th, 2009 by jason

Does it bug you when the registered names of your VMs do not match the folder and file names on the datastore? It can be difficult to identify VMs when browsing the datastore if the folder and file names do not match the VM name. Or maybe the VM names generally match what’s on the datastore but there are some case sensitivity discrepancies. I for one am uncomfortable with these situations. While fixing the problem by bringing the datastore folder/file names into alignment with the VM name isn’t impossible, the process is painful when done manually and requires an outage of the VM.
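If you’d rather not hunt for the offenders by browsing datastores by hand, the vSphere API exposes both the registered name and the backing path of every VM, so the mismatches can be listed in one pass. Here is a minimal pyVmomi sketch; the vCenter name and credentials are placeholders.

# Minimal sketch: list VMs whose registered name differs from the folder name
# backing them on the datastore. vCenter name and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator", pwd="password", sslContext=ctx)

view = si.content.viewManager.CreateContainerView(si.content.rootFolder, [vim.VirtualMachine], True)
for vm in view.view:
    if vm.config is None:          # skip orphaned or inaccessible VMs
        continue
    vmx_path = vm.config.files.vmPathName   # e.g. "[datastore1] old_name/old_name.vmx"
    folder = vmx_path.split("] ", 1)[1].split("/", 1)[0]
    if folder != vm.name:
        print("%s is backed by folder '%s'" % (vm.name, folder))

Disconnect(si)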

Here’s a simple trick I’m sure many are already aware of. I remember hearing about it quite a while ago (I don’t remember where) but had forgotten about it until today. Let VMware Storage VMotion take care of the problem for you. During the Storage VMotion process, the destination folder/file names are synchronized with the name of the VM on the fly with no outage.

For example, let’s say we create a VM with a registered name of “old_name”. The datastore backing folder is named “old_name” and the files inside are also prefixed with “old_name” (.vmdk, .vmx, etc.).

Now we go ahead and change the name of the VM to “new_name” in vCenter. The datastore backing folder and files still have the “old_name” and now obviously don’t match the registered VM name.

To bring the datastore backing folder and file names back into synchronization with the registered VM name, we can perform a Storage VMotion. In doing so, the backing folder and files will be dynamically renamed as they land on the new datastore. In this case, they will be renamed to “new_name”.

This solution is a heck of a lot easier than powering down the VM and renaming all the files, as well as modifying the corresponding metadata in some of the files.
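The trick also lends itself to scripting when there are many VMs to clean up, since the rename is simply a side effect of the relocate. Below is a minimal pyVmomi sketch of the Storage VMotion itself; the vCenter name, credentials, the "new_name" VM, and the "datastore2" target are placeholders.

# Minimal sketch: Storage VMotion a renamed VM to another datastore; on the
# versions discussed in this post the backing folder and files pick up the new
# VM name as they land. VM, datastore, and vCenter names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

def find_by_name(si, vimtype, name):
    # Return the first inventory object of the given type with a matching name.
    view = si.content.viewManager.CreateContainerView(si.content.rootFolder, [vimtype], True)
    for obj in view.view:
        if obj.name == name:
            return obj
    return None

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator", pwd="password", sslContext=ctx)

vm = find_by_name(si, vim.VirtualMachine, "new_name")
target_ds = find_by_name(si, vim.Datastore, "datastore2")

spec = vim.vm.RelocateSpec(datastore=target_ds)
WaitForTask(vm.RelocateVM_Task(spec))   # folder and files on datastore2 now carry "new_name"

Disconnect(si)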

Update 9/27/11: As reported by Gary below and validated in my lab, this trick no longer works in vSphere 5.0 with respect to file names within the folder.  As an example, after renaming the VM in vCenter inventory and then Storage vMotioning it, the destination folder name will match the VM, but the .vmx and .vmdk files inside will not.  This is unfortunate, as I have used this trick many times.

Update 11/7/12: Over a year later, vSphere 5.1 is shipping and this feature is still disabled.  VMware KB Article 2008877 has not been updated since the launch of vSphere 5.1.  If I were a customer, I’d be upset.  As an avid user of the product, I’m upset as much about the carelessness and complacency of VMware as I am about the disabling of the feature.

Update 12/21/12: Duncan Epping reports Storage vMotion file renaming is back in vSphere 5.0 Update 2.  Read more about that here.  This is a wonderful birthday present for me.

Update 1/25/13: Duncan Epping further clarifies that Storage vMotion file renaming in vSphere 5.0 Update 2 requires adding an advanced setting in vCenter (add the key “provisioning.relocate.enableRename” with value “true” and click “add”).  Read more about that here.  Duncan further hints that Storage vMotion file renaming may be coming to vSphere 5.1 Update 1.  No promises of course, and this is all just speculation.

Update 4/30/13: Duncan’s prophecy came to realization late last week when VMware released vSphere 5.1 Update 1 which restores Storage vMotion file renaming.  As pointed out by Cormac here and similar to the update above, an advanced setting in vCenter is required (Add the key “provisioning.relocate.enableRename” with value “true” and click “add”).
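For those who would rather not click through the vCenter Server advanced settings dialog, the same key can be added through the API’s OptionManager. A minimal pyVmomi sketch follows; the vCenter name and credentials are placeholders.

# Minimal sketch: add the vCenter advanced setting that re-enables Storage vMotion
# file renaming on vSphere 5.0 U2 / 5.1 U1. vCenter name and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator", pwd="password", sslContext=ctx)

option = vim.option.OptionValue(key="provisioning.relocate.enableRename", value="true")
si.content.setting.UpdateOptions(changedValue=[option])   # vCenter advanced settings

Disconnect(si)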

8 New ESX(i) 4.0 Patches Released; 7 Critical

September 25th, 2009 by jason

Eight new patches have been released for ESX(i) 4.0 (6 for ESX, 2 for ESXi).  Previous versions of ESX(i) are not impacted.

7 of the 8 patches are rated critical and should be evaluated quickly for application in your virtual infrastructure.

ID: ESX400-200909401-BG Impact: Critical Release date: 2009-09-24 Products: esx 4.0.0
Updates vmx and vmkernel64
This patch fixes some key issues such as:
* Guest operating system shows high memory usage on Nehalem-based systems, which might trigger memory alarms in vCenter.
* The monitor or vmkernel fails when running certain guest operating systems with a 32-bit monitor in binary translation mode.

See http://kb.vmware.com/kb/1014019 for details

NOTE: Cisco Nexus 1000v customers using VMware Update Manager to patch ESX 4.0 should add an additional patch download URL as described in KB 1013134

ID: ESX400-200909402-BG Impact: Critical Release date: 2009-09-24 Products: esx 4.0.0 Updates VMware Tools
This patch includes the following fixes:
* Updated VMware SVGA and mouse device drivers for supported Linux guest operating systems that use Xorg 7.5.
* PBMs for Debian 5.0.1.
* PBMs for SUSE Linux Enterprise 11 VMI kernel.

See http://kb.vmware.com/kb/1014020 for details

NOTE: Cisco Nexus 1000v customers using VMware Update Manager to patch ESX 4.0 should add an additional patch download URL as described in KB 1013134

ID: ESX400-200909403-BG Impact: Critical Release date: 2009-09-24 Products: esx 4.0.0 Updates bnx2x
This patch fixes the following issues:
* Virtual machines experience a network outage when they run with older versions of VMware Tools (ESX 3.0.x)
* A network outage is experienced if the MTU value is changed on a Broadcom NetXtreme II 10 Gigabit NIC.
* Unloading the driver causes a host reboot.

See http://kb.vmware.com/kb/1014021 for details

NOTE: Cisco Nexus 1000v customers using VMware Update Manager to patch ESX 4.0 should add an additional patch download URL as described in KB 1013134

ID: ESX400-200909404-BG Impact: Critical Release date: 2009-09-24 Products: esx 4.0.0 Updates ixgbe
This patch fixes the following issue:
* A vSphere ESX Host that has NIC teaming configured with the ixgbe driver for the physical NICs might fail if one of the physical NICs goes down.

See http://kb.vmware.com/kb/1014022 for more details

NOTE: Cisco Nexus 1000v customers using VMware Update Manager to patch ESX 4.0 should add an additional patch download URL as described in KB 1013134

ID: ESX400-200909405-BG Impact: HostGeneral Release date: 2009-09-24 Products: esx 4.0.0 Updates perftools
This patch fixes the following issue:
* esxtop utility might quit with the error message “VMEsxtop_GrpStatsInit() failed” when attempting to monitor network status on ESX.

See http://kb.vmware.com/kb/1014023 for more details

NOTE: Cisco Nexus 1000v customers using VMware Update Manager to patch ESX 4.0 should add an additional patch download URL as described in KB 1013134

ID: ESX400-200909406-BG Impact: Critical Release date: 2009-09-24 Products: esx 4.0.0 Updates hpsa
This patch fixes the following issues:
* A virtual machine might fail after the Storage Port controller is reset on ESX hosts that have the HPSA driver connected to a SAS array.
* Hosts cannot detect more than 2 HPSA controllers due to the limited driver heap size.

See http://kb.vmware.com/kb/1014024 for more details

NOTE: Cisco Nexus 1000v customers using VMware Update Manager to patch ESX 4.0 should add an additional patch download URL as described in KB 1013134

ID: ESXi400-200909401-BG Impact: Critical Release date: 2009-09-24 Products: embeddedEsx 4.0.0 Updates Firmware
This patch fixes some key issues such as:
* Guest operating system shows high memory usage on Nehalem-based systems, which might trigger memory alarms in vCenter.
* The monitor or vmkernel fails when running certain guest operating systems with a 32-bit monitor in binary translation mode.
See http://kb.vmware.com/kb/1014026 for details

NOTE: Cisco Nexus 1000v customers using VMware Update Manager to patch ESXi 4.0 should add an additional patch download URL as described in KB 1013134

ID: ESXi400-200909402-BG Impact: Critical Release date: 2009-09-24 Products: embeddedEsx 4.0.0 Updates Tools
This patch includes the following fixes:
* Updated VMware SVGA and mouse device drivers for supported Linux guest operating systems that use Xorg 7.5.
* PBMs for Debian 5.0.1.
* PBMs for SUSE Linux Enterprise 11 VMI kernel.

See http://kb.vmware.com/kb/1014027 for details

NOTE: Cisco Nexus 1000v customers using VMware Update Manager to patch ESXi 4.0 should add an additional patch download URL as described in KB 1013134

Lab Manager 4 and vDS

September 19th, 2009 by jason

VMware Lab Manager 4 enables new functionality in that fenced configurations can now span ESX(i) hosts by leveraging vNetwork Distributed Switch (vDS) technology, a new feature in VMware vSphere. Before getting overly excited, remember that vDS is found only in vSphere and only with Enterprise Plus licensing. Without vSphere and VMware’s top-tier license, vDS cannot be implemented, and thus fenced Lab Manager 4 configurations cannot be enabled to span hosts.

Host Spanning is enabled by default when a Lab Manager 4 host is prepared as indicated by the green check marks below:

When Host Spanning is enabled, an unmanageable Lab Manager service VM is pinned to each host on which it is enabled. This Lab Manager service VM cannot be powered down, suspended, VMotioned, etc.:

One ill side effect of this new Host Spanning technology is that an ESX(i) host will not enter maintenance mode while Host Spanning is enabled. For those new to Lab Manager 4, the cause may not be so obvious, and it can lead to much frustration. The unmanageable Lab Manager service VM pinned to the host is a running VM, and a running VM will prevent a host from entering maintenance mode. Maintenance mode will hang at the infamous 2% complete status:

The resolution is to first cancel the maintenance mode request. Then, manually disable host spanning in the Lab Manager host configuration property sheet by unchecking the box. Notice the highlighted message in pink telling us that Host Spanning must be disabled in order for the host to enter standby or maintenance mode. Unpreparing the host will also accomplish the goal of removing the service VM but this is much more drastic and should only be done if no other Lab Manager VMs are running on the host:

After reconfiguring the Lab Manager 4 host as described above, vSphere Client Recent Tasks shows the service VM is powered off and then removed by the Lab Manager service account:

At this point, re-invoke the maintenance mode request; the host will now migrate all VMs off and successfully enter maintenance mode.
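For completeness, the maintenance mode request itself can also be scripted once Host Spanning is disabled and the service VM is gone. A minimal pyVmomi sketch follows; the vCenter name, credentials, and host name are placeholders.

# Minimal sketch: request maintenance mode on a host after Lab Manager Host
# Spanning has been disabled and its service VM removed. Names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator", pwd="password", sslContext=ctx)

view = si.content.viewManager.CreateContainerView(si.content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esx4-01.lab.local")

# This hangs at the infamous 2% if a powered-on VM (such as the service VM) remains pinned to the host.
WaitForTask(host.EnterMaintenanceMode_Task(timeout=0))

Disconnect(si)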

While Lab Manager 4 Host Spanning is a step in the right direction for more flexible load distribution across hosts in a Lab Manager 4 cluster, I find the process for entering maintenance mode counterintuitive, cumbersome, and, back when I didn’t know what was going on, frustrating. Unsuccessful maintenance mode attempts have always been somewhat mysterious because vCenter Server doesn’t give us much information to pinpoint what is preventing the operation. This situation now adds another element to the complexity. VMware should have enough intelligence to disable Host Spanning for us in the event of a maintenance mode request, or at the very least tell us to shut it off, since it is conveniently and secretly enabled by default during host preparation. Of course, all of this information is available in the Lab Manager documentation, but who reads that, right? 🙂