Posts Tagged ‘Update Manager’

Drive-through Automation with PowerGUI

July 9th, 2014

One of the interesting aspects of shared infrastructure is stumbling across configuration changes made by others who share responsibility for managing the environment. This is often the case in the lab, but I’ve also seen it in every production environment I’ve supported to date. I’m not pointing any fingers; my back yard is by no means immaculate. Moreover, this bit is about automation, not placing blame (note that the former is productive while the latter is not).

Case in point: this evening I was attempting to perform a simple remediation of a vSphere 5.1 four-host cluster via Update Manager. I verified the patches and cluster configuration, hit the remediate button in VUM, and left the office. VUM, DRS, and vMotion do the heavy lifting. I’ve done it a thousand times or more in the past, in environments 100x this size.

I wrap up my 5pm appointment on the way home from the office, have dinner with the family, and VPN into the network to verify all the work was done. Except nothing had been accomplished. Remediation of the cluster was a failure. Looking at the VUM logs reveals that 75% of the hosts being remediated contain virtual machines with attached devices, preventing VUM, DRS, and vMotion from carrying out the remediation.

Obviously I know how to solve this problem, but manually checking and stripping every VM of its offending device is going to take way too long. I know what I’m supposed to do here. I can hear the voices in my head of PowerShell gurus Alan, Luc, etc. repeating the well-known automation battle cry: “anything repeated more than once should be scripted!”

That’s all well and good, I completely get it, but I’m in that all too familiar place of:

  1. Carrying out the manual tasks will take 30 minutes.
  2. Finding or authoring, then testing, a suitable PowerShell/PowerCLI script to automate the task will also take 30 minutes, probably more.
  3. FML, I didn’t budget time for either of the above.

There is a middle ground. I view it as drive-through efficiency automation. It’s called PowerGUI and it has been around almost forever. In fact, it comes from Quest, which my employer now owns. And I’ve already got it, along with the PowerPacks and Plug-ins, installed on my new Dell Precision M4800 laptop. Establishing a PowerGUI session and authenticating with my current infrastructure couldn’t be easier: from the legacy vSphere Client, choose the Plug-ins pull-down menu, then PowerGUI Administrative Console.

The VMware vSphere Management PowerPack ships stock with not only the VM query to find all VMs with offending devices attached, but also an action to highlight all of those VMs and disconnect the devices.
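
If you prefer plain PowerCLI over the PowerPack, the same cleanup can be done in a few lines. Here is a minimal sketch, assuming an existing Connect-VIServer session and that CD-ROM and floppy drives are the offending devices; review what it matches before turning it loose on production:

# A minimal PowerCLI sketch: find connected CD-ROM and floppy drives and
# disconnect them so DRS/vMotion can evacuate the hosts during remediation.
# (Set-CDDrive also supports -NoMedia if you'd rather strip the ISO backing entirely.)
Get-VM | Get-CDDrive |
  Where-Object { $_.ConnectionState.Connected } |
  Set-CDDrive -Connected:$false -Confirm:$false

Get-VM | Get-FloppyDrive |
  Where-Object { $_.ConnectionState.Connected } |
  Set-FloppyDrive -Connected:$false -Confirm:$false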

Depending on the type of device connected to the virtual machines, VUM may also be able to handle the issue, as it has the native ability to disable any removable media devices connected to the virtual machines on the host. In this case, the problem is solved with automation (I won’t get beat up on Twitter) and free community (now Dell) automation tools. Remediation completed.

RVTools (current version 3.6) also has identical functionality to quickly locate and disconnect various devices across a virtual datacenter.

vSphere 4.1 Update 1 Upgrade File Issues

February 11th, 2011

I began seeing this during upgrade testing last night in my lab but decided to wait a day to see if other people were having the same problems I was. It is now being reported in various threads in the vSphere Upgrade & Install forum that vSphere 4.1 Update 1 upgrade files are failing to import into VMware Update Manager (VUM). What I’m consistently seeing in multiple environments is:

  • .zip files which upgrade ESX and ESXi from 4.0 to 4.1u1 will import into VUM successfully.
  • .zip files which upgrade ESX and ESXi from 4.1 to 4.1u1 fail to import into VUM.
  • I have not tested the upgrade files for ESX(i) 3.5 to 4.1u1.

The success and error messages for all four .zip file imports are shown below. Two successful. Two failures.


MD5SUM comparisons with VMware’s download site all result in matches. I believe invalid metadata or corrupted .zip files are being made available for download.
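
For reference, verifying a downloaded bundle against the checksum published on the VMware download page takes only a few lines of PowerShell (the file name below is illustrative, not the actual bundle name):

# Compute the MD5 of a downloaded upgrade bundle and compare it by eye with
# the value posted on VMware's download site. The path is illustrative.
$file  = 'C:\Downloads\upgrade-from-esx4.1-to-4.1-update01.zip'
$md5   = [System.Security.Cryptography.MD5]::Create()
$bytes = [System.IO.File]::ReadAllBytes($file)
[System.BitConverter]::ToString($md5.ComputeHash($bytes)) -replace '-', ''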

The workaround is to create a patch baseline in VUM, which instructs VUM to download the necessary upgrade files itself; this is an alternative to importing upgrade bundles and using upgrade baselines in VUM.

Windows 2008 R2 and Windows 7 on vSphere

March 28th, 2010

If you run Windows Server 2008 R2 or Windows 7 as a guest VM on vSphere, you may be aware that it was advised in VMware KB Article 1011709 that the SVGA driver should not be installed during VMware Tools installation.  If I recall correctly, this was due to a stability issue which was seen in specific, but not all, scenarios:

If you plan to use Windows 7 or Windows 2008 R2 as a guest operating system on ESX 4.0, do not use the SVGA drivers included with VMware Tools. Use the standard SVGA driver instead.

Since the SVGA driver is installed by default in a typical installation, it was necessary to perform a custom installation (or perhaps a scripted one) to exclude the SVGA driver for these guest OS types. Alternatively, perform a typical VMware Tools installation and remove the SVGA driver from Device Manager afterwards. What you ended up with, of course, is a VM using the Microsoft Windows-supplied SVGA driver and not the VMware Tools version shown in the first screenshot. The Microsoft Windows-supplied SVGA driver worked and provided stability as well; however, one side effect was that mouse movement via the VMware Remote Console felt a bit sluggish.
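
For the scripted route, the VMware Tools installer can be run silently with MSI properties to leave out components. A sketch of what that might look like follows; the drive letter and the REMOVE=SVGA feature name are assumptions, so verify the exact component name against the VMware Tools documentation for your version before relying on it:

# Sketch: silent VMware Tools installation that skips the SVGA component.
# Run inside the guest from the mounted VMware Tools ISO. The drive letter
# and the REMOVE=SVGA feature name are assumptions; check the VMware Tools
# documentation for the exact component name in your Tools version.
Start-Process -FilePath 'D:\setup.exe' -ArgumentList '/S', '/v"/qn ADDLOCAL=ALL REMOVE=SVGA"' -Wait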

Beginning with ESX(i) 4.0 Update 1 (released 11/19/09), VMware changed the behavior and revised the above KB article in February, letting us know that they now package a new version of the SVGA driver in VMware Tools in which the bits are populated during a typical installation but not actually enabled:

The most effective solution is to update to ESX 4.0 Update 1, which provides a new WDDM driver that is installed with VMware Tools and is fully supported. After VMware Tools upgrade you can find it in C:\Program Files\Common Files\VMware\Drivers\wddm_video.

After a typical VMware Tools installation, you’ll still see a standard SVGA driver installed. Following the KB article, head to Windows Device Manager and update the driver to the bits located in C:\Program Files\Common Files\VMware\Drivers\wddm_video.
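
If clicking through Device Manager for every guest gets old, the same driver package can be staged from inside the guest with pnputil, which ships with Windows 7 and Windows Server 2008 R2. A rough sketch, run elevated; Device Manager or a reboot may still be needed before the display adapter picks up the new driver:

# Sketch: add the VMware WDDM driver package to the Windows driver store
# from inside the guest, using the default VMware Tools path shown above.
$wddm = 'C:\Program Files\Common Files\VMware\Drivers\wddm_video'
pnputil.exe -i -a "$wddm\*.inf"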

    

The result is that the new wddm driver, which ships with the newer version of VMware Tools, is installed.

After a reboot, the crisp and precise mouse movement I’ve become accustomed to over the years with VMware has returned.  The bummer here is that while the appropriate VMware SVGA drivers get installed in previous versions of Windows guest operating systems, Windows Server 2008 R2 and Windows 7 require manual installation steps, much like VMware Tools installation on Linux guest VMs.  Add to this the fact that the automated installation/upgrade of VMware Tools via VMware Update Manager (VUM) does not enable the wddm driver.  In short, getting the appropriate wddm driver installed for many VMs will require manual intervention or scripting.  One thing you can do is to get the wddm driver installed in your Windows Server 2008 R2 and Windows 7 VM templates.  This will ensure VMs deployed from the templates have the wddm driver installed and enabled.
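
If templates aren’t an option and there is a pile of existing VMs to touch, PowerCLI’s Invoke-VMScript can push that same pnputil command into each guest through VMware Tools. A hedged sketch, assuming an existing Connect-VIServer session, guest administrator credentials, and illustrative VM name patterns:

# Sketch: push the WDDM driver install into a set of Windows 7 / 2008 R2 guests.
$guestCred = Get-Credential   # an administrator account inside the guests

$driverCmd = 'pnputil.exe -i -a "C:\Program Files\Common Files\VMware\Drivers\wddm_video\*.inf"'

Get-VM -Name 'win7-*', 'w2k8r2-*' | ForEach-Object {
  Invoke-VMScript -VM $_ -ScriptText $driverCmd -GuestCredential $guestCred -ScriptType Bat
}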

The wddm driver install method from VMware is helpful for the short term; however, it’s not a scalable and robust long-term solution. We need an automated solution from VMware to get the wddm driver installed, and it needs to be integrated with VUM. I’m interested in finding out what happens with the next VMware Tools upgrade: will the wddm driver persist, or will the VMware Tools upgrade replace the wddm version with the standard version? Stay tuned.

Update 11/6/10:  While working in the lab tonight, I noticed that with vSphere 4.1, the correct wddm video driver is installed as part of a standard VMware Tools installation on Windows 7 Ultimate x64 – no need to manually replace the Microsoft video driver with VMware’s wddm version as this is done automatically now.

Update 12/10/10: As a follow-up to these tests, I wanted to see what happens when the wddm driver is installed under ESX(i) 4.0 Update 1 and its corresponding VMware Tools, and then the VM is moved to an ESX(i) 4.1 cluster and the VMware Tools are upgraded. Does the wddm driver remain intact, or will the 4.1 Tools upgrade somehow change the driver? During this test, I opted to use Windows 7 Ultimate 32-bit as the guest VM guinea pig. A few discoveries were made, one of which was a surprise:

1.  Performing a standard installation of VMware Tools from ESXi 4.0 Update 1 on Windows 7 32-bit will automatically install the wddm driver, version 7.14.1.31 as shown below. No manual steps forcing a second reboot were required to install this driver. I wasn’t counting on this. I expected the Standard VGA Graphics Adapter driver to be installed, as seen previously. This is good.


After moving the VM to a 4.1 cluster and performing the VMware Tools upgrade, the wddm driver was left intact; however, its version was upgraded to 7.14.1.40. This is also good in that the Tools upgrade doesn’t negatively impact the desired results of leveraging the wddm driver for best graphics performance.


More conclusive testing should be done with Windows 7 and Windows Server 2008 R2 64-bit to see if the results are the same.  I’ll save this for a future lab maybe.

VMware Update Manager Becomes Self-Aware

March 4th, 2010

@Mikemohr on Twitter tonight said it best:

“Haven’t we learned from Hollywood what happens when the machines become self-aware?”

I got a good chuckle.  He took my comment of VMware becoming “self-aware” exactly where I wanted it to go.  A reference to The Terminator series of films in which a sophisticated computer defense system called Skynet becomes self-aware and things go downhill for mankind from there.

Metaphorically speaking in today’s case, Skynet is VMware vSphere and mankind is represented by VMware vSphere Administrators.

During an attempt to patch my ESX(i) 4 hosts, I received an error message.

At that point, the remediation task fails and the host is not patched.  The VUM log file reflects the same error in a little more detail:

[2010-03-04 14:58:04:690 ‘JobDispatcher’ 3020 INFO] [JobDispatcher, 1616] Scheduling task VciHostRemediateTask{675}
[2010-03-04 14:58:04:690 ‘JobDispatcher’ 3020 INFO] [JobDispatcher, 354] Starting task VciHostRemediateTask{675}
[2010-03-04 14:58:04:690 ‘VciHostRemediateTask.VciHostRemediateTask{675}’ 2676 INFO] [vciTaskBase, 534] Task started…
[2010-03-04 14:58:04:908 ‘VciHostRemediateTask.VciHostRemediateTask{675}’ 2676 INFO] [vciHostRemediateTask, 680] Host host-112 scheduled for patching.
[2010-03-04 14:58:05:127 ‘VciHostRemediateTask.VciHostRemediateTask{675}’ 2676 INFO] [vciHostRemediateTask, 691] Add remediate host: vim.HostSystem:host-112
[2010-03-04 14:58:13:987 ‘InventoryMonitor’ 2180 INFO] [InventoryMonitor, 427] ProcessUpdate, Enter, Update version := 15936
[2010-03-04 14:58:13:987 ‘InventoryMonitor’ 2180 INFO] [InventoryMonitor, 460] ProcessUpdate: object = vm-2642; type: vim.VirtualMachine; kind: 0
[2010-03-04 14:58:17:533 ‘VciHostRemediateTask.VciHostRemediateTask{675}’ 2676 WARN] [vciHostRemediateTask, 717] Skipping host solo.boche.mcse as it contains VM that is running VUM or VC inside it.
[2010-03-04 14:58:17:533 ‘VciHostRemediateTask.VciHostRemediateTask{675}’ 2676 INFO] [vciHostRemediateTask, 786] Skipping host 0BC5A140, none of upgrade and patching is supported.
[2010-03-04 14:58:17:533 ‘VciHostRemediateTask.VciHostRemediateTask{675}’ 2676 ERROR] [vciHostRemediateTask, 230] No supported Hosts found for Remediate.
[2010-03-04 14:58:17:737 ‘VciRemediateTask.RemediateTask{674}’ 2676 INFO] [vciTaskBase, 583] A subTask finished: VciHostRemediateTask{675}

Further testing in the lab revealed that this condition is triggered by a vCenter VM and/or a VMware Update Manager (VUM) VM running on the host being remediated. I understand from other colleagues on the Twitterverse that they’ve seen the same symptoms occur with patch staging.

The workaround is to manually place the host in maintenance mode, at which time it has no problem whatsoever evacuating all VMs, including infrastructure VMs. At that point, the host in maintenance mode can be remediated.
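
The manual workaround can at least be shortened with a little PowerCLI. A sketch of the per-host routine, assuming an existing Connect-VIServer session and DRS in fully automated mode; the host name is borrowed from the log above:

# Sketch: evacuate a host ahead of remediation, then bring it back afterwards.
$esx = Get-VMHost -Name 'solo.boche.mcse'

# Entering maintenance mode evacuates all VMs, infrastructure VMs included,
# when DRS is in fully automated mode.
Set-VMHost -VMHost $esx -State Maintenance -Confirm:$false

# ...remediate the host from VUM while it sits in maintenance mode...

Set-VMHost -VMHost $esx -State Connected -Confirm:$false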

VMware Update Manager has apparently become self-aware in that it detects when its infrastructure VMs are running on the same host hardware which is to be remediated. Self-awareness in and of itself isn’t bad; however, its feature integration is. Unfortunately for the humans, this is a step backwards in functionality and a reduction in efficiency for a task which was once automated. Previously, a remediation task had no problem evacuating all VMs from a host, infrastructure or not. What we have now is… well… consider the following pre and post “self-awareness” remediation steps:

Pre “self-awareness” remediation for a 6 host cluster containing infrastructure VMs:

  1. Right click the cluster object and choose Remediate
  2. Hosts are automatically and sequentially placed in maintenance mode, evacuated, patched, rebooted, and brought out of maintenance mode

Post “self-awareness” remediation for a 6 host cluster containing infrastructure VMs:

  1. Right click Host1 object and choose Enter Maintenance Mode
  2. Wait for evacuation to complete
  3. Right click Host1 object and choose Remediate
  4. Wait for remediation to complete
  5. Right click Host1 object and choose Exit Maintenance Mode
  6. Right click Host2 object and choose Enter Maintenance Mode
  7. Wait for evacuation to complete
  8. Right click Host2 object and choose Remediate
  9. Wait for remediation to complete
  10. Right click Host2 object and choose Exit Maintenance Mode
  11. Right click Host3 object and choose Enter Maintenance Mode
  12. Wait for evacuation to complete
  13. Right click Host3 object and choose Remediate
  14. Wait for remediation to complete
  15. Right click Host3 object and choose Exit Maintenance Mode
  16. Right click Host4 object and choose Enter Maintenance Mode
  17. Wait for evacuation to complete
  18. Right click Host4 object and choose Remediate
  19. Wait for remediation to complete
  20. Right click Host4 object and choose Exit Maintenance Mode
  21. Right click Host5 object and choose Enter Maintenance Mode
  22. Wait for evacuation to complete
  23. Right click Host5 object and choose Remediate
  24. Wait for remediation to complete
  25. Right click Host5 object and choose Exit Maintenance Mode
  26. Right click Host6 object and choose Enter Maintenance Mode
  27. Wait for evacuation to complete
  28. Right click Host6 object and choose Remediate
  29. Wait for remediation to complete
  30. Right click Host6 object and choose Exit Maintenance Mode

It’s Saturday and your kids want to go to the park. Do the math.

Update 5/5/10: I received this response back on 3/5/10 from VMware but failed to follow up on whether it was OK to share with the public. I’ve since received the blessing, so here it is:

[It] seems pretty tactical to me. We’re still trying to determine if this was documented publicly, and if not, correct the documentation and our processes.

We introduced this behavior in vSphere 4.0 U1 as a partial fix for a particular class of problem. The original problem is in the behavior of the remediation wizard if the user has chosen to power off or suspend virtual machines in the Failure response option.

If a stand-alone host is running a VM with VC or VUM in it and the user has selected those options, the consequences can be drastic – you usually don’t want to shut down your VC or VUM server when the remediation is in progress. The same applies to a DRS disabled cluster.

In DRS enabled cluster, it is also possible that VMs could not be migrated to other hosts for configuration or other reasons, such as a VM with Fault Tolerance enabled. In all these scenarios, it was possible that we could power off or suspend running VMs based on the user selected option in the remediation wizard.

To avoid this scenario, we decided to skip those hosts totally in first place in U1 time frame. In a future version of VUM, it will try to evacuate the VMs first, and only in cases where it can’t migrate them will the host enter a failed remediation state.

One work around would be to remove such a host from its cluster, patch the cluster, move the host back into the cluster, manually migrate the VMs to an already patched host, and then patch the original host.

It would appear VMware intends to grant us back some flexibility in future versions of vCenter/VUM.  Let’s hope so. This implementation leaves much to be desired.

Update 5/6/10: LucD created a blog post titled Counter the self-aware VUM. In this blog post you’ll find a script which finds the ESX host(s) that is/are running the VUM guest and/or the vCenter guest and will vMotion the guest(s) to another ESX host when needed.
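
For the curious, a simplified sketch in the same spirit follows; LucD’s post has the real, more thorough script. The VM name patterns and destination host below are illustrative, and an existing Connect-VIServer session is assumed:

# Sketch: vMotion the vCenter/VUM guests off the host that is about to be remediated.
$infraVMs = Get-VM -Name 'vcenter*', 'vum*'
$safeHost = Get-VMHost -Name 'esx02.boche.mcse'   # a host that is not being patched

foreach ($vm in $infraVMs) {
  if ($vm.VMHost.Name -ne $safeHost.Name) {
    Move-VM -VM $vm -Destination $safeHost -Confirm:$false
  }
}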

11 New ESX(i) 4.0 Patch Definitions Released; 6 Critical

March 3rd, 2010

Eleven new patch definitions have been released for ESX(i) 4.0 (7 for ESX, 2 for ESXi, 2 for the Cisco Nexus 1000V).  Previous versions of ESX(i) are not impacted.

6 of the 11 patch definitions are rated critical and should be evaluated quickly for application in your virtual infrastructure.

ID: ESX400-201002401-BG Impact: Critical Release date: 2010-03-03 Products: esx 4.0.0 Updates vmkernel64, vmx, hostd, etc.

This patch provides support and fixes the following issues:

  • On some systems under heavy networking and processor load (large number of virtual machines), some NIC drivers might randomly attempt to reset the device and fail.
    The VMkernel logs generate the following messages every second:
    Oct 13 05:19:19 vmkernel: 0:09:22:33.216 cpu2:4390)WARNING: LinNet: netdev_watchdog: NETDEV WATCHDOG: vmnic1: transmit timed out
    Oct 13 05:19:20 vmkernel: 0:09:22:34.218 cpu8:4395)WARNING: LinNet: netdev_watchdog: NETDEV WATCHDOG: vmnic1: transmit timed out
  • ESX hosts do not display the proper status of the NFS datastore after recovering from a connectivity loss.
    Symptom: In vCenter Server, the NFS datastore is displayed as inactive.
  • When using NPIV, if the LUN on the physical HBA path is not the same as the LUN on the virtual port (VPORT) path, even though the LUNID:TARGETID pairs are the same, I/O might be directed to the wrong LUN, causing possible data corruption. Refer to KB 1015290 for more information.
    Symptom: If NPIV is not configured properly, I/O might be directed to the wrong disk.
  • On Fujitsu systems, the OEM-IPMI-Command-Handler that lists the available OEM IPMI commands does not work as intended. No custom OEM IPMI commands are listed, though they were initialized correctly by the OEM. After applying this fix, running the VMware_IPMIOEMExtensionService and VMware_IPMIOEMExtensionServiceImpl objects displays the supported commands as listed in the command files.
  • Provides prebuilt kernel module drivers for Ubuntu 9.10 guest operating systems.
  • Adds support for upstreamed kernel PVSCSI and vmxnet3 modules.
  • Provides a change to the maintenance mode requirement during Cisco Nexus 1000V software upgrade. After installing this patch, if you perform a Cisco Nexus 1000V software upgrade, the ESX host goes into maintenance mode during the VEM upgrade.
  • In certain race conditions, freeing journal blocks from VMFS filesystems might fail. The WARNING: J3: 1625: Error freeing journal block (returned 0) for 497dd872-042e6e6b-942e-00215a4f87bb: Lock was not free error is written to the VMware logs.
  • Changing the resolution of the guest operating system over a PCoIP connection (desktops managed by View 4.0) might cause the virtual machine to stop responding.
    Symptoms: The following symptoms might be visible:

    • When you try to connect to the virtual machine through a vCenter Server console, a black screen appears with the Unable to connect to MKS: vmx connection handshake failed for vmfs {VM Path} message.
    • Performance graphs for CPU and memory usage in vCenter Server drop to 0.
    • Virtual machines cannot be powered off or restarted.

ID: ESX400-201002402-BG Impact: Critical Release date: 2010-03-03 Products: esx 4.0.0 Updates initscripts

This patch fixes an issue where pressing Ctrl+Alt+Delete on the service console causes ESX 4.0 hosts to reboot.

ID: ESX400-201002404-SG Impact: HostSecurity Release date: 2010-03-03 Products: esx 4.0.0 Updates glib2

The service console package for GLib2 is updated to version glib2-2.12.3-4.el5_3.1. This GLib update fixes an issue where functions inside GLib incorrectly allow multiple integer overflows, leading to heap-based buffer overflows in GLib’s Base64 encoding and decoding functions. This might allow an attacker to execute arbitrary code while a user is running the application. The Common Vulnerabilities and Exposures Project (cve.mitre.org) has assigned the name CVE-2008-4316 to this issue.

ID: ESX400-201002405-BG Impact: Critical Release date: 2010-03-03 Products: esx 4.0.0 Updates megaraid-sas

This patch fixes an issue where some applications do not receive events even after registering for Asynchronous Event Notifications (AEN). This issue occurs when multiple applications register for AENs.

ID: ESX400-201002406-SG Impact: HostSecurity Release date: 2010-03-03 Products: esx 4.0.0 Updates newt

The service console package for Newt library is updated to version newt-0.52.2-12.el5_4.1. This security update of Newt library fixes an issue where an attacker might cause a denial of service or possibly execute arbitrary code with the privileges of a user who is running applications using the Newt library. The Common Vulnerabilities and Exposures Project (cve.mitre.org) has assigned the name CVE-2009-2905 to this issue.

ID: ESX400-201002407-SG Impact: HostSecurity Release date: 2010-03-03 Products: esx 4.0.0 Updates nfs-utils

The service console package for nfs-utils is updated to version nfs-utils-1.0.9-42.el5. This security update of nfs-utils fixes an issue that might permit a remote attacker to bypass an intended access restriction. The Common Vulnerabilities and Exposures Project (cve.mitre.org) has assigned the name CVE-2008-4552 to this issue.

ID: ESX400-201002408-BG Impact: Critical Release date: 2010-03-03 Products: esx 4.0.0 Updates Enic driver

In scenarios where Pass Thru Switching (PTS) is in effect, if virtual machines are powered on, the network interface might not come up. In PTS mode, when the network interface is brought up, PTS determines the MTU from the network. There is a race in this scenario in which the enic driver might incorrectly indicate that the interface has failed. This issue might occur frequently on a Cisco UCS system. This patch fixes the issue.

ID: ESXi400-201002401-BG Impact: Critical Release date: 2010-03-03 Products: embeddedEsx 4.0.0 Updates Firmware

This patch provides support and fixes the following issues:

  • On some systems under heavy networking and processor load (large number of virtual machines), some NIC drivers might randomly attempt to reset the device and fail.
    The VMkernel logs generate the following messages every second:
    Oct 13 05:19:19 vmkernel: 0:09:22:33.216 cpu2:4390)WARNING: LinNet: netdev_watchdog: NETDEV WATCHDOG: vmnic1: transmit timed out
    Oct 13 05:19:20 vmkernel: 0:09:22:34.218 cpu8:4395)WARNING: LinNet: netdev_watchdog: NETDEV WATCHDOG: vmnic1: transmit timed out
  • ESX hosts do not display the proper status of the NFS datastore after recovering from a connectivity loss.
    Symptom: In vCenter Server, the NFS datastore is displayed as inactive.
  • When using NPIV, if the LUN on the physical HBA path is not the same as the LUN on the virtual port (VPORT) path, even though the LUNID:TARGETID pairs are the same, I/O might be directed to the wrong LUN, causing possible data corruption. Refer to KB 1015290 for more information.
    Symptom: If NPIV is not configured properly, I/O might be directed to the wrong disk.
  • On Fujitsu systems, the OEM-IPMI-Command-Handler that lists the available OEM IPMI commands does not work as intended. No custom OEM IPMI commands are listed, though they were initialized correctly by the OEM. After applying this fix, running the VMware_IPMIOEMExtensionService and VMware_IPMIOEMExtensionServiceImpl objects displays the supported commands as listed in the command files.
  • Provides prebuilt kernel module drivers for Ubuntu 9.10 guest operating systems.
  • Adds support for upstreamed kernel PVSCSI and vmxnet3 modules.
  • Provides a change to the maintenance mode requirement during Cisco Nexus 1000V software upgrade. After installing this patch, if you perform a Cisco Nexus 1000V software upgrade, the ESX host goes into maintenance mode during the VEM upgrade.
  • In certain race conditions, freeing journal blocks from VMFS filesystems might fail. The WARNING: J3: 1625: Error freeing journal block (returned 0) for 497dd872-042e6e6b-942e-00215a4f87bb: Lock was not free error is written to the VMware logs.
  • Changing the resolution of the guest operating system over a PCoIP connection (desktops managed by View 4.0) might cause the virtual machine to stop responding.
    Symptoms: The following symptoms might be visible:

    • When you try to connect to the virtual machine through a vCenter Server console, a black screen appears with the Unable to connect to MKS: vmx connection handshake failed for vmfs {VM Path} message.
    • Performance graphs for CPU and memory usage in vCenter Server drop to 0.
    • Virtual machines cannot be powered off or restarted.

ID: ESXi400-201002402-BG Impact: Critical Release date: 2010-03-03 Products: embeddedEsx 4.0.0 Updates VMware Tools

This patch fixes an issue where pressing Ctrl+Alt+Delete on the service console causes ESX 4.0 hosts to reboot.

ID: VEM400-201002001-BG Impact: HostGeneral Release date: 2010-03-03 Products: embeddedEsx 4.0.0, esx 4.0.0 Cisco Nexus 1000V VEM

ID: VEM400-201002011-BG Impact: HostGeneral Release date: 2010-03-03 Products: embeddedEsx 4.0.0, esx 4.0.0 Cisco Nexus 1000V VEM

VMware Releases ESX(i) 3.5 Update 5; Critical Updates

December 5th, 2009

VMware apparently released ESX(i) 3.5 Update 5 dated 12/3/09; however, it became available in Update Manager late this afternoon. VMware is extremely poor at communicating anything but major releases, so to get the fastest notification possible about security patches and updates, I configure my VMware Update Manager servers to check for updates every 6 hours and provide me with email notification of anything they find. VMware doesn’t listen to me much when it comes to feature requests, so I’ll shelve the ranting.

So what’s new in ESX 3.5 Update 5?  The major highlights are guest VM support for Windows 7 and Windows Server 2008 R2 (reminder, 64-bit only), as well as Ubuntu 9.04, and added hardware support for processors and NICs.  Before you get too excited about Windows 7, remember that it is not a supported guest operating system in VMware View.  Even in the new View 4 release, Windows 7 has “Technology Preview” support status only.

If you track the updates from VMware Update Manager, the 12/3 releases amount to 20 updates including Update 5, 16 updates of which are rated critical.  If you’re still a ways out on vSphere deployment, you’ll probably want to take a look at the critical updates for your 3.x environment.

Enablement of Intel Xeon Processor 3400 Series – Support for the Intel Xeon processor 3400 series has been added. Support includes Enhanced VMotion capabilities. For additional information on previous processor families supported by Enhanced VMotion, see Enhanced VMotion Compatibility (EVC) processor support (KB 1003212).

Driver Update for Broadcom bnx2 Network Controller – The driver for bnx2 controllers has been upgraded to version 1.6.9. This driver supports bootcode upgrade on bnx2 chipsets and requires bmapilnx and lnxfwnx2 tools upgrade from Broadcom. This driver also adds support for Network Controller – Sideband Interface (NC-SI) for SOL (serial over LAN) applicable to Broadcom NetXtreme 5709 and 5716 chipsets.

Driver Update for LSI SCSI and SAS Controllers – The driver for LSI SCSI and SAS controllers is updated to version 2.06.74. This version of the driver is required to provide a better support for shared SAS environments.

Newly Supported Guest Operating Systems – Support for the following guest operating systems has been added specifically for this release:

  • Windows 7 Enterprise (32-bit and 64-bit)
  • Windows 7 Ultimate (32-bit and 64-bit)
  • Windows 7 Professional (32-bit and 64-bit)
  • Windows 7 Home Premium (32-bit and 64-bit)
  • Windows 2008 R2 Standard Edition (64-bit)
  • Windows 2008 R2 Enterprise Edition (64-bit)
  • Windows 2008 R2 Datacenter Edition (64-bit)
  • Windows 2008 R2 Web Server (64-bit)
  • Ubuntu Desktop 9.04 (32-bit and 64-bit)
  • Ubuntu Server 9.04 (32-bit and 64-bit)

For more complete information about supported guests included in this release, see the VMware Compatibility Guide: http://www.vmware.com/resources/compatibility/search.php?deviceCategory=software.

Newly Supported Management Agents – See VMware ESX Server Supported Hardware Lifecycle Management Agents for current information on supported management agents.

Newly Supported Network Cards – This release of ESX Server supports HP NC375T (NetXen) PCI Express Quad Port Gigabit Server Adapter.

Newly Supported SATA Controllers – This release of ESX Server supports the Intel Ibex Peak SATA AHCI controller.

Note:

  • Some limitations apply in terms of support for SATA controllers. For more information, see SATA Controller Support in ESX 3.5 (KB 1008673).
  • Storing VMFS datastores on native SATA drives is not supported.

Create a 32-bit vCenter DSN on a 64-bit Operating System

November 21st, 2009

As I had pointed out in this blog post, VMware hints that 64-bit may be the future for vCenter Server. I decided that for my upgrade to vCenter 4.0 Update 1 this weekend, I would take the opportunity to rebuild my vCenter server from Windows Server 2003 32-bit to Windows Server 2008 64-bit.

Once the 64-bit base operating system build was complete, I installed the 64-bit Microsoft SQL Server Native Client drivers (downloadable here) since my back end database is Microsoft SQL Server 2005 on a remote server. A key thing to remember about this installation is that it installs both 64-bit and 32-bit DSN drivers.

The next step is to create the vCenter ODBC DSNs. Although vCenter Server runs on 64-bit operating systems, it currently requires a 32-bit ODBC DSN. This is important to remember because the Windows Start Menu launches the 64-bit ODBC DSN tool, not the 32-bit version I needed.  The vCenter Server (and Update Manager) installation will not complete without a 32-bit DSN.

To create a 32-bit DSN on a 64-bit operating system, run the following executable:

[WindowsDir]\SysWOW64\odbcad32.exe
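
From PowerShell, that amounts to the following (assuming the default Windows directory); the second command is an optional check that any System DSNs you create land in the 32-bit registry hive:

# Launch the 32-bit ODBC Data Source Administrator on a 64-bit OS.
& "$env:windir\SysWOW64\odbcad32.exe"

# 32-bit System DSNs land under the Wow6432Node registry key.
Get-ChildItem 'HKLM:\SOFTWARE\Wow6432Node\ODBC\ODBC.INI'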

Once the utility opens, you’ll be greeted by all the legacy 32-bit ODBC DSNs you’ve likely seen for years working with tiered Windows platforms. If using Microsoft SQL Server 2005 like me, be sure to select the SQL Native Client driver towards the bottom of the list, and not the Driver da Microsoft para arquivos texto driver highlighted below.

Proceed with the creation of the vCenter Server and Update Manager ODBC DSNs and complete the vCenter Server and Update Manager installations.

This information and much more can be found in the ESX and vCenter Server Installation Guide, page 73.