Archive for September, 2011

VMware issues recall on new vSphere 5.0 UNMAP feature

September 30th, 2011

One of the new features in vSphere 5.0 is Thin Provisioning Block Space Reclamation (UNMAP).  This was released as one part of a new VAAI primitive (the other component of the new primitive being thin provision stun).

Today, VMware released KB 2007427 Disabling VAAI Thin Provisioning Block Space Reclamation (UNMAP) in ESXi 5.0.

Due to varied response times from the storage devices, the UNMAP command can result in poor system performance and should be disabled on ESXi 5.0 hosts.  This variation in response times during critical regions could potentially interfere with operations such as Storage vMotion and Virtual Machine snapshot consolidation.

VMware intends to disable UNMAP in an upcoming patch release until full support for Space Reclamation is available.

As described in the article, the workaround to avoid the use of UNMAP commands on Thin Provisioned LUNs is as follows:

  1. Log into your host using Tech Support mode. For more information on using Tech Support mode see Tech Support Mode in ESXi 4.1 and 5.0 (1017910).
  2. From your ESXi 5.0 host, issue this esxcli command:  esxcli system settings advanced set --int-value 0 --option /VMFS3/EnableBlockDelete

Note: In the command above, double hyphens precede “int-value” and “option”; depending on the font, they may render as a single long hyphen. This is a per-host setting and must be issued on each ESXi 5.0 host in your cluster.
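
You can confirm the current state of the setting on each host with the corresponding list command (a quick check of my own, not part of the KB workaround); an Int Value of 0 indicates UNMAP is disabled:

~ # esxcli system settings advanced list --option /VMFS3/EnableBlockDelete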

Update 12/16/11: VMware released five (5) non-critical patches last night.  One of those patches is ESXi500-201112401-SG which is the anticipated update that disables the UNMAP functionality in the new vSphere 5 Thin Provisioning VAAI primitive.  Full patch details below:

Summaries and Symptoms

This patch updates the esx-base VIB to resolve the following issues:

  • Updates the glibc third party library to resolve multiple security issues.
    The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the names CVE-2010-0296, CVE-2011-0536, CVE-2011-1071, CVE-2011-1095, CVE-2011-1658 and CVE-2011-1659 to these issues.
  • When a hot spare disk that is added to a RAID group is accessed before the disk instance finishes initialization or if the disk is removed while an instance of it is being accessed, a race condition might occur causing the vSphere Client to not display information about the RAID controllers and the vSphere Client user interface might also not respond for a very long time.
  • vMotion fails with the A general system error occurred: Failed to flush checkpoint data! error message when:
    • The resolution of the virtual machine is higher than 1280×1024, or smaller if you are using a second screen
    • The guest operating system is using the WDDM driver (Windows 7, Windows 2008 R2, Windows 2008, Windows Vista)
    • The virtual machine is using Virtual Machine Hardware version 8.
  • Creating host profiles of ESXi 5.0 hosts might fail when the host profile creation process is unable to resolve the hostname and IP address of the host by relying on DNS for hostname and IP address lookup. An error message similar to the following is displayed:
    Call "HostProfileManager.CreateProfile" for object "HostProfileManager" on vCenter Server failed.
    Error extracting indication configuration: [Errno -2] Name or service not known.
  • In vSphere 5.0, Thin Provisioning is enabled by default on devices that adhere to T10 standards. On such thin provisioned LUNs, vSphere issues SCSI UNMAP commands to help the storage arrays reclaim unused space. Sending UNMAP commands might cause performance issues with operations such as snapshot consolidation or storage vMotion.
    This patch resolves the issue by disabling the space reclamation feature, by default.
  • If a user subscribes for an ESXi Server’s CIM indications from more than one client (for example, c1 and c2) and deletes the subscription from the first client (c1), the other clients (c2) might fail to receive any indication notification from the host.

This patch also provides you with the option of configuring the iSCSI initiator login timeout value for software iSCSI and dependent iSCSI adapters.
For example, to set the login timeout value to 10 seconds you can use commands similar to the following:

  • ~ # vmkiscsi-tool -W -a "login_timeout=10" vmhba37
  • ~ # esxcli iscsi adapter param set -A vmhba37 -k LoginTimeout -v 10

The default login timeout value is 5 seconds and the maximum value that you can set is 60 seconds.
We recommend that you change the login timeout value only if suggested by the storage vendor.
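
If you want to see what a software or dependent iSCSI adapter is currently configured with before changing anything, the matching get command lists the adapter parameters, including LoginTimeout (my own quick check; substitute your vmhba number):

~ # esxcli iscsi adapter param get -A vmhba37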

Changing the default vSphere 5.0 PSP to Round Robin

September 28th, 2011

If you have a vSphere 5.0 environment backed by a storage array (SAN) which supports multipathing over two or more active front end ports (or if you have an array with ALUA support), you may be interested in using VMware’s Round Robin PSP (Path Selection Policy) to distribute storage I/O evenly across multiple fabrics and/or fabric paths.  One of the benefits of the Round Robin PSP is that it performs the I/O balancing automatically, as opposed to the manual tuning of fabric and path utilization associated with the Fixed PSP – typically the default for active/active arrays.  If you’re familiar with Round Robin, you’re probably already aware that you can manually change the PSP using the vSphere Client.  However, this can become a tedious affair yielding inconsistent configurations since each LUN on each host in the cluster needs to be configured.

A better solution would be to modify the default PSP for your SATP (Storage Array Type Plugin) so that each new LUN presented to the hosts is automatically configured for Round Robin.

Taking a look at the default PSP for each SATP, I see there is a mix of two different PSPs: VMW_PSP_FIXED (generally for active/active arrays) and VMW_PSP_MRU (generally for active/passive arrays).  Notice the Round Robin policy VMW_PSP_RR is not the default for any SATP:

[root@lando /]# esxcli storage nmp satp list
Name                 Default PSP    Description
-------------------  -------------  -------------------------------------------------------
VMW_SATP_ALUA_CX     VMW_PSP_FIXED  Supports EMC CX that use the ALUA protocol
VMW_SATP_ALUA        VMW_PSP_MRU    Supports non-specific arrays that use the ALUA protocol
VMW_SATP_MSA         VMW_PSP_MRU    Placeholder (plugin not loaded)
VMW_SATP_DEFAULT_AP  VMW_PSP_MRU    Placeholder (plugin not loaded)
VMW_SATP_SVC         VMW_PSP_FIXED  Placeholder (plugin not loaded)
VMW_SATP_EQL         VMW_PSP_FIXED  Placeholder (plugin not loaded)
VMW_SATP_INV         VMW_PSP_FIXED  Placeholder (plugin not loaded)
VMW_SATP_EVA         VMW_PSP_FIXED  Placeholder (plugin not loaded)
VMW_SATP_SYMM        VMW_PSP_FIXED  Placeholder (plugin not loaded)
VMW_SATP_CX          VMW_PSP_MRU    Placeholder (plugin not loaded)
VMW_SATP_LSI         VMW_PSP_MRU    Placeholder (plugin not loaded)
VMW_SATP_DEFAULT_AA  VMW_PSP_FIXED  Supports non-specific active/active arrays
VMW_SATP_LOCAL       VMW_PSP_FIXED  Supports direct attached devices

Modifying the PSP is achieved with a single command on each ESXi host (no reboot required):

[root@lando /]# esxcli storage nmp satp set -s VMW_SATP_ALUA_CX -P VMW_PSP_RR
Default PSP for VMW_SATP_ALUA_CX is now VMW_PSP_RR

Similarly and specifically for Dell Compellent Storage Center arrays, modifying the PSP is achieved with a single command on each ESXi host (no reboot required):

Storage Center 6.5 and older esxcli method:

[root@lando /]# esxcli storage nmp satp set -s VMW_SATP_DEFAULT_AA -P VMW_PSP_RR
Default PSP for VMW_SATP_DEFAULT_AA is now VMW_PSP_RR

Storage Center 6.6 and newer esxcli method:

[root@lando /]# esxcli storage nmp satp set -s VMW_SATP_ALUA -P VMW_PSP_RR
Default PSP for VMW_SATP_ALUA is now VMW_PSP_RR

Storage Center 6.5 and older PowerShell method:

$Datacenter = Get-Datacenter -Name "Datacenter"

ForEach ( $VMHost in ( Get-VMHost -Location $Datacenter | Sort-Object Name ) )
{
    Write-Host "Working on host `"$($VMHost.Name)`"" -ForegroundColor Green
    $EsxCli = Get-EsxCli -VMHost $VMHost
    $EsxCli.storage.nmp.satp.list() | Where-Object { $_.Name -eq "VMW_SATP_DEFAULT_AA" }
    $EsxCli.storage.nmp.satp.set( $null, "VMW_PSP_RR", "VMW_SATP_DEFAULT_AA" )
    $EsxCli.storage.nmp.satp.list() | Where-Object { $_.Name -eq "VMW_SATP_DEFAULT_AA" }
}

Storage Center 6.6 and newer PowerShell method:

$Datacenter = Get-Datacenter -Name "Datacenter"

ForEach ( $VMHost in ( Get-VMHost -Location $Datacenter | Sort-Object Name ) )
{
    Write-Host "Working on host `"$($VMHost.Name)`"" -ForegroundColor Green
    $EsxCli = Get-EsxCli -VMHost $VMHost
    $EsxCli.storage.nmp.satp.list() | Where-Object { $_.Name -eq "VMW_SATP_ALUA" }
    $EsxCli.storage.nmp.satp.set( $null, "VMW_PSP_RR", "VMW_SATP_ALUA" )
    $EsxCli.storage.nmp.satp.list() | Where-Object { $_.Name -eq "VMW_SATP_ALUA" }
}

If I take a look at the default PSP for each SATP, I can see the top one has changed from VMW_PSP_FIXED to VMW_PSP_RR:

[root@lando /]# esxcli storage nmp satp list
Name                 Default PSP    Description
-------------------  -------------  -------------------------------------------------------
VMW_SATP_ALUA_CX     VMW_PSP_RR     Supports EMC CX that use the ALUA protocol
VMW_SATP_ALUA        VMW_PSP_RR     Supports non-specific arrays that use the ALUA protocol
VMW_SATP_CX          VMW_PSP_MRU    Supports EMC CX that do not use the ALUA protocol
VMW_SATP_MSA         VMW_PSP_MRU    Placeholder (plugin not loaded)
VMW_SATP_DEFAULT_AP  VMW_PSP_MRU    Placeholder (plugin not loaded)
VMW_SATP_SVC         VMW_PSP_FIXED  Placeholder (plugin not loaded)
VMW_SATP_EQL         VMW_PSP_FIXED  Placeholder (plugin not loaded)
VMW_SATP_INV         VMW_PSP_FIXED  Placeholder (plugin not loaded)
VMW_SATP_EVA         VMW_PSP_FIXED  Placeholder (plugin not loaded)
VMW_SATP_SYMM        VMW_PSP_FIXED  Placeholder (plugin not loaded)
VMW_SATP_LSI         VMW_PSP_MRU    Placeholder (plugin not loaded)
VMW_SATP_DEFAULT_AA  VMW_PSP_RR     Supports non-specific active/active arrays
VMW_SATP_LOCAL       VMW_PSP_FIXED  Supports direct attached devices

Now when I present a new LUN to the host which uses the VMW_SATP_ALUA_CX SATP, instead of using the old PSP default of VMW_PSP_FIXED, it applies the new default PSP which is VMW_PSP_RR (Round Robin).

To clarify just a little further, what I’ve done is change the default PSP for just one SATP.  If I had other active/active or ALUA arrays which used a different SATP, I’d need to modify the default PSP for those corresponding SATPs as well.

This is good VCAP-DCA fodder.  For more on this, take a look at the vSphere Storage Guide.

If you’ve already presented and formatted your LUNs to your vSphere cluster, it’s too late to use the above method to automagically configure each of the block devices with the Round Robin PSP.  If you find yourself in that situation with a lot of datastores you’d like to reconfigure for Round Robin, PowerShell can be leveraged, with the examples below changing the PSP to Round Robin explicitly for Dell Compellent Storage Center volumes (these scripts come by way of the Dell Compellent Best Practices Guide for VMware vSphere):

Storage Center 6.5 and older PowerShell method:

Get-Cluster InsertClusterNameHere | Get-VMHost | Get-ScsiLun | where {$_.Vendor -eq "COMPELNT" -and $_.MultipathPolicy -eq "Fixed"} | Set-ScsiLun -MultipathPolicy RoundRobin

Storage Center 6.6 and newer PowerShell method:

Get-Cluster InsertClusterNameHere | Get-VMHost | Get-ScsiLun | where {$_.Vendor -eq "COMPELNT" -and $_.MultipathPolicy -eq "MostRecentlyUsed"} | Set-ScsiLun -MultipathPolicy RoundRobin
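
To verify the results across the cluster afterwards, a quick tally of Compellent LUNs by policy can be produced with the same cmdlets (my own sketch, not from the best practices guide):

Get-Cluster InsertClusterNameHere | Get-VMHost | Get-ScsiLun | where {$_.Vendor -eq "COMPELNT"} | Group-Object MultipathPolicy | Select-Object Name, Count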

The PSP for devices which are already presented and in use by vSphere can also be modified individually per host, per device using esxcli. First, retrieve a list of all devices and their associated SATP and PSP configuration via esxcli on the host:

[root@lando:~] esxcli storage nmp device list
naa.6000d31000ed1f010000000000000015
Device Display Name: COMPELNT Fibre Channel Disk (naa.6000d31000ed1f010000000000000015)
Storage Array Type: VMW_SATP_ALUA
Storage Array Type Device Config: {implicit_support=on;explicit_support=off; explicit_allow=on;alua_followover=on; action_OnRetryErrors=off; {TPG_id=61485,TPG_state=AO}{TPG_id=61486,TPG_state=AO}{TPG_id=61483,TPG_state=AO}{TPG_id=61484,TPG_state=AO}}
Path Selection Policy: VMW_PSP_MRU
Path Selection Policy Device Config: Current Path=vmhba1:C0:T10:L256
Path Selection Policy Device Custom Config:
Working Paths: vmhba1:C0:T10:L256
Is USB: false

Now change the PSP for the individual device:

[root@lando:~] esxcli storage nmp device set -d naa.6000d31000ed1f010000000000000015 -P VMW_PSP_RR

Perform this action for each device on each host in the cluster as needed.
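
If each host has more than a handful of devices, a small loop in the ESXi shell saves some typing.  The sketch below assumes all of the array's devices share the naa.6000d31 prefix shown above; adjust the pattern to match your own array's NAA prefix before running it:

# Set every matching device on this host to Round Robin
for dev in $(esxcli storage nmp device list | grep '^naa.6000d31'); do
  echo "Setting $dev to VMW_PSP_RR"
  esxcli storage nmp device set -d $dev -P VMW_PSP_RR
done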

Round Robin specific tuning can be made per device, per host using the esxcli storage nmp psp roundrobin deviceconfig set command. The type may be default, iops, or bytes. The Round Robin default is 1000 IOPS; the default for bytes is 10485760 (10MB). Following is an example changing the Round Robin policy for the device to iops with 3 I/Os per path:

[root@lando:~] esxcli storage nmp psp roundrobin deviceconfig set -d naa.6000d31000ed1f010000000000000015 --type=iops --iops=3
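
The device configuration can be read back with the corresponding get command to confirm the change took effect:

[root@lando:~] esxcli storage nmp psp roundrobin deviceconfig get -d naa.6000d31000ed1f010000000000000015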

Enabling vCenter Server 5.0 Database Monitoring

September 27th, 2011

I stumbled across this while rummaging through the vSphere 5.0 Installation and Setup document.  Page 183 contains a small section (new in vSphere 5.0) which describes a process to enable database monitoring for Microsoft SQL Server (surrounding pages discuss enabling the same for other supported database platforms).  The SQL script provided in the documentation contains an error on the first line but I was able to adjust that and run it on the SQL 2008 R2 server in the lab.  Following is the script I ran:

use master
go
grant VIEW SERVER STATE to vcenter
go
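
To confirm the grant took effect, you can impersonate the vCenter login and check its server-level permissions (a quick test of my own, assuming the login is named vcenter as in the script above):

execute as login = 'vcenter'
select * from fn_my_permissions(NULL, 'SERVER') where permission_name = 'VIEW SERVER STATE'
revert
go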

Once access has been granted, vCenter will collect certain SQL Server health statistics and store them in the rotating vCenter profile log located by default at C:\ProgramData\VMware\VMware VirtualCenter\Logs\vpxd-profiler-xx.log.  These metrics were taken from my vCenter Server log file and serve as an example of what is being collected from the SQL Server by the vCenter Server:

-->
--> DbMonitoring/Counter/Storage: Manually extensible data files/Unit/count/Range Type/range/RangeMin/0/RangeMax/0/Timestamp/2011-09-27T18:00:01.79Z/Value/0
--> DbMonitoring/Counter/Memory:Database pages/Unit/timesIncrease/Range Type/range/RangeMin/0/RangeMax/3/Timestamp/1970-01-01T00:00:00Z/Value/N/A
--> DbMonitoring/Counter/Storage: Peak data file storage utilization/Unit/percent/Range Type/range/RangeMin/60559224/RangeMax/90/Timestamp/2011-09-27T18:00:01.802999Z/Value/0
--> DbMonitoring/Counter/Memory:Available/Unit/kiloBytes/Range Type/range/RangeMin/5120/RangeMax/60559416/Timestamp/1970-01-01T00:00:00Z/Value/N/A
--> DbMonitoring/Counter/Memory:Page Life Expectancy/Unit/seconds/Range Type/range/RangeMin/300/RangeMax/60559416/Timestamp/1970-01-01T00:00:00Z/Value/N/A
--> DbMonitoring/Counter/IO:Log growths/Unit/timesIncrease/Range Type/range/RangeMin/0/RangeMax/3/Timestamp/1970-01-01T00:00:00Z/Value/N/A
--> DbMonitoring/Counter/CPU:Usage/Unit/percent/Range Type/range/RangeMin/0/RangeMax/80/Timestamp/2011-09-27T18:00:01.75Z/Value/44
--> DbMonitoring/Counter/Memory:Buffer cache hit ratio/Unit/percent/Range Type/range/RangeMin/90/RangeMax/100/Timestamp/1970-01-01T00:00:00Z/Value/N/A
--> DbMonitoring/Counter/General:User Connections/Unit/count/Range Type/range/RangeMin/255/RangeMax/60559416/Timestamp/1970-01-01T00:00:00Z/Value/N/A
-->
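
To pull just these counters out of the rotating profiler logs without scrolling through them, a PowerShell one-liner on the vCenter Server works (a sketch assuming the default log location noted above):

Select-String -Path "C:\ProgramData\VMware\VMware VirtualCenter\Logs\vpxd-profiler-*.log" -Pattern "DbMonitoring"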

Per VMware’s documentation:

vCenter Server Database Monitoring captures metrics that enable the administrator to assess the status and health of the database server. Enabling Database Monitoring helps the administrator prevent vCenter downtime because of a lack of resources for the database server. Database Monitoring for vCenter Server enables administrators to monitor the database server CPU, memory, I/O, data storage, and other environment factors for stress conditions. Statistics are stored in the vCenter Server Profile Logs. You can enable Database Monitoring for a user before or after you install vCenter Server. You can also perform this procedure while vCenter Server is running.

One thing that I noticed is that these metrics were being collected in the vCenter log files prior to running the enabling script.  I’m not sure if this is because vCenter already had the required permissions to the master database (I use SQL authentication and I didn’t explicitly grant this), or perhaps this is enabled by default in the vCenter installation routine when the database prepare script runs.

The instructions provide plenty of context but are fairly brief and don’t identify next steps or how to harvest the collected metrics.  Perhaps the vCenter Service Health agent monitors the profile log and will alarm through vCenter.  If not, then I view this as a monitoring framework VMware provides which can be tailored for specific environments.  Thresholds could be defined which trigger alerts proactively before danger or an outage occurs.  Admittedly I’m not a DBA.  With what’s provided, I’m not sure this offers much value above and beyond the native monitoring and alerting provided by SQL Server and Perfmon.

VMware View 5.0 VDI vHardware 8 vMotion Error

September 20th, 2011

General awareness/heads up blog post here on something I stumbled on with VMware View 5.0.  A few weeks ago while working with View 5.0 BETA in the lab, I ran into an issue where a Windows 7 virtual machine would not vMotion from one ESXi 5.0 host to another.  The resulting error in the vSphere Client was:

A general system error occurred: Failed to flush checkpoint data

I did a little searching and found similar symptoms in VMware KB 1011971 which speaks to an issue that can arise when Video RAM (VRAM) is greater than 30MB for a virtual machine. In my case it was greater than 30MB, but I could not adjust it because it was being managed by the View Connection Server.  At the same time, a VMware source on Twitter volunteered his assistance and quickly came up with some inside information on the issue.  He had me try adding the following line to /etc/vmware/config on the ESXi 5.0 hosts (no reboot required):

migrate.baseCptCacheSize = “16777216”

The fix worked and I was able to vMotion the Windows 7 VM back and forth between hosts.  The information was taken back to Engineering for a KB to be released.  That KB is now available: VMware KB 2005741 vMotion of a virtual machine fails with the error: A general system error occurred: Failed to flush checkpoint data! The new KB article lists the following background information and several workarounds:

Cause

Due to new features with Hardware Version 8 for the WDDM driver, the vMotion display graphics memory requirement has increased. The default pre-allocated buffer may be too small for certain virtual machines with higher resolutions. The buffer size is not automatically increased to account for the requirements of those new features if mks.enable3d is set to FALSE (the default).
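
As a rough sanity check on those numbers (my back-of-the-envelope math, not VMware’s): a single 1280×1024 display at 4 bytes per pixel needs about 5MB of framebuffer, which fits in the default 8MB cache, while two 1600×1200 displays need roughly 15MB, which does not.  That lines up with the 16MB and 20MB values suggested in the workarounds below.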

Resolution

To work around this issue, perform one of these options:

  • Change the resolution to a single screen of 1280×1024 or smaller before the vMotion.
  • Do not upgrade to Virtual Machine Hardware version 8.
  • Increase the base checkpoint cache size. Doubling it from its default 8MB to 16MB (16777216 bytes) should be enough for every single-display resolution. If you are using two displays at 1600×1200 each, increase the setting to 20MB (20971520 bytes). To increase the base checkpoint cache size:

    1. Power off the virtual machine.
    2. Click the virtual machine in the Inventory.
    3. On the Summary tab for that virtual machine, click Edit Settings.
    4. In the virtual machine Properties dialog box, click the Options tab.
    5. Under Advanced, select General and click Configuration Parameters.
    6. Click Add Row.
    7. In the new row, add migrate.baseCptCacheSize to the name column and add 16777216 to the value column.
    8. Click OK to save the change.

    Note: If you don’t want to power off your virtual machine to change this setting, you can also add the parameter to the /etc/vmware/config file on the target host. This adds the option to every VMX process that is spawning on this host, which happens when vMotion is starting a virtual machine on the server (see the sketch after this list).

  • Set mks.enable3d = TRUE for the virtual machine:
    1. Power off the virtual machine.
    2. Click the virtual machine in the Inventory.
    3. On the Summary tab for that virtual machine, click Edit Settings.
    4. In the virtual machine Properties dialog box, click the Options tab.
    5. Under Advanced, select General and click Configuration Parameters.
    6. Click Add Row.
    7. In the new row, add mks.enable3d to the name column and add True to the value column.
    8. Click OK to save the change.
Caution: This workaround increases the overhead memory reservation by 256MB. As such, it may have a negative impact on HA clusters with strict Admission Control. However, this memory is only used if a 3D application is active. If, for example, Aero Basic rather than Aero Glass is used as the window theme, most of the reservation is not used and the memory remains available to the ESX host. The reservation still affects HA Admission Control if large multi-monitor setups are used for the virtual machine and if the CPU is older than a Nehalem processor and does not have the SSE 4.1 instruction set. In this case, using 3D is not recommended. The maximum recommended resolution for using 3D, regardless of CPU type and SSE 4.1 support, is 1920×1200 with dual screens.
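
For the host-level variant described in the note above, appending the option to /etc/vmware/config from the ESXi shell is a one-liner (my sketch; as noted, it affects every VMX process subsequently spawned on that host):

# Append the checkpoint cache setting to the host-wide VMX config
echo 'migrate.baseCptCacheSize = "16777216"' >> /etc/vmware/config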

The permanent fix for this issue did not make it into the recent View 5.0 GA release but I expect it will be included in a future release or patch.

Update 12/23/11: VMware released five (5) non-critical patches last week.  One of those patches is ESXi500-201112401-SG which permanently resolves the issues described above.  Full patch details below:

Summaries and Symptoms

This patch updates the esx-base VIB to resolve the following issues:

  • Updates the glibc third party library to resolve multiple security issues.
    The Common Vulnerabilities and Exposures project (cve.mitre.org) has assigned the names CVE-2010-0296, CVE-2011-0536, CVE-2011-1071, CVE-2011-1095, CVE-2011-1658 and CVE-2011-1659 to these issues.
  • When a hot spare disk that is added to a RAID group is accessed before the disk instance finishes initialization or if the disk is removed while an instance of it is being accessed, a race condition might occur causing the vSphere Client to not display information about the RAID controllers and the vSphere Client user interface might also not respond for a very long time.
  • vMotion fails with the A general system error occurred: Failed to flush checkpoint data! error message when:
    • The resolution of the virtual machine is higher than 1280×1024, or smaller if you are using a second screen
    • The guest operating system is using the WDDM driver (Windows 7, Windows 2008 R2, Windows 2008, Windows Vista)
    • The virtual machine is using Virtual Machine Hardware version 8.
  • Creating host profiles of ESXi 5.0 hosts might fail when the host profile creation process is unable to resolve the hostname and IP address of the host by relying on DNS for hostname and IP address lookup. An error message similar to the following is displayed:
    Call "HostProfileManager.CreateProfile" for object "HostProfileManager" on vCenter Server failed.
    Error extracting indication configuration: [Errno -2] Name or service not known.
  • In vSphere 5.0, Thin Provisioning is enabled by default on devices that adhere to T10 standards. On such thin provisioned LUNs, vSphere issues SCSI UNMAP commands to help the storage arrays reclaim unused space. Sending UNMAP commands might cause performance issues with operations such as snapshot consolidation or storage vMotion.
    This patch resolves the issue by disabling the space reclamation feature, by default.
  • If a user subscribes for an ESXi Server’s CIM indications from more than one client (for example, c1 and c2) and deletes the subscription from the first client (c1), the other clients (c2) might fail to receive any indication notification from the host.

This patch also provides you with the option of configuring the iSCSI initiator login timeout value for software iSCSI and dependent iSCSI adapters.
For example, to set the login timeout value to 10 seconds you can use commands similar to the following:

  • ~ # vmkiscsi-tool -W -a "login_timeout=10" vmhba37
  • ~ # esxcli iscsi adapter param set -A vmhba37 -k LoginTimeout -v 10

The default login timeout value is 5 seconds and the maximum value that you can set is 60 seconds.
We recommend that you change the login timeout value only if suggested by the storage vendor.

Professional VMware BrownBag Group Learning

September 19th, 2011

If you weren’t already aware, VMware vExpert Cody Bunch has been hosting a series of BrownBag learning sessions covering topics from the VCP4, VCAP4-DCA, and VCAP4-DCD exam blueprints, in addition to VCDX topics.  A number of individuals from the VMware community have been lending Cody assistance in leading these sessions.  I’ll be stepping up to the plate this Wednesday evening, 9/21 at 7pm CDT to help out.  I’ll be covering VCAP4-DCD exam blueprint objectives:

  • 1.1 Gather and analyze business requirements
  • 1.2 Gather and analyze application requirements
  • 1.3 Determine Risks, Constraints, and Assumptions

If you’re thinking of attempting the VCAP4-DCD exam or if you’re preparing for the VCDX certification, this session is for you.  Again, details below, sign up today – it’s free!

Updated 9/21/11: The live session is complete but you can download the recorded version at the Professional VMware link above.  I’m also embedding a link to the SlideRocket presentation for as long as my trial account is active (through the beginning of October).

Rogue SRM 5.0 Shadow VM Icons

September 13th, 2011

One of the new features in VMware SRM 5.0 is Shadow VM Icons.  When VMs are protected at the primary site, these placeholder objects will automatically be created in VM inventory at the secondary site.  It may seem like a trivial topic for discussion but it is important to recognize that these placeholder objects represent datacenter capacity which will be needed and consumed on demand if and when the VMs are powered on during a planned migration or disaster recovery operation within SRM.  In previous versions of SRM, the placeholder VMs simply looked like powered off virtual machines.  In SRM 5.0, these placeholder VMs get a facelift to provide better clarity of their disposition.  You can see what these Shadow VM Icons look like in the image to the right.

Each SRM Server maintains its own unique SQL database instance in order to track the current state of the environment.  It does a pretty good job of this.  However, at some point you may run into an instance where VMs that were once protected by SRM are no longer protected (by choice or design), yet they retain the new Shadow VM icon, which can yield a false sense of protection.  If the VMs truly are not protected, they should have no relationship with SRM and thus should not be wearing the Shadow VM icon.  I ran into this during an SRM upgrade.  I corrected the rogue icon by removing the VM from inventory and re-adding it to inventory.  This action is safe to quickly perform on running VMs.

VMworld 2011 Recap at Nexus Information Systems 9/14

September 12th, 2011

Couldn’t make the big show? No problem!

Join me at Nexus Information Systems Sept. 14th as we recap VMworld 2011! VMworld 2011 took place August 28th – Sept 1st with over 170 unique Breakout Sessions and 30+ Hands On Lab topics offered across four days. We’ll be covering our thoughts on the direction of VMware virtualization, the buzz we observed from the VMware community, and highlights of ecosystem vendors (with a special message from Dell Compellent & others). We’ll cover some specifics on:

  • VMware vSphere 5.0
  • vCloud Director 1.5
  • View 5.0
  • SRM 5.0
  • Tech Previews – AppBlast & Octopus

Wednesday, September 14, 2011 from 11:00 AM to 1:00 PM (CT)

Nexus Information Systems
6103 Blue Circle Drive
Hopkins, MN 55343

Lunch will be served

Sign up today!
