Archive for the ‘Virtualization’ category

vCloud Director web page portal fails to load

March 25th, 2019

Last week I went through the process to upgrade a vCloud Director for Service Providers environment to version 9.5.0.2. All seemed to go well with the upgrade. However, after all was said and done, the vCloud Director web page portal failed to open. It would partially load… but then failed.

I seem to recall this happening at some point in the past but couldn’t remember the root cause/fix nor could I find it documented on my blog. So… time to dig into the logs.

The watchdog log showed the cell services recycling over and over.

[root@vcdcell1 logs]# tail -f vmware-vcd-watchdog.log
2019-03-22 11:25:25 | WARN | Server status returned HTTP/1.1 404
2019-03-22 11:26:25 | ALERT | vmware-vcd-cell is dead but /var/run/vmware-vcd-cell.pid exists, attempting to restart it
2019-03-22 11:26:33 | INFO | Started vmware-vcd-cell (pid=10238)
2019-03-22 11:26:36 | WARN | Server status returned HTTP/1.1 404
2019-03-22 11:27:36 | ALERT | vmware-vcd-cell is dead but /var/run/vmware-vcd-cell.pid exists, attempting to restart it
2019-03-22 11:27:43 | INFO | Started vmware-vcd-cell (pid=10827)
2019-03-22 11:27:46 | WARN | Server status returned HTTP/1.1 404
2019-03-22 11:28:46 | ALERT | vmware-vcd-cell is dead but /var/run/vmware-vcd-cell.pid exists, attempting to restart it

The cell log showed a problem with Transfer Server Storage: "Error starting application: Unable to create marker file in the transfer spooling area".

[root@vcdcell1 logs]# tail -f cell.log
Application Initialization: ‘com.vmware.vcloud.networking-server’ 20% complete. Subsystem ‘com.vmware.vcloud.common-cell-impl’ started
Application Initialization: ‘com.vmware.vcloud.common.core’ 12% complete. Subsystem ‘com.vmware.vcloud.common-util’ started
Application Initialization: ‘com.vmware.vcloud.cloud-proxy-server’ 42% complete. Subsystem ‘com.vmware.vcloud.common-util’ started
Application Initialization: ‘com.vmware.vcloud.networking-server’ 40% complete. Subsystem ‘com.vmware.vcloud.common-util’ started
Application Initialization: ‘com.vmware.vcloud.cloud-proxy-server’ 57% complete. Subsystem ‘com.vmware.vcloud.cloud-proxy-services’ started
Application Initialization: ‘com.vmware.vcloud.common.core’ 16% complete. Subsystem ‘com.vmware.vcloud.api-framework’ started
Application Initialization: ‘com.vmware.vcloud.cloud-proxy-server’ 71% complete. Subsystem ‘com.vmware.vcloud.hybrid-networking’ started
Application Initialization: ‘com.vmware.vcloud.cloud-proxy-server’ 85% complete. Subsystem ‘com.vmware.vcloud.hbr-aware-plugin’ started
Application Initialization: ‘com.vmware.vcloud.cloud-proxy-server’ 100% complete. Subsystem ‘com.vmware.vcloud.cloud-proxy-web’ started
Application Initialization: ‘com.vmware.vcloud.cloud-proxy-server’ complete.
Application Initialization: ‘com.vmware.vcloud.common.core’ 20% complete. Subsystem ‘com.vmware.vcloud.common-vmomi’ started
Application Initialization: ‘com.vmware.vcloud.common.core’ 25% complete. Subsystem ‘com.vmware.vcloud.jax-rs-activator’ started
Application Initialization: ‘com.vmware.vcloud.common.core’ 29% complete. Subsystem ‘com.vmware.pbm.placementengine’ started
Application Initialization: ‘com.vmware.vcloud.common.core’ 33% complete. Subsystem ‘com.vmware.vcloud.vim-proxy’ started
Application Initialization: ‘com.vmware.vcloud.common.core’ 37% complete. Subsystem ‘com.vmware.vcloud.fabric.foundation’ started
Application Initialization: ‘com.vmware.vcloud.common.core’ 41% complete. Subsystem ‘com.vmware.vcloud.imagetransfer-server’ started
Application Initialization: ‘com.vmware.vcloud.common.core’ 45% complete. Subsystem ‘com.vmware.vcloud.fabric.net’ started
Application Initialization: ‘com.vmware.vcloud.networking-server’ 60% complete. Subsystem ‘com.vmware.vcloud.fabric.net’ started
Application Initialization: ‘com.vmware.vcloud.common.core’ 50% complete. Subsystem ‘com.vmware.vcloud.fabric.storage’ started
Application Initialization: ‘com.vmware.vcloud.common.core’ 54% complete. Subsystem ‘com.vmware.vcloud.fabric.compute’ started
Application Initialization: ‘com.vmware.vcloud.common.core’ 58% complete. Subsystem ‘com.vmware.vcloud.service-extensibility’ started
Application Initialization: ‘com.vmware.vcloud.common.core’ 62% complete. Subsystem ‘com.vmware.vcloud.backend-core’ started
Application Initialization: ‘com.vmware.vcloud.networking-server’ 80% complete. Subsystem ‘com.vmware.vcloud.backend-core’ started
Application Initialization: ‘com.vmware.vcloud.common.core’ 66% complete. Subsystem ‘com.vmware.vcloud.vapp-lifecycle’ started
Application Initialization: ‘com.vmware.vcloud.networking-server’ 100% complete. Subsystem ‘com.vmware.vcloud.networking-web’ started
Application Initialization: ‘com.vmware.vcloud.networking-server’ complete.
Application Initialization: ‘com.vmware.vcloud.common.core’ 70% complete. Subsystem ‘com.vmware.vcloud.content-library’ started
Application Initialization: ‘com.vmware.vcloud.common.core’ 75% complete. Subsystem ‘com.vmware.vcloud.presentation-api-impl’ started
Application Initialization: ‘com.vmware.vcloud.common.core’ 79% complete. Subsystem ‘com.vmware.vcloud.metrics-core’ started
Application Initialization: ‘com.vmware.vcloud.ui.h5cellapp’ 33% complete. Subsystem ‘com.vmware.vcloud.h5-webapp-provider’ started
Application Initialization: ‘com.vmware.vcloud.common.core’ 83% complete. Subsystem ‘com.vmware.vcloud.multi-site-core’ started
Application Initialization: ‘com.vmware.vcloud.common.core’ 87% complete. Subsystem ‘com.vmware.vcloud.multi-site-api’ started
Application Initialization: ‘com.vmware.vcloud.ui.h5cellapp’ 50% complete. Subsystem ‘com.vmware.vcloud.h5-webapp-tenant’ started
Application Initialization: ‘com.vmware.vcloud.ui.h5cellapp’ 66% complete. Subsystem ‘com.vmware.vcloud.h5-webapp-auth’ started
Application Initialization: ‘com.vmware.vcloud.ui.h5cellapp’ 83% complete. Subsystem ‘com.vmware.vcloud.h5-swagger-doc’ started
Application Initialization: ‘com.vmware.vcloud.ui.h5cellapp’ 100% complete. Subsystem ‘com.vmware.vcloud.h5-swagger-ui’ started
Application Initialization: ‘com.vmware.vcloud.ui.h5cellapp’ complete.
Application Initialization: ‘com.vmware.vcloud.common.core’ 91% complete. Subsystem ‘com.vmware.vcloud.rest-api-handlers’ started
Application Initialization: ‘com.vmware.vcloud.common.core’ 95% complete. Subsystem ‘com.vmware.vcloud.jax-rs-servlet’ started
Application Initialization: ‘com.vmware.vcloud.common.core’ 100% complete. Subsystem ‘com.vmware.vcloud.ui-vcloud-webapp’ started
Application Initialization: ‘com.vmware.vcloud.common.core’ complete.
Successfully handled all queued events.
Error starting application: Unable to create marker file in the transfer spooling area: /opt/vmware/vcloud-director/data/transfer/cells/8a483603-43b8-4215-b33f-48270582f03d

To be honest, the NFS server which hosts Transfer Server Storage in this environment isn’t always reliable but upon checking, it was up and healthy. Furthermore, I was able to manually create a test file within this Transfer Server Storage space from the vCD cell itself.
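For reference, the manual test amounted to nothing more than writing a throwaway file into the transfer area from the cell, something along these lines (the file name is just an example):

cd /opt/vmware/vcloud-director/data/transfer/cells
echo test > jgbtest.txt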

Walking the directory structure and looking at permissions, a few things didn’t look right.

[root@vcdcell1 data]# ls -l -h
total 4.0K
drwx------. 3 vcloud vcloud   27 Mar 22 11:39 activemq
drwxr-x---. 2 vcloud vcloud    6 Mar 15 04:58 generated-bundles
drwxr-x---. 2 vcloud vcloud 4.0K Mar 15 04:58 transfer
[root@vcdcell1 data]# pwd
/opt/vmware/vcloud-director/data
[root@vcdcell1 data]#
[root@vcdcell1 data]#
[root@vcdcell1 data]#
[root@vcdcell1 data]# cd transfer/
[root@vcdcell1 transfer]# ls -l -h
total 1.0K
drwx------. 2 1002 1002  64 Mar 22 11:38 cells
-rw-------. 1 root root 386 Mar 21 11:51 responses.properties
[root@vcdcell1 transfer]# cd cells/
[root@vcdcell1 cells]# ls -l -h
total 512
-rw-------. 1 1002 1002 0 May 27 2018 8a483603-43b8-4215-b33f-48270582f03d.old
-rw-r--r--. 1 root root 6 Mar 22 11:38 jgbtest.txt

Looking at some of the pieces above, I seem to recall vcloud is supposed to be the owner and group for the vCD file and directory structure. I further verified this by restoring my old vCD cell from a previous snapshot and spot checking. So let’s fix it using the chown example on page 53 of the vCloud Director Installation and Upgrade Guide.

[root@vcdcell1 cells]# chown -R vcloud:vcloud /opt/vmware/vcloud-director
[root@vcdcell1 cells]#
[root@vcdcell1 cells]#
[root@vcdcell1 cells]# ls -l -h
total 512
-rw-------. 1 vcloud vcloud 0 May 27 2018 8a483603-43b8-4215-b33f-48270582f03d.old
-rw-r--r--. 1 vcloud vcloud 6 Mar 22 11:38 jgbtest.txt

The watchdog daemon followed up by restarting the vCD cell. With the correct permissions in place, a new cell marker file was created and the vCD cell started successfully. I deleted the .old cell file and of course my jgbtest.txt file.

[root@vcdcell1 cells]# ls -l -h
total 512
-rw-------. 1 vcloud vcloud 0 Mar 22 12:23 8a483603-43b8-4215-b33f-48270582f03d
-rw-------. 1 vcloud vcloud 0 May 27 2018 8a483603-43b8-4215-b33f-48270582f03d.old
-rw-r--r--. 1 vcloud vcloud 6 Mar 22 11:38 jgbtest.txt
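Had the watchdog not picked it up on its own, something like the following would confirm the cell came back (a quick sketch; the log path assumes a default installation):

service vmware-vcd status
tail -n 5 /opt/vmware/vcloud-director/logs/vmware-vcd-watchdog.log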

How did this happen? I’m pretty sure it was my own fault. Last week I was also doing some deployment testing with the vCD appliance. At the time I felt it was safe for this test cell to use the same Transfer Server Storage NFS mount (so that I wouldn’t have to go through the steps to create another one). Upon further investigation, the vCD appliance cell tattooed the folders and files with the 1002 owner and group seen above.
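A quick way to spot this kind of mismatch is to compare the numeric owner on the transfer share with the local vcloud account on the cell, a minimal sketch:

id vcloud
ls -ln /opt/vmware/vcloud-director/data/transfer/cells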

All is well with the vCD world now and I’ve got it documented so the next time my vCD web portal doesn’t load, I’ll know just where to look.

vSphere 6.7 Storage and action_OnRetryErrors=on

February 8th, 2019

VMware introduced a new storage feature in vSphere 6.0 which was designed as a flexible option to better handle certain storage problems. Cormac Hogan did a fine job introducing the feature here. Starting with vSphere 6.0 and continuing on in vSphere 6.5, each block storage device (VMFS or RDM) is configured with an option called action_OnRetryErrors. Note that in vSphere 6.0 and 6.5, the default value is off, meaning the new feature is effectively disabled and no new storage error handling behavior is observed.

This value can be seen with the esxcli storage nmp device list command.

vSphere 6.0/6.5:
esxcli storage nmp device list | grep -A9 naa.6000d3100002b90000000000000ec1e1
naa.6000d3100002b90000000000000ec1e1
Device Display Name: sqldemo1vmfs
Storage Array Type: VMW_SATP_ALUA
Storage Array Type Device Config: {implicit_support=on; explicit_support=off; explicit_allow=on; alua_followover=on; action_OnRetryErrors=off; {TPG_id=61459,TPG_state=AO}{TPG_id=61460,TPG_state=AO}{TPG_id=61462,TPG_state=AO}{TPG_id=61461,TPG_state=AO}}
Path Selection Policy: VMW_PSP_RR
Path Selection Policy Device Config: {policy=rr,iops=1000,bytes=10485760,useANO=0; lastPathIndex=0: NumIOsPending=0,numBytesPending=0}
Path Selection Policy Device Custom Config:
Working Paths: vmhba1:C0:T2:L141, vmhba1:C0:T3:L141, vmhba2:C0:T3:L141, vmhba2:C0:T2:L141
Is USB: false

If vSphere loses access to a device on a given path, the host will send a Test Unit Ready (TUR) command down the given path to check path state. When action_OnRetryErrors=off, vSphere will continue to retry for an amount of time because it expects the path to recover. It is important to note here that a path is not immediately marked dead when the first Test Unit Ready command is unsuccessful and results in a retry. It would seem many retries in fact, and you'll be able to see them in /var/log/vmkernel.log. Also note that a device typically has multiple paths, and the process will be repeated for each additional path tried, assuming the first path is eventually marked as dead.

Starting with vSphere 6.7, action_OnRetryErrors is enabled by default.

vSphere 6.7:
esxcli storage nmp device list | grep -A9 naa.6000d3100002b90000000000000ec1e1
naa.6000d3100002b90000000000000ec1e1
Device Display Name: sqldemo1vmfs
Storage Array Type: VMW_SATP_ALUA
Storage Array Type Device Config: {implicit_support=on; explicit_support=off; explicit_allow=on; alua_followover=on; action_OnRetryErrors=on; {TPG_id=61459,TPG_state=AO}{TPG_id=61460,TPG_state=AO}{TPG_id=61462,TPG_state=AO}{TPG_id=61461,TPG_state=AO}}
Path Selection Policy: VMW_PSP_RR
Path Selection Policy Device Config: {policy=rr,iops=1000,bytes=10485760,useANO=0; lastPathIndex=2: NumIOsPending=0,numBytesPending=0}
Path Selection Policy Device Custom Config:
Working Paths: vmhba1:C0:T2:L141, vmhba1:C0:T3:L141, vmhba2:C0:T3:L141, vmhba2:C0:T2:L141
Is USB: false

If vSphere loses access to a device on a given path, the host will send a Test Unit Ready (TUR) command down the given path to check path state. When action_OnRetryErrors=on, vSphere will immediately mark the path dead when the first retry is returned. vSphere will not continue retrying TUR commands on a dead path.

This is the part where VMware thinks it’s doing the right thing by immediately fast failing a misbehaving/dodgy/flaky path. The assumption here is that other good paths to the device are available and instead of delaying an application while waiting for path failover during the intensive TUR retry process, let’s fail this one bad path right away so that the application doesn’t have to spin its wheels.

However, if all other paths to the device are impacted by the same underlying (and let's call it transient) condition, what happens is that each additional path iteratively goes through the process of TUR, no retry, immediately mark path as dead, move on to the next path. When all available paths have been exhausted, All Paths Down (APD) for the device kicks in. If and when paths to an APD device become available again, they will be picked back up upon the next storage fabric rescan, whether that's done manually by an administrator, or automatically by default every 300 seconds for each vSphere host (Disk.PathEvalTime). From an application/end user standpoint, I/O delay for up to 5 minutes can be a painfully long time to wait. The irony here is that VMware can potentially turn a transient condition lasting only a few seconds into more of a Permanent Device Loss (PDL) like condition.
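For reference, the rescan interval mentioned above can be checked per host, and adjusted if you accept the trade-offs; a minimal sketch using esxcli (300 is the default value):

esxcli system settings advanced list -o /Disk/PathEvalTime
esxcli system settings advanced set -o /Disk/PathEvalTime -i 300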

All of the above leads me to a support escalation I got involved in with a customer having an Active/Passive block storage array. Active/Passive is a type of array which has multiple storage processors/controllers (usually two) and LUNs are distributed across the controllers in an ownership model whereby each controller owns both the LUNs and the available paths to those LUNs. When an active controller fails or is taken offline proactively (think storage processor reboot due to a firmware upgrade), the paths to the active controller go dark, the passive controller takes ownership of the LUNs and lights up the paths to them – a process which can be measured in seconds, typically more than 2 or 3, often much more than that (this dovetails into the discussion of virtual machine disk timeout best practices). With action_OnRetryErrors=off, vSphere tolerates the transient path outage during the controller failover. With action_OnRetryErrors=on, it doesn’t – each path that goes dark is immediately failed and we have APD for all the volumes on that controller in a fraction of a second.

The problem which was occurring in this customer escalation was a convergence of circumstances:

  • The customer was using vSphere 6.7 and its default of action_OnRetryErrors=on
  • The customer was using an Active/Passive storage array
  • The customer virtualized Microsoft Windows SQL cluster servers (cluster disk resources are extremely sensitive to APDs in the hypervisor and immediately fail when they detect a dependent cluster disk has been removed – a symptom introduced by APD)
  • The customer was testing controller failovers

Windows failover clusters have zero tolerance for APD disks.

To resolve the problem in vSphere 6.7, action_OnRetryErrors needs to be disabled for each device backed by the Active/Passive storage array. This must be performed on every host in the cluster having access to the given devices (again, these can be VMFS volumes and/or RDMs). There are a few ways to go about this.

To modify the configuration without a host reboot, take a look at the following example. A command such as this would need to be run on every host in the cluster, and for each device (i.e. in an 8-host cluster with 8 VMFS/RDMs, we need to identify the applicable naa.xxx IDs and run 64 commands. Yes, this could be scripted. Be my guest.):

esxcli storage nmp satp generic deviceconfig set -c disable_action_OnRetryErrors -d naa.6000d3100002b90000000000000ec1e1

I don’t prefer that method a whole lot. It’s tedious and error prone. It could result in cluster inconsistencies. But on the plus side, a host reboot isn’t required, and this setting will persist across reboots. That also means a configuration set at this device level will override any claim rules that could also apply to this device. Keep this in mind if a claim rule is configured but you’re not seeing the desired configuration on any specific device.

The above could also be scripted for a number of devices on a host. Here's one example. Be very careful that the base naa.xxx string matches all of the devices from one array that should be configured, and does not modify devices from other array types that should not be configured. Also note that this script is a one-liner, but for blog formatting purposes I manually added a line break before esxcli:

for i in `ls /vmfs/devices/disks | grep -v ":" | grep -i naa.6000D31`; do echo $i; 
esxcli storage nmp satp generic deviceconfig set -c disable_action_OnRetryErrors -d $i; done

Now to verify:

for i in `ls /vmfs/devices/disks | grep -v ":" | grep -i naa.6000D31`; do echo $i; 
esxcli storage nmp device list | grep -A2 $i | egrep -io action_OnRetryErrors=\\w+; done

I like adding a SATP claim rule using a vendor device string a lot better, although changes to claim rules for existing devices generally require a reboot of the host to reclaim existing devices with the new configuration. Here's an example:

esxcli storage nmp satp rule add -s VMW_SATP_ALUA -V COMPELNT -P VMW_PSP_RR -o disable_action_OnRetryErrors

Here’s another example using quotes which is also acceptable and necessary when setting multiple option string parameters (refer to this):

esxcli storage nmp satp rule add -s "VMW_SATP_ALUA" -V "COMPELNT" -P "VMW_PSP_RR" -o "disable_action_OnRetryErrors"

When a new claim rule is added, claim rules can be reloaded with the following command.

esxcli storage core claimrule load

Keep in mind the new claim rule will only apply to unclaimed devices. Newly presented devices will inherit the new claim rule. Existing devices which are already claimed will not be reconfigured until the next vSphere host reboot. Devices can be unclaimed without a host reboot but all I/O to the device must be halted – somewhat of a conundrum if we're dealing with production volumes, datastores being used for heartbeating, etc. Assuming we're dealing with multiple devices, a reboot is just going to be easier and cleaner.
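That said, a single device can be reclaimed in place once its I/O has been quiesced; a minimal sketch, and not something I'd attempt against busy production volumes:

esxcli storage core claiming reclaim -d naa.6000d3100002b90000000000000ec1e1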

I like claim rules here better because of the global nature. It's one command line per host in the cluster and it'll take care of all devices from the Active/Passive storage array vendor. No need to worry about coming up with and testing a script. No need to worry about spending hours identifying the naa.xxx IDs and making all of the changes across hosts. No need to worry about tagging other storage vendor devices with an improper configuration. Lastly, the claim rule in effect is visible in a SATP claim rule list:

esxcli storage nmp satp rule list

Name           Vendor    Rule Group  Claim Options                 Default PSP
-------------  --------  ----------  ----------------------------  -----------
VMW_SATP_ALUA  COMPELNT  user        disable_action_OnRetryErrors  VMW_PSP_RR

By the way… to remove the SATP claim rules above respectively:

esxcli storage nmp satp rule remove -s VMW_SATP_ALUA -V COMPELNT -P VMW_PSP_RR -o disable_action_OnRetryErrors

esxcli storage nmp satp rule remove -s "VMW_SATP_ALUA" -V "COMPELNT" -P "VMW_PSP_RR" -o "disable_action_OnRetryErrors"

The bottom line here is there may be a number of VMware customers with Active/Passive storage arrays, running vSphere 6.7. If and when planned or unplanned controller/storage processor failover occurs, APDs may unexpectedly occur, impacting virtual machines and their applications, whereas this was not the case with previous versions of vSphere.

In closing, I want to thank VMware Staff Technical Support Engineering for their work on this case and ultimately exposing “what changed in vSphere 6.7” because we had spent some time trying to reproduce this problem on vSphere 6.5 where we had an environment similar to what the customer had and we just weren’t seeing any problems.

References:

Managing SATPs

No Failover for Storage Path When TUR Command Is Unsuccessful

Storage path does not fail over when TUR command repeatedly returns retry requests (2106770)

Handling Transient APD Conditions

vSphere 6.0 Storage Features Part 6: Action_OnRetryErrors

Updated 2-20-19: VMware published a KB article on this issue today:
ESXi 6.7 hosts with active/passive or ALUA based storage devices may see premature APD events during storage controller fail-over scenarios (67006)

VMware Horizon Share Folders Issue with Windows 10

June 12th, 2017

I spent some time the last few weekends making various updates and changes to the lab. Too numerous and not all that paramount to go into detail here, with the exception of one issue I did run into. I created a new VMware Horizon pool consisting of Windows 10 Enterprise, Version 1703 (Creators Update). The VM has 4GB RAM and VMware Horizon Agent 7.1.0.5170901 is installed. This is all key information contributing to my new problem: the Shared Folders feature seems to have stopped functioning.

That is to say, when launching my virtual desktop from the Horizon Client, there are no shared folders or drives being passed through from where I launched the Horizon Client. Furthermore, the Share Folders menu item is completely missing from the blue Horizon Client pulldown menu.

I threw something out on Twitter and received a quick response from a very helpful VMware Developer by the name of Adam Gross (@grossag).

Adam went on to explain that the issue stems from a registry value defining an amount of memory which is less than the amount of RAM configured in the VM.

The registry key is HKLM\SYSTEM\CurrentControlSet\Control\ and the value configured for SvcHostSplitThresholdInKB is 3670016 (380000 Hex). The 3670016 is expressed in KB which comes out to be 3.5GB. The default Windows 10 VM configuration is deployed with 4GB of RAM which is what I did this past weekend. Since 3.5GB is less than 4GB, the bug rears its head.

Adam mentioned the upcoming 7.2 agent will configure this value at 32GB on Windows 10 virtual machines (that's 33554432 KB, or 2000000 in Hex), and perhaps even larger in some future release of the agent because the reality some day is that 32GB won't be large enough. Adam went on to explain the maximum amount of RAM supported by Windows 10 x64 is 2TB, which comes out to 2147483648 expressed in KB, or 80000000 in Hex. Therefore, it is guaranteed safe (at least to avoid this issue) to set the registry value to 80000001 (in Hex) or higher for any vRAM configuration.

To move on, the value needs to be tweaked manually in the registry. I'll set mine to 32GB as I'll likely never have a VDI desktop deployed with more vRAM than that between now and when the 7.2 agent ships and is installed in my lab.
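For those who would rather script it than open regedit, a minimal sketch of the equivalent command, run as administrator inside the guest (0x2000000 is the 32GB-in-KB figure discussed above):

reg add "HKLM\SYSTEM\CurrentControlSet\Control" /v SvcHostSplitThresholdInKB /t REG_DWORD /d 0x2000000 /f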

And the result for posterity.

I found a reboot of the Windows 10 VM was required before the registry change made the positive impact I was looking for. After all was said and done, my shared folders came back, as did the menu item on the blue Horizon Client pulldown menu. Easy fix for a rather obscure issue. Once again my thanks to Adam Gross for providing the solution.

VMware Tools causes virtual machine snapshot with quiesce error

July 30th, 2016

Last week I was made aware of an issue a customer in the field was having with a data protection strategy using array-based snapshots which were in turn leveraging VMware vSphere snapshots with VSS quiesce of Windows VMs. The problem began after installing VMware Tools version 10.0.0 build-3000743 (reported as version 10240 in the vSphere Web Client) which I believe is the version shipped in ESXi 6.0 Update 1b (reported as version 6.0.0, build 3380124 in the vSphere Web Client).

The issue is that creating a VMware virtual machine snapshot with VSS integration fails. The virtual machine disk configuration is simply two .vmdks on a VMFS-5 datastore but I doubt the symptoms are limited only to that configuration.

The failure message shown in the vSphere Web Client is “Cannot quiesce this virtual machine because VMware Tools is not currently available.”  The vmware.log file for the virtual machine also shows the following:

2016-07-29T19:26:47.378Z| vmx| I120: SnapshotVMX_TakeSnapshot start: ‘jgb’, deviceState=0, lazy=0, logging=0, quiesced=1, forceNative=0, tryNative=1, saveAllocMaps=0 cb=1DE2F730, cbData=32603710
2016-07-29T19:26:47.407Z| vmx| I120: DISKLIB-LIB_CREATE : DiskLibCreateCreateParam: vmfsSparse grain size is set to 1 for ‘/vmfs/volumes/51af837d-784bc8bc-0f43-e0db550a0c26/rmvm02/rmvm02-000001.
2016-07-29T19:26:47.408Z| vmx| I120: DISKLIB-LIB_CREATE : DiskLibCreateCreateParam: vmfsSparse grain size is set to 1 for ‘/vmfs/volumes/51af837d-784bc8bc-0f43-e0db550a0c26/rmvm02/rmvm02_1-00000
2016-07-29T19:26:47.408Z| vmx| I120: SNAPSHOT: SnapshotPrepareTakeDoneCB: Prepare phase complete (The operation completed successfully).
2016-07-29T19:26:56.292Z| vmx| I120: GuestRpcSendTimedOut: message to toolbox timed out.
2016-07-29T19:27:07.790Z| vcpu-0| I120: Tools: Tools heartbeat timeout.
2016-07-29T19:27:11.294Z| vmx| I120: GuestRpcSendTimedOut: message to toolbox timed out.
2016-07-29T19:27:17.417Z| vmx| I120: GuestRpcSendTimedOut: message to toolbox timed out.
2016-07-29T19:27:17.417Z| vmx| I120: Msg_Post: Warning
2016-07-29T19:27:17.417Z| vmx| I120: [msg.snapshot.quiesce.rpc_timeout] A timeout occurred while communicating with VMware Tools in the virtual machine.
2016-07-29T19:27:17.417Z| vmx| I120: —————————————-
2016-07-29T19:27:17.420Z| vmx| I120: Vigor_MessageRevoke: message ‘msg.snapshot.quiesce.rpc_timeout’ (seq 10949920) is revoked
2016-07-29T19:27:17.420Z| vmx| I120: ToolsBackup: changing quiesce state: IDLE -> DONE
2016-07-29T19:27:17.420Z| vmx| I120: SnapshotVMXTakeSnapshotComplete: Done with snapshot ‘jgb’: 0
2016-07-29T19:27:17.420Z| vmx| I120: SnapshotVMXTakeSnapshotComplete: Snapshot 0 failed: Failed to quiesce the virtual machine (31).
2016-07-29T19:27:17.420Z| vmx| I120: VigorTransport_ServerSendResponse opID=ffd663ae-5b7b-49f5-9f1c-f2135ced62c0-95-ngc-ea-d6-adfa seq=12848: Completed Snapshot request.
2016-07-29T19:27:26.297Z| vmx| I120: GuestRpcSendTimedOut: message to toolbox timed out.

After performing some digging, I found VMware had released VMware Tools version 10.0.9 on June 6, 2016. The release notes indicate the root cause has been identified and resolved.

Resolved Issues

Attempts to take a quiesced snapshot in a Windows Guest OS fails
Attempts to take a quiesced snapshot after booting a Windows Guest OS fails

After downloading and upgrading VMware Tools version 10.0.9 build-3917699 (reported as version 10249 in the vSphere Web Client), the customer’s problem was resolved. Since the faulty version of VMware Tools was embedded in the customer’s templates used to deploy virtual machines throughout the datacenter, there were a number of VMs needing their VMware Tools upgraded, as well as the templates themselves.
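Not part of the original fix, but for anyone facing the same cleanup, a quick PowerCLI sketch can report Tools versions across the inventory to identify which VMs still carry the faulty build (the vCenter name is just a placeholder):

Connect-VIServer vcenter.example.local
Get-VM | Select-Object Name, @{N='ToolsVersion';E={$_.ExtensionData.Guest.ToolsVersion}} | Sort-Object ToolsVersion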

vCenter Server 6 Appliance fsck failed

April 4th, 2016

A vCenter Server Appliance (vSphere 6.0 Update 1b) belonging to me was bounced and for some reason was unbootable. The trouble during the boot process begins with "/dev/sda3 contains a file system with errors, check forced." At approximately 27% of the way through, the process terminates with "fsck failed. Please repair manually and reboot."

Unable to access a bash# prompt from the current state of the appliance, I followed VMware KB 2069041 VMware vCenter Server Appliance 5.5 and 6.0 root account locked out after password expiration, particularly the latter portion of it which provides the steps to modify a kernel option in the GRUB bootloader to obtain a root shell (and subsequently run the e2fsck -y /dev/sda3 repair command).

The steps are outlined in VMware KB 2069041 and are simple to follow.

  1. Reboot the VCSA
  2. Be quick about highlighting the VMware vCenter Server appliance menu option (the KB article recommends hitting the space bar to stop the default countdown)
  3. p (to enter the root password, which allows the additional commands in the next steps)
  4. e (to edit the boot command)
  5. Append init=/bin/bash (followed by Enter to return to the GRUB menu)
  6. b (to start the boot process)

This is where e2fsck -y /dev/sda3 is executed to repair file system errors on /dev/sda3 and allow the VCSA to boot successfully.

When the process above completes, reboot the VCSA and that should be all there is to it.

Update 10/9/17: I ran into a similar issue with VCSA 6.5 Update 1 where the appliance wouldn't boot and I was left at an emergency mode prompt. In this situation, following the steps above isn't so straightforward, in part due to the Photon OS splash screen and no visibility to the GRUB bootloader (following VMware KB 2081464). In this situation, I executed fsck /dev/sda3 at the emergency mode prompt, answering yes to all prompts. After reboot, I found this did not resolve all of the issues. I was able to log in by providing the root password twice. The journalctl command revealed a problem with /dev/mapper/log_vg-log. Next I ran fsck /dev/mapper/log_vg-log, again answering yes to all prompts to repair. When that was finished, the appliance was rebooted and came up operational.
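Condensed, the emergency mode repair sequence from that update looked roughly like this (a sketch; the device names are specific to my appliance):

journalctl -xb
fsck /dev/sda3
fsck /dev/mapper/log_vg-log
reboot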

vCloud Director vdnscope-1 could not be found

August 15th, 2015

For whatever reason, I’ve been spending a pretty fair amount of time lately with vCloud Director both at home as well as at the office. It’s a great product. It always has been, beginning with its Lab Manager roots. Like my last blog post, this writing will exhibit another vCloud Director database editing exercise which stemmed from a problem I encountered in the lab.

I was attempting to get away from my VLAN-backed Network Pool by configuring vCloud Director’s Provider vDC-VXLAN-NP Network Pool which is much more dynamic and powerful in nature. The Provider vDC-VXLAN-NP Network Pool is installed by default in vCloud Director but to configure and use it for Organization and vApp networks, one must follow a set of instructions which basically involves configuring upstream physical switch(es) with jumbo frames, a transport VLAN, and multicast settings, preparing the hosts by installing an agent on each of them using vShield Manager, adding VMkernel ports, Network Scopes, Virtual Wires, and so on (Mike Laverick and Rawlinson Rivera both have easy to follow tutorials. The VMware VXLAN Deployment Guide is also a great read). Once it’s all set up and working, VXLAN is pretty effing cool. Anyway, it sounds like a lot of steps and admittedly it requires some reading and attention to detail, but much of it is automated by vCloud Director, with some bumps along the way.

I did run into a few snags which ultimately led me to go through the configuration process start to finish a few times. In the end I had to configure the Network Scope in vShield Manager manually when normally this step is performed automatically by vCloud Director via the Enable VXLAN Provider VDC right-click menu item.

Once I got beyond the installation hurdles, there was some residual impact left in the vCloud Director database and vShield Manager such that it all looked to be working properly, except that at the very end I could not power on a vApp with an isolated vApp network which relied on the use of the VXLAN-backed Network Pool. The error message was:

Cannot deploy organization VDC network (uuid for that network)
com.vmware.vcloud.fabric.nsm.error.VsmException: VSM response error (202): The requested object : vdnscope-1 could not be found. Object identifiers are case sensitive.

[ bb505f5e-27f1-419e-9b05-da0d38a7788f ] Unable to deploy network "vApp net1(urn:uuid:7d813867-d3f1-420d-a0a8-a65263369327)".

com.vmware.vcloud.fabric.nsm.error.VsmException: VSM response error (202): The requested object : vdnscope-1 could not be found. Object identifiers are case sensitive.

– com.vmware.vcloud.fabric.nsm.error.VsmException: VSM response error (202): The requested object : vdnscope-1 could not be found. Object identifiers are case sensitive.

– VSM response error (202): The requested object : vdnscope-1 could not be found. Object identifiers are case sensitive.

An object named vdnscope-1 seems to be the obvious problem.

I was not able to make use of the Network Pool Repair function as it was unavailable.

Fortunately I was able to locate a related thread in the VMware Communities which more or less explained what might have happened and what I could try to fix the problem (credit to IamTHEvilONE). This is my interpretation.

Each time a Network Scope is created in the vShield Manager, an underlying object reference is tied to the Network Scope with a naming convention of vdnscope-x, where x begins at 1 and is incremented at each create iteration. So the first Network Scope created in vShield Manager by vCloud Director is going to be called vdnscope-1. This object is stored in the vCloud Director database and is referenced each time an Org or vApp network is spun up which leans on the VXLAN-backed Network Pool. This is formally handled at vApp power on. The object is also stored somewhere in the vShield Manager although I was never able to locate it. What happened here is that the Network Scope object known by vCloud Director and the one known by vShield Manager were not in sync and didn't match. vCloud Director dials up vShield Manager and says "I need that vdnscope-1 you have" and vShield Manager responds with "I have no idea what that object is". Obvious problem.

The solution is fairly simple: edit the vCloud Director database with the correct Network Scope object reference. But a small problem still remains: I was never able to locate the correct object name in vShield Manager. However, going back to the VMware Communities discussion, I could eventually find the correct object name by incrementing the vdnscope-x object reference in the vCloud Director database by 1 until the two sides agree and the vApp powers on successfully.

I’ll borrow the same disclaimer from the previous blog post: An obligatory warning on vCloud database editing. Do as I say, not as I do. Editing the vCloud database should be performed only with the guidance of VMware support. Above all, create a point in time backup of the vCloud database with all vCloud Director cell servers stopped (service vmware-vcd stop). There are a variety of methods in which you can perform this database backup. Use the method that is most familiar to and works for you.

So after stopping the vCloud Director services and getting a vcloud database backup…

Step 1: Open Microsoft SQL Server Management Studio and navigate to the [vcloud].[dbo].[network_pool] table. Under the vdn_scope_id column, change the value from vdnscope-1 to vdnscope-2.
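In T-SQL terms, the edit amounts to something like the following sketch (as above, only with the cell services stopped and a database backup in hand):

UPDATE [vcloud].[dbo].[network_pool]
SET vdn_scope_id = 'vdnscope-2'
WHERE vdn_scope_id = 'vdnscope-1';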

Step 2: Start the vCloud Director service in all cell servers (service vmware-vcd start) and verify in vShield Manager the Virtual Wire has been created and the vApp can power on successfully. If it fails, stop vCloud services and repeat Step 1 above while incrementing the vdnscope value to 3, then 4, and so on. In my case, vdnscope-5 did the trick.

vCloud Director is awesome. VXLAN with 16 million networks capability kicks it up a notch.

Updated 8/22/15: I received a tip from Jon Hemming in the form of a blog comment. Jon states he has written a VMware KB article titled Creating an isolated network in VMware vCloud Director reports the error: vdnscope-x does not exist (2065485) which documents a process to get the correct VDN Scope ID via the REST API of vShield as well as update the vCloud Director database. Thank you Jon! I did find the syntax for the curl statement to be slightly off. The KB article calls for the following syntax:

curl -k -u admin:default -X GET https://vshield.boche.lab/api/2.0/vdn/scopes/

The result is an HTTP Status 404, "The requested resource is not available."

What did work was:

curl -k -u admin:default -X GET https://vshield.boche.lab/api/2.0/vdn/scopes

The only change was removing the trailing forward slash on the URL.
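To pull just the scope identifier out of the XML that comes back, a rough one-liner like this works (same credentials and hostname as the example above):

curl -k -u admin:default -X GET https://vshield.boche.lab/api/2.0/vdn/scopes | grep -o 'vdnscope-[0-9]*' | sort -u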

vCloud Director Error Cannot delete network pool

August 15th, 2015

I ran into a small problem this week in vCloud Director whereby I was unable to Delete a Network Pool. The error message stated "Cannot delete network pool because It is still in use." It went on to list In use items along with a moref identifier. This was not right because I had verified there were no vApps tied to the Network Pool. Furthermore, the item listed as still in use was a dynamically created dvportgroup which also no longer existed on the vNetwork Distributed Switch in vCenter.

I suspect this situation came about due to running out of available storage space earlier in the week on the Microsoft SQL Server where the vCloud database is hosted. I was performing Network Pool work precisely when that incident occurred and I recall an error message at the time in vCloud Director regarding tempdb.

I tried removing state data from the QRTZ tables, an approach I blogged about here a few years ago and which has worked for specific instances in the past, but unfortunately that was no help here. Searching the VMware Communities turned up sparse conversations about roughly the same problem occurring with Org vDC Networks. In those situations, manually editing the vCloud Director database was required.

An obligatory warning on vCloud database editing. Do as I say, not as I do. Editing the vCloud database should be performed only with the guidance of VMware support. Above all, create a point in time backup of the vCloud database with all vCloud Director cell servers stopped (service vmware-vcd stop). There are a variety of methods in which you can perform this database backup. Use the method that is most familiar to and works for you.

Opening up Microsoft SQL Server Management Studio, there are rows in two different tables which I need to delete to fix this. This has to be done in the correct order or else a REFERENCE constraint conflict occurs in Microsoft SQL Server Management Studio and the statement will be terminated.

So after stopping the vCloud Director services and getting a vcloud database backup…

Step 1: Delete the row referencing the dvportgroup in the [vcloud].[dbo].[network_backing] table:

Step 2: Delete the row referencing the unwanted Network Pool in the [vcloud].[dbo].[network_pool] table:
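To illustrate the order, the two deletes look roughly like this in T-SQL. This is a sketch only: the WHERE clauses below use hypothetical names, and the actual rows need to be identified in the tables from the moref and Network Pool referenced in the error message.

-- child table first, otherwise the REFERENCE constraint conflict occurs
DELETE FROM [vcloud].[dbo].[network_backing]
WHERE name = 'dvs.VCDVSexample-portgroup';  -- hypothetical: the stale dvportgroup row

-- then the parent table
DELETE FROM [vcloud].[dbo].[network_pool]
WHERE name = 'Example Network Pool';  -- hypothetical: the Network Pool that will not delete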

That should take care of it. Start the vCloud Director service in all cell servers (service vmware-vcd start) and verify the Network Pool has been removed.