Posts Tagged ‘vCenter Server’

vSphere with Kubernetes

May 17th, 2020

During the past couple of months, I’ve had the opportunity to participate in both the vSphere 7 and Project Pacific beta programs. While the vSphere 7 beta was fairly straightforward (no intent to downplay the incredible amount of work that went into one of VMware’s biggest and most anticipated releases in company history), Project Pacific bookends the start of my Kubernetes journey – something I’ve wanted to get moving on once the busy hockey season concluded (I just wrapped up my 4th season coaching at the Peewee level).

For myself, the learning process can be broken down into two distinct parts:

  1. Understanding the architecture and deploying Kubernetes on vSphere. Sidebar: Understanding the Tanzu portfolio (and the new names for VMware modern app products). To accomplish this in vSphere 7, we need to deploy NSX-T and then enable Workload Management in the UI. That one-sentence summary easily represents several hours of work when you consider planning and, in my case, failing a few times. I’ve seen a few references made about how easy this process is. Perhaps it is if you already have a strong background in NSX-T. I found it challenging during the beta.
  2. Day 1 Kubernetes. The supervisor cluster is up and running (I think). Now how do I use it? YAML? Pods? What’s a persistent volume claim (PVC)? Do I now have a Tanzu Kubernetes Grid Cluster? No, not yet.

This blog post is going to focus mainly on part 1 – deployment of the Kubernetes platform on vSphere 7, learning the ropes, and some of the challenges I overcame to achieve a successful deployment.

During the Project Pacific beta, we had a wizard which deployed most of the NSX-T components. The NSX-T Manager, the Edges, the Tier-0 Gateway, the Segment, the uplinks, it was all handled by the wizard. I’m an old hand with vShield Manager, and NSX Manager after it, for vCloud Director, but NSX-T is a beast. If you don’t know your way around NSX-T yet, the wizard was a blessing because all we had to do was understand what was needed and then supply the corresponding information to the wizard. I think the wizard also helped drive the beta program to success within the targeted start and end dates (these are typical beta program constraints).

When vSphere 7 went GA, a few notable things had changed.

  1. Licensing. Kubernetes deployment on vSphere 7 requires VMware vSphere Enterprise Plus with Add-on for Kubernetes. Right now I believe the only path is through VMware Cloud Foundation (VCF) 4.0 licensing.
  2. Unofficially you can deploy Kubernetes on vSphere 7 without VCF. All of the bits needed already exist in vCenter, ESXi, and NSX-T 3.0. But as the Kubernetes features seem to be buried in the ESXi license key, it involves just a bit of trickery. More on that in a bit.
  3. Outside of VCF, there is no wizard based installation like we had in the Project Pacific beta. It’s a manual deployment and configuration of NSX-T. To be honest and from a learning perspective, this is a good thing. There’s no better way to learn than to crack open the books, read, and do.

So here’s VMware’s book to follow:

vSphere with Kubernetes Configuration and Management (PDF, online library).

It’s a good guide and should cover everything you need from planning to deployment of both NSX-T as well as Kubernetes. If you’re going to use it, be aware that it does tend to get updated so watch for those changes to stay current. To that point, I may make references to specific page numbers that could change over time.

I’ve made several mentions of NSX-T. If you haven’t figured it out by now, the solution is quite involved when it comes to networking. It’s important to understand the networking architecture, how it will overlay your own network, and how it will utilize existing infrastructure resources such as DNS, NTP, and internet access. When it comes to filling in the blanks for the various VLANs, subnets, IP addresses, and gateways, it’s important to provide the right information and configure it correctly. Failure to do so will either end in a failed deployment, or a deployment that on the surface appears successful but where Kubernetes work later on fails miserably. Ask me how I know.

There are several network diagrams throughout VMware’s guide. You’ll find more browsing the internet. I borrowed this one from the UI.

They all look about the same. Don’t worry so much about the internal networking of the supervisor cluster or even the POD or Service CIDRs. For the most part these pieces are autonomous. The workload enablement wizard assigns these CIDR blocks automatically so that means if you leave them alone, you can’t possibly misconfigure them.

What is important can be boiled down to just three required VLANs. Mind you I’m talking solely about Kubernetes on vSphere in the lab here. For now, forget about production VCF deployments and the VLAN requirements it brings to the table (but do scroll down to the end for a link to a click through demo of Kubernetes with VCF).

Just three VLANs. It does sound simple, but where some of the confusion may start is terminology – depending on the source, I’ve seen these VLANs referred to in different ways using different terms. I’ll try to simplify as much as I can.

  1. ESXi host TEP VLAN – Just a private empty VLAN. Must route to Edge node TEP VLAN. Must support minimum 1600 MTU (jumbo frames) both intra VLAN as well as routing jumbo frames to the Edge node TEP VLAN. vmk10 is tied to this VLAN.
  2. Edge node TEP VLAN – Another private empty VLAN. Must route to ESXi host TEP VLAN. Must support minimum 1600 MTU (jumbo frames) both intra VLAN as well as routing jumbo frames to the ESXi host TEP VLAN. The Edge TEP is tied to this VLAN.

    A routed tunnel is established between the ESXi host tunnel endpoints on vmk10 (and vmk11 if you’re deploying with redundancy in mind) and each Edge node TEP interface. If jumbo frames aren’t making it unfragmented through this tunnel, you’re dead in the water.
  3. The third VLAN is what VMware calls the Tier 0 gateway and uplink for transport node on page 49 of their guide. I’ve seen this called the Overlay network. I’ve seen this called the Edge uplink network. The Project Pacific beta quickstart guide called it the Edge Logical Router uplink VLAN as well as the Workload Network VLAN. Later in the wizard it was simply referred to as the Uplink VLAN. Don’t ever confuse this with the TEP VLANs. In all diagrams it’s going to be the External Network or the network where the DevOps staff live. The Tier-0 gateway provides the north/south connectivity between the external network and the Kubernetes stack (which also includes a Tier-1 gateway). Another helpful correlation: the Egress and Ingress CIDRs live on this third VLAN. You’ll find out sooner or later that existing resources such as DNS, NTP, and internet access must exist on this external network.
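
To make that concrete, here is a hypothetical lab worksheet. The VLAN IDs and subnets below are made-up examples, not requirements; substitute your own:

ESXi host TEP VLAN:      VLAN 160, 172.16.160.0/24, gateway 172.16.160.1, MTU 1600+ (vmk10/vmk11 live here)
Edge node TEP VLAN:      VLAN 170, 172.16.170.0/24, gateway 172.16.170.1, MTU 1600+ (Edge TEP interfaces live here)
Edge uplink / External:  VLAN 180, 192.168.180.0/24, gateway 192.168.180.1 (DNS, NTP, internet access; Ingress and Egress CIDRs carved from here)

The two TEP VLANs must route to each other with jumbo frames intact, and everything on the third VLAN needs to reach DNS, NTP, and the internet.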

All of the network diagrams I’ve seen, including the one above, distinguish between the external network and the management network. For the home labbers out there, these two will most often be the same network. In my initial deployment, I made the mistake of deploying Kubernetes with a management VLAN and a separate DevOps VLAN that had no route to the internet. Workload enablement was successful but I found out later that applying a simple YAML resulted in endless failed pods being created. This is because the ESXi host based image fetcher container runtime executive (CRX) had no route to the internet to access public repository images (a firewall blocking traffic can cause this as well). I was seeing errors such as the following in /var/log/spherelet.log on the vSphere host where the pod was placed:

Failed to resolve image: Http request failed. Code 400: ErrorType(2) failed to do request: Head https://registry-1.docker.io/v2/library/nginx/manifests/alpine: dial tcp 34.197.189.129:443: connect: network is unreachable
spherelet.log:time="2020-03-25T02:47:24.881025Z" level=info msg="testns1/nginx-3a1d01bf5d03a391d168f63f6a3005ff4d17ca65-v246: Start new image fetcher instance. Crx-cli cmd args [/bin/crx-cli ++group=host/vim/vmvisor/spherelet/imgfetcher run --with-opaque-network nsx.LogicalSwitch --opaque-network-id a2241f05-9229-4703-9815-363721499b59 --network-address 04:50:56:00:30:17 --external-id bfa5e9d2-8b9d-4b34-9945-5b7452ee76a0 --in-group host/vim/vmvisor/spherelet/imgfetcher imgfetcher]\n"
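
Incidentally, this failure is easy to reproduce (or to verify a fix) with a trivial pod spec pulling a public image. A minimal sketch, assuming you are already logged into the supervisor cluster with kubectl and have a namespace to work in (mine was testns1, per the log above):

kubectl apply -n testns1 -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx-test
spec:
  containers:
  - name: nginx
    image: nginx:alpine
EOF

# If the Egress path to the public registry is broken, failed pods stack up here
kubectl get pods -n testns1 -w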

The NSX-T Manager and Edge nodes both have management interfaces that tie to the management network, but much like vCenter and ESXi management interfaces, these are for management only and are not in the data path, nor are they a part of the Geneve tunnel. As such, the management network does not require jumbo frames.

Early on in the beta, I took some lumps trying to deploy Kubernetes on vSphere. These attempts were unsuccessful for a few reasons, and 100% of the causes were networking problems.

First networking problem: My TEP VLANs were not routed. That was purely my fault for not understanding in full the networking requirements for the two TEP VLANs. Easy fix – I contacted my lab administrator and had him add two default gateways, one for each of the TEP VLANs. Problem solved.

Second networking problem: My TEP VLANs supported jumbo frames at Layer 2 (hosts on the same VLAN could successfully send and receive unfragmented jumbo frames all day), but did not support the routing of jumbo frames. (Using vmkping with the -d switch is very important in testing for jumbo frame success; the command looks something like vmkping -I vmk10 <edge TEP IP> -S vxlan -s 1572 -d.) In other words, when trying to send a jumbo frame from an ESXi host TEP to an Edge TEP on the other VLAN, standard MTU frames made it through, but jumbo frames were dropped at the physical switch interface which was performing the intra-switch intervlan routing.
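
Expanding on that command, a quick test matrix from an ESXi host looks something like this. The Edge TEP IP is a placeholder, and vmk11 only applies if you deployed a second host TEP:

# List the vmkernel interfaces and their MTU (vmk10/vmk11 are the host TEPs)
esxcfg-vmknic -l

# Standard frame across the routed path (should always work)
vmkping -I vmk10 <edge TEP IP> -S vxlan

# Jumbo frame with don't fragment set (this is the test that matters)
vmkping -I vmk10 <edge TEP IP> -S vxlan -s 1572 -d
vmkping -I vmk11 <edge TEP IP> -S vxlan -s 1572 -d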

A problem with jumbo frames can manifest itself into somewhat of a misleading problem and resulting diagnosis. When a jumbo frames problem exists between the two TEP VLANs:

  • Workload enablement appears successful and healthy in the UI
  • The Control Plane Node IP Address is pingable
  • The individual supervisor cluster nodes are reachable on their respective IP addresses and accept kubectl API commands
  • The Harbor Image Registry is successfully deployed

But…

  • The Control Plane Node IP Address is not reachable over https in a web browser
  • The Harbor Image Registry is unreachable via web browser at its published IP address

These are symptoms of an underlying jumbo frames problem but they can be misidentified as a load balancer issue.
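
A couple of quick checks from a workstation on the external network make the pattern visible (the IPs are placeholders):

# Per the symptoms above, kubectl API commands against the individual supervisor
# node IPs still succeed, but large HTTPS responses from the load balanced
# addresses hang or reset:
curl -vk https://<Control Plane Node IP>
curl -vk https://<Harbor Image Registry IP>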

I spent some time on this because my lab administrator assured me jumbo frames were enabled on the physical switch. It took some more digging to find out intervlan routing of jumbo frames was a separate configuration on the switch. To be fair, I didn’t initially ask for this configuration (I didn’t know what I didn’t know at the time). Once that configuration was made on the switch, jumbo frames were making it to both ends of the tunnel across the two VLANs. Problem solved.

Just one more note on testing for intervlan routing of jumbo frames. Although the switch may be properly configured and jumbo frames are making it through between VLANs, I have found that sending vmkping commands with jumbo frames to the switch interfaces themselves (this would be the default gateway for the VLAN) can succeed or fail. I think it all depends on the switch make and model. Call it a red herring and try not to pay attention to it. What’s important is that the jumbo frames ultimately make it through to the opposite tunnel endpoint.

Third networking problem: The third critical VLAN mentioned above (call it the overlay, call it the Edge uplink, call it the External Network, call it the DevOps network) is not well understood and gets implemented incorrectly. There are a few ways you can go wrong here.

  1. Use the wrong VLAN – in other words a VLAN which has no reachable network services such as DNS, NTP, or a gateway to the internet. You’ll be able to deploy the Kubernetes framework but the deployment of pods requiring access to a public image repository will fail. During deployment, failed pods will quickly stack up in the namespace.
  2. Use the correct VLAN but use Egress and Ingress CIDR blocks from some other VLAN. This won’t work, and it is pretty well spelled out everywhere I’ve looked that the Egress and Ingress CIDR blocks need to be on the same VLAN which represents the External Network. Among other things, the Egress portion is used for outbound traffic through the Tier-0 Gateway to the external network. The image fetcher CRX is one such function which uses Egress. The Ingress portion is used for inbound traffic from the External Network through the Tier-0 Gateway. In fact, the first usable IP address in this block ends up being the Control Plane Node IP Address for the Supervisor Cluster where all the kubectl API commands come through. If you’ve just finished enabling workload management and your Control Plane Node IP Address is not on your external network, you’ve likely assigned the wrong Egress/Ingress CIDR addresses.
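
As a hypothetical example (made-up addresses, consistent with the worksheet earlier): if the External Network is 192.168.180.0/24, the Ingress and Egress CIDRs are carved from unused space on that same network, and the first usable Ingress address becomes the Control Plane Node IP Address:

External Network (VLAN 180):  192.168.180.0/24, gateway 192.168.180.1
Ingress CIDR:                 192.168.180.32/27  (first usable address, 192.168.180.33, becomes the Control Plane Node IP Address)
Egress CIDR:                  192.168.180.64/27  (source addresses for outbound traffic such as the image fetcher)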

Fourth networking problem: The sole edge portgroup that I had created on the distributed switch needs to be in VLAN Trunking mode (1-4094), not VLAN with a specified tag. This one can be easy to miss, and early in the beta I missed it. I followed the Project Pacific quickstart guide but don’t ever remember seeing the requirement. It is well documented now on page 55 of the VMware document I mentioned earlier.
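
If you would rather check or fix that with PowerCLI than click through the UI, something along these lines should do it (the portgroup name is a placeholder for your own edge uplink portgroup):

# Switch the edge uplink portgroup from a single VLAN tag to VLAN trunking (1-4094)
Get-VDPortgroup -Name "edge-uplink-pg" | Set-VDVlanConfiguration -VlanTrunkRange "1-4094"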

To summarize the common theme thus far: networking. Understanding the requirements, mapping them to your environment, and getting the configuration right is what makes for a successful vSphere with Kubernetes deployment.

Once the beta had concluded and vSphere 7 was launched, I was anxious to deploy Kubernetes on GA bits. After deploying and configuring NSX-T in the lab, I ran into the licensing obstacle. During the Project Pacific beta, license keys were not an issue. The problem is when we try to enable Workload Management after NSX-T is ready and waiting. Without the proper licensing, I was greeted with “This vCenter does not have the license to support Workload Management.” You won’t see this in a newly stood up greenfield environment if you haven’t gotten a chance to license the infrastructure. Where you will see it is if you’ve already licensed your ESXi hosts with Enterprise Plus licenses, for instance. Since Enterprise Plus licenses by themselves are not entitled to the Kubernetes feature, they will disable the Workload Management feature.

The temporary workaround I found is to simply remove the Enterprise Plus license keys and apply the Evaluation License to the hosts. Once I did this and refreshed the Workload Management page, the padlock disappeared and I was able to continue with the evaluation.

Unfortunately the ESXi host Evaluation License keys are only good for 60 days. As of this writing, vSphere 7 has not yet been GA for 60 days so anyone who stood up a vSphere 7 GA environment on day 1 still has a chance to evaluate vSphere with Kubernetes.

One other minor issue I’ve run into that I’ll mention has to do with NSX-T Compute Managers. A Compute Manager in NSX-T is a vCenter Server registration. You might be familiar with the process of registering a vCenter Server with other products such as storage arrays or data protection software. This really is no different.

However, a problem can present itself whereby a vCenter Server has been registered to an NSX-T Manager previously, that NSX-T Manager is decommissioned (improperly), and then an attempt is made sometime later to register the vCenter Server with a newly commissioned NSX-T Manager. The issue itself is a little deceptive because at first glance that subsequent registration with a new NSX-T Manager appears successful – no errors are thrown in the UI and we continue our work setting up the fabric in NSX-T Manager.

What lurks in the shadows is that the registration wasn’t entirely successful. The Registration Status shows Not Registered and the Connection Status shows Down. It’s a simple fix really – not something ugly that you might expect in the CLI. Simply click on the Status link and you’re offered an opportunity to Select errors to resolve. I’ve been through this motion a few times and the resolution is quick and effortless. Within a few seconds the Registration Status is Registered and the Connection Status is Up.

Deploying Kubernetes on vSphere can be challenging, but in the end it is quite satisfying. It also ushers in Day 1 Kubernetes. Becoming a kubectl Padawan. Observing the rapid deployment and tear down of applications on native VMware integrated Kubernetes pods. Digging into the persistent storage aspects. Deploying a Tanzu Kubernetes Grid Cluster.

Day 2 Kubernetes is also a thing. Maintenance, optimization, housekeeping, continuous improvement, backup and restoration. Significant dependencies now tie into vSphere and NSX-T infrastructure. Keeping these components healthy and available will be more important than ever to maintain a productive and happy DevOps organization.

I would be remiss if I ended this before calling out a few fantastic resources. David Stamen’s blog series Deploying vSphere with Kubernetes provides a no nonsense walk through highlighting all of the essential steps from configuring NSX-T to enabling workload management. He wraps up the series with demo applications and a Tanzu Kubernetes Grid Cluster deployment.

It should be no surprise that William Lam’s name comes up here as well. William has done some incredible work in the areas of automated vSphere deployments for lab environments. In his Deploying a minimal vSphere with Kubernetes environment article, he shows us how we can deploy Kubernetes on a two or even one node vSphere cluster (this is unsupported of course – a minimum of three vSphere hosts is required as of this writing). This is a great tip for those who want to get their hands on vSphere with Kubernetes but have a limited number of vSphere hosts in their cluster to work with. I did run into one caveat with the two node cluster in my own lab – I was unable to deploy a Tanzu Kubernetes Grid Cluster. After deployment and power on of the TKG control plane VM, it waits indefinitely to deploy the three worker VMs. I believe the TKG cluster is looking for three supervisor control plane VMs. Nonetheless, I was able to deploy applications on native pods and it demonstrates that William’s efforts enable the community at large to do more with less in their home or work lab environments. If you find his work useful, make a point to reach out and thank him.

How to Validate MTU in an NSX-T Environment – This is a beautifully written chapter. Round up the usual vmkping suspects (Captain Louis Renault, Casablanca). You’ll find them all here. The author also utilizes esxcli (general purpose ESXi CLI), esxcfg-vmknic (vmkernel NIC CLI), nsxdp-cli (NSX datapath), and edge node CLI for diagnostics.

NSX-T Command Line Reference Guide – I stumbled onto this guide covering nsxcli. Although I didn’t use it for getting the Kubernetes lab off the ground, it looks very interesting and useful for NSX-T Datacenter, so I’m bookmarking it for later. What’s interesting to note here is that nsxcli can be run from the ESXi host via an installed kernel module, as well as from the NSX-T Manager and the edge nodes.

What is Dell Technologies PowerStore – Part 13, Integrating with VMware Tanzu Kubernetes Grid – Itzik walks through VMware Tanzu Kubernetes Grid with vSphere 7 and Tanzu Kubernetes clusters. He walks through the creation of a cluster with Dell EMC PowerStore and vVols.

Lastly, here’s a cool click-through demo of a Kubernetes deployment on VCF 4.0 – vSphere with Kubernetes on Cloud Foundation (credit William Lam for sharing this link).

With that, I introduce a new blog tag: Kubernetes

Peace out.

Site Recovery Manager Firewall Rules for Windows Server

April 29th, 2020

I have a hunch Google sent you here. Before we get to what you’re looking for, I’m going to digress a little. tl;dr folks feel free to jump straight to the frown emoji below for what you’re looking for.

Since the Industrial Revolution, VMware has supported Microsoft Windows and SQL Server platforms to back datacenter and cloud infrastructure products such as vCenter Server, Site Recovery Manager, vCloud Director (rebranded recently to VMware Cloud Director), and so on. However, if you’ve been paying attention to product documentation and compatibility guides, you will have noticed support for Microsoft platforms diminishing in favor of easy to deploy appliances based on Photon OS and VMware Postgres (vPostgres). This is a good thing – spoken by a salty IT veteran with a strong Windows background.

2019 is where we really hit a brick wall. vCenter Server 6.7 is the last version that supports installation on Windows and that ended on Windows Server 2016 – there was never support for Windows Server 2019 (reference VMware KB 2091273 – Supported host operating systems for VMware vCenter Server installation). In vSphere 7.0, vCenter Server for Windows has been removed and support is not available. For more information, see Farewell, vCenter Server for Windows. Likewise, Microsoft SQL Server 2016 was the last version to support vCenter Server (matrix reference).

Site Recovery Manager (SRM) is in the same boat. It was born and bred on Windows and SQL Server back ends. But once again we find a Photon OS-based appliance with embedded vPostgres available, along with product documentation which highlights diminishing support for Microsoft Windows and SQL.

Taking a closer look at the documentation…

Compatibility Matrices for VMware Site Recovery Manager 8.2

Compatibility Matrices for VMware Site Recovery Manager 8.3

  • “Site Recovery Manager Server 8.3 supports the same Windows host operating systems that vCenter Server 7.0 supports.” SRM 8.3 supports vCenter Server 6.7 as well, so that should have been included here also but was left out, probably an oversight.
  • Supported host operating systems for VMware vCenter Server installation (2091273)
  • Takeaway: vCenter Server 7 cannot be installed on Windows. This implies SRM 8.3 supports no version of Windows Server for installation (this implication is not at all correct as SRM 8.3 ships as a Windows executable installation for vSphere 6.x environments). Not a great spot to be in since the Photon OS-based SRM appliance employs a completely different Storage Replication Adapter (SRA) than the Windows installation and not all storage vendors support both (yet).

Ignoring the labyrinth of supported product and platform compatibility matrices above, one may choose to forge ahead and install SRM on Windows Server 2019 anyway. I’ve done it several times in the lab but there was a noted takeaway.

When I logged into the vSphere Client, the SRM plug-in was not visible. In my travels, there are a few reasons why this symptom can occur.

  • The SRM services are not started.
  • The logged on user account is not a member of the SRM Administrators group (yes even super users like administrator@vsphere.local will need to be added to this group for SRM management).
  • The Windows Firewall is blocking ports used to present the plug-in.

Wait, what? The Windows Firewall wasn’t typically a problem in the past. That is correct. The SRM installation does create four inbound Windows Firewall rules (none outbound) on Windows Server up through 2016. However, for whatever reason, the SRM installation does not create these needed firewall rules on Windows Server 2019. The lack of firewall rules allowing SRM related traffic will block the plug-in. Reference Network Ports for Site Recovery Manager.

One obvious workaround here would be to disable the Windows Firewall, but what fun would that be? Also, this may violate IT security plans, trigger an audit, or require exception filings. Been there, done that, ish. Let’s dig a little deeper.

The four inbound Windows Firewall rules ultimately wind up in the Windows registry. A registry export of the four rules actually created by an SRM installation is shown below. Through trial and error I’ve found that importing the rules into the Windows registry with a .reg file results in broken rules so for now I would not recommend that method.

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\SharedAccess\Parameters\FirewallPolicy\FirewallRules]
"{B37EDE84-6AC1-4F7D-9E42-FA44B8D928D0}"="v2.26|Action=Allow|Active=TRUE|Dir=In|App=C:\\Program Files\\VMware\\VMware vCenter Site Recovery Manager\\bin\\vmware-dr.exe|Name=VMware vCenter Site Recovery Manager|Desc=This feature allows connections using VMware vCenter Site Recovery Manager.|"
"{F6EAE3B7-C58F-4582-816B-C3610411215B}"="v2.26|Action=Allow|Active=TRUE|Dir=In|Protocol=6|LPort=9086|Name=VMware vCenter Site Recovery Manager - HTTPS Listener Port|"
"{F6E7BE93-C469-4CE6-80C4-7069626636B0}"="v2.26|Action=Allow|Active=TRUE|Dir=In|App=C:\\Program Files\\VMware\\VMware vCenter Site Recovery Manager\\external\\commons-daemon\\prunsrv.exe|Name=VMware vCenter Site Recovery Manager Client|Desc=This feature allows connections using VMware vCenter Site Recovery Manager Client.|"
"{66BF278D-5EF4-4E5F-BD9E-58E88719FA8E}"="v2.26|Action=Allow|Active=TRUE|Dir=In|Protocol=6|LPort=443|Name=VMware vCenter Site Recovery Manager Client - HTTPS Listener Port|"

The four rules needed can be created by hand in the Windows Firewall UI, configured centrally via Group Policy Object (GPO), or scripted with netsh or PowerShell. I’ve chosen PowerShell and created the script below for the purpose of adding the rules. Pay close attention: two of these rules are application path specific. Change the drive letter and path to the applications as necessary or those two rules won’t work properly.

# Run this PowerShell script directly on a Windows based VMware Site
# Recovery Manager server to add four inbound Windows firewall rules
# needed for SRM functionality.
# Jason Boche
# http://boche.net
# 4/29/20

New-NetFirewallRule -DisplayName "VMware vCenter Site Recovery Manager" -Description "This feature allows connections using VMware vCenter Site Recovery Manager." -Direction Inbound -Program "C:\Program Files\VMware\VMware vCenter Site Recovery Manager\bin\vmware-dr.exe" -Action Allow

New-NetFirewallRule -DisplayName "VMware vCenter Site Recovery Manager - HTTPS Listener Port" -Direction Inbound -LocalPort 9086 -Protocol TCP -Action Allow

New-NetFirewallRule -DisplayName "VMware vCenter Site Recovery Manager Client" -Description "This feature allows connections using VMware vCenter Site Recovery Manager Client." -Direction Inbound -Program "C:\Program Files\VMware\VMware vCenter Site Recovery Manager\external\commons-daemon\prunsrv.exe" -Action Allow

New-NetFirewallRule -DisplayName "VMware vCenter Site Recovery Manager Client - HTTPS Listener Port" -Direction Inbound -LocalPort 443 -Protocol TCP -Action Allow

Test execution was a success.

PS S:\PowerShell scripts> .\srmaddwindowsfirewallrules.ps1


Name                  : {10ba5bb3-6503-44f8-aad3-2f0253c980a6}
DisplayName           : VMware vCenter Site Recovery Manager
Description           : This feature allows connections using VMware vCenter Site Recovery Manager.
DisplayGroup          :
Group                 :
Enabled               : True
Profile               : Any
Platform              : {}
Direction             : Inbound
Action                : Allow
EdgeTraversalPolicy   : Block
LooseSourceMapping    : False
LocalOnlyMapping      : False
Owner                 :
PrimaryStatus         : OK
Status                : The rule was parsed successfully from the store. (65536)
EnforcementStatus     : NotApplicable
PolicyStoreSource     : PersistentStore
PolicyStoreSourceType : Local

Name                  : {ea88dee8-8c96-4218-a23d-8523e114d2a9}
DisplayName           : VMware vCenter Site Recovery Manager - HTTPS Listener Port
Description           :
DisplayGroup          :
Group                 :
Enabled               : True
Profile               : Any
Platform              : {}
Direction             : Inbound
Action                : Allow
EdgeTraversalPolicy   : Block
LooseSourceMapping    : False
LocalOnlyMapping      : False
Owner                 :
PrimaryStatus         : OK
Status                : The rule was parsed successfully from the store. (65536)
EnforcementStatus     : NotApplicable
PolicyStoreSource     : PersistentStore
PolicyStoreSourceType : Local

Name                  : {a707a4b8-b0fd-4138-9ffa-2117c51e8ed4}
DisplayName           : VMware vCenter Site Recovery Manager Client
Description           : This feature allows connections using VMware vCenter Site Recovery Manager Client.
DisplayGroup          :
Group                 :
Enabled               : True
Profile               : Any
Platform              : {}
Direction             : Inbound
Action                : Allow
EdgeTraversalPolicy   : Block
LooseSourceMapping    : False
LocalOnlyMapping      : False
Owner                 :
PrimaryStatus         : OK
Status                : The rule was parsed successfully from the store. (65536)
EnforcementStatus     : NotApplicable
PolicyStoreSource     : PersistentStore
PolicyStoreSourceType : Local

Name                  : {346ece5b-01a9-4a82-9598-9dfab8cbfcda}
DisplayName           : VMware vCenter Site Recovery Manager Client - HTTPS Listener Port
Description           :
DisplayGroup          :
Group                 :
Enabled               : True
Profile               : Any
Platform              : {}
Direction             : Inbound
Action                : Allow
EdgeTraversalPolicy   : Block
LooseSourceMapping    : False
LocalOnlyMapping      : False
Owner                 :
PrimaryStatus         : OK
Status                : The rule was parsed successfully from the store. (65536)
EnforcementStatus     : NotApplicable
PolicyStoreSource     : PersistentStore
PolicyStoreSourceType : Local



PS S:\PowerShell scripts>
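
If you want to double check the result afterward (or confirm whether the rules already exist on a given SRM server), a quick query along these lines should list all four by display name:

# Verify the four SRM inbound rules are present and enabled
Get-NetFirewallRule -DisplayName "VMware vCenter Site Recovery Manager*" |
    Select-Object DisplayName, Enabled, Direction, Action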

After logging out of the vSphere Client and logging back in, the Site Recovery plug-in loads and is available.

Feel free to use this script but be advised, as with anything from this site, it comes without warranty. Practice due diligence. Test in a lab first. Etc.

With virtual appliances being fairly mainstream at this point, this article probably won’t age well, but someone may end up here. Maybe me. It has happened before.

vCenter Server 6 Appliance fsck failed

April 4th, 2016

A vCenter Server Appliance (vSphere 6.0 Update 1b) belonging to me was bounced and for some reason was unbootable. The trouble during the boot process begins with “/dev/sda3 contains a file system with errors, check forced.” At approximately 27% of the way through, the process terminates with “fsck failed. Please repair manually and reboot.”

Unable to access a bash# prompt from the current state of the appliance, I followed VMware KB 2069041 VMware vCenter Server Appliance 5.5 and 6.0 root account locked out after password expiration, particularly the latter portion of it which provides the steps to modify a kernel option in the GRUB bootloader to obtain a root shell (and subsequently run the e2fsck -y /dev/sda3 repair command).

The steps are outlined in VMware KB 2069041 and are simple to follow.

  1. Reboot the VCSA
  2. Be quick about highlighting the VMware vCenter Server appliance menu option (the KB article recommends hitting the space bar to stop the default countdown)
  3. p (to enter the root password and continue with additional commands in the next step)
  4. e (to edit the boot command)
  5. Append init=/bin/bash (followed by Enter to return to the GRUB menu)
  6. b (to start the boot process)

This is where e2fsck -y /dev/sda3 is executed to repair file system errors on /dev/sda3 and allow the VCSA to boot successfully.

When the process above completes, reboot the VCSA and that should be all there is to it.

Update 10/9/17: I ran into a similar issue with VCSA 6.5 Update 1 where the appliance wouldn’t boot and I was left at an emergency mode prompt. In this situation, following the steps above isn’t so straightforward, in part due to the Photon OS splash screen and no visibility to the GRUB bootloader (following VMware KB 2081464). In this situation, I executed fsck /dev/sda3 at the emergency mode prompt, answering yes to all prompts. After reboot, I found this did not resolve all of the issues. I was able to log in by providing the root password twice. The journalctl command revealed a problem with /dev/mapper/log_vg-log. Next I ran fsck /dev/mapper/log_vg-log, again answering yes to all prompts to repair. When that was finished, the appliance was rebooted and came up operational.
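
Pulling that 6.5 recovery together, the sequence at the emergency mode prompt looked roughly like this. The device paths are from my appliance and may differ on yours:

fsck /dev/sda3                # answer yes to all prompts, then reboot
journalctl                    # after reboot, revealed errors on /dev/mapper/log_vg-log
fsck /dev/mapper/log_vg-log   # answer yes to all prompts
reboot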

VMware vCenter Cookbook

July 27th, 2015

Back in June, I was extended an offer from PACKT Publishing to review a new VMware book. I’ve got a lot on my plate at the moment but it sounded like an easier read and I appreciated the offer as well as the accommodation of my request for paperback in lieu of electronic copy so I accepted. I finished reading it this past weekend.

The book’s title is VMware vCenter Cookbook and it is PACKT’s latest addition to an already extensive Cookbook series (Interested in Docker, DevOps, or Data Science? There’s a Cookbook for that). Although it was first published in May 2015, the content isn’t quite so new: its coverage includes vSphere 5, and vSphere 5 only, with a specific focus on vSphere management via vCenter Server as the title of the book indicates. The author is Konstantin Kuminsky and, as I mentioned earlier, the book is made available in both Kindle and paperback formats.

Admittedly I’m not familiar with PACKT’s other Cookbooks but the formula for this one is much the same as the others I imagine: “Over 65 hands-on recipes to help you efficiently manage your vSphere environment with VMware vCenter”. Each of the recipes ties to a management task that an administrator of a vSphere environment might need to carry out day to day, weekly, monthly, or perhaps annually. Some of the recipes can also be associated with and aid in design, architecture, and planning, although I would not say these are the main areas of focus. The majority of the text is operational in nature.

The recipes are organized by chapter and while going from one to the next, there may be a correlation, but often there is not. It should be clear at this point it reads like a cookbook, and not a mystery novel (although for review purposes I did read it cover to cover). Find the vCenter how-to recipe you need via the Table of Contents or the index and follow it. Expect no more and no less.

Speaking of the Table of Contents…

  • Chapter 1: vCenter Basic Tasks and Features
  • Chapter 2: Increasing Environment Availability
  • Chapter 3: Increasing Environment Scalability
  • Chapter 4: Improving Environment Efficiency
  • Chapter 5: Optimizing Resource Usage
  • Chapter 6: Basic Administrative Tasks
  • Chapter 7: Improving Environment Manageability

It’s a desktop reference (or handheld I suppose depending on your preferred consumption model) which walks you through vSphere packaging and licensing on one page, and NUMA architecture on the next. The focus is vCenter Server and perhaps more accurately vSphere management. Fortunately that means there is quite a bit of ESXi coverage as well with management inroads from vCenter, PowerShell, and esxcli. Both Windows and appliance vCenter Server editions are included as well as equally fair coverage of both vSphere legacy client and vSphere web client.

Bottom line: It’s a good book but it would have been better had it been released at least a year or two earlier. Without vSphere 6 coverage, there’s not a lot of mileage left on the odometer. In fairness I will state that many of the recipes will translate identically or closely to vSphere 6, but not all of them. To provide a few examples, VM templates and their best operational practices haven’t changed that much. On the other hand, there are significant differences in FT capabilities and limitations between vSphere 5 and vSphere 6. From a technical perspective, I found it pretty spot on which means the author and/or the reviewers did a fine job.

Thank you PACKT Publishing for the book and the opportunity.

Legacy vSphere Client Plug-in 1.7 Released for Storage Center

July 23rd, 2014

Dell Compellent Storage Center customers who use the legacy vSphere Client plug-in to manage their storage may have noticed that the upgrade to PowerCLI 5.5 R2 which released with vSphere 5.5 Update 1 essentially “broke” the plug-in. This forced customers to make the decision to stay on PowerCLI 5.5 in order to use the legacy vSphere Client plug-in, or reap the benefits of the PowerCLI 5.5 R2 upgrade with the downside being they had to abandon use of the legacy vSphere Client plug-in.

For those that are unaware, there is a 3rd option and that is to leverage vSphere’s next generation web client along with the web client plug-in released by Dell Compellent last year (I talked about it at VMworld 2013 which you can take a quick look at below).

Although VMware strongly encourages customers to migrate to the next generation web client long term, I’m here to tell you that in the interim Dell has revved the legacy client plug-in to version 1.7, which is now compatible with PowerCLI 5.5 R2. Both the legacy and web client plug-ins are free and quite beneficial from an operations standpoint so I encourage customers to get familiar with the tools and use them.

Other bug fixes in this 1.7 release include:

  • Datastore name validation not handled properly
  • Create Datastore, map existing volume – Server Mapping will be removed from SC whether or not it was created by VSP
  • Add Raw Device wizard is not allowing to uncheck a host once selected
  • Remove Raw Device wizard shows wrong volume size
  • Update to use new code signing certificate
  • Prevent Datastores & RDMs with underlying Live Volumes from being expanded or deleted
  • Add support for additional Flash Optimized Storage Profiles that were added in SC 6.4.2
  • Block size not offered when creating VMFS-3 Datastore from Datacenter menu item
  • Add Raw Device wizard is not allowing a host within the same cluster as the selected host to be unchecked once it has been selected
  • Add RDM wizard – properties screen showing wrong or missing values
  • Expire Replay wizard – no error reported if no replays selected
  • Storage Consumption stats are wrong if a Disk folder has more than one Storage Type

Failed to connect to VMware Lookup Service

March 14th, 2014

Judging by the search results returned by Google, it looks like my blog is among the few virtualization blogs remaining which does not have a writeup on this topic.  It’s Friday so… why not.

Scenario:  vSphere 5.5 Update 1 VMware vSphere Web Client fails to log into vCenter Server (appliance version) with the following error returned:

Failed to connect to VMware Lookup Service

https://fqdn:7444/lookupservice/sdk –

SSL certificate verification failed.

Contributing factors in my case which may have played a role in this once working environment:

  1. Recently upgraded vCenter 5.5.0 Server appliance to Update 1 (unlikely as other similar environments were not impacted after upgrade)
  2. This particular vCenter appliance was deployed as a vApp from a vCloud Director catalog (likely, but unknown at this time whether a customization was possible or attempted during deployment)
  3. The hostname of the appliance may have been changed recently (very likely)

The solution is quite simple.

  1. Log into the vCenter Server appliance management interface (https://fqdn:5480/)
  2. Navigate to the Admin tab
  3. Certificate regeneration enabled: choose Yes
  4. Click the Submit button
  5. Navigate to the System tab
  6. Reboot the appliance

After the appliance reboots

  1. Log into the vCenter Server appliance management interface (https://fqdn:5480/)
  2. Navigate to the Admin tab
  3. Certificate regeneration enabled: choose No
  4. Click the Submit button
  5. Log out of the vCenter Server appliance management interface
  6. Log into the VMware vSphere Web Client normally

Admittedly, I recalled the Certificate regeneration feature first by logging into the vCenter Server appliance management interface, but then verified its purpose with a search. The search results turned up Failed to connect to VMware Lookup Service – SSL Certificate Verification Failed (among many other blog posts as mentioned earlier) in addition to VMware KB 20333338 Troubleshooting the vCenter Server Appliance with Single Sign-On login. Both more or less highlight a discrepancy between the appliance hostname and the SSL certificate, resulting in the need to regenerate the certificate to match the currently assigned hostname.

I ran across another issue this week during the Update 1 upgrade to the vCenter appliance which I may or may not get to writing about today.

At any rate, have a wonderful and Software Defined weekend!

vCenter Server Appliance 5.5 root account locked out after password expiration

January 10th, 2014

Thanks to Chris Colotti, I learned of a new VMware KB article today which could potentially have widespread impact, particularly in lab, development, or proof of concept environments.  The VMware KB article number is 2069041 and it is titled The vCenter Server Appliance 5.5 root account locked out after password expiration.

In summary, the root account of the vCenter Server Appliance version 5.5 becomes locked out 90 days after deployment or the last root account password change.  This behavior is by design and follows a security best practice of password rotation.  In this case, the required password rotation interval is 90 days, after which the account will be forcefully locked out if the password has not been changed.

The KB article describes processes to prevent a forced lockout as well as unlocking a locked out root account.

Approximately 90 days have elapsed since the release of vSphere 5.5 and I imagine this issue will quickly begin surfacing in large numbers where the vCenter Server Appliance 5.5 has been deployed using system defaults.

Update 6/16/16: For more information on vCenter Server Appliance password policies, including the local root account, check out vCSA 6.0 tricks: shell access, password expiration and certificate warnings.