Posts Tagged ‘Networking’

vCloud Director Error Cannot delete network pool

August 15th, 2015

I ran into a small problem this week in vCloud Director whereby I was unable to delete a Network Pool. The error message stated “Cannot delete network pool because it is still in use” and went on to list the in-use items along with a moref identifier. This was not right because I had verified there were no vApps tied to the Network Pool. Furthermore, the item listed as still in use was a dynamically created dvportgroup which also no longer existed on the vNetwork Distributed Switch in vCenter.

I suspect this situation came about due to running out of available storage space earlier in the week on the Microsoft SQL Server where the vCloud database is hosted. I was performing Network Pool work precisely when that incident occurred and I recall an error message at the time in vCloud Director regarding tempdb.

I tried removing stale state data from the QRTZ tables, an approach I blogged about here a few years ago and one that has worked for specific issues in the past, but unfortunately it was no help here. Searching the VMware Communities turned up only sparse threads about roughly the same problem occurring with Org vDC Networks. In those cases, manually editing the vCloud Director database was required.

An obligatory warning on vCloud database editing: do as I say, not as I do. Editing the vCloud database should be performed only with the guidance of VMware support. Above all, create a point-in-time backup of the vCloud database with all vCloud Director cell servers stopped (service vmware-vcd stop). There are a variety of methods for taking this database backup; use whichever method is most familiar to you and works in your environment.
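If you want a quick native SQL Server backup before touching anything, a copy-only full backup is one simple option. This is just a sketch; the database name matches the tables referenced below, while the destination path is a placeholder for your environment:

  -- Run while all vCloud Director cells are stopped
  -- The destination path is a placeholder; point it at your usual backup location
  BACKUP DATABASE [vcloud]
  TO DISK = N'D:\Backups\vcloud_pre_edit.bak'
  WITH COPY_ONLY, INIT, CHECKSUM, STATS = 10;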

Opening up Microsoft SQL Server Management Studio, I found rows in two different tables which need to be deleted to fix this. The deletes have to be done in the correct order or else a REFERENCE constraint conflict occurs in Microsoft SQL Server Management Studio and the statement will be terminated.

So after stopping the vCloud Director services and getting a vcloud database backup…

Step 1: Delete the row referencing the dvportgroup in the [vcloud].[dbo].[network_backing] table:
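In practice this boils down to a statement along the following lines. The column holding the dvportgroup moref and the moref value itself are placeholders for illustration, so run the SELECT first and confirm that exactly one row (the orphaned dvportgroup) comes back before deleting anything:

  -- Confirm the orphaned dvportgroup row first (column name and moref are placeholders)
  SELECT * FROM [vcloud].[dbo].[network_backing]
  WHERE backing_moref = 'dvportgroup-1234';

  -- Then remove it
  DELETE FROM [vcloud].[dbo].[network_backing]
  WHERE backing_moref = 'dvportgroup-1234';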

Step 2: Delete the row referencing the unwanted Network Pool in the [vcloud].[dbo].[network_pool] table:
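Same approach for the pool itself. Again, the column and pool name are placeholders, so verify the row with a SELECT before removing it:

  -- Confirm the unwanted Network Pool row (column and name are placeholders)
  SELECT * FROM [vcloud].[dbo].[network_pool]
  WHERE name = 'Orphaned-Network-Pool';

  -- Then remove it
  DELETE FROM [vcloud].[dbo].[network_pool]
  WHERE name = 'Orphaned-Network-Pool';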

That should take care of it. Start the vCloud Director service on all cell servers (service vmware-vcd start) and verify the Network Pool has been removed.
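On each cell that looks something like the following; the log path assumes the default installation directory, so adjust it if vCloud Director was installed elsewhere:

  # On each cell server
  service vmware-vcd start
  # Watch the cell finish initializing before moving on (default install path assumed)
  tail -f /opt/vmware/vcloud-director/logs/cell.log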

vSphere Consulting Opportunity in Twin Cities

December 14th, 2013

If you know me well, you know the area I call home.  If you’re a local friend, acquaintance, or a member of any of the three Minnesota VMware User Groups, then an opportunity has crossed my desk which you or someone you know may be interested in.

A local business here in the Twin Cities has purchased vSphere and EMC VNXe storage infrastructure and is looking for a Consulting Engineer to deploy the infrastructure per an existing design.

Details:

  • Install and configure VMware vSphere 5.1 on two hosts
  • Install and configure VMware vCenter
  • Install and configure VMware Update Manager
  • Configure vSphere networking
  • Configure EMC VNXe storage per the final design

It’s a great opportunity to help a locally owned business deploy a vSphere infrastructure and I would think this would be in the wheelhouse of the 2,000+ people I’ve met while running the Minneapolis VMware User Group.  As much as I’d love to knock this out myself, I’m a Dell Storage employee and as such I’m removing myself as a candidate for the role.  The best way I can help is to get the word out into the community.

If you’re interested, email me with your contact information and I’ll get you connected to the Director.

Happy Holidays!

Adding an IP Alias to the vCloud Director Cell Server

July 5th, 2012

Hola! Yo Soy Dora!  I hope you are having a great week and for those in the US, I hope your 4th of July holiday was fun and relaxing.

Here’s another “how to” for those not really familiar with Linux when standing up a vCloud Director infrastructure.  If you’re following the documentation, you’ll notice on page 13 of the vCloud Director Installation and Configuration Guide that two NICs or an IP alias are required to support two separate SSL connections on each vCloud Director cell server.  One IP is used for the vCloud Director HTTP service and the other is used for the console proxy service.  I’ve deployed both methods, multiple NICs and IP aliasing, for the VCD cell server, and neither has a distinct advantage over the other in terms of performance or other important metrics.  Where both the HTTP and console proxy addresses are on the same subnet, I prefer the IP alias method to keep things a little cleaner, but using two NICs makes it more obvious at a glance how the VCD cell server is built and configured from a network standpoint.

To wrap some visualization around the two options, if you’re not familiar with Linux IP aliasing, you’d probably deploy each VCD cell server in a multihomed configuration with a minimum of two NICs and the two IP addresses required for VCD, one IP established for each of the required SSL connections.

[Diagram: VCD cell server with two NICs, one IP address per SSL connection]

The IP alias method involves just a single NIC with two IP addresses on the same subnet, sharing a common mask and default gateway, for the two required SSL connections.  Don’t forget that with either method, unless routed NFS is available on the network, each VCD cell server would likely have one additional NIC dedicated to an NFS network for vCloud Director Transfer Storage, assuming the clustered cell configuration recommended for production and highly available cloud infrastructures.

[Diagram: VCD cell server with a single NIC carrying both IP addresses via an IP alias]

I think everyone knows how to install and configure a multihomed server, so this writing will focus on adding an IP alias to a NIC in RHEL 5 Update 7, or at least on how I learned to do it via the command line.  I’ll also show a second method of adding an IP alias through the GUI (X is enabled by default in RHEL 5.7).

Assuming RHEL 5 Update 7 is already installed with a NIC having an IP address 192.168.0.10, adding an additional IP address via an alias takes just a few steps via CLI.

  1. Use nano -w /etc/sysconfig/network-scripts/ifcfg-eth0 to edit the network configuration for eth0.  If it exists, remove the line GATEWAY=192.168.0.1 or comment it out by placing a hash (#) character at the beginning of the line like so: # GATEWAY=192.168.0.1  Save and exit nano with CTRL+X.
  2. Make a copy of ifcfg-eth0 to use for the IP alias.  Do this with the command cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-eth0:0
  3. Use nano -w /etc/sysconfig/network-scripts/ifcfg-eth0:0 to edit the network configuration for eth0:0.  Change DEVICE=eth0 to read DEVICE=eth0:0.  Change IPADDR=192.168.0.10 to read IPADDR=192.168.0.11  Change ONBOOT=yes to read ONPARENT=yes  Save and exit nano with CTRL+X.
  4. Use nano -w /etc/sysconfig/network to add a commonly shared default gateway for eth0 and eth0:0.  Add the line GATEWAY=192.168.0.1  Save and exit nano with CTRL+X.
  5. Restart networking with service network restart (the finished files are sketched below)
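For reference, here is roughly what the two edited files end up containing.  The NETMASK value and hostname are placeholders from this example; yours will mirror whatever is already present in ifcfg-eth0 and /etc/sysconfig/network:

  # /etc/sysconfig/network-scripts/ifcfg-eth0:0
  DEVICE=eth0:0
  BOOTPROTO=static
  IPADDR=192.168.0.11
  NETMASK=255.255.255.0
  ONPARENT=yes

  # /etc/sysconfig/network
  NETWORKING=yes
  HOSTNAME=vcd-cell01.example.com
  GATEWAY=192.168.0.1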

At this point, the Linux platform has a single NIC with two IP addresses and the installation of vCloud Director on this cell can begin.
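A quick way to sanity check the result before kicking off the VCD installation (addresses from the example above):

  # Both the primary address and the alias should be listed
  ifconfig eth0
  ifconfig eth0:0
  # And both should answer
  ping -c 3 192.168.0.10
  ping -c 3 192.168.0.11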

A second method to accomplish the above would be through the GUI by running the Networking application in RHEL 5 Update 7.

Seen here, eth0 is already configured.  Click the New button to add an IP alias:

[Screenshot: RHEL Network Configuration tool showing eth0 already configured]

Select Ethernet connection, choose the existing NIC for eth0, assign the IP address, Subnet Mask, and Default Gateway for the alias, and then lastly click on the Activate button with eth0:1 highlighted.

[Screenshot: adding a new Ethernet device and activating the eth0:1 alias]

Once again, at this point, the Linux platform has a single NIC with two IP addresses and the installation of vCloud Director on this cell can begin.  Highlighted in yellow below is the IP alias or second IP address bound to eth0:

[Screenshot: the second IP address (alias) bound to eth0]

I’ve found that the GUI approach makes steps 1 and 4 from the CLI approach above unnecessary.  Basically, it skips the steps where the default gateway configuration is moved from the individual ifcfg-eth0 network startup scripts to the centralized /etc/sysconfig/network location, which also affirms that the GATEWAY= entry may remain in each of the individual ifcfg-eth0 network startup scripts.  In the end, both methods work for a vCloud Director cell server, however I imagine adding an additional NIC hard wired to an access port not on the 192.168.0.0 subnet would have issues with a GATEWAY=192.168.0.1 in /etc/sysconfig/network.
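To make the difference concrete, the two gateway layouts boil down to roughly the following; this is a sketch based on my lab, and the exact files the GUI touches may vary:

  # CLI approach: gateway centralized
  # /etc/sysconfig/network
  GATEWAY=192.168.0.1
  # (no GATEWAY= line in ifcfg-eth0 or ifcfg-eth0:0)

  # GUI approach: gateway left per interface
  # /etc/sysconfig/network-scripts/ifcfg-eth0
  GATEWAY=192.168.0.1
  # (no GATEWAY= line required in /etc/sysconfig/network)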

Spousetivities Is Packing For Boston

June 5th, 2012


Dell Storage Forum kicks off in Boston next week and Spousetivities will be there to ensure a good time is had by all.  If you’ve never been to Boston or if you haven’t had a chance to look around, you’re in for a treat.  Crystal has an array of activities queued up (see what I did there?) including  whale watching, a tour of MIT and/or Harvard via trolley or walking, a trolley tour of historic Boston (I highly recommend this one, lots of history in Boston), a wine tour, as well as a welcome breakfast to get things started and a private lunch cruise.

If you’d like to learn more or if you’d like to sign up for one or more of these events, follow this link – Spousetivities even has deals to save you money on your itinerary.

We hope to see you there!


Top Blog 2012 Results

February 26th, 2012

For several years, my friend in blogging, virtualization, storage, and cigars, Eric Siebert, has conducted an annual online survey where virtualization community members can vote for their top-10 blogs.  The latest results were just released this morning along with a video counting down the top-25.  Once again, I’ve been fortunate enough to remain among the top-10 of 187 VMware virtualization blogs.  I have slipped a few spots over the past few years but nonetheless it’s an honor to be recognized among so much great talent.

My thanks and appreciation go out to Eric Siebert, who spent well over 30 hours of his own time making this year’s contest successful.  Of course I’d also like to thank the readers who voted for my blog, effectively letting me know that the content I produce is valuable to the community.  That is one of the reasons I started blogging and it is the reason I will continue to do so.  Last but not least, thank you TrainSignal for sponsoring this year’s contest.

A full compilation of results and categories can be found at Eric’s site using the link above.  Following is an excerpt displaying the top-25:

Blog | Rank | Previous | Total Votes | Total Points | #1 Votes
Yellow Bricks (Duncan Epping) | 1 | 1 | 697 | 5440 | 243
Scott Lowe | 2 | 3 | 480 | 3034 | 25
NTPro.nl (Eric Sloof) | 3 | 4 | 419 | 2592 | 45
Virtual Geek (Chad Sakac) | 4 | 2 | 381 | 2298 | 46
Frank Denneman | 5 | 6 | 373 | 2214 | 19
RTFM Education (Mike Laverick) | 6 | 5 | 337 | 1775 | 6
Virtu-al (Alan Renouf) | 7 | 9 | 294 | 1599 | 10
Virtually Ghetto (William Lam) | 8 | 25 | 288 | 1522 | 21
Virtualization Evangelist (Jason Boche) | 9 | 8 | 283 | 1392 | 15
vSphere-land (Eric Siebert) | 10 | 7 | 264 | 1267 | 9
The SLOG (Simon Long) | 11 | 11 | 225 | 1258 | 23
Virtual Storage Guy (Vaughn Stewart) | 12 | 15 | 218 | 1245 | 48
vReference (Forbes Guthrie) | 13 | 19 | 219 | 1123 | 14
LucD (Luc Dekens) | 14 | 21 | 174 | 1055 | 20
Gabe’s Virtual World (Gabriel Van Zanten) | 15 | 10 | 204 | 995 | 19
Nickapedia (Nicholas Weaver) | 16 | 24 | 171 | 948 | 14
My Virtual Cloud (Andre Leibovici) | 17 | 39 | 150 | 914 | 25
TechHead (Simon Seagrave) | 18 | 14 | 166 | 904 | 17
VMGuru.nl (Various) | 19 | 13 | 179 | 815 | 21
ESX Virtualization (Vladan Seget) | 20 | 23 | 138 | 804 | 19
Chris Colotti | 21 | - | 119 | 733 | 28
VMware Tips (Rick Scherer) | 22 | 18 | 155 | 718 | 5
Pivot Point (Scott Drummonds) | 23 | 17 | 114 | 615 | 1
Brian Madden | 24 | - | 96 | 581 | 6
Stephen Foskett, Pack Rat | 25 | - | 116 | 562 | 1

Jobs

February 25th, 2012

I receive a lot of communication from recruiters, some of which I’m allowed to share, so I’ve decided to try something.  On the Jobs page, I’ll pass along virtualization- and cloud-centric opportunities – mostly US-based but in some cases throughout the globe.  Only recruiter requests will be posted.  I won’t syndicate content easily found on the various job boards.  If you’re currently on the bench or looking for a new challenge, you may find it here.  Don’t tell them Jason sent you.  I receive no financial gain or benefit otherwise but I thought I could do something with these opportunities other than deleting them.  Best of luck in your search.

In case you missed the link, the Jobs page.

Cloning VMs, Guest Customization, & vDS Ephemeral Port Binding

November 25th, 2011

I spent a lot of time in the lab over the past few days.  I had quite a bit of success but I did run into one issue in which the story does not have a very happy ending.

The majority of my work involved networking: I decommissioned all legacy vSwitches in the vSphere 5 cluster and converted all remaining VMkernel port groups to the existing vNetwork Distributed Switch (vDS), where I was already running the majority of the VMs on Static binding port groups.  In the process, some critical infrastructure VMs were also moved to the vDS, including the vCenter, SQL, and Active Directory domain controller servers.  Because of this, I elected to implement Ephemeral – no binding for the port binding configuration of the VM port group which all VMs were connected to, including some powered off VMs I used for cloning new virtual machines.  This decision was made in case there was a complete outage in the lab.  Static binding presents issues where, in some circumstances, VMs can’t power on when the vCenter Server (the control plane of the vDS) is down or unavailable.  Configuring the port group for Ephemeral – no binding works around this by allowing VMs to power on and claim their vDS ports while the vCenter Server is down.  There’s a good blog article on this subject by Eric Gray which you can find here.

Everything was working well with the new networking configuration until the following day when I tried deploying new virtual machines by cloning powered off VMs which were bound to the Ephemeral port group.  After the cloning process completed, the VM powered on for the first time and Guest Customization was then supposed to run.  This is where the problems came up.  The VMs would essentially hang just after guest customization was invoked by the vCenter Server.  While watching the remote console of the VM, it was evident that Guest Customization wasn’t starting.  At this point, the VM can’t be powered off – an error is displayed:

Cannot power Off vm_name on host_name in datacenter_name: The attempted operation cannot be performed in the current state (Powered on).

DRS also starts producing occasional errors on the host:

Unable to apply DRS resource settings on host host_name in datacenter_name. The operation is not allowed in the current state.. This can significantly reduce the effectiveness of DRS.

VMware KB 1004667 speaks to a similar circumstance where a blocking task on a VM (in this case a VMware Tools installation) prevents any other changes to it, which explains why the VM can’t be powered off until the VMware Tools installation or Guest Customization process either ends or times out.

Finally, the following error in the cluster Events is what put me on to the suspicion of Ephemeral binding as the source of the issues:

Error message on vm_name on host_name in datacenter_name: Failed to connect virtual device Ethernet0.

Error Stack:

Failed to connect virtual device Ethernet0.

Unable to get networkName or devName for ethernet0

Unable to get dvs.portId for ethernet0

I searched the entire vSphere 5 document library for issues or limitations related to the use of Ephemeral – no binding but came up empty.  This reinforced my assumption that Ephemeral binding across the board for all VMs was a supported configuration.  Perhaps it is for running virtual machines, but in my case it fails when used in conjunction with cloning and guest customization.  In the interim, I’ve moved from Ephemeral binding back to Static binding.  Cloning problem solved.