Twin Cities PowerShell Users Group Meeting March 8th

March 7th, 2011 by jason 2 comments »

The next Twin Cities PowerShell Users Group will convene on March 8th at 4:30 pm (THAT’S TOMORROW!) at the Microsoft Office in Bloomington. There are three reasons I am encouraging as many people as possible to attend this event.

Date:           March 08, 2011
Time:           4:30-6:00 p.m.
Location:     8300 Norman Center Drive, 9th Floor, Bloomington, MN 55437

Please attend if you are able, and forward this invite to anybody else that you feel might be interested in attending. RSVP at this link.

http://www.tcposhug.com/

The content being presented is focused on leveraging PowerCLI to manage and monitor your VMware environment. PowerCLI is an extremely powerful set of capabilities which will allow you to automate and manage your environment in a very efficient manner. Being able to leverage PowerCLI will save you time and make you a better VMware administrator. Additionally, this skill set is applicable to many other aspects of IT.

The presenter at this event is Ryan Grendahl from Datalink. For those of you who don’t know Ryan, he is extremely strong around VMware, storage, and automation. In fact, Ryan recently attained his VCDX, becoming one of only 66 people in the world to earn this very highly regarded certification. Ryan is very proficient and knowledgeable around PowerCLI and I believe that you will learn a lot by attending.

This event is at the Microsoft office in Bloomington. I would love to see a HUGE turnout to this event so that the Microsoft staff can see how interested people are in VMware based solutions. I’m hoping that we can make this a standing room only turnout.

Tiny Core Linux and Operational Readiness

February 28th, 2011 by jason 11 comments »

When installing, configuring, or managing VMware virtual infrastructure, one of the steps which should be performed before releasing a host (back) to production is a set of operational readiness tests. One test which is quite critical is that of virtual infrastructure networking. After all, what good is a running VM if it has no connectivity to the rest of the network? Each ESX or ESXi host pNIC should be individually tested for internal and upstream connectivity, VLAN tagging functionality if in use (quite often it is), proper failover and failback, and jumbo frames at the guest level if used.
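Most of these checks can be driven with nothing more than ping. The jumbo frames test in particular is worth knowing: from a Linux guest with iputils ping (note that busybox ping lacks the flag), send a don't-fragment ping whose payload plus headers adds up to the full 9000-byte MTU. A minimal sketch; the target address is a placeholder for a jumbo-enabled host on your test subnet:

```shell
# 8972-byte payload + 8-byte ICMP header + 20-byte IP header = 9000 bytes.
# -M do sets the don't-fragment bit, so any hop with a smaller MTU fails fast.
ping -c 3 -M do -s 8972 192.168.10.50
```

If this fails while a normal-sized ping succeeds, jumbo frames are misconfigured somewhere between the guest vNIC, the vSwitch, and the upstream physical switch.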

There are several types of VMs or appliances which can be used to generate basic network traffic for operational readiness testing.  One that I’ve been using recently (introduced to me by a colleague) is Tiny Core Linux.  To summarize:

Tiny Core Linux is a very small (10 MB) minimal Linux GUI Desktop. It is based on the Linux 2.6 kernel, Busybox, Tiny X, and Fltk. The core runs entirely in RAM and boots very quickly. Also offered is Micro Core, a 6 MB image that is the console-based engine of Tiny Core. CLI versions of Tiny Core’s programs allow the same functionality as Tiny Core’s extensions, only starting with a console-based system.

TCL carries with it a few benefits, some of which are tied to its small stature:

  • The minimalist approach makes deployment simple.
  • At just 10MB, it’s extremely portable and boots fast.
  • As a Linux OS, it’s freely distributable without the complexities of licensing or activation.
  • It’s compatible with VMware hardware 7 and the Flexible or E1000 vNIC, making it a good network test candidate.
  • No installation is required.  It runs straight from an .ISO file or can boot from a USB drive.
  • A point-and-click GUI provides ease of use and configuration for any user.
  • When deployed with internet connectivity, it has the ability to download and install useful applications, such as FileZilla or Firefox, from an online repository.  There are tons of free applications in the repository.

As I mentioned before, deployment of TCL is pretty easy.  Create a VM shell with the following properties:

  • Other Linux (32-bit)
  • 1 vCPU
  • 256MB RAM
  • Flexible or E1000 vNIC
  • Point the virtual CD/DVD ROM drive to the bootable .ISO
  • No HDD or SCSI storage controller required

First boot splash screen.  Nothing really exciting here other than optional boot options, which aren’t required for the purposes of this article.  Press Enter to continue the boot process:

SnagIt Capture

After pressing Enter, the boot process is briefly displayed:

SnagIt Capture

Once booted, the first step would be to configure the network via the Panel applet at the bottom of the Mac-like menu:

SnagIt Capture

If DHCP is enabled on the subnet, an address will be automatically acquired by this point.  Otherwise, give eth0 a static TCP/IP configuration.  Name Servers are optional and not required for basic network connectivity unless you would like to test name resolution in your virtual infrastructure:

SnagIt Capture
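For reference, the same static configuration can be applied from a TCL terminal with busybox networking commands. A sketch with placeholder addresses; substitute your own test subnet before use:

```shell
# Assign a static address to eth0 and bring the interface up (example values).
ifconfig eth0 192.168.10.50 netmask 255.255.255.0 up
# Add a default route via the example gateway.
route add default gw 192.168.10.1
# Optional: a name server, only needed if you want to test name resolution.
echo "nameserver 192.168.10.10" > /etc/resolv.conf
```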

Once TCP/IP has been configured, a Terminal can be opened up and a basic ping test can be started.  You could change the IP address and vNIC portgroup to test different VLANs, but my suggestion would be to spawn multiple TCL instances, one per VLAN under test, because you’ll need to vMotion the TCL VMs to each host being tested and you don’t want to continuously be modifying the TCP/IP configuration:

SnagIt Capture
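Rather than eyeballing a continuous ping in each instance, the same check can be scripted as a quick sweep of each VLAN's gateway from any shell-capable test VM. A minimal sketch; the gateway addresses are placeholders for your own VLANs:

```shell
#!/bin/sh
# Hypothetical per-VLAN gateway addresses; substitute your own.
for gw in 192.168.10.1 192.168.20.1 192.168.30.1; do
    # -c 3: three attempts; -W 2: give up after 2 seconds per reply.
    if ping -c 3 -W 2 "$gw" > /dev/null 2>&1; then
        echo "gateway $gw: reachable"
    else
        echo "gateway $gw: FAILED"
    fi
done
```

Re-run the sweep after each pNIC failover and failback event to confirm every VLAN still passes traffic through the surviving uplink.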

What else of interest is in the Panel applet besides Network configuration?  Some ubiquitous items such as date/time configuration, disk and terminal services tools, and wallpaper configuration:

SnagIt Capture

The online application repository is packed with what seems like thousands of apps:

SnagIt Capture

After installing FileZilla, it’s available as an applet:

SnagIt Capture

FileZilla is fully functional:

SnagIt Capture

So I’ve only been using Tiny Core Linux as a network testing appliance, but clearly it has some other uses when paired with extensible applications.  A few other things that I’ll point out are:

  1. TCL can be Suspended in order to move it to other clusters (with compatible CPUs) so that both a host and a storage migration can be performed in a single step.  Once TCL reaches its destination cluster, resume it.
  2. During my tests, TCL continued to run without issue after being severed from its boot .ISO.  This is possible because it boots entirely into RAM and runs from there from that point on.

I’ve been watching Tiny Core Linux for several months, and the development efforts appear fairly aggressive and backed by an individual or group with a lot of talent and energy, which is good to see.  As of this writing, version 3.5 is available.  Give Tiny Core Linux a try.

WordPress 3.1 Upgrade Issues

February 27th, 2011 by jason 3 comments »

I noticed this evening that WordPress 3.1 was available and my blog’s dashboard was coaxing me to upgrade.  Every single time I have upgraded, I have made a backup beforehand.  At the end of a long week, my logic was shot and I proceeded with the upgrade without a backup.  As luck would have it, my Windows Server 2003 and IIS based blog no longer worked.  Page loads were an endless hourglass, with no 404 or any other web browser errors.  However, another symptom was the w3wp.exe process (this is IIS) on my server consuming extremely heavy amounts of CPU during the endless page loads.  When I cancelled the page load, CPU utilization went back down to normal.

As I have an ongoing obligation to blog sponsors, not to mention I was mentally drained, I was feeling pretty screwed at this point, but was prepared to restore from the previous night’s Veeam file level backups.  I turned to Google looking for other WordPress upgrade experiences.  Search results quickly led me to this thread, in which a ton of users reported the same issue.  A chap by the moniker of jarnez had the solution, or at least a workaround, which worked for me as well as others.  Open the blog’s admin dashboard (thankfully this is still functional) and install the Permalink Fix & Disable Canonical Redirects Pack plugin, and all is back to normal again.

Thank you jarnez!!!

VMTurbo Introduces Real-time Management Suite for Virtualized Data Centers

February 18th, 2011 by jason 2 comments »

Press Release:

VMTurbo Introduces Real-time Management Suite for Virtualized Data Centers

Holistic suite ‘ties the viewing with the doing’ by proactively preventing problems and recommending and automating corrective actions for healthy and efficient environments

Valhalla, NY, February 15, 2011 — VMTurbo, provider of software to analyze, optimize and control the virtualized data center, today announced availability of the full VMTurbo Virtualization Management Suite.  Unique in its ability to turn insights into actions, VMTurbo pinpoints problems, identifies their impact and recommends corrective actions, which can be automated to ensure healthy and efficient virtual environments.

“VMTurbo has given HD Supply the visibility required to eliminate storage I/O bottlenecks and stabilize VM availability in our data centers,” said Brad Cowles, director of information technology at HD Supply, one of the largest diversified wholesale distributors in North America. “At the same time, VMTurbo is collecting the data HD Supply needs to optimize the environment as we move toward our goal of virtualizing 75% of our enterprise applications by 2014.”

VMTurbo is the only virtualization management solution to:

  • Combine real-time operational performance metrics with unique analytics to drive a broad set of workload management actions that maintain virtual infrastructure operations within pre-defined performance constraints, in order to guarantee service levels and maximize the ROI of server, storage and data center facilities;
  • Deliver performance at lowest infrastructure cost by automating the decision of what workload to run where and when in order to maximize the ROI of virtualized and cloud environments, and reduce both operating and capital expenses;
  • Ensure ongoing pro-active management to maintain a healthy and efficient data center;
  • Support systemic life-cycle management of the data center via an integrated suite that helps administrators and IT leadership organize operational management into consistent integrated workflows.

“By ensuring quality of service for mission-critical applications through proper workload balancing and eliminating and preventing problems, VMTurbo lets system administrators and infrastructure operations managers sleep at night,” said Shmuel Kliger, President and CEO, VMTurbo.  “With the enterprise-class ability to scale to thousands of VMs and beyond, VMTurbo is a life-saver as enterprises scale out their virtualization deployments to distributed data centers and cloud-scale environments.”

The VMTurbo Virtualization Management Suite – which includes Monitor, Reporter, Planner and Optimizer modules – is packaged in a single virtual appliance, making it easy to deploy, configure, operate and upgrade. Installed in minutes, the appliance automatically discovers and then monitors and analyzes your virtual infrastructure.  A single virtual appliance can manage thousands of VMs across multiple Virtual Centers, scaling out for large and cloud environments.

Availability and Pricing

The VMTurbo suite is currently available for the VMware ESX Server or vSphere 3.5u2 or later, and VMware vCenter 2.5 or later, priced at $399/socket.

Related Links

VMTurbo Optimizer: http://www.vmturbo.com/products/optimizer/

Top 10 Reasons to Choose VMTurbo: http://www.vmturbo.com/why-vmturbo/

About VMTurbo

VMTurbo provides an integrated software suite for proactive and automated management of workload and resources in virtualized data centers. Only VMTurbo provides a holistic view of your virtual infrastructure as well as detailed action plans with respect to workload placement and resource allocation.  Our customers accomplish ever more, with less IT resources, by using our suite to analyze, optimize and control their virtual infrastructure.

Deploy ESX & ESXi With Hidden Lab Manager 4 Switch

February 17th, 2011 by jason 9 comments »

200 million years from now, divers off the west coast of the U.S. will make an incredible discovery.  Miles beneath the Pacific Ocean, in a location once known as the Moscone Center in San Francisco, evidence will emerge which reveals spectacular gatherings that once took place.  Humans from around the globe would assemble semi-annually to celebrate virtualization and cloud technologies from a company named VMware which made its mark throughout history as the undisputed and mostly uncontested leader in its space.  What this company did changed the way mankind did business forever.  Companies and consumers alike were provided with tremendous advantages, flexibility, and cost savings.

At these events, massive amounts of compute resources were harnessed to power “virtual laboratories”.  These laboratories (or labs as they were called for short) were dynamically provisioned on demand and at large scale by the attendees themselves.  Archaeologists in Miami, Florida and Ashburn, Virginia made similar discoveries and they believe that the three sites were somehow linked together for the twice-a-year event called “VMworld”.  Scientists estimate that the combined amount of resources would easily be able to support the deployment of 50,000+ “virtual machines” in just a few days.

How did they accomplish this?  Without a doubt, by automating.  The fossilized remains suggest they may have used one of their own development products called “Lab Manager” which was first introduced in the year 2006 A.D. and retired by vCloud Director just seven years later in 2013, according to the scriptures.  The Lab Manager product was a special use case tool by which many businesses with internal software development processes flourished, and a whole lot more when it morphed into vCD.  What wasn’t widely shared or known beyond the VMware staff was that it shipped with some special abilities that were locked and hidden.  Scientists believe these abilities assisted in the automated deployment of virtualized ESX and ESXi hosts within Lab Manager.  This was the key to automating the VMworld labs.  Details aren’t 100% complete, but there’s enough information that future researchers may be able to find or synthesize the missing DNA to recreate a functional replica of what once existed.

Disclaimer: What follows is not supported by VMware.  Before you get carried away with excitement, ask yourself if this is something you should be doing in your environment.

The Lab Manager 4 configuration is stored in a SQL Express database installed locally on the Lab Manager 4 server.  To unlock the virtualized ESX(i) support, a hidden switch must be flipped in the database.  Add a row to the “Config” table in the Lab Manager database:

Cat: settings
Name: EsxVmSupportEnabled
Value: 1

This can be accomplished by:

  1. granting a domain account the SysAdmin role using the SQL Server 2005 Surface Area Configuration tool on the Lab Manager server
  2. and then executing the following query via Microsoft SQL Server Management Studio on a remote SQL 2005 server (or via OSQL locally if you know how that tool works):

SnagIt Capture
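For those going the local OSQL route, the row described above presumably boils down to a single INSERT along these lines. A hedged sketch, run from a command prompt on the Lab Manager server; the SQL Express instance name and database name here are assumptions, so verify yours before running anything:

```shell
# Hypothetical: add the hidden flag row via OSQL with a trusted connection.
# ".\SQLEXPRESS" and "LabManager" are assumed names; substitute your own.
osql -E -S ".\SQLEXPRESS" -d "LabManager" -Q "INSERT INTO Config (Cat, Name, Value) VALUES ('settings', 'EsxVmSupportEnabled', '1')"
```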

The next step is to Clear Cache via the Uber Administration Screen in the Lab Manager web interface (this screen is available with or without the above database hack).  How does one get to this uber-admin page?  Log into the Lab Manager web interface as an administrator and click the About hyperlink in the Support menu on the left edge.  Once at the About page, use CTRL+U to access the uber-admin page.  Click the Clear Cache button:

SnagIt Capture

Next step.  By virtue of having installed and performed the initial configuration of Lab Manager at this point, it is assumed one has already prepared the Lab Manager hosts with the default Lab Manager Agent.  To facilitate the automated deployment of virtual ESX(i) hosts in Lab Manager, the special ESX-VM support specific Lab Manager agents need to be installed.  To do this, simply Disable your Lab Manager hosts, Unprepare each Lab Manager host, then Prepare again.  Because the hidden database switch was flipped in a previous step, Lab Manager will now install the ESX-VM support specific Lab Manager agent on each ESX(i) host.

The next two steps do not exploit a hidden feature, however, they do need to be followed for virtual ESX(i) deployment.  Navigate to Settings | Guest Customization.  Uncheck the box labeled Only Allow Publishing of Templates With a Version of VMware Tools That Supports Guest Customization.

SnagIt Capture

In the final step, Enterprise Plus customers making use of the vDS must disable host spanning on each Lab Manager host by unchecking the box Enable host for Host Spanning:

SnagIt Capture

Now that the required changes have been made to support virtual ESX(i) hosts in Lab Manager, the resulting changes can be seen within Lab Manager.

Create a new VM Template.  I’ll call this one ESXi 4.  Take a look at the new virtualized VMware ESX(i) guest OS types that are now available for templating and ultimately deployment:

SnagIt Capture

Immediately after creating the base template, select it and choose Properties.  Here we see several new fields for automating the deployment of virtual ESX(i) hosts: licensing, credentials, shared storage connectivity, and vCenter configuration:

SnagIt Capture

For an ESX guest OS type, an additional field for configuring a VMkernel interface is made available:

SnagIt Capture

Finally, create a Configuration using one or more of the new virtual ESX(i) templates and take a look at the custom buttons that show up:  Configure vPod, Add ESX-VMs to External vCenter, Attach External NFS to ESX-VMs, and Attach External iSCSI to ESX-VMs.  These added functions could be used for manual provisioning post deployment, copying files, or for troubleshooting:

SnagIt Capture

This is enough to get started and experiment with.  Unfortunately, it’s not 100% complete.  What’s missing is a guest customization script which runs inside the virtual ESX(i) host post deployment and contains more of the automation needed to deploy unique and properly configured virtual ESX(i) hosts in Lab Manager.  Perhaps one day these scripts will be discovered and shared, or recreated.

vSphere Integration With EMC Unisphere

February 14th, 2011 by jason 6 comments »

If you manage EMC unified storage running at least FLARE 30 and DART 6, or if you’re using a recent version of the UBER VSA, or if you’re one of the fortunate few who have had your hands on the new VNX series, then chances are you’re familiar with or you’ve at least experienced Unisphere, which is EMC’s single pane of glass approach to managing its multi protocol arrays.  For what is essentially a 1.0 product, I think EMC did a great job with Unisphere.  It’s modern.  It’s fast.  It has a cool sleek design and flows well.  They may have cut a few corners where it made sense (one can still see a few old pieces of Navisphere code here and there) but what counts for me the most at the end of the day is the functionality and efficiency gained by a consolidation of tools.

You’re probably reading this because you have a relationship with VMware virtualization.  Anyone who designs, implements, manages, or troubleshoots VMware virtual infrastructure also has a relationship with storage, most often shared storage.  Virtualization has been transforming the datacenter, and not just its composition.  The way we manage and collaborate from a technology perspective is also evolving.  Virtualization has brought about an intersection of technologies which is redefining roles and delegation of responsibilities.  One of the earlier examples of this was virtual networking.  With the introduction of 802.1Q VST in ESX, network groups found themselves fielding requests for trunked VLANs to servers and having to perform the associated design, capacity, and security planning.  Managing access to VLANs was a shift in delegated responsibility from the network team to the virtualization platform team.  Some years later, implementation of the Cisco Nexus 1000V in vSphere pulled most of the network related tasks back under the control of the network team.

Storage is another broad reaching technology upon which most of today’s computing relies, including virtualization.  Partners work closely with VMware to develop tools which provide seamless integration of overlapping technologies.  Unisphere is one of several products in the EMC portfolio which boasts this integration.  Granted, some of these VMware bits existed in Unisphere’s ancestor Navisphere.  However, I think it’s still worth highlighting some of the capabilities found in Unisphere.  EMC has been on an absolute virtualization rampage.  I can only imagine that with their commitment, these products will get increasingly better.

So what does this Unisphere/vSphere integration look like?  Let’s take a look…

In order to bring vSphere visibility into Unisphere, we need to make Unisphere aware of our virtual environment.  From the Host Management menu pane in Unisphere, choose Hypervisor Information Configuration Wizard:

SnagIt Capture

Classic welcome to the wizard.  Next:

SnagIt Capture

Select the EMC array in which to integrate a hypervisor configuration:

SnagIt Capture

In the following screen, we’re given the option to integrate either standalone ESX(i) hosts, vCenter managed hosts, or both.  In this case, I’ll choose vCenter managed hosts:

SnagIt Capture

Unisphere needs the IP address of the vCenter Server along with credentials having sufficient permissions to collect virtual infrastructure information.  An FQDN doesn’t work here (wish list item); however, hex characters are accepted, which tells me it’s IPv6 compatible:

SnagIt Capture

I see your infrastructure.  Would you like to add or remove items?

SnagIt Capture

Last step.  This is the virtual infrastructure we’re going to tie into.  Choose Finish:

SnagIt Capture

Congratulations.  Success.  Click Finish once more:

SnagIt Capture

Once completed, I see that the vCenter Server I added is shown with the ESX host it manages nested beneath it.  Again we see only the IP address representing the vCenter Server, rather than the FQDN itself.  This could get a little hairy in larger environments where a name is more familiar and friendlier than an IP address.  However, in Unisphere’s defense, at the time of adding a host we do have the option of adding a short description which would show up here.  Highlighting the ESX host reveals the VMs which are running on the host.  Nothing Earth-shattering yet, but the good stuff lies ahead:

SnagIt Capture

Let’s look at the ESX host properties.  Here’s where the value starts to mount (storage pun intended).  The LUN Status tab reveals information about LUNs in use by the ESX host, as well as the Storage Processor configuration and status.  This is useful information for balance and performance troubleshooting purposes:

SnagIt Capture

Moving on to the Storage tab, more detailed information is provided about the LUN characteristics and how the LUNs are presented to the ESX host:

SnagIt Capture

The Virtual Machines tab is much the same as the VMware Infrastructure summary screen with the information that it provides.  However, it does provide the ability to drill down to specific VM information by way of hyperlinks:

SnagIt Capture

Let’s take a look at the VM named vma41 by clicking on the vma41 hyperlink from the window above.  The General tab provides some summary information about the VM and the storage, but nothing that we probably don’t already know at this point.  Onward:

SnagIt Capture

The LUN Status tab provides the VM to storage mapping and Storage Processor.  Once again, this is key information for performance troubleshooting.  Don’t get me wrong.  This information alone isn’t necessarily going to provide conclusive troubleshooting data.  Rather, it should be combined with other information collected, such as storage or fabric performance reports:

SnagIt Capture

Similar to the host metrics, the Storage tab from the VM point of view provides more detailed information about the datastore as well as the VM disk configuration.  Note the Type column which shows that the VM was thinly provisioned:

SnagIt Capture

There are a few situations which can invoke the age-old storage administrator’s question: “What’s using this LUN?”  From the Storage | LUNs | Properties drill down (or from Storage | Pools/RAID Groups), Unisphere ties in the ESX hosts connected to the LUN as well as the VMs living on the LUN.  Example use cases where this information is pertinent would be performance troubleshooting, storage migration or expansion, and replication and DR/BCP planning.

SnagIt Capture

VM integration also lends itself to the Unisphere Report Wizard.  Here, reports can be generated for immediate display in a web browser, or they can be exported in .CSV format to be massaged further.

SnagIt Capture

If you’d like to see more, EMC has made available a three minute EMC Unisphere/VMware Integration Demo video which showcases integration and the flow of information:

In addition to that, you can download the FREE UBER VSA and give Unisphere a try for yourself.  Other EMC vSpecialist demos can be found at Everything VMware At EMC.

With all of this goodness, and as with any product, there is room for improvement.  I mentioned before that by and large the vSphere integration code appears to be legacy code which came from Navisphere.  Navisphere manages CLARiiON block storage only (fibre channel and native CLARiiON iSCSI).  What this means is that there is a gap in Unisphere/vSphere integration with respect to Celerra NFS and iSCSI.  For NFS, EMC has a vSphere plugin which Chad Sakac introduced about a year ago on his blog here and here.  While it’s not Unisphere integration, it does do some cool and useful things which are outlined in this product overview.

In medium to large sized environments where teams can be siloed, it’s integration like this which can provide a common language, bridging the gap between technologies which have close dependencies on one another.  These tools work in the SMB space as well, where staff will have both virtualization and storage areas of responsibility.  vSphere integration with Unisphere can provide a fair amount of insight and efficiency.  I think this is just a small representation of what future integration will be capable of.  VMware’s portfolio of virtualization, cloud, and data protection products continues to expand.  Each and every product VMware delivers is dependent on storage.  There is a tremendous opportunity to leverage each of these attach points for future integration.

vSphere 4.1 Update 1 Upgrade File Issues

February 11th, 2011 by jason 14 comments »

I began seeing this during upgrade testing last night in my lab but decided to wait a day to see if other people were having the same problems I was.  It is now being reported in various threads in the vSphere Upgrade & Install forum that vSphere 4.1 Update 1 upgrade files are failing to import into VMware Update Manager (VUM).  What I’m consistently seeing in multiple environments is:

  • .zip files which upgrade ESX and ESXi from 4.0 to 4.1u1 will import into VUM successfully.
  • .zip files which upgrade ESX and ESXi from 4.1 to 4.1u1 fail to import into VUM.
  • I have not tested the upgrade files for ESX(i) 3.5 to 4.1u1.

The success and error messages for all four .zip file imports are shown below: two successes, two failures.

SnagIt Capture

MD5SUM comparisons with VMware’s download site all result in matches.  I believe invalid metadata or corrupted .zip files are being made available for download.
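For anyone repeating the comparison, the checksum check is quick to script. The filename and expected hash below are placeholders; copy the real MD5 value from the VMware download page:

```shell
# Compare a local download's MD5 against the value published by VMware.
expected="0123456789abcdef0123456789abcdef"   # placeholder; copy from the download page
actual=$(md5sum upgrade-bundle.zip | awk '{print $1}')   # placeholder filename
if [ "$actual" = "$expected" ]; then
    echo "MD5 match"
else
    echo "MD5 MISMATCH"
fi
```

A match here only proves the download is byte-identical to what VMware is serving; as noted above, it does not rule out bad metadata inside the bundle itself.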

The workaround is to create a patch baseline in VUM, which will instruct VUM to download the necessary upgrade files itself; this is an alternative method to utilizing upgrade bundles and upgrade baselines in VUM.