Posts Tagged ‘vCenter Server’

vCenter Server 5.0 and MS SQL Database Permissions

August 20th, 2011

It’s that time again: time to revisit the age-old topic of the Microsoft SQL database permissions required to install VMware vCenter Server.  This brief article focuses on vCenter 5.0.  Permissions on the SQL side haven’t changed at all from what vSphere 4 required.  However, the error displayed when the required permissions on the MSDB system database are missing has changed, and in my opinion it’s a tad misleading.

To review, the vCenter database account used to make the ODBC connection requires the db_owner role on the MSDB system database during the installation of vCenter Server.  This allows the installer to create the SQL Agent jobs that perform the vCenter statistics rollups.

In the example below, I’m using SQL authentication with an account named vcenter.  I purposely left out its required role on MSDB, and the resulting error is shown below:

The DB user entered does not have the required permissions needed to install and configure vCenter Server with the selected DB.  Please correct the following error(s):  The database user ‘vcenter’ does not have the following privileges on the ‘vc50’ database:

EXECUTE sp_add_category

EXECUTE sp_add_job

EXECUTE sp_add_jobschedule

EXECUTE sp_add_jobserver

EXECUTE sp_add_jobstep

EXECUTE sp_delete_job

EXECUTE sp_update_job

SELECT syscategories

SELECT sysjobs

SELECT sysjobsteps

Snagit Capture

Now, what I think is misleading about the error thrown is that it points the finger at missing permissions on the vc50 database.  This is incorrect.  My vcenter SQL account has the db_owner role on the vc50 vCenter database.  The actual problem is the lack of temporary db_owner permissions on the MSDB system database at vCenter installation time, as described earlier.

The steps to rectify this situation are the same as before: grant the vcenter account the db_owner role on the MSDB system database, install vCenter, then revoke that role when the vCenter installation is complete.

While we’re on the subject, the installation of vCenter Update Manager 5.0 with a Microsoft SQL back end database also requires the ODBC connection account to temporarily have db_owner permissions on the MSDB system database.  I do believe this is a new requirement in vSphere 5.0.  If you’re going to install VUM, you might as well do that first, before going through the process of revoking the db_owner role.

An example of where that role is added in SQL Server 2008 R2 Management Studio is shown below:

Snagit Capture

Configure a vCenter 5.0 integrated Syslog server

July 23rd, 2011

Now that VMware offers an ESXi-only platform in vSphere 5.0, there are logging decisions to consider which were a non-issue on the ESX platform, particularly with boot-from-SAN, boot-from-flash, or stateless hosts, which have no local storage and therefore no scratch partition on which to store logs.  Some shops use Splunk as a Syslog server.  Other bloggers such as Simon Long have shown in the past how to send logs to the vMA appliance.  Centralized management of anything is almost always a good thing, and the same holds true for logging.

New in the vCenter 5.0 bundle is a Syslog server which can be integrated with vCenter 5.0.  I’ll walk through the installation and configuration, and then have a look at the logs.

Installation couldn’t be much easier.  I’ll highlight the main steps.  First launch the VMware Syslog Collector installation:

Snagit Capture

The setup routine will open Windows Firewall ports as necessary.  Choose the appropriate drive letter and path for each installation location.  Note that the second drive letter and path specifies the location of the aggregated syslog files from the hosts.  Be sure there is enough space on that drive for the log files, particularly in medium to large environments:

Snagit Capture

Choose the VMware vCenter Server installation (this is not the default type of installation):

Snagit Capture

Provide the location of the vCenter Server as well as credentials to establish the connection.  In this case I’m installing the Syslog server on the vCenter Server itself:

7-23-2011 4-14-41 PM

 

The Syslog server has the ability to accept connections on three different ports:

  1. UDP 514
  2. TCP 514
  3. Encrypted SSL 1514

There’s an opportunity to change the default listening ports, but I’ll leave them as-is, especially UDP 514, which is the industry-standard port for Syslog communications:

Snagit Capture

Once the installation is finished, it’s ready to accept incoming Syslog connections from hosts.  You’ll notice a few new items in the vSphere Client.  First is the VMware Syslog Collector Configuration plug-in:

Snagit Capture

Next is the Network Syslog Collector applet:

Snagit Capture

It’s waiting for incoming Syslog connections:

Snagit Capture
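Before configuring the hosts, it can be handy to confirm the collector actually receives datagrams.  Here’s a minimal Python sketch of my own (not a VMware tool) that sends a test message to a Syslog listener over UDP:

```python
import socket

def send_syslog(host, message, port=514):
    """Send one syslog-style test datagram over UDP.

    PRI <14> = facility user (1) * 8 + severity informational (6),
    following the common syslog convention.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(("<14>%s" % message).encode("utf-8"), (host, port))
    finally:
        sock.close()
```

For example, send_syslog("vcenter50.boche.mcse", "test message") would fire a single datagram at the collector in my lab; if nothing shows up in the log folder, suspect a firewall.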

Now I’ll configure a host to send its logs to the vCenter integrated Syslog server.  This is fairly straightforward as well, and there are a few ways to do it.  I’ll identify two.

In the vCenter inventory, select the ESXi 5.0 host, navigate to the Configuration tab, then Advanced Settings under Software.  Enter the Syslog server address in the field for Syslog.global.logHost.  The format is <protocol>://<f.q.d.n>:port.  So for my example:  udp://vcenter50.boche.mcse:514.  This field allows multiple Syslog protocols and endpoints separated by commas.  I could split the logs across additional Syslog servers with this entry:  udp://vcenter50.boche.mcse:514, splunk.boche.mcse, ssl://securesyslogs.boche.mcse:1514.  In that example, logs are shipped to vcenter50.boche.mcse and splunk.boche.mcse over UDP 514, as well as to securesyslogs.boche.mcse over SSL 1514.  One more thing to point out about multiple entries: there is a space after each comma, which appears to be required for the host to interpret multiple entries properly:

Snagit Capture
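To make the multi-entry format concrete, here’s a small parser of my own (purely illustrative, not part of ESXi) that splits a Syslog.global.logHost value on the comma-plus-space separator and applies the defaults described above: udp when no protocol is given, port 514 for udp/tcp, and 1514 for ssl:

```python
def parse_loghost(value):
    """Split a Syslog.global.logHost string into (protocol, host, port).

    Handles simple hostname/FQDN entries like the examples above;
    bracketed IPv6 literals are not handled by this sketch.
    """
    endpoints = []
    for entry in value.split(", "):          # comma plus space separator
        proto, sep, rest = entry.partition("://")
        if not sep:                          # no protocol given: assume udp
            proto, rest = "udp", entry
        host, sep, port = rest.rpartition(":")
        if not sep:                          # no port given: use the default
            host, port = rest, ("1514" if proto == "ssl" else "514")
        endpoints.append((proto, host, int(port)))
    return endpoints
```

Feeding it the example string above returns three endpoints: ('udp', 'vcenter50.boche.mcse', 514), ('udp', 'splunk.boche.mcse', 514), and ('ssl', 'securesyslogs.boche.mcse', 1514).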

There are many other Syslog logger options which can be tuned.  Have a look at them and configure your preferred logging appropriately.

Another method to configure and enable syslog on an ESXi 5 host would be to use esxcli.  The commands for each host look something like this:

~ # esxcli system syslog config set --loghost=192.168.110.16
~ # esxcli system syslog reload

Now I’ll ensure outbound UDP 514 is opened on the ESXi 5.0 firewall.  If the Syslog ports are closed, logs won’t make it to the Syslog server:

Snagit Capture

Back to the vCenter (Syslog) Server, you’ll see a folder for each host sending logs to the Syslog server:

Snagit Capture

And here come the logs:

Snagit Capture

The same logs are going to the Splunk server too:

7-23-2011 4-00-48 PM

This is what the logs look like in Splunk.  It’s a very powerful tool for centrally storing logs and then querying them with its search engine:

7-23-2011 4-07-53 PM

And since this host actually has local disk, and as a result a scratch partition, the logs natively go to the scratch partition:

7-23-2011 4-04-33 PM

Notice the host I configured is also displayed in the Network Syslog Collector along with the general path to the logs as well as the size of each host’s respective log file (I’ve noticed that it sometimes requires exiting the vSphere Client and logging back in before the hosts show up below):

Snagit Capture

Earlier I mentioned that I’d show a second way to configure Syslog on the ESXi host.  That method is much easier and comes by way of leveraging host profiles.  Simply create a host profile and add the Syslog configuration to the profile.  Of course, this profile can be used to deploy the configuration to countless other hosts, which makes it an easy and powerful method to deploy a centralized logging configuration:

Snagit Capture

For more information, see VMware KB 2003322 Configuring syslog on ESXi 5.0.

Virtualization Wars: Episode V – VMware Strikes Back

July 12th, 2011

Snagit Capture

At 9am PDT this morning, Paul Maritz and Steve Herrod take the stage to announce the next generation of the VMware virtualized datacenter.  Each new product and set of features is impressive in its own right.  Combine them and what you have is a major upgrade of VMware’s entire cloud infrastructure stack.  I’ll highlight the major announcements and some of the detail behind them.  In addition, the embargo and NDA surrounding the vSphere 5 private beta expire today.  If you’re a frequent reader of blogs or the Twitter stream, you’re going to be bombarded with information at fire-hose-to-the-face pace, starting now.

7-10-2011 4-22-46 PM

 

vSphere 5.0 (ESXi 5.0 and vCenter 5.0)

At the heart of it all is a major new release of VMware’s type 1 hypervisor and management platform.  Increased scalability and new features make virtualizing those last remaining tier 1 applications attainable.

7-10-2011 4-55-28 PM

Snagit Capture

ESX and the Service Console are formally retired as of this release.  Going forward, we have just a single hypervisor to maintain and that is ESXi.  Non-Windows shops should find some happiness in a Linux based vCenter appliance and sophisticated web client front end.  While these components are not 100% fully featured yet in their debut, they come close.

Storage DRS is the long-awaited complement to the CPU and memory based DRS introduced in VMware Virtual Infrastructure 3.  SDRS will coordinate initial placement of VM storage in addition to keeping datastore clusters balanced (space usage and latency thresholds, including SIOC integration), with or without the use of SDRS affinity rules.  Similar to DRS clusters, SDRS-enabled datastore clusters offer maintenance mode functionality which evacuates (via Storage vMotion or cold migration) registered VMs and VMDKs (still no template migration support, c’mon VMware) off of a datastore which has been placed into maintenance mode.  VMware engineers recognize the value of flexibility, particularly when it comes to SDRS operations, where thresholds can be altered and tuned on a scheduled basis.  For instance, IO patterns during the day, when normal or peak production occurs, may differ from night time IO patterns, when guest based backups and virus scans occur.  In that case, separate thresholds would be preferred so that SDRS doesn’t trigger based on inappropriate thresholds.

Profile-Driven Storage couples storage capabilities (VASA automated or manually user-defined) to VM storage profile requirements in an effort to meet guest and application SLAs.  The result is the classification of a datastore, from a guest VM viewpoint, of Compatible or Incompatible at the time of evaluating VM placement on storage.  Subsequently, the location of a VM can be automatically monitored to ensure profile compliance.

7-10-2011 5-29-56 PM

Snagit Capture

I mentioned VASA previously; it’s a new acronym for vSphere Storage APIs for Storage Awareness.  This new API allows storage vendors to expose the topology, capabilities, and state of the physical device to vCenter Server management.  As mentioned earlier, this information can be used to automatically populate the capabilities attribute in Profile-Driven Storage.  It can also be leveraged by SDRS for optimized operations.

The optimal solution is to stack the functionality of SDRS and Profile-Driven Storage to reduce administrative burden while meeting application SLAs through automated efficiency and optimization.

7-10-2011 7-34-31 PM

Snagit Capture

If you look closely at all of the announcements being made, you’ll notice there is only one net-new release and that is the vSphere Storage Appliance (VSA).  Small to medium business (SMB) customers are the target market for the VSA.  These are customers who want some of the enterprise features that vSphere offers, like HA, vMotion, or DRS, but lack the fibre channel SAN, iSCSI, or NFS shared storage those features require.  A VSA is deployed to each ESXi host, which presents local RAID 1+0 host storage as NFS (no iSCSI or VAAI/SAAI support at GA release time).  Each VSA is managed by one and only one vCenter Server.  In addition, each VSA must reside on the same VLAN as the vCenter Server.  VSAs are managed by the VSA Manager, a vCenter plugin available after the first VSA is installed.  Its function is to assist in deploying VSAs, automatically mount NFS exports to each host in the cluster, and provide monitoring and troubleshooting of the VSA cluster.

7-10-2011 8-03-42 PM

Snagit Capture

You’re probably familiar with the concept of a VSA, but at this point you should start to notice what differentiates VMware’s VSA: integration.  In addition, it’s a VMware supported configuration with “one throat to choke” as they say.  Another feature is resiliency.  The VSAs on each cluster node replicate with each other and, if required, provide seamless fault tolerance in the event of a host node failure.  In such a case, a remaining node in the cluster takes over the role of presenting a replica of the datastore which went down.  Again, this process is seamless and is accomplished without any change in the IP configuration of VMkernel ports or NFS exports.  With this integration in place, it was a no-brainer for VMware to also implement maintenance mode for VSAs.  MM comes in two flavors: whole VSA cluster MM or single VSA node MM.

VMware’s VSA isn’t a freebie.  It will be licensed.  The figure below sums up the VSA value proposition:

7-10-2011 8-20-38 PM

High Availability (HA) has been enhanced dramatically.  Some may say the version shipping in vSphere 5 is a complete rewrite.  What was once foundational Legato AAM (Automated Availability Manager) is now finally evolving to scale further with vSphere 5.  New features include the elimination of common issues such as DNS resolution dependencies, node communication over the storage network in addition to the management network, enhanced failure detection, IPv6 support, logging consolidated into one file per host, an enhanced UI, and an enhanced deployment mechanism (as if deployment wasn’t already easy enough, albeit sometimes error prone).

7-10-2011 3-27-11 PM

From an architecture standpoint, HA has changed dramatically.  HA has effectively gone from five (5) failover coordinator hosts to just one (1) in a Master/Slave model.  There is no longer a concept of Primary/Secondary HA hosts; however, if you still want to think of it that way, it’s now one (1) primary host (the master) and all remaining hosts are secondary (the slaves).  That said, I would consider it a personal favor if everyone would use the correct version-specific terminology: less confusion when assumptions have to be made (not that I like assumptions either, but I digress).

The FDM (fault domain manager) Master does what you traditionally might expect: monitors and reacts to slave host & VM availability.  It also updates its inventory of the hosts in the cluster, and the protected VMs each time a VM power operation occurs.

Slave hosts have responsibilities as well.  They maintain a list of powered on VMs.  They monitor local VMs and forward significant state changes to the Master. They provide VM health monitoring and any other HA features which do not require central coordination.  They monitor the health of the Master and participate in the election process should the Master fail (the host with the most datastores and then the lexically highest moid [99>100] wins the election).
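That tie-break is worth a second look, because the moid comparison is lexical rather than numeric.  A toy sketch (my own illustration, not VMware code) of the election order described above:

```python
def elect_master(hosts):
    """Pick the election winner from (moid, datastore_count) pairs:
    most datastores first, then the lexically greatest moid."""
    return max(hosts, key=lambda h: (h[1], h[0]))[0]

# Lexical comparison is why "99" beats "100": '9' sorts after '1'.
```

For example, with hosts [('host-100', 4), ('host-99', 4), ('host-7', 3)], 'host-99' wins the tie on datastore count because it sorts lexically above 'host-100'.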

Another new feature in HA is the ability to leverage storage to facilitate the sharing of stateful heartbeat information (known as Heartbeat Datastores) if and when management network connectivity is lost.  By default, vCenter picks two datastores for backup HA communication.  The choices are made based on how many hosts have connectivity to each datastore and whether the storage is on different arrays.  Of course, a vSphere administrator may manually choose the datastores to be used.  Hosts manipulate HA information on the datastore based on the datastore type.  On VMFS datastores, the Master reads the VMFS heartbeat region.  On NFS datastores, the Master monitors a heartbeat file that is periodically touched by the Slaves.  VM availability is reported by a file created by each Slave which lists the powered on VMs.  Multiple-Master coordination is performed by using file locks on the datastore.
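The file-lock coordination at the end of that paragraph follows the classic exclusive-create pattern.  A toy analogy in Python (my own sketch, nothing like FDM’s actual on-disk format): the first would-be master to create the lock file wins, and the rest back off:

```python
import os

def try_acquire(lockpath):
    """Attempt to take an exclusive lock by creating a file atomically."""
    try:
        fd = os.open(lockpath, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.close(fd)
        return True       # this node coordinates the datastore
    except FileExistsError:
        return False      # another master already holds the lock
```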

As discussed earlier, there are a number of GUI enhancements which were put in place to monitor and configure HA in vSphere 5.  I’m not going to go into each of those here as there are a number of them.  Surely there will be HA deep dives in the coming months.  Suffice it to say, they are all enhancements which stack to provide ease of HA management, troubleshooting, and resiliency.

Another significant advance in vSphere 5 is Auto Deploy, which integrates with Image Builder, vCenter, and Host Profiles.  The idea here is centrally managed, stateless hardware infrastructure.  ESXi host hardware PXE boots an image profile from the Auto Deploy server.  Unique host configuration is provided by an answer file or VMware Host Profiles (previously an Enterprise Plus feature).  Once booted, the host is added to the vCenter host inventory.  Statelessness is not a newly introduced concept, so the benefits are strikingly familiar to, say, ESXi boot from SAN: no local boot disk (right-sized storage, increased storage performance across many spindles), scaling to support many hosts, and decoupling of the host image from the host hardware (statelessness defined).  It may take some time before I warm up to this feature.  Honestly, it’s another vCenter dependency, this one quite critical given the platform services it provides.

For a more thorough list of anticipated vSphere 5 “what’s new” features, take a look at this release from virtualization.info.

 

vCloud Director 1.5

Snagit Capture

Up next is a new release of vCloud Director, version 1.5, which marks the first vCD update since the product became generally available on August 30th, 2010.  This release is packed with several new features.

Fast Provisioning is the space saving linked clone support missing in the GA release.  Linked clones can span multiple datastores and multiple vCenter Servers.  This feature will go a long way in bridging the parity gap between vCD and VMware’s sunsetting Lab Manager product.

3rd party distributed switch support means vCD can leverage virtualized edge switches such as the Cisco Nexus 1000V.

The new vCloud Messages feature connects vCD with existing AMQP based IT management tools such as CMDB, IPAM, and ticketing systems to provide updates on vCD workflow tasks.

vCD originally supported Oracle 10g std/ent Release 2 and 11g std/ent.  vCD now supports Microsoft SQL Server 2005 std/ent SP4 and SQL Server 2008 exp/std/ent 64-bit.  Oracle 11g R2 is now also supported.  Flexibility. Choice.

vCD 1.5 adds support for vSphere 5 including Auto Deploy and virtual hardware version 8 (32 vCPU and 1TB vRAM).  In this regard, VMware extends new vSphere 5 scalability limits to vCD workloads.  Boiled down: Any tier 1 app in the private/public cloud.

Last but not least, vCD 1.5 integrates with vShield to provide IPsec VPN and 5-tuple firewall capability.

vShield 5.0

VMware’s message about vShield is that it has become a fundamental component in consolidated private cloud and multi-tenant VMware virtualized datacenters.  While traditional security infrastructure can take significant time and resources to implement, there’s an inherent efficiency in leveraging security features baked into and native to the underlying hypervisor.

Snagit Capture

There are no changes in vShield Endpoint, however, VMware has introduced static routing in vShield Edge (instead of NAT) for external connections and certificate-based VPN connectivity.

 

Site Recovery Manager 5.0

Snagit Capture

Another major announcement from VMware is the introduction of SRM 5.0.  SRM has already been quite successful in providing simple and reliable DR protection for the VMware virtualized datacenter.  Version 5 boasts several new features which enhance functionality.

Replication between sites can be achieved in a more granular, per-VM (or even sub-VM) fashion, between different storage types, and it’s handled natively by vSphere Replication (vSR).  There is also more choice in seeding the initial full replica.  The result is a simplified RPO.

Snagit Capture

Another new feature in SRM is Planned Migration, which facilitates the migration of protected VMs from site to site before a disaster actually occurs.  This could also be used in advance of datacenter maintenance.  Perhaps your policy is to run your business 50% of the time from the DR site.  The workflow assistance makes such migrations easier.  It’s a downtime avoidance mechanism, which makes it useful in several cases.

Snagit Capture

Failback can be achieved once the VMs are re-protected at the recovery site and the replication flow is reversed.  It’s simply another push of the big button to go the opposite direction.

Feedback from customers has influenced UI enhancements. Unification of sites into one GUI is achieved without Linked Mode or multiple vSphere Client instances. Shadow VMs take on a new look at the recovery site. Improved reporting for audits.

Other miscellaneous notables are IPv6 support, a performance increase in guest VM IP customization, the ability to execute scripts inside the guest VM (in-guest callouts), new SOAP based APIs on the protected and recovery sides, and a dependency hierarchy for protected multi-tiered applications.

 

In summary, this is a magnificent day for all of VMware as they have indeed raised the bar with their market leading innovation.  Well done!

 

VMware product diagrams courtesy of VMware

Star Wars diagrams courtesy of Wookieepedia, the Star Wars Wiki

Watch VMware Raise the Bar on July 12th

July 11th, 2011

On Tuesday July 12th, VMware CEO Paul Maritz and CTO Steve Herrod are hosting a large campus and worldwide event where they plan to make announcements about the next generation of cloud infrastructure.

The event kicks off at 9am PDT and is formally titled “Raising the Bar, Part V”. You can watch it online by registering here.  The itinerary is as follows:

  • 9:00-9:45 Paul and Steve present – live online streaming
  • 10:00-12:00 five tracks of deep dive breakout sessions
  • 10:00-12:00 live Q&A with VMware cloud and virtualization experts
    • Eric Siebert
    • David Davis
    • Bob Plankers
    • Bill Hill

In addition, by attending live you also have the chance to win a free VMworld pass.  More details on that and how to win here.

I’m pretty excited both personally and for VMware.  This is going to be huge!

Performance Overview charts fail with STATs Report Service internal error

May 11th, 2011

A few months ago I was troubleshooting a problem with the Overview charts in the Performance tab of the vSphere Client.  This was a vSphere 4.0 Update 1 environment but I believe the root cause will impact other vSphere versions as well.

Instead of displaying the dashboard of charts in the Overview display, an error was displayed:

STATs Report service internal error
or
STATs Report application initialization is not completed successfully

One unique aspect of this environment was that the vCenter database was hosted on a Microsoft SQL Server which used a port other than the default of TCP 1433.  VMware KB Article 1012812 identified this as the root cause of the issue.

To resolve the issue, I was required to stop the vCenter Server service and modify the statsreport.xml file located on the vCenter Server in the \Program Files\VMware\Infrastructure\tomcat\conf\Catalina\localhost\ directory by inserting the url line shown in the block below.  Note that the server name, database name, alternate TCP port in use, and authentication method (SQL/false or Windows integrated/true) will vary and are environment specific:

<Resource auth="Container"
   name="jdbc/StatsDS"
   type="javax.sql.DataSource"
   factory="org.apache.tomcat.dbcp.dbcp.BasicDataSourceFactory"
   initialSize="3"
   maxActive="10"
   maxIdle="3"
   maxWait="10000"
   defaultReadOnly="true"
   defaultTransactionIsolation="READ_COMMITTED"
   removeAbandoned="true"
   removeAbandonedTimeout="60"
   url="jdbc:sqlserver://sqlservername:1601;instanceName=sqlservername;
      databaseName=sqldatabasename;integratedSecurity=false;"
/>
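Since the url attribute is the part most likely to be mistyped, here’s a small helper of my own (names are hypothetical, not from the KB) that assembles the value from the environment-specific pieces so you can eyeball it before editing the file:

```python
def jdbc_url(server, port, instance, database, integrated=False):
    """Build the SQL Server JDBC URL used in the Resource element above."""
    return ("jdbc:sqlserver://%s:%d;instanceName=%s;databaseName=%s;"
            "integratedSecurity=%s;"
            % (server, port, instance, database,
               "true" if integrated else "false"))
```

jdbc_url('sqlservername', 1601, 'sqlservername', 'sqldatabasename') reproduces the example URL above, with integratedSecurity=false for SQL authentication.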

Don’t forget to restart the vCenter Server service after saving the statsreport.xml file.

vSphere Integration With EMC Unisphere

February 14th, 2011

SnagIt Capture

If you manage EMC unified storage running at least FLARE 30 and DART 6, or if you’re using a recent version of the UBER VSA, or if you’re one of the fortunate few who have had your hands on the new VNX series, then chances are you’re familiar with, or have at least experienced, Unisphere: EMC’s single pane of glass approach to managing its multi-protocol arrays.  For what is essentially a 1.0 product, I think EMC did a great job with Unisphere.  It’s modern.  It’s fast.  It has a cool, sleek design and flows well.  They may have cut a few corners where it made sense (one can still see a few old pieces of Navisphere code here and there), but what counts most for me at the end of the day is the functionality and efficiency gained by a consolidation of tools.

You’re probably reading this because you have a relationship with VMware virtualization.  Anyone who designs, implements, manages, or troubleshoots VMware virtual infrastructure also has a relationship with storage, most often shared storage.  Virtualization has been transforming the datacenter, and not just its composition.  The way we manage and collaborate from a technology perspective is also evolving.  Virtualization has brought about an intersection of technologies which is redefining roles and delegation of responsibilities.  One of the earlier examples of this was virtual networking.  With the introduction of 802.1Q VST in ESX, network groups found themselves fielding requests for trunked VLANs to servers and having to perform the associated design, capacity, and security planning.  Managing access to VLANs was a shift in delegated responsibility from the network team to the virtualization platform team.  Some years later, implementation of the Cisco Nexus 1000V in vSphere pulled most of the network related tasks back under the control of the network team.

Storage is another broad reaching technology upon which most of today’s computing relies, including virtualization.  Partners work closely with VMware to develop tools which provide seamless integration of overlapping technologies.  Unisphere is one of several products in the EMC portfolio which boasts this integration.  Granted, some of these VMware bits existed in Unisphere’s ancestor, Navisphere.  However, I think it’s still worth highlighting some of the capabilities found in Unisphere.  EMC has been on an absolute virtualization rampage.  I can only imagine that with their commitment, these products will get increasingly better.

So what does this Unisphere/vSphere integration look like?  Let’s take a look…

In order to bring vSphere visibility into Unisphere, we need to make Unisphere aware of our virtual environment.  From the Host Management menu pane in Unisphere, choose Hypervisor Information Configuration Wizard:

SnagIt Capture

Classic welcome to the wizard.  Next:

SnagIt Capture

Select the EMC array in which to integrate a hypervisor configuration:

SnagIt Capture

In the following screen, we’re given the option to integrate either standalone ESX(i) hosts, vCenter managed hosts, or both.  In this case, I’ll choose vCenter managed hosts:

SnagIt Capture

Unisphere needs the IP address of the vCenter Server along with credentials having sufficient permissions to collect virtual infrastructure information.  An FQDN doesn’t work here (wish list item); however, hex characters are accepted, which tells me it’s IPv6 compatible:

SnagIt Capture

I see your infrastructure.  Would you like to add or remove items?

SnagIt Capture

Last step.  This is the virtual infrastructure we’re going to tie into.  Choose Finish:

SnagIt Capture

Congratulations.  Success.  Click Finish once more:

SnagIt Capture

Once completed, I see the vCenter Server I added, with the ESX host it manages nested beneath it.  Again we see only the IP address representing the vCenter Server, rather than the FQDN.  This could get a little hairy in larger environments where a name is more familiar and friendlier than an IP address.  However, in Unisphere’s defense, at the time of adding a host we do have the option of adding a short description which would show up here.  Highlighting the ESX host reveals the VMs which are running on the host.  Nothing earth-shattering yet, but the good stuff lies ahead:

SnagIt Capture

Let’s look at the ESX host properties.  Here’s where the value starts to mount (storage pun intended).  The LUN Status tab reveals information of LUNs in use by the ESX host, as well as the Storage Processor configuration and status.  This is useful information for balance and performance troubleshooting purposes:

SnagIt Capture

Moving on to the Storage tab, more detailed information is provided about the LUN characteristics and how the LUNs are presented to the ESX host:

SnagIt Capture

The Virtual Machines tab is much the same as the VMware Infrastructure summary screen with the information that it provides.  However, it does provide the ability to drill down to specific VM information by way of hyperlinks:

SnagIt Capture

Let’s take a look at the VM named vma41 by clicking on the vma41 hyperlink from the window above.  The General tab provides some summary information about the VM and the storage, but nothing that we probably don’t already know at this point.  Onward:

SnagIt Capture

The LUN Status tab provides the VM-to-storage mapping and the Storage Processor.  Once again, this is key information for performance troubleshooting.  Don’t get me wrong; this information alone isn’t necessarily going to provide conclusive troubleshooting data.  Rather, it should be combined with other information collected, such as storage or fabric performance reports:

SnagIt Capture

Similar to the host metrics, the Storage tab from the VM point of view provides more detailed information about the datastore as well as the VM disk configuration.  Note the Type column which shows that the VM was thinly provisioned:

SnagIt Capture

There are a few situations which can invoke the age-old storage administrator’s question: “What’s using this LUN?”  From the Storage | LUNs | Properties drill down (or from Storage | Pools/RAID Groups), Unisphere ties in the ESX hosts connected to the LUN as well as the VMs living on the LUN.  Example use cases where this information is pertinent would be performance troubleshooting, storage migration or expansion, and replication and DR/BCP planning.

SnagIt Capture

VM integration also lends itself to the Unisphere Report Wizard.  Here, reports can be generated for immediate display in a web browser, or they can be exported in .CSV format to be massaged further.

SnagIt Capture

If you’d like to see more, EMC has made available a three minute EMC Unisphere/VMware Integration Demo video which showcases integration and the flow of information:

In addition to that, you can download the FREE UBER VSA and give Unisphere a try for yourself.  Other EMC vSpecialist demos can be found at Everything VMware At EMC.

With all of this goodness, and as with any product, there is room for improvement.  I mentioned before that by and large the vSphere integration code appears to be legacy code which came from Navisphere.  Navisphere manages CLARiiON block storage only (fibre channel and native CLARiiON iSCSI).  What this means is that there is a gap in Unisphere/vSphere integration with respect to Celerra NFS and iSCSI.  For NFS, EMC has a vSphere plugin which Chad Sakac introduced about a year ago on his blog here and here.  While it’s not Unisphere integration, it does do some cool and useful things which are outlined in this product overview.

In medium to large sized environments where teams can be siloed, it’s integration like this which can provide a common language, bridging the gap between technologies that have close dependencies on one another.  These tools work in the SMB space as well, where staff will have both virtualization and storage areas of responsibility.  vSphere integration with Unisphere can provide a fair amount of insight and efficiency.  I think this is just a glimpse of what future integration will be capable of.  VMware’s portfolio of virtualization, cloud, and data protection products continues to expand.  Each and every product VMware delivers is dependent on storage.  There is a tremendous opportunity to leverage each of these attach points for future integration.

VMware Releases vSphere 4.1 Update 1

February 10th, 2011

I’ve just been informed by my VMware Update Manager (VUM) that VMware has released vSphere 4.1 Update 1, including:

  • vCenter 4.1 Update 1
  • ESXi 4.1 Update 1
  • ESX 4.1 Update 1
  • vShield Zones 4.1 Update 1??

Will this be the last release of the ESX hypervisor in history?  Thus far, the HP, IBM, and Dell versions of ESXi 4.1 Update 1 don’t appear to be available for download yet.  They typically follow the VMware GA release by a few weeks.

Grab your copy now!

SnagIt Capture

The number of patch definitions downloaded (15 critical/28 total):

ID: ESX410-201101201-SG  Impact: HostSecurity  Release date: 2011-02-10  Products: esx 4.1.0 Updates ESX 4.1 Core and CIM components

ID: ESX410-201101202-UG  Impact: Critical  Release date: 2011-02-10  Products: esx 4.1.0 Updates the ESX 4.1 VMware-webCenter

ID: ESX410-201101203-UG  Impact: Critical  Release date: 2011-02-10  Products: esx 4.1.0 Updates the ESX 4.1 esxupdate library

ID: ESX410-201101204-UG  Impact: Critical  Release date: 2011-02-10  Products: esx 4.1.0 Updates the ESX 4.1 mptsas device driver

ID: ESX410-201101206-UG  Impact: Critical  Release date: 2011-02-10  Products: esx 4.1.0 Updates the ESX 4.1 bnx2xi device driver

ID: ESX410-201101207-UG  Impact: Critical  Release date: 2011-02-10  Products: esx 4.1.0 Updates the ESX 4.1 bnx2x device driver

ID: ESX410-201101208-UG  Impact: HostGeneral  Release date: 2011-02-10  Products: esx 4.1.0 Updates the ESX 4.1 sata device driver

ID: ESX410-201101211-UG  Impact: Critical  Release date: 2011-02-10  Products: esx 4.1.0 Updates ESX 4.1 VMware-esx-remove-rpms

ID: ESX410-201101213-UG  Impact: HostGeneral  Release date: 2011-02-10  Products: esx 4.1.0 Updates ESX 4.1 net-enic device driver

ID: ESX410-201101214-UG  Impact: Critical  Release date: 2011-02-10  Products: esx 4.1.0 Updates ESX 4.1 qla4xxx device driver

ID: ESX410-201101215-UG  Impact: Critical  Release date: 2011-02-10  Products: esx 4.1.0 Updates ESX 4.1 net-nx-nic device driver

ID: ESX410-201101216-UG  Impact: HostGeneral  Release date: 2011-02-10  Products: esx 4.1.0 Updates the ESX 4.1 vaai plug-in

ID: ESX410-201101217-UG  Impact: Critical  Release date: 2011-02-10  Products: esx 4.1.0 Updates the ESX 4.1 e1000e device driver

ID: ESX410-201101218-UG  Impact: Critical  Release date: 2011-02-10  Products: esx 4.1.0 Updates ESX 4.1 net-cdc-ether driver

ID: ESX410-201101219-UG  Impact: HostGeneral  Release date: 2011-02-10  Products: esx 4.1.0 Updates the ESX 4.1 e1000 device driver

ID: ESX410-201101220-UG  Impact: HostGeneral  Release date: 2011-02-10  Products: esx 4.1.0 Updates the ESX 4.1 igb, tg3, scsi-fnic

ID: ESX410-201101221-UG  Impact: Critical  Release date: 2011-02-10  Products: esx 4.1.0 Updates ESX 4.1 HP SAS Controllers

ID: ESX410-201101222-UG  Impact: Critical  Release date: 2011-02-10  Products: esx 4.1.0 Updates ESX 4.1 mptsas, mptspi drivers

ID: ESX410-201101223-UG  Impact: HostGeneral  Release date: 2011-02-10  Products: esx 4.1.0 3w-9xxx: scsi driver for VMware ESX

ID: ESX410-201101224-UG  Impact: HostGeneral  Release date: 2011-02-10  Products: esx 4.1.0 vxge: net driver for VMware ESX

ID: ESX410-201101225-UG  Impact: Critical  Release date: 2011-02-10  Products: esx 4.1.0 Updates vmware-esx-pam-config library

ID: ESX410-201101226-SG  Impact: HostSecurity  Release date: 2011-02-10  Products: esx 4.1.0 Updates glibc packages

ID: ESX410-Update01  Impact: Critical  Release date: 2011-02-10  Products: esx 4.1.0 VMware ESX 4.1 Complete Update 1

ID: ESXi410-201101201-SG  Impact: HostSecurity  Release date: 2011-02-10  Products: embeddedEsx 4.1.0 Updates the ESXi 4.1 firmware

ID: ESXi410-201101202-UG  Impact: Critical  Release date: 2011-02-10  Products: embeddedEsx 4.1.0 Updates the ESXi 4.1 VMware Tools

ID: ESXi410-201101223-UG  Impact: HostGeneral  Release date: 2011-02-10  Products: embeddedEsx 4.1.0 3w-9xxx: scsi driver for VMware ESXi

ID: ESXi410-201101224-UG  Impact: HostGeneral  Release date: 2011-02-10  Products: embeddedEsx 4.1.0 vxge: net driver for VMware ESXi

ID: ESXi410-Update01  Impact: HostGeneral  Release date: 2011-02-10  Products: embeddedEsx 4.1.0 VMware ESXi 4.1 Complete Update 1
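The patch definitions above follow a regular “ID / Impact / Release date / Products” pattern, so tallying them (15 critical of 28 total in this release) is a quick scripting exercise.  A minimal sketch in Python, fed a few entries copied from the listing:

```python
import re

# A few entries copied verbatim from the VUM patch listing above.
listing = """
ID: ESX410-201101201-SG  Impact: HostSecurity  Release date: 2011-02-10  Products: esx 4.1.0 Updates ESX 4.1 Core and CIM components
ID: ESX410-201101202-UG  Impact: Critical  Release date: 2011-02-10  Products: esx 4.1.0 Updates the ESX 4.1 VMware-webCenter
ID: ESX410-201101208-UG  Impact: HostGeneral  Release date: 2011-02-10  Products: esx 4.1.0 Updates the ESX 4.1 sata device driver
ID: ESXi410-Update01  Impact: HostGeneral  Release date: 2011-02-10  Products: embeddedEsx 4.1.0 VMware ESXi 4.1 Complete Update 1
"""

# Each entry starts with its patch ID followed by an Impact level.
PATTERN = re.compile(r"ID:\s+(\S+)\s+Impact:\s+(\S+)")

def tally_by_impact(text):
    """Count patch IDs per Impact level in a VUM-style listing."""
    counts = {}
    for patch_id, impact in PATTERN.findall(text):
        counts[impact] = counts.get(impact, 0) + 1
    return counts

print(tally_by_impact(listing))
# {'HostSecurity': 1, 'Critical': 1, 'HostGeneral': 2}
```

Run against the full 28-entry listing, the same function reproduces the 15-critical count reported by VUM.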