Posts Tagged ‘vSphere’

Storage block size and alignment

March 20th, 2009

Steve Chambers posted version 2 of the Storage block size and alignment document over at the VIOPS (VMware Virtual Infrastructure Operations) site. At seven pages, it is both short and a GREAT read.

For those not familiar with VMFS and VM guest alignment, I’ll summarize:

VMFS Alignment

  1. Unaligned volumes result in track crossing and additional I/O penalties in the form of added latency and reduced throughput, which may or may not be noticeable in your environment (it depends)
  2. To verify whether or not your VMFS volumes are aligned, run the fdisk -lu command at the console (see the sketch after this list)
  3. VMFS volumes created with the Virtual Infrastructure Client (vSphere Client) are automatically aligned along the 64KB boundary, so there is no need to align them manually
  4. NFS datastores are not concerned with VMFS alignment as they are not block-based VMFS datastores
  5. Alternatively, VMFS volumes can be aligned manually with a series of fdisk commands, which will destroy data on the volume (definitely not preferred)
  6. VMFS block size only determines the maximum file size on the VMFS volume; it does not play even a remotely significant performance role. There are a number of expert blog articles which debate this.
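
As a quick illustration of the verification step in bullet 2, here is a minimal sketch run from the ESX service console. The device name is a hypothetical example and the output is illustrative, not captured from a real host:

    # List partitions with Start/End expressed in 512-byte sectors
    fdisk -lu /dev/sdb

    # Illustrative output for an aligned VMFS partition:
    #    Device Boot      Start        End     Blocks  Id  System
    # /dev/sdb1             128  209715199  104857536  fb  Unknown
    #
    # Start of 128 sectors x 512 bytes = 65,536 bytes, which is 64KB aligned.
    # A Start of 63 (the old MBR default) would indicate misalignment.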

VM Guest Alignment

  1. To verify whether or not your VM guest virtual disks are aligned, check the partition offset value (see the sketch after this list)
    • Aligned virtual disks will have a partition offset value evenly divisible by 4,096 (e.g. 65,536, or 1,048,576, which is the default for Windows Server 2008)
    • Non-aligned virtual disks will have a partition offset value not evenly divisible by 4,096 (e.g. 32,256, which is the default for Windows XP and Server 2003)
  2. Due to the destructive nature of the alignment procedures, alignment is always performed before data is placed on the volume
  3. Alignment in Linux guests is performed using a series of fdisk commands nearly identical to those used for aligning VMFS volumes
  4. Alignment in Windows guests is performed using diskpart.exe
  5. Although guest alignment is data destructive, it can be performed after the guest OS is installed: the document considers alignment of the OS partition unnecessary, so only the data partitions need to be aligned before data is placed on them.  **see update below**
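
For the Windows guest checks in bullets 1 and 4, a rough sketch follows. The disk and volume numbers are hypothetical, the offsets are illustrative, and the align parameter requires a diskpart version that supports it (Windows Server 2003 SP1 and later, if I recall correctly):

    REM Check partition offsets (divide StartingOffset by 4,096)
    C:\> wmic partition get Name, StartingOffset
    Name                   StartingOffset
    Disk #0, Partition #0  1048576         (1,048,576 / 4,096 = 256: aligned)
    Disk #1, Partition #0  32256           (32,256 / 4,096 = 7.875: misaligned)

    REM Create an aligned data partition on an empty disk (destructive if data exists)
    C:\> diskpart
    DISKPART> select disk 1
    DISKPART> create partition primary align=64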

Alignment is most often going to be labor intensive, and thus the returns can diminish quickly. This will especially be true if your environment has already been built and you need to align after the fact. Environments in the planning stages and not yet built will be among the best candidates for alignment right out of the gate. Whatever stage you are at, updating guest VM templates with alignment wouldn’t be a bad idea. Aligning one image will pay dividends, whether noticeable or not, over and over as that template is deployed throughout the infrastructure.

Update: NetApp released a few scripts that not only automate the verification and alignment processes at the guest VM OS level, but also align the guest OS without destroying data. The one exception I ran into was with a Citrix VM that had remapped drives. CTXGINA.DLL got real cranky. The scripts are:

  • mbrscan – Scans the -flat.vmdk file for alignment
  • mbralign – Makes a backup of the .vmdk and creates a newly aligned .vmdk
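
As I recall, the invocation from the ESX service console looks roughly like this (the paths are hypothetical examples; the VM should be powered off for mbralign, and consult the NetApp document before running either tool):

    # Check a virtual disk's alignment
    ./mbrscan /vmfs/volumes/datastore1/myvm/myvm-flat.vmdk

    # Back up the .vmdk and create a newly aligned copy
    ./mbralign /vmfs/volumes/datastore1/myvm/myvm-flat.vmdk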

See also:  NetApp – Storage Nuts & Bolts: mbrscan/mbralign


Other recommended reading:

Recommendations for Aligning VMFS Partitions

Performance Best Practices for VMware vSphere 4.1

DPM best practices. Look before you leap.

March 16th, 2009

It has previously been announced that VMware’s Distributed Power Management (DPM) technology will be fully supported in vSphere. Although today DPM is for experimental purposes only, virtual infrastructure users with VI Enterprise licensing can nonetheless put it to use, powering down ESX infrastructure during non-peak periods where they see fit.

Before enabling DPM, there are a few precautionary steps I would go through first to test each ESX host in the cluster for DPM compatibility; this will help mitigate risk and ensure success. Assuming most, if not all, hosts in the cluster will be identical in hardware make and model, you may choose to perform these tests on only one of the hosts in the cluster. More on testing scope a little further down.

This first step is optional but personally I’d go through the motions anyway. Remove the hosts to be tested, one at a time, from the cluster. If a host has running VMs, place it in maintenance mode first to displace the running VMs onto other hosts in the cluster:

[Screenshot: placing the host into maintenance mode]

If the step above was skipped, or if the host wasn’t in a cluster to begin with, then the first step is to place the host into maintenance mode. The following step would be to manually place the host in Standby Mode. This validates whether or not vCenter can successfully place a host into Standby Mode automatically once DPM is enabled. One problem I’ve run into is the inability to place a host into Standby Mode because the NIC doesn’t support Wake On LAN (WOL) or WOL isn’t enabled on the NIC:

[Screenshot: Enter Standby Mode failing because the NIC does not support Wake On LAN]

Assuming the host has successfully been placed into Standby Mode, use the host command menu (similar in look to the menu above) to take the host out of Standby Mode. I don’t have a screen shot of that because the particular hosts I’m working with right now don’t support the WOL type that VMware needs.

Once the host has successfully entered and left Standby Mode, it can be removed from maintenance mode and added back into the cluster. Now would not be a bad time to take a look around some of the key areas such as networking and storage to make sure those subsystems are functioning properly and are able to “see” their respective switches, VLANs, LUNs, etc. Add some VMs to the host and power them on. Again, perform some cursory validation to ensure the VMs have network connectivity, storage, and the correct consumption of CPU and memory.
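
If you’d rather script this validation than click through the VIC, a rough VI Toolkit (PowerShell) sketch follows. The vCenter and host names are hypothetical, and I’m going from memory on the 1.x cmdlet behavior, so test in a lab first:

    # Connect to vCenter
    Connect-VIServer -Server vcenter01

    # Evacuate running VMs and fence the host
    Set-VMHost -VMHost (Get-VMHost esx01) -State Maintenance

    # Attempt standby; an error here usually means the NIC lacks (or has disabled) WOL
    Suspend-VMHost -VMHost (Get-VMHost esx01) -Confirm:$false

    # Wake the host and return it to service
    Start-VMHost -VMHost (Get-VMHost esx01)
    Set-VMHost -VMHost (Get-VMHost esx01) -State Connected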

My point in all of this is that the ESX host has been brought back from a deep slumber. A twelve point health inspection is the least amount of effort we can put forth on the front side to assure ourselves that, once automated, DPM will not bite us down the road. The steps I’m recommending have more to do with DPM compatibility with the different types of server and NIC hardware than they have to do with VMware’s DPM technology in and of itself. That said, at a minimum I’d recommend these preliminary checks on each of the different hardware types in the datacenter. On the other end of the spectrum, if you are very cautious, you may choose to run through these steps for each and every host that will participate in a DPM enabled cluster.

After all the ESX hosts have been “Standby Mode verified”, the cluster settings can be configured to enable DPM. Similar to DRS, DPM can be enabled in a manual mode where it will make suggestions but won’t act on them without your approval, or it can be set to fully automatic, dynamically making and acting on its own decisions:

[Screenshot: cluster settings for enabling DPM]

DPM is an interesting technology but I’ve always felt in the back of my mind that it conflicts with capacity planning (including the accounting for N+1 or N+2, etc.) and the ubiquitous virtualization goal of maximizing the use of server infrastructure. In a perfect world, we’ll always be teetering on our own perfect threshold of “just enough infrastructure” and “not too much infrastructure”. Having infrastructure beyond what availability constraints and admission control require is where DPM fits in. That said, if you have a use for DPM, in theory you have excess infrastructure. Why? I can think of several compelling reasons why this might happen, but again in that perfect world, none could excuse the capital virtualization sin of excess hardware not being utilized to its fullest potential (let alone powered off and doing nothing). In a perfect world, we always have just enough hardware to meet cyclical workload peaks but not too much during the valleys. In a perfect world, virtual server requests come planned so well in advance that any new infrastructure needed is added the day the VM is spun up to maintain that perfect balance. In a perfect world, we don’t purchase larger blocks or cells of infrastructure than what we actually need, because there are no such things as lead times for channel delivery, change management, and installation that we need to account for.

If you don’t live in a perfect world (like me), DPM offers those of us with an excess of infrastructure and excuses an environmentally friendly and responsible alternative: cut the consumption of electricity and cooling while maintaining capacity on demand if and when it’s needed. Options and flexibility through innovation are good. That is why I choose VMware.

Straighten out licensing in preparation for vSphere

March 6th, 2009

There is a lot of buzz accumulating about the anticipated release of VMware vSphere.  Are you ready for it?  Is your license portal ready for vSphere?  Does anyone remember the licensing upgrades from VI2 to VI3?  Did they go smoothly for you?

Double check your answers and be sure.  Inaccurate license counts in your license portal are going to lead to frustrating problems when you attempt to upgrade to vSphere.  When you get to vSphere, your new license key(s) may be missing quantities or SKUs you’ve purchased in the past.  Pay extra attention if you purchase through a reseller, and be sure your license counts and SKUs in the portal are 100% accurate.

DO NOT wait until the release of vSphere to sort out your licensing issues.  I would anticipate a long line of people in the support queues who were not proactive in sorting out their licensing issues prior to the release of vSphere.  Taking care of this ahead of time will help guarantee a smooth vSphere upgrade and it will also help balance the call load on VMware’s support staff.

To verify your licensing, head to the VMware licensing portal:

[Screenshot: VMware licensing portal]

Click “Find Serial Number”

[Screenshot: Find Serial Number]

Change the filter parameters as follows:

Change “License Category” to Purchased/Registered.  In some cases this will reveal licenses that the default filter does not show.

Change “Sort Results By” to Product, then License Type.  Doing so will make the licenses easier to reconcile.

[Screenshot: license filter parameters]

Now reconcile all of your serial numbers.  Be aware that there may be more than one page of licenses in your portal.  If you’re missing licenses, check for a page 2, page 3, etc.

For more help on licensing, including help in contacting VMware on licensing issues, see the following blog entry I wrote in January.

VMware next generation datacenter exploration

February 27th, 2009


Following is a VMworld Europe 2009 preview of features VMware is developing for future versions of vSphere. There is no guarantee or time line of when these features will be introduced into vSphere. Furthermore, the features should not be thought of as a group that will be implemented together at one time. A more likely scenario is that they will be integrated independently into major or incremental future builds. With that disclaimer out of the way, let’s dig into the good stuff.

Pluggable Storage Architecture (PSA). ESX/ESXi will have a new storage architecture called PSA, a collection of VMkernel APIs that allow 3rd party hardware vendors to inject code into the ESX storage I/O path. 3rd party developers will be able to design custom load balancing techniques and failover mechanisms for specific storage arrays. This will happen in part with the use of VMware’s Native Multipathing Plugin (NMP), which VMware will distribute with ESX. Additional plugins from storage partners may also appear. During the lab, I explored the PSA commands using the ESXi “unsupported” console via PuTTY.
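
The commands I poked at looked roughly like the following (sketched from memory against a pre-release build, so treat the exact syntax as an assumption):

    # List devices claimed by the Native Multipathing Plugin and their path policies
    esxcli nmp device list

    # Show the claim rules that decide which plugin owns which device
    esxcli corestorage claimrule list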

Update: Duncan Epping over at Yellow Bricks just wrote about Pluggable Storage Architecture, expanding quite a bit on its components.  View that post here.

Hot Cloning of Virtual Machines. This upcoming feature is fairly self explanatory: duplicate or clone a virtual machine while the source VM is running. I think this feature will be useful for troubleshooting or baselining a guest OS on the fly; you can clone a running control VM into an experiment environment without impacting the source with a temporary outage. Additionally, during the cloning process, VMware is going to allow us to choose a different disk type than that of the source VM. For example, the source VM may have a pre-allocated disk type but we can change the clone destination disk type to a thinly provisioned sparse disk. Fragmentation anyone? Speaking of pitfalls, you may wonder how VMware will handle powering on the destination VM for the first time with the same network name and IP address as the clone source that is currently running on the network? Simple. We already have the technology today: the Guest Customization process. While guest customization has always been optional for us, it more or less becomes mandatory in hot cloning, so I’d start getting used to it.

Update: As a few people have pointed out in the comments, hot cloning of virtual machines is available to us prior to the release of vSphere. VM hot cloning was introduced in VirtualCenter 2.5 Update 2. See the following release notes: http://www.vmware.com/support/vi3/doc/vi3_esx35u2_vc25u2_rel_notes.html

Host Profiles. Simplify and standardize ESX/ESXi host configuration management via policies. The idea is to eliminate manual configuration through the console or VIC, which can be subject to human error or neglect. To a good degree, host profiles will replace many of the automated deployment methods in your environment. Notice I didn’t say host profiles will replace all automated methods. There are configuration areas which host profile policies don’t cover. You’ll need supplemental coverage for those areas, so don’t permanently delete your scripts and processes just yet. You’ll need to keep a few of them around even after implementing host profiles. Host profiles can be created by hand from scratch, or a template can be constructed based on an existing host configuration. Lastly, profiles are not just for the initial deployment. They can be used to maintain compliance of host configurations going forward. Applying host profiles reminds me a lot of dropping Microsoft Active Directory Group Policy Objects (GPOs) on an OU folder structure. Monitoring compliance across the datacenter or cluster feels strikingly familiar to scanning and remediating via VMware Update Manager.

Storage VMotion. The sVMotion technology isn’t new to those already on the VI3 platform, but the coming GUI to facilitate it is. Props to Andrew Kutz for providing an sVMotion GUI plugin for free while VMware expected us to fumble around with sVMotion in the RCLI. Frankly, the sVMotion GUI should have been built into VirtualCenter the day the feature was introduced. The rumor is VMware didn’t want sVMotion to be that easy for us to use, lest we get ourselves into trouble with it. Apparently the same conscience feels no guilt about the ease of snapshotting and the risk associated with leaving snapshots open. VMware borrowed code from the hot cloning feature and will allow disk type changing during the sVMotion process. Using the same example as above, during an sVMotion we can migrate on the fly from a pre-allocated disk type to a thinly provisioned sparse disk.
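
For those who haven’t had the pleasure, the RCLI dance today looks roughly like this (the vCenter, datacenter, and datastore names are hypothetical examples):

    # Interactive mode prompts for the datacenter, VM, and destination datastore
    svmotion --interactive

    # Non-interactive form: migrate myvm from datastore1 to datastore2
    svmotion --url=https://vcenter01/sdk --datacenter=DC1 \
             --vm="[datastore1] myvm/myvm.vmx:datastore2"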

vApps. vApps allow us to group tiered applications or VMs into a single virtual service entity. This isn’t simply global groups for VMs or Workstation teams; VMware has taken it a step further by tying together VM interdependencies and resource allocations, which allows things like single-step power operations (think one click staggered power operations in the correct order), cloning, deployment, and monitoring of the entire application workload. The Open Virtualization Format (OVF) 1.0 standard will also be integrated, which will support the importing and exporting of vApps. I know what you’re thinking – What will VMware think of next? Keep reading.

VMFS-3 Online Volume Grow. I like to read more into a name or a phrase than I probably should. Does this mean we will see online volume grow in VI3 before the release of VI4? Or does this mean that in VI4, VMFS is unchanged and stays at the “3” designation? The latter would be something to look forward to because personally I can do without datastore upgrades. Granted, with the emerging VMware technology, shuffling VMs and storage around, even hot, makes the process of datastore upgrades pretty easy; however, we still need the time to plan and perform the tasks, plus the extra shared storage to leapfrog the datastore upgrades. So what is online volume grow? Answer: seamless VMFS volume growing without the use of extents. OVG facilitates a two step process: grow the underlying hardware LUNs (in a typical scenario this is going to be some type of shared storage like SAN, iSCSI, or NFS), then extend the VMFS volume so that it consumes the extra space on the LUNs. Microsoft administrators may be familiar with using the “DISKPART” command line utility to extend a non-OS partition. Same thing. Now, not everyone will have the type of storage that allows dynamic or even offline LUN growth at the physical layer. For this, VMware still allows VMFS volume growth through the use of extents, but doing so doesn’t make my skin crawl any less than it did when I first learned about extents.
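
As a refresher, that DISKPART operation is just a few commands once the LUN behind the volume has been grown (the volume number is a hypothetical example; extend works on non-system partitions):

    C:\> diskpart
    DISKPART> list volume
    DISKPART> select volume 2
    DISKPART> extend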

vNetwork Distributed Switch. I think VMware idolizes Hitachi. Any storage administrator who has been around Hitachi for a while will know what I’m talking about here. Hitachi likes to periodically change the names of their hardware and software technology whether it makes sense or not. More often than not, each of their technologies has two names/acronyms at a minimum. In some cases three. VMware is keeping up the pace with their name changes. What was once Distributed Virtual Switch (DVS) at VMworld 2008 is now vNetwork Distributed Switch (vNDS). Notice the case sensitivity there. I have dinged, and will continue to ding, anyone for getting VMware’s branding wrong, but I promise to try to be polite about it because I realize the number of people who are as anal as I am falls somewhere between nobody and hardly anyone. The vNDS is a virtual network switch that can be shared by more than one ESX host. I think the idea behind the vNDS falls in line with host profiles: automated network configuration and consistency across hosts. Not only will this save us the time of manually creating switches and port groups (or generating the scripts to automate the process), but it will help guarantee we don’t run into VM migration problems, which matters to more and more enterprise features (basically any feature that makes use of hot or cold VMotion or sVMotion). Add the Cisco Nexus 1000v into the mix and we see that VMware networking is becoming more automated, robust, and flexible, but with added complexity, which could mean longer time to resolve network related issues.

Last but not least, Fault Tolerance. Truth be told, this is another VMware technology that has gone through a Marketing department name change, but that was announced at VMworld 2008 and I’ve already ranted about it, so I’ll let it go. In a single sentence, FT is an ESX/ESXi technology that provides continuous availability for virtual machines using VMware vLockstep functionality. It works by having identical VMs run in virtual lockstep on two separate hosts. The “primary” VM is in the active state doing what it does best: receiving requests, serving information, and running applications on the network. A “secondary” VM follows all changes made on the primary VM. VMware vLockstep captures all nondeterministic transactions that occur on the primary VM. The transactions are sent to the secondary VM running on a different host. All of this happens with a latency of less than a single second. If the primary VM goes down, the secondary takes over almost instantly with no loss of data or transactions. This is where FT differs from VMware High Availability (HA): HA is a cold restart of a failed VM, while in FT the VM is already running. At what cost does this FT technology come to us? I don’t know. VMware is tight lipped on licensing thus far, but I can tell you that FT is enabled on an individual, VM-by-VM basis, not at a global datacenter, cluster, or host level. Have you figured out the other significant cost yet? Virtual Infrastructure resources: CPU, RAM, disk, network. The secondary VM is running in parallel with the primary. That means for each FT protected VM, we essentially need double the VI resources from the four food groups. This is a higher level of protection of VM workloads, in fact the highest level of protection we’ve seen yet. This level of protection comes to us at a premium, and thus I expect to see carefully planned and sparing usage of FT in the datacenter for the most critical workloads. Hopefully all will realize this isn’t VMware gouging us for more money. I expect FT to be a separately licensed component, and by that, VMware gives us the choice of whether to implement it or not. That’s key because not all shops will have a need for FT, so why should they be forced to purchase it? Customers want options and flexibility through adaptive and competitive licensing models.

This is an exciting list of new features and functionality that I look forward to working with. Hopefully we see them in the coming year. For those in the competing virtualization camps who think you are catching up with VMware – here’s your answer. VMware will continue to raise the bar while you play catch up. You’ve not done your homework if you thought VMware would sit back and relax, resting on its laurels. When has VMware ever been known for this? VMware has hundreds of ideas in the queues waiting for development. Ideas for innovation larger than you or I could imagine. Personally I think there is room for all three of the major hypervisor players in the ecosystem. Certainly the competition is good for the customer. It forces everyone to bring their “A” game. Game on.

VMworld Europe 2009 Wednesday

February 25th, 2009

I need to make this quick because it’s 3:25am and I risk not waking up for my sessions tomorrow in four hours.

It has been a whirlwind of a day. I arrived at the conference and found out by word of mouth that VMware had announced their list of vExpert recipients. I was one of 300 people on the planet chosen as a vExpert based on various contributions I’ve made to the VMware virtualization community, including forum activity over the years, evangelism through blogging, podcasting, VMUG leadership, etc. I can proudly display the silver vExpert logo on my blog. This is a nice gesture from VMware to recognize people in the community who have given much of themselves to promote a product they believe in and help shape the future of our planet.

I attended some good sessions. Yesterday I learned about VMware vCenter Chargeback. Its features seem fairly consistent with other chargeback solutions I’ve tested. There is still not much automated help for estimating VM infrastructure and operational costs prior to VM deployment for new servers/applications/workloads, but when I asked about this during Q&A, the speaker assured me it would be coming in future versions. vCenter Chargeback is also going to add an additional database to vCenter. For those with vCenter and Update Manager, we’re now up to three separate databases. The chargeback database has to be pretty simple – I don’t understand why additional tables can’t be created in the vCenter database for chargeback, eliminating the need for an additional database. Where I get nervous about databases is during vCenter upgrades and the additional time and effort required to repair or back out from a failed database upgrade.

I attended a few more good sessions today, most notably TA15 Protecting your vCenter Server using vCenter Heartbeat, and LAB11 VMware VI Toolkit for Windows (PowerShell), where I was assisted by none other than Carter Shanklin, whom many might recognize from Twitter. Carter also delivered a knockout session which I hear is currently ranked #1 among all sessions. In the past, it wasn’t a show stopper for the virtual infrastructure if VirtualCenter was down for a brief to moderately extended period of time. With all of the components announced recently that tie into vCenter Server, the importance of vCenter Server uptime (and vSphere as a whole) has increased exponentially. vCenter Server is evolving into an enterprise application requiring 99.9999% uptime. The additional moving parts will introduce increased complexity and potentially new operational and support standards for vSphere. Our models will need to be adapted to fit the uptime requirements of vSphere.

The second VMTN: Ask the Experts session was held today. We had more people in the community lounge than yesterday but still not many visitors who were looking for assistance with VMware virtualization. I was pulled away by Jessica, a Systems Engineer with VMware, and a camera crew to give an interview about vExpert, along with some general chit chat about the show. That interview will be posted on vmworld.com.

Moving along into the evening, I attended the VMworld party, which started at 20:00. It was a great time. To the left, that’s Mike Laverick walking through the entrance with his video camera in tow. There was live music, including two women who kicked things off with some techno violin. I thought the food was pretty good and there was quite a variety. The presentation of the food was also interesting, as you will see from the photos below. The man at the bar in the brown jacket with his back turned to me is none other than Jonathan Reeve of Hyper9.

[Party photos]

I was the lucky recipient of a Flip Video mino HD from Tripwire.

This is a slick little video recording device which records up to 1 hour of HD video and sound on internal memory.

I hung out with a lot of friends and talked with some interesting people like Brian Madden, who always has interesting stories to tell.


[Photo: coins buried in candle wax] The story behind this picture is that while waiting in line to get into the party, I buried five Euros worth of coins, along with a few US coins, 1/2 inch deep in this hot candle wax. The experiment was to see if anyone would dig them out after the candle wax had dried. By the time we left the party, all the Euro coins were gone; someone had dug them out of the hot wax, leaving peeled wax shavings on the ground. They left the US coins and my card.

The VMworld party ended at midnight and some of us walked down the strip to a small techno bar that was jam packed. There was a live DJ, dancing, drinking, and making out. As at the Veeam boat party the other night, I ran into Tarry Singh, Strategic Business Consultant: Data Center (Cloud Computing, Virtualization). Tarry is funny as hell and that guy can definitely cut a rug. I’ve got a lot of video footage from tonight but cannot post any due to very poor upload speeds from the hotel.

It’s late and the Hyper9 alien and I are tired. Goodnight.


VMworld Europe 2009 Tuesday keynote

February 24th, 2009

The general session keynote was kicked off by Maurizio Carli, General Manager EMEA. Maurizio briefly talked about VMware EMEA growth:

  • VMworld Europe 2008 4,500 attendees
  • VMworld Europe 2009 4,700 attendees
  • 100 sponsors this year

Paul Maritz, President and CEO, began his keynote discussing today’s IT problems and how they are not sustainable into the future. The solution is:

  • Efficiency
  • Control
  • Choice

VMware addresses the above with the following initiatives:

  1. VDC-OS – Foundation for the Cloud
  2. vCloud – Choice and Cloud Federation
  3. vClient – Desktop as a Service

VDC-OS

The Cloud as Architecture from the bottom up. Virtualization is the key to making all of this happen in an evolutionary way:

  • Datacenter/Cloud – VMware vSphere
    • Existing Apps/New Apps – Existing and multiple future app models
    • Management – SLA management model
    • Policies – Security, Compliance…
    • Software – Scale and availability through software
    • Hardware – Industry standard building blocks

Paul went on to discuss the vSphere Architecture and its components. Other than the vSphere name being introduced, the slide looked identical to the one presented at VMworld 2008 and to what exists on the VDC-OS web page.

VMware vCenter Suite SLA Driven Management Model:

  • Availability
  • Security
  • Performance

2009 is the year virtualization users have been waiting for. Quoting Paul, there will be no reason why we cannot virtualize 100% of the workloads in our environment. That is a confident statement and it makes me enthusiastic about things to come.

vCloud

I have been witness to a lot of discussion, including a degree of uncertainty (some of it my own), concerning cloud computing. VMware is addressing the concerns by working with service providers (e.g. SunGard) to ensure compatibility between internal and external clouds. In addition, they are working with standards bodies to avoid a “Hotel California” situation where you can check in but never check out.

Paul brought up a few guest speakers to talk about the cloud and they performed live demos as well.


vClient

Unfortunately, at this point the wireless went down and I was scrambling to reproduce the content above that I had lost and hadn’t yet saved.  As a result, I didn’t get as much of the vClient content as I would have liked.  Brian Madden was licking his chops for desktop content, so hopefully he can round out the discussion.

VMware View Enables Desktop as a Service. Layers from the bottom up:

  • VMware View
  • vCenter
  • VDC-OS/vSphere
  • Hardware

VMware View: Complete Roll-Out in 2009:

  • Management
    • Centralized template-based management
    • App virtualization
    • Thin provisioning
  • WAN
    • High latency
    • Low bandwidth
    • Productive Desktop
  • LAN
    • HD video
    • Flash
    • 3D graphics
  • Local
    • Use local resources
    • Optimal media experience
    • Rich portable desktop

The next speaker to take the stage talked about SAP.  Rather than listen to him, I spent some time editing this post for final submission.

I’m now heading on to the sessions.

For those interested, don’t miss the VMTN:  Ask the Experts session today and tomorrow at 13:00 in the Community Lounge.  My wife Amy baked chocolate chip cookies for those who attend.  Hurry before they run out!

Next generation of VMware Virtual Infrastructure named VMware vSphere

December 19th, 2008

Today at the Minneapolis VMware User Group (VMUG) meeting, VMware employees disclosed to a group of 150+ attendees the new name for the next generation of Virtual Infrastructure many have been referring to as VI4 or VI.next.  The new name is VMware vSphere.

I value and respect the various relationships I have with VMware and thus before posting this news, I checked with authoritative sources inside VMware.  VMware Marketing has endorsed the release of this information to the public.

VMware also released a few new configuration maximum details on vSphere but for now I am keeping that information to myself.  Other audience members in attendance may decide to break this news.