Following is a VMworld Europe 2009 preview of features VMware is developing for future versions of vSphere. There is no guarantee or timeline for when these features will be introduced into vSphere. Furthermore, the features should not be thought of as a group that will be implemented together at one time. A more likely scenario is that they will be integrated independently into major or incremental future builds. With that disclaimer out of the way, let’s dig into the good stuff.
Pluggable Storage Architecture (PSA). ESX/ESXi will have a new storage architecture called PSA, a collection of VMkernel APIs that allow third-party hardware vendors to inject code into the ESX storage I/O path. Third-party developers will be able to design custom load balancing techniques and failover mechanisms for specific storage arrays. This will happen in part through VMware’s Native Multipathing Plugin (NMP), which VMware will distribute with ESX. Additional plugins from storage partners may also appear. During the lab, I explored the PSA commands using the ESXi “unsupported” console via PuTTY.
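For the curious, a few of the PSA-related commands I poked at from the unsupported console looked roughly like this (command names and output are from pre-release code and may change before vSphere ships):

```shell
# List the Storage Array Type Plugins (SATPs) registered with the NMP
esxcli nmp satp list

# List the Path Selection Plugins (PSPs) available for path selection/load balancing
esxcli nmp psp list

# Show which SATP/PSP combination the NMP has claimed for each storage device
esxcli nmp device list
```

The interesting part is that the SATP and PSP lists are pluggable: a storage partner’s module would simply show up alongside VMware’s defaults.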
Update: Duncan Epping over at Yellow Bricks just wrote about Pluggable Storage Architecture, expanding quite a bit on its components. View that post here.
Hot Cloning of Virtual Machines. This upcoming feature is fairly self-explanatory: duplicate or clone a virtual machine while the source VM is running. I think this feature will be useful for troubleshooting or baselining a guest OS on the fly, cloning a production VM into an experiment environment without causing a temporary outage on the source. Additionally, during the cloning process, VMware is going to allow us to choose a different disk type than that of the source VM. For example, the source VM may have a pre-allocated disk type, but we can change the clone destination disk type to a thinly provisioned sparse disk. Fragmentation anyone? Speaking of pitfalls, you may wonder how VMware will handle powering on the destination VM for the first time with the same network name and IP address as the clone source that is currently running on the network. Simple. We already have the technology today: the Guest Customization process. While guest customization has always been optional for us, it more or less becomes mandatory in hot cloning, so I’d start getting used to it.
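The disk type conversion piece isn’t magic, by the way; you can do essentially the same thing by hand today with vmkfstools when cloning a disk offline (the datastore paths below are made up for illustration):

```shell
# Clone a pre-allocated source disk to a thin provisioned copy
vmkfstools -i /vmfs/volumes/datastore1/prodvm/prodvm.vmdk \
           -d thin \
           /vmfs/volumes/datastore1/clonevm/clonevm.vmdk
```

The new feature presumably performs this conversion while the source disk is in use, which is the hard part.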
Update: As a few people have pointed out in the comments, hot cloning of virtual machines is available to us prior to the release of vSphere. VM hot cloning was introduced in VirtualCenter 2.5 Update 2. See the following release notes: http://www.vmware.com/support/vi3/doc/vi3_esx35u2_vc25u2_rel_notes.html
Host Profiles. Simplify and standardize ESX/ESXi host configuration management via policies. The idea is to eliminate manual configuration through the console or VIC, which can be subject to human error or neglect. To a good degree, host profiles will replace many of the automated deployment methods in your environment. Notice I didn’t say host profiles will replace all automated methods. There are configuration areas which host profile policies don’t cover. You’ll need supplemental coverage for those areas, so don’t permanently delete your scripts and processes just yet. You’ll need to keep a few of them around even after implementing host profiles. Host profiles can be created by hand from scratch, or a template can be constructed from an existing host’s configuration. Lastly, profiles are not just for the initial deployment; they can be used to maintain compliance of host configurations going forward. Applying host profiles reminds me a lot of dropping Microsoft Active Directory Group Policy Objects (GPOs) on an OU folder structure. Monitoring compliance across the datacenter or cluster feels strikingly similar to scanning and remediating via VMware Update Manager.
Storage VMotion. The sVMotion technology isn’t new to those on the VI3 platform already, but the coming GUI to facilitate the sVMotion is. Props to Andrew Kutz for providing an sVMotion GUI plugin for free while VMware expected us to fumble around with sVMotion in the RCLI. Frankly, the sVMotion GUI should have been built into VirtualCenter the day it was introduced. The rumor is VMware didn’t want sVMotion to be that easy for us to use, hence we could get ourselves into some trouble with it. Apparently the same conscience feels no guilt about the ease of snapshotting and the risk associated with leaving snapshots open. VMware borrowed code from the hot cloning feature and will allow disk type changing during the sVMotion process. Using the same example as above, during an sVMotion we can migrate on the fly from a pre-allocated disk type to a thinly provisioned sparse disk.
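For reference, this is the sort of RCLI incantation the new GUI saves us from typing (the server name, datacenter, and datastore names below are placeholders):

```shell
# Interactive mode walks you through the prompts
svmotion --interactive

# Non-interactive: relocate myvm's storage from datastore1 to datastore2
svmotion --url=https://vcserver/sdk --username=administrator \
         --datacenter=Datacenter1 \
         --vm="[datastore1] myvm/myvm.vmx:datastore2"
```

Functional, sure, but hardly something you want to hand to the on-call junior admin at 2 AM.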
vApps. vApps allow us to group tiered applications or VMs into a single virtual service entity. This isn’t simply global groups for VMs or Workstation teams; VMware has taken it a step further by tying together VM interdependencies and resource allocations, which allows things like single-step power operations (think one-click staggered power operations in the correct order), cloning, deployment, and monitoring of the entire application workload. The Open Virtualization Format (OVF) 1.0 standard will also be integrated, which will support the importing and exporting of vApps. I know what you’re thinking – what will VMware think of next? Keep reading.
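Assuming VMware’s OVF Tool picks up vApp support alongside the OVF 1.0 integration, I’d guess exporting and importing could look something like this (the inventory paths and vi:// locators here are purely illustrative, not confirmed syntax):

```shell
# Export a vApp from vCenter to an OVF package
ovftool "vi://vcserver/Datacenter1/vm/MyvApp" /exports/MyvApp.ovf

# Import that package into a different environment
ovftool /exports/MyvApp.ovf "vi://othervc/Datacenter2/host/Cluster1"
```

The appeal is portability: the whole multi-VM service, dependencies and all, moves as one unit.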
VMFS-3 Online Volume Grow. I like to read more into a name or a phrase than I probably should. Does this mean we will see online volume grow in VI3 before the release of VI4? Or does it mean that in VI4, VMFS is unchanged and stays at the “3” designation? The latter would be something to look forward to, because personally I can do without datastore upgrades. Granted, with the emerging VMware technology for shuffling VMs and storage around, even hot, datastore upgrades are mechanically pretty easy, but we still need the time to plan and perform the tasks, plus the extra shared storage to leapfrog the datastore upgrades. So what is online volume grow? Answer: seamless VMFS volume growing without the use of extents. OVG facilitates a two-step process: first grow the underlying hardware LUNs (in a typical scenario this is going to be shared block storage such as a Fibre Channel or iSCSI SAN), then extend the VMFS volume so that it consumes the extra space on the LUNs. Microsoft administrators may be familiar with using the DISKPART command line utility to expand a non-OS partition. Same thing. Now, not everyone will have the type of storage that allows dynamic or even offline LUN growth at the physical layer. For this, VMware still allows VMFS volume growth through the use of extents, but doing so doesn’t make my skin crawl any less than it did when I first learned about extents.
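For comparison, the DISKPART equivalent on the Windows side, after the underlying LUN has been grown, goes like this:

```shell
C:\> diskpart
DISKPART> list volume
DISKPART> select volume 2
DISKPART> extend
```

Conceptually, online volume grow is the same move: expand the container at the physical layer, then tell the file system to claim the new space.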
vNetwork Distributed Switch. I think VMware idolizes Hitachi. Any storage administrator who has been around Hitachi for a while will know what I’m talking about here. Hitachi likes to periodically change the names of their hardware and software technology whether it makes sense or not. More often than not, each of their technologies has two names/acronyms at a minimum; in some cases, three. VMware is keeping up the pace with their name changes. What was once Distributed Virtual Switch (DVS) at VMworld 2008 is now vNetwork Distributed Switch (vNDS). Notice the case sensitivity there. I have and will continue to ding anyone for getting VMware’s branding wrong, but I promise to try to be polite about it because I realize the number of people who are as anal as I am falls somewhere between nobody and hardly anyone. The vNDS is a virtual network switch that can be shared by more than one ESX host. I think the idea behind the vNDS falls in line with host profiles: automated network configuration and consistency across hosts. Not only will this save us the time of manually creating switches and port groups (or generating the scripts to automate the process), but it will help guarantee we don’t run into VM migration problems, which more and more enterprise features are dependent on (basically any feature that makes use of hot or cold VMotion or sVMotion). Add the Cisco Nexus 1000V into the mix and we see that VMware networking is becoming more automated, robust, and flexible, but with added complexity, which could mean longer time to resolve network related issues.
Last but not least, Fault Tolerance. Truth be told, this is another VMware technology that has gone through a Marketing department name change, but it was announced at VMworld 2008 and I’ve already ranted about it, so I’ll let it go. In a single sentence, FT is an ESX/ESXi technology that provides continuous availability for virtual machines using VMware vLockstep functionality. It works by having identical VMs run in virtual lockstep on two separate hosts. The “primary” VM is in the active state doing what it does best: receiving requests, serving information, and running applications on the network. A “secondary” VM follows all changes made on the primary VM. VMware vLockstep captures all nondeterministic events that occur on the primary VM and sends them to the secondary VM running on a different host. All of this happens with a latency of less than a second. If the primary VM goes down, the secondary takes over almost instantly with no loss of data or transactions. This is where FT differs from VMware High Availability (HA): HA is a cold restart of a failed VM, whereas with FT the VM is already running. At what cost does this FT technology come to us? I don’t know. VMware is tight-lipped on licensing thus far, but I can tell you that FT is enabled on an individual, VM-by-VM basis, not at a global datacenter, cluster, or host level. Have you figured out the other significant cost yet? Virtual Infrastructure resources: CPU, RAM, disk, network. The secondary VM runs in parallel with the primary, which means for each FT-protected VM we essentially need double the VI resources from the four food groups. This is a higher level of protection of VM workloads; in fact, it is the highest level of protection we’ve seen yet. That protection comes to us at a premium, and thus I expect to see carefully planned and sparse usage of FT in the datacenter for the most critical workloads. Hopefully all will realize this isn’t VMware gouging us for more money.
I expect FT to be a separately licensed component and by that, VMware gives us the choice whether to implement or not. That’s key because not all shops will have a need for FT so why should they be forced to purchase it? Customers want options and flexibility through adaptive and competitive licensing models.
This is an exciting list of new features and functionality that I look forward to working with. Hopefully we see them in the coming year. To those in the competing virtualization camps who think you are catching up with VMware – here’s your answer. VMware will continue to raise the bar while you play catch-up. You’ve not done your homework if you thought VMware would sit back and relax, resting on its laurels. When has VMware ever been known for that? VMware has hundreds of ideas in the queue waiting for development, ideas for innovation larger than you or I could imagine. Personally, I think there is room for all three of the major hypervisor players in the ecosystem. Certainly the competition is good for the customer. It forces everyone to bring their “A” game. Game on.