Posts Tagged ‘ESX’

StarWind Software Inc. Announces Opening of German Office

October 4th, 2011

Press Release:

StarWind Software Inc. Announces Opening of German Office

StarWind Software Inc. Opens a New Office in Germany to Drive Local Channel Growth

Burlington, MA – October 1, 2011 – StarWind Software Inc., a global leader and pioneer in SAN software for building iSCSI storage servers, announced today that it has opened a new office in Sankt Augustin, Germany to service the growing demand for StarWind’s iSCSI SAN solutions. The German office expands StarWind’s ability to offer local sales and support services to its fast-growing base of customers and prospects in the region.

“We have seen substantial growth in our customer base and level of interest in our solutions in Europe,” said Artem Berman, Chief Executive Officer of StarWind Software. “Since the market potential for our products is significant, we have opened a new office in Germany to strengthen our presence there. We shall use our best efforts to complete the localization of resources.”

“Our local presence in Germany will help us to work closely with our partners and customers, to better meet their needs and to rapidly develop their distribution networks,” said Roman Shovkun, Chief Sales Officer of StarWind Software. “The new office permits us to deliver superior sales and support to our customers and to the growing prospect base in the region.”

The new office is located at:
Monikastr. 13
53757 Sankt Augustin
Primary contact: Veronica Schmidberger

About StarWind Software Inc.
StarWind Software is a global leader in storage management and SAN software for small and midsize companies. StarWind’s flagship product is SAN software that turns any industry-standard Windows Server into a fault-tolerant, fail-safe iSCSI SAN. StarWind iSCSI SAN is qualified for use with VMware, Hyper-V, XenServer and Linux and Unix environments. StarWind Software focuses on providing small and midsize companies with affordable, highly available storage technology that previously was available only in high-end storage hardware. Advanced enterprise-class features in StarWind include Automated Storage Node Failover and Failback, Replication across a WAN, CDP and Snapshots, Thin Provisioning and Virtual Tape management.

StarWind has been a pioneer in the iSCSI SAN software industry since 2003 and is the solution of choice for over 30,000 customers worldwide in more than 100 countries, from small and midsize companies to governments and Fortune 1000 companies.

SRM 5.0 Replication Bits and Bytes

October 3rd, 2011

VMware has pushed out several releases and features in the past several weeks.  It can be a lot to digest, particularly if you’ve been involved in the beta programs for these new products because there were some changes made when the bits made their GA debut. One of those new products is SRM 5.0.  I’ve been working a lot with this product lately and I thought it would be helpful to share some of the information I’ve collected along the way.

One of the new features in SRM 5.0 is vSphere Replication.  I’ve heard some people refer to it as Host Based Replication or HBR for short.  In terms of how it works, this is an accurate description and it was the feature name during the beta phase.  However, by the time SRM 5.0 went to GA, each of the replication components went through a name change as you’ll see below. If you know me, you’re aware that I’m somewhat of a stickler on branding.  As such, I try to get it right as much as possible myself, and I’ll sometimes point out corrections to others in an effort to lessen confusion rather than perpetuate it.

Another product feature launched around the same time is the vSphere Storage Appliance or VSA for short.  In my brief experience with both products I’ve mentioned so far, I find it’s not uncommon for people to associate or confuse SRM replication with a dependency on the VSA.  This is not the case – they are quite independent.  In fact, one of the biggest selling points of SRM based replication is that it works with any VMware vSphere certified storage and protocol.  If you think about it for a minute, this becomes a pretty powerful argument for getting a DR site set up with the storage you have today.  It also allows you to get SRM in the door based on the same principles, with the ability to grow into scalable array based replication in an upcoming budget cycle.

With that out of the way, here’s a glimpse at the SRM 5.0 native replication components and terminology (both beta and GA).

Beta Name        GA Name                                  GA Acronym
HBR              vSphere Replication                      VR
HMS              vSphere Replication Management Server    vRMS
HBR server       vSphere Replication Server               vRS
ESXi HBR agent   vSphere Replication Agent                vR agent


Here is a look at how the SRM based replication pieces fit in the SRM 5.0 architecture.  Note the storage objects shown are VMFS, but they could be either VMFS or NFS datastores on either side:


Diagram courtesy VMware, Inc.

To review, the benefits of vSphere Replication are:

  1. No requirement for enterprise array based replication at both sites.
  2. Replication between heterogeneous storage, whatever that storage vendor or protocol might be at each site (so long as it’s supported on the HCL).
  3. Per VM replication. I didn’t mention this earlier but it’s another distinct advantage of VR over per datastore replication.
  4. It’s included in the cost of SRM licensing. No extra VMware or array based replication licenses are needed.

Do note that access to the VR feature is by way of a separate installable component of SRM 5.0.  If you haven’t already installed the component during the initial SRM installation, you can do so afterwards by running the SRM 5.0 setup routine again at each site.

I’ve talked about the advantages of VR.  Again, I think they are a big enabler for small to medium sized businesses and I applaud VMware for offering this component which is critical to the best possible RPO and RTO.  But what about the disadvantages compared to array based replication?  In no particular order:

  1. Cannot replicate templates.  The ‘why’ comes next.
  2. Cannot replicate powered off virtual machines.  The ‘why’ for this follows.
  3. Cannot replicate files which don’t change (powered off VMs, ISOs, etc.). This is because replications are handled by the vRA component – a shim in vSphere’s storage stack deployed on each ESX(i) host. By the way, Changed Block Tracking (CBT) and VMware snapshots are not used by the vRA. The mechanism uses a bandwidth efficient technology similar to CBT but it’s worth pointing out it is not CBT. Another item to note here is that VMs which are shut down won’t replicate writes during the shutdown process. This is fundamentally because only VMs which are powered on and stay powered on are replicated by VR. The current state of the VM would, however, be replicated once the VM is powered back on.
  4. Cannot replicate FT VMs. Note that array based replication can be used to protect FT VMs, but once recovered they are no longer FT enabled.
  5. Cannot replicate linked clone trees (Lab Manager, vCD, View, etc.)
  6. Array based replication will replicate a VMware based snapshot hierarchy to the destination site while leaving it intact. VR can replicate VMs with snapshots but they will be consolidated at the destination site.  This is again based on the principle that only changes are replicated to the destination site.
  7. Cannot replicate vApp consistency groups.
  8. VR does not work with virtual disks opened in “multi-writer mode” which is how MSCS VMs are configured.
  9. VR can only be used with SRM.  It can’t be used as a data replication solution for your vSphere environment outside of SRM.
  10. Losing a vSphere host means that the vRA and the current replication state of a VM or VMs is also lost.  In the event of HA failover, a full-sync must be performed for these VMs once they are powered on at the new host (and vRA).
  11. The number of VMs which can be replicated with VR will likely be less than array based replication depending on the storage array you’re comparing to.  In the beta, VR supported 100 VMs.  At GA, SRM 5.0 supports up to 500 VMs with vSphere Replication. (Thanks Greg)
  12. In band VR requires additional open TCP ports:
    1. 31031 for initial replication
    2. 44046 for ongoing replication
  13. VR requires vSphere 5 hosts at both the protected and recovery sites while array based replication follows only general SRM 5.0 minimum requirements of vCenter 5.0 and hosts which can be 3.5, 4.x, and/or 5.0.
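If firewalls sit between your protected and recovery sites, the two VR ports listed above (31031 for initial replication, 44046 for ongoing replication) are worth a quick connectivity test.  Here’s a generic TCP probe sketch in Python; nothing about it is SRM specific, and the host name you pass in is a placeholder for your own VR server address:

```python
import socket

# The two VR ports called out above.
VR_PORTS = {31031: "initial replication", 44046: "ongoing replication"}

def probe(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_vr_ports(host):
    """Probe both VR ports on a host; returns {port: reachable?}."""
    return {port: probe(host, port) for port in VR_PORTS}
```

A False result for either port means the firewall (or the service) needs attention before replication traffic will flow.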

The list of disadvantages appears long but don’t let that stop you from taking a serious look at SRM 5.0 and vSphere Replication.  I don’t think there are many, if any, showstoppers in that list for small to medium businesses.

I hope you find this useful.  I gathered the information from various sources, much of it from an SRM Beta FAQ which, to the best of my knowledge, is still accurate today in the GA release.  If you find any errors or would like to offer corrections or additions, as always please feel free to use the Comments section below.

Professional VMware BrownBag Group Learning

September 19th, 2011


If you weren’t already aware, VMware vExpert Cody Bunch has been hosting a series of BrownBag learning sessions covering topics from the VCP4, VCAP4-DCA, and VCAP4-DCD exam blueprints, in addition to VCDX topics.  A number of individuals from the VMware community have been lending Cody assistance in leading these sessions.  I’ll be stepping up to the plate this Wednesday evening, 9/21 at 7pm CDT to help out.  I’ll be covering VCAP4-DCD exam blueprint objectives:

  • 1.1 Gather and analyze business requirements
  • 1.2 Gather and analyze application requirements
  • 1.3 Determine Risks, Constraints, and Assumptions

If you’re thinking of attempting the VCAP4-DCD exam or if you’re preparing for the VCDX certification, this session is for you.  Again, details below, sign up today – it’s free!

Updated 9/21/11: The live session is complete but you can download the recorded version at the Professional VMware link above.  I’m also embedding a link to the SlideRocket presentation for as long as my trial account is active (through the beginning of October).

Virtualization Wars: Episode V – VMware Strikes Back

July 12th, 2011

At 9am PDT this morning, Paul Maritz and Steve Herrod take the stage to announce the next generation of the VMware virtualized datacenter.  Each new product and set of features is impressive in its own right.  Combine them and what you have is a major upgrade of VMware’s entire cloud infrastructure stack.  I’ll highlight the major announcements and some of the detail behind them.  In addition, the embargo and NDA surrounding the vSphere 5 private beta expire today.  If you’re a frequent reader of blogs or the Twitter stream, you’re going to be bombarded with information at fire-hose-to-the-face pace, starting now.



vSphere 5.0 (ESXi 5.0 and vCenter 5.0)

At the heart of it all is a major new release of VMware’s type 1 hypervisor and management platform.  Increased scalability and new features make virtualizing those last remaining tier 1 applications attainable.



ESX and the Service Console are formally retired as of this release.  Going forward, we have just a single hypervisor to maintain and that is ESXi.  Non-Windows shops should find some happiness in a Linux based vCenter appliance and sophisticated web client front end.  While these components are not 100% fully featured yet in their debut, they come close.

Storage DRS is the long-awaited complement to the CPU and memory based DRS introduced in VMware Virtual Infrastructure 3.  SDRS will coordinate initial placement of VM storage in addition to keeping datastore clusters balanced (space usage and latency thresholds including SIOC integration) with or without the use of SDRS affinity rules.  Similar to DRS clusters, SDRS enabled datastore clusters offer maintenance mode functionality which evacuates (Storage vMotion or cold migration) registered VMs and VMDKs (still no template migration support, c’mon VMware) off of a datastore which has been placed into maintenance mode.  VMware engineers recognize the value of flexibility, particularly when it comes to SDRS operations where thresholds can be altered and tuned on a scheduled basis. For instance, IO patterns during the day when normal or peak production occurs may differ from nighttime IO patterns when guest based backups and virus scans occur.  When it comes to SDRS, separate thresholds would be preferred so that SDRS doesn’t trigger based on inappropriate thresholds.
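To illustrate the schedule-based threshold idea, here’s a toy sketch in Python.  The threshold values and time windows are invented for illustration only and are not VMware defaults:

```python
from datetime import time

# Hypothetical SDRS latency thresholds (ms) by time window:
# a tighter threshold during production hours, a looser one at
# night when backup and virus scan IO is expected.
DAY_START, DAY_END = time(7, 0), time(19, 0)
DAY_LATENCY_MS = 15
NIGHT_LATENCY_MS = 30

def active_latency_threshold(now):
    """Pick the latency threshold that applies at time-of-day `now`."""
    if DAY_START <= now < DAY_END:
        return DAY_LATENCY_MS
    return NIGHT_LATENCY_MS

def should_trigger_sdrs(observed_latency_ms, now):
    """True if observed datastore latency exceeds the active threshold."""
    return observed_latency_ms > active_latency_threshold(now)
```

With separate thresholds like these, a 20 ms latency spike would trigger a rebalance at noon but be ignored at 11pm when backups are hammering the datastores.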

Profile-Driven Storage couples storage capabilities (VASA automated or manually user-defined) to VM storage profile requirements in an effort to meet guest and application SLAs.  The result is the classification of a datastore, from a guest VM viewpoint, of Compatible or Incompatible at the time of evaluating VM placement on storage.  Subsequently, the location of a VM can be automatically monitored to ensure profile compliance.
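Conceptually, the Compatible/Incompatible classification boils down to a capability-set check.  Here’s a hypothetical sketch in Python; the capability names and datastore names are made up for illustration (real capabilities would come from VASA or user-defined tags):

```python
def classify_datastores(profile_required, datastores):
    """Mark each datastore Compatible if it advertises every
    capability the VM storage profile requires.

    profile_required: set of capability names the profile demands
    datastores: mapping of datastore name -> list of capabilities
    """
    result = {}
    for name, capabilities in datastores.items():
        ok = profile_required <= set(capabilities)  # subset test
        result[name] = "Compatible" if ok else "Incompatible"
    return result
```

For example, a tier 1 profile requiring {"replicated", "ssd"} would classify a datastore advertising ["replicated", "ssd", "dedupe"] as Compatible and one advertising only ["sata"] as Incompatible.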


I mentioned VASA previously, which is a new acronym for vSphere Storage APIs for Storage Awareness.  This new API allows storage vendors to expose topology, capabilities, and state of the physical device to vCenter Server management.  As mentioned earlier, this information can be used to automatically populate the capabilities attribute in Profile-Driven Storage.  It can also be leveraged by SDRS for optimized operations.

The optimal solution is to stack the functionality of SDRS and Profile-Driven Storage to reduce administrative burden while meeting application SLAs through automated efficiency and optimization.


If you look closely at all of the announcements being made, you’ll notice there is only one net-new release and that is the vSphere Storage Appliance (VSA).  Small to medium business (SMB) customers are the target market for the VSA.  These are customers who seek some of the enterprise features that vSphere offers like HA, vMotion, or DRS but lack the fibre channel SAN, iSCSI, or NFS shared storage requirement.  A VSA is deployed to each ESXi host which presents local RAID 1+0 host storage as NFS (no iSCSI or VAAI/SAAI support at GA release time).  Each VSA is managed by one and only one vCenter Server. In addition, each VSA must reside on the same VLAN as the vCenter Server.  VSAs are managed by the VSA Manager which is a vCenter plugin available after the first VSA is installed.  Its function is to assist in deploying VSAs, automatically mounting NFS exports to each host in the cluster, and providing monitoring and troubleshooting of the VSA cluster.


You’re probably familiar with the concept of a VSA, but at this point you should start to notice the key difference in VMware’s VSA: integration.  In addition, it’s a VMware supported configuration with “one throat to choke” as they say.  Another feature is resiliency.  The VSAs on each cluster node replicate with each other and if required will provide seamless fault tolerance in the event of a host node failure.  In such a case, a remaining node in the cluster will take over the role of presenting a replica of the datastore which went down.  Again, this process is seamless and is accomplished without any change in the IP configuration of VMkernel ports or NFS exports.  With this integration in place, it was a no-brainer for VMware to also implement maintenance mode for VSAs.  MM comes in two flavors: whole VSA cluster MM or single VSA node MM.

VMware’s VSA isn’t a freebie.  It will be licensed.  The figure below sums up the VSA value proposition:


High Availability (HA) has been enhanced dramatically.  Some may say the version shipping in vSphere 5 is a complete rewrite.  What was once foundational Legato AAM (Automated Availability Manager) is now finally evolving to scale further with vSphere 5.  Some of the new features include elimination of common issues such as DNS resolution dependencies, node communication over both the management network and storage, enhanced failure detection, IPv6 support, consolidated logging into one file per host, an enhanced UI, and an enhanced deployment mechanism (as if deployment wasn’t already easy enough, albeit sometimes error prone).

From an architecture standpoint, HA has changed dramatically.  HA has effectively gone from five (5) failover coordinator hosts to just one (1) in a Master/Slave model.  No more is there a concept of Primary/Secondary HA hosts; however, if you still want to think of it that way, it’s now one (1) primary host (the master) and all remaining hosts are secondary (the slaves).  That said, I would consider it a personal favor if everyone would use the correct version-specific terminology – less confusion when assumptions have to be made (not that I like assumptions either, but I digress).

The FDM (fault domain manager) Master does what you traditionally might expect: monitors and reacts to slave host & VM availability.  It also updates its inventory of the hosts in the cluster, and the protected VMs each time a VM power operation occurs.

Slave hosts have responsibilities as well.  They maintain a list of powered on VMs.  They monitor local VMs and forward significant state changes to the Master. They provide VM health monitoring and any other HA features which do not require central coordination.  They monitor the health of the Master and participate in the election process should the Master fail (the host with the most datastores and then the lexically highest moid [99>100] wins the election).
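The election tie-break described above (most datastores wins, with ties going to the lexically highest moid, where the string “99” beats “100”) can be sketched in a few lines of Python.  The host IDs and datastore counts here are illustrative, not real inventory data:

```python
def elect_master(hosts):
    """Elect the HA master from a list of (moid, datastore_count) pairs.

    The winner sees the most datastores; ties are broken by the
    lexically highest moid -- string comparison, so "99" > "100".
    """
    return max(hosts, key=lambda h: (h[1], h[0]))[0]
```

Note the quirk the [99>100] aside points at: because moids are compared as strings, "host-99" outranks "host-100" when both see the same number of datastores.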

Another new feature in HA is the ability to leverage storage to facilitate the sharing of stateful heartbeat information (known as Heartbeat Datastores) if and when management network connectivity is lost.  By default, vCenter picks two datastores for backup HA communication.  The choices are made based on how many hosts have connectivity and whether the storage is on different arrays.  Of course, a vSphere administrator may manually choose the datastores to be used.  Hosts manipulate HA information on the datastore based on the datastore type. On VMFS datastores, the Master reads the VMFS heartbeat region. On NFS datastores, the Master monitors a heartbeat file that is periodically touched by the Slaves. VM availability is reported by a file created by each Slave which lists the powered on VMs. Multiple Master coordination is performed by using file locks on the datastore.
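To picture the NFS heartbeat mechanism, here’s a generic Python sketch of a slave touching a heartbeat file and a master checking it for staleness.  The file path and the 15-second staleness window are invented; this only mimics the concept, not VMware’s implementation:

```python
import os
import time

HEARTBEAT_STALE_SECONDS = 15  # invented window, not a VMware value

def touch_heartbeat(path):
    """Slave side: update the heartbeat file's mtime (create if absent)."""
    with open(path, "a"):
        pass
    os.utime(path, None)  # set mtime to "now"

def slave_alive(path, now=None):
    """Master side: a slave is considered alive if its heartbeat file
    was touched within the staleness window."""
    now = time.time() if now is None else now
    try:
        return (now - os.path.getmtime(path)) < HEARTBEAT_STALE_SECONDS
    except OSError:  # file missing -> no heartbeat at all
        return False
```

The real datastore heartbeating also covers VM availability files and master lock coordination, but the touch-and-check pattern above is the core idea.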

As discussed earlier, there are a number of GUI enhancements which were put in place to monitor and configure HA in vSphere 5.  I’m not going to go into each of those here as there are a number of them.  Surely there will be HA deep dives in the coming months.  Suffice it to say, they are all enhancements which stack to provide ease of HA management, troubleshooting, and resiliency.

Another significant advance in vSphere 5 is Auto Deploy which integrates with Image Builder, vCenter, and Host Profiles.  The idea here is centrally managed stateless hardware infrastructure.  ESXi host hardware PXE boots an image profile from the Auto Deploy server.  Unique host configuration is provided by an answer file or VMware Host Profiles (previously an Enterprise Plus feature).  Once booted, the host is added to vCenter host inventory.  Statelessness is not necessarily a newly introduced concept; therefore, the benefits are strikingly similar to, say, ESXi boot from SAN: no local boot disk (right sized storage, increased storage performance across many spindles), scales to support many hosts, decoupling of host image from host hardware – statelessness defined.  It may take some time before I warm up to this feature. Honestly, it’s another vCenter dependency, this one quite critical with the platform services it provides.

For a more thorough list of anticipated vSphere 5 “what’s new” features, take a look at this release from


vCloud Director 1.5

Up next is a new release of vCloud Director, version 1.5, which marks the first vCD update since the product became generally available on August 30th, 2010.  This release is packed with several new features.

Fast Provisioning is the space saving linked clone support missing in the GA release.  Linked clones can span multiple datastores and multiple vCenter Servers. This feature will go a long way in bridging the parity gap between vCD and VMware’s sunsetting Lab Manager product.

3rd party distributed switch support means vCD can leverage virtualized edge switches such as the Cisco Nexus 1000V.

The new vCloud Messages feature connects vCD with existing AMQP based IT management tools such as CMDB, IPAM, and ticketing systems to provide updates on vCD workflow tasks.

vCD originally supported Oracle 10g std/ent Release 2 and 11g std/ent.  vCD now supports Microsoft SQL Server 2005 std/ent SP4 and SQL Server 2008 exp/std/ent 64-bit.  Oracle 11g R2 is now also supported.  Flexibility. Choice.

vCD 1.5 adds support for vSphere 5 including Auto Deploy and virtual hardware version 8 (32 vCPU and 1TB vRAM).  In this regard, VMware extends new vSphere 5 scalability limits to vCD workloads.  Boiled down: Any tier 1 app in the private/public cloud.

Last but not least, vCD integration with vShield IPSec VPN and 5-tuple firewall capability.

vShield 5.0

VMware’s message about vShield is that it has become a fundamental component in consolidated private cloud and multi-tenant VMware virtualized datacenters.  While traditional security infrastructure can take significant time and resources to implement, there’s an inherent efficiency in leveraging security features baked into and native to the underlying hypervisor.

Snagit Capture

There are no changes in vShield Endpoint, however, VMware has introduced static routing in vShield Edge (instead of NAT) for external connections and certificate-based VPN connectivity.


Site Recovery Manager 5.0

Another major announcement from VMware is the introduction of SRM 5.0.  SRM has already been quite successful in providing simple and reliable DR protection for the VMware virtualized datacenter.  Version 5 boasts several new features which enhance functionality.

Replication between sites can be achieved in a more granular per-VM (or even sub-VM) fashion, between different storage types, and it’s handled natively by vSphere Replication (vSR).  More choice in seeding of the initial full replica. The result is a simplified RPO.


Another new feature in SRM is Planned Migration which facilitates the migration of protected VMs from site to site before a disaster actually occurs.  This could also be used in advance of datacenter maintenance.  Perhaps your policy is to run your business 50% of the time from the DR site.  The workflow assistance makes such migrations easier.  It’s a downtime avoidance mechanism which makes it useful in several cases.

Failback can be achieved once the VMs are re-protected at the recovery site and the replication flow is reversed.  It’s simply another push of the big button to go in the opposite direction.

Feedback from customers has influenced UI enhancements. Unification of sites into one GUI is achieved without Linked Mode or multiple vSphere Client instances. Shadow VMs take on a new look at the recovery site. Improved reporting for audits.

Other miscellaneous notables are IPv6 support, a performance increase in guest VM IP customization, the ability to execute scripts inside the guest VM (in-guest callouts), new SOAP based APIs on the protected and recovery sides, and a dependency hierarchy for protected multi-tiered applications.


In summary, this is a magnificent day for all of VMware as they have indeed raised the bar with their market leading innovation.  Well done!


VMware product diagrams courtesy of VMware

Star Wars diagrams courtesy of Wookieepedia, the Star Wars Wiki

Watch VMware Raise the Bar on July 12th

July 11th, 2011

On Tuesday July 12th, VMware CEO Paul Maritz and CTO Steve Herrod are hosting a large campus and worldwide event where they plan to make announcements about the next generation of cloud infrastructure.

The event kicks off at 9am PDT and is formally titled “Raising the Bar, Part V”. You can watch it online by registering here.  The itinerary is as follows:

  • 9:00-9:45 Paul and Steve present – live online streaming
  • 10:00-12:00 five tracks of deep dive breakout sessions
  • 10:00-12:00 live Q&A with VMware cloud and virtualization experts
    • Eric Siebert
    • David Davis
    • Bob Plankers
    • Bill Hill

In addition, by attending live you also have the chance to win a free VMworld pass.  More details on that and how to win here.

I’m pretty excited both personally and for VMware.  This is going to be huge!

The time for ESXi is now. The Susan Gudenkauf interview

July 5th, 2011

I suspect VMware is going to orchestrate the release of the next generation of vSphere with VMworld 2011 or perhaps even sooner.  There is a big media event coming up on Tuesday July 12th called Raising the Bar, Part V. I’m guessing announcements and details will be showcased at this event.  At any rate, most people have known for quite some time that VMware ESX is being retired in favor of ESXi as VMware’s flagship type 1, enterprise, scalable, datacenter hypervisor going forward.  This next version of vSphere which VMware is about to release officially marks the end of ESX.  Only ESXi will be available onward into the future.  For most people, this doesn’t mean a lot since many have already made the formal transition from ESX to ESXi.  However, others have yet to commit to ESXi for various reasons.  Those who have already embraced ESXi are prepared for this next release of vSphere and all of the new features that it brings.  Those still on ESX are behind the eight ball.  With the upcoming version of vSphere, we will no longer have a choice to stay the course with ESX.  The time to make the transition to ESXi is becoming critical and that time is now. I had a chance to talk with Susan Gudenkauf who oversees the ESXi program and is helping customers make this transition.  Following is the interview.  If you’re still hesitant about ESXi, I hope this Q&A session helps.

Q. Can you please introduce yourself and tell us a little about your history at VMware?

A. My name is Susan Gudenkauf and I have been a VMware employee for 8 years. I started at VMware in June of 2003 when there were about 240 employees. My first role here was Senior Systems Engineer and there were only about 10 of us in the world. We are still a tight knit group and those guys are some of my best friends now. I stayed in that role for about 18 months and left when there were around 100 SEs (me still being the only female in the group). I went on to become a Technical Account Manager (TAM) so I could concentrate on the relationships with my customers instead of the more ‘hit-and-run’ work I was doing as an SE. At the time the territories were a lot larger and I covered 6 states and 3 Canadian Provinces by myself. After I had been a TAM for a couple of years I was promoted to TAM Manager and then Senior Manager. In January of 2011 I left Professional Services (PSO) to oversee the ESXi program as a Senior Program Manager focusing on customer migration. It was a huge step outside of my comfort zone but it’s been a wonderful experience so far. I’ve really been enjoying my career at VMware and it’s been fun getting to do different roles and having varying responsibilities.

Q. You are a legend in VMware certification history. Would you mind sharing that story?

A. I think the word ‘legend’ is a bit much (although it is certainly flattering) but I agree there is definitely some vibe around the VCP #1 thing (VMware Certified Professional). It’s one of the strangest and sweetest things that has ever happened to me. I didn’t know at the time it would become such a big deal, but I regularly get people asking for my autograph and for me to take photos with them. It’s really funny and I’m finally starting to enjoy that people care enough about it to even mention it. The first time I took the VMware course (on ESX 1.5) there wasn’t a VCP exam yet. I was working as a consultant for a partner at the time, but ended up going to VMware about 8 months after the course. The first week I started at VMware I took the VMware ESX 1.52 course and this time they had a certification exam – which we had to write on PAPER. The worst part was that we didn’t find out if we passed or failed for 6 weeks or something. It was pretty nerve wracking. Later I found out that the two guys I was friends with in the class (Ferhan Khan and Michael Cambian) were VCP #2 and VCP #3. It’s funny how we all ended up being the first three VCPs in the world.

Q. What is your primary responsibility in your current role?

A. One of the most important responsibilities of my role is to bring awareness to the fact that ESXi is the only VMware hypervisor going forward. In July 2010 we announced that VMware ESX was going away and the next major release would have ESXi as the only platform. It is amazing how many people didn’t realize that when I started with this program. It’s been a major focus for me for all of 2011. Most of what the team has been working on is at the ESXi Info Center here:

Q. VMware ESXi may be new to some readers. Can you talk about what ESXi is and provide some historical background on its development?

A. Introduced in 2007, ESXi is the most advanced hypervisor in the market today. It is a “bare-metal” hypervisor and is thinner, lighter, more secure and easier to manage than ESX. ESXi also has a great advantage over other hypervisors due to the fact that it has complete independence from an Operating System. This is key because the hypervisor is the foundation to your private or hybrid cloud and you need it to be solid.

Q. Is ESXi experimental or for non-production use? What is VMware’s support stance on ESXi?

A. ESXi is absolutely designed for production use. It is a fully featured hypervisor that delivers greater performance, reliability, security and scalability than ESX. It can be used to run any of the advanced features of vSphere in a multitude of use cases. In fact, ESXi is already used in Production by a large percentage of VMware customers. We do have an entry level product called VMware vSphere Hypervisor which is based on ESXi but has limited management capabilities and doesn’t give users the advanced features such as high availability, live migration, power management, automatic load balancing, etc. Our support stance on ESXi is the same as our other solutions. We have Production Support (24×7), Basic Support (normal business hours) and Per-Incident Support.

Q. What can you tell me about the adoption rate of ESXi since its release?

A. I don’t have specific numbers in front of me, but since ESXi was released in 2007 we had seen a fairly gradual uptick in adoption…until vSphere 4 was released. I think that was really the tipping point for mainstream adoption. An interesting thing to note is that a leading indicator of adoption is the number of downloads each product gets. We’ve seen a reversal of ESX downloads to where they only account for 20% of the overall downloads now whereas ESXi is 80%. That’s really great validation for the strategic direction we have chosen.

Q. Is there a feature, support, hardware compatibility, scalability, or stability gap between the ESX and ESXi platforms?

A. Interestingly I hear the statement from some people that ESXi “doesn’t have the same functionality” that ESX does. This may have been true at one time, but since ESXi 4.1 came out we’ve really had feature parity. ESXi 4.1 supports Boot from SAN, scripted installations, integrated Active Directory support among other features. You can also expect the scalability that you’ve grown accustomed to with ESX.

Q. ESXi has a significantly smaller code base than ESX. How does this impact the effort and time required to deploy and patch ESXi vs. ESX, and what does the reduced footprint mean from a security standpoint?

A. You are right about the smaller code base, Jason. ESXi is built on less than 100MB of code, whereas ESX is built on over 2GB. That’s a significant savings in space, and it brings with it greater reliability and stability. An additional benefit of less code and independence from a general-purpose operating system is a lower risk of bugs and other security vulnerabilities.

Q. Has ESXi boot from firmware really taken off? Are there any caveats there?

A. Our major OEM partners offer ESXi pre-installed on their servers due to continuing customer demand. These customers are very enthusiastic about the super-convenient delivery model – just rack the server, power it on and ESXi is up and running. These customers also love the fact that they can run ESXi without local storage, which increases the reliability of the server. There really aren’t any caveats other than making sure to use a flash device that’s certified for use with ESXi. These can be obtained from the OEMs. In the future, we intend to provide even more ways to deploy ESXi so that customers can choose what’s best for their environment.

Q. Some customers have raised software compatibility concerns. How is ESXi impacting the partner ecosystem and what efforts are in place to ensure a seamless migration for ESX shops?

A. I work pretty closely with the Eco-Engineering team regarding our partners and software compatibility. Our partners have known about this transition for years now and those that have not already transitioned their tools to be compatible with ESXi are working diligently to complete this in the near future.

Q. Is VMware offering any special incentives for ESXi purchases, upgrades, or migrations?

A. One of the things I was adamant about when I took this role was that we needed a way to help our customers migrate to ESXi with the least amount of disruption. Awareness and education are critical to any successful plan. I worked with our VMware Education team to bring an online course to our customers and, most importantly, to make that course available at no cost to them. It was a bit of an uphill battle in the beginning, since I wanted VMware to pick up not only the cost of creating the course but also the cost of purchasing a number of ESXi eBooks. We are running a promotion now where those who take the course and fill out the survey at the end will get a free eBook while supplies last. We do still have some eBooks left right now, but supplies are running short since it has been a pretty popular promotion. Here is a link to the course so folks can take it while we still have the eBooks:

Q. Once the next vSphere release ships as an ESXi-only platform, how long will customers be supported on ESX?

A. VMware will offer 7 years of support from the general availability of a new major release. That means ESX 4.x will be supported for 7 years from its date of general availability, even after our next major release ships. The 7 years of support is broken down into 5 years of General Support followed by 2 years of Technical Guidance after General Support ends.
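The lifecycle math above (5 years of General Support plus 2 years of Technical Guidance = 7 years total) can be sketched as a small calculation. The GA date used here is a hypothetical placeholder for illustration, not a statement of VMware’s actual dates:

```python
from datetime import date

def support_windows(ga: date):
    """Split the 7-year support window into 5 years of General
    Support plus 2 years of Technical Guidance, both measured
    from the general-availability (GA) date."""
    general_support_end = date(ga.year + 5, ga.month, ga.day)
    technical_guidance_end = date(ga.year + 7, ga.month, ga.day)
    return general_support_end, technical_guidance_end

# Hypothetical GA date, for illustration only
gs_end, tg_end = support_windows(date(2009, 5, 21))
print(gs_end, tg_end)  # 2014-05-21 2016-05-21
```

The Technical Guidance window always starts exactly where General Support ends, so the two end dates are 2 years apart.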

Q. Futures: Is the ESXi the final frontier for the VMware virtualized datacenter/vCloud platform or are there more platform changes coming?

A. I don’t know that I would say anything in technology is the “final frontier”. I know that everything evolves, and ESXi is the platform that brings greater efficiency, reliability and stability to virtualization. My whole philosophy on life is that if we aren’t evolving, we are dying. The best part is that VMware will continue to evolve and be a leader in virtualization and cloud infrastructure. Our company meetings are so cool since we get to see and hear ideas from some of the smartest people on the planet. We hear people say that they have no idea how some of these things are even possible, but when Steve Herrod (our CTO and Senior Vice President of R&D) says it can happen…it just does. It’s fun to be a part of that and watch ideas come to life. Now, as far as a discussion on futures goes, I’ll go into detail on some really cool things coming up since I’m certain all of your readers are under NDA, right? OK, I’m kidding. I think we should get Steve to have that discussion.

Thank you for your time Susan!

Thanks so much for the interview Jason, it was an honor to be asked and I had a lot of fun with this. I hope your readers enjoy it!

You can contact Susan on Twitter at: @susangude


New Diskeeper White Paper: Optimization of VMware Systems

June 28th, 2011

Diskeeper Corporation reached out to me via email last week letting me know that they’ve released a new white paper on optimizing VMs. I’m making the three-page document available for download via the following link:

Best Practice Protocols: Optimization of VMware Systems (416KB)