Archive for July, 2010

vSphere 4.1: Multicore Virtual CPUs

July 25th, 2010

With the release of vSphere 4.1, VMware has introduced Multicore Virtual CPU technology to its bare metal flagship hypervisor.  This is an interesting feature which already exists in current versions of VMware Workstation.  VMware has consistently baked new features into its Type 2 hypervisor products, such as Workstation, Player, and Fusion, more or less as a functionality/stability test before releasing the same features in ESX(i).  VMware highlights this new feature as follows:

User-configurable Number of Virtual CPUs per Virtual Socket: You can configure virtual machines to have multiple virtual CPUs reside in a single virtual socket, with each virtual CPU appearing to the guest operating system as a single core. Previously, virtual machines were restricted to having only one virtual CPU per virtual socket. See the vSphere Virtual Machine Administration Guide.

VMware multicore virtual CPU support lets you control the number of cores per virtual CPU in a virtual machine. This capability lets operating systems with socket restrictions use more of the host CPU’s cores, which increases overall performance.

Using multicore virtual CPUs can be useful when you run operating systems or applications that can take advantage of only a limited number of CPU sockets. Previously, each virtual CPU was, by default, assigned to a single-core socket, so that the virtual machine would have as many sockets as virtual CPUs.

You can configure how the virtual CPUs are assigned in terms of sockets and cores. For example, you can configure a virtual machine with four virtual CPUs in the following ways:

  • Four sockets with one core per socket (legacy, this is how we’ve always done it prior to vSphere 4.1)
  • Two sockets with two cores per socket (new in vSphere 4.1)
  • One socket with four cores per socket (new in vSphere 4.1)

VMware defines a CPU as:

The portion of a computer system that carries out the instructions of a computer program and is the primary element carrying out the computer’s functions.

VMware defines a Core as:

A logical execution unit containing an L1 cache and functional units needed to execute programs. Cores can independently execute programs or threads.

VMware defines a Socket as:

A physical connector on a computer motherboard that accepts a single physical chip. Many motherboards can have multiple sockets that can in turn accept multicore chips.

One of the benefits multicore brought to physical computing was increased hardware density.  VMs do not share this advantage, as they are virtual to begin with and have no rack footprint to speak of.

VMware’s benefit statement for this feature is a legitimate one and is the primary use case.  It’s the same benefit which applied when multicore (as well as hyperthreading, to some extent) technology was introduced to physical servers.  What VMware doesn’t advertise is that the limitation being discussed usually revolves around software licensing – a per-socket license model, to be precise, which is what many software vendors still use.  For example, if I own a piece of software with a single socket license, traditionally I was only able to use this software inside a single vCPU VM.  With Multicore Virtual CPUs, virtual machines have now caught up with their physical hardware counterparts in that a single socket VM can be created which has 4 cores per socket.  Using the working example, the advantage I have now is that I can run my application inside a VM which still has 1 socket, but 4 cores, for a net result of 4 vCPUs instead of just 1 vCPU.  I didn’t have to pay my software vendor additional money for the added CPU power.  To show how this translates into dollars and cents, let’s assume a per-socket license cost of $1,000 for my application and then extrapolate those numbers using VMware’s example above of how CPUs can be assigned in terms of sockets and cores:

  • Four sockets with one core per socket = $1,000 x 4 sockets = $4,000 net license cost, 4 CPUs
  • Two sockets with two cores per socket = $1,000 x 2 sockets = $2,000 net license cost, 4 CPUs
  • One socket with four cores per socket = $1,000 x 1 socket = $1,000 net license cost, 4 CPUs

    Now, all of this said, the responsibility is on the end user to be in license compliance with his or her software vendors.  Just because you can do this doesn’t mean you’re legally entitled to do so.  Be sure to read your EULA and check with your software vendor or reseller before implementing VMware Multicore Virtual CPUs.

    Implementation of Multicore Virtual CPUs was quite straightforward in VMware Workstation.  When creating a new VM or editing an existing VM’s settings, the following interface is presented for configuring vCPUs and cores per vCPU.  In this example, a 2xDC (Dual Core) configuration is being applied, resulting in a total of 4 CPU cores which will serve the VM’s operating system, applications, and users.  Note that here, the term “processors” on the first line translates to “sockets”:

    [Screenshot: VMware Workstation processor and cores per processor settings]

    Making the same 2xDC CPU configuration in vSphere 4.1 isn’t difficult, but it is done differently.  Configuring total vCPUs and cores per vCPU is achieved by applying settings in two different areas of the VM configuration.  The combination of the two settings ultimately determines the socket and core topology presented to the guest.

    First of all, the total number of cores (processors) is selected in the VM’s CPU configuration.  This hasn’t changed and should be familiar to you.  The number of cores (processors) available for selection here will be 1 through 4, or 1 through 8 if you have Enterprise Plus licensing.  I’ve purposely included the notation of virtual machine hardware version 7, which is required.  An inconsistency here compared to VMware Workstation is that the term “virtual processors” translates to “cores”, not “sockets”:

    [Screenshot: vSphere 4.1 VM CPU settings showing the number of virtual processors (hardware version 7)]

    Configuring the number of cores per processor is where VMware has deviated from the VMware Workstation implementation.  In ESX and ESXi, this configuration is made as an advanced setting in the .vmx file.  Edit the VM settings, navigate to the Options tab, and choose General in the Advanced options list.  Click the Configuration Parameters button, which allows you to edit the .vmx file on a row by row basis.  Click the Add Row button and add the line item cpuid.coresPerSocket.  For the value, you’re going to supply the number of cores per processor, which is generally going to be 2, 4, or 8 (Enterprise Plus licensing required for 8).  Note that using a value of 1 here would serve no practical purpose because it would configure a single core vCPU, which is what we’ve had all along up until this point:

    [Screenshot: Configuration Parameters dialog with the cpuid.coresPerSocket row added]
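
    To make the math concrete, here is a minimal sketch (plain Python, purely illustrative) of how the two settings combine.  The count selected on the CPU page is the total number of cores, and cpuid.coresPerSocket groups those cores into virtual sockets, so the 2xDC example above works out to 2 sockets with 2 cores each:

        # Illustrative only: how the two vSphere 4.1 settings combine into a guest-visible topology.
        total_vcpus = 4        # "Number of virtual processors" selected on the VM's CPU page (total cores)
        cores_per_socket = 2   # cpuid.coresPerSocket value added under Configuration Parameters

        sockets = total_vcpus // cores_per_socket
        print("Guest sees %d virtual socket(s) x %d core(s) per socket = %d CPUs total"
              % (sockets, cores_per_socket, total_vcpus))
        # Output: Guest sees 2 virtual socket(s) x 2 core(s) per socket = 4 CPUs total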

    As a supplement, here are the requirements for implementing Multicore Virtual CPUs:

    • VMware vSphere 4.1 (vCenter 4.1, ESX 4.1 or ESXi 4.1).
    • Virtual Machine hardware version 7 is required.
    • The VM must be powered off to configure Multicore Virtual CPUs.
    • The total number of vCPUs for the VM divided by the number of cores per socket must be a positive integer.
    • The cpuid.coresPerSocket value must be a power of 2. The documentation explicitly states a value of 2, 4, or 8 is required, but 1 works as well, although as stated before it would serve no practical purpose (a short sketch of these two numeric checks follows this list).
      • 2^0=1 (anything to the power of 0 always equals 1)
      • 2^1=2 (anything to the power of 1 always equals itself)
      • 2^2=4
      • 2^3=8
    • When you configure multicore virtual CPUs for a virtual machine, CPU hot Add/Remove is disabled (previously called CPU hot plug).
    • You must be in compliance with the requirements of the operating system EULA.
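
    For those who like to sanity check a configuration before touching the .vmx, here is a small sketch (plain Python, illustrative only) of the two numeric rules above, run against the three 4 vCPU layouts discussed earlier in the post:

        def valid_multicore_config(total_vcpus, cores_per_socket):
            """Illustrative check of the two numeric requirements listed above."""
            if total_vcpus < 1 or cores_per_socket < 1:
                return False
            # cpuid.coresPerSocket must be a power of 2 (1, 2, 4, 8, ...)
            if (cores_per_socket & (cores_per_socket - 1)) != 0:
                return False
            # Total vCPUs divided by cores per socket must be a positive whole number of sockets
            return total_vcpus % cores_per_socket == 0

        # The three ways a 4 vCPU VM can be laid out (sockets x cores per socket):
        for cores in (1, 2, 4):
            print("%d socket(s) x %d core(s): %s" % (4 // cores, cores, valid_multicore_config(4, cores)))

        print(valid_multicore_config(4, 3))  # False: 3 is not a power of 2 and 4/3 is not a whole number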

    This feature rocks and I think customers have been waiting a long time for it.  Duncan mentioned it quite some time ago but obviously it was unsupported at that time.  I am a little puzzled by the implementation mechanism, mainly the configuration of the .vmx to specify cores per CPU.  I suppose it lends itself to scriptability and thus automation, but in that sense, we lack the flexibility to configure cores per CPU with guest customization when deploying VMs from a template.  Essentially this means cores per CPU needs to be hard coded in each of my templates, or cores per CPU needs to be manually tuned after deploying each VM from a template.  When I take a step back, I guess that’s no different than any other virtual hardware configuration stored in templates, but with the cores per CPU setting being buried in the .vmx as an advanced setting, it’s that much more of a manual/administrative burden to configure cores per CPU for each VM deployed than it is to simply change the number of CPUs or amount of RAM.  It would be nice if the guest customization process offered a quick way to configure cores per processor.
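
    On the automation point, the setting can at least be applied programmatically through the vSphere API rather than by hand in the vSphere Client.  The rough sketch below uses the open source pyVmomi Python bindings as one possible tooling choice; the vCenter address, credentials, and VM name are hypothetical placeholders, and the VM must be powered off before the reconfigure:

        # Rough sketch (pyVmomi): set 4 vCPUs grouped as 2 cores per socket on an existing VM.
        # All connection details and the VM name below are placeholders, not from the original post.
        import ssl
        from pyVim.connect import SmartConnect, Disconnect
        from pyVmomi import vim

        si = SmartConnect(host="vcenter.example.com", user="administrator", pwd="password",
                          sslContext=ssl._create_unverified_context())
        content = si.RetrieveContent()

        # Locate the freshly deployed VM by name.
        view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
        vm = next(v for v in view.view if v.name == "newvm01")

        # 4 vCPUs total; cpuid.coresPerSocket = 2 yields 2 virtual sockets with 2 cores each.
        spec = vim.vm.ConfigSpec(
            numCPUs=4,
            extraConfig=[vim.option.OptionValue(key="cpuid.coresPerSocket", value="2")],
        )
        vm.ReconfigVM_Task(spec)   # VM must be powered off for this change

        Disconnect(si)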

    GoGo Inflight Internet

    July 24th, 2010

    During a recent trip, I decided to use GoGo Inflight Internet aboard a Delta Airlines flight.  I’ve only used the service once before, and that is merely because the service typically isn’t offered on the flights I am on.  Both the reliability and latency of the service far exceeded my expectations.  I used the service for a little over three hours and lost only 77 packets:

    Ping statistics for w.x.y.z:
    Packets: Sent = 8650, Received = 8573, Lost = 77 (0% loss),
    Approximate round trip times in milli-seconds:
    Minimum = 107ms, Maximum = 3220ms, Average = 205ms

    I was easily able to upgrade a vCenter Server and build an ESXi host to vSphere 4.1, as well as process a bunch of email I had fallen behind on.  The cost was $9.95 and, given my satisfaction with the service and what I was able to accomplish, it was well worth the price.  I wish more flights offered this service.

    Two Thumbs Up! 8-)

    VMworld 2010: An ROI Message for Your Manager

    July 22nd, 2010

    Are you stuck trying to figure out how to convince management to send you to VMworld?  A justification template has been made available on the VMworld website.  Download.  Fill in the blanks.  Submit to management.

    Direct link to the letter (MS Word format)

    Gestalt IT Tech Field Day – NEC

    July 16th, 2010

    It’s the last presentation of the day and the last presentation overall for Gestalt IT Tech Field Day Seattle.  We’ve made a short journey from the Microsoft store in Redmond, WA to NEC in Bellevue.  Anyone who knows the NEC brand is aware of their diverse portfolio of products and perhaps their services.  Today’s discussion, however, will focus on Storage Solutions.

    First a bit of background information on NEC as a corporation:

    • Founded in 1899
    • 142,000 employees
    • 50,000 patents worldwide

    Storage. NEC opened up with some of today’s storage challenges faced by many.  Enter HYDRAstor, a two-tier grid architecture comprised of the following key building blocks:

    • Accelerator nodes – Deliver linear performance scalability for backup and archive.
    • Storage nodes – Deliver non-disruptive capacity scalability from terabytes to petabytes.
    • Standard configurations are delivered with a ratio of 1 Accelerator node for every 2 Storage nodes – for example:
      • HS8-2004R = 2AN + 4SN = 24TB-48TB Raw
      • HS8-2010R = 5AN + 10SN = 120TB Raw
      • HS8-2020R = 10AN + 20SN = 240TB Raw
      • HS8-2110R = 55AN + 110SN = 1.3PB Raw

    HYDRAstor delivers the following industry standard benefits:

    • Scalability – Non disruptive independent linear scaling of capacity and performance; concurrent multiple generations of compute and storage technology.
    • Self evolving – Automated load balancing and incorporation of new technology reduces application downtime and data outages.
    • Cost efficiency – Reduce storage consumption by 95% or more with superior data deduplication. Ever-“green” evolution of energy-saving features.
    • Resiliency – Greater protection than RAID with less overhead.
    • Manageability – No data migration, zero data provisioning, self-managing storage; single platform for multiple data types, formats and quality of service needs.

    A few other key selling points about HYDRAstor:

    • Global Data Deduplication of backup and archive data is achieved during ingest by combining DataRedux with grid storage architecture.  Dedupe of 20% to 50% across all datasets.
    • Distributed Resilient Data (DRD) technology drives data protection beyond what RAID protection offers with less overhead.  At its native configuration, user data is protected against up to three simultaneous disk or node failures.  This equates to 150% greater resiliency than RAID6 and 300% greater resiliency than RAID5 with less storage overhead and no performance degradation during rebuild and leveling processes.
    • Turnkey delivery.  According to the brochure, HYDRAstor can be installed and performing backup or archive in less than 45 minutes.  I’m not sure what the point of this proclamation is, other than it will most likely be purchased in a pre-racked, cabled, and hopefully configured state.  When I think about deploying enterprise storage, it’s not something I contemplate performing end to end over my lunch hour.

    I know some of the other delegates were really excited about HYDRAstor and its enabling technologies.  Sorry NEC, I wasn’t feeling it.  HYDRAstor’s approach seems to consume more rack space than the competition, more cabling, and based on today’s lab walkthru, more cooling.


    Note : Tech Field Day is a sponsored event. Although I receive no direct compensation and take personal leave to attend, all event expenses are paid by the sponsors through Gestalt IT Media LLC. No editorial control is exerted over me and I write what I want, if I want, when I want, and how I want.

    Gestalt IT Tech Field Day – Compellent

    July 16th, 2010

    Gestalt IT Tech Field Day 2 begins with Compellent, a storage vendor out of Eden Prairie, MN.  Compellent has been around for about eight years and, like other well known multiprotocol SAN vendors, offers spindles of FC, SATA, SAS, and SSD via FC block, iSCSI, NFS, and CIFS.

    Compellent’s hardware approach is a modular one.  Many of the components, such as drives and interfaces (Ethernet, FC, etc.), are easily replaceable and hot swappable, eliminating the need to “rip and replace” the entire frame of hardware and providing the ability to upgrade components without taking down the array.

    In April of 2010, Compellent introduced the new zNAS solution:

    Compellent introduces the new zNAS solution, which consolidates file and block storage on a single, intelligent platform. The latest unified storage offering from Compellent integrates next-generation ZFS software, high-performance hardware and Fluid Data architecture to actively manage and move data in a virtual pool of storage, regardless of the size and type of block, file or drive. Enterprises can simplify management, intelligently scale capacity, improve performance for critical applications and reduce complexity and costs.

    Fluid Data Storage is Compellent’s granular approach to data management

    • Virtualization
    • Intelligence
    • Automation
    • Utilization

    Volume Creation

    Volume Recovery

    Volume Management

    Integration 

    • VMware
      • Leveraging many of the features mentioned above
      • HCL compatibility, although I don’t see ESXi in the list, which would be a major concern for VMware customers given that ESX is being phased out.  Compellent responded that they believe their arrays are compatible with ESXi and will look into updating their VMware support page if that is the case.  VMware’s HCL also shows Compellent storage is not currently certified for ESXi.  Significant correction to the earlier statement: VMware’s HCL for storage differs from its HCL for host hardware in that the host hardware HCL lists explicit compatibility for both ESX and ESXi, whereas the storage HCL explicitly lists ESX compatibility, which implies equivalent ESXi compatibility.  Compellent arrays, as of this writing, are both ESX4 and ESXi4 compatible.
    • Microsoft
      • PowerShell (for automation and consistency of storage management)
      • Hyper-V

    Compellent performed a live demo of their Replay (Snapshot) feature with a LUN presented to a Windows host.  It worked slick and as expected. Compellent’s Windows based storage management UI has a fresh, no-nonsense, 21st century feel to it which I can appreciate.

    We closed discussion answering the question “Why Compellent?”  Top Reasons:

    1. Efficiency
    2. Long term ROI, cost savings through the upgrade model
    3. Ease of use

    Follow them on Twitter at @Compellent.

    Thank you Compellent for the presentation and I’m sure I’ll see you back in Minnesota!

    Note : Tech Field Day is a sponsored event. Although I receive no direct compensation and take personal leave to attend, all event expenses are paid by the sponsors through Gestalt IT Media LLC. No editorial control is exerted over me and I write what I want, if I want, when I want, and how I want.

    Gestalt IT Tech Field Day – F5

    July 15th, 2010

     


     

    We’re on to our 3rd and final presentation here at Gestalt IT Tech Field Day.  After a short road trip into beautiful downtown Seattle, we’ve arrived at F5.  At 1,800 employees strong, F5 was named one of the best places to work in the Seattle area.  From a high level, F5’s business goal is to optimize the end user experience.

    Today, F5 showed us simulated long distance vMotion.  F5 enables this with mid-range BIG-IP appliances stretching a Layer 2 network between two geographically dispersed datacenters, along with providing WAN Optimization to access IP based storage between datacenters.  In addition, the hardware appliances expose APIs which VMware Orchestrator uses to assist F5 in directing traffic between sites.  F5 has tested at up to 300ms round trip latency on a 10Mbps link.  This is what it looks like:

    [Screenshot: F5 long distance vMotion demo]

    Another thing I learned today is that just a few months ago, in March 2010, F5 released the BIG-IP LTM VE.  This is a virtual appliance in the BIG-IP family of products.  Today that appliance is supported on only one virtualization platform, and it should come as no surprise that the hypervisor of choice is VMware.

    BIG-IP® Local Traffic Manager™ (LTM) Virtual Edition (VE) takes your Application Delivery Network virtual. You get the agility you need to create a mobile, scalable, and adaptable infrastructure for virtualized applications. And like physical BIG‑IP devices, BIG-IP LTM VE is a full proxy between users and application servers, providing a layer of abstraction that secures, optimizes, and load balances application traffic.

    Speaking of F5 and VMware, Why would you want F5 for VMware vSphere?

    • F5 Management Plug-In for VMware vSphere
    The F5 Management Plug-in simplifies common BIG-IP LTM administrative tasks in a vSphere environment, reduces the risk of error and enables basic automation.

    • Integration with vCenter Server
    Respond automatically to changes in the infrastructure with seamless integration between VMware and F5.

    • Increased VM density by up to 60 percent
    Free up server resources by offloading CPU-intensive operations to achieve maximum utilization and consolidation.

    • Long-distance vMotion
    Enable fully automated long-distance VMotion and Storage VMotion events between data centers without downtime or user disruption. 

    • Acceleration of VMotion and Storage VMotion
    Accelerate VMotion events over the WAN up to 10x by compressing, deduplicating, and optimizing traffic.

    Other virtualization considerations with F5:
    • File Virtualization
    • Infrastructure Virtualization
    • Server Virtualization

     F5 and VMware Solution Guide

    What about F5 and Cloud Benefits?

    • Reduce Complexity
    With a reusable framework of services that can be leveraged across static, dedicated servers as well as across multi-site cloud deployments, you immediately gain value that grows as your applications grow.

    • Increased Control
    By integrating traffic management, dynamic provisioning, access control, and management, you can more readily outsource the processing of applications and data without giving up ownership and control.

    • Context Awareness
    Having a complete picture of the user, network, application, and services gives you a unique ability to use context to determine how applications and data are delivered.

    • Reduced Switching Costs
    With a centrally controlled method of delivering applications and data, you can move resources anywhere at a moment’s notice without worrying about the capabilities of host locations.

    This was a great session where I think I picked up the most information so far.  F5 is one of those technologies I see a lot in the datacenter but I’ve not worked intimately with.  I like their value-added integration with virtualization and adoption of a cloud vision.

    Note : Tech Field Day is a sponsored event. Although I receive no direct compensation and take personal leave to attend, all event expenses are paid by the sponsors through Gestalt IT Media LLC. No editorial control is exerted over me and I write what I want, if I want, when I want, and how I want.

    Gestalt IT Tech Field Day – Nimble Storage

    July 15th, 2010

    Next up at Gestalt IT Tech Field Day is Nimble Storage, who comes out of stealth mode and officially launches today.  Nimble Storage provides a unique iSCSI storage platform, eliminating traditional backup windows by using efficient snapshot technology coupled with high performance flash drives.  A handful of use cases have already been identified for both virtualized and bare metal OS and application platforms.  I’m baffled as to how much competitive room there is in the storage realm, particularly with giants like NetApp, EMC, Hitachi, and others.  I believe this is a compliment to each of the players, as it takes incredibly bright minds and innovation to stake and maintain a claim.

    The secret sauce is in Nimble’s CASL (pronounced “castle”; Cache-Accelerated Sequential Layout) Architecture, which can be thought of as a reincarnation of VMware co-founder Mendel Rosenblum’s Log-Structured File System.

    • Inline Compression
    • Large Adaptive Flash Cache
    • High-Capacity Disk Storage
    • Integrated Backup

    Resulting advantages provided are:

    • Inline compression (2:1 – 4:1 ratio)
    • High performance
    • Low cost SATA disk stores both primary data and 90 days of snapshot retention
    • WAN-efficient offsite replication for cost-effective DR
    • Storage and Backup Optimized for VMware/Microsoft environments
    • Benefits for Sharepoint, SQL, and Exchange as well

    From the Nimble Storage website:

    Storing, accessing, and protecting your data shouldn’t be so complicated and expensive. Nimble’s breakthrough CASL™ architecture combines flash memory with high-capacity disk to converge storage, backup, and disaster recovery for the first time. The bottom line: High-performance iSCSI storage, instant backups and restores, and full-featured disaster recovery — all in one cost-effective, easy-to-manage solution.

    Benefits for VMware Deployments

    • Dramatic VM Consolidation and Cost Reduction
    Groundbreaking CASL architecture includes innovations that enable dramatic consolidation of Virtual Servers and desktops. The hybrid flash and low-cost HDD-based architecture deliver very high random performance for demanding workloads at very low cost. Built-in capacity optimization and block sharing capabilities provide large capacity savings for both flash and disk. The net result is a single array that can easily serve the performance and capacity requirements for hundreds of high performance virtual servers, dramatically reducing cost, rackspace, power, and management expense. Further consolidation and cost savings come from the built-in capacity optimized backup capability, which eliminates dedicated disk backup devices, while enabling 90 days of efficient backup.

    • Backup and Restore VMs Instantly
    Nimble arrays enable instant Hypervisor consistent backup and restore of datastores and VMs, while eliminating backup windows. Nimble Protection Manager integrates with vCenter APIs to simplify management of Hypervisor-consistent backups, replicas and restores for VMware environments by leveraging Nimble’s instant, capacity optimized array-based snapshots. This converged solution enables dramatically better RPOs and RTOs compared with traditional solutions.

    • Automated, Fast Offsite Disaster Recovery
    WAN-efficient replication and fast failover enable quick, cost effective disaster recovery. Combined with instant backup capabilities, this enables rapid restore and very granular recovery points in the event of a site disaster. The entire failover process can be automated via management tools such as VMware Site Recovery Manager (SRM) which leverages a Nimble SRA to control the storage level failover capabilities.

    • Simplified Virtual Infrastructure Management
    Using predefined ESX performance and data protection policies, storage for new datastores can be provisioned and protected in just three steps. The Nimble Protection Manager integrates with vCenter APIs to simplify management of Hypervisor-consistent backups, replicas and restores for VMware environments, by leveraging Nimble’s instant, capacity optimized array based snapshots. A vCenter plugin simplifies and accelerates the task of cloning datastore or VM templates, by leveraging Nimble’s instant, high space efficient zero copy clones.

    Two 3U capacity offerings are available, both of which are served by an identical configuration of Active/Passive controllers, a large flash layer, multicore Intel Xeon processors, and 2x quad GbE NICs (10GbE ready and available soon):

    1. CS220: 9TB primary + 108TB backup
    2. CS240: 18TB primary + 216TB backup


    Follow them on Twitter at @NimbleStorage.

    Introduction to Nimble Storage at Tech Field Day Seattle from Stephen Foskett on Vimeo.

    Note : Tech Field Day is a sponsored event. Although I receive no direct compensation and take personal leave to attend, all event expenses are paid by the sponsors through Gestalt IT Media LLC. No editorial control is exerted over me and I write what I want, if I want, when I want, and how I want.