Archive for April, 2011

Palo Alto VCDX Defense Application Due Date June 6th

April 22nd, 2011

The application due date for the VCDX Defenses in Palo Alto, CA is fast approaching.  Applications are due June 6, 2011, 5:00 PM Pacific Time.

For those interested in an upcoming Palo Alto defense date, plan accordingly to make sure you meet the prerequisite exam requirements.  Details on the application process can be found here:

vmware.com/go/vcdx > VCDX Defense Overview

Product Review: Veeam Backup & Replication v5

April 21st, 2011

Do you like free?  Do you like backup and replication?  Do you like VMware?  If you answered “yes” to any of the three, then you might like this:  I wrote a product review on Veeam Backup & Replication v5 which discusses the following:

Pros and Cons of different approaches to data protection

An in-depth look at Veeam Backup & Replication v5

What Veeam Backup & Replication v5 is missing

Register for your free download:

Product Review: Veeam Backup & Replication v5



Sound familiar? It should.  This isn’t the first time I’ve written about Veeam.  Check out a few of my previous posts about Veeam Backup & Replication:

Veeam Backup & Replication 5.0

Gestalt IT Tech Field Day – Veeam

HDS and VAAI Integration

April 3rd, 2011

On day 1 of Hitachi Data Systems Geek Day 2.0, we met with Michael Heffernan, Global Product Manager – Virtualization.  You might know him as @virtualheff on Twitter.  I was pleased to listen to Heff as he discussed HDS integration with the VMware vSphere vStorage APIs for Array Integration (VAAI for short, most easily pronounced “vee·double-ehh·eye”).  For those who aren’t aware, VMware introduced VAAI with the GA release of vSphere 4.1 on July 13th of last year.  In short, VAAI allows the burden of certain storage-related tasks to be offloaded from the ESX/ESXi hypervisor to the storage array.  Generally speaking, the advantages touted are performance improvement of intrinsic tasks and increased scalability of the storage array.  HDS was one of a few storage vendors who supported VAAI integration on the July launch date, and in February of this year they announced VAAI support with their VSP (see also Hu Yoshida’s writing on the announcement).
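As a rough illustration (not something shown in the HDS session), here’s a minimal pyVmomi sketch that reports the hardware acceleration (VAAI) status vSphere exposes for each SCSI device on a host.  The host name and credentials are placeholders for a lab environment.

```python
# Minimal sketch, assuming pyVmomi and a lab host with a self-signed certificate.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi01.lab.local", user="root", pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        print(host.name)
        for lun in host.config.storageDevice.scsiLun:
            # vStorageSupport reports vStorageSupported, vStorageUnsupported, or vStorageUnknown
            status = getattr(lun, "vStorageSupport", "n/a")
            print(f"  {lun.canonicalName:36s} {status}")
finally:
    Disconnect(si)
```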

Heff started off with some background on virtualization in the datacenter and IDC stats.  Here are a few that he shared with us:

  • Only 12.8% of all physical servers were virtualized in 2009
  • More than half of all workloads (51%) will be virtualized by the end of 2010
  • Roughly two-thirds (69%) by 2013
  • VM densities continue to rise predictably, averaging:
    • 6 VMs per physical server in 2009
    • 8.5 VMs per physical server in 2013

A few timeline infographics were also shown which tell a short story about VMware and HDS.


VMware provides several integration points which storage vendors can take advantage of, VAAI being just one of them.  These integration points are use case specific and standardized by VMware.  As such, integration is developed in parallel by competing vendors and most often the resulting offerings from each look and feel similar.  Great minds in storage and virtualization think alike.


HDS integrates with all three VAAI attach points VMware offers (a quick host-side check of the corresponding settings is sketched after this list):

  1. Hardware Assisted Copy
  2. Hardware Assisted Zeroing
  3. Hardware Assisted Locking
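
Each of these primitives can be enabled or disabled on the host through an advanced setting.  The following pyVmomi sketch (my own illustration, not HDS’s, and assuming the standard ESXi option names) reads those three switches; a value of 1 means the primitive is enabled.

```python
# Minimal sketch, assuming pyVmomi and the standard ESXi advanced option names.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

PRIMITIVES = {
    "DataMover.HardwareAcceleratedMove": "Hardware Assisted Copy",
    "DataMover.HardwareAcceleratedInit": "Hardware Assisted Zeroing",
    "VMFS3.HardwareAcceleratedLocking":  "Hardware Assisted Locking",
}

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi01.lab.local", user="root", pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        opts = host.configManager.advancedOption
        print(host.name)
        for key, label in PRIMITIVES.items():
            value = opts.QueryOptions(key)[0].value  # 1 = enabled, 0 = disabled
            print(f"  {label:28s} {value}")
finally:
    Disconnect(si)
```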

Heff also used this opportunity to mention Hitachi Dynamic Provisioning (HDP), which is essentially HDS thin provisioning plus other lesser-known benefits, but which has no more to do with VAAI than any other storage vendor’s combination of VAAI and thin provisioning.  Others may disagree, but I see no sustainable or realizable real-world benefit from pairing VAAI and thin provisioning at this time; the discussion is rather academic.

HDS went on to show that the VAAI benefits are real.  Tests show an 18% efficiency improvement in the block copy test on a 30GB virtual disk, and an 85% decrease in elapsed time when eagerly writing zeros to a 30GB virtual disk.  The third VAAI benefit, hardware assisted locking, can be a little trickier to prove or may require specific use cases.  Following are examples of VMFS operations that lock metadata and, as a result, issue a SCSI reservation (the behavior hardware assisted locking improves), per VMware KB article 1005009:

  • Creating a VMFS datastore
  • Expanding a VMFS datastore onto additional extents
  • Powering on a virtual machine
  • Acquiring a lock on a file
  • Creating or deleting a file
  • Creating a template
  • Deploying a virtual machine from a template
  • Creating a new virtual machine
  • Migrating a virtual machine with VMotion
  • Growing a file, for example, a Snapshot file or a thin provisioned Virtual Disk

Heff showcased hardware assisted locking results: up to a 36% increase in performance and a 75% reduction in lock conflicts in the power on/linked clone test.



VAAI offloads some of the heavy lifting from the hypervisor to the back end storage array, so it was appropriate for the discussion to ultimately lead to the impact on the array.  This is where I currently feel the bigger benefit is: better scalability, or more mileage out of the array.  HDS is also the second storage vendor I’ve heard say that block LUN size and the number of VMs per LUN are no longer constraints (from a performance standpoint, everything else being equal).  This point always interests me and is frankly a tough pill to swallow.  I wasn’t able to pin Heff down to more specific details, nor have I seen actual numbers, case studies, or endorsements from any storage vendor’s customer environments.  To some degree, I think this design consideration is still going to be use case and environment dependent.  It will also continue to be influenced by other constraints such as replication.  It may become more of a reality when VMware expands VAAI integration beyond the original three features.  HDS did mention that in vSphere 5, VMware is adding two more VAAI features, bringing the total to five, assuming they are released.

HDS offers competitive storage solutions for the VMware use case and it is clear they are totally committed to the virtualization push from both a storage and compute perspective.  You can learn more about these solutions and stay in tune with their evolution at their VMware Solutions site.

Full Disclosure Statement: HDS Geek Day is a sponsored event. Although I receive no direct compensation and take personal leave to attend, all event expenses are paid by the sponsors. No editorial control is exerted over me and I write what I want, if I want, when I want, and how I want.

Iomega StorCenter ix2-200 Network Storage, Cloud Edition

April 2nd, 2011


I recently acquired an Iomega ix2-200 storage appliance, which is perhaps the smallest storage device in EMC’s vast portfolio, save the VMAX thumb drive I’ve heard sparse sightings of.  This is a nifty little device which could prove quite useful in the home, home office, college dorm, or small business.  The ix2 serves as network attached storage (NAS) capable of several protocols, mapping it to many of the most popular applications: NFS, iSCSI, CIFS/SMB, Apple File Sharing, Bluetooth, FTP, TFTP (a new addition in the latest firmware update), rsync, and SNMP, to name several.

A rich and easy to use browser-based interface provides access to the device and storage configuration.  The package includes software which I initially installed on my Windows 7 workstation to get up and running.  This software also integrates nicely with the PC it’s installed on, providing file backup and other features, some of which are new in the -200 version of the appliance and are cloud related.  I later ditched the management software due to an annoying iSCSI configuration bug.  Once the appliance is on the network, the web interface via its TCP/IP host address proved to be more reliable.  My unit shipped with a fairly old version of firmware, which I wasn’t initially aware of because the management interface claimed it was all up to date.  Updating the firmware added some features and sped up iSCSI LUN creation time immensely.

What’s included:

  • Iomega® StorCenter ix2-200 Network Storage
  • 1 USB port on the front, 2 USB ports in the rear (for external drives, printers, and UPS connectivity)
  • 1 Gb Ethernet port in the rear
  • Ethernet Cable
  • Power Supply
  • Printed Quick Install Guide & other light documentation
  • Software CD
  • Service & Support: Three year limited warranty with product registration within 90 days of purchase.

Technical Specifications:

  • Desktop, compact form factor
    • Width: 3.7 in (94mm)
    • Length: 8.0 in (203mm)
    • Height: 5.6 in (141mm)
    • Weight: 5 lbs (2.27 kg)
  • CPU at 1GHz with 256MB RAM
  • 2 x 3.5″ Easy-Swap SATA-II Hard Disk Drives
  • RAID 1, JBOD
  • 1 x RJ45 10/100/1000Mbps (GbE) Ethernet port. LAN standards: IEEE 802.3, IEEE 802.3u
  • 3 x USB 2.0 ports (to connect external HDD, printers, UPS, Bluetooth dongle)
  • Client computers for file system access—Windows PC, Mac OS, Linux
  • AC Voltage 100-240 VAC
  • Power consumption – 5 Watts (min) – 19 Watts (max)
  • Acoustic noise – 28 dB maximum

Application Features:  The ix2-200 has an impressive set, most of which I don’t use or probably never will.

  • Content sharing
  • Torrent download manager
  • Photo slide show
  • Remote access
  • Active Directory support
  • USB printer sharing
  • Facebook, Flickr, and YouTube integration
  • Security camera integration
  • Several backup options, including cloud integrated

UI Candy:  The management interface consists of five main tabs: Home, Dashboard, Users, Shared Storage, and Settings.


The ix2 ships with 2x 1TB SATA-II drives.  RAID 1 (mirror) with automatic RAID rebuild is the default; RAID 0 (stripe without parity) and JBOD modes are also available.


Temperature and fan status are also reported.  My unit seems hot; I need to check the fan showing 0 RPM.


Believe it or not, there is Jumbo Frames support at either 4000 or 9000 MTU.

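To actually benefit from jumbo frames, the vSphere side has to match.  Here’s a minimal pyVmomi sketch (my own illustration; the vSwitch name is a placeholder) that raises a standard vSwitch’s MTU to 9000; the vmkernel port carrying the storage traffic needs a matching MTU as well.

```python
# Minimal sketch, assuming pyVmomi; "vSwitch1" stands in for the switch facing the ix2.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi01.lab.local", user="root", pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    host = view.view[0]
    net_sys = host.configManager.networkSystem
    for vswitch in net_sys.networkInfo.vswitch:
        if vswitch.name == "vSwitch1":
            spec = vswitch.spec
            spec.mtu = 9000                 # enable jumbo frames on the vSwitch
            net_sys.UpdateVirtualSwitch(vswitchName=vswitch.name, spec=spec)
            print(f"{vswitch.name} MTU set to {spec.mtu}")
finally:
    Disconnect(si)
```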

Speaking of jumbo frames, what about performance?  I was pleasantly surprised to find the ix2-200 officially supported by VMware vSphere for both iSCSI and NAS.  I’m in it for the vSphere use case, so I benchmarked NFS and iSCSI in a way that is consistent with my previous storage performance tests and that can be compared against results in the VMware community (take a look here and here for those comparisons).  With two spindles I wasn’t expecting grand results, but I was curious nonetheless, and I also wanted to share and compare with some co-workers who tested their home storage this past week.  Performance results were at times inconsistent during multiple runs of the same IO test.  In addition, NFS performance decreased after applying the latest firmware update.

iSCSI

ix2 iSCSI feels like a no-frills implementation.  iSCSI LUN security is user and Mutual CHAP based and seems particularly weak.  Individual LUNs can only be secured on a per-user basis.  The user-based security isn’t supported by vSphere, and the CHAP implementation doesn’t seem to work at all: my ESXi host was able to read from and write to an ix2 LUN without having the required CHAP secret.  In summary, the only viable ESXi configuration is to connect the host or hosts to an unsecured iSCSI LUN or set of LUNs on the ix2.  Risks here include a lack of data security as well as data integrity, since any host on the network with an iSCSI initiator can read and write the LUN.  As far as I can tell, there is no thin or virtual provisioning at the ix2 layer when creating iSCSI block LUNs.  This is merely an observation; I wasn’t expecting support for thin provisioning, dedupe, or compression.

NFS

NFS is more secure on the ix2 in that volume access can be restricted to the single IP address of the vSphere host.  Volumes are also secured individually, which provides granularity.  It’s also flexible enough to support subnet-based access.  These are security features commonly found in enterprise NFS storage.  Similar to iSCSI above, NFS also supports user-based access, which again doesn’t provide much value in the vSphere use case.
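
For reference, mounting an ix2 NFS folder as a vSphere datastore can be scripted as well.  The sketch below uses pyVmomi (my own illustration; the export path, IP addresses, and datastore name are placeholders).

```python
# Minimal sketch, assuming pyVmomi; all names and addresses below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi01.lab.local", user="root", pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    host = view.view[0]
    spec = vim.host.NasVolume.Specification(
        remoteHost="192.168.1.50",     # ix2 IP address
        remotePath="/nfs/vsphere",     # folder exported by the ix2
        localPath="ix2-nfs",           # datastore name as seen by vSphere
        accessMode="readWrite",
    )
    ds = host.configManager.datastoreSystem.CreateNasDatastore(spec)
    print(f"Mounted {ds.name}")
finally:
    Disconnect(si)
```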

I’m not going to go into much detail about the performance results.  I didn’t have much in the way of expectations, and I think the results speak for themselves.  iSCSI performed marginally better in the RealLife test.  However, I’m not convinced the security trade-off makes iSCSI a clear winner.  Coupled with NFS’s advantage in the Max Throughput test, I’m more in favor of NFS overall with the ix2-200.


Supporting performance data was collected from Iometer.


Reliability of the ix2-200 is in the back of my mind.  I’ve heard one report of frequent failures and loss of RAID/data with its bigger brother, the Iomega ix4.  Time will tell.  With that, I won’t be placing any important data on the ix2.  As it is, I blew away the factory-default RAID 1 configuration in favor of RAID 0 for double the density, spindle count, and performance.  My intent is to use the ix2 as cheap lab storage for vSphere and potentially for backup during the summer months.

For more on the ix2, take a look at a nice writeup Stephen Foskett produced quite a while back.