Posts Tagged ‘Hardware’

USB Thumb Drive Not Recognized – 3 Fast Beeps

July 27th, 2011

No Earth-shattering material tonight.  In fact, this tip isn’t even VMware/virtualization related, other than the fact that the problem came up while working in the lab.  It has been several months since I last wrote an article under the “General” category, which contains no VMware/virtualization content.

Anyway, I was working in the lab when…

My Windows 7 OS would no longer recognize my USB thumb drive.  Inserting the thumb drive into any of the USB ports produced three quick USB-style beeps.  Having cut my x86 teeth in the days when A+ certification amounted to quite a bit, the three beeps told me something wasn’t right from a hardware standpoint, though the USB-style audio hinted at a driver problem.  I was mildly concerned because I sometimes carry data around on this drive which hasn’t been backed up or cannot be quickly reproduced.  A warm reboot of the OS produced no joy.  Neither did a power off.

Back in Windows Device Manager, the device was shown as disabled with an option to re-enable.  This did not work, however.

This being a USB device which can easily be reinstalled, the next step was to uninstall the driver by right clicking on the device and choosing Uninstall (notice the “down arrow” depicted on the device, indicating it is disabled).

After the uninstall of the driver, I unplugged the USB thumb drive, waited a few seconds, plugged it back in, and immediately heard the friendly USB sound I had been wanting all along.  Windows 7 went through a device discovery process, installed drivers, and I was on my way.

Disk.SchedNumReqOutstanding and Queue Depth

June 16th, 2011

There is a VMware storage whitepaper available titled Scalable Storage Performance.  It is an oldie but a goodie.  In fact, next to VMware’s Configuration Maximums document, it is one of my favorites and I’ve referenced it often.  I like it because it efficiently and specifically covers block storage LUN queue depth and SCSI reservations.  It was written pre-VAAI, but I feel the concepts are still quite relevant in the block storage world.

One of the interrelated components of queue depth on the VMware side is the advanced VMkernel parameter Disk.SchedNumReqOutstanding.  This setting determines the maximum number of active storage commands (IO) allowed at any given time at the VMkernel.  In essence, this is queue depth at the hypervisor layer.  Queue depth can be configured at various points in the path of an IO: at the VMkernel as I already mentioned, at the HBA hardware layer, at the kernel module (driver) layer, and at the guest OS layer.
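
To make the layering concrete, here is a minimal toy sketch of my own (not VMware’s scheduler; the limit values are illustrative examples rather than authoritative defaults).  The idea is simply that the IO actually in flight to a LUN is bounded by the tightest limit along the path, and anything beyond that waits in a queue one layer up.

```python
# Toy model of per-LUN queue depth layering. The limit values below are
# illustrative examples only, not authoritative defaults.
path_limits = {
    "VMkernel Disk.SchedNumReqOutstanding (per host, per LUN)": 32,
    "HBA driver LUN queue depth": 64,
}

# Total outstanding IO the VMs on this LUN are trying to keep in flight.
demanded = 96

# With more than one VM active on the LUN, the tightest limit in the path
# gates how much of that demand is actually issued to the device.
bottleneck = min(path_limits, key=path_limits.get)
in_flight = min(demanded, path_limits[bottleneck])

print(f"bottleneck      : {bottleneck} = {path_limits[bottleneck]}")
print(f"IO in flight    : {in_flight}")             # 32
print(f"IO queued above : {demanded - in_flight}")  # 64
```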

Getting back to Disk.SchedNumReqOutstanding, I’ve always lived by the definition I felt was most clear in the Scalable Storage Performance whitepaper.  Disk.SchedNumReqOutstanding is the maximum number of active commands (IO) per LUN.  Clustered hosts don’t collaborate on this value which implies this queue depth is per host.  In other words, each host has its own independent queue depth, again, per LUN.  How does Disk.SchedNumReqOutstanding impact multiple VMs living on the same LUN (again, same host)?  The whitepaper states each VM will evenly share the queue depth (assuming each VM has identical shares from a storage standpoint).

When virtual machines share a LUN, the total number of outstanding commands permitted from all virtual machines to that LUN is governed by the Disk.SchedNumReqOutstanding configuration parameter that can be set using VirtualCenter. If the total number of outstanding commands from all virtual machines exceeds this parameter, the excess commands are queued in the ESX kernel.

I was recently challenged by a statement agreeing to all of the above but with one critical exception:  Disk.SchedNumReqOutstanding provides an independent queue depth for each VM on the LUN.  In other words, if Disk.SchedNumReqOutstanding is left at its default value of 32, then VM1 has a queue depth of 32, VM2 has a queue depth of 32, and VM3 has its own independent queue depth of 32.  Stack those three VMs and we arrive at a sum total of 96 outstanding IOs on the LUN.  A few sources were provided to me to support this:

Fibre Channel SAN Configuration Guide:

You can adjust the maximum number of outstanding disk requests with the Disk.SchedNumReqOutstanding parameter in the vSphere Client. When two or more virtual machines are accessing the same LUN, this parameter controls the number of outstanding requests that each virtual machine can issue to the LUN.

VMware KB Article 1268 (Setting the Maximum Outstanding Disk Requests per Virtual Machine):

You can adjust the maximum number of outstanding disk requests with the Disk.SchedNumReqOutstanding parameter. When two or more virtual machines are accessing the same LUN (logical unit number), this parameter controls the number of outstanding requests each virtual machine can issue to the LUN.

The problem with the two statements above is that I feel they are poorly worded and, as a result, misinterpreted.  I understand what each statement is trying to say, but it implies something quite different depending on how a person reads it.  Each statement is correct in that Disk.SchedNumReqOutstanding will gate the amount of active IO possible per LUN and ultimately per VM.  However, the wording implies that the value assigned to Disk.SchedNumReqOutstanding applies individually to each VM, which is not the case.  The reason I’m pointing this out is the number of misinterpretations I’ve subsequently discovered via Google, which I gather are the result of reading one of the sources above.

The scenario can be quickly proven in the lab.  Disk.SchedNumReqOutstanding is configured for the default value of 32 active IOs.  Using resxtop, I see my three VMs cranking out IO with IOMETER.  Each VM is configured in IOMETER to create 32 active IOs.  If the challenge to my interpretation is correct, I should be seeing 96 active IOs being generated to the LUN from the combined activity of the three VMs.

But that’s not what’s happening.  Instead what I see is approximately 32 ACTV (active) IOs on the LUN, with another 67 IOs waiting in queue (by the way, ESXTOP statistic definitions can be found here).  In my opinion, the Scalable Storage Performance whitepaper most accurately and best defines the behavior of the Disk.SchedNumReqOutstanding value.
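
The arithmetic behind the two competing readings is easy to sketch.  This is just my own illustrative math using the numbers from the test above, not output from any VMware tool:

```python
# Two readings of Disk.SchedNumReqOutstanding (DSNRO), using the lab test:
# 3 VMs on one LUN, each driving 32 outstanding IOs, DSNRO left at 32.
vms = 3
per_vm_outstanding = 32   # IOMETER outstanding IOs per VM
dsnro = 32                # Disk.SchedNumReqOutstanding (per host, per LUN)

demanded = vms * per_vm_outstanding          # 96 IOs the VMs try to keep in flight

# Reading 1 (what the whitepaper describes): DSNRO is a shared per-LUN cap.
active_shared = min(demanded, dsnro)         # 32 active at the device
queued_shared = demanded - active_shared     # 64 queued in the VMkernel

# Reading 2 (the "stacking" interpretation): each VM gets its own 32.
active_stacked = demanded                    # 96 active, nothing queued

print(f"shared cap : ACTV={active_shared}, QUED={queued_shared}")
print(f"stacked    : ACTV={active_stacked}, QUED=0")
```

The shared-cap numbers are the ones that line up with the resxtop output.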

Now going back to the possibility of Disk.SchedNumReqOutstanding stacking, LUN utilization could get out of hand rapidly with 10, 15, 20, or 25 VMs per LUN.  We’d quickly exceed the maximum supported value of Disk.SchedNumReqOutstanding, which is 256 (a ceiling shared by all HBAs I’m aware of).  HBA ports themselves typically support only a few thousand outstanding IOs.  Stacking the queue depths for each VM could quickly saturate an HBA, meaning we’d get a lot less mileage out of those ports as well.

While having a queue depth discussion, it’s also worth noting the %USD value is at 100% and LOAD is approximately 3.  The LOAD statistic corroborates the 3:1 ratio of total IO:queue depth and both figures paint the picture of an oversubscribed LUN from an IO standpoint.
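
For reference, the %USD and LOAD figures fall straight out of the published esxtop statistic definitions as I read them (%USD = ACTV / queue depth x 100, and LOAD = (ACTV + QUED) / queue depth):

```python
# esxtop-style math for the observation above (formulas per the esxtop
# statistic definitions; values from the resxtop capture).
actv = 32    # commands active at the device
qued = 67    # commands queued in the VMkernel
dqlen = 32   # effective LUN queue depth (Disk.SchedNumReqOutstanding here)

pct_usd = actv / dqlen * 100    # 100.0 -> the LUN queue is fully used
load = (actv + qued) / dqlen    # ~3.09 -> demand is roughly 3x the queue depth

print(f"%USD = {pct_usd:.0f}%")
print(f"LOAD = {load:.2f}")
```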

In conclusion, I’d like to see VMware modify the wording in their documentation to provide better understanding leaving nothing open to interpretation.

Update 6/23/11:  Duncan Epping at Yellow Bricks responded with a great followup, Disk.SchedNumReqOutstanding the story.

Howdy Partner

May 17th, 2011

I started my IT career working as a contractor in both short and long term engagements at medium to large customer sites.  Since then, and for the past 13+ years, I’ve grown my career in a customer role.  Along the way, I’ve picked up a tremendous amount of experience and expertise across several technologies.  VMware virtualization came onto the scene and I was drawn to specialize in… well, you know the story there. 

At present, I work for a great company and on a daily basis I’m at the helm of the largest vSphere implementation I’ve ever seen and possibly one of the largest in the region.  I’ve networked, made a lot of friends, maybe a few enemies, and I’ve been the recipient of an unmeasurable amount of opportunity, kindness, and generosity available only to customers in the VMware community.  However, from a role and operational aspect, I feel I’ve reached the peak of the mountain and I’ve seen and experienced all of the challenges that this mountain has to offer.  It’s time to try another mountain.

I’m hanging up my customer hat.  On Monday of next week, I begin a new role with Dell Compellent, a VMware Technology Alliance Partner.  I’ll have two titles:  Tactical Marketing Senior Advisor and Virtualization Product Specialist.  Each speaks to a degree of what my various responsibilities will entail.  My VMware experience will be leveraged continuously as I provide SME technical expertise to Storage Architects, Business Partners, and Customers on design, planning, and integration.  In addition, I’ll be involved with consulting, product demos, solution certification, white papers, and reference architectures.  In summary, I’ll be splitting my time between colleagues, customers, and more lab infrastructure than I might know what to do with, and at the same time exercising more of my design muscles.

So what does all this mean and how is it going to change Jason?  Let’s go through the list of things which come to my mind:

  • The VMware Virtualization Evangelist stays, though independent of this news I have been thinking about shortening the title to VMware vEvangelist (thoughts?).  That said, I’ll need to give extra thought to what and how I write.  It is my underlying intent to deliver this news not from the standpoint of “hey, I got a new job”, but more importantly to establish the necessary transparency and disclosure from this point on.  This blog (and my twitter account @jasonboche) has always been and will continue to be mine.  I’ve made it quite clear in the past that my writing is my own and not the opinion or view of my employer.  This carries forward, and I will continue to be an independent voice as much as possible, but the fact that I now work for a VMware Partner will be inescapable.  Which brings me to the next point…
  • VMware’s policy is that, other than a few people who were grandfathered in, VMware Partners cannot be VMware User Group (VMUG) leaders.  I’ve been the Minneapolis VMUG leader for close to 5 years.  I’ve been involved with the group since the beginning when it was founded by @tbecchetti.  Although Dell Compellent would have allowed me to continue carrying the VMUG torch, VMware’s policy forbids it.  It’s a fair policy and I agree 100% with it.  The Minneapolis VMUG members own and operate the group and this is clearly what’s best for the charter and its members.  A few weeks ago, I began the transition plan with the help of VMware and have talked with several potential candidates for taking over the VMUG leader role.  If I haven’t talked to you yet and you’re interested in leading or co-leading the group, please contact me via email expressing your interest.  Be sure to leave your name and contact information.  Our group has a quarterly meeting coming up this Friday, which I’ll be conducting as business as usual.  Our Q3 meeting in September is where I’ll likely be stepping down and introducing the new leader(s).
  • I’m still attending Gestalt Tech Field Day 6 evening activities in Boston 6/8 – 6/11, but I will not formally be a delegate, nor will I be a delegate going forward, as I’m no longer considered independent.  Again, these are Gestalt IT guidelines and I completely get it; it’s what is best for the group.  I’m looking forward to seeing some old friends as well as new faces from **I can’t let the cat out of the bag just yet, area locals will find out soon**.
  • I’m going to get my hands on kit which I’ve not had the chance to work with in the past.  Don’t be completely surprised if future discussion involves Dell Compellent.  At the same time, don’t automatically jump to a conclusion that I’ve transformed into a puppet.  Cool technology motivates me and is ultimately responsible for where I am at today.  I enjoy sharing the knowledge with peers when and where I can.  I believe that by sharing, everyone wins.
  • VMworld – you’ll probably see me at the booth.
  • Partner Exchange – I may be there as well.
  • VMworld Europe – I hope to be there, but I’m not counting on it.  I didn’t ask.

I think that covers everything.  Compellent is a local (to me) storage company which I like.  I think Dell will add a lot of strength, opportunity, and growth.  I’m excited to say the least!

Jas

HDS and VAAI Integration

April 3rd, 2011

On day 1 of Hitachi Data Systems Geek Day 2.0, we met with Michael Heffernan, Global Product Manager – Virtualization.  You might know him as @virtualheff on Twitter.  I was pleased to listen to Heff as he discussed HDS integration with the VMware vSphere vStorage APIs for Array Integration (VAAI for short and most easily pronounced “vee·double-ehh·eye”).  For those who aren’t aware, VMware introduced VAAI with the GA release of vSphere 4.1 on July 13th of last year.  In short, VAAI allows the burden of certain storage related tasks to be offloaded from the ESX/ESXi hypervisor to the storage array.  Generally speaking, the advantages touted are performance improvement of intrinsic tasks and increased scalability of the storage array.  HDS is one of a few storage vendors who supported VAAI integration on the July launch date, and in February of this year they announced VAAI support with their VSP (see also Hu Yoshida’s writing on the announcement).

Heff started off with some background on virtualization in the datacenter along with IDC stats.  Here are a few that he shared with us:

  • Only 12.8% of all physical servers were virtualized in 2009
  • More than half of all workloads (51%) will be virtualized by the end of 2010
  • Two-thirds (69%) by 2013
  • VM densities continue to rise predictably, averaging:
    • 6 VMs per physical server in 2009
    • 8.5 VMs per physical server in 2013

A few timeline infographics were also shown which tell a short story about VMware and HDS.

VMware provides several integration points which storage vendors can take advantage of, VAAI being just one of them.  These integration points are use case specific and standardized by VMware.  As such, integration is developed in parallel by competing vendors and most often the resulting offerings from each look and feel similar.  Great minds in storage and virtualization think alike.

HDS integrates with all three VAAI attach points VMware offers (a quick reference sketch follows the list):

  1. Hardware Assisted Copy
  2. Hardware Assisted Zeroing
  3. Hardware Assisted Locking
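
As a quick reference of my own (not from the HDS session, so verify the option names against your ESX/ESXi 4.1 build), the three primitives map to SCSI commands and host advanced settings roughly as follows; a value of 1 enables each primitive and 0 disables it:

```python
# Rough mapping of the three original VAAI primitives (vSphere 4.1 era).
# Double-check the advanced setting names on your own hosts.
vaai_primitives = {
    "Hardware Assisted Copy (Full Copy)": {
        "scsi_command": "EXTENDED COPY (XCOPY)",
        "host_advanced_setting": "DataMover.HardwareAcceleratedMove",
    },
    "Hardware Assisted Zeroing (Block Zeroing)": {
        "scsi_command": "WRITE SAME",
        "host_advanced_setting": "DataMover.HardwareAcceleratedInit",
    },
    "Hardware Assisted Locking (ATS)": {
        "scsi_command": "COMPARE AND WRITE",
        "host_advanced_setting": "VMFS3.HardwareAcceleratedLocking",
    },
}

for name, details in vaai_primitives.items():
    print(f"{name}: {details['scsi_command']} -> {details['host_advanced_setting']}")
```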

Heff also used this opportunity to mention Hitachi Dynamic Provisioning (HDP) technology, which is essentially HDS thin provisioning plus other lesser known benefits, but which has nothing more to do with VAAI than any other storage vendor’s platform supporting both VAAI and thin provisioning.  Others may disagree, but I see no sustainable or realizable real world benefit from combining VAAI and thin provisioning at this time; the discussion is rather academic.

HDS went on to show that the VAAI benefits are real.  Tests show an 18% efficiency improvement in the block copy test on a 30GB virtual disk, and an 85% decrease in the elapsed time to eager-write zeros to a 30GB virtual disk.  The third VAAI benefit, hardware assisted locking, can be a little trickier to prove or may require specific use cases.  Following are examples of VMFS operations that require locking metadata, and as a result a SCSI reservation (which hardware assisted locking improves), per VMware KB Article 1005009:

  • Creating a VMFS datastore
  • Expanding a VMFS datastore onto additional extents
  • Powering on a virtual machine
  • Acquiring a lock on a file
  • Creating or deleting a file
  • Creating a template
  • Deploying a virtual machine from a template
  • Creating a new virtual machine
  • Migrating a virtual machine with VMotion
  • Growing a file, for example, a Snapshot file or a thin provisioned Virtual Disk

Heff showcased the following hardware assisted locking results: up to a 36% increase in performance and a 75% reduction in lock conflicts for the power on/linked clone test.
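
To illustrate why offloaded locking matters, here is a deliberately oversimplified toy model of my own; it is not how VMFS is actually implemented.  With SCSI-2 reservations, concurrent metadata operations from different hosts conflict because the reservation covers the whole LUN, whereas with ATS only operations that land on the same on-disk lock conflict:

```python
import random

# Toy model of lock conflicts on a shared VMFS datastore. Not an accurate
# simulation of VMFS internals; it only illustrates the scope of locking.
random.seed(42)

HOSTS = 8
OPS_PER_HOST = 50       # metadata operations (power on, snapshot growth, ...)
LOCK_RESOURCES = 512    # distinct on-disk locks that ATS can target

# Each operation is (host, lock resource it touches).
ops = [(h, random.randrange(LOCK_RESOURCES))
       for h in range(HOSTS) for _ in range(OPS_PER_HOST)]

# Pair up operations that overlap in time (toy: random concurrent pairs).
random.shuffle(ops)
pairs = list(zip(ops[::2], ops[1::2]))

# SCSI-2 reservation: any two concurrent metadata operations from different
# hosts conflict, because the reservation covers the entire LUN.
scsi2_conflicts = sum(1 for (h1, _), (h2, _) in pairs if h1 != h2)

# ATS: a conflict only occurs when two hosts hit the same lock resource.
ats_conflicts = sum(1 for (h1, r1), (h2, r2) in pairs if h1 != h2 and r1 == r2)

print(f"concurrent pairs       : {len(pairs)}")
print(f"SCSI-2 style conflicts : {scsi2_conflicts}")
print(f"ATS style conflicts    : {ats_conflicts}")
```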

VAAI offloads some of the heavy lifting from the hypervisor to the back end storage array so it was appropriate for the discussion to ultimately lead to impact on the array.  This is where I currently feel the bigger benefit is: better scalability or more mileage out of the array.  HDS is also the second storage vendor I’ve heard say that block LUN size and number of VMs per LUN is no longer a constraint (from a performance standpoint, everything else being equal).  This point always interests me and is frankly a tough pill to swallow.  I wasn’t able to pin Heff down to more specific details nor have I seen actual numbers, case studies, or endorsements from any storage vendor’s customer environments.  To some degree, I think this design consideration is still going to be use case and environment dependent.  It will also continue to be influenced by other constraints such as replication.  It may become more of a reality when VMware expands VAAI integration beyond the original three features.  HDS did mention that in vSphere 5, VMware is adding two more VAAI features bringing the total to five assuming they are released.

HDS offers competitive storage solutions for the VMware use case and it is clear they are totally committed to the virtualization push from both a storage and compute perspective.  You can learn more about these solutions and stay in tune with their evolution at their VMware Solutions site.

Full Disclosure Statement: HDS Geek Day is a sponsored event. Although I receive no direct compensation and take personal leave to attend, all event expenses are paid by the sponsors. No editorial control is exerted over me and I write what I want, if I want, when I want, and how I want.

Iomega StorCenter ix2-200 Network Storage, Cloud Edition

April 2nd, 2011

I recently acquired an Iomega ix2-200 storage appliance, which is perhaps the smallest storage device in EMC’s vast portfolio, save the VMAX thumb drive I’ve heard sparse sightings of.  This is a nifty little device which could prove quite useful in the home, home office, college dorm, or small business.  The ix2 serves as network attached storage (NAS) capable of several protocols, mapping it to many of the most popular applications: NFS, iSCSI, CIFS/SMB, Apple File Sharing, Bluetooth, FTP, TFTP (a new addition in the latest firmware update), rsync, and SNMP, to name several.

A rich and easy to use browser-based interface provides access to the device and storage configuration.  The package includes software which I initially installed on my Windows 7 workstation to get up and running.  This software also integrates nicely with the PC it’s installed on, providing file backup and other features, some of which are new in the -200 version of the appliance and cloud related.  I later ditched the management software due to an annoying iSCSI configuration bug.  Once the appliance is on the network, the web interface via its TCP/IP host address proved to be more reliable.  My unit shipped with a fairly old version of firmware, which I wasn’t initially aware of because the management interface claimed it was all up to date.  Updating the firmware added some features and sped up iSCSI LUN creation time immensely.

What’s included:

  • Iomega® StorCenter ix2-200 Network Storage
  • 1 USB port on the front, 2 USB ports in the rear (for external drives, printers, and UPS connectivity)
  • 1 Gb Ethernet port in the rear
  • Ethernet Cable
  • Power Supply
  • Printed Quick Install Guide & other light documentation
  • Software CD
  • Service & Support: Three year limited warranty with product registration within 90 days of purchase.

Technical Specifications:

  • Desktop, compact form factor
    • Width: 3.7 in (94mm)
    • Length: 8.0 in (203mm)
    • Height: 5.6 in (141mm)
    • Weight: 5 lbs (2.27 kg)
  • CPU at 1GHz with 256MB RAM
  • 2 x 3.5″ Easy-Swap SATA-II Hard Disk Drives
  • RAID 1, JBOD
  • 1 x RJ45 10/100/1000Mbps (GbE) Ethernet port. LAN standards: IEEE 802.3, IEEE 802.3u
  • 3 x USB 2.0 ports (to connect external HDD, printers, UPS, Bluetooth dongle)
  • Client computers for file system access—Windows PC, Mac OS, Linux
  • AC Voltage 100-240 VAC
  • Power consumption – 5 Watts (min) – 19 Watts (max)
  • Acoustic noise – 28 dB maximum

Application Features:  The ix2-200 has an impressive set, most of which I don’t use and probably never will.

  • Content sharing
  • Torrent download manager
  • Photo slide show
  • Remote access
  • Active Directory support
  • USB printer sharing
  • Facebook, Flickr, and YouTube integration
  • Security camera integration
  • Several backup options, including cloud integrated

UI Candy:  The management interface consists of five main tabs: Home, Dashboard, Users, Shared Storage, and Settings.

The ix2 ships with 2x 1TB SATA-II drives.  RAID 1 (mirror) with automatic RAID rebuild is the factory default; RAID 0 (stripe without parity) and JBOD modes are also available.

Temperature and fan status are reported in the management interface.  My unit seems hot; I need to check that fan showing 0 RPM.

Believe it or not, jumbo frames are supported at either 4000 or 9000 MTU.
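
If jumbo frames are enabled, every hop has to agree on the MTU.  A simple end-to-end check (a sketch of my own; the address is made up, it runs from a Linux client, and vmkping serves a similar purpose on an ESX/ESXi host) is to ping the ix2 with the do-not-fragment bit set and a payload of the MTU minus 28 bytes of IP and ICMP headers, i.e. 8972 bytes for a 9000 MTU or 3972 for 4000:

```python
import subprocess

# Quick end-to-end jumbo frame check from a Linux client (illustrative
# sketch; adjust the address for your own network).
NAS_IP = "192.168.1.50"   # hypothetical ix2 address
MTU = 9000
payload = MTU - 28        # 20 bytes IP header + 8 bytes ICMP header

# -M do sets the don't-fragment bit; the ping only succeeds if every hop
# (client NIC, switch ports, and the ix2 interface) passes the full frame.
result = subprocess.run(
    ["ping", "-M", "do", "-s", str(payload), "-c", "3", NAS_IP],
    capture_output=True, text=True,
)
print(result.stdout)
print("jumbo frames OK" if result.returncode == 0 else "fragmentation required or host unreachable")
```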

Speaking of jumbo frames, what about performance?  I was pleasantly surprised to find the ix2-200 officially supported by VMware vSphere for both iSCSI and NAS.  I’m in it for the vSphere use case, so I benchmarked NFS and iSCSI in a way consistent with previous storage performance tests I’ve run, which can also be compared in the VMware community (take a look here and here for those comparisons).  With two spindles I wasn’t expecting grand results, but I was curious nonetheless and I also wanted to share and compare with some co-workers who tested their home storage this past week.  Performance results were at times inconsistent during multiple runs of the same IO test.  In addition, NFS performance decreased after applying the latest firmware update.
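
When comparing runs like these, I find it helps to sanity check Iometer’s own numbers against each other.  The helper below is my own sketch with made-up sample figures (not ix2 results): throughput should equal IOPS times IO size, and with a fixed number of outstanding IOs, Little’s law ties IOPS and average response time together (outstanding ≈ IOPS x response time):

```python
# Sanity-check helper for Iometer results (illustrative numbers, not ix2 data).
def check_run(iops, io_size_kb, avg_resp_ms, outstanding_ios):
    mbps = iops * io_size_kb / 1024                 # throughput implied by IOPS
    implied_oio = iops * (avg_resp_ms / 1000)       # outstanding IOs implied by latency
    print(f"IOPS={iops:>6}  {mbps:6.1f} MB/s  "
          f"implied outstanding={implied_oio:5.1f} (configured {outstanding_ios})")

# Hypothetical examples in the spirit of the Max Throughput and RealLife tests.
check_run(iops=1800, io_size_kb=32, avg_resp_ms=17.8, outstanding_ios=32)
check_run(iops=250, io_size_kb=8, avg_resp_ms=128.0, outstanding_ios=32)
```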

iSCSI

ix2 iSCSI feels like a no-frills implementation.  iSCSI LUN security is user and Mutual CHAP based and seems particularly weak.  Individual LUNs can only be secured on a per user basis.  The user based security isn’t supported by vSphere and the CHAP implementation doesn’t seem to work at all in that my ESXi host was able to read/write to an ix2 LUN without having the required CHAP secret.  In summary, the only viable ESXi configuration is to connect the host or hosts to an unsecured iSCSI LUN or set of LUNs on the ix2.  Risks here include lack of data security as well as integrity since any host on the network with an iSCSI initiator can read/write to the iSCSI LUN.  As far as I can tell, there is no thin or virtual provisioning at the ix2 layer when creating iSCSI block LUNs.  This is merely an observation; I wasn’t expecting support for thin provisioning, dedupe, or compression.

NFS

NFS is more secure on the ix2 in that volume access can be restricted to a single IP address of the vSphere host.  Volumes are also secured individually which provides granularity.  It’s also flexible enough to support global subnet based access.  These are security features commonly found in enterprise NFS storage.  Similar to iSCSI above, NFS also supports user based access which again doesn’t provide much value in the vSphere use case.

I’m not going to speak much in detail about the performance results.  I didn’t have much in the way of expectations and I think the results speak for themselves.  iSCSI performed marginally better in the RealLife test.  However, I’m not convinced the security trade off makes iSCSI a clear winner.  Coupled with the NFS advantage in the Max Throughput test, I’m more in favor of NFS overall with the ix2-200.

Supporting performance data was collected from Iometer.

Reliability of the ix2-200 is in the back of my mind.  I’ve heard one report of frequent failures and loss of RAID/data with the bigger brother Iomega ix4.  Time will tell.  With that, I won’t be placing any important data on the ix2.  As it is, I blew away the factory default RAID 1 configuration in favor of RAID 0 for double the capacity and the combined performance of both spindles.  My intent for the ix2 is to use it as cheap lab storage for vSphere and potentially backup during the summer months.

For more on the ix2, take a look at a nice writeup Stephen Foskett produced quite a while back.

Pre Hitachi Data Systems Geek Day 2.0

March 22nd, 2011

Hitachi Data Systems Geek Day 2.0 starts tomorrow.  HDS has invited storage and virtualization experts from many points on the globe to come and be immersed in the latest storage solutions HDS has to offer.  The event kicks off at 8am, runs into the evening, and wraps up Thursday afternoon.

Asked what in particular I would like HDS to cover, my response stemmed from the VMware Virtual Infrastructure/vCloud angle: interests such as unified storage, VAAI support, plugin integration, scalability, a storage virtualization update (USPV?), replication, and SRM integration.

HDS responded by putting together a two-day event which includes a VAAI demo session and a presentation on the next wave of server and storage virtualization.  In addition, we’ll cover the converged datacenter, a Hitachi Clinical Repository overview and demo, Hitachi Command Suite 7.0 hands-on, unified compute, and storage economics.

You can follow what the delegates have to say on Twitter by watching the hash tag #HDSDay.  We’re all bloggers so expect to see content from those respective sources as well.  From Pete Gerr’s blog:

The current list of distinguished bloggers making their way to Sefton Park, UK for the event includes:

Enterprise Storage is the key enabler for many VMware technologies and Tier 1 virtualized workloads.  I’m looking forward to what Hitachi has to showcase over the next couple of days in addition to seeing some faces I haven’t seen in a while and meeting new contacts in the storage industry.

EMC Celerra BETA Patch Pumps Up the NFS Volume

March 21st, 2011

A while back, Chad Sakac of EMC announced on his blog that he was looking for customers to volunteer their storage arrays to run various performance tests, as well as a piece of NFS-specific BETA code for DART.  Having installed the BETA code (which I’m told is basically a nas executable swap in), I proceeded to compare NFS performance results with baseline results I had captured pre-patch.  In most test cases, the improvements ranged from significant to more than double the baseline performance.  Most of the performance gains appear to surround write I/O.

Following are the results comparing NFS performance with four different workload types before and after the BETA patch on a Celerra NS-120 with 15 x 15k spindles.

Detailed supporting data was captured as well.  Keep in mind the NFS patch is still BETA with no firm release date as of yet from EMC.

This looks like great stuff from EMC and, assuming the code reaches GA status, it would bolster the design choice of NFS in the datacenter.  Chad may still be looking for test results for certain use cases.  If you’re interested in participating in the tests with your EMC array, please reach out to Chad using the comments section of his blog post linked above.