Connect a fibre attached tape device to a VM on ESX

October 27th, 2008 by jason

Have you ever considered virtualizing your tape backup server? Maybe you’ve thought about it in the past, but the drawbacks were compelling enough to keep you from going forward. For instance, a SCSI cable attached tape device hanging off a clustered ESX host pins the backup VM to that host. Pinning a VM to a clustered host means you lose the benefit of VM portability, you lose the flexibility of host maintenance during production cycles, and you lose the use of valuable dollars spent on VMotion, DRS, HA, shared storage, and FT (future).

What if you had the hardware to make it possible? Would you do it? If I had to purchase hardware specifically to make this happen, I would need to research the cost effectiveness. Everything else being equal, if I already had the hardware infrastructure in place, yes I would. I had access to the hardware, so I headed into my lab to give it a shot.

What’s required?

  • Hardware
    • One or more ESX hosts
    • At least one fibre HBA in each ESX host that supports fibre tape devices (fibre tape support is typically enabled in the HBA BIOS at POST)
    • A fibre attached tape device (the fibre HBA in a tape device is called an NSR or Network Storage Router)
    • At least one fibre SAN switch
      • If using more than one in a single fabric, be sure they are ISL’d together
      • If multipathing in separate fabrics, at least two HBAs per host will be required and at least two NSRs in the tape device (although this is really going overboard and would be quite expensive)
    • Fibre cable between ESX host and SAN switch
    • Fibre cable between NSR and SAN switch
    • Optional components: shared storage
  • Software
    • VMware ESX or ESXi
    • Virtual Infrastructure Client
    • Latest firmware on HBA(s), NSR(s), and SAN switch(es)
    • Appropriate zoning created and enabled on SAN switch for all ESX host HBAs and NSR
    • Optional components: VirtualCenter, VMotion, DRS, HA

The steps to set this up aren’t incredibly difficult.

  1. Attach fibre cables between HBAs and SAN switch
  2. Attach fibre cable between NSR and SAN switch
  3. On the fibre SAN switch, zone the NSR to all HBAs in all ESX hosts that will participate. Be sure to enable the active zone configuration; on Brocade SAN switches this is a two-step process (a hedged sketch of the switch- and host-side commands follows this list).
  4. Perform a scan on the fibre HBA cards (on all ESX hosts) to discover the fibre tape device. In this case, I’ve got an HP MSL5026 autoloader containing a robotic library and two tape drives:
    [screenshot: fctape1]
  5. Once each ESX host can “see” the tape device, add the tape device to the VM as a SCSI passthru device. In the drop-down selection box, the two tape drives appear as “tape” and the robotic library appears as “media”. Take a look at the .vmx file to see how the SCSI passthru device maps back to VMHBA1:1:2 and ultimately to the tape drive as a symbolic link (an annotated excerpt follows this list):
    [screenshots: fctape2, fctape3, fctape9, fctape10, fctape11]
  6. The VM can now see the tape device. Notice it is presented as SCSI, not fibre. At this time, VMs only see SCSI devices; fibre is not virtualized within VMware virtual machines to the extent that a VM could see a virtual fibre fabric or a virtual HBA. The current implementation of NPIV support in VMware is something different and will be explored in an upcoming blog:
    [screenshot: fctape4]
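
For reference, here is a minimal sketch of the switch- and host-side commands behind steps 3 and 4. The zone name, config name, and WWNs are placeholders for illustration only; your Brocade Fabric OS release and existing zoning configuration may differ, and the HBA rescan can just as easily be kicked off from the VI Client (Configuration > Storage Adapters > Rescan):

    # Brocade switch: create a zone containing the NSR and the ESX HBA WWNs,
    # add it to the active config, then save and enable it (the two-step
    # process mentioned above). All names and WWNs below are placeholders.
    zonecreate "esx_tape_zone", "50:06:0b:00:00:aa:bb:cc; 21:00:00:e0:8b:11:22:33"
    cfgadd "prod_cfg", "esx_tape_zone"
    cfgsave
    cfgenable "prod_cfg"

    # Each ESX host's service console: rescan the fibre HBA so the tape
    # drives and robotic library show up as SCSI targets.
    esxcfg-rescan vmhba1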

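And here, roughly, is what the resulting SCSI passthru entries look like in the .vmx file once the two tape drives and the robotic library have been added. Treat this as a hedged excerpt rather than a verbatim copy: the device paths are placeholders and will reflect your own vmhba:target:LUN numbering, each tape drive (plus the media changer) gets its own entry, and the # annotations are added here for readability:

    scsi1.present = "TRUE"
    scsi1.virtualDev = "lsilogic"          # LSI Logic controller hosting the passthru devices
    scsi1:0.present = "TRUE"
    scsi1:0.deviceType = "scsi-passthru"   # generic SCSI passthru, not a virtual disk
    scsi1:0.fileName = "/vmfs/devices/genscsi/vmhba1:1:2:0"   # first tape drive (placeholder path)
    scsi1:1.present = "TRUE"
    scsi1:1.deviceType = "scsi-passthru"
    scsi1:1.fileName = "/vmfs/devices/genscsi/vmhba1:1:3:0"   # second tape drive (placeholder path)
    scsi1:2.present = "TRUE"
    scsi1:2.deviceType = "scsi-passthru"
    scsi1:2.fileName = "/vmfs/devices/genscsi/vmhba1:1:4:0"   # robotic library / media changer (placeholder path)
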
Good news! The fibre attached tape drive works perfectly using Windows ntbackup.exe. Effective throughput of many smaller files to tape was 389MB/minute, or roughly 6.5MB/second. As expected, a second backup job with fewer but larger files achieved an increased throughput of 590MB/minute, or nearly 10MB/second. These speeds are not bad:
[screenshots: fctape5, fctape6, fctape8]

Now for the bad news. When trying to migrate the VM while it was powered on (VMotion) or powered off (cold migration), I ran into a snag. VMware sees the fibre tape device as a raw disk behind an LSI Logic SCSI controller, which is not supported for migration (I tried changing the LSI Logic bus to use physical bus sharing, but that did not work; see the snippet below the screenshots):
[screenshots: fctape7, fctape9]
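
For completeness, the bus sharing change I attempted corresponds to a .vmx entry along these lines (a sketch only; as noted above, it did not unblock migration in my testing):

    scsi1.sharedBus = "physical"    # accepted values are "none", "virtual", and "physical"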

The VM migration component of my test was a failure, but the fibre connectivity was a success. Perhaps we’ll have SCSI passthru migration ability in a future version of VMware Virtual Infrastructure. Maybe v-SCSI passthru is the answer (v-* seems to be the next-generation answer to many datacenter needs). What this experiment boils down to is that I can’t do much more with a fibre attached tape device than I can with a SCSI attached tape device. In addition, a VM with an attached SCSI passthru device remains pinned to an ESX host and therefore doesn’t belong on a clustered host. However, I can think of a few potential advantages of a fibre attached tape device which may still be of interest:

  1. Fibre cabling offers better throughput speed and more bandwidth than SCSI.
  2. Fibre cabling offers much longer cable run distances in the datacenter.
  3. A failed SCSI card on the host often means a motherboard replacement. A failed HBA on the host means replacing an HBA.
  4. Fibre cabling allows multipathing, while SCSI cabling, with its required bus termination, does not.
  5. Fibre cabling leverages a SAN fabric infrastructure which can be used to gather detailed reports using native and robust SAN fabric tools such as SAN switch performance monitor, SANsurfer, HBAnywhere, etc.
  6. VMs with fibre attached tape can still be migrated to other zoned hosts by simply removing the tape device in virtual hardware, performing the migration, and then re-adding the tape device, all without leaving my chair (a rough sketch of this workflow follows this list). A SCSI attached tape device would actually need to be re-cabled behind the rack.
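
Here is a rough sketch of that remove/migrate/re-add workflow for a cold migration, assuming a classic ESX service console and .vmx entries like the excerpt shown earlier (paths and entry names are placeholders; the same steps can be done entirely through the VI Client’s Edit Settings and Add Hardware wizards):

    # 1. With the VM powered off, strip the SCSI passthru entries from the .vmx
    #    (or remove the SCSI devices in the VI Client's Edit Settings dialog).
    sed -i '/^scsi1:[0-2]\./d' /vmfs/volumes/datastore1/backupvm/backupvm.vmx

    # 2. Cold migrate the VM (or VMotion it once it is running without the
    #    devices) to another ESX host zoned to the same NSR.

    # 3. Re-add the tape drives and robot as SCSI passthru devices on the
    #    destination host; thanks to the zoning, it sees the same targets.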

Comments

  1. cititechs says:

    This is a proven working method. I’ve done it with a Quantum Scalar i500 and Symantec Backup Exec 11.

    The only note I would add is that if your library has more than one drive, you need to add a SCSI device per tape drive to your VM.

  2. Used Laptop says:

    Very informative site. But I have to follow it thoroughly, as I am no computer geek like you, man.

  3. hazard1yard says:

    Seems to work to a degree. I set it up and I can see the tape drive in my Windows VM, but I cannot see the robot. In ESX it shows both, with differing SCSI IDs, but they both have the same LUN number of 0.

    So near but yet so far ……..

  4. hazard1yard says:

    Just an update: I can see the robot and the tape drive in ESX. If I assign the tape drive as a SCSI device to a VM, it is all okay and comes up in the VM. As soon as I assign the robot and try to boot the VM, it says something about “cannot find file” and does not boot.
    Slowly but surely ….

  5. Gabriel says:

    I was testing this out and it worked for some time. However, after some recent updates, this setup causes the HP NS E1200 to reboot at random. A way to consistently reproduce the problem is to run the HP TL LL test LTO DRIVE ASSESSMENT TEST.

  6. Frank Weyer says:

    Hello,

    I have also been using this for several months under ESX 3.5 Update 4 with HP Data Protector 6.10.

    I have now upgraded to vSphere 4.
    I can see the devices at the ESX level and in the virtual machine,
    but I get errors if I try to write/read data to/from the tape.

    Any ideas?

  7. kesparlat says:

    This solution is not supported; I prefer to work with hardware passthru and give a physical HBA to the VM.

  8. Marilyn L. says:

    We are using a Quantum Scalar i2000, Data Protector 6.0, and a Brocade switch. We would like to fibre connect. Any ballpark costs for this project?