
The Lab – Then and Now

Documenting its evolution using VMware

The lab is a critical component of my learning and memory retention process. Most of my training and certifications have been acquired with the benefit of the lab. The time and money spent building, maintaining, and cooling the lab has, I believe, been well spent in terms of career opportunities and advancement.

At its beginning, the lab consisted of every piece of hardware I had the physical space, electrical capacity, and network ports for. The more the merrier. However, underlying problems were developing that, ironically, some of today’s businesses still deal with. There was very little consistency, and most of the hardware was what I would call low-end or desktop class. It didn’t stack well vertically, was failure prone, wasn’t very scalable, and had no redundancy. Although it was fun for quite some time to set up, tear down, and so on, I had been through enough of it, and my days of A+ certification were far behind me. I was starting to experiment with a lot of different operating systems and configurations, which usually meant a fresh installation of an OS on a piece of hardware each time. As I grew more curious about the software and OS side of things, more OS builds were needed. Since each OS was installed on dedicated hardware, there existed a 1:1 relationship between running OS experiments and physical hardware in the lab (and electrical, and network, and tangled cords, etc.). I was getting tired of the mess of hardware I was dealing with.

Then I was introduced to VMware. Chances are, if you are reading this, you have an idea of what VMware was able to do for the lab. Eliminate hardware. Reduce hardware complexity. Introduce virtual hardware platform consistency. Flexibility. Uptime. Efficiency. Compatibility. Etc. I’ll be honest in saying that VMware doesn’t solve all of the lab’s problems. It eliminated most of the old ones but introduced a few new challenges. Instead of many small boxes, I now have a few large ones. The main obstacle is cost. This isn’t to say that VMware makes virtualization more expensive; quite the opposite in most cases. But now, instead of using free hand-me-down computers in the lab, I need to pony up money for decent hardware with server-class (and sometimes datacenter-class) operational characteristics: performance, scalability, redundancy, fault tolerance, vertical stacking, etc. I welcome these challenges, though, as they get me thinking deeper and more strategically (good exercise for my career brain), plus they bring the lab that much more into alignment with what I work with daily in my career.

Lab Update 1/19/10:

I thought I’d post a lab update since John Troyer nudged me, letting me know this week’s podcast was focusing on home labs for VCP and VCDX studies. My lab has grown to what some may consider a ridiculous size. I’ve received comments such as “Why do you need all that stuff for your home?” My response is that “that stuff” is not for my home; it’s for my lab, which happens to be in my home. If you want to talk about what’s ridiculous, have a look at co-location costs. I’ve been monitoring them for years looking for something affordable. Thus far I can’t justify the rent they charge, plus the access to equipment is an inconvenient 30-minute drive, very limited, and fee-based. That’s why the lab is still in my home. No, I’m not going to lab in the cloud. I need hands-on access to equipment which clouds won’t give me, such as Fibre Channel switches and SANs. What’s important is that the lab suits me and my career pursuits well.

With the exception of a physical file server, which will be virtualized soon, the lab is used 100% for VMware virtualization. The DL385 G2 hardware is deployed as vSphere (FT-compatible) hosts. These make nice virtual infrastructure hosts with plenty of power to run many VMs, including additional ESX hosts deployed as VMs for testing various things. For quite some time now I’ve been running the pair in a two-host cluster, one as an ESX host and the other as an ESXi host. This may seem odd, but VMware does support the mixed configuration. The reason for the mix is that I need to stay up to speed on both platforms. A mixed cluster is a good way to do that, and it has actually uncovered an incompatibility issue which I was able to report to VMware.
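
For the curious, vCenter makes it easy to verify exactly what a mixed cluster is running. Below is a minimal sketch using pyVmomi (the vSphere Python SDK) that lists each host in a cluster along with its full product name, which is what distinguishes classic ESX from ESXi. This is just an illustration, not the tooling I actually use; the vCenter address, credentials, and the cluster name “Lab” are placeholder assumptions.

```python
# Minimal pyVmomi sketch: report the product (ESX vs. ESXi) of each host
# in a cluster. Hostname, credentials, and cluster name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skip certificate checks
si = SmartConnect(host="vcenter.lab.local", user="administrator",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    for cluster in view.view:
        if cluster.name != "Lab":  # placeholder cluster name
            continue
        for host in cluster.host:
            # fullName reads e.g. "VMware ESX 4.0.0 ..." or "VMware ESXi 4.0.0 ..."
            print(f"{host.name}: {host.config.product.fullName}")
    view.DestroyView()
finally:
    Disconnect(si)
```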

The lab has seen a few exciting additions in the past month. I recently picked up an EMC Celerra NS-120 Unified Storage SAN, which added a second rack to my lab as well as two 220V 30 Amp single-phase circuits to power it. An additional HP StorageWorks 2/8V Fibre Channel switch should arrive tomorrow to complete a pair for more in-depth NPIV testing and to extend the SAN fabric from the HP MSA1000 SAN to the EMC Celerra NS-120 SAN. Arriving soon is NetApp FAS3050c storage. With the new EMC and NetApp storage, I should be able to retire the HP MSA1000 SAN. It’s a decent SAN, but it’s pretty old, slowish, and a one-trick FC pony – not nearly as full-featured as the EMC and NetApp offerings. I’m looking forward to things like Dedupe (3D), Thin Provisioning, File Level Retention (FLR), Replication, etc. And finally, a pair of dual-port Fibre Channel PCI-Express HBAs which support NPIV arrived, courtesy of Emulex.

Lab Update 10/23/10:

The legacy file/SQL/MySQL/IIS/DHCP/Veeam/blog server mentioned in the update above was P2V’d a few months ago. The physical DL380 G3 is shut down and has been up for sale. The HP MSL5026 tape library was also decommissioned and sold. No more tapes! All Veeam backups now go to disk and stay on disk. The tricky part is keeping a decent amount of backup retention while juggling SAN provisioning on a semi-regular basis. With these last few moves, the lab is 100% virtualized, all running on 2 DL385 G2 hosts and 2 excellent SAN storage arrays. There are 2 other SAN-attached, vSphere-compatible DL385 hosts which I reserve for special projects as needed, but to be honest they haven’t been powered on in at least 6 months.

Lab Update 2/25/12:

In an effort to continue scaling up, I brought in two new hosts last fall with 8 cores and 64GB RAM each. The 8×32 DL385 G2 hosts have been up for sale on Craigslist. I also brought in a Dell Compellent Storage Center SAN with a few trays of spindles spanning different tiers of storage. For the first time ever I’ll be able to start working with larger datastores in vSphere, up to 64TB, while not needing the actual spindle capacity to back them, thanks to the array’s thin provisioning.
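
One thing worth watching with thin provisioning is how far the datastores are overcommitted relative to the spindles actually behind them. Here is a minimal pyVmomi sketch along those lines; it compares each datastore’s physical capacity against the space provisioned to VMs (space in use plus thin space promised but not yet written). Again, the connection details are placeholders, and this is an illustration rather than my actual monitoring setup.

```python
# Minimal pyVmomi sketch: show capacity vs. provisioned space per datastore
# to gauge thin-provisioning overcommitment. Connection details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

GB = 1024 ** 3

ctx = ssl._create_unverified_context()  # lab only: skip certificate checks
si = SmartConnect(host="vcenter.lab.local", user="administrator",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        s = ds.summary
        if not s.accessible or not s.capacity:
            continue
        # provisioned = space in use plus thin space promised but not yet written
        provisioned = s.capacity - s.freeSpace + (s.uncommitted or 0)
        print(f"{s.name}: {s.capacity / GB:.0f} GB capacity, "
              f"{provisioned / GB:.0f} GB provisioned "
              f"({provisioned / s.capacity:.0%} committed)")
    view.DestroyView()
finally:
    Disconnect(si)
```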

Rack 1 (three 110V 20 Amp circuits)

  • 3Com Superstack III 48-port 10/100/1000 Ethernet switch
  • Cisco Catalyst 3500 XL 24-port Fast Ethernet switch
  • HP DL385 (1x AMD DC Opteron, 4GB RAM)
  • HP DL385 (1x AMD DC Opteron, 4GB RAM)
  • 15″ LCD rack monitor
  • KVM switchbox/Keyboard tray
  • HP DL385 G2 (2x AMD QC Opteron, 32GB RAM)
  • HP DL385 G2 (2x AMD QC Opteron, 34GB RAM)
  • HP DL585 G2 (4x AMD DC Opteron, 64GB RAM)
  • HP DL585 G2 (4x AMD DC Opteron, 64GB RAM)
  • HP StorageWorks 2/8V SAN switch
  • HP StorageWorks 2/8V SAN switch

Rack 2 (two 220V 30 Amp single-phase circuits)

(Rack 2 equipment shown in a photo capture.)

Comments

  1. Jase,

    One word, “WOW”!

    I totally understand your reasons for doing this – just wish I had the space. Very jealous.

    Keep up the good work with the site.

    Cheers,

    Simon

  2. Wow! That’s some serious Tech ‘Bling’! 🙂 Nice job documenting it as well!
    -Carlo

  3. Sergey Kurganov says:

    Very impressive! I wonder what the monthly electrical bill is to keep this home lab running..

    Sergey

  4. GGuglie says:

    Really impressive!!!!
    I’m very very jealous 😉

  5. Censored says:

    Yeah, it’s indeed impressive :/ I wish I had the money for even half of it 😐

  6. Brian says:

    Your setup is awesome. Those storage pieces must have cost you some serious $$.

  7. Pete says:

    Amazing setup! So what’s next for the lab? How about a pair of Cisco 3750s and setting up cross-stack EtherChannel?

  8. Chrys Bundy says:

    Completely jealous… My better half just looked over my shoulder and said “Don’t get any ideas” … a boy can dream tho, right? Kudos, Jason.

    @cway1979

  9. DWZ says:

    .. what are you doing with that lot at home? Does it work when you download pictures from your digital camera?!

  10. Whoa! Just impressive, and considering the electric bill, very expensive. Good work with the cabling as well.

    I thought running another server and a NAS for my virtualization lab at home (http://www.virtualizationtalk.net/58-building-home-virtualization-lab-on-a-budget/) would be a project, but your setup is huge.

    You don’t have a budget for this, do you? 🙂

  11. Nice lab Jason – I wish I had 2 racks full of kit in the garage!

    Not a fan of the cable management arms though; they trap warm air…

    I’d use the rack struts to tidy cables 🙂

  12. RJ says:

    DUDE!!!!!!! Awesome!

    My lab is running on whitebox solutions and cheap networking devices, purely because of budget constraints.

    I can only hope my second kit will be as impressive as yours. I’m thinking of buying refurbished rack solutions for the second lab I’m planning (remote site).

    KUTGW !

  13. Jason,
    This is impressive! I do want to build my own home lab, but probably will be doing so with the “hypervisor in a hypervisor” approach, to save money and space. Any good tips on building a white box from scratch and running the lab through Workstation 7?

  14. Mike Barratt says:

    Nice lab! Certainly beats my servers stacked on top of each other on the floor… I need to get them in the garage, but I think it’s best I fix the leaky roof first 😀

  15. Unbelievable. I could use a storage system to port my video cameras to, and possibly justify it to the wife, but I don’t think she would go for a SAN at home… =)

  16. MI says:

    Does the HP ProLiant DL385 G2 support hardware virtualization in the BIOS and CPU?

  17. jason says:

    Yes it does. The AMD QC Barcelona processors also support VMware FT.

  18. CG says:

    All I can say is WOW!!

    I am kind of stuck between two i7 whiteboxes and the DL385.

    Let me know what you think.

  19. paul says:

    Love it. My question is: how do you finance this? What’s the secret?

  20. Nizam says:

    That’s an awesome setup. I totally agree with your logic in regards to “home lab”. I’d wish you the best of luck but it looks like you’re already doing well. Cheers!

  21. Dan says:

    I was at VMworld 2011, and I think you should really employ the ioCache solution from Fusion-io. Talk about a kick in the pants!

  22. ifanslv says:

    Very nice lab; you could build a private cloud with this hardware. Have you ever played with an open source hypervisor?

  23. Hamish G says:

    Financing something like this is not a real biggie once you’ve stepped up in the world of IT. It sort of spirals: the better the pay grade you get from working with the small stuff, the more you can afford the big stuff. I’m working with towers currently, due to space constraints, but once we move house soon, I’ll be racking up. Most recent purchase? An HP ML350p with dual processors and 64GB of RAM 🙂

    How much fun is it to be a geek…?

  24. Minhaj says:

    Does the DL385 G2 have any issues with VMware 5.1? If not, I will go for the purchase. How many servers can we run if I have two DL385 G2s with the current CPUs?

  25. Andrew says:

    Do you have recurring licensing costs?

  26. jason says:

    Some items, yes.
    Other items, no.
