
Putting some money where my VMware mouth is

February 15th, 2009

I came home this afternoon from a Valentine's Day wedding in North Dakota to find that my one and only workstation in the house (other than the work laptop) had a belated Valentine's Day present for me: it would no longer boot up. No Windows. No POST. No video signal. No beep codes.

[Photo: DSC00473]

I was feeling adventurous and I needed a relatively quick and inexpensive fix. I decided to pair one of the thin clients I received from Chip PC at VMworld 2008 with a freshly deployed Windows XP template on the Virtual Infrastructure and promote this VDI solution to main household workstation status for the next few weeks. The timing on this could not have been better. The upcoming Minnesota VMUG on Wednesday March 11th is going to be VDI focused. I guess I'll have more to contribute at that meeting than I had originally planned. With any luck, Chip PC will be in attendance and we can discuss some things.

The thin client:  Chip PC Xtreme PC NG-6600 (model: EX6600N, part number: CPN04209).

Specs:

  • RMI Alchemy Au1550 500MHz RISC processor (roughly equivalent to a 1.2GHz x86 thin client processor)
  • 128MB DDR RAM
  • 64MB Disk-On-Chip with TFS
  • 128-bit 3D graphics acceleration engine with separate 2x8MB display memory SDRAM
  • Dual DVI ports each supporting 1920×1200 16-bit color.  Supports quad displays up to 1024×768
  • Audio I/O
  • 4 USB 2.0 ports
  • 10/100 Ethernet NIC
  • Power draw:  3.5W work mode, 0.35W sleep mode
  • OS:  Enhanced Microsoft Windows CE (6.00 R2 Professional)
  • Integrated applications (plugins – note that plugins are downloaded at no charge from the Chip PC website and are not embedded or included with the thin client by default – the "just enough OS" concept)
    • Citrix ICA
    • RDP 5.2 and 6
    • Internet Explorer 6.0
    • VDM Client
    • VDI Client
    • Media Player
    • VPN Client
    • Ultra VNC
    • Pericom (Team Talk) Terminal Emulation
    • LPD Printer
    • ELO Touch Screen
  • Compatibility
    • Citrix WinFrame, MetaFrame, and Presentation Server 4.5
    • MS Windows Server 2000/2003
    • MS Windows NT 4.0 – TS Edition
    • VMware Virtual Desktop Infrastructure (VDI) using RDP
  • Full support of both local and network printers:  LPD, LPR, SMB, LPT, USB, COM
  • Support for USB mass storage (thumb drives – deal breaker for me)
  • Support for wireless USB NIC (not included)
  • etc. etc. etc.

[Photo: DSC00474]

Truth be told, this isn't really a promotion in the sense of being earned through extensive prior testing. I hadn't even taken the thing out of the box yet other than to register it for the extended warranty. I have only a little experience with these devices: there's an identical unit in the lab at work that I've spent a total of 30 minutes on. To the best of my knowledge, this is the Cadillac unit from Chip PC.

I don't have any fancy VDI brokering solutions here in the home lab and I'm not up to speed on VMware View, so the plan is to leverage Thin Client -> RDP -> Windows XP desktop on VMware Virtual Infrastructure 3.5.

I think this is going to be a good test: a trial by fire of VDI (granted, a fairly simple variation). I spout a lot about the goodness that is VMware, and now I'll be eating some of my own dog food from the desktop workspace. I'm a power user. I've got my standard set of applications that I use on a regular basis, plus a few hardware devices such as a flatbed scanner, iPod Shuffle, USB thumb drives, and digital cameras. I should know within a short period of time whether this will be a viable solution. Add my wife's career to the mix: she uses our home computer to access her servers at work on a fairly regular basis, and she sometimes works from home while I'm away at the office or traveling. It's going to be critical that this solution stays up and running and remains viable for her while I'm remote and unable to provide computer support.

So where am I at now? I've got the VDI session patched and my most critical applications installed to get me by in the short term: Quicken, SnagIt, the network printer, and Citrix clients. I'll install MS Office later, but for now I can use the published application version of Office on my virtualized Citrix servers. I've been listening to some Electro House on www.di.fm in the VDI session, and the music quality is as good as it was on my PC before it died, although it doesn't completely drive the 5.1 surround in the den. Pretty sure I'm getting 2.1 right now. Oh well, at least the sub is thumpin'. Shhhh… the thin client is sleeping:

[Photo: DSC00478]

So what else?  As long as I’m throwing caution to the wind, I think it’s time to take the training wheels off VMware DPM (Distributed Power Management) and see what happens in a two node cluster.

[Screenshot: 2-15-2009 10-53-10 PM]

Based on the environment below, what do you think will happen? CPU load is very low; however, memory utilization is close to being overcommitted in a one-host scenario. Will DPM kick in?

[Screenshot: 2-15-2009 10-53-59 PM]

Most of my infrastructure at home is virtual including all components involving internet access both incoming and outgoing.  If the blog becomes unavailable for a while in the near future, I’ll give you one guess as to what happened.  🙂

No matter what the outcome, vmwarenews.de aka Roman Haug: you are no longer welcome to republish my blog articles. Flattering as it is, the fact that you never so much as asked in the first place has officially pissed me off. You publish my content as if it were your own, written by you, as indicated by the "by Roman" header preceding each duplicated post. Please remove my content from your site and refrain from syndicating my content going forward. Thank you in advance.

Update: Roman Haug has offered an apology and I believe we have reached an understanding.  Thank you Roman!

Great iSCSI info!

January 27th, 2009

I've been using Openfiler 2.2 iSCSI in the lab for a few years with great success as a means of shared storage. Shared storage with VMware ESX/ESXi (along with the necessary licensing) enables great things like VMotion, DRS, HA, etc. I've recently been kicking the tires on Openfiler 2.3 and have been anxious to implement it, partly due to its easy, menu-driven NIC bonding feature, which I wanted to leverage for maximum disk I/O throughput.

Coincidentally, just yesterday a few of the big brains in the storage industry got together and published what I consider one of the best blog entries in the known universe: Chad Sakac and David Black (EMC), Andy Banta (VMware), Vaughn Stewart (NetApp), Eric Schott (Dell/EqualLogic), and Adam Carter (HP/LeftHand) all collaborated on it.

One of the iSCSI topics they cover is link aggregation over Ethernet. I read and re-read this section with great interest. My current swiSCSI configuration in the lab consists of a single 1Gb VMkernel NIC (along with a redundant failover NIC) connected to a single 1Gb NIC in the Openfiler storage box, which presents a single iSCSI target with two LUNs. I've got more 1Gb NICs I could add to the Openfiler box, so my million dollar question was "will this increase performance?" The short answer is NO with my current configuration. Although the additional NIC in the Openfiler box would provide a level of hardware redundancy, due to the way the ESX 3.x software iSCSI initiator communicates with an iSCSI target, only a single Ethernet path will be used by ESX to communicate with the single target presenting both LUNs.

However, what I can do to add more iSCSI bandwidth is add the second Gb NIC to the Openfiler box along with an additional IP address, and then configure an additional iSCSI target so that each LUN is mapped to a separate target. Adding the additional NIC to the Openfiler box for hardware redundancy is a no-brainer, and I probably could have done that long ago, but as far as squeezing more performance out of my modest iSCSI hardware goes, I'm first going to perform some disk I/O testing to see whether the single Gb NIC is actually a disk I/O bottleneck. I may not have enough horsepower under the hood of the Openfiler box to warrant going through the steps of adding additional iSCSI targets and IP addresses.
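To picture the end state, here is a rough sketch of that two-target layout in ietd.conf form. Openfiler 2.3 drives the iSCSI Enterprise Target (IET) under the covers but normally manages all of this through its web GUI, so treat this as conceptual only; the IQNs, volume paths, and NIC assignments are made up for illustration.

    # Conceptual IET layout - IQNs and volume paths are hypothetical.
    # One LUN per target, so ESX opens a separate iSCSI session (and a
    # separate TCP connection) to each target.

    # Target 1, intended to be reached via the first Openfiler NIC/IP
    Target iqn.2006-01.com.openfiler:tsn.vmfs-lun0
            Lun 0 Path=/dev/vg0/vmfs_lun0,Type=blockio

    # Target 2, intended to be reached via the second Openfiler NIC/IP
    Target iqn.2006-01.com.openfiler:tsn.vmfs-lun1
            Lun 0 Path=/dev/vg0/vmfs_lun1,Type=blockio

On the ESX side, both target IPs would then go into the software initiator's dynamic discovery (SendTargets) list so that each LUN shows up behind its own target.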

A few of the keys I extracted from the blog post are as follows:

“The core thing to understand (and the bulk of our conversation – thank you Eric and David) is that 802.3ad/LACP surely aggregates physical links, but the mechanisms used to determine whether a given flow of information follows one link or another are critical.

Personally, I found this doc very clarifying: http://www.ieee802.org/3/hssg/public/apr07/frazier_01_0407.pdf

You’ll note several key things in this doc:

* All frames associated with a given “conversation” are transmitted on the same link to prevent mis-ordering of frames. So what is a “conversation”? A “conversation” is the TCP connection.
* The link selection for a conversation is usually done by doing a hash on the MAC addresses or IP addresses.
* There is a mechanism to “move a conversation” from one link to another (for load balancing), but the conversation stops on the first link before moving to the second.
* Link Aggregation achieves high utilization across multiple links when carrying multiple conversations, and is less efficient with a small number of conversations (and has no improved bandwidth with just one). While Link Aggregation is good, it’s not as efficient as a single faster link.”

“…the ESX 3.x software initiator really only works on a single TCP connection for each target – so all traffic to a single iSCSI target will use a single logical interface. Without extra design measures, it does limit the amount of IO available to each iSCSI target to roughly 120-160 MB/s of read and write access.”

“This design does not limit the total amount of I/O bandwidth available to an ESX host configured with multiple GbE links for iSCSI traffic (or more generally VMKernel traffic) connecting to multiple datastores across multiple iSCSI targets, but does for a single iSCSI target without taking extra steps.

Question 1: How do I configure MPIO (in this case, VMware NMP) and my iSCSI targets and LUNs to get the most optimal use of my network infrastructure? How do I scale that up?

Answer 1: Keep it simple. Use the ESX iSCSI software initiator. Use multiple iSCSI targets. Use MPIO at the ESX layer. Add Ethernet links and iSCSI targets to increase overall throughput. Set your expectation for no more than ~160MBps for a single iSCSI target.

Remember, an iSCSI session is from initiator to target. If you use multiple iSCSI targets with multiple IP addresses, you will use all the available links in aggregate, and the storage traffic in total will load balance relatively well. But any individual target will be limited to a maximum of a single GbE connection’s worth of bandwidth.

Remember that this also applies to all the LUNs behind that target. So, consider that as you distribute the LUNs appropriately among those targets.

The ESX initiator uses the same core method to get a list of targets from any iSCSI array (static configuration or dynamic discovery using the iSCSI SendTargets request) and then a list of LUNs behind that target (SCSI REPORT LUNS command).”

“Question 4: Do I use Link Aggregation and if so, how?

Answer 4: There are some reasons to use Link Aggregation, but increasing throughput to a single iSCSI target isn’t one of them in ESX 3.x.

What about Link Aggregation – shouldn’t that resolve the issue of not being able to drive more than a single GbE for each iSCSI target? In a word – NO. A TCP connection will have the same IP addresses and MAC addresses for the duration of the connection, and therefore the same hash result. This means that regardless of your link aggregation setup, in ESX 3.x, the network traffic from an ESX host to a single iSCSI target will always follow a single link.”
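To make the hashing point a little more concrete, here is a toy sketch of IP-hash link selection. This is not the exact hash that ESX or any particular switch uses (the function below is purely illustrative), but it shows why a flow with fixed source and destination addresses always lands on the same physical link:

    # Toy model of IP-hash link selection. NOT the exact hash that ESX
    # or any particular switch uses; it only illustrates that fixed
    # hash inputs always pick the same physical link.

    def pick_uplink(src_ip, dst_ip, num_links):
        """Hash a source/destination IP pair onto one of num_links uplinks."""
        last_octet = lambda ip: int(ip.split(".")[-1])
        return (last_octet(src_ip) ^ last_octet(dst_ip)) % num_links

    # One VMkernel IP talking to one iSCSI target IP: the inputs never
    # change, so every frame of that session follows the same link.
    print(pick_uplink("192.168.1.20", "192.168.1.10", 2))  # same uplink every time

    # A second target on its own IP can hash onto the other link, which
    # is why adding targets (not just links) adds usable bandwidth.
    print(pick_uplink("192.168.1.20", "192.168.1.11", 2))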

For swiSCSI users, they also mention some cool details about what’s coming in the next release of ESX/ESXi. Those looking for more iSCSI performance will want to pay attention. 10Gb Ethernet is also going to be a game changer, further threatening fibre channel SAN technologies.

I can’t stress enough how neat and informative this article is. To boot, technology experts from competing storage vendors pooled their knowledge for the greater good. That’s just awesome!

Make VirtualCenter highly available with VMware Virtual Infrastructure

November 17th, 2008

A few days ago I posted some information on how to make VirtualCenter highly available with Microsoft Cluster Services.

Monday Night Football kickoff is coming up, but I wanted to follow up quickly with another option (as suggested by Lane Leverett): deploy the VirtualCenter Management Server (VCMS) on a Windows VM hosted on a VMware Virtual Infrastructure cluster. Why is this a good option? Here are a few reasons:

  1. It’s fully supported by VMware.
  2. You probably already have a VI cluster in your environment you can leverage. Hit the ground running without spending the time to set up MSCS.
  3. Removing MSCS removes a third-party infrastructure complexity and dependency that requires an advanced skill set to support.
  4. Removing MSCS removes at least one Windows Server license cost, along with the need for the more expensive Windows Enterprise Server licensing and the special hardware required by MSCS.
  5. Green factor: Let VCMS leverage the use of VMware Distributed Power Management (DPM).

How does it work? It’s pretty simple. A virtualized VCMS shares the same advantages any other VM inherently has when running on a VMware cluster:

  1. Resource balancing of the four food groups (vProcessor, vRAM, vDisk, and vNIC) through VMware Distributed Resource Scheduler (DRS) technology
  2. Maximum uptime and quick recovery via VMware High Availability (HA) in the event of a VI host failure or isolation condition (yes, HA will still work if the VCMS is down. HA is a VI host agent)
  3. Maximum uptime and quick recovery via VMware High Availability (HA) in the event of a VMware Tools heartbeat failure (i.e., the guest OS croaks)
  4. Ability to perform host maintenance without downtime of the VCMS

A few things to watch out for (I’ve been there and done that, more than once):

  1. If you're going to virtualize the VCMS, be sure you do so on a cluster licensed for the options needed to support the benefits I outlined above (DRS, HA, etc.). This means VI Enterprise licensing is required (see the licensing/pricing chart on page 4 of this document). I don't want to hide the fact that a premium is paid for VI Enterprise licensing, but as I pointed out above, if you've already paid for it, the bolt-ons are unlimited use, so get more use out of them.
  2. If your VCMS (and Update Manager) database is located on the VCMS, be sure to size your virtual hardware appropriately. Don't go overboard, though. From a guest OS perspective, it's easier to grant additional virtual resources from the four food groups than it is to retract them.
  3. If you have a power outage and your entire cluster goes down (and your VCMS along with it), it can be difficult to get things back on their feet while you don't have the use of the VCMS, particularly if you've also lost other virtualized infrastructure components such as Microsoft Active Directory. Initially it's going to be command line city, so brush up on your CLI. It all depends on how bad the situation is once you get the VI hosts back up. One example I ran into: host A wouldn't come back up, and host B wasn't the registered owner of the VM I needed to bring up. That required running vmware-cmd to register the VM and bring it up on host B (a rough sketch follows right after this list).
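For reference, that recovery looked roughly like the following from the ESX 3.x service console. The datastore path and VM name are made up for illustration, so double check the exact syntax with vmware-cmd -h on your own host before leaning on it:

    # On the surviving host (host B), from the service console.
    # Datastore path and VM name below are hypothetical.

    # Register the orphaned VM's .vmx file with this host
    vmware-cmd -s register /vmfs/volumes/datastore1/dc01/dc01.vmx

    # Power the VM on
    vmware-cmd /vmfs/volumes/datastore1/dc01/dc01.vmx start

    # Verify it is running
    vmware-cmd /vmfs/volumes/datastore1/dc01/dc01.vmx getstate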

Well, I missed the first few minutes of Monday Night Football, but everyone who reads (tolerates) my ramblings is totally worth it.

Go forth and virtualize!