Microsoft Hyper-V customers to expect upcoming downtime

December 17th, 2008 by jason

This morning Microsoft issued an out-of-band security bulletin, rated Critical, which impacts Microsoft Hyper-V virtualized environments (and their respective running VMs) hosted on a Windows platform running any version of Internet Explorer. The critical vulnerability is Remote Code Execution. The bulletin advises that a reboot of the host may be required, which is Microsoft lingo for "you can count on a reboot"; they just don't want to be pinned down to saying so. With some companies in their official year-end freeze period, where no changes other than emergency fixes are allowed, there is no doubt this vulnerability comes at an inconvenient time, leaving many IT skeleton crews scrambling.

VMware ESX/ESXi hosts are not directly impacted by the vulnerability and may continue running business as usual. Those who are running VMware VirtualCenter on Microsoft Windows will likely require a reboot of the Windows host; however, this does not impact running VMs or ESX/ESXi hosts.

A great disturbance in the Force

December 15th, 2008 by jason

Today I felt a great disturbance in the Force, as if millions of voices cried out in terror. Mohamed Fawzi of the blog Zeros & Ones posted a VMware vs. Hyper-V comparison that I felt was neither fair nor truthful. In fact, I think it is the worst bit of journalism I've witnessed in quite a while, and even in the face of the VMworld 2008/Microsoft Hyper-V poker chip fiasco, I don't know if Microsoft would even endorse this tripe.

I didn't have a lot of time today for a rebuttal, so my brief responses follow:

Cost: It is impossible to summarize the cost of a product (and its TCO) in one short sentence as you have done.

Support: VMware was the first virtualization company to be listed on the Microsoft SVVP program. Enough said about that. If you want to talk about Linux, VMware supports many distros; Hyper-V, last time I checked, supports one.

Hardware Requirements: No comparison. Microsoft does not have VMotion/hot migration or anything similar. New server "farms" are not necessarily needed; a rolling upgrade can be performed using Enhanced VMotion Compatibility, where the majority of the enabling technology comes from the processor hardware vendors.

Advanced Memory Management: Content-based page sharing is a proven technology that I use in a production environment with no performance impact. Microsoft does not have this technology and therefore forces its customers to achieve higher consolidation ratios by spending more money on RAM than would be needed in a VMware datacenter. Other memory overcommit technologies such as ballooning and swapping come with varying levels of penalty, and VMware gives the customer the flexibility to decide what they would like to do in these areas. Microsoft offers no flexibility or choices.

Hypervisor: ESXi embedded is 32MB. ESXi installable is about 1GB. Hyper-V's comparable products, once installed, are 1GB and in the 4-10GB neighborhood. Your point about the Hyper-V hypervisor being 872KB, whether true or not, has no relevance for comparison purposes.

Drivers Support: VMware maintains tight control, which fosters platform stability. Installation of XYZ drivers and software adds instability, support costs, and downtime.

Processor Support: False. ESX/ESXi operates on both 32-bit x86 and 64-bit x64 processors. Current third-party, vendor-neutral performance benchmarking between ESX and Hyper-V shows no performance degradation in ESX compared to Hyper-V as a result of address translation or otherwise. A more truthful headline to be exposed here is that Hyper-V isn't compatible with 32-bit hardware. Why didn't you mention this in your Hardware Requirements section?

Application Support: I don't see any Windows support issues. Again, I remind you: VMware is certified on the Microsoft SVVP program. Another comparison is made with a particular VMotion restriction. I'll grant you that one if you admit Microsoft has no VMotion or hot migration at all.

Product Hypervisor Technology: We already covered this in the Drivers Support section.

Epic virtualization and storage blogger Scott Lowe provides his responses here.

Mohamed Fawzi, while it is nice to meet you, it is unfortunate that we met under these terms. Having just discovered your blog today, I hope you don't mind if I take a look at some of your other material, as it looks like you've been blogging for a while (much longer than I have). I hope to find some good and interesting reads.

Veeam to release more (free) virtual management goodness

December 15th, 2008 by jason

VI management software company Veeam is poised to release a free tool on Monday 12/22/08. Register for your free copy at Veeam's website and avoid the last-minute holiday shopping rush by following this link. The only detail Veeam gives us is that if we liked FastSCP, we'll love what's coming next. I like their FastSCP tool, so I'm totally excited!

MEPS (my ESX partitioning scheme)

December 15th, 2008 by jason

Here is a topic that has been discussed in great depth on the VMTN forums over the years, but Roger Lund asked if I would post my ESX partitioning scheme. Here it is, with a bit of the reasoning I've learned along the way. Enjoy!

Create the following partitions in the following order:

/boot: ext3, 250MB, primary partition. The default from VMware is 97MB. When we migrated from ESX 2.x to ESX 3.x, the partition size grew from 50MB to nearly 100MB. I came up with 250MB to leave breathing room for future versions of ESX which may need an even larger /boot partition. This is all a moot point because I don't do in-place upgrades; I rebuild with new versions.

<swap>: 1600MB, primary partition. Twice the maximum amount of allocatable service console memory. My COS memory allocation is 500MB, but if I ever increase COS memory to the 800MB max in the future, I've already got enough swap for it without having to rebuild the box to repartition.

/: ext3, 4096MB, primary partition. The default from VMware is 3.7GB. We want plenty of space for this mount point so that we do not suffer the serious consequences of running out.

/home: ext3, 4096MB. Not really needed anymore; by default VMware no longer creates this partition. For me it is just a carryover from the old ESX days, and disk space is fairly cheap (unless booting from SAN). I'll put this and other custom partitioning out to pasture when I convert to ESXi, where we are force fed VMware's recommended partitioning.

/tmp: ext3, 4096MB. By default VMware does not create this partition; it simply creates the /tmp folder under the / mount point, which is not a great idea. VMware uses a small portion of /tmp for the installation of the VirtualCenter agent, but my philosophy is that we should have plenty of sandbox space in /tmp for the unpacking/untarring of 3rd party utilities such as HP Systems Insight Manager agents.

/var: ext3, 4096MB. The default from VMware is 1.1GB, and VMware additionally makes the mount point /var/log, isolating that partition strictly for VMware logs. We want plenty of space for this mount point so that we do not suffer the serious consequences of running out. In addition, we want this to be a separate mount point so that it cannot put the / mount point at risk by consuming its file system space. VMware logs and other goodies are stored on this mount point.

<vmkcore>: 110MB. The default from VMware is 100MB. I got the 110MB recommendation from Ron Oglesby in his RapidApp ESX Server 3.0 Quick Start Guide (this book is a gem by the way; my copy is worn down to the nub). Although I asked Ron, he never explained where he came up with 110MB, but let's just assume the extra 10MB is cushion "just in case". This is the VMkernel core dump partition. The best case scenario is that you rarely if ever have a use for this partition, although it is required by ESX whether it's ever used by a purple screen of death (PSOD) or not.
Leave the remaining space unpartitioned.  This can be partitioned as VMFS-3 later using VirtualCenter for maximum block alignment.
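For scripted builds, here is a rough sketch of how the scheme above might translate into the partitioning section of an ESX 3.x kickstart file. Treat it as an illustration only: the --ondisk device name (sda) is a placeholder, and the exact directive syntax should be verified against the scripted installation documentation for your ESX version.

# Sketch of the above scheme for an ESX 3.x scripted install (kickstart).
# The disk device (sda) is a placeholder; verify directive syntax for your build.
part /boot --fstype ext3 --size 250 --asprimary --ondisk=sda
part swap --size 1600 --asprimary --ondisk=sda
part / --fstype ext3 --size 4096 --asprimary --ondisk=sda
part /home --fstype ext3 --size 4096 --ondisk=sda
part /tmp --fstype ext3 --size 4096 --ondisk=sda
part /var --fstype ext3 --size 4096 --ondisk=sda
part None --fstype vmkcore --size 110 --ondisk=sda
# Remaining space intentionally left unpartitioned; create the VMFS-3
# volume later from VirtualCenter for proper block alignment.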

Not sure how your current ESX partitions are configured? Log on to the service console (COS) and run the command vdf -h (a VMFS-aware version of df).

Update: The partitioning scheme above has been superseded by a new blog entry on 1/13/09, in which /opt was added. Here is the link to that post.

hgfs registry value causes issues with Terminal Services VMs

December 13th, 2008 by jason

I originally brought this up back in October with my Tip for virtualizing Citrix servers involving user profiles post. I'm bringing it up again because this week VMware updated their knowledgebase document 1317 Windows Guest Cannot Update hgfs.dat, and it is missing a key piece of information that administrators need to be aware of. I'm not going to rehash the whole hgfs registry value again; you can read the details in my October post linked above. The workarounds for hgfs issues caused by VMware Tools do work; however, what's not mentioned is that a re-installation or upgrade of VMware Tools will put the hgfs value back in the registry, reintroducing the problems. With the number of ESX/ESXi version upgrades coming from VMware lately, which in turn cause VMware Tools upgrades, this scenario is not going to be uncommon for anyone who is virtualizing Terminal Services or Citrix. On top of that, VMware even recently released an interim VMware Tools upgrade patch subsequent to ESX 3.5.0 Update 3 (ESXe350-200811401-T-BG).

It should be noted that the hgfs registry value is associated with VMware shared folders technology (not used with ESX/ESXi) and only gets installed during a Complete installation type; a Typical installation type will not install the hgfs registry value. I perform Complete installations of VMware Tools because I make use of the VMware Descheduled Time Accounting Service. My virtualized Citrix servers have been impacted by this twice: the first time when I originally rolled out the virtualized Citrix servers, and the second time a few months later when I discovered hgfs had been installed again after a VMware Tools upgrade. I've asked VMware to update the hgfs-related KB articles with the piece about VMware Tools upgrades. As I pointed out in my October article, one of the nasty side effects of the hgfs value on Terminal Services VMs is the constant growth of the user profile folders under \Documents and Settings\. Left undiscovered for a while, it becomes a pretty big mess, and the speed at which the ugliness infiltrates \Documents and Settings\ is compounded by the number of Terminal Services users logging on to the server throughout the day, every day.
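If you want to catch the value creeping back in after a Tools upgrade, a quick registry check can be scripted. The snippet below is only a sketch: it assumes the hgfs provider shows up in the ProviderOrder value under HKLM\SYSTEM\CurrentControlSet\Control\NetworkProvider\Order, as described in my October post, so verify the exact location on your own systems before acting on it.

rem Sketch: check whether the hgfs network provider reappeared after a
rem VMware Tools upgrade (location per the October post; verify first).
reg query "HKLM\SYSTEM\CurrentControlSet\Control\NetworkProvider\Order" /v ProviderOrder | findstr /i hgfs
if not errorlevel 1 echo hgfs provider found - revisit the KB 1317 workaround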

New VMware VI network port diagram request for comments

December 12th, 2008 by jason

A quick update I've been meaning to post for a few weeks now; sorry for the delay. I received a new network diagram that reader Shlomo Rivkin has been working on, and he would like some community input on it. Here's the new version being submitted for discussion:

[Diagram: Shlomo Rivkin's proposed VMware VI network port diagram]

The high res version of the above diagram is here.

Feel free to compare and contrast it to the version below which is posted on my blog as well as the VMware VMTN communities:

[Diagram: vmware_network_ports]

The high res version of the above diagram is here.

Sorry for the shortness of this post – heading to a parade with my family.

Update 6/28/13: VMware has added VMware KB 2054806 Network port diagram for vSphere 5.x, which provides an updated port diagram and detailed port information pertaining to vSphere 5.x.


WorkBay chairs

December 12th, 2008 by jason

I'm not sure exactly where the fine line is drawn between productivity and antisocial behavior, but this is a nifty idea. For the office, of course. Not for home use on family members (i.e., wives) 🙂 Reminds me of those chairs Will Smith used in the movie Men in Black. Thanks for the heads up, @davikes.