Posts Tagged ‘Microsoft’

How VMware virtualized Exchange 2007

January 8th, 2009

I often hear questions or concerns about virtualizing Exchange.  E-Oasis found a new VMware white paper and provides a nice lead-in explaining how VMware corporate took their physical Exchange servers and migrated them to virtual, reducing aggregate hardware usage.

One might ask why VMware’s Exchange servers were not virtualized before this, particularly when VMware was a smaller company with fewer mailboxes.  Perhaps they decided earlier versions of Exchange were not virtualization candidates?  Maybe limitations in earlier versions of ESX made it less than attractive?  I don’t know why, but it would have been cooler to see VMware put its money where its mouth is earlier on.  Perhaps someone from VMware can chime in with a comment here.

At any rate, it’s an absolutely beautiful white paper and I’m actually surprised at the level of detail some of the diagrams get into, providing network host names and IP addresses for the infrastructure.  I suppose they could be fictitious, but the names look rather authentic and not made up to me.  Kudos.

Take a look at VMware’s whitepaper here.

Guest blog entry: VMotion performance

January 5th, 2009

Good afternoon, VMware virtualization enthusiasts, and Hyper-V users for whom Microsoft has decided, on your behalf, that you have no need for hot migration as long as you have an intern and $50,000 cash.

Simon Long has shared with us this fantastic article he wrote regarding VMotion performance, more specifically, fine-tuning the number of concurrent VMotions allowed by vCenter.  This one is going in my document repository and tweaks ‘n’ tricks collection.  Thank you, Simon, and everyone please remember that virtualization is not best enjoyed in moderation!

Simon can be reached via email at contact (at) simonlong.co.uk as well as @SimonLong_ on Twitter.


I’ll set the scene a little….

I’m working late. I’ve just installed Update Manager and I’m going to run my first updates. As with any new system, I’m not entirely confident, so I decided “out of hours” would be the best time to try.

I hit “Remediate” on my first Host, then sat back, cup of tea in hand, and watched to see what would happen…. The Host’s VMs were slowly migrated off, 2 at a time, onto other Hosts.

“It’s gonna be a long night,” I thought to myself. So whilst I was going through my Hosts one at a time, I also fired up Google and tried to find out if there was any way I could speed up the VMotion process. There didn’t seem to be any articles or blog posts (that I could find) about improving VMotion performance, so I created a new Servicedesk job for myself to investigate this further.

3 months later, whilst at a product review at VMware UK, I was chatting with their Inside Systems Engineer, Chris Dye, and I asked him if there was a way of increasing the number of simultaneous VMotions from 2 to something more. He was unsure, so he did a little digging, managed to find a little info that might be helpful, and fired it across for me to test.

After a few hours of basic testing over the quiet Christmas period, I was able to increase the number of simultaneous VMotions… Happy days!!

But after some further testing, it seemed as though the number of simultaneous VMotions is actually set per Host. This means that if I set my vCenter Server to allow 6 VMotions and then place 2 Hosts into maintenance mode at the same time, there would actually be 12 VMotions running simultaneously. This is certainly something you should consider when deciding how many VMotions you would like running at once.

Here are the steps to increase the number of simultaneous VMotion migrations per Host.

1. RDP to your vCenter Server.
2. Locate vpxd.cfg (default location: “C:\Documents and Settings\All Users\Application Data\VMware\VMware VirtualCenter”).
3. Make a backup of vpxd.cfg before making any changes.
4. Edit the file using WordPad and insert the following lines between the <vpxd></vpxd> tags (see the sketch after these steps for where the block lands):

<ResourceManager>
<maxCostPerHost>12</maxCostPerHost>
</ResourceManager>

5. Now you need to decide what value to give “maxCostPerHost”.

A Cold Migration has a cost of 1, and a Hot Migration, aka VMotion, has a cost of 4. I first set mine to 12, as I wanted to see if it would now allow 3 VMotions at once. I now permanently have mine set to 24, which gives me 6 simultaneous VMotions per Host (6 × 4 = 24).

I am unsure of the maximum value that you can use here; the largest I tested was 24.

6. Save your changes and exit WordPad.
7. Restart the “VMware VirtualCenter Server” service to apply the changes.
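
To make the placement concrete, here is a minimal sketch of what the relevant fragment of vpxd.cfg might look like after the edit, plus a quick way to bounce the service from a command prompt instead of services.msc. The surrounding <config> root element and the vpxd short service name are my assumptions based on typical VirtualCenter installs rather than anything from Simon’s write-up, so verify both against your own file and services list first.

<config>
  <vpxd>
    <!-- ...existing settings unchanged... -->
    <ResourceManager>
      <!-- 24 / 4 (the cost of one VMotion) = 6 simultaneous VMotions per Host -->
      <maxCostPerHost>24</maxCostPerHost>
    </ResourceManager>
  </vpxd>
</config>

Restarting the service from the command line (assuming the short service name is vpxd):

net stop vpxd
net start vpxd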

Now that I knew how to change the number of simultaneous VMotions per Host, I decided to run some tests to see if it actually made any difference to overall VMotion performance.

I had 2 Hosts with 16 almost identical VMs. I created a job to migrate the 16 VMs from Host 1 to Host 2.

Each Host’s VMotion vmnic was a single 1Gbit NIC connected to a Cisco switch that also carried other network traffic.


The Network Performance graph above was recorded during my testing and displays the “Network Data Transmit” measurement on the VMotion vmnic. The 3 highlighted sections represent the following:

Section 1 – 16 VMs VMotioned from Host 1 to Host 2 using a maximum of 6 simultaneous VMotions.
Time taken = 3:30

Section 2 – This was not a test; I was simply migrating the VMs back onto the Host for the 2nd test (Section 3).

Section 3 – 16 VMs VMotioned from Host 1 to Host 2 using a maximum of 2 simultaneous VMotions.
Time taken = 6:36

Time difference = 3:06
3 minutes!! I wasn’t expecting it to be that much. Imagine if you had a 50 Host cluster… how much time would it save you?
I tried the same test again, but migrating only 6 VMs instead of 16.

Migrating off 6 VMs with only 2 simultaneous VMotions allowed.
Time taken = 2:24

Migrating off 6 VMs with 6 simultaneous VMotions allowed.
Time taken = 1:54

Time difference = 30 seconds

It’s still an improvement, albeit not as big.

Now don’t get me wrong, these tests are hardly scientific and would never be deemed completely fair tests, but I think you get the general idea of what I was getting at.

I’m hoping to explore VMotion performance further, perhaps by using multiple physical NICs for VMotion and teaming them with EtherChannel, or maybe even by using 10Gbit Ethernet. Right now I don’t have the spare hardware to do that, but it is definitely something I will try when the opportunity arises.

Update 4/5/11: Limit Concurrent vMotions in vSphere 4.1 by Elias Khnaser.

Update 10/3/12:  Changes to vMotion in vSphere 4.1 per VMware KB 1022851:

In vSphere 4.1:
  • Migration with vMotion and DRS for virtual machines configured with USB device passthrough from an ESX/ESXi host is supported
  • Fault Tolerant (FT) protected virtual machines can now vMotion via DRS. However, Storage vMotion is unsupported at this time.
    Note: Ensure that the ESX hosts are at the same version and build.
In addition to the above, vSphere 4.1 has improved vMotion performance and allows:
  • 4 concurrent vMotion operations per host on a 1Gb/s network
  • 8 concurrent vMotion operations per host on a 10Gb/s network
  • 128 concurrent vMotion operations per VMFS datastore

Note: Concurrent vMotion operations are currently supported only when the source and destination hosts are in the same cluster. For further information, see the Configuration Maximums for VMware vSphere 4.1 document.

The vSphere 4.1 configuration maximums above remain true for vSphere 5.x.  Enhanced vMotion operations introduced in vSphere 5.1 also count against the vMotion maximums above, as well as the Storage vMotion configuration maximums (8 concurrent Storage vMotions per datastore, 2 concurrent Storage vMotions per host, and 8 concurrent non-vMotion provisioning operations per host).  Eric Sloof does a good job of explaining that here.

Introducing: IT Knowledge Exchange/TechTarget

December 18th, 2008

Have you seen TechTarget’s IT Knowledge Exchange? If you are an IT staff member in search of answers or excellent technical blogs, ITKE is one site you’ll want to bookmark. Their award-winning editorial staff includes virtualization bloggers such as Eric Siebert, David Davis, prolific VirtualCenter plugin writer Andrew Kutz, Rick Vanover, Edward Haletky, and many more.

Search or browse by hundreds of tags covering hot IT topics such as Database, Exchange, Lotus Domino, Microsoft Windows, Security, Virtualization, etc.

Their value proposition is simple: provide IT professionals and executives with the information they need to perform their jobs—from developing strategy, to making cost-effective IT purchase decisions and managing their organizations’ IT projects.

One month ago, brianmadden.com was purchased by TechTarget. I think this addition will be a nice shot in the arm for ITKE. In one transaction, TechTarget gains an established, rich Citrix/Terminal Services/virtualization knowledgebase and a talented staff of bloggers that it can in turn use to help its readers and advertising clientele.

TechTarget has over 600 employees, was founded in 1999, and went public in May 2007 via a $100M IPO.


Access a CD/DVD from the ESX console

December 17th, 2008

If by chance you need to access the CD/DVD-ROM tray on your ESX host from the service console (COS), it is not as straightforward as clicking on the cup holder icon under “My Computer”.  The media needs to be mounted in the RHEL-based service console operating system first.  This blog entry explains how.

1.  Determine which device represents the tray holding the media you want to mount using the command ll /dev | grep cdrom. In this case, on a Dell PER900, I see two CD/DVD-ROM instances.  /dev/hda represents the physical tray on the ESX host.  /dev/scd0 represents the virtual .iso media connected via the DRAC:

[Screenshot: output of ll /dev | grep cdrom showing /dev/hda and /dev/scd0]

2.  I want to mount the virtual .iso media represented by /dev/scd0.  The command is mount /dev/scd0 /mnt/cdrom.  As seen in the following example, once I have mounted the device, the CD/DVD media is accessible at the /mnt/cdrom location.  In this case, it’s a Windows Server 2003 CD.  Why would I want to stick a Windows CD in an ESX host?  Perhaps I’d like to create an .iso image to be stored on a VMFS volume using the command dd if=/dev/scd0 of=/vmfs/volumes/vmfs_storage1/win2k3.iso (note that dd reads from the raw device rather than the mount point):

[Screenshot: mounting /dev/scd0 at /mnt/cdrom and listing the Windows Server 2003 CD contents]

3.  When finished, don’t forget to unmount the media (a consolidated version of the whole session appears after these steps).  The command for this is umount /mnt/cdrom.  Note that the media cannot be unmounted while someone or something is accessing the media’s directory structure, as indicated by the “device is busy” error message on the first unmount attempt:

[Screenshot: a failed umount /mnt/cdrom reporting “device is busy”, followed by a successful unmount]
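
For convenience, here is the whole session sketched in one place as it might look from the COS.  The device name /dev/scd0 and the volume name vmfs_storage1 come from the examples above and will differ in your environment; the mkdir and fuser lines are my additions, assuming /mnt/cdrom may not exist on your build and that fuser is available in the RHEL-based COS.

# 1. Identify which device holds the media (here, the DRAC virtual media)
ll /dev | grep cdrom

# 2. Mount it; create the mount point first if it doesn't already exist
mkdir -p /mnt/cdrom
mount /dev/scd0 /mnt/cdrom

# Optional: capture the media to an .iso on a VMFS volume (reads the raw device)
dd if=/dev/scd0 of=/vmfs/volumes/vmfs_storage1/win2k3.iso

# 3. Unmount when finished; if you hit "device is busy", see what's holding it
fuser -mv /mnt/cdrom
umount /mnt/cdrom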

Microsoft Hyper-V customers to expect upcoming downtime

December 17th, 2008

This morning Microsoft issued an out-of-band security bulletin rated Critical which impacts Microsoft Hyper-V virtualized environments (and their respective running VMs) hosted on a Windows platform running any version of Internet Explorer.  The critical vulnerability is Remote Code Execution.  The bulletin advises that a reboot of the host may be required, which is Microsoft lingo for “you can count on a reboot”; they just don’t want to be nailed down to saying as much.  With some companies in their official year-end freeze period, where no changes other than emergencies are allowed, there is no doubt this vulnerability comes at an inconvenient time, leaving many IT skeleton crews scrambling.

VMware ESX/ESXi hosts are not directly impacted by the vulnerability and may continue running business as usual.  Those who are running VMware VirtualCenter on Microsoft Windows will likely require a reboot of the Windows host; however, this does not impact running VMs or ESX/ESXi hosts.

A great disturbance in the Force

December 15th, 2008

Today I felt a great disturbance in the Force, as if millions of voices cried out in terror.  Mohamed Fawzi of the blog Zeros & Ones posted a VMware vs Hyper-V comparison that I felt was neither fair nor truthful.  In fact, I think it is the worst bit of journalism I’ve witnessed in quite a while, and even in the face of the VMworld 2008/Microsoft Hyper-V poker chip fiasco, I don’t know if Microsoft would even endorse this tripe.

I didn’t have a lot of time today for a rebuttal, so following are my brief responses:

Cost: It is impossible to summarize the cost of a product (and its TCO) in one short sentence as you have done.

Support: VMware was the first virtualization company to be listed on the Microsoft SVVP program.  Enough said about that.  If you want to talk about Linux, VMware supports many distros; Hyper-V, last time I checked, supports one.

Hardware Requirements: No comparison.  Microsoft does not have VMotion/hot migration or anything similar.  New server “farms” are not necessarily needed, although a rolling upgrade can be performed using Enhanced VMotion Compatibility, where the majority of the technology that allows this comes from the processor hardware vendors.

Advanced Memory Management: Content-based page sharing is a proven technology that I use in a production environment with no performance impact.  Microsoft does not have this technology and therefore forces its customers to achieve higher consolidation ratios by spending more money on RAM than would be needed in a VMware datacenter.  Other memory overcommit technologies, such as ballooning and swapping, come with varying levels of penalty, and VMware offers customers the flexibility to decide what they would like to do in these areas.  Microsoft offers no flexibility or choices.
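
As an aside, readers who would rather see page sharing at work on their own hosts than take my word for it can check the esxtop memory screen from the service console.  A quick sketch (field names may vary slightly between ESX versions):

esxtop          # press 'm' to switch to the memory screen
# Look for the PSHARE/MB line: it reports shared, common, and saving values,
# where "saving" is the machine memory reclaimed by page sharing.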

Hypervisor: ESXi embedded is 32MB.  ESXi installable is about 1GB.  Hyper-V’s comparable products, once installed, are 1GB and in the 4-10GB neighborhood.  Your point about the Hyper-V hypervisor being 872KB, whether true or not, bears no relevance for comparison purposes.

Drivers Support: VMware maintains tight control, which fosters platform stability.  Installation of arbitrary third-party drivers and software adds instability, support costs, and downtime.

Processor Support: False.  ESX/ESXi operates on both 32-bit x86 and 64-bit x64 processors.  Current third-party, vendor-neutral performance benchmarking between ESX and Hyper-V shows no performance degradation in ESX compared to Hyper-V as a result of address translation or otherwise.  A more truthful headline here would be that Hyper-V isn’t compatible with 32-bit hardware.  Why didn’t you mention this in your Hardware Requirements section?

Application Support: I don’t see any Windows support issues.  Again I remind you, VMware is certified under the Microsoft SVVP program.  Another comparison is made with a particular VMotion restriction; I’ll grant you that one if you admit Microsoft has no VMotion or hot migration at all.

Product Hypervisor Technology: We already covered this in the Drivers Support section.

Epic virtualization and storage blogger Scott Lowe provides his responses here.

Mohamed Fawzi, while it is nice to meet you, it is unfortunate that we met under these terms.  Having just discovered your blog today, I hope you don’t mind if I take a look at some of your other material as it looks like you’ve been at the blogging for a while (much longer than I).  I hope to find some good and interesting reads.

WordPress 2.7 has been released

December 11th, 2008

It’s finally here.  Don’t get me wrong, I haven’t been waiting on pins and needles for this release.  I’m happy with the WordPress 2.6.5 version I’m on now but maybe once I see the new features in 2.7 I’ll get more excited about it.  At any rate, I’ll be proceeding with much caution.  Probably not for at least a few weeks.  Much like a Microsoft Windows service pack, I’ll let other early adopters find out the joys first, then I’ll stand on the shoulders of their learning and success and avoid the pitfalls myself.  My concerns are with the dozen or more plugins/widgets I use in addition to my blog theme.  If you have any experience or hear any sort of news good/bad/ugly, please share the knowledge.  Comments always welcome here (as long as they are not spam).