Archive for May, 2009

VMware VCP # 1

May 28th, 2009

Left to Right: VCP#1, VCP#2712

This is VMware history.

vSphere Has Arrived

May 21st, 2009

It has been a long wait, but last night (and to my surprise) vSphere was finally released, and from what I’ve seen so far it was well worth the wait. Not that VI3 isn’t a great product, but the new features vSphere boasts are absolutely amazing. Whereas VI3 put any semblance of competition to shame, vSphere totally and completely annihilates it.

With the vSphere NDA embargo lifted a while back for bloggers, there has already been plenty of coverage on most of the new features so I’m not going to go into each of them in great detail here. I’ll just touch on a few things that have caught my attention. There is plenty more to digest on other blogs and of course VMware’s site.

First of all, let me get this out of the way: By far the best and most complete collection of vSphere resources on the internet can be found at Eric Siebert’s vSphere-land site. If you can’t find what you’re looking for there, it doesn’t exist.

Now, a few of my favorite and notable observations thus far:

  • The What’s New in vSphere 4.0 page – This is the list of major new features in vSphere. Note there are approximately 150 new features in vSphere in all; this is the list of the major notable ones worth highlighting:
    • One feature which was news to me and which I hadn’t seen during the private beta is Virtual Machine Performance Counters Integration into Perfmon, which seems to have replaced the short-lived, ‘never made it out of experimental support’ VMware Tools Descheduler Service (a sketch of querying the new counters from inside a guest follows this list). “vSphere 4.0 introduces the integration of virtual machine performance counters such as CPU and memory into Perfmon for Microsoft Windows guest operating systems when VMware Tools is installed. With this feature, virtual machine owners can do accurate performance analysis within the guest operating system. See the vSphere Client Online Help.”
    • New CLI commands: vicfg-dns, vicfg-ntp, vicfg-user, vmware-cmd, and vicfg-iscsi (a quick usage sketch also follows this list)
    • There appears to be no end in sight for product name changes. VIMA has become vMA. It’s still 64-bit only as far as I know.
    • It’s official, and Rick Vanover reported it first in Virtualization Review magazine: Storage VMotion has been renamed Enhanced Storage VMotion, particularly in the context of changing disk formats hot on the fly (i.e. full to thin provisioned). Not to be confused with Enhanced VMotion Compatibility (EVC), which is a completely different feature – I predict a lot of people will confuse these two technologies, interchanging one for the other.
  • The Upgrade Guide – Easy but critically important reading. A few things that I quickly pulled out of this document that are worth noting:
    • SQL2000 is not a supported database platform for vCenter. SQL2008 is on the supported list. Good job VMware. Some folks may remember it taking an inconveniently long time to get SQL2005 on the supported database list when VI3 was released.
    • Another vCenter database detail I caught: during an upgrade, DBO must be granted on both MSDB and the vCenter database, whereas with VI3 DBO was only needed on MSDB and you didn’t dare grant DBO to the vCenter database or you ended up with new database tables and an empty datacenter.
    • Quickly summarized, the VM upgrade path is: upgrade VMware Tools, shut down the VM, upgrade the virtual hardware to version 7, power back on. No VMFS datastore upgrades to worry about.
    • The 2.5 VIC and the vSphere Client can be installed simultaneously on the same machine, and this is a supported configuration. This will be very helpful for customers straddling both VI environments during their transition. I’ve got a blog entry coming up soon on ThinApp’ing the client which will provide yet another client installation option.
  • Configuration Maximums for VMware vSphere 4.0 – Ahh once again my most favorite VMware document of them all. Look at some of these insanely scalable supported configurations:
    • 8 vCPUs in a VM
    • 255GB RAM in a VM
    • IDE drive support in a VM
    • 10 vNICs in a VM
    • 512 vCPUs per host
    • 320 running VMs on a host
    • 64 logical CPUs in a host
    • 20 vCPUs per core
    • 1TB RAM in a host
    • 4,096 virtual switch ports in a host
    • These are just a few that I hand picked. We’re looking at serious consolidation ratio possibilities here!
  • Systems Compatibility Guide – This is the offline version of the vSphere HCL. OK, in case you have been living under a rock, vSphere is 64-bit only. You’ll want to make sure your hardware is compatible with vSphere. I won’t beat around the bush here – a lot of hardware that was supported by VI3 has dropped off the list (even much of the 64-bit hardware). If you don’t have the required hardware now, plan your 2010 budget accordingly. As a point of interest, I found it odd that the HP DL385 G2 and G5 are on the HCL, but the G3 and G4 are missing. Pay close attention, particularly if you plan to utilize FT, as that feature carries with it its own set of strict requirements.
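
Speaking of the Perfmon integration mentioned above, here is a minimal sketch of pulling the new guest-visible counters from a Windows command prompt with typeperf. The “VM Memory” and “VM Processor” object and counter paths below are assumptions from memory, so list what your build actually exposes before relying on them.

    rem List the Perfmon objects/counters VMware Tools exposes inside the guest
    rem (object names are assumptions - verify with the -q output first)
    typeperf -q "VM Memory"
    typeperf -q "VM Processor"

    rem Sample two of the counters every 5 seconds, 12 samples total
    typeperf "\VM Memory\Memory Active in MB" "\VM Processor(_Total)\% Processor Time" -si 5 -sc 12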
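
As for the new vicfg-* commands, here is a hedged sketch of what a couple of them look like when run from the vMA or the vSphere CLI. The hostnames are placeholders and the option names are what I recall from the beta documentation, so verify them against the vSphere Command-Line Interface reference first.

    # Point an ESX host at new DNS servers and a search domain
    # (esx01.lab.local is a placeholder; --dns/--domain are assumed option names)
    vicfg-dns --server esx01.lab.local --username root --dns 10.0.0.53,10.0.0.54 --domain lab.local

    # Add an NTP source to the same host, then list the configured servers
    vicfg-ntp --server esx01.lab.local --username root --add 0.pool.ntp.org
    vicfg-ntp --server esx01.lab.local --username root --list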

There are boatloads of new goodies in vSphere. It’s going to be around for a long time, so take your time to learn it. No need to rush or be the first datacenter to run vSphere for bragging rights. Watch the blogs and the bookstores. There will be new vSphere content gushing from all angles for many months and even years to come. Be sure to share your findings with the VMware virtual community. Collaboration and networking make us strong and successful.

Lab Manager “Valid NIC Requirement” prerequisite check fails

May 17th, 2009

If you’re installing Lab Manager 3.x and the Valid NIC Requirement prerequisite check fails, verify your Lab Manager server has a static IP address configuration and not a configuration that is assigned by DHCP.
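
If the server was built with a DHCP lease, you can confirm and correct that from a command prompt on the Lab Manager server before re-running the installer. A quick sketch, where the connection name, addresses, and gateway are placeholders for your environment:

    rem Confirm whether the NIC is currently configured by DHCP
    ipconfig /all

    rem Assign a static address, subnet mask, default gateway (metric 1), and DNS server
    netsh interface ip set address "Local Area Connection" static 192.168.10.25 255.255.255.0 192.168.10.1 1
    netsh interface ip set dns "Local Area Connection" static 192.168.10.10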

For other Lab Manager requirements, be sure to check out the Installation and Upgrade Guide.

VI Administration – Even From A Galaxy Far, Far Away

May 16th, 2009

Introducing Hyper9 Virtualization Mobile Manager™ Beta

Mobile Monitoring and Support
Hyper9 knows VI administrators. We understand their challenges, which is why we’ve removed so much complexity, risk and cost from virtualization management. But we also know about something else VI admins need – freedom. Freedom from the infrastructure. Freedom to leave at the end of the day without concern. Freedom to take lunch, or even a vacation, without looking back. It’s why we created Virtualization Mobile Manager (VMM).

What’s New in VMM?

VMM offers administrators remote monitoring and support – browser-based management that works on a wide variety of mobile devices. VMM now supports VMware, Xen and Hyper-V, and it isn’t bogged down by useless features and complexity. It’s affordable, easy to use, and workable even on simple cell phones.

Developed by virtualization infrastructure expert Andrew Kutz, VMM enables remote network control, extended scalability and multi-platform support, all within a mobile display designed for optimal efficiency. It’s all part of our commitment to providing VI administrators with the tools they need to work smarter and take action right away.

Take Me To My People Program
The VMM beta is available to the entire VI admin universe today – but as usual, true believers get something extra. The first 15 to sign up through boche.net will receive the following perks:

  • 50% off our already low pricing
  • Automatic entry into Win a Mobile Device contest, beginning in June
  • In exchange for a little feedback – a limited edition Hyper9 T-shirt

VMM Highlights?
There’s no question that VMM is the ticket to higher intelligence for VI administrators.
Here are some of the details that just may make a believer out of you:

Features

  • Monitor on the Go
  • Supports all major hypervisors
  • Runs on Windows, Linux and OS X
  • Accessible via a Web Browser

Mobile

  • Monitor Host and VM Performance Statistics (CPU, Memory)
  • Control VMs and Take Action On the Go (start, stop, pause, reset, disable network)
  • Optimized for Mobile Devices (Apple iPhone, Blackberry, Google Android and Windows Mobile devices)

Supported Hypervisors

  • VMware Server 2
  • VMware Infrastructure 3.5 Hosts (VMware ESX 3.5, ESXi 3.5, VirtualCenter 2.5)
  • Microsoft Hyper-V
  • Citrix XenServer 5

Supported Platforms

  • VMM is hosted as an Apache Tomcat web application
  • Windows, Linux and OS X

Supported Modes

For more information, to join the beta and download the product, please visit this link. Be sure to tell them boche.net – VMware Virtualization Evangelist sent you. Remember, only the first 15 to register are eligible for the Take Me To My People Program benefits.

If you’d prefer, you can send me your information via email (your name and email address) and I will connect you with a Hyper9 representative so that you may take advantage of this limited time offer.

Lab Manager Network Port Requirements

May 13th, 2009

I need to become a VMware Lab Manager expert and so it begins. From what I’ve seen so far, Lab Manager 3.x has made great progress since I last kicked the tires 15 months ago on Lab Manager 2.x. The biggest news by far is that ESX hosts can be managed both by Lab Manager Server and vCenter Server with all the fixins (DRS, HA, VMotion), although I’ve already found that VMs connected to an internal-only vSwitch remain pinned to the host due to VMotion rules.

Nothing too Earth shattering here; this information comes straight from page 20 of the Lab Manager Installation and Upgrade Guide.

Systems | TCP Port | UDP Port
Client browser to access Lab Manager Server system | 443 |
Client browser to access ESX hosts | 902, 903 |
Lab Manager Server system and ESX hosts to access SMB share (import and export operations only) | 139, 445 | 137, 138
ESX hosts to access NFS media datastores or NFS virtual machine datastores | 2049 |
Lab Manager Server system to access Lab Manager agent on ESX hosts | 5212 |
Lab Manager Server system to access ESX host agent on ESX hosts | 443 |
Lab Manager Server system to access the VirtualCenter Server system | 443 |
Lab Manager Server system to communicate with virtual router on some ESX hosts (for fenced configurations) | 514 |
Lab Manager Server system to access LDAP Server | 389 (LDAP), 636 (LDAPS) |

Before the installation of Lab Manager, be sure that the ports above won’t conflict with an existing configuration by running the netstat -b command from the Windows command line.
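
For example, a quick way to run that check on the Lab Manager server (from an administrator command prompt, since -b needs elevated rights; the port list mirrors the table above):

    rem -a all listeners/connections, -n numeric, -o owning PID, -b owning executable
    netstat -a -n -o -b > ports.txt

    rem Look for anything already bound to the Lab Manager ports
    findstr ":443 :5212 :902 :903" ports.txt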

Celebrity Twitter Overkill

May 13th, 2009

I linked to a hilarious Twitter video a while back. Here’s another installment which focuses on celebrity Twitterers. Not quite as good as the first one IMO, but still a must see:

vSphere Memory Hot Add/CPU Hot Plug

May 10th, 2009

I’ve been experimenting with vSphere’s memory hot add and CPU hot plug features to determine their usefulness with Windows Server operating systems. I came up with mixed results depending on the version and architecture of the OS. (The per-VM settings that enable these features are sketched after the notes below.)

A few notes about the results:

  1. Memory hot remove is not supported at all by vSphere. It’s not an option no matter what the guest OS.
  2. Although virtual hardware can be hot added depending on the OS, there are caveats in certain cases
    1. A guest reboot may be required (this is outlined in the table below).
    2. Memory that is hot added to guests that support hot add without a reboot will result in 100% sustained CPU utilization in the guest OS for a variable period of time that depends on the amount of memory added. In my testing (and keep in mind your mileage may vary on different hardware):
      1. 1GB of RAM hot added resulted in 100% CPU for 1-3 seconds.
      2. 3GB of RAM hot added resulted in 100% CPU for about 10 seconds.
  3. CPU hot unplug is supported by vSphere but was not supported by any of the Windows operating systems that I tested.
  4. Going from 1 vCPU to 2 vCPUs in Windows 2008 guest operating systems did not result in a HAL change. From what I can tell, Windows 2008 uses the same HAL for uniprocessor and SMP. When a vCPU is hot added, it does show up right away in Device Manager; however, it’s not seen in Task Manager or Computer Properties, so my assumption is that processes are not being scheduled on the added vCPU until after a reboot, at which time the additional vCPU shows up in all the places it should (i.e. Task Manager, Computer Properties, etc.)
  5. I certainly like the innovation and flexibility here, but I’m not sure hot add technology is going to mesh well with planned change management systems. The most important thing to recognize, though, is that VMware offers this technology to us as our choice to use or not use. It’s not a feature VMware held back after drawing their own conclusion that nobody on the planet could ever use it. That’s what Microsoft does today with Hyper-V memory overcommit: rather than offering it, they decided on behalf of all their customers that nobody could or should use memory overcommit. Instead you should pad your hosts with more physical memory at additional cost to you.
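
As referenced above, memory hot add and CPU hot plug must be enabled per VM while it is powered off (Edit Settings in the vSphere Client). A minimal sketch of the equivalent .vmx entries, assuming the parameter names below are correct (they are from memory, so verify them on a test VM first):

    mem.hotadd = "TRUE"
    vcpu.hotadd = "TRUE"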

Here is the table of results I came up with:

Guest OS | Memory hot add | Memory hot remove | CPU hot plug | CPU hot unplug
Windows Server 2003 STD x86 | :-( | :-( | :-( | :-(
Windows Server 2003 STD x64 | :-( | :-( | :-( | :-(
Windows Server 2003 ENT x86 | 8-) | :-( | :-( | :-(
Windows Server 2003 ENT x64 | 8-) | :-( | :-( | :-(
Windows Server 2008 STD x86 | 8-) * | :-( | :-( | :-(
Windows Server 2008 STD x64 | 8-) * | :-( | 8-) * | :-(
Windows Server 2008 ENT x86 | 8-) | :-( | :-( | :-(
Windows Server 2008 ENT x64 | 8-) | :-( | 8-) * | :-(
Windows Server 2008 DC x86 | 8-) | :-( | :-( | :-(
Windows Server 2008 DC x64 | 8-) | :-( | 8-) | :-(
Windows Server 2008 R2 DC x64 (experimental support only) | 8-) | :-( | 8-) | :-(

* Reboot of guest OS required to recognize added hardware