Posts Tagged ‘Virtualization’

VMware VI3 Implementation and Administration

January 11th, 2010

I recently finished reading VMware VI3 Implementation and Administration by Eric Siebert (ISBN-13: 978-0-13-700703-5). It was a very enjoyable read. I don’t mean to sound cliché, but for me it was one of those books that is hard to put down. Released in May of 2009, alongside the launch of the next generation of VMware Infrastructure (vSphere), the timing of its arrival to market probably could have been better, but better late than never. Datacenters will be running on VI3 for quite some time, and with that in mind, this book provides a tremendous amount of value and insight. I can tell that Eric put a lot of time and research into this book; the quality of the content shows. Much of the book was review for me, but I was still able to pick up bits and pieces here and there that I wasn’t aware of, as well as some fresh perspective and new approaches to design, administration, and support.

To be honest and objective, I felt that Chapter 9, “Backing Up Your Virtual Environment”, lacked the completeness given to all of the other chapters. A single page was dedicated to VMware Consolidated Backup, with none of the detailed examples or demonstrations of its use that are found throughout the other chapters. In addition, only a few sentences covered replication, which is a required component in many environments. Eric likes to discuss third-party solutions, and this would have been a great opportunity to go into more detail, or at least mention some affordable products that would let businesses of any size leverage replication.

Overall, this is a great book. Eric has a no-nonsense writing style backed by decades of in-the-trenches experience. Along with the print copy, you get a free electronic online edition, allowing you to access the book anywhere you have internet connectivity. Pick up your copy today! Thank you, Eric, and I look forward to your upcoming vSphere book!

Virtualizing the grid

July 8th, 2009

I picked up this interesting map from Christopher Crowhurst’s blog. It’s a visualization of the United States power grid. The source is NPR’s article “Visualizing The Grid”. Follow the link to NPR and click on the various tabs at the top to see power plant, solar power, and wind power sources across the United States.

How much power are you saving due to virtualization? Don’t forget that virtualization cuts power consumption in more ways than one. The most obvious is the reduction in server hardware count in the datacenter. There are also indirect savings vectors such as reduced cooling, fewer network and SAN switches due to server consolidation, less UPS utilization, and maybe even a reduction in datacenter size, which in and of itself presents more indirect savings: security, plumbing, utility lighting, cleaning, maintenance, real estate, etc.
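As a back-of-the-napkin exercise, here is a quick Python sketch of the direct piece of that math. The wattage figures, consolidation ratio, and power cost below are made-up assumptions for illustration only; plug in your own numbers, and remember the indirect savings above come on top of this.

    # Rough estimate of direct power savings from server consolidation.
    # All figures below are illustrative assumptions, not measurements.

    SERVERS_BEFORE = 60          # physical servers before virtualization
    CONSOLIDATION_RATIO = 15     # VMs per virtualized host (assumed)
    WATTS_PER_SERVER = 400       # average draw per physical server (assumed)
    KWH_COST = 0.10              # dollars per kWh (assumed)

    hosts_after = -(-SERVERS_BEFORE // CONSOLIDATION_RATIO)  # ceiling division
    watts_saved = (SERVERS_BEFORE - hosts_after) * WATTS_PER_SERVER
    kwh_per_year = watts_saved / 1000 * 24 * 365

    print(f"Hosts after consolidation: {hosts_after}")
    print(f"Power saved: {watts_saved} W, about {kwh_per_year:,.0f} kWh/year")
    print(f"Roughly ${kwh_per_year * KWH_COST:,.0f}/year before cooling savings")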


In search of an application migration solution

June 22nd, 2009

I’m reaching out to software vendors and/or readers who might be aware of an end-to-end application migration solution, or an application migration story outlining solutions, challenges, successes, etc. The solution should be software driven and should seamlessly migrate applications, services, and daemons from one platform (Windows or Linux) to another platform in the same family. As an example, on the Windows side the solution would migrate applications and/or services from Windows 2000 Server to Windows Server 2003. On the Linux side, it would migrate applications and/or daemons from SLES 9.x to SLES 10.x.

As I stated, the solution would need to be as seamless and end-to-end as possible. Application and platform dependencies would need to be taken into consideration and addressed or mitigated: for example, service packs, .DLLs, the .NET Framework, Perl, etc. on the Windows side; kernel versioning, compiling, Perl, etc. on the Linux side. A rough sketch of the kind of dependency inventory I have in mind follows below.
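To make the dependency point concrete, here is a minimal, hypothetical Python sketch of a pre-migration inventory check on the Windows side. The registry path for installed .NET Framework versions is a real key; everything else here (which dependencies to check, how to report them) is an illustrative assumption, not a reference to any actual product.

    # Hypothetical pre-migration dependency inventory (Windows side).
    # Enumerates installed .NET Framework versions and the Perl version,
    # two of the dependencies a migration tool would need to reconcile.
    import subprocess
    import winreg

    def dotnet_versions():
        """List .NET Framework versions from the NDP registry key."""
        versions = []
        key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                             r"SOFTWARE\Microsoft\NET Framework Setup\NDP")
        i = 0
        while True:
            try:
                name = winreg.EnumKey(key, i)
            except OSError:
                break
            if name.startswith("v"):
                versions.append(name)
            i += 1
        return versions

    def perl_version():
        """Return the local Perl version string, or None if Perl is absent."""
        try:
            out = subprocess.run(["perl", "-e", "print $]"],
                                 capture_output=True, text=True)
            return out.stdout.strip() or None
        except FileNotFoundError:
            return None

    if __name__ == "__main__":
        print(".NET Framework versions:", dotnet_versions())
        print("Perl version:", perl_version())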

I’m not exactly looking for professional services; however, I would be interested in hearing about your process and the tools you use. The solution need not be virtualization specific. The right tool would work with IBM, HP, Dell, whitebox, or virtual hardware.

If you are a vendor with a solution, or a reader with some application migration background, please drop me a line at jason@boche.net or feel free to reply to this blog entry with a comment. Feedback, small or large, is welcomed as usual.

Thank you in advance.

ThinLaunch Software Announces the Immediate Availability of Thin Desktop 2.3.2

June 13th, 2009


(St. Paul, MN) ThinLaunch Software, LLC (www.thinlaunch.com) announces the immediate availability of Thin Desktop 2.3.2. Thin Desktop 2.3.2 enhances the award-winning Thin Desktop application announced in August 2008, and simplifies the deployment and adoption of virtual desktop strategies by overcoming common barriers associated with their implementation.

Thin Desktop enhances the overall value of virtualization by simplifying the deployment and implementation of virtual desktops at the user device. Thin Desktop replaces the local user interface, then locks down and monitors the user / client device. This allows the administrator to gain complete control over the client end point and the user experience. When compared to group policy methods, “registry hacks” and other similar approaches, Thin Desktop is far easier to implement, deploy and maintain. Unlike the implementation of a traditional Thin Client model, Thin Desktop requires no changes to the enterprise infrastructure and has no server footprint or management server.

When a PC or Thin Client is locked down using Thin Desktop, the typical shell / user interface is hidden from the user and replaced by the designated connection or application. At the same time, underlying capabilities allowed by the administrator can remain intact. No changes to the enterprise infrastructure are required and no additional tools or management functionality is needed.

The release of version 2.3.2 enhances deployment of Thin Desktop using industry standard methods, tools and architectures. An administrator can now deploy and implement Thin Desktop on any PC or Thin Client via standard unattended silent install capability and existing software distribution and imaging methods.

“Thin Desktop 2.3.2 is the result of feedback from a wide variety of customers with very diverse use cases and requirements. A common thread is the desire to adopt virtual desktop technologies while preserving investments in current hardware, infrastructure, and skill sets, with a clear path for future hardware and virtualization options,” said ThinLaunch Software General Manager Mike Cardinal. “Customer environments with both PC and Thin Client devices will coexist for the foreseeable future. Most users don’t care about the box connected to the monitor, keyboard, and mouse, and administrators don’t want them to care.”

For additional information and an Evaluation Download of Thin Desktop, visit the website at www.thinlaunch.com.


About ThinLaunch Software, LLC
ThinLaunch Software, LLC has developed Thin Desktop to enhance the value of client device assets. Established in May of 2007, ThinLaunch Software is privately held and based in Eagan, MN, a suburb of St. Paul, MN.
ThinLaunch Software and Thin Desktop are registered trademarks of ThinLaunch Software, LLC. Additional trademarks and patents pending. Please visit the website at: www.thinlaunch.com

Cloud Camp Minneapolis

April 18th, 2009

Today I attended Cloud Camp Minneapolis from 9:00am to 3:30pm on the University of Minnesota East Bank campus. I think the event was a large success. Registration was SOLD OUT, and it looked like there were somewhere between 100 and 150 attendees. I think it speaks well for the technology and the event organization when that many people will give up the majority of an absolutely gorgeous Saturday.

The event started with a continental-style breakfast where people mingled and socialized for an hour before the speaking agenda began. I ran into a few familiar faces and also met some new people. The coffee was strong and the bagels looked good.

After breakfast, we were ushered into the main auditorium. George Reese (pictured top left), cloud book author and event organizer from enStratus Networks, kicked things off by briefly introducing himself as well as the premier sponsors: VISI, enStratus, Microsoft, Mosso The Rackspace Cloud, Aserver, and RightScale.

Shortly after, the Lightning Talks began. This is where the premier event sponsors were each allowed just a few minutes to deliver their cloud speech, along with a little product marketing, while whipping through their slide decks. When I say just a few minutes, I mean it: I think five vendors got up and delivered their presentations in a total of 15 minutes. If you’ve ever watched the television program “Mad Money”, it was like cloud talks and offerings during the lightning round. It was an interesting and refreshing approach.

Next we had a lengthy group discussion on hot cloud topics which were in turn used to dynamically develop the afternoon breakout session topics. We touched on things such as security, mobility, legal and liability implications, small business, etc.

We broke for lunch where I had discussions with a few locals on phone, cable, and internet service providers (ISPs) in the state of Minnesota.

After lunch the large group broke up into the smaller breakout sessions mentioned previously. I attended two sessions: Mobility and SMB.

The mobility session had a good crowd mixture comprised of service providers, application developers, and CEOs. The discussion jumped from topic to topic as people offered up their problems, questions, and philosophies orbiting cloud mobility and isolation. Not to my surprise, there was very little in the way of answers or solutions. That’s ok. I wasn’t expecting any. Frankly, I found comfort among large numbers of industry experts who, like me, didn’t have the answers and were just as perplexed about figuring out how this is all going to work out. Developers seemed to be the most concerned about the application layer (Applications as a Service), as discussions touched on APIs and applications in the cloud and their impact on development techniques as they apply to mobility. I got a sense of less concern over platform in the cloud, also known as Platform as a Service. One developer talked about his current experience using Amazon’s Elastic Compute Cloud (EC2). His direct benefits: he owns and supports nothing, and he pays only for what he uses. When he’s not using it, there’s essentially little or no cost. When he’s done, I imagine he saves what he needs and the rest is destroyed. There is no traditional decommissioning and writing off of assets. There is no hardware that needs to be disposed of properly.

The SMB session was another good mixture of attendees, nearly the same as above but with more of a concentration on small business, as well as micro and nano business (phrases coined during the session representing entities smaller than small business). The general idea of this session was if and how small businesses can benefit from cloud offerings. Talks began with the various ways to define a small business: by revenue? by headcount? by technology? There are examples of large manufacturing plants that have small technology footprints. Likewise, small operations can generate large amounts of revenue with the assistance of technology. Group members proposed that there exist many inefficiencies in small business, particularly in technology and infrastructure. This is where renting platforms, applications, services, and infrastructure from cloud providers could make sense for SMBs. Wouldn’t small businesses rather focus their time and energy on developing their products and services instead of being tied down by the technology they need to run their business on? From a customer or partner credibility standpoint, does a business look more professional and equipped running in a certified cloud datacenter, or a broom closet? What impacts will regulation and legislation have? Decisions about how to securely store and deliver customer information in a small business shouldn’t be taken lightly. There are consequences that could easily break the trust and financial backing that a small business or startup’s survivability relies on.

In all, I had a great time at Cloud Camp Minneapolis. If you had asked me six months ago what I knew about the cloud, I would have had nothing to say other than “I don’t get it”. I’ve gradually been warming up to the concept, and today Cloud Camp Minneapolis went a long way in delivering my first feeling of personal and professional accomplishment, in that I think I’m actually caught up and on the same page as many of my peers and the experts in the cloud community. However, I have to be honest in saying that I walked away somewhat disappointed and in disbelief that virtualization discussion was nearly non-existent. The last two VMworld virtualization conferences I attended, in Las Vegas and Cannes, were strongly focused on cloud computing and VMware’s Virtual Datacenter OS (VDC-OS). Here there was maybe one mention of VMware in one sentence and a brief reference to VDI. Microsoft was on site talking about Azure with no mention of Hyper-V. No mention of XenServer, Virtual Iron, etc. I’ve been led to understand that virtualization is a key component of cloud infrastructure, applications, and mobility. I anticipated much of today’s discussion would revolve around virtualization. I couldn’t have been more wrong. After the event finished, I sent out a tweet re: no virtualization talk today. I received a response stating that virtualization is merely a widget, one small component among many in the cloud, and not nearly as integral as Paul Maritz of VMware tells me it is. Maybe this is a case of Jason having drunk too much VMware Kool-Aid for too long. The answers about the cloud are coming, slowly but surely. Hopefully Paul is right and VMware does have a significant role to play in their version of global cloud computing. I’d like to see it, realize it, and experience it.

New ESX(i) 3.5 security patch released; scenarios and installation notes

April 11th, 2009

On Friday April 10th, VMware released two patches, one for ESX 3.5 and one for ESXi 3.5. Both address the same issue:

A critical vulnerability in the virtual machine display function might allow a guest operating system to run code on the host. The Common Vulnerabilities and Exposures Project (cve.mitre.org) has assigned the name CVE-2009-1244 to this issue.

Hackers must love vulnerabilities like this because they can get a lot of mileage out of essentially a single attack. The ability to execute code on an ESX host can impact all running VMs on that host.

Although proper virtualization promises isolation, the reality is that no hardware or software vendor is perfect, and from time to time we’re going to see issues like this. Products are under constant attack from hackers (both good and bad) seeking exploits. In virtualized environments, it’s important to remember that guest VMs and guest operating systems are no different from their physical counterparts in that they need to be properly protected from the network. That means adequate virus protection, spyware protection, firewalls, encryption, packet filtering, etc.

This vulnerability in VMware ESX and ESXi really requires a two-stage attack. In order to compromise the ESX or ESXi host, a guest VM must first be vulnerable to compromise on the network to provide the entry point. Once the guest VM is compromised, the next step is to get from the guest VM to the ESX(i) host. Hosts without the patch are vulnerable to this second stage, which, as described above, allows arbitrary code to be executed on the host. If the host is patched, we maintain our guest isolation and the attack stops at the VM level. Unfortunately, the OS running in the guest VM is still compromised, again highlighting the need for adequate protection of the operating system and applications running in each VM.

The bottom line is this is an important update for your infrastructure. If your ESX or ESXi hosts are vulnerable, you’ll want to get this one tested and implemented as soon as possible.

I installed the updates today in the lab and discovered something interesting that is actually outlined in both of the KB articles above:

  • The ESXi version of the update requires a reboot. Using Update Manager, the patch process goes like this: Remediate -> Maintenance Mode -> VMotion VMs off -> Patch -> Reboot -> Exit Maintenance Mode. The duration of installation of the patch until exiting maintenance mode (including the reboot in between) took 12 minutes.
  • The ESX version of the update does not require a reboot. Using Update Manager, the patch process goes like this: Remediate -> Maintenance Mode -> VMotion VMs off -> Patch -> Exit Maintenance Mode. The duration of installation of the patch until exiting maintenance mode (with no reboot in between) took 1.5 minutes.

Given host reboot times, patching ESX hosts goes much quicker than patching ESXi hosts. Reboot times on HP ProLiant servers aren’t too bad, but I’ve been working with some powerful IBM servers lately and the reboot times on those are significantly longer than on the HPs. Hopefully we’re not rebooting ESX hosts on a regular basis, so with that in mind, reboot times aren’t a huge concern; but if you’ve got a large environment with a lot of hosts requiring reboots, the reboot times are going to be cumulative in most cases. Consider my timings above. A 6-node ESXi cluster is going to take 72 minutes to patch, not including VMotions. A 6-node ESX cluster is going to take 9 minutes to patch, not including VMotions. This may be something to really think about when weighing the decision of ESX versus ESXi for your environment.
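The cumulative math is trivial but worth seeing. Here is a minimal Python sketch using the per-host times I measured above; the cluster size and the assumption that hosts are remediated one at a time are illustrative.

    # Cumulative patch-window estimate for a cluster patched one host at a time.
    # Per-host durations are the ones measured above; cluster size is an
    # illustrative assumption. VMotion evacuation time is excluded.

    ESXI_MINUTES_PER_HOST = 12    # patch + reboot + exit maintenance mode
    ESX_MINUTES_PER_HOST = 1.5    # patch + exit maintenance mode, no reboot

    def cluster_patch_minutes(hosts, minutes_per_host):
        """Total wall-clock minutes when hosts are remediated serially."""
        return hosts * minutes_per_host

    for name, per_host in [("ESXi", ESXI_MINUTES_PER_HOST),
                           ("ESX", ESX_MINUTES_PER_HOST)]:
        print(f"6-node {name} cluster: {cluster_patch_minutes(6, per_host):g} minutes")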

Update: One more critical item to note is that although the ESX version of the patch requires no reboot, the patch does require three other patches to be installed, at least one of which requires a reboot. If you already meet the requirements, no reboot will be required for ESX to install the new patch.

In closing, while we are on the subject of performing a lot of VMotions, take a look at a guest blog post from Simon Long called VMotion Performance. Simon shows us how to modify VirtualCenter (vCenter Server) to allow more simultaneous VMotions which will significantly cut down the amount of time spent patching ESX hosts in a cluster.
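To see why that concurrency limit matters, here is a minimal Python sketch of host evacuation time as a function of simultaneous VMotions. All of the figures (VMs per host, minutes per VMotion, concurrency values) are made-up assumptions for illustration.

    # How concurrent VMotions shrink the host-evacuation step of patching.
    # Every figure here is an illustrative assumption, not a measurement.
    import math

    VMS_PER_HOST = 24          # assumed VM count to evacuate per host
    MINUTES_PER_VMOTION = 2    # assumed duration of a single VMotion

    def evacuation_minutes(vms, concurrency):
        """Wall-clock minutes to drain a host in waves of concurrent VMotions."""
        waves = math.ceil(vms / concurrency)
        return waves * MINUTES_PER_VMOTION

    for concurrency in (2, 4, 8):
        print(f"{concurrency} simultaneous VMotions: "
              f"{evacuation_minutes(VMS_PER_HOST, concurrency)} minutes per host")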

Setup for Microsoft cluster service

April 1st, 2009

Setting up a Microsoft cluster on VMware used to be a fairly straightforward task with a very minimal set of considerations. Over time, the support documentation has evolved into something that looks like it was written by the U.S. Internal Revenue Service. I was an accountant in my previous life, and I remember Alternative Minimum Tax code that was easier to follow than what we have today: a 50-page .PDF representing VMware’s requirements for MSCS support. Even with that, I’m not sure Microsoft supports MSCS on VMware. The Microsoft SVVP program supports explicit versions and configurations of Windows 2000/2003/2008 on ESX 3.5 Update 2 and 3, and ESXi 3.5 Update 3, but no mention is made of clustering. I could not find a definitive answer on the Microsoft SVVP program site other than the following disclaimer:

For more information about Microsoft’s policies for supporting software running in non-Microsoft hardware virtualization software please refer to http://support.microsoft.com/?kbid=897615. In addition, refer to http://support.microsoft.com/kb/957006/ to find more information about Microsoft’s support policies for its applications running in virtual environments.

At any rate, here are some highlights of MSCS setup on VMware Virtual Infrastructure, and by the way, all of this information is fair game for the VMware VCP exam.

Prerequisites for Cluster in a Box

To set up a cluster in a box, you must have:

* ESX Server host, one of the following:
  * ESX Server 3 – An ESX Server host with a physical network adapter for the service console. If the clustered virtual machines need to connect with external hosts, then an additional network adapter is highly recommended.
  * ESX Server 3i – An ESX Server host with a physical network adapter for the VMkernel. If the clustered virtual machines need to connect with external hosts, a separate network adapter is recommended.
* A local SCSI controller. If you plan to use a VMFS volume that exists on a SAN, you need an FC HBA (QLogic or Emulex).

You can set up shared storage for a cluster in a box either by using a virtual disk or by using a remote raw device mapping (RDM) LUN in virtual compatibility mode (non-pass-through RDM).

When you set up the virtual machine, you need to configure:

* Two virtual network adapters.
* A hard disk that is shared between the two virtual machines (quorum disk).
* Optionally, additional hard disks for data that are shared between the two virtual machines if your setup requires it. When you create hard disks, as described in this document, the system creates the associated virtual SCSI controllers.

Prerequisites for Clustering Across Boxes

The prerequisites for clustering across boxes are similar to those for cluster in a box. You must have:

* ESX Server host. VMware recommends three network adapters per host for public network connections. The minimum configuration is:
  * ESX Server 3 – An ESX Server host configured with at least two physical network adapters dedicated to the cluster, one for the public and one for the private network, and one network adapter dedicated to the service console.
  * ESX Server 3i – An ESX Server host configured with at least two physical network adapters dedicated to the cluster, one for the public and one for the private network, and one network adapter dedicated to the VMkernel.
* Shared storage must be on an FC SAN.
* You must use an RDM in physical or virtual compatibility mode (pass-through RDM or non-pass-through RDM). You cannot use virtual disks for shared storage.

Prerequisites for Standby Host Clustering

The prerequisites for standby host clustering are similar to those for clustering across boxes. You must have:

* ESX Server host. VMware recommends three network adapters per host for public network connections. The minimum configuration is:
  * ESX Server 3 – An ESX Server host configured with at least two physical network adapters dedicated to the cluster, one for the public and one for the private network, and one network adapter dedicated to the service console.
  * ESX Server 3i – An ESX Server host configured with at least two physical network adapters dedicated to the cluster, one for the public and one for the private network, and one network adapter dedicated to the VMkernel.
* You must use RDMs in physical compatibility mode (pass-through RDM). You cannot use virtual disks or RDMs in virtual compatibility mode (non-pass-through RDM) for shared storage.
* You cannot have multiple paths from the ESX Server host to the storage.
* Running third-party multipathing software is not supported. Because of this limitation, VMware strongly recommends that there be only a single physical path from the native Windows host to the storage array in a standby-host clustering configuration with a native Windows host. The ESX Server host automatically uses native ESX Server multipathing, which can result in multiple paths to shared storage.
* Use the STORport Miniport driver for the FC HBA (QLogic or Emulex) in the physical Windows machine.

The following table summarizes the supported shared storage configurations for each clustering mode:

Shared storage type                                 Cluster in a Box   Cluster Across Boxes   Standby Host Clustering
Virtual disks                                       Yes                No                     No
Pass-through RDM (physical compatibility mode)      No                 Yes                    Yes
Non-pass-through RDM (virtual compatibility mode)   Yes                Yes                    No
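To make the bus-sharing distinction concrete, here is a minimal Python sketch that emits the .vmx lines attaching a shared quorum disk to a clustered VM’s second virtual SCSI controller. The scsi1.sharedBus and scsi1.virtualDev option names are standard VMX settings, but the datastore path, disk name, and the idea of generating the lines with a script are illustrative assumptions; in practice you would set SCSI bus sharing through the VI Client.

    # Sketch: generate the .vmx lines that attach a shared quorum disk to the
    # second virtual SCSI controller of a clustered VM. The option names are
    # standard VMX settings; paths and values here are illustrative only.

    BUS_SHARING = {
        "cluster_in_a_box": "virtual",      # virtual bus sharing, same host
        "cluster_across_boxes": "physical"  # physical bus sharing, across hosts
    }

    def shared_disk_vmx(mode, disk_path="/vmfs/volumes/san1/cluster/quorum.vmdk"):
        """Return VMX lines for a shared disk on controller scsi1."""
        return [
            'scsi1.present = "true"',
            'scsi1.virtualDev = "lsilogic"',    # per the caveats below
            f'scsi1.sharedBus = "{BUS_SHARING[mode]}"',
            'scsi1:0.present = "true"',
            f'scsi1:0.fileName = "{disk_path}"',
        ]

    print("\n".join(shared_disk_vmx("cluster_across_boxes")))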

Caveats, Restrictions, and Recommendations

This section summarizes caveats, restrictions, and recommendations for using MSCS in a VMware Infrastructure environment.

* VMware only supports third-party cluster software that is specifically listed as supported in the hardware compatibility guides. For the latest updates to VMware support for Microsoft operating system versions for MSCS, or for any other hardware-specific support information, see the Storage/SAN Compatibility Guide for ESX Server 3.5 and ESX Server 3i.
* Each virtual machine has five PCI slots available by default. A cluster uses four of these slots (two network adapters and two SCSI host bus adapters), leaving one PCI slot for a third network adapter (or other device), if needed.
* VMware virtual machines currently emulate only SCSI-2 reservations and do not support applications using SCSI-3 persistent reservations.
* Use the LSILogic virtual SCSI adapter.
* Use Windows Server 2003 SP2 (32-bit or 64-bit) or Windows 2000 Server SP4. VMware recommends Windows Server 2003.
* Use two-node clustering.
* Clustering is not supported on iSCSI or NFS disks.
* NIC teaming is not supported with clustering.
* The boot disk of the ESX Server host should be on local storage.
* Mixed HBA environments (QLogic and Emulex) on the same host are not supported.
* Mixed environments using both ESX Server 2.5 and ESX Server 3.x are not supported.
* Clustered virtual machines cannot be part of VMware clusters (DRS or HA).
* You cannot use migration with VMotion on virtual machines that run cluster software.

* Set the I/O time-out to 60 seconds or more by modifying HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Disk\TimeOutValue. The system might reset this I/O time-out value if you recreate a cluster; you must reset the value in that case. A sketch of making this change programmatically appears after this list.

* Use the eagerzeroedthick format when you create disks for clustered virtual machines. By default, the VI Client and vmkfstools create disks in zeroedthick format. You can convert a disk to eagerzeroedthick format by importing, cloning, or inflating the disk. Disks deployed from a template are also in eagerzeroedthick format. One way to create such a disk from the command line is also sketched after this list.

* Add disks before networking, as explained in the VMware Knowledge Base article at http://kb.vmware.com/kb/1513.
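Here is a minimal Python sketch of the disk time-out registry change mentioned above, run inside the guest OS. The key path and value name come straight from the guidance; the 60-second figure is the documented minimum, and using Python’s winreg module for the change (rather than regedit or Group Policy) is simply one illustrative option.

    # Set the Windows disk I/O timeout to 60 seconds inside a clustered guest.
    # Key path and value name are from the VMware MSCS guidance quoted above.
    import winreg

    KEY_PATH = r"System\CurrentControlSet\Services\Disk"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
        # REG_DWORD, in seconds; 60 or more per the recommendation
        winreg.SetValueEx(key, "TimeOutValue", 0, winreg.REG_DWORD, 60)

    print("TimeOutValue set to 60 seconds; re-check it if you recreate the cluster.")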
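And here is a minimal Python wrapper around vmkfstools illustrating the eagerzeroedthick recommendation, run from the ESX service console. The -c (create with size) and -d (disk format) switches are standard vmkfstools options; the datastore path, disk size, and the decision to script this at all are my own assumptions for illustration.

    # Sketch: create an eagerzeroedthick disk for a clustered VM by shelling
    # out to vmkfstools on the ESX service console. Path and size are
    # illustrative; adjust for your datastore layout.
    import subprocess

    def create_eagerzeroedthick(vmdk_path, size="10g"):
        """Create a new virtual disk pre-zeroed for MSCS use."""
        subprocess.run(
            ["vmkfstools", "-c", size, "-d", "eagerzeroedthick", vmdk_path],
            check=True,  # raise if vmkfstools reports an error
        )

    create_eagerzeroedthick("/vmfs/volumes/san1/cluster/quorum.vmdk")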

phew!