Posts Tagged ‘vSphere’

The Future of VMware Lab Manager

September 12th, 2010

With the release of VMware vCloud Director 1.0 at VMworld 2010 San Francisco, what’s in store for VMware Lab Manager?  The future isn’t entirely clear to me.  I see two potential scenarios:

  1. Lab Manager development and product releases continue in parallel with VMware vCloud Director.  Although the two overlap in functionality in certain areas, they co-exist into the future in perfect harmony.
  2. VMware vCloud Director gains the features, popularity, pricing, and momentum needed to obsolete and sunset Lab Manager.

I’ve got no formal information from VMware regarding the destiny of Lab Manager.  In lieu of that, I’ve been keeping my ear to the rail, trying to pick up clues from VMware’s body language.  Here are some of the items in my notebook thus far:

Development Efforts:  First and foremost, what seems obvious to me is that VMware has all but stopped development of Lab Manager for well over a year.  No major functionality has been introduced since the 3.x versions.  Let’s take a look:

4.0 was released in July 2009 to provide compatibility with the then-recent launch of vSphere, and that’s really it.  I don’t count VMware’s half-baked attempt at integrating with the vDS, which it markets as DPM for Lab Manager (one problem: the service VMs prevent hosts from successfully entering maintenance mode and, in turn, prevent DPM from working; this bug has existed for over a year with no attempt at a fix).  Further, the Host Spanning network feature leverages the vDS and therefore requires Enterprise Plus licensing for the hosts.  This drives up the sticker price of an already costly development solution by some accounts.

4.0.1 was released in December 2009, again to provide compatibility, this time with vSphere 4.0 Update 1.  VMware markets this release as introducing compatibility with Windows 7 and 2008 R2 (which in and of itself is not a lie), but anyone who knows the products realizes the key enabler was vSphere 4.0 Update 1, not Lab Manager 4.0.1 itself.

4.0.2 was released in July 2010 to provide compatibility with vSphere 4.1.  No new features to speak of other than what vSphere 4.1 brings to the table.


Are you noticing the pattern?  Development effort is being expended merely to keep up compatibility with vSphere releases.  Lab Manager documentation hasn’t been updated since the 4.0 release; the 4.0.1 and 4.0.2 versions both point back to the 4.0 documentation.  That’s over a year without a documentation update despite two Lab Manager code releases in the interim, further evidence that there has been no recent feature development in the Lab Manager product itself.

This evidence seems to make it clear that VMware is positioning Lab Manager for retirement.  The logical replacement is vCloud Director.  I haven’t heard of large scale developer layoffs in Palo Alto, so a conclusion could be drawn that most developer effort was pulled from Lab Manager and put on vCloud Director 1.0 to get it out the door in Q3 2010.

Bug Fixes & Feature Requests:  This really ties into Development Efforts, but due to its weight, I thought it deserved a category of its own.  Lab Manager has acquired a significant following over the years by delivering on its promise of making software development more efficient and cost effective through automation.  Much like datacenter virtualization itself, a number of customers have become dependent on the technology.  As much as VMware has satisfied these customers by maintaining Lab Manager compatibility with vSphere, customers are at the same time getting the short end of the stick.  Customers continue to pay their SnS fees, but the value add of SnS is diminishing as VMware development efforts slow to a crawl.  At one time, SnS would net you new features and bug fixes in addition to new versions of the software providing compatibility with the host platforms.  Instead, the long list of customer feature requests (and great ideas, I might add) sits dead in a VMware Communities forum thread like this.  The number of bugs fixed in the last two releases of Lab Manager I can almost count on two hands.  And what about squashing these bugs: this, this, and this?  Almost nothing has changed since Steven Kishi (I believe) exited the role of Director of Product Management for VMware Lab Manager.

Again, this evidence seems to make it clear that VMware is sending Lab Manager off into the sunset.  Hello vCloud Director.

Marketing Efforts:  From my perspective, VMware hasn’t spent much time focusing on Lab Manager marketing.  By a show of customer or partner hands, who has seen a Lab Manager presentation from VMware in the last 6-12 months?  This ties strongly into the Development Efforts point made above.  Why market a product which seems to be well beyond its half life?  Consistent with the last thought above, marketing has noticeably shifted almost entirely from Lab Manager to vCloud Director.

Chalk up another point for the theory which states Lab Manager will be consumed by vCloud Director.

Lack of Clear Communication:  About the only voice in my head (of which there are many) which reasons Lab Manager might be sticking around (other than a VMware announcement of a Lab Manager video tutorial series which has now gone stale) is the fact that VMware has not made it formally and publicly clear that Lab Manager is being retired or replaced by vCloud Director.  Although I’m making a positive point here for the going concern of Lab Manager, I think there is ultimately an expiration date on Lab Manager in the not so distant future.

If you understand the basics of vCloud Director, or if you have installed and configured it, you’ll notice similarities between it and Lab Manager.  But there is not 100% coverage of Lab Manager functionality and integration.  Until VMware can provide that seamless migration, they obviously aren’t going to pull the plug on Lab Manager.  Quite honestly, I think this is the most accurate depiction of where we’re sitting today.  VMware has a number of areas to address before vCloud Director can successfully replace Lab Manager.

Some are technical, such as closing that feature gap between the two products.  Some are going to be political/marketing based.  Which customers are ready to replace a tried and true solution with a version 1.0 product?  Some may be cost based.  Will VMware take a 1:1 trade in on Lab Manager for vCloud Director, or will there be an uplift fee?  Will Enterprise Plus licensing be a requirement for future versions of vCloud Director?  vCloud Director 1.0 requires Enterprise Plus licensing according to the VMware product’s ‘buy’ page.  Some will be a hybrid.  For instance, existing Lab Manager customers rely on a MS SQL (Express) database.  vCloud Director 1.0 is back ended with Oracle, a costly platform Lab Manager customers won’t necessarily already have in terms of infrastructure and staff.


In summary, this point is an indicator that both Lab Manager and vCloud Director will exist in parallel; however, the signs can’t be ignored that Lab Manager is coasting on fumes.  Its ongoing presence and customer base will require support and future compatibility upgrades from VMware.  Maintaining support for two technologies is more expensive for VMware than maintaining just one.  A larger risk for VMware and customers may be that development efforts for vSphere have to slow down to allow Lab Manager to keep pace.  Even worse, new technology doesn’t see the light of day in vSphere because it cannot be made backward compatible with Lab Manager.  Unless we see a burst in development or marketing for Lab Manager, we may be just a short while away from a formal announcement from VMware stating the retirement of Lab Manager, along with a migration plan for Lab Manager customers to become vCloud Director customers.

What are your thoughts?  I’d like to hear some others weigh in.  Please be sure not to disclose any information which would violate an NDA.

Update 2/14/11: VMware has published a VMware vCenter Lab Manager Product Lifecycle FAQ for its current customers which fills in some blanks.  Particularly:

What is the future of the vCenter Lab Manager product line?

As customers continue to expand the use of virtualization both inside the datacenter and outside the firewall, we are focusing on delivering infrastructure solutions that can support these expanded scalability and security requirements. As a result, we have decided to discontinue additional major releases of vCenter Lab Manager. Lab Manager 4 will continue to be supported in line with our General Support Policy through May 1, 2013.

When is the current end-of-support date for vCenter Lab Manager 4?

For customers who are current on SnS, General Support has been extended to May 1, 2013.

Are vCenter Lab Manager customers eligible for offers to any new products?

To provide Lab Manager customers with the opportunity to leverage the scale and security of vCloud Director, customers who are active on SnS may exchange their existing licenses of Lab Manager to licenses of vCloud Director at no additional cost. This exchange program is entirely optional and may be exercised anytime during Lab Manager’s General Support period. This provides customers the freedom and flexibility to decide whether and when to implement a secure enterprise hybrid cloud.

The Primary License Administrator can file a Customer Service Request to request an exchange of licenses. For more information on the terms and conditions of the exchange, contact your VMware account manager.

Update 6/25/13: VMware notified its customers via email that support for Lab Manager 4.x has been extended:

June 2013

Dear VMware Valued Customers,

VMware is pleased to announce a 1-year extension to the support for VMware vCenter Lab Manager 4.x. As reference, the original end of support date for this product was May 1, 2013. The new official end of support date will be May 21, 2014. This new end of support date aligns with VMware vSphere 4.x (noted in the support lifecycle matrix below as VMware ESX/ESXi 4.x and vCenter Server 4.x) end of support. This new date also allows the vCenter Lab Manager customer base more time to both use the 4.x product and evaluate options for moving beyond vCenter Lab Manager in the near future.

Additional Support Materials:

vCenter Server JVM Memory

September 6th, 2010

For those of you who have installed VMware vCenter Server 4.1, have you noticed anything new during the installation process?  A new screen was introduced at the end of the installation wizard for specifying the anticipated size of the virtual infrastructure which the respective vCenter Server would be managing.  There are three choices here: Small, Medium, & Large.  Sorry, no Supersize available yet.  If you require this option, I’m sure VMware wants to talk to you.


The selection you make in the installation wizard defines not only the Maximum Memory Pool value for the Java Virtual Machine, but also the Initial Memory Pool value.  Following is a chart comparing the vCenter Server 4.0 & 4.1 JVM memory configurations:

vCenter/JVM                       Initial Memory Pool   Max Memory Pool   Thread Stack Size
4.0                               128MB                 1024MB            1024KB
4.1 Small (<100 hosts, default)   256MB                 1024MB            1024KB
4.1 Medium (100-400 hosts)        256MB                 2048MB            1024KB
4.1 Large (>400 hosts)            512MB                 4096MB            1024KB

As noted in the table above, in vCenter Server 4.0 the JVM Maximum Memory Pool was configured at 1024MB by default.  The vCenter Server 4.1 installation also defaults to 1024MB (Small, <100 hosts) if left unchanged.  One other comparison: pay attention to the difference in Initial Memory Pool.  By default, vCenter 4.1 uses twice the amount of RAM out of the gate compared to previous versions.

Although the installation wizard JVM tuning component is new in 4.1, the ability to tune the JVM for vCenter is not.  The Configure Tomcat application has been available in previous versions of vCenter.  Some organizations with growing infrastructures may have been instructed by VMware support to tune the JVM values to overcome a vCenter issue having to do with scaling or some other issue.
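The sizing tiers from the table can be captured in a small helper.  This is only an illustrative sketch of the table’s values; the function name and the idea of selecting a tier programmatically are my own, not anything VMware ships:

```python
def jvm_pool_sizes(host_count):
    """Return (initial_mb, max_mb) JVM memory pool values matching
    the vCenter Server 4.1 installer's sizing tiers."""
    if host_count < 100:
        return (256, 1024)    # Small (the installer default)
    elif host_count <= 400:
        return (256, 2048)    # Medium
    else:
        return (512, 4096)    # Large
```

A 250-host environment, for example, lands in the Medium tier with a 2048MB Maximum Memory Pool.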


Judging from the table, one can assume that the 1024MB value was appropriate for managing less than 100 hosts in vCenter 4.0.  As a point of reference, the Configuration Maximums document states that 300 hosts can be managed by vCenter 4.0.  This would imply that managing 100 hosts or more with vCenter 4.0 requires an adjustment to the out of box setting for the JVM Maximum Memory Pool (change from 1024MB to 2048MB). 

With vCenter 4.1, VMware has improved scaling in terms of the number of hosts a vCenter Server can manage.  The Configuration Maximums document specifies vCenter 4.1 can manage 400 hosts but the table above implies VMware may be preparing to support more than 400 hosts in the near future.  And that’s awesome because vCenter Server sprawl sucks. Period.

So have fun tuning the JVM but before you go, a few parting tips:

  • The Initial Memory Pool value defines the memory footprint (Commit Size) of the Tomcat process when the service is first started.  The Maximum Memory Pool defines the memory footprint which the Tomcat process is allowed to grow to.  Make sure you have sufficient RAM installed in your server to accommodate both of these values.
  • Setting the Initial Memory Pool to a value greater than the Maximum Memory Pool will prevent the Tomcat JVM from starting.  I thought I’d mention that before you spend too much time pulling your hair out.
  • If you would like to learn more about tuning Tomcat, vast resources exist on the internet.  This looks like a good place to start.

Unable To Retrieve Health Data

September 5th, 2010

A number of people, including myself, have noticed that after upgrading to VMware vCenter 4.1, the vCenter Service Status shows red and displays the error message:

Unable to retrieve health data from https://<VC servername or IP address>/converter/health.xml

VMware has provided a workaround for this issue in KB 1025010.  The workaround involves installing the ldp.exe application binary from Microsoft; however, since I’m running vCenter Server on Windows Server 2008 R2, the binary is already in place by default and no download or installation was required.  I applied the workaround, and after a service restart and a brief wait, the Service Status health went completely green, which is desired.

It’s worth noting for posterity that step 3a is missing a small piece, which I have provided below:

Double-click DC=virtualcenter,DC=vmware,DC=int, then double-click

vCalendar 2.0 Released; 1.0 Free Electronic Download

September 2nd, 2010

Welcome back! I can’t believe a year has elapsed since vCalendar was first launched.  vCalendar 1.0 was a lot of fun to say the least. It certainly fulfilled the purpose I had originally intended for myself – to provide a virtualization tip a day on my desktop both at the office and at home.

Truth be told, I began working on the next version of vCalendar right after the first version was released back in August 2009. Like vSphere 4.0 and 4.1, vCalendar 2.0 boasts 150 new features. That’s right – 150 brand new virtualization facts, tips, best practices, configuration maximums, and historical events.  What’s in the new version?  It’s safe to say you’ll probably find some vSphere 4.1 tips, additional advanced concepts, some more key dates in virtualization history, among other new, improved, and valuable items.  I highly suggest you order the new vCalendar 2.0 to find out!

But wait… if there are 150 new entries, what happened to last year’s entries?  I have to say, it was extremely difficult, but with just 365 days in the year, I had to find 150 of last year’s entries to remove in order to make way for the 150 new entries.  What’s unfortunate is that most of the archived entries are still relevant and therefore valuable.  I struggled with the thought of letting the archived entries disappear forever.  So here’s what I’ve done about that: I’m releasing vCalendar 1.0 as a free download in a searchable Adobe PDF format.  You can download vCalendar 1.0 by clicking on this link.  I thank you for your support and I hope you get some additional mileage out of it.  The remaining entries from last year which were carried forward were combined into the pool of new entries, and all were randomized to provide a fresh new vCalendar.

Continue reading at the official vCalendar web page to learn more including the information on how to obtain vCalendar 2.0.

Veeam Reporter 4.0 Free Edition

August 16th, 2010

Today, Veeam launched a new free version of an existing product which you may already be familiar with: Veeam Reporter Free Edition.  Veeam Reporter is an enterprise virtual infrastructure tool which is best described by Veeam on their product page:

Veeam Reporter™ discovers, documents and analyzes your entire virtual infrastructure. It maintains a complete history of all objects, settings and changes. And it trends performance and utilization. So you can really understand your virtual infrastructure—past, present and future.

When it comes to documenting and reporting on your virtual infrastructure, Reporter does it all.

This new free version contains most of the features of the full version.  The free edition can easily be upgraded to the full version of Veeam Reporter to gain these additional capabilities (a features comparison can be found here):

  • Capacity planning (report pack)
  • Historical change management (beyond the most recent 24 hours)
  • Microsoft Visio reports for multipathing, network, vMotion, and datastore utilization
  • Full access to archive data—to create custom reports or update your configuration management database (CMDB)
  • Full dashboard capabilities
  • Automatic report distribution

I was invited by Veeam to take a look at the beta version of Veeam Reporter Free Edition.  I’ve captured some of my experience and documented it here.


Installation of Veeam Reporter Free Edition is fairly straightforward, but I should disclose that I’m working with a beta (pre-GA) version.  I installed on Microsoft Windows Server 2008 R2 Standard (64-bit only), which is my preferred platform when supported by the vendor’s product (Veeam Reporter supports it).  Veeam Reporter requires Microsoft .NET Framework 3.5.1.  In Windows Server 2008 R2, this is installed as a Feature:


If installing Veeam Reporter’s Web UI (the default), the IIS Role is also required during the .NET Framework installation… plus a few extra roles:


During the beta, I ran into a JavaScript error message after the installation was complete:


As it turns out, the issue has nothing to do with JavaScript, rather, the Static Content Role must be installed for IIS:


During the Veeam Reporter installation routine, I also installed the Microsoft PowerShell component which is optional:


The Veeam Reporter PowerShell snap-in enables users to perform reporting tasks by running single cmdlets or custom automation scripts via the command-line interface.  The PowerShell SnapIn ReporterDBSnapIn is installed which adds the following Veeam Reporter specific cmdlets to the PowerShell environment:


As is quite common with virtualization management tools, including VMware vCenter itself, a back end database is required for storing datacenter information.  Veeam Reporter can leverage an existing Microsoft SQL Server.  In the absence of a dedicated SQL server, Veeam Reporter will install Microsoft SQL Express and integrate with it locally.  Installation of a local SQL Express instance takes quite some time, as the necessary SQL binaries (including SP1) are downloaded during installation (which also implies internet connectivity is required from the Veeam Reporter server).


A logoff/logon is required at the end of the installation as opposed to a system reboot:



Now that the installation is complete, the next step is to configure Veeam Reporter Free Edition.  There’s really not much to the initial configuration or data collection.  Add to that, the installation and data collection process is agentless – a definite plus. 

So before any data can be displayed, it needs to be collected from the vCenter Server(s).  This is handled by creating a Collection Job which points at the vCenter Server and pulls in the data that Veeam uses.  A collection job should be scheduled to run periodically so that it grabs updated data at regular intervals.  I set up a Collection Job to run automatically once per day at midnight.  For the purposes of instant gratification, I manually ran the job to get some data:


In addition to configuring a Collection Job, I also set up a few of the ancillary items one would commonly find in reporting and management applications such as an Email server.

Now that I have some data, I can start creating useful reports and that’s where the fun begins.  I will cover some of the reports in the next update so stay tuned.

In the meantime, download your copy of Veeam Reporter Free Edition today and get started!


Free Book – vSphere on NetApp Best Practices

August 2nd, 2010

Hello gang!  For anyone who doesn’t specifically follow the NetApp blogs, this is just a quick heads up to let you know that NetApp has updated its popular NetApp and VMware vSphere Storage Best Practices book and is offering 1,000 free copies of the new Version 2.0 edition.

The free copies are available while supplies last so get registered for yours soon!

vSphere 4.1: Multicore Virtual CPUs

July 25th, 2010

With the release of vSphere 4.1, VMware has introduced Multicore Virtual CPU technology to its bare metal flagship hypervisor.  This is an interesting feature which already existed in current versions of VMware Workstation.  VMware has consistently baked new features into its Type 2 hypervisor products, such as Workstation, Player, Fusion, etc., more or less as a functionality/stability test before releasing the same features in ESX(i).  VMware highlights this new feature as follows:

User-configurable Number of Virtual CPUs per Virtual Socket: You can configure virtual machines to have multiple virtual CPUs reside in a single virtual socket, with each virtual CPU appearing to the guest operating system as a single core. Previously, virtual machines were restricted to having only one virtual CPU per virtual socket. See the vSphere Virtual Machine Administration Guide.

VMware multicore virtual CPU support lets you control the number of cores per virtual CPU in a virtual machine. This capability lets operating systems with socket restrictions use more of the host CPU’s cores, which increases overall performance.

Using multicore virtual CPUs can be useful when you run operating systems or applications that can take advantage of only a limited number of CPU sockets. Previously, each virtual CPU was, by default, assigned to a single-core socket, so that the virtual machine would have as many sockets as virtual CPUs.

You can configure how the virtual CPUs are assigned in terms of sockets and cores. For example, you can configure a virtual machine with four virtual CPUs in the following ways:

  • Four sockets with one core per socket (legacy, this is how we’ve always done it prior to vSphere 4.1)
  • Two sockets with two cores per socket (new in vSphere 4.1)
  • One socket with four cores per socket (new in vSphere 4.1)
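The arithmetic behind that list can be sketched in a few lines; the function name below is my own, for illustration only:

```python
def socket_core_layouts(total_vcpus):
    """List every (sockets, cores_per_socket) pair that multiplies
    out to total_vcpus virtual CPUs."""
    return [(total_vcpus // cores, cores)
            for cores in range(1, total_vcpus + 1)
            if total_vcpus % cores == 0]
```

For a four-vCPU VM this yields (4, 1), (2, 2), and (1, 4), matching the three configurations above.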

VMware defines a CPU as:

The portion of a computer system that carries out the instructions of a computer program and is the primary element carrying out the computer’s functions.

VMware defines a Core as:

A logical execution unit containing an L1 cache and functional units needed to execute programs. Cores can independently execute programs or threads.

VMware defines a Socket as:

A physical connector on a computer motherboard that accepts a single physical chip. Many motherboards can have multiple sockets that can in turn accept multicore chips.

One of the benefits multicore brought to physical computing was increased hardware density.  VMs do not share this advantage; they are virtual to begin with and have no rack footprint to speak of.

VMware’s benefit statement for this feature is a legitimate one and is the primary use case.  It’s the same benefit which applied when multicore (as well as hyperthreading, to some extent) technology was introduced to physical servers.  What VMware doesn’t advertise is that the limitation being discussed usually revolves around software licensing, a per-socket license model to be precise, which is what many software vendors still use.  For example, if I own a piece of software and I have a single socket license, traditionally I was only able to use this software inside of a single vCPU VM.  With Multicore Virtual CPUs, virtual machines have caught up with their physical hardware counterparts in that a single socket VM can be created which has 4 cores per socket.

Using the working example, the advantage I have now is that I can run my application inside a VM which still has 1 socket, but 4 cores, for a net result of 4 vCPUs instead of just 1 vCPU.  I didn’t have to pay my software vendor additional money for the added CPU power.  To show how this translates into dollars and cents, let’s assume a per socket license cost of $1,000 for my application and then extrapolate those numbers using VMware’s example above of how CPUs can be assigned in terms of sockets and cores:

  • Four sockets with one core per socket = $1,000 x 4 sockets = $4,000 net license cost, 4 CPUs
  • Two sockets with two cores per socket = $1,000 x 2 sockets = $2,000 net license cost, 4 CPUs
  • One socket with four cores per socket = $1,000 x 1 socket = $1,000 net license cost, 4 CPUs
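The per-socket cost arithmetic generalizes to a one-liner; again, this is just a sketch (the function name and the $1,000 default are illustrative, not from any vendor price list):

```python
def per_socket_license_cost(total_vcpus, cores_per_socket, cost_per_socket=1000):
    """Total cost under a per-socket license model for a VM whose
    total_vcpus vCPUs are arranged as cores_per_socket cores per socket."""
    if total_vcpus % cores_per_socket != 0:
        raise ValueError("vCPU count must divide evenly into whole sockets")
    sockets = total_vcpus // cores_per_socket
    return sockets * cost_per_socket
```

Plugging in the three layouts above reproduces the $4,000, $2,000, and $1,000 figures.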

Now, all of this said, the responsibility is on the end user to be in license compliance with his or her software vendors.  Just because you can do this doesn’t mean you’re legally entitled to do so.  Be sure to read your EULA and check with your software vendor or reseller before implementing VMware Multicore Virtual CPUs.

Implementation of Multicore Virtual CPUs was quite straightforward in VMware Workstation.  Upon creating a new VM or editing an existing VM’s settings, the following interface was presented for configuring vCPUs and cores per vCPU in VMware Workstation.  In this example, a 2xDC (Dual Core) configuration is being applied, which results in a total of 4 CPU cores serving the VM’s operating system, applications, and users.  Note that here, the term “processors” on the first line translates to “sockets”:


Making the same 2xDC CPU configuration in vSphere 4.1 isn’t difficult, but it is nonetheless done differently.  Configuring total vCPUs and cores per vCPU is achieved by applying configurations in two different areas of the VM configuration.  The combination of the two settings produces a mathematical calculation which ultimately determines cores per vCPU.

First of all, the total number of cores (processors) is selected in the VM’s CPU configuration.  This hasn’t changed and should be familiar to you.  The number of cores (processors) available for selection here is going to be 1 thru 4, or 1 thru 8 if you have Enterprise Plus licensing.  I’ve purposely included the notation of VM hardware version 7, which is required.  An inconsistency here compared to VMware Workstation is that the term “virtual processors” translates to “cores”, not “sockets”:


Configuring the number of cores per processor is where VMware has deviated from the VMware Workstation implementation.  In ESX and ESXi, this configuration is made as an advanced setting in the .vmx file.  Edit the VM settings, navigate to the Options tab, and choose General in the Advanced options list.  Click the Configuration Parameters button, which allows you to edit the .vmx file on a row by row basis.  Click the Add Row button and add the line item cpuid.coresPerSocket.  For the value, you’re going to supply the number of cores per processor, which is generally going to be a value of 2, 4, or 8 (Enterprise Plus licensing required).  Note that using a value of 1 here would serve no practical purpose because it would configure a single core vCPU, which is what we’ve had all along up until this point:

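To make the end state concrete, a 4 vCPU VM arranged as two dual-core sockets (the same 2xDC example) would carry entries like these in its .vmx file.  The numvcpus line is the VM’s existing vCPU count entry; cpuid.coresPerSocket is the row added via Configuration Parameters.  Treat this pairing as my illustration rather than an excerpt from VMware documentation:

```ini
numvcpus = "4"
cpuid.coresPerSocket = "2"
```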

As a supplement, here are the requirements for implementing Multicore Virtual CPUs:

  • VMware vSphere 4.1 (vCenter 4.1, ESX 4.1 or ESXi 4.1).
  • Virtual Machine hardware version 7 is required.
  • The VM must be powered off to configure Multicore Virtual CPUs.
  • The total number of vCPUs for the VM divided by the number of cores per socket must be a positive integer.
  • The cpuid.coresPerSocket value must be a power of 2.  The documentation explicitly states a value of 2, 4, or 8 is required, but 1 works as well, although as stated before it would serve no practical purpose.
    • 2^0=1 (anything to the power of 0 always equals 1)
    • 2^1=2 (anything to the power of 1 always equals itself)
    • 2^2=4
    • 2^3=8
  • When you configure multicore virtual CPUs for a virtual machine, CPU Hot Add/Remove is disabled (previously called CPU hot plug).
  • You must be in compliance with the requirements of the operating system EULA.
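The two arithmetic constraints in that list (whole sockets, power-of-2 cores) are easy to check up front.  Here’s a sketch of a validator you might run before editing a .vmx; the function name is mine:

```python
def valid_multicore_config(total_vcpus, cores_per_socket):
    """Check the two documented constraints for Multicore Virtual CPUs:
    cores per socket is a power of 2, and the vCPU total divides
    evenly into whole sockets."""
    if total_vcpus <= 0 or cores_per_socket <= 0:
        return False
    power_of_two = (cores_per_socket & (cores_per_socket - 1)) == 0
    whole_sockets = total_vcpus % cores_per_socket == 0
    return power_of_two and whole_sockets
```

For example, 4 vCPUs at 2 cores per socket is valid, while 6 vCPUs at 4 cores per socket is not (1.5 sockets is not a positive integer).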

This feature rocks and I think customers have been waiting a long time for it.  Duncan mentioned it quite some time ago, but obviously it was unsupported at that time.  I am a little puzzled by the implementation mechanisms, mainly the configuration of the .vmx to specify cores per CPU.  I suppose it lends itself to scriptability and thus automation, but in that sense, we lack the flexibility to configure cores per CPU with guest customization when deploying VMs from a template.  Essentially this means cores per CPU needs to be hard coded in each of my templates, or cores per CPU needs to be manually tuned after deploying each VM from a template.  When I take a step back, I guess that’s no different than any other virtual hardware configuration stored in templates, but with the cores per CPU setting being buried in the .vmx as an advanced setting, it’s that much more of a manual/administrative burden to configure cores per CPU for each VM deployed than it is to simply change the number of CPUs or amount of RAM.  It would be nice if the guest customization process offered a quick way to configure cores per processor.