Archive for May, 2010

P2V Milestone

May 15th, 2010

If you’re reading this, that’s good news because it means last night’s P2V completed successfully.  I took the last remaining non-virtualized physical infrastructure server in the lab and made it a virtual machine.  Resource- and role-wise, this was the largest physical lab server next to the ESX hosts themselves.


  • HP Proliant DL380 G3
  • Dual Intel P4 2.8GHz processors
  • 6GB RAM
  • 1/2 TB  local storage
  • Dual Gb NICs
  • Dual fibre channel HBAs


  • Windows Server 2003 R2 Enterprise Edition SP2
  • File server
    • binaries
    • isos
    • my documents
    • thousands of family pictures
    • videos
  • Print server
  • IIS web server
    • WordPress blog
    • ASP.NET based family web site
    • other hosted sites
  • DHCP server
  • SQL 2005 server
    • vCenter
    • VUM
    • Citrix Presentation Server
  • MySQL server
    • WordPress blog
  • Backup server
  • SAN management

I’m shutting down this last remaining physical server as well as the tape library.  They’ll go in the pile of other physical assets which are already for sale, or they’ll be donated, as sales of 32-bit server hardware are slow right now.  This is a milestone because this server, named SKYWALKER (you may have heard me mention it from time to time), has been a physical staple in the lab for as long as the lab has existed (circa 1995).  Granted, it has gone through several physical hardware platform migrations, but its logical role is historic and its composition has always been physical.  To put it into perspective, at one point in time SKYWALKER was a Compaq Prosignia 300 server with a Pentium Pro processor and a single internal Barracuda 4.3GB SCSI drive.  Before I was able to acquire server-class hardware, it was built from hand-me-down whitebox parts from earlier gaming rigs.

The P2V (using VMware Converter) took a little over 5 hours for 500GB of storage.  That leaves the ESX hosts themselves as the only physical servers remaining in the lab: two DL385 G2s, plus two DL385s which typically remain powered down, earmarked for special projects.  A successful P2V is a great start to a weekend if you ask me.  Now I’m off to my daughter’s T-ball game. 🙂
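For the curious, that P2V pace works out to somewhere in the neighborhood of 28MB/s sustained.  A quick back-of-the-envelope calculation (assuming exactly 5 hours for the math; the actual run was a bit longer):

```python
# Rough P2V throughput estimate: 500GB moved in ~5 hours (assumed exact here).
gb_moved = 500
hours = 5

mb_moved = gb_moved * 1024       # GB -> MB
seconds = hours * 3600           # hours -> seconds
throughput = mb_moved / seconds  # MB/s

print(f"{throughput:.1f} MB/s")  # prints "28.4 MB/s"
```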

Microsoft Exchange 2003 to Exchange 2010 Upgrade Notes

May 14th, 2010

Last weekend I successfully upgraded, ahem, migrated the lab infrastructure from Microsoft Exchange 2003 to Exchange 2010.  This upgrade has been on my agenda for quite some time but I had been delaying it, mainly due to lack of time and thorough knowledge of the steps.  I had purchased the Microsoft Exchange Server 2010 Administrator’s Pocket Consultant (ISBN: 978-0-7356-2712-3) in January and marked up a few pages with a highlighter.  However, the deeper I got into the book, the more daunting the task seemed, even for a simple one-server environment like mine.  In my mind, Exchange has always been somewhat of a beast, with increasing levels of difficulty as new editions emerged.  The Pocket Consultant series of books is wonderfully technical, but the books haven’t been able to fit in my pocket for about a decade.  They contain so much content that it has become difficult to rely on them as a CliffsNotes guide for platform upgrades, especially when it comes to Exchange.

Then two things happened miraculously at the same time.  First, I was invited to a private beta test of a virtualization related iPad application.  As part of this test, I needed to be able to send email from my iPad.  I had been unsuccessful thus far in getting Microsoft Exchange ActiveSync to work with the iPad (even after following Stephen Foskett’s steps) and could only assume that it was due to several years of wear and tear on my Exchange 2003 Server.  I needed to get that upgrade to Exchange 2010 done quickly.  Second, the May 2010 issue of Windows IT Pro magazine showed up in my mailbox.  To my delight, it was chock full of Exchange 2010 goodness, including a cover story of “Exchange 2003 to Exchange 2010 Step-by-Step Exchange Migration”. I’m pretty sure this was divine intervention with the message being “Get it done this weekend, you can do this.”

The upgrade article by Michael B. Smith started on page 26 and was 100% in scope.  The focus was a single-server Exchange environment upgrade from 2003 to 2010.  I read the seven-page article in its entirety, marking up key “to-do” steps with a highlighter.  Following are some things I learned along the way:

  1. Naturally the Exchange server is virtualized on VMware vSphere.
  2. My Exchange environment is built upon a foundation that dates back as far as Exchange 5.5 (pre-Active Directory).  There would be no in-place upgrade; Exchange hasn’t offered one since Exchange 2003.  That suited me just fine, as the Exchange 2003 server had been through so much neglect that, although it had gotten pretty slow, it’s a miracle it was still functional.  The migration consists of bringing up a fresh OS with a new installation of Exchange, migrating the mailboxes and services, and then retiring the old Exchange server.  Microsoft calls this a migration rather than an upgrade.
  3. Exchange must be running in Native mode.  Not a problem, I was already there.
  4. Pre-migration, Microsoft recommends installing a hotfix on the Exchange 2003 server.
  5. The Schema Master must be running Windows Server 2003 SP1 or higher.
  6. There needs to be at least one Global Catalog server at Windows Server 2003 SP1 or higher in the Exchange site.
  7. The AD forest needs to be at Server 2003 Forest Functional Level or higher.
  8. The AD domain needs to be at Server 2003 Domain Functional Level or higher.
  9. For migration flexibility purposes, Exchange 2003 and Exchange 2010 both support DFL and FFL up to Server 2008 R2.
  10. Exchange 2010 requires 64-bit hardware.  No problem, that requirement was met with vSphere.
  11. Exchange 2010 can be installed on Windows Server 2008 or Windows Server 2008 R2.  I naturally opted for R2.  No sense in deploying a two-year old OS when a more current one exists and is supported.  Plus, I personally need more exposure to 2008 and R2… 2003 is getting long in the tooth.
  12. Copy the Exchange DVD to a data/utility drive on the server.  Reason being, you can drop the most recent rollup available into the \Updates\ folder and basically perform a slipstream installation of Exchange with the most recent rollup applied out of the gate.  As of this writing, the most current is Rollup 3.
  13. Here’s a big time saver: install the server roles and features Exchange 2010 requires using the provided script on the DVD:
    \scripts\ServerManagerCmd -ip Exchange-Typical.xml -restart
    Other sample pre-requisite installer scripts can be found here.
  14. The 2007 Office System Converter: Microsoft Filter Pack (x64) is required to be installed.  This is downloadable from Microsoft’s website.  A little strange, but I’ll play along.  It’s required for the Exchange full-text search engine to search Office format documents.
  15. Run the following commands for good measure. It may or may not be required depending on what’s been done to the server so far:
    sc config NetTcpPortSharing start= auto
    net start NetTcpPortSharing
  16. Setup logs for Exchange are found in C:\ExchangeSetupLogs\.  The main one is ExchangeSetup.log.  Hopefully you won’t have to rely on these logs and you are blessed with a trouble-free installation.
  17. There are the usual Active Directory preparatory steps to expand the Schema which seem to have increased in quantity but I could be hallucinating:
    1. /PrepareLegacyExchangePermissions
    2. /PrepareSchema
    3. /PrepareAD
    4. /PrepareAllDomains
  18. Installation can be invoked from the CLI with /mode:install /roles:ca,ht,mb; however, I chose a GUI installation, which was more intuitive for me.
  19. The article stated the installation would take at least 20 minutes on fast hardware.  My installation took less than 15 minutes on a VM hosted by four year old servers attached to fibre channel EMC Celerra storage – bitchin.
  20. A Send connector is required before Exchange 2010 will route mail to the internet.
  21. Exchange 2010 ships with two Receive connectors but they must be configured before they will accept anonymous email from the internet.
  22. Exchange 2010 is managed by the Exchange Management Console which is called the EMC for short.  That will be easy to remember.
  23. Exchange 2010 is also managed by PowerShell scripts (also called an Exchange Management Shell, or EMS for short).  There are some configuration tasks which can only be made via PowerShell script and not via the EMC.
  24. Lend your end users and Helpdesk staff a hand by creating a meta-refresh document in C:\inetpub\wwwroot\ which points to https://<mail_server_fqdn>/owa effectively teleporting them into Outlook Web App (did you catch the name change? no more Outlook Web Access)
  25. Mailboxes are no longer moved online due to their potential size and problems which may occur if a mailbox is accessed during migration.  Mailbox migrations are now handled via the EMC by way of a Move Request (either local [same org] or remote [different org]).  When a move request is submitted, the process begins immediately but may take some time to complete, based on the size of the mailboxes as well as the quantity of mailboxes selected for the move request.  Tony Redmond wrote a decent article on how this is done.  Scheduled move requests can be instantiated via PowerShell script.
  26. One of the final steps of a successful migration is properly decommissioning the old Exchange 2003 environment.  This is where things got a little hairy, and I was only half surprised.  Upon attempting to uninstall Exchange 2003 to properly remove its tentacles from Active Directory and the Exchange organization, I was greeted by two errors in the following message:
    5-9-2010 9-16-31 PM
    In the legacy Exchange 2003 System Manager, there are two Recipient Update policies which exist.  Going from memory, one was for the domain which I was able to remove easily, and one was an Enterprise policy which cannot be removed via the System Manager.  Follow the instructions near the end of this article for the procedure to modify Active Directory with adsiedit.
    The second error message deals with removal of the legacy Routing Group Connector.  There were actually two which needed to be removed.  The only way to remove the Routing Group Connector is via PowerShell and it is also described towards the end of this article.
  27. After addressing the issues above, the uninstaller ran briefly and then failed for an unknown reason.  Upon attempting to re-run the uninstall, I noticed the ability to uninstall Exchange 2003 via Add/Remove Programs in the Control Panel had disappeared, as if it had been successfully uninstalled.  Clearly it was not, as the Exchange services still existed, were running, and I could launch System Manager and manage the organization.
  28. ActiveSync doesn’t work out of the box on privileged administrator level accounts due to security reasons.  If you accept the risk, this behavior can be changed by enabling the inheritance checkbox on the user account security property sheet.
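The OWA redirect in step 24 is just a meta-refresh document dropped in the web root.  Here’s a minimal sketch that generates one; mail.example.com is a placeholder FQDN, substitute your own server’s name:

```python
# Generate a meta-refresh page that redirects the web root to OWA.
# "mail.example.com" is a placeholder FQDN; substitute your mail server's name.
owa_url = "https://mail.example.com/owa"

html = f"""<html>
<head>
  <meta http-equiv="refresh" content="0; url={owa_url}" />
</head>
<body>
  <p>Redirecting to <a href="{owa_url}">Outlook Web App</a>...</p>
</body>
</html>
"""

# On the server, save this as index.htm (or default.htm) in C:\inetpub\wwwroot\
with open("index.htm", "w") as f:
    f.write(html)
```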

I’m pretty happy with the results.  The process took quite a few steps but I am nonetheless pleased.  Careful work following a very nicely outlined procedure by Michael B. Smith has yielded both a snappy-fast Exchange 2010 server on Windows Server 2008 R2 as well as ActiveSync integration with my iPad.  Exchange 2010 is a beast.  I can’t imagine tackling an Exchange project for anything larger than the smallest of environments.  I’m not sure how I can have so many years of experience managing my own small Exchange environment yet still lack confidence in the technology.  I guess it mostly runs itself and, as I said earlier, it’s quite resilient, meaning it doesn’t require much care and feeding from me.  And thank God for that.

NetApp disk replacement – so easy a caveman and his tech savvy neighbor can do it

May 13th, 2010

The NetApp filer in the lab recently encountered a failed disk.  With the failed disk confirmed dead and removed, and the replacement disk added, I made my first attempt at replacing a failed disk in a NetApp filer.

fas3050clow*> disk assign 0a.29
disk 0a.29 (S/N 3HY0T1GG00007342W9NJ) is already owned by system cr2conffd03 (ID
disk assign: Assign failed for one or more disks in the disk list.

Detour.  The following parsed output confirmed this disk had ownership information from a previous filer in its DNA:

fas3050clow*> disk show -a
  DISK       OWNER                  POOL   SERIAL NUMBER
------------ ----------------------  -----  --------------------
0a.29        cr2conffd03(84173417)   Pool0  3HY0T1GG00007342W9NJ

Quick help from the community set me in the right direction.  A few commands accomplished the required task:

fas3050clow*> priv set advanced
fas3050clow*> disk assign 0a.29 -s unowned -f
Note: Disks may be automatically assigned to this node, since option disk.auto_assign is on.
fas3050clow*> disk assign 0a.29
Thu May 13 13:30:56 CDT [fas3050clow: diskown.changingOwner:info]: changing ownership for disk 0a.29 (S/N 3HY0T1GG00007342W9NJ) from unowned (ID -1) to fas3050clow (ID 101175198)
Thu May 13 13:30:56 CDT [fas3050clow: HTTPPool00:warning]: HTTP XML Authentication failed from
fas3050clow*> Thu May 13 13:30:56 CDT [fas3050clow: diskown.RescanMessageFailed:warning]: Could not send rescan message to fas3050clow. Please type disk show on the console of fas3050clow for it to scan the newly inserted disks.
Thu May 13 13:30:56 CDT [fas3050clow: raid.assim.label.upgrade:info]: Upgrading RAID labels.
Thu May 13 13:30:57 CDT [fas3050clow: disk.fw.downrevWarning:warning]: 1 disks have downrev firmware that you need to update.
Thu May 13 13:31:00 CDT [fas3050clow: monitor.globalStatus.ok:info]: The system's global status is normal.

Shortly after, the firmware on the replacement disk was automatically upgraded:

Thu May 13 13:31:18 CDT [fas3050clow: dfu.firmwareDownloading:info]: Now downloading firmware file /etc/disk_fw/X274_SCHT6146F10.NA16.LOD on 1 disk(s) of plex [

I confirmed via NetApp System Manager (my GUI crutch), that the replaced disk is now a spare for the two aggregates configured on/owned by the head.  I then updated the storage array spreadsheet I maintain which tracks disks, spares, arrays, luns, aggregates, volumes, exports, groups, pools, etc. for the various lab storage.

One additional item I learned from a NetApp engineer is that spares are not meant to remain static.  Rather, the spare role is designed to float around to different disks, as failures can and will occur.  This is a habit I’m learning to break, as it contradicts the management of older storage arrays, where spares pressed into active duty were later deactivated when the failed disk was replaced.

As Erick Moore suggests in the comments, don’t forget to exit privileged mode when done:

fas3050clow*> priv set

Jason Langer, the spreadsheet is really nothing special. Merely a tool I use to keep track of the storage configurations. Following is a screenshot:

SnagIt Capture

NetApp Deduplication

May 10th, 2010

Quick tip if you use NetApp filer storage and you’d like to enable Deduplication (dedupe) and actually have it work as it was designed: Size the volumes and aggregates according to NetApp Deduplication for FAS and V-Series Deployment and Implementation Guide (TR-3505). What happens if you don’t provide enough breathing room for dedupe to run? In my experience, it runs for a second or two and completes successfully, but it does not deduplicate any data.

The deduplication metadata overhead space required boils down to a few variables: Data ONTAP version, volumes, and the data within the volumes.  All three factor into the calculation.  Specifically, look at the tail end of section 3.3 on pages 17-18.

For Data ONTAP 7.3.x, which is what I have on the NetApp FAS 3050c filer, the following calculation applies:

1. Volume deduplication overhead – for each volume with deduplication enabled, up to 2% of the logical amount of data written to that volume will be required in order to store volume dedupe metadata.  This is free space needed in the volume.

2. Aggregate deduplication overhead – for each aggregate that contains any volumes with dedupe enabled, up to 4% of the logical amount of data contained in all of those volumes with dedupe enabled will be required in order to store the aggregate dedupe metadata.  This is free space needed in the aggregate.

An example used in the document:

If 100GB of data is to be deduplicated within a single volume, then there should be 2GB worth of available space within the volume and 4GB of space available within the aggregate.

Could be visualized as:

 5-10-2010 8-34-44 PM

A second example with multiple volumes:

Consider a 2TB aggregate containing 4 volumes, each 400GB in size, where three volumes are to be deduplicated, with 100GB of data, 200GB of data, and 300GB of data respectively. The volumes will need 2GB, 4GB, and 6GB of space within the respective volumes; and the aggregate will need a total of 24GB ((4% of 100GB) + (4% of 200GB) + (4% of 300GB) = 4+8+12 = 24GB) of space available within the aggregate.

Could be visualized as:

5-10-2010 8-55-30 PM
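The TR-3505 arithmetic above is simple enough to script.  A quick sketch that reproduces both worked examples; the 2% and 4% figures are the Data ONTAP 7.3.x maximums from the guide:

```python
# Dedupe metadata overhead per TR-3505 for Data ONTAP 7.3.x:
# up to 2% of the logical data must be free in each deduped volume,
# up to 4% of the combined logical data must be free in the aggregate.
VOLUME_OVERHEAD = 0.02
AGGREGATE_OVERHEAD = 0.04

def dedupe_overhead(volume_data_gb):
    """volume_data_gb: logical GB of data in each volume with dedupe enabled."""
    per_volume = [round(gb * VOLUME_OVERHEAD, 2) for gb in volume_data_gb]
    aggregate = round(sum(volume_data_gb) * AGGREGATE_OVERHEAD, 2)
    return per_volume, aggregate

# First example: a single volume with 100GB of data to deduplicate
print(dedupe_overhead([100]))            # prints "([2.0], 4.0)"

# Second example: three volumes with 100GB, 200GB, and 300GB of data
print(dedupe_overhead([100, 200, 300]))  # prints "([2.0, 4.0, 6.0], 24.0)"
```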

If you’ve got a filer in which to carve out some storage which needs to be deduplicated, you can go about the calculation from a few different directions. 

  • You can start with the aggregate, whose size will be determined by spindles and protection level, then plug in the remaining numbers to come up with a volume size and maximum data set size. 
  • Or maybe you already have the size of the data set which needs to be deduplicated.  In this case, you can work the other way and determine the size of the volume required (leaving 2% available) as well as the size of the aggregate (leaving 4% available).

I take full credit for the MS Excel diagrams above. Eat your heart out Hany Michael 🙂

Update 5/13/10:  Here’s another item I stumbled on… Once dedupe completes, it may not reflect any savings in the “Space saved” field highlighted below.  In my case, this occurred because the iSCSI LUN carved out of the volume was not thin provisioned. 

5-13-2010 6-38-08 PM

Vaughn Stewart of NetApp explained it as follows:

With NetApp, thick provisioned LUNs reserve space in the FlexVol. In other words it is a storage accounting function and not a fully written out file (like a VMDK).

If data in the LUN is deduped, the savings cannot be displayed if the thick LUN reservation is in place. Toggle the LUN to thin and savings should magically appear.

There is absolutely no change in the data layout or performance with thick or thin LUNs (thus why you can toggle the function).

This was resolved by editing the LUN properties and unchecking the “Reserved” box, and then rerunning the deduplication process on the volume.

QuickPress – VMs Per…

May 7th, 2010

I’m trying out my first QuickPress. Let’s see how this turns out.
Right off the bat, I’m missing the autocomplete feature for Tags. As it turns out, typing more than three lines in the small content box isn’t much fun.

On with the VMware content… This all comes from the VMware vSphere Configuration Maximums document.  I’ve bolded some of what I’d call core stats which capacity planners or architects would need to be aware of on a regular basis:

15,000 VMs registered per Linked-mode vCenter Server
10,000 powered on VMs per Linked-mode vCenter Server
4,500 VMs registered per 64-bit vCenter Server
4,000 VMs concurrently scanned by VUM (64-bit)
3,000 powered on VMs per 64-bit vCenter Server
3,000 VMs registered per 32-bit vCenter Server
3,000 VMs connected per Orchestrator
2,000 powered on VMs per 32-bit vCenter Server
1,280 powered on VMs per DRS cluster
320 VMs per host (standalone)
256 VMs per VMFS volume
256 VMs per host in a DRS cluster
200 VMs concurrently scanned by VUM (32-bit)
160 VMs per host in HA cluster with 8 or fewer hosts (vSphere 4.0 Update 1)
145 powered on Linux VMs concurrently scanned per host
145 powered on Linux VMs concurrently scanned per VUM server
145 VMs per host scanned for VMware Tools
145 VMs per host scanned for VMware Tools upgrade
145 VMs per host scanned for virtual machine hardware
145 VMs per host scanned for virtual machine hardware upgrade
145 VMs per VUM server scanned for VMware Tools
145 VMs per VUM server scanned for VMware Tools upgrade
100 VMs per host in HA cluster with 8 or fewer hosts (vSphere 4.0)
72 powered on Windows VMs concurrently scanned per VUM server
40 VMs per host in HA cluster with 9 or more hosts
10 powered off Windows VMs concurrently scanned per VUM server
6 powered on Windows VMs concurrently scanned per host
6 powered off Windows VMs concurrently scanned per host
5 VMs per host concurrently remediated

Got all that?

Update 5/10/10: Added the row 160 VMs per host in HA cluster with 8 or fewer hosts (vSphere 4.0 Update 1) – Thanks for the catch Matt & Joe!

VKernel Capacity Analyzer

May 6th, 2010

Last month, I attended Gestalt IT Tech Field Day in Boston.  This is an independent conference made up of hand-selected delegates and sponsored by the technology vendors we visited.  All of the vendors boast products which tie into a virtualized datacenter, which made the event particularly exciting for me!

One of the vendors we met with is VKernel.  If you’re a long time follower of my blog, you may recall a few of my prior VKernel posts including VKernel CompareMyVM.  Our VKernel briefing covered Capacity Analyzer.  This is a product I actually looked at in the lab well over a year ago, but it was time to take another peek to see what improvements have been made.

Before I get into the review, some background information on VKernel:

VKernel helps systems administrators manage server and storage capacity utilization in their virtualized datacenters so they can:

  • Get better utilization from existing virtualization resources
  • Avoid up to 1/2 the cost of expanding their virtualized datacenter
  • Find and fix or avoid capacity related performance problems

VKernel provides easy to use, highly affordable software for systems managers that:

  • Integrates with their existing VMware systems
  • Discovers their virtualized infrastructure and
  • Determines actual utilization vs. provisioned storage, memory, and CPU resources

And the VKernel Capacity Analyzer value proposition:

Capacity Analyzer proactively monitors shared CPU, memory, network, and disk (storage and disk I/O) utilization trends in VMware and Hyper-V environments across hosts, clusters, and resource pools enabling you to:

  • Find and fix current and future capacity bottlenecks
  • Safely place new VMs based on available capacity
  • Easily generate capacity utilization alerts

Capacity Analyzer lists for $299/socket, however, VKernel was nice enough to provide each of the delegates with a 10 socket/2 year license which was more than adequate for evaluation in the lab.  From this point forward, I will refer to Capacity Analyzer as CA.

One of the things another delegate and I noticed right away was the quick integration and immediate results.  CA 4.2 Standard Edition ships as a virtual appliance in OVF or Converter format.  The 32-bit SLES VM is pre-built, pre-configured, and pre-optimized for its designed role in the virtual infrastructure.  The 600MB appliance deploys in just minutes.  The minimum deployment tasks consist of network configuration (including DHCP support), licensing, and pointing at a VI3 or vSphere virtual infrastructure.

CA is managed via an HTTP web interface which has seen noticeable improvement and polish since the last time I reviewed the product.  The management and reporting interface is presented in a dashboard layout which makes use of the familiar stoplight colors.  A short time after deployment, I was already seeing data being collected.  I should note that the product supports management of multiple infrastructures; I pointed CA at VI3 and vSphere vCenters simultaneously.

5-5-2010 10-58-08 PM

One of the dashboard views in CA is the “Top VM Consumers” for metrics such as CPU, Memory, Storage, CPU Ready, Disk Bus Resets, Disk Commands Aborted, Disk Read, and Disk Write.  The dashboard view shows the top 5, however, detailed drilldown is available which lists all the VMs in my inventory.

5-5-2010 10-48-59 PM

Prior to deploying CA, I felt I had a pretty good feel for the capacity and utilization in the lab.  After letting CA digest the information available, I thought it would be interesting to compare the results provided by CA with my own perception and experience.  I was puzzled by the initial findings.  Consider the following information from vCenter for a physical two-node cluster.  Each node is configured identically with 2xQC AMD Opteron processors and 16GB RAM.  Each host is running about 18 powered-on VMs.  Host memory is, and always has been, my limiting resource, and it’s evident here; however, with HA admission control disabled, there is still capacity to register and power on several more “like” VMs.

5-5-2010 10-46-54 PM

So here’s where things get puzzling for me.  Looking at the Capacity Availability Map, CA is stating
1) Memory is my limiting resource – correct
2) There is no VM capacity left on the DL385 G2 Cluster – that’s not right

5-5-2010 10-46-01 PM

After further review, the discrepancy is revealed.  The Calculated VM Size (slot size, if you will) for memory is 3.5GB.  I’m not sure where CA is coming up with this number.  It’s not the HA-calculated slot size; I checked.  3.5GB is nowhere near the average VM memory allocation in the lab.  Most of my lab VMs are thinly provisioned from a memory standpoint since host memory is my limiting resource.  I’ll need to see if this can be adjusted, because these numbers are not accurate and thus not reliable.  I wouldn’t want to base a purchasing decision on this information.

5-5-2010 10-59-20 PM

Here’s an example of a drilldown.  Again, I like the presentation, although this screen seems to have some justification inconsistencies (right vs. center).  Reports in CA can be saved in .PDF or .CSV format, making them ideal for sharing, collaboration, or archiving.  Another value-add is a recommendation section stated in plain English, in the event the reader is unable to interpret the numbers.  What I’m somewhat confused about is the fact that the information provided in different areas is contradictory.  In this case, the summary reports that VM backupexec “is not experiencing problems with memory usage… the VM is getting all required memory resources”.  However, it goes on to say there is a problem in that there exists a memory usage bottleneck and that the VM may experience performance degradation if memory usage increases.  Finally, it recommends increasing the VM memory size to almost double the currently assigned value, and this priority is ranked as High.

5-5-2010 10-42-01 PM

It’s not clear to me from the drilldown report whether there is a required action here or not.  With the high priority status, there is a sense of urgency, but to do what?  The analysis states performance could suffer if memory usage increases.  That will typically be the case for virtual and physical machines alike.  The problem as I see it is that the analysis is concerned with a future event which may or may not occur.  If the VM has shown no prior history of higher memory consumption and there is no change to the application running in the VM, I would expect the memory utilization to remain constant.  VKernel is on the right track, but I think the out-of-box logic needs tuning so that it is more intuitive.  Otherwise this is a false alarm which would either cause me to overallocate host capacity or which I would learn to ignore, which is dangerous and provides no return on investment in a management tool.

I’ve got more areas to explore with VKernel Capacity Analyzer, and I welcome input, clarification, and corrections from VKernel.  Overall I like the direction of the product, and I think VKernel has the potential to service capacity planning needs for virtual infrastructures of all sizes.  The ease of deployment provides rapid return.  As configuration maximums and VM densities increase, capacity planning becomes more challenging.  When larger VMs are deployed, significant dents are made in the virtual infrastructure, causing shared resources to deplete more rapidly per instance than in years past.  Additional capacity takes time to procure.  We need to be able to lean on tools like these to provide automated analysis and alarms so we can stay ahead of capacity requests and not be caught short on infrastructure resources.