Posts Tagged ‘3rd Party Apps’

Registered Storage Providers Missing After vCenter 5.5 Update 1 Upgrade

March 17th, 2014

While reviewing VM Storage Policy compliance in the vSphere Client, I noticed that none of my configured virtual machines were compliant with their assigned VM Storage Policy named “Five Nines Compellent Storage”.  Oddly enough, the virtual machine home directories and virtual disks were in fact on the correct datastores and had shown as compliant a few days earlier. None had been migrated via Storage vMotion or SDRS.

Snagit Capture

Now you see it, now you don’t

I then verified my VASA configuration by looking at the status of my registered storage provider.  The issue was not so much that the provider was malfunctioning, but rather it was missing completely from the registered storage providers list.  This indeed explains the resulting Not Compliant status of my virtual machines.

Snagit Capture

I checked another upgraded environment where I know I had a registered VASA storage provider.  It reflected the same symptom and confirmed my suspicion that the recent process of upgrading the vCenter Server 5.5 appliance to Update 1 (via the web repository method) may have unregistered the storage provider once the reboot of the appliance was complete.

I had one more similar environment remaining which I had not upgraded yet. I verified the storage provider was registered and functioning prior to the Update 1 upgrade. I proceeded with the upgrade and after the reboot completed the storage provider was no longer registered.

What remains a mystery at this point is the root cause of the unregistered storage provider.  I was unable to find any VMware KB articles related to this issue.

Not the end of the world

The workaround is straightforward: re-register each of the missing storage providers.  For Dell Compellent customers, the storage provider points to the CITV (Compellent Integration Tools for VMware) appliance, and the URL follows the format:

https://fqdn:8443/vasa/services/vasaService
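As a quick sanity check before re-registering, it can help to compose the exact URL and confirm the endpoint is reachable.  A minimal shell sketch, with a hypothetical appliance FQDN of citv.example.com substituted into the format above:

```shell
# Compose the CITV VASA provider registration URL (FQDN is a placeholder)
fqdn="citv.example.com"
vasa_url="https://${fqdn}:8443/vasa/services/vasaService"
echo "$vasa_url"
```

Reachability can then be confirmed with `curl -k "$vasa_url"` before registering (the -k flag skips validation of the appliance's self-signed certificate).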

Snagit Capture

Dell Compellent customers should also keep the following in mind for VASA integration:

  • The integration requires the CITV appliance and Enterprise Manager 6.1 or newer.
  • The out-of-box Windows Server firewall configuration on the server hosting Enterprise Manager will block the initial VASA configuration in the CITV appliance. Allow TCP 3033 inbound, or alternatively disable the Windows Firewall (not recommended).
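For the firewall item above, the inbound rule can be added on the Enterprise Manager server with one command (the rule name here is arbitrary; run from an elevated prompt):

```shell
# Allow inbound TCP 3033 for the CITV appliance's VASA communication with Enterprise Manager
netsh advfirewall firewall add rule name="CITV VASA" dir=in action=allow protocol=TCP localport=3033
```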

Once the applicable storage provider(s) are added back, no additional VM Storage Policy reconfiguration is required other than to check for compliance.  All VMs should fall back into compliance.

Snagit Capture

Once again, I am unsure at this point why applying vCenter 5.5 Update 1 to the appliance caused the registered storage providers to go missing, or what the connection is.  I will also add that I deployed additional vCenter 5.5 appliances under vCloud Director with a default configuration and no registered vSphere hosts, registered a VASA storage provider, upgraded to Update 1, rebooted, and the storage provider remained.  I’m not sure what element in these subsequent tests changed the outcome, but the problem now appears to be inconsistent.  If I do see it again and find a root cause, as per usual I will be sure to update this article. To reiterate, Update 1 was applied in this case via the web repository method.  There are a few other methods available to apply Update 1 to the vCenter Server appliance, and of course there is also the Windows version of vCenter Server – it is unknown to me whether these other methods and versions are impacted the same way.

Looks like someone has a case of the Mondays

On a somewhat related note, during lab testing I did find that VM Storage Profiles configured via the legacy vSphere Client do not show up as configured VM Storage Policies in the next gen vSphere Web Client.  Likewise, VM Storage Policies created in the next gen vSphere Web Client are missing in the legacy vSphere Client.  However, registered storage providers themselves carry over from one client to the other – no issue there.  I guess the lesson here is to stick with a consistent method of creating, applying, and monitoring Profile-Driven Storage in your vSphere environment from a vSphere Client perspective.  As of the release of vSphere 5.5 going forward, that should be the next gen vSphere Web Client.  However, this client still seems to lack the ability to identify VASA provided storage capabilities on any given datastore although the entire list of possible capability strings is available by diving into VM Storage Policy configuration.

Last but not least, VMware KB 2004098 vSphere Storage APIs – Storage Awareness FAQ provides useful bits of information about the VASA side of vSphere storage APIs.  One item in that FAQ that I’ve always felt was worded a bit ambiguously in the context of vSphere consolidation is:

The Vendor Provider cannot run on the same host as the vCenter Server.

In most cases, the vCenter Server as well as the VASA integration component(s) will run as virtual machines.  Worded as is, it would seem the vCenter Server (whether Windows or appliance based) cannot reside on the same vSphere host as the VASA integration VM(s).  That’s not at all what the statement means, and moreover it wouldn’t make much sense.  What it’s talking about is the use case of a Windows based vCenter Server: Windows based VASA integration components must not be installed on the same Windows server hosting vCenter Server.  For Dell Compellent customers, the VASA integration comes by way of the CITV appliance, which runs atop a Linux platform. However, the CITV appliance does communicate with the Windows based Enterprise Manager Data Collector for VASA integration.  Technically, EM isn’t the provider; the CITV appliance is.  Personally I’d keep the EM and vCenter Server installations separate.  Both appreciate larger amounts of CPU and memory in larger environments, and for the sake of performance, we don’t want these two fighting for resources during times of contention.

VMware Releases vSphere PowerCLI 5.5 R2

March 12th, 2014

I stumbled across some interesting news shared by Alan Renouf on Facebook this morning – an R2 release of vSphere PowerCLI 5.5 (Build 1649237).  New in R2 per the release notes:

  • Access to the vCenter Server SRM public API (Connect-SRMServer and Disconnect-SRMServer cmdlets) – an exciting addition for sure
  • Support for adding and removing tags and tag categories found in the next generation vSphere web client
  • Configuration and reporting of EVC mode for vSphere clusters
  • Management of security policies for the vSS and its portgroups
  • New support for MS Windows PowerShell 4.0
  • Support for vSphere hosts configured for IPv6
  • Added migration priority support for vMotion (VMotionPriority parameter in conjunction with the Move-VM cmdlet)
  • Get-Datastore cmdlet
    • RelatedObject parameter extended to accept the Harddisk object
    • now allows filtering by cluster
  • Enhanced Get-Stat and Get-StatType cmdlets
  • Support added for e1000e vNICs
  • All values for DiskStorageFormat can be specified during VM cloning operations
  • 64-bit mode support for New-OSCustomizationSpec and Set-OSCustomizationSpec cmdlets
  • ToolsVersion property added to VMGuest which returns a string
  • Get-VirtualSwitch and Get-DVSwitch cmdlets support virtual port groups as a RelatedObject
  • Get-VM cmdlet enhanced to retrieve a list of VMs by virtual switch
  • Miscellaneous bug fixes

VMware vSphere PowerCLI 5.5 R2 supports vSphere 4.1 through vSphere 5.5 as well as Microsoft Windows PowerShell versions 2.0, 3.0, and (new in R2) 4.0.

Thank you Alan and thank you VMware!

VMTurbo’s Disruptive Software-Driven Control Expands Across Storage and Fabric To Realize Full Value of Virtualization

February 9th, 2014

Press Release

VMTurbo’s Disruptive Software-Driven Control Expands Across Storage and Fabric To Realize Full Value of Virtualization

VMTurbo Operations Manager Enables Customers to Realize 30% Improvement in Utilization While Assuring Application Workload Performance

BOSTON, MA – January 27, 2014 – VMTurbo, provider of the only Software-Driven Control for virtualized environments, today announced a new version of its flagship product, VMTurbo Operations Manager, enhanced with control modules for storage and fabric to drive virtualized environments to their desired state and maintain control in that state across the data center and IT stack.  These new solutions enable 30% improvement in utilization while providing greater control over all aspects of the environment the application workload touches – from compute and storage to fabric and cloud.

One of the major advancements in this release is management of the Converged Fabric layer with Cisco (CSCO) UCS support. Not only does VMTurbo provide unprecedented visibility into UCS from the fabric interconnect down to individual blades, it also enables control of UCS to manage real demand for UCS ports to maximize port utilization and avoid unnecessary port licensing costs.

“We’ve made a significant investment in UCS and are happy with it but it’s a challenge to manage,” said Jonathan Brown, Desktop Administrator at Beaufort Memorial Hospital (www.bmhsc.org). “VMTurbo is the only solution we’ve found on the market that helps us understand the inner workings of UCS so we can better manage it. We love VMTurbo, and are excited for all the new features to help us manage the future growth of our environment.”

VMTurbo is also disrupting enterprise software with its model of “easy to try, buy, deploy, and use”. Customers download VMTurbo Operations Manager and realize value instantly – unlike traditional management software, which can take several months to install and tune and carries significant integration costs.  In fact, VMTurbo offers customers a free health check assessment of their virtual environments.  With VMTurbo, customers can break free from expensive monitoring solutions that fail to eliminate reactive, labor-intensive IT firefighting.

“I learned more about my data center in 15 minutes with VMTurbo than I did in the last five years,” said Chuck Green, CIO of AlphaMaxx Healthcare, Inc., the premiere NCQA-accredited perinatal population health management firm.  “It’s truly a paradigm shift.”

90% of customers that have implemented VMTurbo’s Software-Driven Control system to manage their virtualized data centers and cloud infrastructures report a return on investment in less than three months from purchase – an unparalleled breakthrough disrupting traditional enterprise management software (Source: TechValidate).

VMTurbo was recognized last week by Forbes as one of America’s Most Promising Companies for 2014.  Earlier in the year, VMTurbo received the JP Morgan Hall of Innovation award, being named one of the most innovative technologies in the data center.

“VMTurbo’s technology is helping JPMorgan Chase optimize the utilization of virtual environments and thereby supporting a move from reactive to predictive workload management,” said George Sherman, Head of Compute Services at JPMorgan Chase. “Automation will enable our support teams to focus on higher value activity by preventing incidents and dynamically optimizing virtual environments.” 

VMTurbo Operations Manager

VMTurbo Operations Manager is the only product on the market that understands application workload performance, resource utilization, and constraints in virtualized datacenter and cloud deployments in order to drive an organization’s environment to its desired state – that state of perpetual health where application performance is assured while efficiency is maximized – while providing control over all aspects of the environment the application workload touches, from compute and storage to fabric and cloud. While competitive solutions focus on viewing – monitoring systems that send alerts requiring operational staff to troubleshoot and remedy issues – VMTurbo Operations Manager ties the viewing to the doing, so IT Operations staff can elevate their focus from reactive to strategic. To try VMTurbo Operations Manager in your own environment, visit www.vmturbo.com/download or, for a free health check assessment, call 1.877.978.8818.

For more detailed information on VMTurbo Operations Manager, visit vmturbo.com/operations-manager.

VMTurbo Storage Control Module

VMTurbo’s Storage Control Module ensures applications get the storage performance they require to operate reliably while enabling efficient use of storage infrastructure – thus preventing unnecessary over provisioning.  This module helps users solve their pressing storage performance and cost challenges, maximize their existing storage investments and embrace the adoption of advanced features and packaging such as NetApp Clustered Data ONTAP (cluster mode) and FlexPod. For more detailed information on VMTurbo Storage Control Module, visit www.vmturbo.com/storage-resource-management.

VMTurbo Fabric Control Module

Modern compute platforms and blade servers have morphed into fabrics unifying compute, network, virtualization and storage access into a single integrated architecture.  Furthermore, fabrics like Cisco (CSCO) UCS form the foundation of a programmable infrastructure for today’s private clouds and virtualized datacenters, and the backbone of converged infrastructure offerings such as VCE Vblock and NetApp FlexPod.

With the addition of this Fabric Control Module, VMTurbo’s software-driven control system ensures workloads get the compute and network resources they need to perform reliably while maximizing the utilization of underlying blades and ports. For more detailed information on VMTurbo Fabric Control Module, visit www.vmturbo.com/ucs-management.

About VMTurbo

VMTurbo’s Software-Driven Control platform enables organizations to manage cloud and enterprise virtualization environments to maximize infrastructure investments while assuring application performance. VMTurbo’s patent-pending Economic Scheduling Engine dynamically adjusts configuration, resource allocation and workload placement to meet service levels and business goals, and is the only technology capable of closing the loop in IT operation by automating the decision-making process to maintain an environment in its desired state. The VMTurbo platform first launched in August 2010 and since that time more than 10,000 cloud service providers and enterprises worldwide have deployed the platform, including JP Morgan Chase, Colgate-Palmolive and Ingram Micro. Using VMTurbo, our customers ensure that applications get the resources they need to operate reliably, while utilizing their most valuable infrastructure and human resources most efficiently. For more information, visit www.vmturbo.com.

Storage Center 5.6 Released

November 25th, 2013

I don’t have the latest and greatest Dell Compellent SC8000 controllers or SC220 2.5″ drive enclosures in my home lab although I dream nightly about Santa unloading some on me this Christmas.  What I do have is an older Series 20 and I am thankful for that.  But having an older storage array doesn’t mean I cannot leverage some of the latest and greatest features and operating systems available for datacenters.

Storage Center 5.6 was released just a short time ago and it ushers in some feature and platform support currently built into Storage Center 6.x as well as a large number of bug fixes.  This is a big win for me and anyone with a 32-bit system (Series 30 or below) needing these features, because SCOS 6.x is 64-bit and runs only on Series 40 and newer controllers, which today include the SC8000.

So what are these new features in 5.6 and why am I so excited?  I’m glad you asked.  For this guy, at the top of the list is full support for all VAAI primitives.  Storage Center 5.5 and older supported the block zeroing primitive.  Space Reclamation was there as well, although that alone did not satisfy the other component of the thin provisioning primitive: STUN.

Shown below is a Storage Center 5.5 datastore where I lack Atomic Test and Set (aka Hardware Assisted Locking) and XCOPY.  I have block zeroing, plus Space Reclamation using the Free Space Recovery agent for vSphere guest VMs using physical RDMs. VAAI support status can be obtained in full using esxcli:

Snagit Capture
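The command behind that capture isn’t shown; on ESXi 5.x it is typically the following (the naa. device identifier below is a placeholder – substitute your own):

```shell
# Full VAAI support status for a single device (substitute your own naa. identifier)
esxcli storage core device vaai status get -d naa.6000d3100002b9000000000000000001

# Omit -d to list VAAI status for all attached devices
esxcli storage core device vaai status get
```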

Or in part using the vSphere Client GUI:

Snagit Capture

After the Storage Center 5.6 upgrade, I’ve got additional VAAI primitive support where Clone in most cases is going to be the biggest one in terms of fabric and host efficiency and performance. Not shown is support for Thin Provisioning Stun but that has been added as well:

Snagit Capture

The vSphere Client GUI now reflects full VAAI support after the 5.6 upgrade:

Snagit Capture

What else? Added support for vSphere 5.5 as an operating system type:

Snagit Capture

Last but not least, added support for Windows 2012 and some of its features including Offloaded Data Transfer, Thin Provisioning, Space Reclamation, and Server Objects:

Snagit Capture

Storage Center 5.6 also adds new storage features which are storage host agnostic such as Background Media Scans (BMS) as well as improved disk and HBA management for server objects.  And the bug fixes I mentioned earlier – refer to the SCOS 5.6 Release Notes for details.

To wrap this up, if you’ve got an older Storage Center model and you want support for these new features while avoiding a forklift upgrade, Storage Center Operating System 5.6 is the way to go.

vSphere 5.5 UNMAP Deep Dive

September 13th, 2013

One of the features that has been updated in vSphere 5.5 is UNMAP which is one of two sub-components of what I’ll call the fourth block storage based thin provisioning VAAI primitive (the other sub-component is thin provisioning stun).  I’ve already written about UNMAP a few times in the past.  It was first introduced in vSphere 5.0 two years ago.  A few months later the feature was essentially recalled by VMware.  After it was re-released by VMware in 5.0 Update 1, I wrote about its use here and followed up with a short piece about the .vmfsBalloon file here.

For those unfamiliar, UNMAP is a space reclamation mechanism used to return blocks of storage back to the array after data which was once occupying those blocks has been moved or deleted.  The common use cases are deleting a VM from a datastore, Storage vMotion of a VM from a datastore, or consolidating/closing vSphere snapshots on a datastore.  All of these operations, in the end, involve deleting data from pinned blocks/pages on a volume.  Without UNMAP, these pages, albeit empty and available for future use by vSphere and its guests only, remain pinned to the volume/LUN backing the vSphere datastore.  The pages are never returned back to the array for use with another LUN or another storage host.  Notice I did not mention shrinking a virtual disk or a datastore – neither of those operations are supported by VMware.  I also did not mention the use case of deleting data from inside a virtual machine – while that is not supported, I believe there is a VMware fling for experimental use.  In summary, UNMAP extends the usefulness of thin provisioning at the array level by maintaining storage efficiency throughout the life cycle of the vSphere environment and the array which supports the UNMAP VAAI primitive.

On the Tuesday during VMworld, Cormac Hogan launched his blog post introducing new and updated storage related features in vSphere 5.5.  One of those features he summarized was UNMAP.  If you haven’t read his blog, I’d definitely recommend taking a look – particularly if you’re involved with vSphere storage.  I’m going to explore UNMAP in a little more detail.

The most obvious change to point out is the command line itself used to initiate the UNMAP process.  In previous versions of vSphere, the command issued on the vSphere host was:

vmkfstools -y x (where x represents the % of storage to unmap)

As Cormac points out, UNMAP has been moved to esxcli namespace in vSphere 5.5 (think remote scripting opportunities after XYZ process) where the basic command syntax is now:

esxcli storage vmfs unmap

In addition to the above, there are also three switches available for use; of the first two listed below, one is required, and the third is optional.

-l|--volume-label=&lt;str&gt; The label of the VMFS volume to unmap the free blocks.

-u|--volume-uuid=&lt;str&gt; The uuid of the VMFS volume to unmap the free blocks.

-n|--reclaim-unit=&lt;long&gt; Number of VMFS blocks that should be unmapped per iteration.

Previously with vmkfstools, we’d change to the VMFS folder in which we were going to UNMAP blocks.  In vSphere 5.5, the esxcli command can be run from anywhere, so specifying the datastore name or the uuid is one of the required parameters for obvious reasons.  So using the datastore name, the new UNMAP command in vSphere 5.5 is going to look like this:

esxcli storage vmfs unmap -l 1tb_55ds

As for the optional parameter, the UNMAP command is an iterative process which continues through numerous cycles until complete.  The reclaim unit parameter specifies the quantity of blocks to unmap per each iteration of the UNMAP process.  In previous versions of vSphere, VMFS-3 datastores could have block sizes of 1, 2, 4, or 8MB.  While upgrading a VMFS-3 datastore to VMFS-5 maintains these block sizes, a native net-new VMFS-5 datastore uses a 1MB block size only.  Therefore, if a reclaim unit value of 100 is specified on a VMFS-5 datastore with a 1MB block size, then 100MB of data will be returned to the available raw storage pool per iteration until all blocks marked available for UNMAP are returned.  Using a value of 100, the UNMAP command looks like this:

esxcli storage vmfs unmap -l 1tb_55ds -n 100

If the reclaim unit value is unspecified when issuing the UNMAP command, the default reclaim unit value is 200, resulting in 200MB of data returned to the available raw storage pool per iteration assuming a 1MB block size datastore.
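The arithmetic above can be captured in a small helper – a sketch only, assuming the 1MB native VMFS-5 block size and the 200-block default:

```shell
# MB returned to the array per UNMAP iteration = reclaim unit (blocks) x block size (MB)
unmap_mb_per_iteration() {
  local reclaim_unit=${1:-200}   # vSphere 5.5 default when -n is unspecified
  local block_size_mb=${2:-1}    # native VMFS-5 block size
  echo $(( reclaim_unit * block_size_mb ))
}

unmap_mb_per_iteration 100   # prints 100
unmap_mb_per_iteration       # prints 200 (the default)
```

An upgraded VMFS-3 datastore retaining an 8MB block size would return 8x as much per iteration for the same reclaim unit value.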

One additional piece to note on the CLI topic: in the release candidate build I was working with, while the old vmkfstools -y command is deprecated, it appears to still exist, but with newer vSphere 5.5 functionality published in the --help section:

vmkfstools -y --reclaimBlocks vmfsPath [--reclaimBlocksUnit #blocks]

The next change involves the hidden temporary balloon file (refer to my link at the top if you’d like more information about the balloon file but basically it’s a mechanism used to guarantee blocks targeted for UNMAP are not in the interim written to by an outside I/O request until the UNMAP process is complete).  It is no longer named .vmfsBalloon.  The new name is .asyncUnmapFile as shown below.

/vmfs/volumes/5232dd00-0882a1e4-e918-0025b3abd8e0 # ls -l -h -A
total 998408
-r--------    1 root     root      200.0M Sep 13 10:48 .asyncUnmapFile
-r--------    1 root     root        5.2M Sep 13 09:38 .fbb.sf
-r--------    1 root     root      254.7M Sep 13 09:38 .fdc.sf
-r--------    1 root     root        1.1M Sep 13 09:38 .pb2.sf
-r--------    1 root     root      256.0M Sep 13 09:38 .pbc.sf
-r--------    1 root     root      250.6M Sep 13 09:38 .sbc.sf
drwx------    1 root     root         280 Sep 13 09:38 .sdd.sf
drwx------    1 root     root         420 Sep 13 09:42 .vSphere-HA
-r--------    1 root     root        4.0M Sep 13 09:38 .vh.sf
/vmfs/volumes/5232dd00-0882a1e4-e918-0025b3abd8e0 #

As discussed in the previous section, use of the UNMAP command now specifies the actual size of the temporary file instead of the temporary file size being determined by a percentage of space to return to the raw storage pool.  This is an improvement in part because it helps avoid the catastrophe of UNMAP trying to remove 2TB+ in a single operation (discussed here).

VMware has also enhanced the functionality of the temporary file.  A new kernel interface in ESXi 5.5 allows the user to ask for blocks beyond a specified block address in the VMFS file system.  This ensures that the blocks allocated to the temporary file were never previously allocated to it.  The benefit realized in the end is that a temporary file of any size can be created, and with UNMAP issued to the blocks allocated to the temporary file, we can rest assured that UNMAP is issued on all free blocks on the datastore.

Going a bit deeper and adding to the efficiency, VMware has also enhanced UNMAP to support multiple block descriptors.  Compared to vSphere 5.1 which issued just one block descriptor per UNMAP command, vSphere 5.5 now issues up to 100 block descriptors depending on the storage array (these identifying capabilities are specified internally in the Block Limits VPD (B0) page).

A look at the asynchronous and iterative vSphere 5.5 UNMAP logical process:

  1. User or script issues esxcli UNMAP command
  2. Does the array support VAAI UNMAP?  yes=3, no=end
  3. Create .asyncUnmapFile on root of datastore
  4. .asyncUnmapFile created and locked? yes=5, no=end
  5. Issue IOCTL to allocate reclaim-unit blocks of storage on the volume past the previously allocated block offset
  6. Did the previous block allocation succeed? yes=7, no=remove lock file and retry step 6
  7. Issue UNMAP on all blocks allocated above in step 5
  8. Remove the lock file
  9. Did we reach the end of the datastore? yes=end, no=3
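The flow above can be sketched as a simple loop.  This models only the arithmetic of the iterations – the values are illustrative and the VMFS lock/allocate/unmap internals are reduced to a comment:

```shell
# Model of the iterative reclaim: fixed-size chunks until free space is covered
free_blocks=1000      # assumed free VMFS blocks eligible for UNMAP
reclaim_unit=200      # blocks allocated to .asyncUnmapFile per iteration (5.5 default)
reclaimed=0
iterations=0
while [ "$reclaimed" -lt "$free_blocks" ]; do
  chunk=$(( free_blocks - reclaimed ))               # blocks remaining
  [ "$chunk" -gt "$reclaim_unit" ] && chunk=$reclaim_unit
  # (real flow: lock .asyncUnmapFile, allocate $chunk blocks past the prior offset,
  #  issue UNMAP on those blocks, remove the lock file)
  reclaimed=$(( reclaimed + chunk ))
  iterations=$(( iterations + 1 ))
done
echo "$iterations iterations, $reclaimed blocks reclaimed"
```

With these assumed values the loop completes in 5 iterations, which is why a larger reclaim unit finishes sooner at the cost of a larger temporary file per pass.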

From a performance perspective, executing the UNMAP command in my vSphere 5.5 RC lab showed peak write I/O of around 1,200MB/s with an average of around 200 IOPS comprised of a 50/50 mix of read/write.  The UNMAP I/O pattern is a bit hard to gauge because, with the asynchronous iterative process, it seemed to do a bunch of work, rest, do more work, rest, and so on.  Sorry, no screenshots, because flickr.com is currently down.

Perhaps the most notable takeaway from the performance section is that as of vSphere 5.5, VMware is lifting the recommendation of only running UNMAP during a maintenance window.  Keep in mind this is just a recommendation.  I encourage vSphere 5.5 customers to test UNMAP in their lab first using various reclaim unit sizes.  While doing this, examine performance impacts to the storage fabric, the storage array (look at both front end and back end), as well as other applications sharing the array.  Remember that fundamentally the UNMAP command is only going to provide a benefit AFTER its associated use cases have occurred (mentioned at the top of the article).  Running UNMAP on a volume which has no pages to be returned is a waste of effort.  Once you’ve become comfortable with using UNMAP and understanding its impacts in your environment, consider running it on a recurring schedule – perhaps weekly.  It really depends on how much the use cases apply to your environment.  Many vSphere backup solutions leverage vSphere snapshots, which is one of the use cases.  Although it could be said there are large gains to be made with UNMAP in this case, keep in mind backups run regularly, and space that is returned to raw storage with UNMAP will likely be consumed again in the following backup cycle when vSphere snapshots are created once again.

To wrap this up, customers who have block arrays supporting the thin provisioning VAAI primitive will be able to use UNMAP in vSphere 5.5 environments (for storage vendors, both sub-components are required to certify for the primitive as a whole on the HCL).  This includes Dell Compellent customers with a current version of Storage Center firmware.  Customers who use array based snapshots with extended retention periods should keep in mind that while UNMAP will work against active blocks, it may not work with blocks maintained in a snapshot.  This is to honor the snapshot based data protection retention.

Veeam Launches Backup & Replication v7

August 22nd, 2013

Data protection, data replication, and data recovery are challenging.  Consolidation through virtualization has forced customers to retool automated protection and recovery methodologies in the datacenter and at remote DR sites.

For VMware environments, Veeam has been with customers helping them every step of the way with their flagship Backup & Replication suite.  Once just a simple backup tool, it has evolved into an end-to-end solution for local agentless backup and restore with application item intelligence, as well as a robust architecture to fulfill the requirements of replicating data offsite and providing business continuity while meeting aggressive RPO and RTO metrics.  Recent updates have also bridged the gap for Hyper-V customers, rounding out the majority of x86 virtualized datacenters.

But don’t take their word for it.  Talk to one of their 200,000+ customers – myself, for instance.  I’ve been using Veeam in the boche.net lab for well over five years to achieve nightly backups of not only my ongoing virtualization projects, but my growing family’s photos, videos, and sensitive data as well.  I also tested, purchased, and implemented it in a previous position to facilitate the migration of virtual machines from one large datacenter to another via replication.  In December of 2009, I was also successful in submitting a VCDX design to VMware incorporating Veeam Backup & Replication, and followed up in February 2010 by successfully defending that design.

Veeam is proud to announce another major milestone bolstering their new Modern Data Protection campaign – version 7 of Veeam Backup & Replication.  In this new release, extensive R&D yields 10x faster performance as well as many new features such as built-in WAN acceleration, backup from storage snapshots, long requested support for tape, and a solid data protection solution for vCloud Director.  Value was added for Hyper-V environments as well – SureBackup automated verification support, Universal Application Item Recovery, as well as the on-demand Sandbox.  Aside from the vCD support, one of the new features I’m interested in looking at is parallel processing of virtual machine backups.  It’s a fact that with globalized business, backup windows have shrunk while data footprints have grown exponentially.  Parallel VM and virtual disk backup, refined compression algorithms, and 64-bit backup repository architecture will go a long way to meet global business challenges.

v7 is available now.  Check it out!

This will likely be my last post until VMworld.  I’m looking forward to seeing everyone there!

Software Defined Single Sign On Database Creation

July 2nd, 2013

I don’t manage large scale production vSphere datacenters any longer but I still manage several smaller environments, particularly in the lab.  One of my pain points since the release of vSphere 5.1 has been the creation of SSO (Single Sign On) databases.  It’s not that creating an SSO database is incredibly difficult, but success does require a higher level of attention to detail.  There are a few reasons for this:

  1. VMware provides multiple MS SQL scripts to set up the back end database environment (rsaIMSLiteMSSQLSetupTablespaces.sql and rsaIMSLiteMSSQLSetupUsers.sql).  You have to know which scripts to run and in what order to run them.
  2. The scripts VMware provides are hard coded in many places with things like database names, data file names, log file names, index file names, SQL login names, filegroup and tablespace information.

What VMware provides in the vCenter documentation is all well and good; however, it’s only good for installing a single SSO database per SQL Server instance.  The problem presents itself when faced with standing up multiple SSO environments using a single SQL Server: one needs to know what to tweak in the provided scripts to guarantee instance uniqueness and, more importantly, what not to tweak.  For instance, we want to change file names and maybe SQL logins, but mistakenly changing tablespace or filegroup information will most certainly render the database useless for the SSO application.

So as I said, I’ve got several environments I manage, each needing a unique SSO database.  Toying with the VMware-provided scripts was time consuming and error prone, and frankly it had become somewhat of a stumbling block to deploying a vCenter Server – a task that had historically been pretty easy.

There are a few options to proactively deal with this:

  1. Separate or local SQL installation for each SSO deployment – not really what I’m after.  I’ve never been much of a fan of decentralized SQL deployments, particularly those that must share resources with vSphere infrastructure on the same VM.  Aside from that, SQL Server sprawl for this use case doesn’t make a lot of sense from a financial, management, or resource perspective.
  2. vCenter Appliance – I’m growing more fond of the appliance daily but I’m not quite there yet. I’d still like to see the MS SQL support and besides that I still need to maintain Windows based vCenter environments – it’s a constraint.
  3. Tweak the VMware provided scripts – Combine the two scripts into one and remove the static attributes of the script by introducing TSQL variables via SQLCMD Mode.

I opted for option 3 – modify the scripts to better suit my own needs while also making them somewhat portable for community use.  The major benefits of my modifications are that there’s just one script to run and, more importantly, anything that needs to be changed to provide uniqueness is declared as a few variables at the beginning of the script instead of requiring a line-by-line hunt through the body to figure out what can be changed and what cannot.  And really, once you’ve provided the correct path to your data, log, and index files (index files are typically stored in the same location as data files), the only variable that needs to change going forward for a new SSO instance is the database instance prefix.  On a side note, I looked for a method to dynamically provide the file paths by leveraging some type of system variable to minimize the required edits.  While this is easy to do in SQL 2012, there is no reliable method in SQL 2008 R2, and I wanted to keep the script consistent for both, so I left it out.

Now I’m not a DBA myself, but I did test on both SQL 2008 R2 and SQL 2012, and I got a little help along the way from a few great SMEs in the community:

  • Mike Matthews – a DBA in Technical Marketing at Dell Compellent
  • Jorge Segarra – better known as @sqlchicken on Twitter from Pragmatic Works (he’s got a blog here as well)

If you’d like to use it, feel free.  However, no warranties, use at your own risk, etc.  The body of the script is listed below and you can right-click and save the script from this location: SDSSODB.sql

Again, keep in mind the TSQL script is run in SQLCMD Mode, which is enabled via the Query pulldown menu in Microsoft SQL Server Management Studio.  The InstancePrefix variable, through concatenation, generates the database name, logical and physical file names, and the SQL logins and their associated passwords.  Feel free to change any of this behavior to suit your preferences or the needs of your environment.
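To make the concatenation concrete before you read the script, here is a small illustration (Python, purely illustrative and not part of the deployment) of the object names the script derives from a given prefix:

```python
# Illustration only: mirrors the name concatenation the T-SQL script performs
# with its InstancePrefix SQLCMD variable.

def sso_object_names(prefix: str) -> dict:
    """Return the database, file, and login names derived from an instance prefix."""
    return {
        "database": f"{prefix}_RSA",
        "data_file": f"{prefix}_RSA_DATA.mdf",    # PRIMARY filegroup
        "index_file": f"{prefix}_RSA_INDEX.ndf",  # RSA_INDEX filegroup
        "log_file": f"{prefix}_translog.ldf",
        "logins": [f"{prefix}_RSA_DBA", f"{prefix}_RSA_USER"],
    }

names = sso_object_names("DEVSSODB")
print(names["database"])   # DEVSSODB_RSA
print(names["logins"])     # ['DEVSSODB_RSA_DBA', 'DEVSSODB_RSA_USER']
```

Changing the prefix to, say, PRODSSODB produces a completely parallel set of names, which is what makes multiple SSO databases on one SQL Server instance painless.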

-------------------------------------------------------------------------------------
-- The goal of this script is to provide an easy, consistent, and repeatable
-- process for deploying multiple vSphere SSO databases on a single SQL Server
-- instance without having to make several modifications to the two VMware provided
-- scripts each time a new SSO database is needed.
-- The following script combines the VMware vSphere 5.1 provided
-- rsaIMSLiteMSSQLSetupTablespaces.sql and rsaIMSLiteMSSQLSetupUsers.sql scripts
-- into one script. In addition, it removes the static database and file names
-- and replaces them with dynamically generated equivalents based on an
-- InstancePrefix variable declared at the beginning of the script. Database,
-- index, and log file folder locations are also defined with variables.
-- This script meets the original goal in that it can deploy multiple iterations
-- of the vSphere SSO database on a single SQL Server instance simply by modifying
-- the InstancePrefix variable at the beginning of the script. The script then uses
-- that prefix with concatenation to produce the database, .mdf, .ldf, .ndf, and
-- two user logins and their required SQL permissions.
-- The script must be run in SQLCMD mode (Query|SQLCMD Mode).
-- No warranties provided. Use at your own risk.
-- Jason Boche (@jasonboche, http://boche.net/blog/)
-- with special thanks to:
-- Mike Matthews (Dell Compellent)
-- Jorge Segarra (Pragmatic Works, @sqlchicken, http://sqlchicken.com/)
-- VMware, Inc.
-------------------------------------------------------------------------------------

:setvar InstancePrefix "DEVSSODB"
:setvar PrimaryDataFilePath "D:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\"
:setvar IndexFilePath "D:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\"
:setvar LogFilePath "D:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\"

USE [master];
GO

-------------------------------------------------------------------------------------
-- Create database
-- The database name can also be customized, but cannot contain
-- reserved keywords like database or any characters other than letters, numbers,
-- _, @ and #.
-------------------------------------------------------------------------------------
CREATE DATABASE [$(InstancePrefix)_RSA] ON
PRIMARY(
NAME = N'$(InstancePrefix)_RSA_DATA',
FILENAME = N'$(PrimaryDataFilePath)$(InstancePrefix)_RSA_DATA.mdf',
SIZE = 10MB,
MAXSIZE = UNLIMITED,
FILEGROWTH = 10% ),
FILEGROUP RSA_INDEX(
NAME = N'$(InstancePrefix)_RSA_INDEX',
FILENAME = N'$(IndexFilePath)$(InstancePrefix)_RSA_INDEX.ndf',
SIZE = 10MB,
MAXSIZE = UNLIMITED,
FILEGROWTH = 10%)
LOG ON(
NAME = N'$(InstancePrefix)_translog',
FILENAME = N'$(LogFilePath)$(InstancePrefix)_translog.ldf',
SIZE = 10MB,
MAXSIZE = UNLIMITED,
FILEGROWTH = 10% );
GO

-- Set recommended performance settings on the database
ALTER DATABASE [$(InstancePrefix)_RSA] SET AUTO_SHRINK ON;
GO
ALTER DATABASE [$(InstancePrefix)_RSA] SET RECOVERY SIMPLE;
GO

-------------------------------------------------------------------------------------
-- Create users
-- Change the users' passwords (CHANGE USER PASSWORD) below.
-- The DBA account is used during installation and the USER account is used during
-- operation. The user names below can be customized, but cannot contain
-- reserved keywords like table or any characters other than letters, numbers, and _ .
-- Please execute the script as an administrator with sufficient permissions.
-------------------------------------------------------------------------------------

USE [master];
GO

CREATE LOGIN [$(InstancePrefix)_RSA_DBA] WITH PASSWORD = '$(InstancePrefix)_RSA_DBA', DEFAULT_DATABASE = [$(InstancePrefix)_RSA];
GO
CREATE LOGIN [$(InstancePrefix)_RSA_USER] WITH PASSWORD = '$(InstancePrefix)_RSA_USER', DEFAULT_DATABASE = [$(InstancePrefix)_RSA];
GO

USE [$(InstancePrefix)_RSA];
GO

ALTER AUTHORIZATION ON DATABASE::[$(InstancePrefix)_RSA] TO [$(InstancePrefix)_RSA_DBA];
GO

CREATE USER [$(InstancePrefix)_RSA_USER] FOR LOGIN [$(InstancePrefix)_RSA_USER];
GO