Posts Tagged ‘vSphere’

VCA-WM Exam Review

October 21st, 2013

Last Thursday, while on vacation in Tucson, Arizona, I sat the VMware Certified Associate – Workforce Mobility exam (exam code VCAW510).  This is the third of the three currently available VCA level exams I’ve attempted in the last three weeks.  I wrote about the previous VCA exam experiences here and here.

VMware’s take on VCA-WM preparation:

There is no training requirement; however, there is a free, self-paced elearning class that can help you prepare.


VMware summarizes the VCA-WM certification as follows:

With the VCA-Workforce Mobility certification, you’ll have greater credibility when discussing workforce mobility and end-user computing, the business challenges that VMware Horizon Suite is designed to address, and how deploying the Horizon solution addresses those challenges. You will be able to define workforce mobility and provide use case scenarios of how Horizon and workforce mobility can deliver freedom, flexibility, and manageability while connecting people to their data, applications, and desktops.

VMware further explains that a successful candidate who passes the VCA-WM will realize the following benefits:

  • Recognition of your technical knowledge
  • Official transcripts
  • Use of VCA-WM logo
  • Access to the exclusive VCA portal & logo merchandise store
  • Invitation to beta exams and classes
  • Discounted admission to VMware events
  • Greater opportunities for career advancement

Once again, I recognize two additional benefits to this exam experience:

  • The exam can be taken online from any location with a compatible web browser and an internet connection
  • By virtue of the above, coffee is available in the exam room – those who know me know this is a perk

Chris Wahl has a new blog post introducing The New VMware Certified Associate (VCA) Exams.  His video covers VCA exam background and preparation, as well as step-by-step instructions for exam registration.

Length for native English speaking vGeeks is 50 questions in 75 minutes, with both multiple choice and multiple select style questions.  VMware’s exam summary was consistent with my exam experience.  While I don’t design and manage VMware VDI environments on a daily or even semi-regular basis, I’ve managed one in my own lab for the past few years and I’m familiar with all of the components in the Horizon Suite.  I expected this exam to be slightly more difficult than the VCA-DCV exam but a little easier than the VCA-Cloud exam based on my overall experience with the technologies involved.  Basically, knowledge is needed about each of the features in the Horizon Suite and where each is appropriate from a product/solution positioning standpoint.

In a word, this exam was frustrating.  I felt good about all of the straightforward questions addressing features and functionality and plowed through those quickly.  However, I encountered about a dozen questions of a particular style in which I wasn’t quite sure what was being asked based on the answers provided.  Without explicitly divulging test questions, the ask was to identify an infrastructure or business challenge that would need to be addressed in order to deploy a given VMware Horizon Suite solution.  That is the way each of the questions of this style was worded.  While that is all well and good, the answers provided seemed to describe areas that the resulting Horizon deployment would implicitly address, rather than areas that needed to be addressed prior to a deployment – and in many cases, each of the provided answers was correct to a degree.  For each, I chose the closest answer, but again I didn’t feel any of the answers fit the wording of the question being asked.  I left comments on each of the questions I felt weren’t clear, and I also grabbed screenshots of each of these questions which I may reference should VMware wish to contact me regarding my comments.

The laptop I was taking the exam on suffered an Internet Explorer crash four times, and I had to resume the exam each time.  The test engine appeared to handle these interruptions well.

 


Now… I will move on to the VCAP5-DCA which I’ve been blowing off successfully since its launch.  That exam is scheduled for early November (earliest available slot at my nearby exam centers) with a 70% off voucher, again thanks to my friends on Twitter.

VCA-Cloud Exam Review

October 14th, 2013

Following up on my last post on the VCA-DCV Exam Review and in identical style, Friday evening I sat the VMware Certified Associate – Cloud (VCA-Cloud) exam (exam code VCAC510).

VMware’s take on VCA-Cloud preparation:

There is no training requirement; however, there is a free, self-paced elearning class that can help you prepare.


VMware summarizes the VCA-Cloud certification as follows:

With the VCA-Cloud certification, you’ll have greater credibility when discussing cloud computing, the business challenges the vCloud Suite is designed to address, and how deploying the vCloud solution addresses those challenges. You’ll be able to define cloud computing and provide use case scenarios of how vCloud and cloud computing can take advantage of private and public clouds without changing existing applications and leverage a common management, orchestration, networking, and security model.

VMware further explains that a successful candidate who passes the VCA-Cloud will realize the following benefits:

  • Recognition of your technical knowledge
  • Official transcripts
  • Use of VCA-Cloud logo
  • Access to the exclusive VCA portal & logo merchandise store
  • Invitation to beta exams and classes
  • Discounted admission to VMware events
  • Greater opportunities for career advancement

Once again, I recognize two additional benefits to this exam experience:

  • The exam can be taken online from any location with a compatible web browser and an internet connection
  • By virtue of the above, coffee is available in the exam room – those who know me know this is a perk

Chris Wahl has a new blog post introducing The New VMware Certified Associate (VCA) Exams.  His video covers VCA exam background and preparation, as well as step-by-step instructions for exam registration.

On to the exam.  Length for native English speaking vGeeks is 50 questions in 75 minutes, with both multiple choice and multiple select style questions.  VMware’s exam summary was consistent with my exam experience.  I’d also add that there was a pretty large focus on hybrid cloud solutions and connectivity.  I found this exam more difficult than the VCA-DCV exam and further outside the scope of my daily expertise.  While I’ve had quite a bit of experience with vCloud Director and its operational use of storage and networking, and those discussion points weren’t much of a problem, I was at a clear disadvantage in areas covering the vFabric Suite, Hyperic, and slightly deeper use of vCOPS.  All of these topics garnered significant focus, making it clear that VMware is making a very strong and intentional push into the private/hybrid/public cloud spaces.

I took my time on this exam and did not provide any comments/feedback as I did on the VCA-DCV exam.  At one point I had to stop because lightning, thunder, and rain rolled up on the deck attached to the back of my house where I was taking the exam.  I had to take my laptop, coffee, and cigar to the front of the house which is covered by the stoop.  If the candidate has a basic understanding of VMware’s product portfolio as well as the fundamental features in vSphere, time management shouldn’t be an issue.

I was able to pass the exam adding VCA-Cloud to my suite of certifications.


I will now move on to the VCAP5-DCA which I’ve been blowing off successfully since its launch.  That exam is scheduled for early November (earliest available slot at my nearby exam centers) with a 70% off voucher, again thanks to my friends on Twitter.  In the interim, I may also take a look at the VMware Certified Associate – Workforce Mobility (VCA-WM) exam.

VCA-DCV Exam Review

October 11th, 2013

Last week I saw a tweet referring to a link on the Perfect Cloud virtualization blog which contained a free voucher for the VMware Certified Associate – Data Center Virtualization (VCA-DCV) exam (exam code VCAD510).  Admittedly, in the past I didn’t have much interest in sitting this exam, but with the free voucher available, I thought I’d give it an impromptu shot (translated: I’d be sitting the exam immediately with no preparation.  Many test takers refer to this as ‘going in cold’).  My reasoning was that having sat advanced level VMware certifications in the past, I wasn’t overly concerned with preparation on this one.

VMware’s take on VCA-DCV preparation:

There is no training requirement; however, there is a free, self-paced elearning class that can help you prepare.


VMware summarizes the VCA-DCV certification as follows:

With the VCA-Data Center Virtualization certification, you’ll have greater credibility when discussing data center virtualization, the business challenges that vSphere is designed to address, and how virtualizing the data center with vSphere addresses those challenges. You’ll be able to define data center virtualization and provide use case scenarios of how vSphere and data center virtualization can provide cost and operational benefits.

VMware further explains that a successful candidate who passes the VCA-DCV will realize the following benefits:

  • Recognition of your technical knowledge
  • Official transcripts
  • Use of VCA-DCV logo
  • Access to the exclusive VCA portal & logo merchandise store
  • Invitation to beta exams and classes
  • Discounted admission to VMware events
  • Greater opportunities for career advancement

Personally, I would add two additional benefits to this exam:

  • The exam can be taken online from any location with a compatible web browser and an internet connection
  • By virtue of the above, coffee is available in the exam room – those who know me know this is a perk

Chris Wahl has a new blog post introducing The New VMware Certified Associate (VCA) Exams.  His video covers VCA exam background and preparation, as well as step-by-step instructions for exam registration.

On to the exam.  Length for native English speaking vGeeks is 50 questions in 75 minutes, with both multiple choice and multiple select style questions.  VMware’s exam summary was spot on, at least for the latter parts (I’m still awaiting peer/industry feedback on the increase of my credibility part).  Most of the questions dealt with a variably complex business need revolving around… yep, you guessed it – datacenter virtualization, and the requirement to recommend a corresponding VMware product or feature that meets the customer need.  Most of the Q & A was straightforward, but there were a few I came across in which either the question or the answers provided were vague enough that the correct choice was left to interpretation.  Having plenty of time to complete the exam, I left comments/feedback on these items.

I completed the exam in 20 minutes including the comments/feedback on a handful of questions.  If the candidate has a basic understanding of VMware’s product portfolio as well as the fundamental features in vSphere, time management shouldn’t be an issue.

And that wraps it up.  I’ve added VCA-DCV to my suite of certifications.


I will now move on to the VCAP5-DCA which I’ve been blowing off successfully since its launch.  That exam is scheduled for early November (earliest available slot at my nearby exam centers) with a 70% off voucher, again thanks to my friends on Twitter.

vSphere 5.5 UNMAP Deep Dive

September 13th, 2013

One of the features that has been updated in vSphere 5.5 is UNMAP, one of two sub-components of what I’ll call the fourth block storage based thin provisioning VAAI primitive (the other sub-component is thin provisioning stun).  I’ve already written about UNMAP a few times in the past.  It was first introduced in vSphere 5.0 two years ago.  A few months later, the feature was essentially recalled by VMware.  After it was re-released by VMware in 5.0 Update 1, I wrote about its use here and followed up with a short piece about the .vmfsBalloon file here.

For those unfamiliar, UNMAP is a space reclamation mechanism used to return blocks of storage back to the array after data which was once occupying those blocks has been moved or deleted.  The common use cases are deleting a VM from a datastore, Storage vMotion of a VM from a datastore, or consolidating/closing vSphere snapshots on a datastore.  All of these operations, in the end, involve deleting data from pinned blocks/pages on a volume.  Without UNMAP, these pages, albeit empty and available for future use by vSphere and its guests only, remain pinned to the volume/LUN backing the vSphere datastore.  The pages are never returned back to the array for use with another LUN or another storage host.  Notice I did not mention shrinking a virtual disk or a datastore – neither of those operations are supported by VMware.  I also did not mention the use case of deleting data from inside a virtual machine – while that is not supported, I believe there is a VMware fling for experimental use.  In summary, UNMAP extends the usefulness of thin provisioning at the array level by maintaining storage efficiency throughout the life cycle of the vSphere environment and the array which supports the UNMAP VAAI primitive.

On the Tuesday during VMworld, Cormac Hogan launched his blog post introducing new and updated storage related features in vSphere 5.5.  One of those features he summarized was UNMAP.  If you haven’t read his blog, I’d definitely recommend taking a look – particularly if you’re involved with vSphere storage.  I’m going to explore UNMAP in a little more detail.

The most obvious change to point out is the command line itself used to initiate the UNMAP process.  In previous versions of vSphere, the command issued on the vSphere host was:

vmkfstools -y x (where x represents the percentage of storage to unmap)

As Cormac points out, UNMAP has been moved to the esxcli namespace in vSphere 5.5 (think remote scripting opportunities), where the basic command syntax is now:

esxcli storage vmfs unmap

In addition to the above, there are three switches available for use; of the first two listed below, one is required, and the third is optional.

-l|--volume-label=<str>   The label of the VMFS volume to unmap the free blocks.
-u|--volume-uuid=<str>    The uuid of the VMFS volume to unmap the free blocks.
-n|--reclaim-unit=<long>  Number of VMFS blocks that should be unmapped per iteration.

Previously with vmkfstools, we’d change to the VMFS folder in which we were going to UNMAP blocks.  In vSphere 5.5, the esxcli command can be run from anywhere, so specifying the datastore name or the uuid is one of the required parameters for obvious reasons.  So using the datastore name, the new UNMAP command in vSphere 5.5 is going to look like this:

esxcli storage vmfs unmap -l 1tb_55ds

As for the optional parameter, the UNMAP command is an iterative process which continues through numerous cycles until complete.  The reclaim unit parameter specifies the quantity of blocks to unmap per iteration of the UNMAP process.  In previous versions of vSphere, VMFS-3 datastores could have block sizes of 1, 2, 4, or 8MB.  While upgrading a VMFS-3 datastore to VMFS-5 will maintain these block sizes, a net-new VMFS-5 datastore uses a 1MB block size only.  Therefore, if a reclaim unit value of 100 is specified on a VMFS-5 datastore with a 1MB block size, then 100MB of data will be returned to the available raw storage pool per iteration until all blocks marked available for UNMAP are returned.  Using a value of 100, the UNMAP command looks like this:

esxcli storage vmfs unmap -l 1tb_55ds -n 100

If the reclaim unit value is unspecified when issuing the UNMAP command, the default reclaim unit value is 200, resulting in 200MB of data returned to the available raw storage pool per iteration, assuming a 1MB block size datastore.
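The arithmetic above is easy to model.  Here’s a short sketch (a hypothetical helper of my own, not VMware code) that estimates how many iterations an UNMAP run will take for a given amount of reclaimable space:

```python
import math

def unmap_iterations(free_space_mb, block_size_mb=1, reclaim_unit=200):
    """Estimate UNMAP passes: each iteration reclaims reclaim_unit
    VMFS blocks, i.e. reclaim_unit * block_size_mb MB of space."""
    per_iteration_mb = reclaim_unit * block_size_mb
    return math.ceil(free_space_mb / per_iteration_mb), per_iteration_mb

# 100GB of reclaimable space on a 1MB-block VMFS-5 datastore with the
# default reclaim unit of 200 -> 200MB per pass, 512 passes.
iterations, chunk_mb = unmap_iterations(102400)
```

The same model shows why a larger reclaim unit trades fewer iterations for a bigger temporary file (and potentially a bigger I/O burst) per pass.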

One additional piece to note on the CLI topic is that in a release candidate build I was working with, while the old vmkfstools -y command is deprecated, it appears to still exist, with newer vSphere 5.5 functionality published in the --help section:

vmkfstools -y --reclaimBlocks vmfsPath [--reclaimBlocksUnit #blocks]

The next change involves the hidden temporary balloon file (refer to my link at the top if you’d like more information about the balloon file – basically, it’s a mechanism used to guarantee that blocks targeted for UNMAP are not written to by an outside I/O request before the UNMAP process is complete).  It is no longer named .vmfsBalloon.  The new name is .asyncUnmapFile, as shown below.

/vmfs/volumes/5232dd00-0882a1e4-e918-0025b3abd8e0 # ls -l -h -A
total 998408
-r--------    1 root     root      200.0M Sep 13 10:48 .asyncUnmapFile
-r--------    1 root     root        5.2M Sep 13 09:38 .fbb.sf
-r--------    1 root     root      254.7M Sep 13 09:38 .fdc.sf
-r--------    1 root     root        1.1M Sep 13 09:38 .pb2.sf
-r--------    1 root     root      256.0M Sep 13 09:38 .pbc.sf
-r--------    1 root     root      250.6M Sep 13 09:38 .sbc.sf
drwx------    1 root     root         280 Sep 13 09:38 .sdd.sf
drwx------    1 root     root         420 Sep 13 09:42 .vSphere-HA
-r--------    1 root     root        4.0M Sep 13 09:38 .vh.sf
/vmfs/volumes/5232dd00-0882a1e4-e918-0025b3abd8e0 #

As discussed in the previous section, the UNMAP command now specifies the actual size of the temporary file instead of the temporary file size being determined by a percentage of the space to return to the raw storage pool.  This is an improvement in part because it helps avoid the catastrophic failure which could occur when UNMAP tried to remove 2TB+ in a single operation (discussed here).

VMware has also enhanced the functionality of the temporary file.  A new kernel interface in ESXi 5.5 allows the caller to ask for blocks beyond a specified block address in the VMFS file system.  This ensures that the blocks allocated to the temporary file in one iteration were never allocated to it previously.  The benefit realized in the end is that a temporary file of any size can be created, and with UNMAP issued against the blocks allocated to the temporary file, we can rest assured that UNMAP is eventually issued on all free blocks on the datastore.

Going a bit deeper and adding to the efficiency, VMware has also enhanced UNMAP to support multiple block descriptors.  Compared to vSphere 5.1 which issued just one block descriptor per UNMAP command, vSphere 5.5 now issues up to 100 block descriptors depending on the storage array (these identifying capabilities are specified internally in the Block Limits VPD (B0) page).

A look at the asynchronous and iterative vSphere 5.5 UNMAP logical process:

  1. User or script issues esxcli UNMAP command
  2. Does the array support VAAI UNMAP?  yes=3, no=end
  3. Create .asyncUnmapFile on root of datastore
  4. .asyncUnmapFile created and locked? yes=5, no=end
  5. Issue an IOCTL to allocate reclaim-unit blocks of storage on the volume past the previously allocated block offset
  6. Did the previous block allocation succeed? yes=7, no=remove lock file and retry step 6
  7. Issue UNMAP on all blocks allocated above in step 5
  8. Remove the lock file
  9. Did we reach the end of the datastore? yes=end, no=3
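The iterative flow above can be sketched in a few lines of Python.  This is strictly a toy simulation of the loop I described, not actual ESXi code; the lock-file steps are modeled as no-ops:

```python
def run_unmap(total_free_blocks, reclaim_unit=200, vaai_supported=True):
    """Toy model of the vSphere 5.5 asynchronous iterative UNMAP flow.
    Returns the number of lock/allocate/unmap cycles performed."""
    if not vaai_supported:                      # step 2: array must support VAAI UNMAP
        return 0
    offset = 0                                  # block offset advances each pass
    iterations = 0
    while offset < total_free_blocks:           # step 9: stop at the end of the datastore
        # steps 3-4: create and lock .asyncUnmapFile (modeled as a no-op here)
        allocated = min(reclaim_unit, total_free_blocks - offset)   # step 5
        offset += allocated                     # step 7: UNMAP the allocated blocks
        iterations += 1                         # step 8: remove the lock file
    return iterations

# 1,000 free 1MB blocks with the default reclaim unit of 200 -> 5 passes.
passes = run_unmap(1000)
```

Note how the offset only ever moves forward, which mirrors the kernel interface guarantee that each pass gets blocks never previously allocated to the temporary file.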

From a performance perspective, executing the UNMAP command in my vSphere 5.5 RC lab showed peak write I/O of around 1,200MB/s with an average of around 200 IOPS comprised of a 50/50 mix of read/write.  The UNMAP I/O pattern is a bit hard to gauge because with the asynchronous iterative process, it seemed to do a bunch of work, rest, do more work, rest, and so on.  Sorry, no screenshots because flickr.com is currently down.  Perhaps the most notable takeaway from the performance section is that as of vSphere 5.5, VMware is lifting the recommendation of only running UNMAP during a maintenance window.  Keep in mind this was just a recommendation.  I encourage vSphere 5.5 customers to test UNMAP in their lab first using various reclaim unit sizes.  While doing this, examine performance impacts to the storage fabric, the storage array (look at both the front end and the back end), as well as other applications sharing the array.  Remember that fundamentally the UNMAP command is only going to provide a benefit AFTER its associated use cases have occurred (mentioned at the top of the article).  Running UNMAP on a volume which has no pages to be returned is a waste of effort.  Once you’ve become comfortable with using UNMAP and understand its impacts in your environment, consider running it on a recurring schedule – perhaps weekly.  It really depends on how much the use cases apply to your environment.  Many vSphere backup solutions leverage vSphere snapshots, which is one of the use cases.  Although it could be said there are large gains to be made with UNMAP in this case, keep in mind backups run regularly, and space that is returned to raw storage with UNMAP will likely be consumed again in the following backup cycle when vSphere snapshots are created once again.

To wrap this up, customers who have block arrays supporting the thin provisioning VAAI primitive will be able to use UNMAP in vSphere 5.5 environments (for storage vendors, both sub-components are required to certify for the primitive as a whole on the HCL).  This includes Dell Compellent customers with a current version of Storage Center firmware.  Customers who use array based snapshots with extended retention periods should keep in mind that while UNMAP will work against active blocks, it may not work on blocks maintained in a snapshot.  This is to honor the snapshot based data protection retention.

Veeam Launches Backup & Replication v7

August 22nd, 2013

Data protection, data replication, and data recovery are challenging.  Consolidation through virtualization has forced customers to retool automated protection and recovery methodologies in the datacenter and at remote DR sites.

For VMware environments, Veeam has been there helping customers every step of the way with its flagship Backup & Replication suite.  Once just a simple backup tool, it has evolved into an end to end solution for local agentless backup and restore with application item intelligence, as well as a robust architecture to fulfill the requirements of replicating data offsite and providing business continuity while meeting aggressive RPO and RTO metrics.  Recent updates have also bridged the gap for Hyper-V customers, rounding out the majority of x86 virtualized datacenters.

But don’t take their word for it.  Talk to one of their 200,000+ customers – for instance, myself.  I’ve been using Veeam in the boche.net lab for well over five years to achieve nightly backups of not only my ongoing virtualization projects, but my growing family’s photos, videos, and sensitive data as well.  I also tested, purchased, and implemented it in a previous position to facilitate the migration of virtual machines from one large datacenter to another via replication.  In December of 2009, I was also successful in submitting a VCDX design to VMware incorporating Veeam Backup & Replication, and followed up in February 2010 by successfully defending that design.

Veeam is proud to announce another major milestone bolstering their new Modern Data Protection campaign – version 7 of Veeam Backup & Replication.  In this new release, extensive R&D yields 10x faster performance as well as many new features such as built-in WAN acceleration, backup from storage snapshots, long requested support for tape, and a solid data protection solution for vCloud Director.  Value was added for Hyper-V environments as well – SureBackup automated verification support, Universal Application Item Recovery, as well as the on-demand Sandbox.  Aside from the vCD support, one of the new features I’m interested in looking at is parallel processing of virtual machine backups.  It’s a fact that with globalized business, backup windows have shrunk while data footprints have grown exponentially.  Parallel VM and virtual disk backup, refined compression algorithms, and 64-bit backup repository architecture will go a long way to meet global business challenges.

v7 is available now.  Check it out!

This will likely be my last post until VMworld.  I’m looking forward to seeing everyone there!

Software Defined Single Sign On Database Creation

July 2nd, 2013

I don’t manage large scale production vSphere datacenters any longer but I still manage several smaller environments, particularly in the lab.  One of my pain points since the release of vSphere 5.1 has been the creation of SSO (Single Sign On) databases.  It’s not that creating an SSO database is incredibly difficult, but success does require a higher level of attention to detail.  There are a few reasons for this:

  1. VMware provides multiple MS SQL scripts to set up the back end database environment (rsaIMSLiteMSSQLSetupTablespaces.sql and rsaIMSLiteMSSQLSetupUsers.sql).  You have to know which scripts to run and in what order they need to be run.
  2. The scripts VMware provides are hard coded in many places with things like database names, data file names, log file names, index file names, SQL login names, filegroup and tablespace information.

What VMware provides in the vCenter documentation is all well and good; however, it only covers installing a single SSO database per SQL Server instance.  The problem that presents itself is that when faced with having to stand up multiple SSO environments using a single SQL Server, one needs to know what to tweak in the provided scripts to guarantee instance uniqueness and, more importantly, what not to tweak.  For instance, we want to change file names and maybe SQL logins, but mistakenly changing tablespace or filegroup information will most certainly render the database useless for the SSO application.

So as I said, I’ve got several environments I manage, each needing a unique SSO database.  Toying with the VMware provided scripts was becoming time consuming and error prone and frankly has become somewhat of a stumbling block to deploying a vCenter Server – a task that had historically been pretty easy.

There are a few options to proactively deal with this:

  1. Separate or local SQL installation for each SSO deployment – not really what I’m after.  I’ve never been much of a fan of decentralized SQL deployments, particularly those that must share resources with vSphere infrastructure on the same VM.  Aside from that, SQL Server sprawl for this use case doesn’t make a lot of sense from a financial, management, or resource perspective.
  2. vCenter Appliance – I’m growing more fond of the appliance daily but I’m not quite there yet. I’d still like to see the MS SQL support and besides that I still need to maintain Windows based vCenter environments – it’s a constraint.
  3. Tweak the VMware provided scripts – Combine the two scripts into one and remove the static attributes of the script by introducing TSQL variables via SQLCMD Mode.

I opted for option 3 – modify the scripts to better suit my own needs while also making them somewhat portable for community use.  The major benefits in my modifications are that there’s just one script to run and, more importantly, anything that needs to be changed to provide uniqueness is declared as a few variables at the beginning of the script instead of hunting line by line through the body trying to figure out what can be changed and what cannot.  And really, once you’ve provided the correct path to your data, log, and index files (index files are typically stored in the same location as data files), the only variable needing a change going forward for a new SSO instance is the database instance prefix.  On a side note, I was looking for a method to dynamically provide the file paths by leveraging some type of system variable to minimize the required edits.  While this is easy to do in SQL2012, there is no reliable method in SQL2008R2, and I wanted to keep the script consistent for both, so I left it out.

Now, I’m not a DBA myself, but I did test on both SQL2008R2 and SQL2012, and I got a little help along the way from a few great SMEs in the community:

  • Mike Matthews – a DBA in Technical Marketing at Dell Compellent
  • Jorge Segarra – better known as @sqlchicken on Twitter from Pragmatic Works (he’s got a blog here as well)

If you’d like to use it, feel free.  However, no warranties, use at your own risk, etc.  The body of the script is listed below and you can right-click and save the script from this location: SDSSODB.sql

Again, keep in mind the TSQL script is run in SQLCMD Mode, which is enabled via the Query pulldown menu in Microsoft SQL Server Management Studio.  The InstancePrefix variable, through concatenation, will generate the database name, logical and physical file names, and SQL logins and their associated passwords.  Feel free to change any of this behavior to suit your preferences or the needs of your environment.

-------------------------------------------------------------------------------------
-- The goal of this script is to provide an easy, consistent, and repeatable
-- process for deploying multiple vSphere SSO databases on a single SQL Server
-- instance without having to make several modifications to the two VMware provided
-- scripts each time a new SSO database is needed.
--
-- The following script combines the VMware vSphere 5.1 provided
-- rsaIMSLiteMSSQLSetupTablespaces.sql and rsaIMSLiteMSSQLSetupUsers.sql scripts
-- into one script. In addition, it removes the static database and file names
-- and replaces them with dynamically generated equivalents based on an
-- InstancePrefix variable declared at the beginning of the script. Database,
-- index, and log file folder locations are also defined with variables.
--
-- This script meets the original goal in that it can deploy multiple iterations
-- of the vSphere SSO database on a single SQL Server instance simply by modifying
-- the InstancePrefix variable at the beginning of the script. The script then uses
-- that prefix with concatenation to produce the database, .mdf, .ldf, .ndf, and
-- two user logins and their required SQL permissions.
--
-- The script must be run in SQLCMD mode (Query|SQLCMD Mode).
--
-- No warranties provided. Use at your own risk.
--
-- Jason Boche (@jasonboche, http://boche.net/blog/)
-- with special thanks to:
-- Mike Matthews (Dell Compellent)
-- Jorge Segarra (Pragmatic Works, @sqlchicken, http://sqlchicken.com/)
-- VMware, Inc.
-------------------------------------------------------------------------------------

:setvar InstancePrefix "DEVSSODB"
:setvar PrimaryDataFilePath "D:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\"
:setvar IndexFilePath "D:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\"
:setvar LogFilePath "D:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\"

USE [master];
GO

-------------------------------------------------------------------------------------
-- Create database
-- The database name can also be customized, but cannot contain
-- reserved keywords like database or any characters other than letters, numbers,
-- _, @ and #.
-------------------------------------------------------------------------------------
CREATE DATABASE [$(InstancePrefix)_RSA] ON
PRIMARY(
NAME = N'$(InstancePrefix)_RSA_DATA',
FILENAME = N'$(PrimaryDataFilePath)$(InstancePrefix)_RSA_DATA.mdf',
SIZE = 10MB,
MAXSIZE = UNLIMITED,
FILEGROWTH = 10% ),
FILEGROUP RSA_INDEX(
NAME = N'$(InstancePrefix)_RSA_INDEX',
FILENAME = N'$(IndexFilePath)$(InstancePrefix)_RSA_INDEX.ndf',
SIZE = 10MB,
MAXSIZE = UNLIMITED,
FILEGROWTH = 10%)
LOG ON(
NAME = N'$(InstancePrefix)_translog',
FILENAME = N'$(LogFilePath)$(InstancePrefix)_translog.ldf',
SIZE = 10MB,
MAXSIZE = UNLIMITED,
FILEGROWTH = 10% );
GO

-- Set recommended performance settings on the database
ALTER DATABASE [$(InstancePrefix)_RSA] SET AUTO_SHRINK ON;
GO
ALTER DATABASE [$(InstancePrefix)_RSA] SET RECOVERY SIMPLE;
GO

-------------------------------------------------------------------------------------
-- Create users
-- Change the users' passwords (CHANGE USER PASSWORD) below.
-- The DBA account is used during installation and the USER account is used during
-- operation. The user names below can be customised, but cannot contain
-- reserved keywords like table or any characters other than letters, numbers, and _ .
-- Please execute the script as an administrator with sufficient permissions.
-------------------------------------------------------------------------------------

USE [master];
GO

CREATE LOGIN [$(InstancePrefix)_RSA_DBA] WITH PASSWORD = '$(InstancePrefix)_RSA_DBA', DEFAULT_DATABASE = [$(InstancePrefix)_RSA];
GO
CREATE LOGIN [$(InstancePrefix)_RSA_USER] WITH PASSWORD = '$(InstancePrefix)_RSA_USER', DEFAULT_DATABASE = [$(InstancePrefix)_RSA];
GO

USE [$(InstancePrefix)_RSA];
GO

ALTER AUTHORIZATION ON DATABASE::[$(InstancePrefix)_RSA] TO [$(InstancePrefix)_RSA_DBA];
GO

CREATE USER [$(InstancePrefix)_RSA_USER] FOR LOGIN [$(InstancePrefix)_RSA_USER];
GO
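As a quick sanity check of the concatenation, the names the script derives from a given InstancePrefix can be modeled like this (an illustrative helper of my own, not part of the script itself):

```python
def derived_names(prefix):
    """Model the SQLCMD concatenation: one InstancePrefix yields the
    database, physical file, and SQL login names used by the script."""
    db = f"{prefix}_RSA"
    return {
        "database": db,
        "data_file": f"{db}_DATA.mdf",
        "index_file": f"{db}_INDEX.ndf",
        "log_file": f"{prefix}_translog.ldf",
        "logins": [f"{db}_DBA", f"{db}_USER"],
    }

# With the default prefix, the script creates DEVSSODB_RSA and its logins.
names = derived_names("DEVSSODB")
```

Change the prefix to PRODSSO, for example, and every generated object name changes with it – that is the entire trick that makes the script repeatable on one SQL Server instance.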

 

The .vmfsBalloon File

July 1st, 2013

One year ago, I wrote a piece about thin provisioning and the role that the UNMAP VAAI primitive plays in thin provisioned storage environments.  Here’s an excerpt from that article:

When the manual UNMAP process is run, it balloons up a temporary hidden file at the root of the datastore which the UNMAP is being run against.  You won’t see this balloon file with the vSphere Client’s Datastore Browser as it is hidden.  You can catch it quickly while UNMAP is running by issuing the ls -l -a command against the datastore directory.  The file will be named .vmfsBalloon along with a generated suffix.  This file will quickly grow to the size of the data being unmapped (this is actually noted when the UNMAP command is run and evident in the screenshot above).  Once the UNMAP is completed, the .vmfsBalloon file is removed.

Has your curiosity ever gotten you wondering about the technical purpose of the .vmfsBalloon file?  It boils down to data integrity and timing.  At the time the UNMAP command is run, the balloon file is immediately instantiated and grows to occupy (read: hog) all of the blocks that are about to be unmapped.  It does this so that none of those blocks can be allocated by new file creation elsewhere while the unmap is in flight.  If you think about it, it makes sense – we just told vSphere to give these blocks back to the array.  If in the interim one or more of these blocks were suddenly allocated for a new file or for file growth, then when we purge the blocks we would have a data integrity issue.  More accurately, newly created data would be missing, as its block or blocks were just flushed back to the storage pool on the array.
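That pinning behavior can be illustrated with a small model (purely conceptual – a flat pool of numbered blocks standing in for a VMFS volume, nothing like the real on-disk structures) showing that an allocator can never touch blocks the balloon file is holding:

```python
class ToyDatastore:
    """Conceptual model of the balloon file's role: blocks queued for
    UNMAP are pinned so concurrent allocations cannot land on them."""
    def __init__(self, block_count):
        self.free = set(range(block_count))
        self.ballooned = set()

    def balloon(self, blocks):
        # The balloon file immediately claims the blocks about to be unmapped.
        self.ballooned = set(blocks) & self.free
        self.free -= self.ballooned

    def allocate(self, n):
        # New file creation draws only from truly free blocks.
        grabbed = set(sorted(self.free)[:n])
        self.free -= grabbed
        return grabbed

    def unmap_and_release(self):
        # UNMAP completes; the balloon file is deleted and its blocks are
        # returned to the array, not to this datastore's free pool.
        released, self.ballooned = self.ballooned, set()
        return released

ds = ToyDatastore(10)
ds.balloon({0, 1, 2})          # blocks 0-2 are queued for UNMAP
new_file = ds.allocate(3)      # a concurrent write can only get blocks 3-5
```

Without the balloon step, allocate could hand out block 0 while the array was purging it – exactly the integrity problem described above.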