Posts Tagged ‘VMware’

VCA-DCV Exam Review

October 11th, 2013

Last week I saw a tweet linking to the Perfect Cloud virtualization blog, which was offering a free voucher for the VMware Certified Associate – Data Center Virtualization (VCA-DCV) exam (exam code VCAD510).  Admittedly, in the past I didn’t have much interest in sitting this exam, but with the free voucher available I thought I’d give it an impromptu shot (translated: I’d be sitting the exam immediately with no preparation, which many test takers refer to as ‘going in cold’).  My reasoning was that having sat advanced level VMware certifications in the past, I wasn’t overly concerned about preparation for this one.

VMware’s take on VCA-DCV preparation:

There is no training requirement, however there is a free, self-paced elearning class that can help you prepare.


VMware summarizes the VCA-DCV certification as follows:

With the VCA-Data Center Virtualization certification, you’ll have greater credibility when discussing data center virtualization, the business challenges that vSphere is designed to address, and how virtualizing the data center with vSphere addresses those challenges. You’ll be able to define data center virtualization and provide use case scenarios of how vSphere and data center virtualization can provide cost and operational benefits.

VMware further explains that a successful candidate who passes the VCA-DCV will realize the following benefits:

  • Recognition of your technical knowledge
  • Official transcripts
  • Use of VCA-DCV logo
  • Access to the exclusive VCA portal & logo merchandise store
  • Invitation to beta exams and classes
  • Discounted admission to VMware events
  • Greater opportunities for career advancement

Personally, I would add two additional benefits to this exam:

  • The exam can be taken online from any location with a compatible web browser and an internet connection
  • By virtue of the above, coffee is available in the exam room – those who know me know this is a perk

Chris Wahl has a new blog post introducing The New VMware Certified Associate (VCA) Exams.  His video covers VCA exam background and preparation, as well as step-by-step instructions for exam registration.

On to the exam.  For native English speaking vGeeks, the length is 50 questions in 75 minutes, with both multiple choice and multiple select question styles.  VMware’s exam summary was spot on, at least for the latter parts (I’m still awaiting peer/industry feedback on the increased credibility part).  Most of the questions dealt with a business need of varying complexity revolving around… yep, you guessed it – datacenter virtualization, and the requirement to recommend a corresponding VMware product or feature that meets the customer need.  Most of the Q & A was straightforward, but I came across a few questions where either the question or the provided answers were vague enough that the result is left to interpretation, leading to either a correct or incorrect answer.  Having plenty of time to complete the exam, I left comments/feedback on these items.

I completed the exam in 20 minutes including the comments/feedback on a handful of questions.  If the candidate has a basic understanding of VMware’s product portfolio as well as the fundamental features in vSphere, time management shouldn’t be an issue.

And that wraps it up.  I’ve added VCA-DCV to my suite of certifications.


I will now move on to the VCAP5-DCA which I’ve been blowing off successfully since its launch.  That exam is scheduled for early November (earliest available slot at my nearby exam centers) with a 70% off voucher, again thanks to my friends on Twitter.

vSphere 5.5 UNMAP Deep Dive

September 13th, 2013

One of the features that has been updated in vSphere 5.5 is UNMAP, one of two sub-components of what I’ll call the fourth block storage based thin provisioning VAAI primitive (the other sub-component is thin provisioning stun).  I’ve already written about UNMAP a few times in the past.  It was first introduced in vSphere 5.0 two years ago.  A few months later the feature was essentially recalled by VMware.  After it was re-released in vSphere 5.0 Update 1, I wrote about its use here and followed up with a short piece about the .vmfsBalloon file here.

For those unfamiliar, UNMAP is a space reclamation mechanism used to return blocks of storage back to the array after data which was once occupying those blocks has been moved or deleted.  The common use cases are deleting a VM from a datastore, Storage vMotion of a VM from a datastore, or consolidating/closing vSphere snapshots on a datastore.  All of these operations, in the end, involve deleting data from pinned blocks/pages on a volume.  Without UNMAP, these pages, albeit empty and available for future use by vSphere and its guests only, remain pinned to the volume/LUN backing the vSphere datastore.  The pages are never returned back to the array for use with another LUN or another storage host.  Notice I did not mention shrinking a virtual disk or a datastore – neither of those operations are supported by VMware.  I also did not mention the use case of deleting data from inside a virtual machine – while that is not supported, I believe there is a VMware fling for experimental use.  In summary, UNMAP extends the usefulness of thin provisioning at the array level by maintaining storage efficiency throughout the life cycle of the vSphere environment and the array which supports the UNMAP VAAI primitive.

On the Tuesday during VMworld, Cormac Hogan launched his blog post introducing new and updated storage related features in vSphere 5.5.  One of those features he summarized was UNMAP.  If you haven’t read his blog, I’d definitely recommend taking a look – particularly if you’re involved with vSphere storage.  I’m going to explore UNMAP in a little more detail.

The most obvious change to point out is the command line itself used to initiate the UNMAP process.  In previous versions of vSphere, the command issued on the vSphere host was:

vmkfstools -y x (where x represents the % of storage to unmap)
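
For context, a complete invocation of the old workflow looked something like this (the datastore name is hypothetical) – note that the command had to be issued from within the datastore’s own directory:

# change into the target datastore first
cd /vmfs/volumes/1tb_50ds
# reclaim 60% of the free space on this datastore
vmkfstools -y 60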

As Cormac points out, UNMAP has been moved to the esxcli namespace in vSphere 5.5 (think remote scripting opportunities), where the basic command syntax is now:

esxcli storage vmfs unmap

In addition to the above, there are three switches available for use; one of the first two listed below is required, and the third is optional.

-l|--volume-label=<str> The label of the VMFS volume to unmap the free blocks.

-u|--volume-uuid=<str> The uuid of the VMFS volume to unmap the free blocks.

-n|--reclaim-unit=<long> Number of VMFS blocks that should be unmapped per iteration.

Previously with vmkfstools, we’d change to the VMFS folder we were going to UNMAP blocks from.  In vSphere 5.5, the esxcli command can be run from anywhere, so specifying the datastore name or the uuid is one of the required parameters for obvious reasons.  Using the datastore name, the new UNMAP command in vSphere 5.5 is going to look like this:

esxcli storage vmfs unmap -l 1tb_55ds

As for the optional parameter, the UNMAP command is an iterative process which continues through numerous cycles until complete.  The reclaim unit parameter specifies the quantity of blocks to unmap per iteration of the UNMAP process.  In previous versions of vSphere, VMFS-3 datastores could have block sizes of 1, 2, 4, or 8MB.  While upgrading a VMFS-3 datastore to VMFS-5 maintains these block sizes, a native net-new VMFS-5 datastore uses a 1MB block size only.  Therefore, if a reclaim unit value of 100 is specified on a VMFS-5 datastore with a 1MB block size, then 100MB of data will be returned to the available raw storage pool per iteration until all blocks marked available for UNMAP are returned.  Using a value of 100, the UNMAP command looks like this:

esxcli storage vmfs unmap -l 1tb_55ds -n 100

If the reclaim unit value is unspecified when issuing the UNMAP command, the default reclaim unit value is 200, resulting in 200MB of data returned to the available raw storage pool per iteration, assuming a 1MB block size datastore.
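
For completeness, the datastore can also be targeted by its uuid rather than its label.  A quick sketch, reusing the uuid of my lab datastore shown in the directory listing further below, with -n omitted to accept the default reclaim unit of 200:

esxcli storage vmfs unmap -u 5232dd00-0882a1e4-e918-0025b3abd8e0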

One additional piece to note on the CLI topic is that in the release candidate build I was working with, the old vmkfstools -y command is deprecated but appears to still exist, with the newer vSphere 5.5 functionality published in its --help section:

vmkfstools -y|--reclaimBlocks vmfsPath [--reclaimBlocksUnit #blocks]

The next change involves the hidden temporary balloon file (refer to my links at the top if you’d like more information about the balloon file; basically, it’s a mechanism used to guarantee that blocks targeted for UNMAP are not written to by an outside I/O request in the interim before the UNMAP process completes).  It is no longer named .vmfsBalloon.  The new name is .asyncUnmapFile, as shown below.

/vmfs/volumes/5232dd00-0882a1e4-e918-0025b3abd8e0 # ls -l -h -A
total 998408
-r--------    1 root     root      200.0M Sep 13 10:48 .asyncUnmapFile
-r--------    1 root     root        5.2M Sep 13 09:38 .fbb.sf
-r--------    1 root     root      254.7M Sep 13 09:38 .fdc.sf
-r--------    1 root     root        1.1M Sep 13 09:38 .pb2.sf
-r--------    1 root     root      256.0M Sep 13 09:38 .pbc.sf
-r--------    1 root     root      250.6M Sep 13 09:38 .sbc.sf
drwx------    1 root     root         280 Sep 13 09:38 .sdd.sf
drwx------    1 root     root         420 Sep 13 09:42 .vSphere-HA
-r--------    1 root     root        4.0M Sep 13 09:38 .vh.sf
/vmfs/volumes/5232dd00-0882a1e4-e918-0025b3abd8e0 #

As discussed in the previous section, the UNMAP command now specifies the actual size of the temporary file, instead of the temporary file size being determined by a percentage of space to return to the raw storage pool.  This is an improvement, in part because it helps avoid the catastrophe that could occur if UNMAP tried to remove 2TB+ in a single operation (discussed here).

VMware has also enhanced the functionality of the temporary file.  A new kernel interface in ESXi 5.5 allows the caller to ask for blocks beyond a specified block address in the VMFS file system.  This ensures that each iteration allocates blocks which were never allocated to the temporary file previously.  The benefit realized in the end is that a temporary file of any size can be created, and with UNMAP issued against the blocks allocated to that file, we can rest assured that UNMAP is ultimately issued on all free blocks on the datastore.

Going a bit deeper and adding to the efficiency, VMware has also enhanced UNMAP to support multiple block descriptors.  Compared to vSphere 5.1, which issued just one block descriptor per UNMAP command, vSphere 5.5 now issues up to 100 block descriptors depending on the storage array (these capabilities are advertised internally in the Block Limits VPD (B0) page).
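
As an aside, before issuing UNMAP it’s worth confirming that the device backing the datastore actually advertises support for the Delete (UNMAP) primitive.  A minimal check from the host, with a hypothetical device identifier and the output trimmed to the relevant line:

esxcli storage core device vaai status get -d naa.6000d31000ed1c000000000000000123
   Delete Status: supported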

A look at the asynchronous and iterative vSphere 5.5 UNMAP logical process:

  1. User or script issues esxcli UNMAP command
  2. Does the array support VAAI UNMAP?  yes=3, no=end
  3. Create .asyncUnmapFile on root of datastore
  4. .asyncUnmapFile created and locked? yes=5, no=end
  5. Issue IOCTL to allocate reclaim-unit blocks of storage on the volume past the previously allocated block offset
  6. Did the previous block allocation succeed? yes=7, no=remove lock file and retry from step 5
  7. Issue UNMAP on all blocks allocated above in step 5
  8. Remove the lock file
  9. Did we reach the end of the datastore? yes=end, no=3
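
For those who want to script this, here’s a minimal sketch (a test-in-the-lab-first, use-at-your-own-risk kind of thing) that walks every VMFS-5 volume on a host and issues UNMAP with a conservative reclaim unit.  It assumes datastore names contain no spaces and that every volume sits on an array supporting the primitive; esxcli will simply error out on volumes that don’t:

#!/bin/sh
# iterate over all mounted VMFS-5 volumes and unmap free blocks
for VOLUME in $(esxcli storage filesystem list | grep VMFS-5 | awk '{print $2}'); do
  echo "Reclaiming free blocks on ${VOLUME}"
  esxcli storage vmfs unmap -l "${VOLUME}" -n 200
done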

From a performance perspective, executing the UNMAP command in my vSphere 5.5 RC lab showed peak write I/O of around 1,200MB/s with an average of around 200 IOPS comprised of a 50/50 mix of read/write.  The UNMAP I/O pattern is a bit hard to gauge because, with the asynchronous iterative process, it seemed to do a bunch of work, rest, do more work, rest, and so on.  Sorry, no screenshots because flickr.com is currently down.

Perhaps the most notable takeaway from the performance section is that as of vSphere 5.5, VMware is lifting the recommendation of only running UNMAP during a maintenance window.  Keep in mind this is just a recommendation.  I encourage vSphere 5.5 customers to test UNMAP in their lab first using various reclaim unit sizes.  While doing this, examine the performance impact on the storage fabric, the storage array (look at both the front end and the back end), as well as other applications sharing the array.  Remember that fundamentally the UNMAP command only provides a benefit AFTER its associated use cases have occurred (mentioned at the top of the article).  Running UNMAP on a volume which has no pages to be returned is a waste of effort.  Once you’ve become comfortable with using UNMAP and understand its impact in your environment, consider running it on a recurring schedule – perhaps weekly.  It really depends on how much the use cases apply to your environment.  Many vSphere backup solutions leverage vSphere snapshots, which is one of the use cases.  Although it could be said there are large gains to be made with UNMAP in this case, keep in mind backups run regularly, and space that is returned to raw storage with UNMAP will likely be consumed again in the following backup cycle when vSphere snapshots are created once again.

To wrap this up, customers who have block arrays supporting the thin provisioning VAAI primitive will be able to use UNMAP in vSphere 5.5 environments (for storage vendors, both sub-components are required to certify the primitive as a whole on the HCL).  This includes Dell Compellent customers with a current version of Storage Center firmware.  Customers who use array based snapshots with extended retention periods should keep in mind that while UNMAP will work against active blocks, it may not work with blocks maintained in a snapshot.  This honors the snapshot based data protection retention.

Did You <3 VMware at VMworld?

September 2nd, 2013

This large whiteboard was made available for VMworld 2013 attendees in the back of the VMware booth.  The photo was taken on Monday afternoon.


Did you <3 VMware and tell them why?  Show other readers where you signed in the comments section below.

Can you accurately identify where I signed?  If so, send an email to jason@boche.net with the subject Mastering vSphere 5.5 Book.  The first five correct answers will receive a paperback copy of the new Mastering VMware vSphere 5.5 book by authors Scott Lowe, Nick Marshall, Forbes Guthrie, Matt Liebowitz, and Josh Atwell.  Availability of the book is late October or early November according to Scott Lowe.

 

[Photo: the “tell us why you <3 VMware” whiteboard]

Update 11/1/13:  The winners have been announced in this blog post.  Thank you to all who participated!

A Look At vCenter 5.5 SSO RC Installation

August 30th, 2013

This week at VMworld 2013, I attended a few sessions directly related to vCenter 5.5 as well as its components, one of which is vCenter Single Sign On (SSO):

  • VSVC5234 – Extreme Performance Series: vCenter of the Universe
  • VSVC4830 – vCenter Deep Dive

First of all, both sessions were excellent and I highly recommend viewing them if you have access to the post conference recordings. 

If you followed my session tweets or if perhaps you’ve read half a dozen or more already available blog posts on the subject, you know that several improvements have been made to vCenter SSO for the vSphere 5.5 release.  For instance:

  • Completely re-written from the ground up
  • Multi-master architecture
  • Native replication mechanism
  • SSO now has site awareness (think of the possibilities for HA stretched clusters)
  • MMC based diagnostic suite available as a separately maintained download
  • The external database and its preparation dependency has been removed
  • Database partitioning to improve both scalability and performance (this was actually added in 5.1 but I wanted to call it out)
  • Revamped multi-site deployment architecture
  • Full Mac OS X web client support including remote console
  • Improved certificate management
  • Multi-tenant capabilities
  • Drag ‘n’ Drop in the 5.5 web client

With some of the new features now identified, and with VMware’s blessing, have a look at the installation screens and see if you can spot the differences as compared to a vCenter 5.1 SSO installation.  These stem from a manual installation of SSO, not an automated installation of all vCenter components (by the way, the next gen web client is now installed as part of an automated vCenter 5.5 installation, whereas it was not in 5.1).  Keep in mind these were pulled from a release candidate version and may change when vCenter 5.5 reaches GA.

I noticed one subtle change here – clicking on the Microsoft .NET 3.5 SP1 link in Windows Server 2008 R2 actually installs the feature rather than just throwing up a dialog box asking you to install the feature yourself.

[Screenshot]

As this is a manual installation, we have the option to use the default installation location or specify our own.  Best practice is to install all vCenter components together so that they can communicate at server bus speed and won’t be impacted by network latency.  However, in larger scale environments with five or more vCenter Servers, SSO should be isolated on a separate server.  On a somewhat related note, the Inventory Service may benefit from installation on SSD, again in large infrastructures.

[Screenshot]

We won’t likely see this in the GA version.

[Screenshot]

We’re going through the process of installing vCenter version 5.5, but the SSO component, again a complete re-write, bears its own version number of 2.0.

[Screenshot]

We always read the EULA in full and agree to the license terms and conditions.

[Screenshot]

[Screenshot]

Big changes here.  Note the differences in the deployment models compared to the previous 5.1 version – previous deployment models are honored through an upgrade to 5.5.  Again, this is where the VMworld sessions noted above really go into detail. 

[Screenshot]

The System-Domain namespace has been replaced with vsphere.local.

[Screenshot]

The new site awareness begins here.

[Screenshot]

[Screenshot]

[Screenshot]

[Screenshot]

I hope you agree that SSO installation in vCenter 5.5 has been simplified while many new features have been added at the same time.

As always, thank you for reading and it was a pleasure to meet and see everyone again this year at VMworld.

 

Veeam Launches Backup & Replication v7

August 22nd, 2013

Data protection, data replication, and data recovery are challenging.  Consolidation through virtualization has forced customers to retool automated protection and recovery methodologies in the datacenter and at remote DR sites.

For VMware environments, Veeam has been with customers helping them every step of the way with its flagship Backup & Replication suite.  Once just a simple backup tool, it has evolved into an end-to-end solution for local agentless backup and restore with application item intelligence, as well as a robust architecture to fulfill the requirements of replicating data offsite and providing business continuity while meeting aggressive RPO and RTO metrics.  Recent updates have also bridged the gap for Hyper-V customers, rounding out the majority of x86 virtualized datacenters.

But don’t take their word for it.  Talk to one of their 200,000+ customers – myself, for instance.  I’ve been using Veeam in the boche.net lab for well over five years to achieve nightly backups of not only my ongoing virtualization projects, but my growing family’s photos, videos, and sensitive data as well.  I also tested, purchased, and implemented it in a previous position to facilitate the migration of virtual machines from one large datacenter to another via replication.  In December of 2009, I was successful in submitting a VCDX design to VMware incorporating Veeam Backup & Replication, and followed up in February 2010 by successfully defending that design.

Veeam is proud to announce another major milestone bolstering their new Modern Data Protection campaign – version 7 of Veeam Backup & Replication.  In this new release, extensive R&D yields 10x faster performance as well as many new features such as built-in WAN acceleration, backup from storage snapshots, long requested support for tape, and a solid data protection solution for vCloud Director.  Value was added for Hyper-V environments as well – SureBackup automated verification support, Universal Application Item Recovery, as well as the on-demand Sandbox.  Aside from the vCD support, one of the new features I’m interested in looking at is parallel processing of virtual machine backups.  It’s a fact that with globalized business, backup windows have shrunk while data footprints have grown exponentially.  Parallel VM and virtual disk backup, refined compression algorithms, and 64-bit backup repository architecture will go a long way to meet global business challenges.

v7 is available now.  Check it out!

This will likely be my last post until VMworld.  I’m looking forward to seeing everyone there!

Unleash The VCDX In You

August 17th, 2013

VCDX certification – for anyone who is on the fence about going through with it, you may want to take a look at some short video clips shot at VMworld 2012 last year.  VCDXs who have gone through the certification process talk about what it has done for them in terms of opportunity, benefits, and perhaps life in general.

Up until last year, growth in the program was fairly modest.  I know through conversations there were a lot of people interested in VCDX certification, but at the same time they were hesitant for a variety of reasons, most of which I feel stem from a lack of confidence in themselves.  In the past year or so, there has been a surge of candidates who have successfully completed the journey, and I hope those who are still on the fence have noticed – perhaps it gives them a shot of confidence to unlock their true potential and show the panel and community what they are capable of.  As I mention in my video, making it to the defense stage shows incredible integrity, and pass or fail, it is still a great learning experience.

Now don’t get me wrong here: shelling out $300 for a design submission and $900 for a defense slot does not buy over-the-counter confidence and guarantee a pass.  It may serve as motivation, but candidates will need to search within themselves to find what it is that will pave the road to success for them.  I was talking to a VCDX panelist one night, and one thing he mentioned is that successful candidates had one thing in common: confidence.  It made sense to me.  When I went through the process it was still relatively new, and not knowing exactly what to expect or train for was the source of some anxiety.  There are more training resources available to VCDX candidates now than ever before, including bootcamps and books from VMware Press, which should help build confidence.  And by the way, being confident doesn’t mean you won’t be nervous going into your defense – you wouldn’t be human if you weren’t.  Nor does it mean being overconfident, which can work against you in your defense.  Keep in mind there’s a good chance the panelists are smarter and better prepared than you are.

I’m looking forward to seeing everyone at VMworld next week and hopefully I’ll meet some new VCDXs!

 

Jason Boche VCDX #34

 

Mark Gabryjelski VCDX #23

 

Doug Baer VCDX #19 and Randy Stanley VCDX #94

Updated 9/3/13:  Congratulations to the New VCDXs from VMworld San Francisco

  • Mike Tellinghuisen, VCDX 111
  • Timothy Antonowicz, VCDX 112
  • Jason Horn, VCDX 113
  • Tim Curless, VCDX 114
  • Kenneth Garreau, VCDX 115
  • Jonathan Kohler, VCDX 116
  • David Martin Hosken, VCDX 117
  • Brian Suhr, VCDX 118
  • James Galdes, VCDX 119

vCloud Director, RHEL 6.3, and Windows Server 2012 NFS

July 16th, 2013

One of the new features introduced in vCloud Director 5.1.2 is cell server support on the RHEL 6 Update 3 platform (you should also know that cell server support on RHEL 5 Update 7 was silently removed in the recent past – verify the version of RHEL in your environment using cat /etc/issue).  When migrating your cell server(s) to RHEL 6.3, particularly from 5.x, you may run into a few issues.

First is the lack of the libXdmcp package (required for vCD installation), which was once included by default in RHEL 5 versions.  You can verify this at the RHEL 6 CLI with either of the following commands:

yum search libXdmcp

or

yum list |grep libXdmcp

Not to worry, the package is easily installed by inserting/mounting the RHEL 6 DVD or .iso, copying the appropriate libXdmcp file to /tmp/, and running either of the following commands:

yum install /tmp/libXdmcp-1.0.3-1.el6.x86_64.rpm

or

rpm -i /tmp/libXdmcp-1.0.3-1.el6.x86_64.rpm

Update 6/22/18: It is really not necessary to point to a package file location or a specific version (this overly complicates the task) once a YUM repository has been created. Also… the RHEL 7 Infrastructure Server base environment excludes the following packages required by vCloud Director 9.1 for Service Providers:

  • libICE
  • libSM
  • libXdmcp
  • libXext
  • libXi
  • libXt
  • libXtst
  • redhat-lsb

If the YUM DVD repository has been created and the RHEL DVD is mounted, install the required packages with the following one liner:

yum install -y libICE libSM libXdmcp libXext libXi libXt libXtst redhat-lsb
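
If the DVD based YUM repository doesn’t exist yet, a minimal sketch of creating one looks something like this (the mount point and repository name are arbitrary; gpgcheck is disabled here purely to keep the example short):

# mount the RHEL installation DVD
mkdir -p /mnt/rheldvd
mount /dev/cdrom /mnt/rheldvd
# define a repository pointing at the DVD
cat > /etc/yum.repos.d/rheldvd.repo << EOF
[rheldvd]
name=RHEL DVD
baseurl=file:///mnt/rheldvd
enabled=1
gpgcheck=0
EOF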

Next up is the use of Windows Server 2012 (or Windows 8) NFS for vCloud Transfer Server Storage in conjunction with the newly supported RHEL 6.3.  Creating the path and directory for the Transfer Server Storage is performed during the initial deployment of vCloud Director using the command mkdir -p /opt/vmware/vcloud-director/data/transfer.  When mounting the NFS export for Transfer Server Storage (either manually or via an /etc/fstab entry such as f.q.d.n:/vcdtransfer /opt/vmware/vcloud-director/data/transfer nfs rw 0 0), the mount command fails with the error message mount.nfs: mount system call failed.  I ran across this in one particular environment and my search turned up Red Hat Bugzilla – Bug 796352.  In the bug documentation, the problem is identified as follows:

On Red Hat Enterprise Linux 6, mounting an NFS export from a Windows 2012 server failed due to the fact that the Windows server contains support for the minor version 1 (v4.1) of the NFS version 4 protocol only, along with support for versions 2 and 3. The lack of the minor version 0 (v4.0) support caused Red Hat Enterprise Linux 6 clients to fail instead of rolling back to version 3 as expected. This update fixes this bug and mounting an NFS export works as expected.

Further down in the article, Steve Dickson outlines the workarounds:

mount -o v3 # to use v3

or

Set the ‘Nfsvers=3’ variable in the “[ Server “Server_Name” ]”
section of the /etc/nfsmount.conf file
An Example will be:
[ Server “nfsserver.lab.local” ]
Nfsvers=3

The first option works well at the command line but doesn’t lend itself to /etc/fstab syntax, so I opted for the second option, which is to establish a host name and NFS version in the /etc/nfsmount.conf file.  With this method, the mount is attempted as called for in /etc/fstab, and by reading /etc/nfsmount.conf, the mount operation succeeds as desired instead of failing at negotiation.
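
Once the nfsmount.conf entry is in place, a quick way to verify the result (the mount point mirrors the example above):

# re-read /etc/fstab and mount anything not already mounted
mount -a
# display mounted NFS file systems and their options - look for vers=3
nfsstat -m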

There is a third option, which would be to avoid the use of /etc/fstab and /etc/nfsmount.conf altogether and instead place a mount -o v3 command in /etc/rc.local, which is executed at the end of each RHEL boot process.  Although this may work, it feels a little sloppy in my opinion.
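
For reference, that approach would amount to appending something like this to /etc/rc.local, reusing the export and mount point from the fstab example above:

# mount the vCD transfer storage as NFSv3 at the end of each boot
mount -o v3 f.q.d.n:/vcdtransfer /opt/vmware/vcloud-director/data/transfer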

Lastly, one could install the kernel update (Red Hat reports the bug as fixed in kernel-2.6.32-280.el6).  The kernel package update is located here.

Update 5/27/18: See also http://www.boche.net/blog/2012/07/03/creating-vcloud-director-transfer-server-storage-on-nfs/ for other new requirements when trying to mount NFS exports with RHEL 7.5.