VMware VCAP4-DCD BETA Exam Experience

November 10th, 2010 by jason No comments »

The ink is still wet on a new chapter in the certification treadmill as I wrote the VCAP4-DCD BETA exam this morning.  Unlike the VCAP-DCA exam, I was able to take this exam locally in Eagan, MN, which is where I took both of the VCDX3 written exams last year.  It’s close to both my office and my home, which makes it convenient. 

I was a little fired up this morning and playfully gave the VUE testing center staff a hard time for not allowing coffee in the exam room for a nearly 4 hour exam.  I had arrived at the test site early and used the spare time to fully read the testing center’s code of conduct.  It does not say food and drinks are not allowed in the exam room.  What it says is that food, drinks, gum, and other things are not allowed to distract other test takers.  My argument was that I’m a quiet coffee drinker – let me in.  They wouldn’t budge, and I suspect the person I was talking to didn’t have the authority to make her own decision anyway.  I used to be able to take coffee into an exam center in Bloomington but those days are gone, I guess.  But I digress…

So the exam… a much better experience this time compared to the VCAP4-DCA BETA.  The interface felt polished and I felt the Visio-like tool was a 300% improvement.  As stated in the blueprint (which was updated in late October), there are three types of exam question interfaces used in the testing engine:

  1. Traditional multiple choice (select one) or multiple select (select many)
  2. Use of a GUI tool to match answers to questions
  3. Use of a Visio-like tool to assemble architecture drawings

There were 131 questions to be answered and an exam duration of 3h 45m (I’m a native English speaking candidate).  There was also a brief survey at the beginning.  Time spent in the survey doesn’t count against actual exam time, so it’s an opportunity to get a few notes or formulas written on the dry erase board before formally starting the real exam.  I knew from reading Chris Dearden’s experience that time management would be critical.  I used this insight to cruise through questions as swiftly as possible without getting caught up in deep thought like I have on my past few written exams.  Although I didn’t manage the time as well in the first half of the exam, I got progressively better.  I was able to get through most of the questions with a reasonable amount of thought.  There were some easier questions, and due to the time constraint, my approach for those was to blow through them with quick answers to regain valuable time for other questions.  Hopefully I didn’t miss any small details which would have changed the nature of the question.

The Visio tool was pretty solid with no major complaints on usability (you really do have to have experience with the old VCDX3 Design exam to appreciate the improvement made), but it is easy to get sucked into spending way too much time on architecture drawings for the sake of 100% accuracy.  There were a few design drawings which I was somewhat comfortable with but had to give up and move on in the interest of time.  Completing all questions in the allotted time is a significant challenge with this exam.  I did run out of time, so I had to quickly guess answers for the last two or three items.  One other test engine item to note, which Chris Dearden first highlighted, is that there was no ability to mark questions or to go back to questions once reaching the end of the exam.
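To put the time pressure in perspective, a quick back-of-the-napkin calculation (my own arithmetic, not anything from the blueprint) shows how little time each question gets:

```python
# 131 questions in 3 hours 45 minutes leaves very little time per question.
total_seconds = 3 * 3600 + 45 * 60   # 13,500 seconds of exam time
questions = 131
seconds_per_question = total_seconds / questions
print(f"{seconds_per_question:.0f} seconds per question")  # about 103 seconds
```

Less than two minutes per question, and the architecture drawings easily consume several questions’ worth of that budget each.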

For study materials, I used the exam blueprint referenced above, a few white papers, as well as the VMware vSphere Design training class I sat a few weeks ago.  Some of the information carried over word for word to the exam.  The vSphere Design classroom training won’t cover it all as some exam questions were specific to vSphere 4.1 whereas the class covered 4.0.  There are some differences which you’ll need to compare and contrast.  I also used vCalendar tips – there was a vCalendar entry from the past few days which applied directly to the exam.  Experience and knowledge gained throughout the VCDX3 process also contributed to preparation.

The difficulty of the exam didn’t disappoint but I felt better and more confident walking out of the testing center this time than I did for the VCAP-DCA BETA which stunned me.  How the different types of questions in this exam are graded is anyone’s guess.  I’m particularly curious on the Visio tool vs. multiple choice weighting.  I’m hoping for a pass which will give me both the VCAP4-DCD as well as VCDX4 (upgrade) certifications.  With any luck, I’ll see results within a few months. 

I’m looking forward to what others have to say about their experience with this test.  In addition, I’m curious as to why the cost of the VCAP-DCD BETA exam ($200) was twice that of the VCAP-DCA BETA exam ($100).  For that matter, why the $400 fee to sit the live VCAP exam when comparable exams from other vendors such as Microsoft and Citrix are significantly less?  However I or any other candidate feels about the BETA exam, it’s important to not lose sight that it IS a BETA exam.  The BETA exam process assists VMware in developing a quality, consistent exam experience.  Due to the time constraint, I was only able to leave about five individual question comments where I saw issues.  Hopefully my exam results along with the comments were of value to VMware, and I am thankful that VMware invited me.

Updated 11/11/10:  A VMTN forum discussion on the exam has broken out at http://communities.vmware.com/message/1645177.  You’ll find some helpful tips from others here.  One thing I wanted to point out from the thread dealing with the Visio tool to make sure others aren’t tripped up by this:

Issue:

…never thought I’d long for Visio, my main issue being getting finnished up only to realise some lines didn’t go where I wanted, but the only way to move them was to click ‘Start Over’

Response:

If you put diagram connectors in the wrong place, you didn’t have to “Start Over”. There’s a scissors tool in the lower right corner of the Visio tool which “cuts” individual connectors. I figured that out on my first diagram after running into the same trouble you did. It would have been helpful for Jon Hall of VMware to point that out in his most excellent Flash demo of the Visio tool.

Update 1/11/11:  I passed.

Update 8/18/11:  No VCDX4 certificate or welcome kit received yet.

Q: What’s your Windows template approach?

November 7th, 2010 by jason No comments »

Once upon a time, I was a Windows Server administrator.  Most of my focus was on Windows Server deployment and management.  VMware virtualization was a large interest but my Windows responsibilities dwarfed the amount of time I spent with VMware.  One place where these roads intersect is Windows templates.  Because a large part of my job was managing the Windows environment, I spent time maintaining “the perfect Windows template”.  The following were the ingredients I incorporated:

Applications:

  • Adobe Acrobat Reader
  • Advanced Find & Replace
  • Beyond Compare
  • Diskeeper
  • MS Network Monitor
  • MS Resource Kits
  • NTSEC Tools
  • Latest MS RDP Client
  • Symantec Anti-Virus CE
  • MS UPHClean
  • VMware Tools
  • Windows Admin Pack
  • Windows Support Tools
  • Winzip Pro
  • Sysinternals Suite
  • Windows Command Console
  • BGINFO
  • CMDHERE
  • Windows Perf Advisor
  • MPS Reports
  • GPMC
  • SNMP

Tweaks:

  • Remote Desktop enabled
  • Remote Assistance disabled
  • Pagefile
  • Complete memory dump
  • DIRCMD=/O env. variable
  • PATH tweaks
  • taskmgr.exe in startup, run minimized
  • SNMP
  • Desktop prefs.
  • Network icon in System Tray
  • Taskbar prefs.
  • C: 12GB
  • D: 6GB
  • Display Hardware acceleration to Full*

* = if necessary

VMware virtualization is now and has been my main focus going on two years.  By title, I’m no longer a Windows Server administrator and I don’t care to spend a lot of time worrying about what’s in my templates.  I don’t have to worry about keeping several applications up to date.  In what I do now, it’s actually more important to consistently work with as generic a Windows template as possible.  This is to ensure that projects I’m working with on the virtualization side of things aren’t garfed up by any of the 30+ changes made above.  Issues would inevitably appear, and each time I’d have to counterproductively work through the lists above as possible culprits.  As such, I now take a minimalist approach to Windows templates as follows:

Applications:

  • VMware Tools

Tweaks:

  • C: 20GB
  • VMXNET3 vNIC
  • Activate Windows
  • wddm_video driver*
  • Disk Alignment
  • Display Hardware acceleration to Full*

* = if necessary

In large virtualized environments, templates may be found in various repositories due to network segmentation, firewalls, storage placement, etc.  As beneficial as templates are, keeping them up to date can become a significant chore, and the time spent doing so eats away at the time savings benefit they provide.  Deployment consistency is key to reducing support and incident costs, so making sure templates in distributed locations are consistent is not only a chore, it is of paramount importance.  If this is the scenario you’re fighting, automated template and/or storage replication is needed.  Another solution is to get away from templates altogether and adopt scripted installation, another tried and true approach which provides automation and consistency without the hassle of maintaining templates.  The hassle in this case isn’t eliminated completely; it’s shifted into other areas such as maintaining PXE boot services, maintaining PXE images, and maintaining post build/application installation scripts.  I’ve seen large organizations go the scripted route in lieu of templates.  One reason could simply be that scripted virtual builds are strategically consistent with the organization’s scripted physical builds.  Another could be the burden of maintaining templates as I discussed earlier.  Is this a hint that templates don’t scale in large distributed environments?
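To illustrate the consistency problem, here’s a minimal sketch (hypothetical helper names, not any real VMware or replication tooling) of the core idea behind detecting template drift between two sites: checksum everything in each repository and diff the manifests.

```python
import hashlib
from pathlib import Path

def manifest(repo_dir):
    """Hash every file under a template repository so copies can be compared."""
    digests = {}
    for path in sorted(Path(repo_dir).rglob("*")):
        if path.is_file():
            rel = str(path.relative_to(repo_dir))
            digests[rel] = hashlib.sha256(path.read_bytes()).hexdigest()
    return digests

def drifted(site_a, site_b):
    """Return relative paths whose content differs or exists at only one site."""
    a, b = manifest(site_a), manifest(site_b)
    return sorted(k for k in set(a) | set(b) if a.get(k) != b.get(k))
```

Anything `drifted` returns is a template that needs re-replication; an empty list means the sites match.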

Do you use templates and if so, what is your approach in comparison to what I’ve written about?

EMC Celerra Network Server Documentation

November 6th, 2010 by jason No comments »

EMC has updated their documentation library for the Celerra to version 6.0.  If you work with the Celerra or the UBER VSA, this is good reference documentation to have.  The updated Celerra documentation library on EMC’s Powerlink site is here: Celerra Network Server Documentation (User Edition) 6.0 A01.  The document library includes the following titles:

  • Celerra Network Server User Documents
    • Celerra CDMS Version 2.0 for NFS and CIFS
    • Celerra File Extension Filtering
    • Celerra Glossary
    • Celerra MirrorView/Synchronous Setup on CLARiiON Backends
    • Celerra Network Server Command Reference Manual
    • Celerra Network Server Error Messages Guide
    • Celerra Network Server Parameters Guide
    • Celerra Network Server System Operations
    • Celerra Security Configuration Guide
    • Celerra SMI-S Provider Programmer’s Guide
    • Configuring and Managing CIFS on Celerra
    • Configuring and Managing Celerra Network High Availability
    • Configuring and Managing Celerra Networking
    • Configuring Celerra Events and Notifications
    • Configuring Celerra Naming Services
    • Configuring Celerra Time Services
    • Configuring Celerra User Mapping
    • Configuring iSCSI Targets on Celerra
    • Configuring NDMP Backups on Celerra
    • Configuring NDMP Backups to Disk on Celerra
    • Configuring NFS on Celerra
    • Configuring Standbys on Celerra
    • Configuring Virtual Data Movers for Celerra
    • Controlling Access to Celerra System Objects
    • Getting Started with Celerra Startup Assistant
    • Installing Celerra iSCSI Host Components
    • Installing Celerra Management Applications
    • Managing Celerra for a Multiprotocol Environment
    • Managing Celerra Statistics
    • Managing Celerra Volumes and File Systems Manually
    • Managing Celerra Volumes and File Systems with Automatic Volume Management
    • Problem Resolution Roadmap for Celerra
    • Using Celerra AntiVirus Agent
    • Using Celerra Data Deduplication
    • Using Celerra Event Enabler
    • Using Celerra Event Publishing Agent
    • Using Celerra FileMover
    • Using Celerra Replicator (V2)
    • Using EMC Utilities for the CIFS Environment
    • Using File-Level Retention on Celerra
    • Using FTP on Celerra
    • Using International Character Sets with Celerra
    • Using MirrorView Synchronous with Celerra for Disaster Recovery
    • Using MPFS on Celerra
    • Using Multi-Protocol Directories with Celerra
    • Using NTMigrate with Celerra
    • Using ntxmap for Celerra CIFS User Mapping
    • Using Quotas on Celerra
    • Using SnapSure on Celerra
    • Using SNMPv3 on Celerra
    • Using SRDF/A with Celerra
    • Using SRDF/S with Celerra for Disaster Recovery
    • Using TFTP on Celerra Network Server
    • Using the Celerra nas_stig Utility
    • Using the Celerra server_archive Utility
    • Using TimeFinder/FS, NearCopy, and FarCopy with Celerra
    • Using Windows Administrative Tools with Celerra
    • Using Wizards to Configure Celerra
  • NS-120
    • Celerra NS-120 System (Single Blade) Installation Guide
    • Celerra NS-120 System (Dual Blade) Installation Guide
  • NS-480
    • Celerra NS-480 System (Dual Blade) Installation Guide
    • Celerra NS-480 System (Four Blade) Installation Guide
  • NS20
    • Celerra NS20 Read Me First
    • Setting Up the EMC Celerra NS20 System
    • Celerra NS21 Cabling Guide
    • Celerra NS21FC Cabling Guide
    • Celerra NS22 Cabling Guide
    • Celerra NS22FC Cabling Guide
    • Celerra NS20 System (Single Blade) Installation Guide
    • Celerra NS20 System (Single Blade with FC Option Enabled) Installation Guide
    • Celerra NS20 System (Dual Blade) Installation Guide
    • Celerra NS20 System (Dual Blade with FC Option Enabled) Installation Guide
  • NX4
    • Celerra NX4 System Single Blade Installation Guide
    • Celerra NX4 System Dual Blade Installation Guide
  • Regulatory Documents
    • C-RoHS HS/TS Substance Concentration Chart Technical Note

If you’re looking for more Celerra documentation, check out the Celerra Network Server General Reference page.

Performance charts fail after Daylight Savings changes are applied

November 5th, 2010 by jason No comments »

Daylight savings changes this weekend allow many folks to get an extra hour of sleep.  However, a VMware vSphere 4.1 bug has surfaced which may spoil the fun. 

VMware has published KB 1030305 (Performance charts fail after Daylight Savings changes are applied) which serves as a reminder that the pitfalls and treachery of mixing daylight savings changes and million dollar datacenters are not behind us yet.  Those who are on vSphere 4.1 and observe the weekend time change will run into problems come Sunday morning:

After Daylight Savings settings are applied:

  • Performance charts do not display data
  • Past week, month, and year performance overview charts are not displayed
  • Datastore performance/space data charts are not displayed
  • You receive the error: The chart could not be loaded
  • This occurs when clocks are set back 1 hour from Daylight Savings Time to Standard Time

VMware offers the following workaround:

Use Advanced Chart Options:

  1. Click Performance
  2. Click Advanced
  3. Click Chart Options and then choose the chart you want to review

Use a custom time range when viewing performance charts after clocks are set back:

  1. Click Performance
  2. Click the Time Range dropdown
  3. Choose Custom
  4. Specify From and To options that exclude the hours for when the time change occurred

For example:

If Standard Time settings were applied on November 7, at 01:00 AM, you could use these ranges:
Before the time change:
From 1/11/2010 12:00 AM To 7/11/2010 12:00 AM
After the time change:
From 7/11/2010 03:00 AM To 8/11/2010 3:00 PM
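The KB’s example ranges can be derived mechanically.  Here’s a small sketch (my own illustration, not from the KB) that, given the moment clocks were set back, yields the end of the “before” range and the start of the “after” range so the repeated hour is excluded:

```python
from datetime import datetime, timedelta

def safe_chart_window(change_at, pad_hours=2):
    """Given when clocks fell back, return the end of the 'before' range and
    the start of the 'after' range so the repeated hour is excluded."""
    before_to = change_at                                # stop charting at the change
    after_from = change_at + timedelta(hours=pad_hours)  # resume past the overlap
    return before_to, after_from

# Standard Time applied November 7 at 01:00 AM, as in the KB example
before_to, after_from = safe_chart_window(datetime(2010, 11, 7, 1, 0))
```

With a 2-hour pad, the “after” range starts at 03:00 AM, matching the KB’s example custom range.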

Have a great weekend!

Hyper9 Pulse Check

November 5th, 2010 by jason No comments »

SnagIt Capture

It has been several months since I’ve written about Austin, TX based Hyper9.  I know they’ve been hard at work with continuous development of their flagship Hyper9 management product in addition to a strong nationwide marketing campaign.  Just last spring they presented locally at a quarterly Minneapolis VMUG meeting.

What’s the latest news in the Hyper9 camp?  I’m glad you asked…

Leadership Updates:

  • 27-year veteran Bob Quillin joins Hyper9 as Chief Marketing Officer.  Quillin was formerly with VMware, where he led vCenter virtualization and cloud management solution marketing for the configuration and compliance management product lines.  Bob’s background comprises enterprise, systems, application, and network management, including tenure at storage giant EMC.
  • Greg Barone joins Hyper9 as Vice President of Worldwide Sales.  Greg’s 18-year background is in sales and sales management for tech companies ranging from startups to multi-billion dollar enterprises.  He held the role of V.P. of Worldwide Sales at Cognio (a Cisco acquisition).

Q3 2010 Highlights:

  • Q3 revenue doubled over the prior period and represents Hyper9’s best performance to date.
  • 400% year-over-year revenue growth with revenue doubling each quarter this year.
  • A broad range of new business from customers of various sizes and verticals.
  • Successfully road tested at large scale in enterprise deployments consisting of tens of thousands of VMs.
  • Introduction of Cloud Cost Estimator Lens which provides comparisons to providers such as Amazon EC2.
  • New partner alliances designed to catapult Hyper9’s growth into global markets.

This sounds like great news for Hyper9.  As someone who has been involved with Hyper9 product development in the earlier stages, it has been fascinating to watch this company evolve into a successful ecosystem partner.

Do you use Hyper9?  What do you think of the product?  I’d like to hear your honest opinion, and so would Hyper9.

SexyBookmarks WordPress Plugin and RSS Feeds

November 4th, 2010 by jason No comments »

Wednesday night I wrote up a blog post on Veeam Backup & Replication 5.0 and scheduled its release for this morning at 9am.  I’ve been swamped at work but I did get a chance to validate mid-morning that the post was up.  Shortly after, I realized the blog’s RSS feed had stopped working as of last night’s Veeam post.

Once home from work, I started the troubleshooting.  Between the self-hosting, IIS, MySQL, the number of plugins in use (which I do try to keep to an absolute minimum), the theme mods, the monetizing, Feedburner, etc., my blog has several moving parts and can be a bit of a pain.  WordPress itself is solid but with the add-ons and hacks, it can become a house of cards (it’s a lot like running a game server).  The more time that is invested in a blog, the less of an option it is to firebomb it and start over.  I crossed that point of no return a long time ago.  Themes, monetization, and all that stuff aside, the content (and to some degree the comments) is by far the most valuable piece NOT to be lost.  In retrospect, solving technical issues as they arise is satisfying and a slight boost to the ego, but often there just aren’t enough hours in the day for these types of problems.

Troubleshooting a WordPress blog is best approached from a chronological standpoint.  Think of the blog as one long timeline of sequential events.  You’ve got post history, comment history, WordPress code history, theme history, plugin history, integration history, hack history, platform infrastructure history (Windows/Linux, MySQL, IIS or Apache), etc.  Blog problems can usually be tied back to changes in any one of these components.  If the blog sees a lot of action, malfunctions will usually surface quickly: “It was working yesterday, but something broke today.”  Such was the case when my blog’s RSS feeds stopped updating this morning.  As I stated earlier, the best approach is to think about the timeline and work backwards from the point of breakage, identifying each change that was made to the blog.  Historically for me, it’s usually a plugin or a recent post which has some sort of nasty formatting embedded in it somewhere.

So.. the problem: RSS Feeds broken; no longer updating at Feedburner.  Impact: 3009 RSS subscribers are unaware I’ve written new content – bad for me, bad for sponsors.

Solving this problem was a treat because I had made multiple changes to the blog last night:

  1. Upgraded the theme to a new rev. (finally!)
  2. Applied existing hack functionality to new theme files
  3. Installed two new plugins
  4. Made changes to sidebar widgets
  5. Wrote a Veeam blog post which had some special characters copied/pasted in it

I started by testing my blog’s RSS feed with a syntax/format checker at Feedburner.  It failed.  There’s bad code embedded somewhere which can stem from any of the above changes.

Next I shut down Feedburner integration to help isolate the problem.  With the Feedburner plugin disabled, my blog supplies the native RSS feed capability built into WordPress.  Hitting the URL for the feed showed failure: a long hourglass followed by a nearly blank browser page with a bit of information, again about bad code in the feed which couldn’t be handled.  So now I knew Feedburner was indeed not updating because of bad content in the feed (that behavior is by design, which is why feed checkers exist to aid in troubleshooting).
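A feed checker doesn’t need to be fancy; at its core it just verifies the feed parses as XML.  A minimal sketch of the idea (using Python’s standard library, not the actual Feedburner validator):

```python
import xml.etree.ElementTree as ET

def feed_is_well_formed(feed_xml):
    """Return True if the feed parses as XML.  Stray markup injected into a
    feed by a misbehaving plugin typically breaks parsing, which is exactly
    the failure an RSS reader (or Feedburner) chokes on."""
    try:
        ET.fromstring(feed_xml)
        return True
    except ET.ParseError:
        return False
```

Run this against the feed before and after disabling each suspect change and the offending component falls out quickly.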

Good progress, however I’m still left with identifying which change above caused the RSS feed to stop working.  The next step is to start backing out the above changes.

  1. I started by unpublishing the Veeam post.  No dice.
  2. I then rolled back to the old version of the theme.  Problem still exists.
  3. Then I disabled the SexyBookmarks 3.2.4.2 plugin.  B-I-N-G-O

Shortly after, I found specifically what in the plugin was causing the RSS feed issue.  There’s an option in the plugin settings called Show in RSS feed? (displayed in the image below).  This feature is designed to show the little social media sharing buttons in RSS feeds when set to Yes.  Whether or not I had ever explicitly configured this option, it was set to Yes.  When set to Yes, it embeds code in the RSS feed which RSS readers don’t understand, which then leads to RSS feed failure.  With this find, I could disable the feature while keeping the plugin enabled.

SnagIt Capture

I can’t say I learned a great deal here.  It was more reinforcement of what I’ve learned in the past.  I’ve been through blog troubleshooting exercises like this before and they were solved using the same or similar techniques.  Blog plugins and modifications ship with the implied warranty of “buyer beware”.  When something goes wrong with your blog, you should be able to tie the problem back to recent changes or events.  In an environment such as mine where I’m the only one making changes and writing content, I’m accountable for what broke and I can fairly quickly isolate the problem to something I did.  Larger blogs hosted elsewhere with multiple owners and authors introduce troubleshooting complexity, particularly if changes aren’t documented.  I guess that’s why change management was invented.  My lab is the last environment I’m aware of where changes can be made without a CR.  That’s one of the reasons why the lab remains so sexy and is such a great escape.

Veeam Backup & Replication 5.0

November 4th, 2010 by jason No comments »

Back in July of this year, I attended Gestalt IT Tech Field Day in Seattle.  You may recall that I was the recipient of a presentation from Veeam and wrote about their upcoming Backup & Replication product and new vPower technology. 

Backup & Replication 4.0 had already been a category finalist at last year’s VMworld conference, and Veeam was not content with the runner-up position.  In September, the Columbus, Reading, and Sydney based organization showcased their new development at VMworld 2010 San Francisco and walked away with Best of Show and Best New Technology.  I can tell you as an attendee of the conference for several years that nothing I’ve seen rivals the competition in the Solutions Exchange.  For Veeam to win as they did in these categories is a pretty big deal.  I know they are both excited and proud of this year’s results.

In October, Veeam released Backup & Replication 5.0 to the public.  Companies of all sizes can now leverage the technology and realize the features, efficiency, and savings Veeam brings to the table.

So what’s baked into 5.0?  Veeam B&R alumni will find a significant portion of what made previous versions so great.  At a high level, it’s 2-in-1 data protection for VMware virtual infrastructure: backup and replication features consolidated into a single no-nonsense solution.  New in version 5 you’ll find the following:

1. Instant VM Recovery: Restore an entire virtual machine from a backup file in minutes. Users remain productive while IT troubleshoots the issue.

2. U-AIR™ (Universal Application-Item Recovery): Recover individual items from any virtualized application, on any operating system, without additional backups, agents or software tools. Eliminates the need for expensive special-purpose tools and extends granular recovery to all applications and users.

3. SureBackup™ Recovery Verification: Automatically verify the recoverability of every backup, of every virtual machine, every time. Eliminates uncertainty and sets a new standard in data protection.

4. On-demand Sandbox: Create test VMs from any point in time to troubleshoot problems or test workarounds, software patches and new application code. Eliminates the need for dedicated test labs and the overhead that extra VMware snapshots place on VMs.

5. Instant File-level Recovery for any file system: Recover an entire VM or an individual file from the same image-level backup. Extends instant file-level recovery to all VMs.

SnagIt Capture

Personally, I’ve been accumulating quite a bit of experience with Veeam Backup & Replication.  Over the past year, I have been using it to provide various levels of protection for tiers of data in my lab on which my family and I are immensely dependent.  Last year at this time I was backing up to tape.  Those with growing data sets know that the tape model isn’t sustainable long term.  Not to mention, tape is a datacenter fashion faux pas; even for a lab environment, I was catching hell about it in the social media streams.  Now the tape library is gone, the lab is 100% virtualized, and Veeam backs up all of it.  Included in that figure is data I cannot put a price on: well over a decade of my technical work, financial and tax data, and 81GB of family pictures and videos which are irreplaceable.  Today, Veeam is the provider I trust as the sole safety net between peace of mind and utter disaster.  Beyond its intrinsic functionality, I used Veeam Backup & Replication as a solution in my VCDX design submission and successfully defended it last February.  Assuming it fits the business requirements and design constraints, it’s a solid choice in a VMware virtualized datacenter.

As I said before, version 5 is currently shipping.  List price starts at $599 USD per socket for Standard Edition and $899 USD per socket for Enterprise Edition.  If you’ve been carrying B&R maintenance as of 6/30/10, you’re already eligible for a free upgrade to Enterprise Edition.

If that weren’t enough to tantalize your tentacles, Veeam is honoring a Competitive Upgrade program through 12/24/10.  New customers receive a 25% discount on the price of Standard or Enterprise Edition with proof of purchase of another backup product.

Download a free 30-day trial of Veeam Backup & Replication 5.0 here.