Posts Tagged ‘Microsoft’

network bandwidth transfer.xlsx

March 19th, 2011

Many years ago, before I got involved with VMware, before VMware existed in fact, I was a Systems Engineer supporting Microsoft Windows Servers.  I also dabbled in technology-related things such as running game servers like Quake II and Half-Life Counter-Strike on the internet.  One area where these responsibilities intersected was the need to know the rate at which data could traverse a rated network segment, in addition to the amount of time it would take for said data to travel from point A to point B.

At that point in time, there weren't half a dozen free web-based calculators that could be found via a Google search.  As a result, I started an Excel spreadsheet.  It began as a tool which would allow me to enter a value in KiloBytes, MegaBytes, or GigaBytes.  From there, it would calculate the amount of time it would take that data to travel across the wire.  This data was useful in telling me how many players the Counter-Strike server could scale to, and it would provide an estimate for how much the bandwidth utilization was going to cost me per month.  I also used this information in the office to plan backup strategies, data transfer, and data replication.
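The transfer-time math the spreadsheet performs boils down to a few lines.  Here's a quick Python sketch (the function name is mine, not something from the workbook) under the same ideal-conditions assumption the spreadsheet makes:

```python
def transfer_time_seconds(size_bytes: float, link_mbps: float) -> float:
    """Time to move size_bytes across a link rated at link_mbps (megabits/sec).

    Assumes the most optimal of conditions: no latency, no protocol
    overhead -- the same assumption the spreadsheet makes.
    """
    bits = size_bytes * 8
    return bits / (link_mbps * 1_000_000)

# Example: moving 1 GiB across a 100 Mbps segment
gib = 1024 ** 3
print(f"{transfer_time_seconds(gib, 100):.1f} seconds")  # roughly 85.9 seconds
```

Real-world numbers will always be worse once latency, TCP windowing, and protocol overhead get involved, which is why the spreadsheet's results should be read as a best-case floor.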

I’ve expanded its capabilities slightly over the years, as well as scaled it up to handle the volume of data we deal with, which has increased exponentially.  In addition to the functions it performed in the past, I added a data conversion section which translates anything to anything within the range of bits to YottaBytes.  It performs both Base 2 (binary) and Base 10 (decimal) calculations, which are maintained on their own respective worksheet tabs.  I prefer to work with Base 2 because it’s old school and I believe it is the most accurate measure of data and conversion.  To this point, Wikipedia explains:

The relative difference between the values in the binary and decimal interpretations increases, when using the SI prefixes as the base, from 2.4% for kilo to over 20% for the yotta prefix.  This chart shows the growing percentage of the shortfall of decimal interpretations from binary interpretations of the unit prefixes plotted against the logarithm of storage size.
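That divergence is easy to reproduce.  A quick Python sketch (mine, not from the workbook) comparing the Base 2 and Base 10 interpretations of each SI prefix:

```python
prefixes = ["kilo", "mega", "giga", "tera", "peta", "exa", "zetta", "yotta"]

for i, name in enumerate(prefixes, start=1):
    binary = 2 ** (10 * i)       # Base 2 interpretation (e.g. kilo = 1024)
    decimal = 10 ** (3 * i)      # Base 10 interpretation (e.g. kilo = 1000)
    shortfall = (binary / decimal - 1) * 100
    print(f"{name:>5}: {shortfall:5.1f}%")
```

Running it shows kilo at 2.4% and yotta at roughly 20.9%, matching the range quoted above.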


However, Base 10 is much easier for the human brain to work with as the numbers are nice and round.  I believe this is how and why Base 10 became known as “Salesman Bytes” way back when.  I’ll be darned if I can still find a reference to that term in Google.

Long boring story short, this is a handy storage/network data conversion tool I still use from time to time when working with large or varying numbers.  For those who don’t have a preferred tool for a given use case, you’re welcome to use the one I created.  A few notes:

  • Due to the extreme length of two of the formulas in the workbook, I had to upgrade it to the Excel 2007 format at a minimum, which is the reason for the .xlsx file extension.
  • The data transfer section assumes the most optimal of conditions, no latency, etc.

Download network bandwidth transfer.xlsx (22.6KB)

Twin Cities Powershell Users Group Meeting March 8th

March 7th, 2011

The next Twin Cities PowerShell Users Group meeting will convene on March 8th at 4:30 p.m. (THAT’S TOMORROW!) at the Microsoft office in Bloomington. There are three reasons I am encouraging as many people as possible to attend this event.

Date:           March 08, 2011
Time:           4:30-6:00 p.m.
Location:     8300 Norman Center Drive, 9th Floor, Bloomington, MN 55437

Please attend if you are able, and forward this invite to anybody else that you feel might be interested in attending. RSVP at this link.

The content being presented is focused on leveraging PowerCLI to manage and monitor your VMware environment. PowerCLI is an extremely powerful set of capabilities which will allow you to automate and manage your environment in a very efficient manner. Being able to leverage PowerCLI will save you time and make you a better VMware administrator. Additionally, this skill set is applicable to many other aspects of IT.

The presenter at this event is Ryan Grendahl from Datalink. For those of you who don’t know Ryan, he is extremely strong around VMware, storage, and automation. In fact, Ryan recently attained his VCDX, becoming one of only 66 people in the world to earn this very highly regarded certification. Ryan is very proficient and knowledgeable around PowerCLI and I believe that you will learn a lot by attending.

This event is at the Microsoft office in Bloomington. I would love to see a HUGE turnout to this event so that the Microsoft staff can see how interested people are in VMware based solutions. I’m hoping that we can make this a standing room only turnout.

WordPress 3.1 Upgrade Issues

February 27th, 2011

I noticed this evening that WordPress 3.1 was available and my blog’s dashboard was coaxing me to upgrade.  Every single time I have upgraded, I have made a backup beforehand.  At the end of a long week, my logic was shot and I proceeded with the upgrade without a backup.  As luck would have it, my Windows Server 2003 and IIS based blog no longer worked.  Page loads were an endless hourglass, with no 404 or any other web browser errors.  Another symptom was that the w3wp.exe process (this is IIS) on my server consumed extremely high CPU during the endless page loads.  When I cancelled a page load, CPU utilization returned to normal.

As I have an ongoing obligation to blog sponsors, not to mention I was mentally drained, I was feeling pretty screwed at this point, but I was prepared to restore from the previous night’s Veeam file level backups.  I turned to Google looking for other WordPress upgrade experiences.  Search results quickly led me to this thread, where a ton of users reported the same issue.  A chap by the moniker of jarnez had the solution, or at least a workaround, which worked for me as well as others.  Open the blog’s admin dashboard (thankfully this was still functional) and install the Permalink Fix & Disable Canonical Redirects Pack plugin, and all is back to normal again.

Thank you jarnez!!!

Q: What’s your Windows template approach?

November 7th, 2010

Once upon a time, I was a Windows Server administrator.  Most of my focus was on Windows Server deployment and management. VMware virtualization was a large interest but my Windows responsibilities dwarfed the amount of time I spent with VMware.  One place where these roads intersect is Windows templates.  Because a large part of my job was managing the Windows environment, I spent time maintaining “the perfect Windows template”.  Following were the ingredients I incorporated:

  • Adobe Acrobat Reader
  • Advanced Find & Replace
  • Beyond Compare
  • Diskeeper
  • MS Network Monitor
  • MS Resource Kits
  • NTSEC Tools
  • Latest MS RDP Client
  • Symantec Anti-Virus CE
  • MS UPHClean
  • VMware Tools
  • Windows Admin Pack
  • Windows Support Tools
  • Winzip Pro
  • Sysinternals Suite
  • Windows Command Console
  • BGINFO
  • CMDHERE
  • Windows Perf Advisor
  • MPS Reports
  • GPMC


  • Remote Desktop enabled
  • Remote Assistance disabled
  • Pagefile
  • Complete memory dump
  • DIRCMD=/O env. variable
  • PATH tweaks
  • taskmgr.exe in startup, run minimized
  • SNMP
  • Desktop prefs.
  • Network icon in System Tray
  • Taskbar prefs.
  • C: 12GB, D: 6GB
  • Display Hardware acceleration to Full*

* = if necessary


VMware virtualization is now and has been my main focus going on two years.  By title, I’m no longer a Windows Server administrator and I don’t care to spend a lot of time worrying about what’s in my templates.  I don’t have to worry about keeping several applications up to date.  In what I do now, it’s actually more important to consistently work with as generic a Windows template as possible.  This is to ensure that projects I’m working with on the virtualization side of things aren’t garfed up by any of the 30+ changes made above.  Issues would inevitably appear, and each time I’d need to counterproductively deal with the lists above as possible culprits.  As such, I now take a minimalist approach to Windows templates as follows:

  • VMware Tools
  • C: 20GB
  • VMXNET3 vNIC
  • Activate Windows
  • wddm_video driver*
  • Disk Alignment
  • Display Hardware acceleration to Full*

* = if necessary


In large virtualized environments, templates may be found in various repositories due to network segmentation, firewalls, storage placement, etc.  As beneficial as templates are, keeping them up to date can become a significant chore and the time spent doing so eats away at the time savings benefit which they provide.  Deployment consistency is key in reducing support and incident costs but making sure templates in distributed locations are consistent is not only a chore, but it is of paramount importance.  If this is the scenario you’re fighting, automated template and/or storage replication is needed.  Another solution might be to get away from templates altogether and adopt a scripted installation which is another tried and true approach which provides automation and consistency, but without the hassle of maintaining templates.  The hassle in this case isn’t eliminated completely.  It’s shifted into other areas such as maintaining PXE boot services, maintaining PXE images, and maintaining post build/application installation scripts.  I’ve seen large organizations go the scripted route in lieu of templates.  One reason could simply be that scripted virtual builds are strategically consistent with the organization’s scripted physical builds.  Another could be the burden of maintaining templates as I discussed earlier.  Is this a hint that templates don’t scale in large distributed environments?

Do you use templates and if so, what is your approach in comparison to what I’ve written about?

Gestalt IT Tech Field Day – Nimble Storage

July 15th, 2010

Next up at Gestalt IT Tech Field Day is Nimble Storage, which comes out of stealth mode and officially launches today.  Nimble Storage provides a unique iSCSI storage platform, eliminating traditional backup windows using efficient snapshot technology coupled with high performance flash drives.  A handful of use cases have already been identified for both virtualized and bare metal OS and application platforms.  I’m baffled as to how much competitive room there is in the storage realm, particularly with giants like NetApp, EMC, Hitachi, and others.  I believe this is a compliment to each of the players, as it takes incredibly bright minds and innovation to stake and maintain a claim.

The secret sauce is in Nimble’s CASL (pronounced “castle”, Cache-Accelerated Sequential Layout) architecture, which can be thought of as a reincarnation of VMware co-founder Mendel Rosenblum’s Log-Structured File System.  Its key components are:

  • Inline Compression
  • Large Adaptive Flash Cache
  • High-Capacity Disk Storage
  • Integrated Backup

Resulting advantages provided are:

  • Inline compression (2:1 – 4:1 ratio)
  • High performance
  • Low cost SATA disk stores both primary data as well as 90 day snapshot retention
  • WAN-efficient offsite replication for cost-effective DR
  • Storage and Backup Optimized for VMware/Microsoft environments
  • Benefits for Sharepoint, SQL, and Exchange as well

From the Nimble Storage website:

Storing, accessing, and protecting your data shouldn’t be so complicated and expensive. Nimble’s breakthrough CASL™ architecture combines flash memory with high-capacity disk to converge storage, backup, and disaster recovery for the first time. The bottom line: High-performance iSCSI storage, instant backups and restores, and full-featured disaster recovery — all in one cost-effective, easy-to-manage solution.

Benefits for VMware Deployments

• Dramatic VM Consolidation and Cost Reduction
Groundbreaking CASL architecture includes innovations that enable dramatic consolidation of Virtual Servers and desktops. The hybrid flash and low-cost HDD-based architecture deliver very high random performance for demanding workloads at very low cost. Built-in capacity optimization and block sharing capabilities provide large capacity savings for both flash and disk. The net result is a single array that can easily serve the performance and capacity requirements for hundreds of high performance virtual servers, dramatically reducing cost, rackspace, power, and management expense. Further consolidation and cost savings come from the built-in capacity optimized backup capability, which eliminates dedicated disk backup devices, while enabling 90 days of efficient backup.

• Backup and Restore VMs Instantly
Nimble arrays enable instant Hypervisor consistent backup and restore of datastores and VMs, while eliminating backup windows. Nimble Protection Manager integrates with vCenter APIs to simplify management of Hypervisor-consistent backups, replicas and restores for VMware environments by leveraging Nimble’s instant, capacity optimized array-based snapshots. This converged solution enables dramatically better RPOs and RTOs compared with traditional solutions.

• Automated, Fast Offsite Disaster Recovery
WAN-efficient replication and fast failover enable quick, cost effective disaster recovery. Combined with instant backup capabilities, this enables rapid restore and very granular recovery points in the event of a site disaster. The entire failover process can be automated via management tools such as VMware Site Recovery Manager (SRM) which leverages a Nimble SRA to control the storage level failover capabilities.

• Simplified Virtual Infrastructure Management
Using predefined ESX performance and data protection policies, storage for new datastores can be provisioned and protected in just three steps. The Nimble Protection Manager integrates with vCenter APIs to simplify management of Hypervisor-consistent backups, replicas and restores for VMware environments, by leveraging Nimble’s instant, capacity optimized array based snapshots. A vCenter plugin simplifies and accelerates the task of cloning datastore or VM templates, by leveraging Nimble’s instant, high space efficient zero copy clones.

Two 3U capacity offerings are available, both of which are served by an identical configuration of Active/Passive controllers, a large flash layer, multicore Intel Xeon processors, and 2x quad GbE NICs (10GbE ready and available soon):

  1. CS220: 9TB primary + 108TB backup
  2. CS240: 18TB primary + 216TB backup


Follow them on Twitter at @NimbleStorage.

Introduction to Nimble Storage at Tech Field Day Seattle from Stephen Foskett on Vimeo.

Note : Tech Field Day is a sponsored event. Although I receive no direct compensation and take personal leave to attend, all event expenses are paid by the sponsors through Gestalt IT Media LLC. No editorial control is exerted over me and I write what I want, if I want, when I want, and how I want.

Windows 7 Launch Multiple Program Instances Shortcut

June 22nd, 2010

I don’t pretend to know all of the Windows keyboard shortcuts, but I do maintain an arsenal of frequently used (aka useful) ones.  Here’s one that I discovered by accident which is helpful for applications of which multiple instances can typically be spawned simultaneously: applications like the vSphere Client, PuTTY, Remote Desktop Connection, Command Prompt, and maybe a web browser if you dislike browser tabs.

The shortcut:

With one instance of the desired application already launched (and visible on the Windows 7 taskbar), SHIFT + LEFT MOUSE CLICK on the application’s taskbar button.


VOILA!  An additional instance is spawned.


I’ve found immediate use for this when launching multiple vSphere Client instances.  Sure, I have these frequently used applications pinned to my taskbar for one-click launch efficiency, but when the application already has one instance launched, the target to click on is ergonomically larger and thus easier to find.

This UI enhancement may also work with Vista.  I didn’t use that OS long enough to find out.  I’m not sure if Microsoft has an official name for this technology – surely there must be an acronym for it.  I’ll pay attention during the “Windows 7 was my idea” commercials as this was obviously someone’s idea and this trick could surface there.

ps. On the subject of Windows 7 enhancements: while I do like and use the feature where an application is snapped to one of the four edges of the screen, I’ve also developed a phobia about carefully navigating my mouse while dragging an application where I DO NOT want it to snap and take up a huge chunk of display real estate.  I’m particular about the dimensions of my application windows relative to everything else in the shared area.  The four edges of a Windows 7 display have tractor beams; when your mouse comes close to an edge, it sucks you the rest of the way in and, before you know it, an app is maximized.  I’d bet *nix people don’t have these types of issues.

Active Directory Problems

June 13th, 2010

I’ll borrow an introduction from a blog post I wrote a few days ago titled NFS and Name Resolution because it pretty much applies to this blog post as well:

Sometimes I take things for granted. For instance, the health and integrity of the lab environment. Although it is “lab”, I do run some workloads which are key to keep online on a regular basis. Primarily the web server which this blog is served from, the email server which is where I do a lot of collaboration, and the Active Directory Domain Controllers/DNS Servers which provide the authentication mechanisms, mailbox access, external host name resolution to fetch resources on the internet, and internal host name resolution.

The workloads and infrastructure in my lab are 100% virtualized. The only “physical” items I have are Type 1 hypervisor hosts, storage, and network. By this point I’ll assume most are familiar with the benefits of consolidation. The downside is that when the wheels come off in a highly consolidated environment, the impacts can be severe as they fan out and tip over downstream dependencies like dominoes.

Due to my focus on VMware virtualization, the Microsoft Active Directory Domain Controllers hadn’t been getting the care and feeding they needed.  Quite honestly, there have been several “lights out” situations in the lab for one reason or another.  The lab infrastructure VMs and their underlying operating systems have taken quite a beating but continued running.  Occasionally a Windows VM would detect a need for a CHKDSK.  Similarly, Linux VMs wanted an FSCK.  But they would faithfully return to a login prompt.

A week ago today, the DCs succumbed to the long term abuse.  Symptoms were immediately apparent in that I could not connect to the Exchange 2010 server to access my email and calendar.  In addition, I had lost access to the network drives on the file server.  Given the symptoms, I knew the issue was Active Directory related; however, I quickly found out the typical short term remedies weren’t working.  I looked at the Event Logs for both DCs.  Both were a disaster and, looking at the history, they had been ill for quite a long time.  I was going to have to really dig in to resolve this problem.

I spent several of the following evenings trying to resolve the problem.  As each day passed, anxiety was building because I was without email, which is where I do a lot of my work.  I had cleaned up AD metadata on both DCs, I had removed DCs to narrow the problem down, and I had examined DNS, checking the integrity of AD integrated SRV records.  I had restored the DCs to an isolated network from prior backups, to no avail.  Although AD was performing some base authentication, there were a handful of remaining symptoms indicating AD was still not happy.  A few of the big ones were:

  1. Exchange Services would either not start or would hang on starting
  2. SYSVOL and NETLOGON shares were not online on the DCs
  3. NETDIAG and DCDIAG tests on the DCs both had major failures, primarily inability to locate any DCs, Global Catalog Servers, time servers, or domain names

All of these problems ultimately tied to an error in the File Replication Service log on the DCs:

Event Type: Warning
Event Source: NtFrs
Event Category: None
Event ID: 13566
Date: 6/10/2010
Time: 9:15:56 PM
User: N/A
Computer: OBIWAN
File Replication Service is scanning the data in the system volume. Computer OBIWAN cannot become a domain controller until this process is complete. The system volume will then be shared as SYSVOL. 

To check for the SYSVOL share, at the command prompt, type:
net share 

When File Replication Service completes the scanning process, the SYSVOL share will appear.

The initialization of the system volume can take some time. The time is dependent on the amount of data in the system volume.

I had waited a long period of time for the scan to complete, but it had become apparent that the scan was never going to complete on its own.  After quite a bit of searching, I came up with Microsoft KB Article 263532, How to perform a disaster recovery restoration of Active Directory on a computer with a different hardware configuration.  Specifically, step 3j provided the answer to solving the root cause of the problem.  There is a registry value called BurFlags located in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NtFrs\Parameters\Backup/Restore\Process at Startup.  The value needs to be set to D4 to allow SYSVOL to be shared out.
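For the record, the change can be scripted rather than made by hand in regedit.  Here’s a quick Python sketch (my own, not from the KB article) that builds the equivalent reg.exe command line; since D4 marks the DC as authoritative for the SYSVOL replica set, double check the path and value against KB 263532 before running anything like this against a production DC:

```python
# Hypothetical helper: build the reg.exe command that sets the NtFrs BurFlags
# value to D4 (authoritative restore), per MS KB 263532, step 3j.
KEY = (r"HKLM\SYSTEM\CurrentControlSet\Services\NtFrs\Parameters"
       r"\Backup/Restore\Process at Startup")

def burflags_command(value: int = 0xD4) -> str:
    # REG_DWORD values can be supplied to reg.exe in 0x-prefixed hex
    return f'reg add "{KEY}" /v BurFlags /t REG_DWORD /d {value:#x} /f'

print(burflags_command())
```

Run the printed command on the affected DC, then restart the File Replication Service and watch for SYSVOL to come back online with net share.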

Once this registry value was set, all of the problems I was experiencing went away.  Exchange services started and I had access to my email after a four day inbox vacation.  I had been through a few instances of AD metadata cleanup before, but this turned out to be a more complex problem than that.  I am thankful for internet search engines because I probably would never have solved this problem without the MS KB article.  I was actually coming close to wiping my current AD and starting over, although I knew that would be pretty painful considering the integration of other components like Exchange, SQL, Certificate Services, DNS, Citrix, etc. that were tied to it.