Veeam FastSCP “Agents failed to start” During Copy

February 8th, 2011 by jason 3 comments »

Quick fix here for an operational error I encountered in Veeam FastSCP 3.0.3.  I was trying to copy a file from the VMware vMA 4.1 appliance to a Windows folder.  In Veeam, the vMA appliance is registered as a Linux server and is recognized in the interface as the server object with the penguin.  In this example, I’m trying to copy /etc/motd to my local C: drive on Windows 7 Ultimate 64-bit:

SnagIt Capture

After a delay of several seconds, the error message is displayed:
Agents failed to start, server “vma41.boche.mcse”, client “localhost” Cannot connect to server [x.x.x.x:2500].

SnagIt Capture

The problem is the iptables daemon, which is blocking communication on port 2500.  The workaround I used is to temporarily stop the iptables daemon as follows:

[vi-admin@vma41 etc]$ sudo service iptables stop
Flushing firewall rules: [ OK ]
Setting chains to policy ACCEPT: filter [ OK ]
Unloading iptables modules: [ OK ]

SnagIt Capture

Immediately after the iptables daemon is stopped, I’m able to copy the file:

SnagIt Capture

Now that my file is copied, I’ll undo the workaround, ensuring the vMA appliance is left in the state I had found it with its firewall rules applied:

[vi-admin@vma41 etc]$ sudo service iptables start
Applying iptables firewall rules: [ OK ]
Loading additional iptables modules: ip_conntrack_netbios_n[ OK ]

SnagIt Capture

Left alone, the workaround would persist until the next reboot.  Other workarounds to deal with this issue in a more permanent fashion would be to open port 2500 or use chkconfig to permanently disable the iptables daemon as follows:

sudo chkconfig iptables off
sudo service iptables save
sudo service iptables stop
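
A more targeted permanent fix is to leave the firewall running and open only the port the Veeam agent uses.  A sketch, assuming the stock iptables service on the vMA appliance (the chain layout may differ if the ruleset has been customized):

```shell
# Insert a rule at the top of the INPUT chain permitting the Veeam agent port,
# then persist the running ruleset so it survives an iptables restart or reboot
sudo iptables -I INPUT -p tcp --dport 2500 -j ACCEPT
sudo service iptables save
```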

VCP4 Exam Cram: VMware Certified Professional (2nd Edition)

February 7th, 2011 by jason 3 comments »


Tonight I was grateful to have received from Pearson Education the book VCP4 Exam Cram: VMware Certified Professional (2nd Edition).  The book is authored by Elias Khnaser (Twitter WWW) along with Technical Editors Brian Atkinson (WWW) and my friend Gabrie van Zanten (Twitter WWW).  This 2nd edition is 340 pages in length and ships with a fact-filled cardboard cram sheet in the front as well as a CD in the back which contains VCP4 practice exams and an electronic version of the cardboard cram sheet in case your friends are jealous of your intimate VMware vSphere knowledge and decide to swipe yours.

The book (ISBN-10: 0789740567) includes 10 chapters along with an appendix.  The chapter layout is as follows:

  1. Introducing vSphere 4
  2. Planning, Installing, and Configuring ESX/ESXi 4.1
  3. vNetworking Operations
  4. vStorage Operations
  5. Administration with vCenter
  6. Virtual Machine Operations
  7. vSphere Security and Web Access
  8. Managing vSphere Resources
  9. Monitoring vSphere Resources
  10. Backup and High Availability

There are very few books which focus specifically on the VCP-410 exam.  This one is hot off the presses, published on January 31, 2011.  As such, I would expect to find the most recent and relevant vSphere and exam information within.  Admittedly, I have not read this book cover to cover and have no plans to, having passed the VCP4 17 months ago.  However, I have thumbed through several sections, admiring the quality and thorough coverage of exam blueprint objectives.  This one looks pretty good, and based on my positive experience with Exam Cram books in the past (Active Directory Services Design), I would recommend this book.  As a certification-focused text, it contains practice questions at the end of each chapter and a comprehensive 75-question practice exam at the end of the book to test your knowledge.  Elias is open in recommending this book as supplemental learning in addition to the VMware ICM class (required for the certification) and a few other sources such as the VMware PDF documentation, but to be honest I think he sells himself a little short.  At well over 300 pages of exam-focused material, I think it will go a long way towards passing the VCP-410 written test.

The book is available at Amazon in paperback for $28.40 or the Kindle version for $25.56.  Pearson was kind enough to ship me several copies which I will make available at the upcoming Minneapolis VMware User Group meeting.

What are you still doing here?  You should be reading this book.  Go.

StarWind Partners with OnApp Cloud Hosting

February 3rd, 2011 by jason 1 comment »

Press Release

Burlington, Mass. – February 02, 2011 – StarWind Software Inc., a global leader in developing iSCSI SAN software for small and midsize companies, and OnApp, a leading developer of cloud management software for hosts, have joined forces to provide OnApp’s hosting cloud customers with an affordable and highly available SAN, free for one year and available for a low monthly fee after the 1st year. Storage in the cloud helps overcome the burden of purchasing an expensive SAN solution, which can delay or prevent IT’s migration to the cloud.

OnApp cloud software enables hosting providers to deploy clouds on commodity hardware, and manage cloud resources, failover, users and billing through a simple point-and-click interface. Cost-efficient and reliable storage is essential for stable business continuity in the cloud. The StarWind iSCSI SAN’s architecture and rich feature set provide an ideal solution for OnApp’s hosting customers. Since StarWind software can be installed on commodity servers, the price point enables cloud services to be affordable and eliminates the vendor lock-in associated with proprietary SAN hardware vendors.

“There’s a huge need for cost-effective storage in the hosting mass market, and especially in the fast-growing cloud hosting market,” said Carlos Rego, MD and Chief Architect of OnApp. “With StarWind we’re making it easy for hosts to add high performance cloud storage without huge up-front investment in SANs. Our special free for the 1st year licensing offer helps reduce entry costs to cloud hosting, and with OnApp’s monthly licensing, another vital component of cloud hosting infrastructure is moved from CAPEX to OPEX.”

The StarWind partnership allows OnApp customers to deploy High Availability storage with a free one-year license for StarWind Enterprise HA 16TB or Unlimited TB editions, for up to two servers. After the 1st year, OnApp customers can license StarWind’s HA 16TB or Unlimited TB editions for a low monthly fee.

“Cloud hosting is the future of hosting, and high availability storage is critical to provide server and application redundancy and uptime. In cooperation with our partner OnApp, we are pleased to contribute to the growing cloud space. OnApp provides cost-effective and flexible cloud platforms and StarWind Software guarantees affordable and highly available SAN storage in the cloud,” said Art Berman, CEO of StarWind Software, Inc.

About OnApp

OnApp develops cloud management software for the hosting industry. OnApp software was developed from the ground up to enable mass-market hosts to build their own cloud hosting services. It enables hosts to deploy clouds in the datacenter using commodity hardware; provides rich functionality for cloud deployment, resource management, user management, failover and utility billing; has a high density design to maximize a host’s margins; and features pre-built integration to leading hosting billing engines, including WHMCS, Ubersmith and HostBill.

OnApp launched in July 2010 after two years of development. OnApp has offices in the US and Europe, employs more than 40 staff and can be found online at

For more information about OnApp, please contact:

Robert van der Meulen
+44 208 846 0855


About StarWind Software Inc.

StarWind Software is a global leader in storage management and SAN software for small and midsize companies. StarWind’s flagship product is SAN software that turns any industry-standard Windows Server into a fault-tolerant, fail-safe iSCSI SAN. StarWind iSCSI SAN is qualified for use with VMware, Hyper-V, XenServer and Linux and Unix environments. StarWind Software is focused on providing small and midsize companies with affordable, highly available storage technology which previously was only available in high-end storage hardware. Advanced enterprise-class features in StarWind include Automated Storage Node Failover and Failback, Replication across a WAN, CDP and Snapshots, Thin Provisioning and Virtual Tape management.

Since 2003 StarWind has pioneered the iSCSI SAN software industry and is the solution of choice for over 30,000 customers worldwide in over 100 countries, from small and midsize companies to governments and Fortune 1000 companies.

Press Contacts:
StarWind Software Inc.
+1 (617) 449-7717



Jumbo Frames Comparison Testing with IP Storage and vMotion

January 24th, 2011 by jason 50 comments »

Are you thinking about implementing jumbo frames with your IP storage based vSphere infrastructure?  Have you asked yourself why, or thought about whether the benefits are guaranteed?  Various credible sources discuss it (here’s a primer).  Some will highlight jumbo frames as a best practice, but the majority of what I’ve seen and heard talks about the potential advantages of jumbo frames and what the technology might do to make your infrastructure more efficient.  But be careful not to interpret that as an order of magnitude increase in performance for IP based storage.  In almost all cases, that’s not what is being conveyed, or at least, that shouldn’t be the intent.  Think beyond SPEED NOM NOM NOM.  Think efficiency and reduced resource utilization, which lends itself to driving down overall latency.  There are a few stakeholders when considering jumbo frames.  In no particular order:

  1. The network infrastructure team: They like network standards, best practices, a highly performing and efficient network, and zero downtime.  They will likely have the most background knowledge and influence when it comes to jumbo frames.  Switches and routers have CPUs which will benefit from jumbo frames because processing fewer frames carrying more payload per frame makes the network device inherently more efficient while using less CPU power and consequently producing less heat.  This becomes increasingly important on 10Gb networks.
  2. The server and desktop teams: They like performance and unlimited network bandwidth provided by magic stuff, dark spirits, and friendly gnomes.  These teams also like a positive end user experience.  Their platforms, which include hardware, OS, and drivers, must support jumbo frames.  The effort required to configure jumbo frames increases with the number of different hardware, OS, and driver combinations.  Any systems which don’t support network infrastructure requirements will be a showstopper.  Server and desktop network endpoints benefit from jumbo frames in much the same way network infrastructure does: efficiency and less overhead, which can lead to slightly measurable amounts of performance improvement.  The performance gains more often than not won’t be noticed by end users except for processes that historically take a long time to complete.  These teams will generally follow infrastructure best practices as instructed by the network team.  In some cases, these teams will embark on an initiative which recommends or requires a change in network design (NIC teaming, jumbo frames, etc.).
  3. The budget owner:  This can be a project sponsor, departmental manager, CIO, or CEO.  They control the budget and thus spending.  Considerable spend thresholds require business justification.  This is where the benefit needs to justify the cost.  They are removed from most of the technical persuasion.  Financial impact is what matters.  Decisions should align with current and future architectural strategies to minimize costly rip and replace.
  4. The end users:  Not surprisingly, they are interested in application uptime, stability, and performance.  They couldn’t care less about the underlying technology except for how it impacts them.  Reduction in performance or slowness is highly visible.  Subtle increases in performance are rarely noticed.  End user perception is reality.
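
The network team’s efficiency argument in point 1 can be put to rough numbers.  Here’s a back-of-the-envelope sketch using a simplified model (IPv4 and TCP headers with no options, standard Ethernet framing overhead); it illustrates why the win is about per-frame processing rather than raw throughput:

```shell
# Compare payload efficiency and frame counts at standard vs jumbo MTU.
# 40 = IPv4 (20) + TCP (20) headers per frame; 38 = Ethernet preamble (8)
# + header (14) + FCS (4) + inter-frame gap (12) of wire overhead per frame.
for mtu in 1500 9000; do
    payload=$((mtu - 40))
    awk -v p="$payload" -v m="$mtu" 'BEGIN {
        printf "MTU %d: %.1f%% payload efficiency, %d frames per GiB\n",
               m, 100 * p / (m + 38), int((2^30 + p - 1) / p)
    }'
done
```

The payload efficiency gain is modest (roughly 95% versus 99%), but the number of frames, and therefore the per-frame work on every switch, router, and endpoint in the path, drops by roughly a factor of six.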

The decision to introduce jumbo frames should be carefully thought out, and there should be a compelling reason, use case, or business justification which drives the decision.  Because of the end-to-end requirements, implementing jumbo frames can bring additional complexity and cost to an existing network infrastructure.  Possibly the single best one-size-fits-all reason for a jumbo frames design is a situation where jumbo frames is already a standard in the existing network infrastructure.  In this situation, jumbo frames becomes a design constraint or requirement.  The evangelistic point to be made is that VMware vSphere supports jumbo frames across the board.  Short of the previous use case, jumbo frames is a design decision where I think it’s important to weigh cost and benefit.  I can’t give you the cost component as it is going to vary quite a bit from environment to environment depending on the existing network design.  This writing speaks more to the benefit component.  Liberal estimates claim up to a 30% performance increase when integrating jumbo frames with IP storage.  The numbers I came up with in lab testing are nowhere close to that.  In fact, you’ll see a few results where IO performance with jumbo frames actually decreased slightly.  Not only do I compare IO with and without jumbo frames, I’m also able to compare two storage protocols with and without jumbo frames, which could prove to be an interesting sidebar discussion.
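
As a point of reference on the cost side, the vSphere half of the configuration is a small amount of work by itself.  A sketch of the ESX/ESXi 4.x console commands (the vSwitch name, portgroup name, and addresses are placeholders; note an existing vmkernel NIC can’t be modified in place and must be recreated with the larger MTU):

```shell
# Raise the MTU on the vSwitch carrying the IP storage traffic
esxcfg-vswitch -m 9000 vSwitch1

# Create a vmkernel NIC with a 9000 byte MTU on the IP storage portgroup
esxcfg-vmknic -a -i 10.0.0.10 -n 255.255.255.0 -m 9000 IPStorage

# Verify the MTU took effect
esxcfg-vswitch -l
esxcfg-vmknic -l
```

The real cost lives in the physical network: every switch port, router interface, and storage interface in the path must be configured for jumbo frames as well.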

I’ve come across many opinions regarding jumbo frames.  Now that I’ve got a managed switch in the lab which supports jumbo frames and VLANs, I wanted to see some real numbers.  Although this writing is primarily regarding jumbo frames, by way of the testing regimen, it is in some ways a second edition to a post I created one year ago where I compared IO performance of the EMC Celerra NS-120 among its various protocols. So without further ado, let’s get onto the testing.


Lab test script:

To maintain as much consistency and integrity as possible, the following test criteria were followed:

  1. One Windows Server 2003 VM with IOMETER was used to drive IO tests.
  2. A standardized IOMETER script was leveraged from the VMTN Storage Performance Thread which is a collaboration of storage performance results on VMware virtual infrastructure provided by VMTN Community members around the world.  The thread starts here, was locked due to length, and continues on in a new thread here.  For those unfamiliar with the IOMETER script, it basically goes like this: each run consists of a two minute ramp up followed by five minutes of disk IO pounding.  Four different IO patterns are tested independently.
  3. Two runs of each test were performed to validate consistent results.  A third run was performed if the first two were not consistent.
  4. One ESXi 4.1 host with a single IOMETER VM was used to drive IO tests.
  5. For the mtu1500 tests, IO tests were isolated to one vSwitch, one vmkernel portgroup, one vmnic, one pNIC (Intel NC360T PCI Express), one Ethernet cable, and one switch port on the host side.
  6. For the mtu1500 tests, IO tests were isolated to one cge port, one datamover, one Ethernet cable, and one switch port on the Celerra side.
  7. For the mtu9000 tests, IO tests were isolated to the same vSwitch, a second vmkernel portgroup configured for mtu9000, the same vmnic, the same pNIC (Intel NC360T PCI Express), the same Ethernet cable, and the same switch port on the host side.
  8. For the mtu9000 tests, IO tests were isolated to a second cge port configured for mtu9000, the same datamover, a second Ethernet cable, and a second switch port on the Celerra side.
  9. Layer 3 routes between host and storage were removed to lessen network burden and to isolate storage traffic to the correct interfaces.
  10. 802.1Q VLANs were used to isolate traffic and to categorize standard traffic versus jumbo frame traffic.
  11. RESXTOP was used to validate storage traffic was going through the correct vmknic.
  12. Microsoft Network Monitor and Wireshark were used to validate frame lengths during testing.
  13. Activities known to introduce large volumes of network or disk activity were suspended such as backup jobs.
  14. Dedupe was suspended on all Celerra file systems to eliminate datamover contention.
  15. All storage tests were performed on thin provisioned virtual disks and datastores.
  16. The same group of 15 spindles were used for all NFS and iSCSI tests.
  17. The uncached write mechanism was enabled on the NFS file system for all NFS tests.  You can read more about that in the EMC best practices document VMware ESX Using EMC Celerra Storage Systems.
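
On the validation side (items 11 and 12), an end-to-end jumbo frame check can also be run from the ESXi console with vmkping by sending an oversized ICMP packet with the don’t-fragment bit set.  A sketch (the storage IP is a placeholder; 8972 bytes of payload plus 28 bytes of ICMP and IP headers exactly fills a 9000 byte MTU):

```shell
# Success means jumbo frames pass end to end on the storage path;
# a 1500 byte MTU anywhere in the path drops the unfragmentable packet
vmkping -d -s 8972 192.168.1.50
```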

Lab test hardware:

SERVER TYPE: Windows Server 2003 R2 VM on ESXi 4.1
CPU TYPE / NUMBER: 1 vCPU / 512MB RAM (thin provisioned)
HOST TYPE: HP DL385 G2, 24GB RAM; 2x QC AMD Opteron 2356 Barcelona
STORAGE TYPE / DISK NUMBER / RAID LEVEL: EMC Celerra NS-120 / 15x 146GB 15K / 3x RAID5 5×146
SAN TYPE: / HBAs: NFS / swiSCSI / 1Gb datamover ports (sorry, no FCoE)
OTHER: 3Com SuperStack 3 3870 48x1Gb Ethernet switch


Lab test results:

NFS test results.  How much better is NFS performance with jumbo frames by IO workload type?  The best result seen here is about a 7% performance increase by using jumbo frames, however, 100% read is a rather unrealistic representation of a virtual machine workload.  For NFS, I’ll sum it up as a 0-3% IOPS performance improvement by using jumbo frames.

SnagIt Capture

SnagIt Capture

iSCSI test results.  How much better is iSCSI performance with jumbo frames by IO workload type?  Here we see that iSCSI doesn’t benefit from the move to jumbo frames as much as NFS.  In two workload pattern types, performance actually decreased slightly.  Discounting the unrealistic 100% read workload as I did above, we’re left with a 1% IOPS performance gain at best by using jumbo frames with iSCSI.

SnagIt Capture

SnagIt Capture

NFS vs iSCSI test results.  Taking the best results from each protocol type, how do the protocol types compare by IO workload type?  75% of the best results came from using jumbo frames.  The better performing protocol is a 50/50 split depending on the workload pattern.  One interesting observation to be made in this comparison is how much better one protocol performs over the other.  I’ve heard storage vendors state that the IP protocol debate is a snoozer, that they perform roughly the same.  I’ll grant that in two of the workload types below, but in the other two, iSCSI pulls a significant performance lead over NFS, particularly in the Max Throughput-50%Read workload where iSCSI blows NFS away.  That said, I’m not outright recommending iSCSI over NFS.  If you’re going to take anything away from these comparisons, it should be “it depends”.  In this case, it depends on the workload pattern, among a handful of other intrinsic variables.  I really like the flexibility in IP based storage and I think it’s hard to go wrong with either NFS or iSCSI.

SnagIt Capture

SnagIt Capture

vMotion test results.  Up until this point, I’ve looked at the impact of jumbo frames on IP based storage with VMware vSphere.  For curiosity’s sake, I wanted to address the question “How much better is vMotion performance with jumbo frames enabled?”  vMotion utilizes a VMkernel port on ESXi just as IP storage does, so the groundwork had already been established, making this a quick test.  I followed roughly the same lab test script outlined above so that the most consistent and reliable results could be produced.  This test wasn’t rocket science.  I simply grabbed a few different VM workload types (Windows, Linux) with varying sizes of RAM allocated to them (2GB, 3GB, 4GB).  I then performed three batches of vMotions of two runs each on non jumbo frames (mtu1500) and jumbo frames (mtu9000).  Results varied.  The first two batches showed that jumbo frames provided a 7-15% reduction in elapsed vMotion time.  But then the third and final batch contrasted previous results with data revealing a slight decrease in vMotion efficiency with jumbo frames.  I think there are more variables at play here and this may be a case where more data sampling is needed to form any kind of reliable conclusion.  But if you want to go by these numbers, vMotion is quicker on jumbo frames more often than not.

SnagIt Capture

SnagIt Capture

The bottom line:

So what is the bottom line on jumbo frames, at least today?  First of all, my disclaimer:  My tests were performed on an older 3Com network switch.  Mileage may vary on newer or different network infrastructure.  Unfortunately I did not have access to a 10Gb lab network to perform this same testing.  However, I believe my findings are consistent with the majority of what I’ve gathered from the various credible sources.  I’m not sold on jumbo frames as a provider of significant performance gains.  I wouldn’t break my back implementing the technology without an indisputable business justification.  If you want to please the network team and abide by the strategy of an existing jumbo frames enabled network infrastructure, then use jumbo frames with confidence.  If you want to be doing everything you possibly can to boost performance from your IP based storage network, use jumbo frames.  If you’re betting the business on IP based storage, use jumbo frames.  If you need a piece of plausible deniability when IP storage performance hits the fan, use jumbo frames.  If you’re looking for the IP based storage performance promised land, jumbo frames doesn’t get you there by itself.  If you come across a source telling you otherwise, that jumbo frames is the key or sole ingredient to the Utopia of incomprehensible speeds, challenge the source.  Ask to see some real data.  If you’re in need of a considerable performance boost for your IP based storage, look beyond jumbo frames.  Look at optimizing, balancing, or upgrading your back end disk array.  Look at 10Gb.  Look at fibre channel.  Each of these alternatives is likely to get you better overall performance gains than jumbo frames alone.  And of course, consult with your vendor.

I’m a VCAP4-DCD and VCDX4

January 11th, 2011 by jason 17 comments »

Bless me readers for I have sinned.  It has been nearly five weeks since my last blog entry.  Since then I’ve acquired numerous electronic distractions in the house and took a little vacation from work and virtualization.  I also randomly and unprovoked received a Microsoft Hyper-V sticker in the mail from Stephen Foskett and I have been thinking about revenge almost daily.  (no, not really)

Yes it has been a while and I did take a break from work and all things virtual for some family time and “me time” over the holidays.  So let me get the needfully ubiquitous out of the way by saying Happy New Year to all!  I hope 2011 brings continued health to your family, joy into your life, success into your career, agility into your VMware virtualized datacenter/private cloud, and uncontested Secure Multi-Tenancy into your public cloud.

For me personally, success starts early in 2011.  As you probably guessed by the title, I got some great news from VMware tonight in that.. well you can read it verbatim:

Congratulations on passing the VMware Certified Advanced Professional on vSphere4 – Datacenter Design exam!

I’m now a VCAP4-DCD.  A few weeks later I was notified by VMware that I had been assigned VCAP4-DCD #35.

Passing this exam is great on a few levels.  I now have the VCAP4-DCD certification to go along with the VCAP4-DCA credentials I picked up a few months ago.  In addition, the VCAP4-DCD pass also upgrades my VCDX3 certification to VCDX4.  I haven’t received the official word on that from VMware yet, but I’ve met the requirements, and notification, which I expect within the next few months, is more or less a formality.  I’m pretty happy to have these achievements, particularly early on.  With no other certifications currently in sight, I can continue forward working on various projects and initiatives.  If you thought I ran out of things to write about, that’s not the case.  I’ve got plenty in the queue.  One benefit I value in working with products in an enterprise environment, as opposed to strictly working on the education/instructor side of the fence, is that there is no shortage of experience gained and lessons learned while working in the trenches.  I work in a blend of VMware vSphere design and operations, which I think is an exceptional example of Yin and Yang because they perpetually strengthen each other through experience.

By the way.. on the electronic distractions.. my family (this was a collaborative decision) picked up our first gaming console.  We got the PS3.  My PlayStation Network ID is VCDX034.  Hit me up if you’re interested in a game of NHL 11 or Madden 11.  I’m not very good but I’ll take a beating and I’m a good sport about it.  VCAP4-DCD achievement unlocked. VCDX4 achievement unlocked.  See what I did there? 8-)

Update 8/18/11:  No VCDX4 certificate or welcome kit received yet.

IBM x3850 M2 shows 8 sockets on ESXi 4.1

December 9th, 2010 by jason 1 comment »

Working with an IBM x3850 M2, I noticed VMware ESXi 4.1 was reporting 8 processor sockets when I know this model has only 4 sockets.  It was easily noticeable as I ran out of ESX host licensing prematurely.  The problem is also reported with the IBM x3950 M2 in this thread.

SnagIt Capture

SnagIt Capture

Here’s the fix:  Reboot the host and flip a setting in the BIOS.

POST -> F1 -> Advanced Setup -> CPU Options -> Clustering Technology. Toggle the Clustering Technology configuration from Logical Mode to Physical Mode.

After the above change is made, sanity is restored in that ESXi 4.1 will properly see 4 sockets and licenses will be consumed appropriately.

SnagIt Capture

SnagIt Capture

Memory Compression Video

December 9th, 2010 by jason 4 comments »

Vladan SEGET created a blog post on VMware ESX(i) 4.1 Memory Compression.  In his post, he linked to a fantastically simple vmwaretv video demonstration of memory compression in action compared to a hypervisor with no memory compression enabled.  For anyone looking for the tool used in the video to perform your own memory compression testing but cannot find it, it’s “around”.  Let me know and I might be able to help you find it.

I was going to update my memory compression blog post crediting Vladan and embedding the video, but sadly, I have no memory compression blog post yet!  So instead, I send you to Vladan’s ESX Virtualization blog using the link above.

Note to self: create a memory compression blog post.