Posts Tagged ‘ESXi’

StarWind and Cirrus Tech Partner to Deliver Cutting Edge Technologies to the Cloud Computing Market

August 12th, 2012

Press Release

StarWind Solutions Become Available Through a Leading Canadian Web Hosting Company

Burlington, MA – 6 August 2012 – StarWind Software Inc., an innovative provider of storage virtualization software and VM backup technology, announced today a new partnership agreement with Cirrus Tech Ltd., a Canadian web hosting company specializing in VPS, VM and cloud hosting services. The companies will collaborate to deliver best-of-breed cloud services that help customers accelerate their businesses.

Under the agreement, Cirrus Tech extends its portfolio with StarWind storage virtualization software and will offer it to customers as a dedicated storage platform that delivers a highly available, high-performance, scalable storage infrastructure capable of supporting heterogeneous server environments; as cloud storage for private clouds; and as a robust solution for building Disaster Recovery (DR) plans.

StarWind SAN solutions deliver a wide variety of enterprise-class features, such as High Availability (HA), Synchronous Data Mirroring, Remote Asynchronous Replication, CDP/Snapshots, Thin Provisioning, Global Deduplication, etc., that make the stored data highly available, simplify storage management, and ensure business continuity and disaster recovery.

“Companies are increasingly turning to cloud services to gain efficiencies and respond faster to today’s changing business requirements,” said Artem Berman, Chief Executive Officer of StarWind Software, Inc. “We are pleased to combine our forces with Cirrus Tech in order to deliver our customers a wide range of innovative cloud services that will help their transition to a flexible and efficient shared IT infrastructure.”

“Every business needs to consider what would happen in the event of a disaster,” shares Cirrus CEO Ehsan Mirdamadi. “By bringing StarWind’s SAN solution to our customers, we are helping them to ease the burden of disaster recovery planning by offering powerful and affordable storage options. You never want to think of the worst, but when it comes to your sensitive data and business critical web operations, it’s always better to be safe than sorry. Being safe just got that much easier for Cirrus customers.”

To find out more about Cirrus’ web hosting services visit http://www.cirrushosting.com or call 1.877.624.7787.
For more information about StarWind, visit www.starwindsoftware.com

About Cirrus Hosting
Cirrus Tech Ltd. has been a leader in providing affordable, dependable web and VPS hosting services in Canada since 1999. They have hosted and supported hundreds of thousands of websites and applications for Canadian businesses and clients around the world. As a BBB member with an A+ rating, Cirrus Tech is a top-notch Canadian web hosting company with professional support, rigorous reliability and easily upgradable VPS solutions that grow right alongside your business. Their Canadian data center is at 151 Front Street in Toronto.

About StarWind Software Inc.
StarWind Software is a global leader in storage management and SAN software for small and midsize companies. StarWind’s flagship product is SAN software that turns any industry-standard Windows Server into a fault-tolerant, fail-safe iSCSI SAN. StarWind iSCSI SAN is qualified for use with VMware, Hyper-V, XenServer and Linux and Unix environments. StarWind Software focuses on providing small and midsize companies with affordable, highly available storage technology that was previously available only in high-end storage hardware. Advanced enterprise-class features in StarWind include Automated HA Storage Node Failover and Failback (High Availability), Replication across a WAN, CDP and Snapshots, Thin Provisioning and Virtual Tape management.

Since 2003, StarWind has pioneered the iSCSI SAN software industry and is the solution of choice for over 30,000 customers worldwide in more than 100 countries, ranging from small and midsize companies to governments and Fortune 1000 companies.

For more information on StarWind Software Inc., visit: www.starwindsoftware.com

Storage: Starting Thin and Staying Thin with VAAI UNMAP

June 28th, 2012

For me, it’s hard to believe nearly a year has elapsed since vSphere 5 was announced on July 12th.  Among the many new features that shipped was a fourth VAAI primitive for block storage.  The primitive itself revolved around thin provisioning and was the sum of two components: UNMAP and STUN.  In this article I’m going to go through the UNMAP/Block Space Reclamation process in a lab environment and I’ll leave STUN for a later discussion.

Before I jump into the lab, I want to frame out a chronological timeline around the new primitive.  Although this 4th primitive was formally launched with vSphere 5 and built into the corresponding platform code that shipped, a few months down the road VMware issued a recall on the UNMAP portion of the primitive due to a discovery made either in the field or in their lab environment.  With the UNMAP component recalled, the Thin Provisioning primitive as a whole (including the STUN component) was not supported by VMware.  Furthermore, storage vendors could not be certified for the Thin Provisioning VAAI primitive, although the features may have been functional if their respective arrays supported them.  A short while later, VMware released a patch which, once installed on the ESXi hosts, disabled the UNMAP functionality globally.  In March of this year, VMware released vSphere 5.0 Update 1.  With this release, VMware implemented the necessary code to resolve the performance issues related to UNMAP.  However, VMware did not re-enable the automatic UNMAP mechanism.  Instead, in the interim, VMware implemented a manual process for block space reclamation on a per datastore basis, regardless of the global UNMAP setting on the host.  I believe it is VMware’s intent to bring back “automatic” UNMAP long term but that is purely speculation.  This article will walk through the manual process of returning unused blocks to a storage array which supports both thin provisioning and the UNMAP feature.

I also want to point out some good information that already exists on UNMAP which introduces the feature and provides a good level of detail.

  • Duncan Epping wrote this piece about a year ago when the feature was launched.
  • Cormac Hogan wrote this article in March when vSphere 5.0 Update 1 was launched and the manual UNMAP process was re-introduced.
  • VMware KB 2014849 Using vmkfstools to reclaim VMFS deleted blocks on thin-provisioned LUNs

By this point, if you are still unsure of the value of UNMAP, it is simply this: keeping thin provisioned LUNs thin.  By doing so, raw storage is consumed and utilized in the most efficient manner, yielding cost savings and better ROI for the business.  Arrays which support thin provisioning have been shipping for years.  What hasn’t matured is just as important as thin provisioning itself: the ability to stay thin where possible.

I’m going to highlight this below in a working example, but basically once pages are allocated from a storage pool, they remain pinned to the volume they were originally allocated for, even after the data written to those pages has been deleted or moved.  Once the data is gone, the free space remains available to that particular LUN and the storage host which owns it – whether or not that free space will ever be needed again by that storage host.  Without UNMAP, the pages are never released back to the global storage pool where they could be allocated to some other LUN or storage host, whether virtual or physical.  Ideal use cases for UNMAP include transient data, Storage vMotion, SDRS, and data migration.

UNMAP functionality requires the collaboration of both operating system and storage vendors.  As an example, Dell Compellent Storage Center has supported the T10 UNMAP command going back to early versions of the 5.x Storage Center code, however there has been very little adoption on the OS platform side, which is responsible for issuing the UNMAP command to the storage array when data is deleted from a volume.  RHEL 6 supports it, vSphere 5.0 Update 1 now supports it, and Windows Server 2012 is slated to be the first Windows platform to support UNMAP.

UNMAP in the Lab

So in the lab I have a vSphere ESXi 5.0 Update 1 host attached to a Dell Compellent Storage Center SAN.  To demonstrate UNMAP, I’ll Storage vMotion a 500GB virtual machine from one 500GB LUN to another 500GB LUN.  As you can see below from the Datastore view in the vSphere Client, the 500GB VM is already occupying lun1 and an alarm is thrown due to lack of available capacity on the datastore:

Snagit Capture

Looking at the volume in Dell Compellent Storage Center, I can see that approximately 500GB of storage is being consumed from the storage page pool. To keep the numbers simple, I’ll ignore actual capacity consumed due to RAID overhead.

Snagit Capture

After the Storage vMotion

I’ve now performed a Storage vMotion of the 500GB VM from lun1 to lun2.  Again looking at the datastores from a vSphere Client perspective, I can see that lun2 is now completely consumed with data while lun1 is no longer occupied – it now has 500GB of capacity available.  This is where operating systems and storage arrays which do not support UNMAP fall short of keeping a volume thin provisioned.

Snagit Capture

Using the Dell Compellent vSphere Client plug-in, I can see that the 500GB of raw storage originally allocated for lun1 remains pinned with lun1 even though the LUN is empty!  I’m also occupying 500GB of additional storage for the virtual machine now residing on lun2.  The net here is that as a result of my Storage vMotion, I’m occupying nearly 1TB of storage capacity for a virtual machine that’s half the size.  If I continue to Storage vMotion this virtual machine to other LUNs, the problem is compounded and the available capacity in the storage pool continues to drain, effectively raising the high watermark of consumed storage.  To add insult to injury, this will more than likely be stranded Tier 1 storage – backed by the most expensive spindles in the array.

Snagit Capture

Performing the Manual UNMAP

Using a PuTTY connection to the ESXi host, I’ll start by identifying the naa ID of my datastore using esxcli storage core device list | more

Snagit Capture
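The filtering step can be sketched in shell.  This is a hedged sketch against sample output (the naa ID below is made up, not captured from a real array); on a live host you would pipe the real esxcli command instead of the sample text.

```shell
# Sample of what `esxcli storage core device list` prints for one device.
# The naa ID here is illustrative only.
sample_device_list='naa.6000d3100000140000000000000000a1
   Display Name: COMPELNT Fibre Channel Disk (naa.6000d3100000140000000000000000a1)
   Is Local: false'

# On a live ESXi 5.x host, replace the echo with the real command:
#   esxcli storage core device list | more
naa_id=$(echo "$sample_device_list" | sed -n 's/^\(naa\.[0-9a-f]*\)$/\1/p' | head -n 1)
echo "$naa_id"
```

The device list can run to several screens on a host with many LUNs, so filtering for lines beginning with `naa.` gets you to the identifiers quickly.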

Following the KB article above, I’ll make sure my datastore supports the UNMAP primitive using esxcli storage core device vaai status get -d <naa ID>.  The output shows UNMAP is supported by Dell Compellent Storage Center, in addition to the other three core VAAI primitives (Atomic Test and Set, Copy Offload, and Block Zeroing).

Snagit Capture
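The same check can be scripted.  The sample output below is illustrative rather than captured from my host, but the field names match what esxcli reports; UNMAP support shows up under "Delete Status".

```shell
# Illustrative output of:
#   esxcli storage core device vaai status get -d <naa ID>
vaai_status='   ATS Status: supported
   Clone Status: supported
   Zero Status: supported
   Delete Status: supported'

# "Delete Status" is the UNMAP primitive; the other three map to Atomic
# Test and Set (ATS), Copy Offload (Clone), and Block Zeroing (Zero).
delete_status=$(echo "$vaai_status" | awk -F': ' '/Delete Status/ {print $2}')
echo "$delete_status"
```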

I’ll now change to the directory of the datastore and perform the UNMAP using vmkfstools -y 100.  It’s worth pointing out here that using a value of 100, although apparently supported, ultimately fails.  I reran the command using a value of 99, which successfully unmapped 500GB in about 3 minutes.

Snagit Capture
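For reference, the reclaim boils down to two commands run from the ESXi shell.  They are shown in a here-doc rather than executed, since vmkfstools only exists on the host; "lun1" is the datastore name from this walkthrough.

```shell
# The manual UNMAP procedure, as run on an ESXi 5.0 U1 host (shown, not executed).
reclaim_steps=$(cat <<'EOF'
cd /vmfs/volumes/lun1
vmkfstools -y 99
EOF
)
echo "$reclaim_steps"
```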

Also important to note: VMware recommends the reclaim be run after hours or during a maintenance window, with a maximum recommended reclaim percentage of 60%.  This value is pointed out by Duncan in the article I linked above and it’s also noted when providing a reclaim value outside of the acceptable parameters of 0-100.  Here’s the reasoning behind the value: when the manual UNMAP process is run, it balloons up a temporary hidden file at the root of the datastore which the UNMAP is being run against.  You won’t see this balloon file with the vSphere Client’s Datastore Browser as it is hidden.  You can catch it quickly while UNMAP is running by issuing the ls -l -a command against the datastore directory.  The file will be named .vmfsBalloon along with a generated suffix.  This file will quickly grow to the size of data being unmapped (this is actually noted when the UNMAP command is run and evident in the screenshot above).  Once the UNMAP is completed, the .vmfsBalloon file is removed.  For a more detailed explanation of the .vmfsBalloon file, check out this blog article.

Snagit Capture

The bottom line is that the datastore needs as much free capacity as what is being unmapped.  VMware’s recommended value of 60% reclaim is actually a broad assumption that the datastore will have at least 60% capacity available at the time UNMAP is being run.  For obvious reasons, we don’t want to run the datastore out of capacity with the .vmfsBalloon file, especially if there are still VMs running on it.  My recommendation if you are unsure or simply bad at math: start with a smaller percentage of block reclaim initially and perform multiple iterations of UNMAP safely until all unused blocks are returned to the storage pool.
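To put rough numbers to that, the balloon file grows to approximately the reclaim percentage of the datastore's free space, so you can sanity check the headroom before running the command.  The figures below are illustrative, not taken from the lab:

```shell
# Example: 300GB free on the datastore, reclaiming at VMware's recommended
# ceiling of 60%. The balloon file temporarily consumes ~60% of the free
# space, leaving the remainder as headroom for running VMs.
free_gb=300
reclaim_pct=60
balloon_gb=$(( free_gb * reclaim_pct / 100 ))
headroom_gb=$(( free_gb - balloon_gb ))
echo "balloon=${balloon_gb}GB headroom=${headroom_gb}GB"
```

If the headroom figure looks uncomfortably small for the VMs still running on the datastore, drop the percentage and run multiple passes as suggested above.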

To wrap up this procedure, after the UNMAP step has been run with a value of 99%, I can now see from Storage Center that nearly all pages have been returned to the page pool and 500gbvol1 is only consuming a small amount of raw storage comparatively – basically the 1% I wasn’t able to UNMAP using the value of 99% earlier.  If I so chose, I could run the UNMAP process again with a value of 99% and that should return just about all of the 2.74GB still being consumed, minus the space consumed for VMFS-5 formatting.

Snagit Capture

The last thing I want to emphasize is that today, UNMAP works at the VMFS datastore layer and isn’t designed to work inside the encapsulated virtual machine.  In other words, if I delete a file inside a guest operating system running on top of the vSphere hypervisor with attached block storage, that space can’t be liberated with UNMAP.  As a vSphere and storage enthusiast, that would be next on my wish list and might be considered by others as the next logical step in storage virtualization.  And although UNMAP doesn’t show up in Windows platforms until Windows Server 2012, Dell Compellent has developed an agent which accomplishes the free space recovery on earlier versions of Windows in combination with a physical raw device mapping (RDM).

Update 7/2/12: VMware Labs released its latest fling – Guest Reclaim.

From labs.vmware.com:

Guest Reclaim reclaims dead space from NTFS volumes hosted on a thin provisioned SCSI disk. The tool can also reclaim space from full disks and partitions, thereby wiping off the file systems on it. As the tool deals with active data, please take all precautionary measures understanding the SCSI UNMAP framework and backing up important data.

Features

  • Reclaim space from Simple FAT/NTFS volumes
  • Works on WindowsXP to Windows7
  • Can reclaim space from flat partitions and flat disks
  • Can work in virtual as well as physical machines

What is a Thin Provisioned (TP) SCSI disk? In a thin provisioned LUN/Disk, physical storage space is allocated on demand. That is, the storage system allocates space as and when a client (for example a file system or database) writes data to the storage medium. One primary goal of thin provisioning is to allow for storage overcommit. A thin provisioned disk can be a virtual disk, or a physical LUN/disk exposed from a storage array that supports TP. Virtual disks created as thin disks are exposed as TP disks, starting with virtual Hardware Version 9. For more information on this please refer to http://en.wikipedia.org/wiki/Thin_provisioning.

What is Dead Space Reclamation? Deleting files frees up space on the file system volume. This freed space sticks with the LUN/Disk until it is released and reclaimed by the underlying storage layer. Free space reclamation allows the lower level storage layer (for example a storage array, or any hypervisor) to repurpose the freed space from one client for some other storage allocation request. For example:

  • A storage array that supports thin provisioning can repurpose the reclaimed space to satisfy allocation requests for some other thin provisioned LUN within the same array.
  • A hypervisor file system can repurpose the reclaimed space from one virtual disk for satisfying allocation needs of some other virtual disk within the same data store.

GuestReclaim allows transparent reclamation of dead space from NTFS volumes. For more information and detailed instructions, view the Guest Reclaim ReadMe (pdf)

Update 5/14/13: Excerpt from Cormac Hogan’s vSphere storage blog: “We’ve recently been made aware of a limitation on our UNMAP mechanism in ESXi 5.0 & 5.1. It would appear that if you attempt to reclaim more than 2TB of dead space in a single operation, the UNMAP primitive is not handling this very well.” Read more about it here: Heads Up! UNMAP considerations when reclaiming more than 2TBs

Update 9/13/13: vSphere 5.5 UNMAP Deep Dive

Using vim-cmd To Power On Virtual Machines

June 21st, 2012

I’ve been pretty lucky in that since retiring the UPS equipment in the lab, the flow of electricity to the lab has been both clean and consistent.  We get some nasty weather and high winds in this area but I’ll bet there hasn’t been an electrical outage in over two years.  Well, early Tuesday morning we had a terrible storm with hail and winds blowing harder than I’ve ever seen.  I ended up losing some soffits, window screens, and a two-stall garage door.  A lot of mature trees were also lost in the surrounding area.  I was pretty sure we’d lose electricity and the lab would go down hard – and it did.

If you’re familiar with my lab, you might know that it’s 100% virtualized.  Every infrastructure service, including DHCP, DNS, and Active Directory, resides in a virtual machine.  This is great when the environment is stable, but recovering this type of environment from a complete outage can be a little hairy.  After bringing the network, storage, and ESXi hosts online, I still have no DHCP on the network, which I’d need in order to open the vSphere Client and connect to an ESXi host.  What this means is that I typically bring up a few infrastructure VMs from the ESXi host TSM (Tech Support Mode) console.  No problem, I’ve done this many times in the past using vmware-cmd.

Snagit Capture

Well, on ESXi 5.0 Update 1, vmware-cmd no longer brings joy.  The command has apparently been deprecated and replaced by /usr/bin/vim-cmd.

Snagit Capture

Before I can start my infrastructure VMs using vim-cmd, I need to find their corresponding Vmid using vim-cmd vmsvc/getallvms (add |more at the end to pause at each page of a long list of registered virtual machines):

Snagit Capture
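If the list of registered VMs is long, the Vmid lookup can be scripted.  The sample line below is illustrative (made-up VM name and datastore); on a live host you would pipe the real `vim-cmd vmsvc/getallvms` output instead of the sample text.

```shell
# Illustrative getallvms output; dc01 stands in for an infrastructure VM
# such as a domain controller.
getallvms_sample='Vmid   Name   File                    Guest OS
77     dc01   [lun1] dc01/dc01.vmx    windows7Server64Guest'

# Match on the VM name (column 2) and print its Vmid (column 1).
vmid=$(echo "$getallvms_sample" | awk '$2 == "dc01" {print $1}')
echo "$vmid"
```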

Now that I have the Vmid for the infrastructure VM I want to power on, I can power it on using vim-cmd vmsvc/power.on 77.  At this point I’ll have DHCP and I can use the vSphere Client on my workstation to power up the remaining virtual machines in order.  Or, I can continue using vim-cmd to power on virtual machines.

Snagit Capture
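Scripted, the power-on plus a state check looks like this.  The commands are shown in a here-doc rather than executed, since vim-cmd only exists on an ESXi host; 77 is the Vmid from above, and power.getstate is a companion subcommand to verify the result.

```shell
# Power on by Vmid, then confirm the power state (shown, not executed).
poweron_steps=$(cat <<'EOF'
vim-cmd vmsvc/power.on 77
vim-cmd vmsvc/power.getstate 77
EOF
)
echo "$poweron_steps"
```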

As you can see from the output below, there is much more that vim-cmd can accomplish within the virtual machine vmsvc context:

Snagit Capture
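A few of the other vmsvc subcommands worth knowing follow; run `vim-cmd vmsvc` with no arguments on your own host for the authoritative list, since availability may vary by build.  77 is the example Vmid from above.

```shell
# Handy vmsvc operations beyond power.on (shown, not executed).
other_cmds=$(cat <<'EOF'
vim-cmd vmsvc/get.summary 77
vim-cmd vmsvc/power.off 77
vim-cmd vmsvc/snapshot.get 77
vim-cmd vmsvc/unregister 77
EOF
)
echo "$other_cmds"
```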

Take a quick look at this in your lab. Command line management is popular on the VCAP-DCA exams. Knowing this could prove useful in the exam room or the datacenter the next time you experience an outage.

Invitation to Dell/Sanity Virtualization Seminar

May 22nd, 2012

I know this is pretty short notice but I wanted to make local readers aware of a lunch event taking place tomorrow between 11:00am and 1:30pm.  Dell and Sanity Solutions will be discussing storage technologies for your vSphere virtualized datacenter and private, public, or hybrid cloud.  I’ll be on hand as well, talking about some of the key integration points between vSphere and Dell Compellent Storage Center.  You can find full details in the brochure below.  Click on it or this text to get yourself registered and we’ll hope to see you tomorrow.

Snagit Capture

AppAssure Webinar – The Top 3 Backup Technologies for 2012

April 24th, 2012

Webinar Announcement:

What: The Top 3 Backup Technologies for 2012

When: Wednesday, April 25, 12:00 PM CST

Where: http://go.appassure.com/webinar-top-3-backup-technologies-042512-gmt

Details: FEATURED SPEAKER: Joseph Hand, Sr. Director of Product Strategy, AppAssure Software.  Learn about the 3 Backup & Recovery technologies you need for 2012!  Instant Restore with near-zero RTO, Backup and Recovery auto-verification, and Cross-platform Data Recovery – P2V, V2V, V2P.

For more on this and other AppAssure webinars, please visit http://www.appassure.com/live-webinars/

vSphere 5.0 Update 1 and Related Product Launches

March 16th, 2012

VMware has unveiled a point release update to several of their products tied to the vSphere 5 virtual cloud datacenter platform plus a few new product launches.

vCenter 5.0 Update 1 – Added support for new guest operating systems such as Windows 8, Ubuntu, and SLES 11 SP2, the usual resolved issues and bug fixes, plus some updates around vRAM licensing limits.  One other notable – no compatibility at this time with vSphere Data Recovery (vDR) 2.0, according to the compatibility matrix.

ESXi 5.0 Update 1 – Added support for new AMD and Intel processors, Mac OS X Server Lion, updated chipset drivers, resolved issues and bug fixes.  One interesting point to be made here is that according to the compatibility matrix, vCenter 5.0 supports ESXi 5.0 Update 1.  I’m going to stick with the traditional route of always upgrading vCenter before upgrading hosts as a best practices habit until something comes along to challenge that logic.

vCloud Director 1.5.1 – Added support for vSphere 5.0 Update 1 and vShield 5.0.1, plus RHEL 5 Update 7 as a supported server cell platform.  Enhancements were made around firewall rules, AMQP system notifications, log collection, chargeback retention, resolved issues, and added support for AES-256 encryption on Site-to-Site VPN tunnels (unfortunately no vSphere 5.0 Update 1 <-> vCloud Connector 1.5 support).  Oh yes, sometime over the past few months, VMware Marketing has quietly changed the acronym for vCloud Director from vCD to VCD.  We’ll just call that a new feature for 1.5.1 going forward.  I <3 the Marketing team.

Site Recovery Manager 5.0.1 – Added support for vSphere 5.0 Update 1 plus a “Forced Failover” feature which allows VM recovery in cases where storage arrays fail at the protected site – failures which, in the past, led to unmanageable VMs that could not be shut down, powered off, or unregistered.  Added IP customization for some Ubuntu platforms.  Many bug fixes, oh yes.  VMware also brought back an advanced feature which hasn’t been seen since SRM 4.1: a configurable option, storageProvider.hostRescanCnt, allowing repeated host scans during testing and recovery.  This option was removed from SRM 5.0 but has been restored in the Advanced Settings menu in SRM 5.0.1 and can be particularly useful in troubleshooting a failed Recovery Plan.  Right-click a site in the Sites view, select Advanced Settings, then select storageProvider.  See KB 1008283.  Storage arrays certified on SRM 5.0 (e.g. Dell Compellent Storage Center) are automatically certified on SRM 5.0.1.

View 5.0.1 – Added support for vSphere 5.0 Update 1, a new Connection Server, Agent, and Clients, and fixed known issues.  Ahh.. let’s go back to that new clients bit.  A new bundled Mac OS X client with support for PCoIP!  I don’t have a Mac so those who would admit to calling me a friend will have to let me know how sharp that v1.4 Mac client is.  As mentioned in earlier release notes, Ubuntu got plenty of love this week, including a new View PCoIP version 1.4 client for Ubuntu Linux.  I might just have to deploy an Ubuntu desktop somewhere to test this client.  But wait, there’s more.  New releases of the View client for Android and iPad tablets.  The Android client adds fixes for Ice Cream Sandwich devices, security updates, and updates for the Kindle Fire (I need to get this installed on my wife’s Fire).  The updated iPad client improves both connection times and external display support, but for the most part Apple fans are flipping out simply over something shiny and new.  Lastly, VMware created a one-stop-shop web portal for all client downloads which can be fetched at http://www.vmware.com/go/viewclients/

vShield 5.0.1 – Again, added support for vSphere 5.0 Update 1, enhanced reporting and export options, new REST API calls, improved audit logs, simplified troubleshooting, improved vShield App policy management as well as HA enhancements, and enablement of Autodeploy through vShield VIB host modules downloadable from vShield Manager.

So… looking at the compatibility matrix with all of these new code drops, my lab upgrade order will look something like this:

1a. View 5.0 –> View 5.0.1

1b. vCD 1.5 –> VCD 1.5.1

1c. SRM 5.0 –> SRM 5.0.1

1d. vShield App/Edge/Endpoint 5.0 –> 5.0.1

1e. vDR 2.0 –> Go Fish

2. vSphere Client 5.0.1 (it’s really not an upgrade, installs parallel with other versions)

3. vCenter Server 5.0 –> vCenter Server 5.0 Update 1

4. Update Manager 5.0 –> Update Manager 5.0 Update 1

5. ESXi 5.0 –> ESXi 5.0 Update 1

There are a lot of versions in play here, which weaves somewhat of a tangled web of compatibility touch points to identify before diving head first into upgrades.  I think VMware has done a great job this time around with releasing products that are, for the most part, compatible with other currently shipping products, which provides more flexibility in tactical approach and timelines.  Add to that, some time ago they migrated the two-dimensional .PDF-based compatibility matrix into an online portal offering interactive input, making the set of results customized for the end user.  The only significant things missing in the vSphere 5.0U1 compatibility picture IMO are vCloud Connector, vDR, and – based on the results from the compatibility matrix portal – vCenter Operations (the output showed no compatibility with vSphere 5.x, which didn’t look right to me).  I’ve taken the liberty of creating a component compatibility visual roadmap including most of the popular and currently shipping products for vSphere 5.0 and above.  If you’ve got a significant amount of infrastructure to upgrade, this may help you get the upgrade order sorted out quickly.  One last thing – Lab Manager and ESX customers should pay attention to the Island of Misfit Toys.  In early 2013 the Lab Manager ride comes coasting to a stop.  Lab Manager and ESX customers should be formulating solid migration plans with an execution milestone coming soon.

Snagit Capture

Jobs

February 25th, 2012

I receive a lot of communication from recruiters, some of which I’m allowed to share, so I’ve decided to try something.  On the Jobs page, I’ll pass along virtualization and cloud centric opportunities – mostly US based but in some cases throughout the globe.  Only recruiter requests will be posted.  I won’t syndicate content easily found on the various job boards.  If you’re currently on the bench or looking for a new challenge, you may find it here.  Don’t tell them Jason sent you.  I receive no financial gain or benefit otherwise but I thought I could do something with these opportunities other than deleting them.  Best of luck in your search.

In case you missed the link, the Jobs page.