New in vSphere 5 is the VMFS-5 file system for block storage. VMware customers who upgraded from VMFS-2 to VMFS-3 will likely remember the shell game that had to be played in order to migrate VMs from VMFS-2 to VMFS-3. It worked, but it wasn’t the easiest process, particularly if spare storage wasn’t available to move VMs around.
VMware has drastically improved the VMFS upgrade process with vSphere 5. Not only can existing VMFS-3 datastores be upgraded to VMFS-5 in place, but the upgrade can be performed with running VMs on the storage being upgraded. Now you might be asking yourself a few questions:
- If ESXi 5.0 hosts can run VMs on VMFS-3 or VMFS-5 (there’s a flexible improvement right there), then why even bother upgrading to VMFS-5?
- Is there any technical difference or advantage between net new VMFS-5 datastores and upgraded VMFS-5 datastores that were once VMFS-3?
By now, you may be familiar with the new features VMFS-5 offers: a unified 1MB block size, 64TB datastores without the use of extents, improvements surrounding sub-block allocation (SBA), support for many more files on a datastore, and a new partition type, which is what enables datastores larger than 2TB. These new features should answer the first question of “VMFS-5: what’s in it for me?” But what about the second question of “Does it matter which migration path I take to get my datastores to VMFS-5?”
The tactical differences between the two approaches are subtle but could nonetheless be impactful depending on the environment. I’ve compiled information from the vSphere 5 beta documentation and VMware blogs, then categorized it into two bulleted lists to compare the similarities and contrast the differences.
Similarities between upgraded and newly created VMFS-5 datastores:
- Both upgraded VMFS-5 and newly created VMFS-5 support the new 64TB datastore limit. Obviously you’ll need an array that supports growing the existing datastores beyond their original size, which would have been 2TB-512B or less.
- Both upgraded VMFS-5 and newly created VMFS-5 support the new 64TB passthru (physical) RDM limit.
- The maximum size of a non-passthru (virtual) RDM on VMFS-5 is still 2TB-512 bytes.
- The maximum size of a file (i.e. a .VMDK virtual disk) on VMFS-5 is still 2TB-512 bytes.
- The VMFS-3 to VMFS-5 conversion is a one-way process. After you convert a datastore to VMFS-5, you cannot revert to VMFS-3 without creating a new VMFS-3 datastore (which, by the way, vSphere 5 still supports, along with the legacy 1, 2, 4, and 8MB block sizes).
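For what it’s worth, the in-place upgrade boils down to a single API call (UpgradeVmfs on the host’s storage system). Here’s a minimal pyVmomi sketch, assuming pyVmomi is installed; the vCenter hostname and credentials are placeholders, and the volume path format is my assumption, so verify it against the API reference for your build:

```python
# Minimal sketch: in-place VMFS-3 to VMFS-5 upgrade via the vSphere API.
# Hostname, credentials, and selection logic below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab convenience only; use valid certs in production
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

# Grab the first ESXi host in the inventory (adjust the selection for real use)
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]

for ds in host.datastore:
    info = ds.info
    if isinstance(info, vim.host.VmfsDatastoreInfo) and info.vmfs.majorVersion == 3:
        # One-way operation: once upgraded, the volume cannot revert to VMFS-3
        vmfs_path = "/vmfs/volumes/" + info.vmfs.uuid  # assumed path format
        host.configManager.storageSystem.UpgradeVmfs(vmfsPath=vmfs_path)
        print("Upgraded %s to VMFS-5" % ds.name)

Disconnect(si)
```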
Differences between upgraded and newly created VMFS-5 datastores:
- VMFS-5 upgraded from VMFS-3 continues to use the previous file block size, which may be larger than the unified 1MB file block size. Copy operations between datastores with different block sizes won’t be able to leverage VAAI. This is the primary reason I would recommend creating new VMFS-5 datastores and migrating virtual machines to them rather than performing in-place upgrades of VMFS-3 datastores.
- VMFS-5 upgraded from VMFS-3 continues to use 64KB sub-blocks rather than the new 8KB sub-blocks.
- VMFS-5 upgraded from VMFS-3 continues to have a file limit of 30,720 rather than the new file limit of > 100,000 for newly created VMFS-5.
- VMFS-5 upgraded from VMFS-3 continues to use the MBR (Master Boot Record) partition type; when the VMFS-5 volume is grown above 2TB, it automatically switches from MBR to GPT (GUID Partition Table) without impact to the running VMs.
- VMFS-5 upgraded from VMFS-3 continues to have its partition start at sector 128; newly created VMFS-5 partitions start at sector 2,048.
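A practical side effect of these differences: since a native VMFS-5 datastore always uses the unified 1MB block size, any VMFS-5 volume reporting a larger block size must have been upgraded in place. A minimal pyVmomi sketch to inventory this (connection details are placeholders as before):

```python
# Minimal sketch: report the VMFS version and block size of every datastore.
# A VMFS-5 volume with a block size above 1MB was necessarily upgraded from
# VMFS-3; a 1MB block size could be native VMFS-5 or an upgraded 1MB VMFS-3.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab convenience only
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)
for ds in view.view:
    info = ds.info
    if isinstance(info, vim.host.VmfsDatastoreInfo):
        v = info.vmfs
        origin = ("upgraded from VMFS-3"
                  if v.majorVersion == 5 and v.blockSizeMb > 1
                  else "native or ambiguous")
        print("%s: VMFS-%d, %dMB blocks (%s)" % (ds.name, v.majorVersion,
                                                 v.blockSizeMb, origin))
view.Destroy()
```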
Based on the information above, the best approach to migrating to VMFS-5 is to create net new VMFS-5 datastores, provided you have the extra storage space, can afford the number of Storage vMotions required, and have a VAAI-capable storage array holding existing datastores with 2, 4, or 8MB block sizes.
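The migration itself is then just a Storage vMotion per VM onto the new datastore. Here’s a minimal pyVmomi sketch of that, with hypothetical VM and datastore names (“vm01” and “vmfs5-ds01”) and the same placeholder connection details as above:

```python
# Minimal sketch: Storage vMotion a running VM onto a newly created VMFS-5 datastore.
# "vm01" and "vmfs5-ds01" are hypothetical names; substitute your own.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Return the first inventory object of the given type matching name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()

vm = find_by_name(vim.VirtualMachine, "vm01")
target = find_by_name(vim.Datastore, "vmfs5-ds01")

# A RelocateSpec with only a target datastore set performs a Storage vMotion
spec = vim.vm.RelocateSpec(datastore=target)
task = vm.RelocateVM_Task(spec=spec)
```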
For more information about vSphere 5 storage enhancements and VAAI, take a look at the following links:
- Upgrading VMFS datastores and SDRS by Frank Denneman
- Blocksize impact? by Duncan Epping
- vSphere 5.0 Storage Features Part 1 – VMFS-5 by Cormac Hogan, VMware (be sure to read parts 2 & 3 as well)
Just a correction: the PT RDM on VMFS-5 can be up to 64TB.
Can I Storage vMotion all the VMs off a VMFS-3 datastore to a VMFS-5 datastore, then recreate the existing VMFS-3 datastore as a new VMFS-5 datastore?
Yes, that’s the preferred method if your VMFS-3 volumes have block sizes larger than 1MB and you have a VAAI-capable array (or if you need the smaller SBA sizes or require a greater number of files on the datastore).
Jason, any idea what the technical limitations were behind the 2TB limit staying in place for virtual mode RDMs and VMDK files? This makes the increased size limits a much less significant improvement for us than it would have been if the individual disk size limit had also been increased.
@afidel I do not know (good question for VMware), but it does make physical RDMs look attractive for large disk footprint VMs, doesn’t it?
I dislike the fact that there is now an actual use case for RDMs outside of portability, clustering, etc. Physical mode RDMs carry a lot of caveats, but politically, the fact that you don’t have to use dynamic disks at the OS level to create larger volumes becomes a hairy one to navigate. I could typically manage NOT to allow servers that needed such storage capacity to be virtualized. Don’t get me wrong, I’m OK with the idea of having them virtual… I just want the storage to be virtualized too. 2TB+ .vmdks would be ideal.
Is the VMFS max size limit 64TB or 60TB? I see you mention 64TB on this post but on VMware’s storage blog they are saying “~60TB” here: http://blogs.vmware.com/vsphere/2011/07/new-vsphere-50-storage-features-part-1-vmfs-5.html
Per Cormac from VMware today on the VMTN Community Roundtable podcast, the supported limit is what the GUI supports from a volume creation standpoint. In the vSphere 5 RTM bits, we’re able to create a 64TB datastore. Today, @tphakala reported to me and @VMwareStorage (a VMware handle operated by Cormac): “65 TB LUN is rounded down to 64 TB, exact number of bytes reported by df is 70368744177664”. @tphakala provided a few screenshots as well:
http://t.co/rMYuTQp
http://t.co/IdFGJrK
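For what it’s worth, that df figure works out to exactly 64TB in binary terms:

```python
>>> 64 * 2**40  # 64TB expressed in bytes
70368744177664
```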
Hello, in my network I have vSphere 4.1 and I want to upgrade to 5. What happens if I upgrade VMFS-3 to VMFS-5 for my VMs in vSphere?
Based on the following info from this webpage, can vSphere 5 create a D: drive greater than 2TB on a 64-bit W2K8 VM?
- The maximum size of a non-passthru (virtual) RDM on VMFS-5 is still 2TB-512 bytes.
- The maximum size of a file (i.e. a .VMDK virtual disk) on VMFS-5 is still 2TB-512 bytes.
@jrm Only if you use a vSphere 5 physical RDM.