VMTN Storage Performance Thread and the EMC Celerra NS-120

January 23rd, 2010 by jason

The VMTN Storage Performance Thread is a collaboration of storage performance results on VMware virtual infrastructure provided by VMTN Community members around the world.  The thread starts here, was locked due to length, and continues in a new thread here.  There’s even a Google Spreadsheet version; however, activity in that data repository appears to have diminished long ago.  The spirit of the testing is outlined by thread creator and VMTN Virtuoso christianZ:

“My idea is to create an open thread with uniform tests whereby the results will be all inofficial and w/o any warranty. If anybody shouldn’t be agreed with some results then he can make own tests and presents his/her results too. I hope this way to classify the different systems and give a “neutral” performance comparison. Additionally I will mention that the performance [and cost] is one of many aspects to choose the right system.” 

Testing standards are defined by christianZ so that results from each submission are consistent and comparable.  A pre-defined template is used in conjunction with IOMETER to generate the disk I/O and capture the performance metrics.  The test lab environment and the results are then appended to the thread discussion linked above.  The performance metrics measured are:

  1. Average Response Time (in milliseconds, lower is better) – also known as latency; VMware identifies a potential problem threshold of 50ms in its Scalable Storage Performance whitepaper
  2. Average I/O per Second (number of I/Os, higher is better)
  3. Average MB per Second (in MB, higher is better)
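As an illustration of how the three metrics relate, here is a small sketch (a hypothetical helper, not part of the VMTN template) that reduces a run's raw per-I/O latency samples to the three reported numbers:

```python
# Hypothetical post-processing: reduce raw per-I/O latency samples from a
# fixed-duration run to the three metrics reported in the thread.
def summarize(latencies_ms, io_size_kb, duration_s):
    avg_response_ms = sum(latencies_ms) / len(latencies_ms)  # lower is better
    avg_iops = len(latencies_ms) / duration_s                # higher is better
    avg_mbps = avg_iops * io_size_kb / 1024                  # higher is better
    return avg_response_ms, avg_iops, avg_mbps

# Illustrative numbers only: 60,000 8KB I/Os completed in 30 seconds
resp, iops, mbps = summarize([4.0] * 60_000, io_size_kb=8, duration_s=30)
print(f"{resp:.2f} ms / {iops:,.0f} IOPS / {mbps:.2f} MB/s")
# 4.00 ms / 2,000 IOPS / 15.62 MB/s
```

Note how IOPS and MB/s are linked by the I/O size: a 100% read test with large sequential transfers maximizes MB/s, while the small random 8K tests are bounded by IOPS.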

Following are my results with the EMC Celerra NS-120 Unified Storage array

SERVER TYPE: Windows Server 2003 R2 VM ON ESXi 4.0 U1
CPU TYPE / NUMBER: VCPU / 1 / 1GB Ram (thin provisioned)
HOST TYPE: HP DL385 G2, 16GB RAM; 2x QC AMD Opteron 2356 Barcelona
STORAGE TYPE / DISK NUMBER / RAID LEVEL: EMC Celerra NS-120 / 15x 146GB 15K 4Gb FC / RAID 5
SAN TYPE / HBAs: Emulex dual port 4Gb Fibre Channel, HP StorageWorks 2Gb SAN switch
OTHER: Disk.SchedNumReqOutstanding and HBA queue depth set to 64 
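For reference, on a classic ESX 4.x service console those two settings could be applied roughly as follows (a sketch only; the Emulex module name and parameter shown are assumptions that depend on the installed driver version, and on ESXi the advanced option is set through the vSphere Client or vCLI instead):

```shell
# Raise the per-LUN outstanding request limit to 64
esxcfg-advcfg -s 64 /Disk/SchedNumReqOutstanding

# Raise the Emulex HBA LUN queue depth to 64 (assumed module name lpfc820
# for the vSphere 4 Emulex driver; a reboot is required to take effect)
esxcfg-module -s "lpfc_lun_queue_depth=64" lpfc820
```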

Fibre Channel SAN Fabric Test

Test Name | Avg. Response Time | Avg. I/O per Second | Avg. MB per Second
Max Throughput – 100% Read | 1.62 | 35,261.29 | 1,101.92
Real Life – 60% Rand / 65% Read | 16.71 | 2,805.43 | 21.92
Max Throughput – 50% Read | 5.93 | 10,028.25 | 313.38
Random 8K – 70% Read | 11.08 | 3,700.69 | 28.91
  
 
SERVER TYPE: Windows Server 2003 R2 VM ON ESXi 4.0 U1
CPU TYPE / NUMBER: VCPU / 1 / 1GB Ram (thin provisioned)
HOST TYPE: HP DL385 G2, 16GB RAM; 2x QC AMD Opteron 2356 Barcelona
STORAGE TYPE / DISK NUMBER / RAID LEVEL: EMC Celerra NS-120 / 15x 146GB 15K 4Gb FC / 3x RAID 5 5146
SAN TYPE / HBAs: swISCSI
OTHER: Shared NetGear 1Gb SoHo Ethernet switch

swISCSI Test

Test Name | Avg. Response Time | Avg. I/O per Second | Avg. MB per Second
Max Throughput – 100% Read | 17.52 | 3,426.00 | 107.06
Real Life – 60% Rand / 65% Read | 14.33 | 3,584.53 | 28.00
Max Throughput – 50% Read | 11.33 | 5,236.50 | 163.64
Random 8K – 70% Read | 15.25 | 3,335.68 | 22.06
  
 
SERVER TYPE: Windows Server 2003 R2 VM ON ESXi 4.0 U1
CPU TYPE / NUMBER: VCPU / 1 / 1GB Ram (thin provisioned)
HOST TYPE: HP DL385 G2, 16GB RAM; 2x QC AMD Opteron 2356 Barcelona
STORAGE TYPE / DISK NUMBER / RAID LEVEL: EMC Celerra NS-120 / 15x 146GB 15K 4Gb FC / 3x RAID 5 5146
SAN TYPE / HBAs: NFS
OTHER: Shared NetGear 1Gb SoHo Ethernet switch

NFS Test

Test Name | Avg. Response Time | Avg. I/O per Second | Avg. MB per Second
Max Throughput – 100% Read | 17.18 | 3,494.48 | 109.20
Real Life – 60% Rand / 65% Read | 121.85 | 480.81 | 3.76
Max Throughput – 50% Read | 12.77 | 4,718.29 | 147.45
Random 8K – 70% Read | 123.41 | 478.17 | 3.74

Please read further below for further NFS testing results after applying EMC Celerra best practices.

Fibre Channel Summary

Not surprisingly, Celerra over SAN fabric beats the pants off the shared storage solutions I’ve had in the lab previously (an HP MSA1000, and Openfiler 2.2 swISCSI before that) in all four IOMETER categories.  I was, however, pleasantly surprised to find that Celerra over fibre channel was one of the top performing configurations among a sea of HP EVA, Hitachi, NetApp, and EMC CX series frames.
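A quick back-of-envelope check helps explain that top-tier 100% read number (several commenters below dig into this): assuming roughly 100 MB/s of usable payload per 1Gb/s of FC line rate, the measured throughput far exceeds what a single 2Gb fabric link can carry, which points at heavy array caching. The payload figure is an approximation:

```python
# Approximate usable payload of the 2Gb FC fabric link (8b/10b encoding
# puts usable payload near 100 MB/s per 1Gb/s of line rate).
fabric_gbps = 2
usable_mbps = fabric_gbps * 100       # ~200 MB/s per link

measured_mbps = 1101.92               # Max Throughput - 100% Read, FC table above
ratio = measured_mbps / usable_mbps
print(f"measured throughput is {ratio:.1f}x the single-link ceiling")
```

Even crediting both ports of the dual-port HBA, the reported number is still well past the wire, so the reads must largely be served from cache.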

swISCSI Summary

Celerra over swISCSI was only slightly faster on the Max Throughput – 100% Read test than the Openfiler 2.2 swISCSI on HP ProLiant ML570 G2 hardware I had in the past. In the other three test categories, however, the Celerra left the Openfiler array in the dust.

NFS Summary

Moving on to Celerra over NFS, performance results were consistent with swISCSI in two test categories (Max Throughput-100%Read and Max Throughput-50%Read), but NFS performance numbers really dropped in the remaining two categories as compared to swISCSI (RealLife-60%Rand-65%Read and Random-8k-70%Read). 

What’s worth noting is that both the iSCSI and NFS datastores are backed by the same logical Disk Group and physical disks on the Celerra.  I did this purposely to compare the iSCSI and NFS protocols, with everything else being equal.  The differences in two out of the four categories are obvious.  The question came to mind:  Does the performance difference come from the Celerra, the VMkernel, or a combination of both?  Both iSCSI and NFS have evolved into viable protocols for production use in enterprise datacenters, therefore, I’m leaning AWAY from the theory that the performance degradation over NFS stems from the VMkernel. My initial conclusion here is that Celerra over NFS doesn’t perform as well with Random Read disk I/O patterns.  I welcome your comments and experience here.
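Using the swISCSI and NFS IOPS figures from the two tables above, the gap can be quantified (a throwaway comparison script, not part of the test template):

```python
# Avg. I/O per second from the swISCSI and NFS tables above.
iscsi = {"Max Throughput - 100% Read": 3426.00,
         "Real Life - 60% Rand / 65% Read": 3584.53,
         "Max Throughput - 50% Read": 5236.50,
         "Random 8K - 70% Read": 3335.68}
nfs   = {"Max Throughput - 100% Read": 3494.48,
         "Real Life - 60% Rand / 65% Read": 480.81,
         "Max Throughput - 50% Read": 4718.29,
         "Random 8K - 70% Read": 478.17}

for test, iops in iscsi.items():
    print(f"{test}: NFS delivers {nfs[test] / iops:.0%} of swISCSI IOPS")
```

The two random-heavy tests come out around 13-14% of the swISCSI result, while the sequential throughput tests sit near parity, which is exactly the pattern described above.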

Please read further below for further NFS testing results after applying EMC Celerra best practices.

CIFS

Although I did not test CIFS, I would like to take a look at its performance.  CIFS isn’t used directly by VMware virtual infrastructure, but it can be a handy protocol to leverage with NFS storage.  File management (i.e. .ISOs, templates, etc.) on ESX NFS volumes becomes easier and more mobile, and fewer tools are required, when the NFS volumes are presented as CIFS shares on a predominantly Windows client network.  Providing adequate security on the CIFS shares is a must to protect the ESX datastore on NFS.

If you’re curious about storage array configuration and its impact on performance, cost, and availability, take a look at this RAID triangle which VMTN Master meistermn posted in one of the performance threads:

The Celerra storage is currently carved out in the following way:

        0   1   2   3   4   5   6   7   8   9   10  11  12  13  14
DAE 2:  FC  FC  FC  FC  FC  FC  FC  FC  FC  FC  FC  FC  FC  FC  FC
DAE 1:  NAS NAS NAS NAS NAS Spr Spr (slots 7-14 empty)
DAE 0:  Vlt Vlt Vlt Vlt Vlt NAS NAS NAS NAS NAS NAS NAS NAS NAS NAS

FC = fibre channel Disk Group

NAS = iSCSI/NFS Disk Groups

Spr = Hot Spare

Vlt = Celerra Vault drives

I’m very pleased with the Celerra NS-120.  With the first batch of tests complete, I’m starting to formulate ideas on when, where, and how to use the various storage protocols with the Celerra.  My goal is not to eliminate use of the slowest performing protocol in the lab.  I want to work with each of them on a continual basis to test future design and integration with VMware virtual infrastructure.

Update 1/30/10: New NFS performance numbers.  I’ve begun working with EMC vSpecialists to troubleshoot the performance discrepancies between the swISCSI and NFS protocols.  A few key things have been identified, and a new set of performance metrics has been posted below after making some changes:

  1. The first thing that the EMC vSpecialists (and others in the blog post comments) asked about was whether or not the file system uncached write mechanism was enabled. The uncached write mechanism is designed to improve performance for applications with many connections to a large file, such as the virtual disk file of a virtual machine.  This mechanism can enhance access to such large files through the NFS protocol.  Out of the box, the uncached write mechanism is disabled on the Celerra. EMC recommends this feature be enabled with ESX(i).  The beauty here is that the feature can be toggled while the NFS file system is mounted on cluster hosts with VMs running on it.  VMware ESX Using EMC Celerra Storage Systems, pages 99-101, outlines this recommendation.
  2. Per VMware ESX Using EMC Celerra Storage Systems pages 73-74, NFS send and receive buffers should be divisible by 32k on the ESX(i) hosts.  Again, these advanced settings can be adjusted on the hosts while VMs are running and the settings do not require a reboot.  EMC recommended a value of 64 (presumably for both).
  3. Use the maximum amount of write cache possible for the Storage Processors (SPs). Factory defaults here:  598MB total read cache size, 32MB read cache size, 598MB total write cache size, 566MB write cache size.
  4. Specific to this test – verify that the ramp up time is 120 seconds.  Without the ramp up, the results can be skewed. The tests I originally performed used a 0 second ramp up time.
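
The buffer rule in item 2 is easy to check programmatically. My reading (an assumption, but consistent with the default of 264 being flagged as non-compliant later in this post) is that the setting is expressed in KB, so "divisible by 32k" means a multiple of 32:

```python
def valid_nfs_buffer(value_kb: int) -> bool:
    # True when an NFS.SendBufferSize / NFS.ReceiveBufferSize value (in KB)
    # lands on a 32KB boundary, per the Celerra techbook recommendation
    return value_kb > 0 and value_kb % 32 == 0

print(valid_nfs_buffer(264))  # False: the ESX(i) SendBufferSize default
print(valid_nfs_buffer(256))  # True
print(valid_nfs_buffer(128))  # True: the ReceiveBufferSize default
```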

The new NFS performance tests are below, using some of the recommendations above: 

SERVER TYPE: Windows Server 2003 R2 VM ON ESXi 4.0 U1
CPU TYPE / NUMBER: VCPU / 1 / 1GB Ram (thin provisioned)
HOST TYPE: HP DL385 G2, 16GB RAM; 2x QC AMD Opteron 2356 Barcelona
STORAGE TYPE / DISK NUMBER / RAID LEVEL: EMC Celerra NS-120 / 15x 146GB 15K 4Gb FC / 3x RAID 5 5146
SAN TYPE / HBAs: NFS
OTHER: Shared NetGear 1Gb SoHo Ethernet switch

New NFS Test After Enabling the NFS file system Uncached Write Mechanism

VMware ESX Using EMC Celerra Storage Systems pages 99-101

Test Name | Avg. Response Time | Avg. I/O per Second | Avg. MB per Second
Max Throughput – 100% Read | 17.39 | 3,452.30 | 107.88
Real Life – 60% Rand / 65% Read | 20.28 | 2,816.13 | 22.00
Max Throughput – 50% Read | 19.43 | 3,051.72 | 95.37
Random 8K – 70% Read | 19.21 | 2,878.05 | 22.48
Significant improvement here!  
 
 
SERVER TYPE: Windows Server 2003 R2 VM ON ESXi 4.0 U1
CPU TYPE / NUMBER: VCPU / 1 / 1GB Ram (thin provisioned)
HOST TYPE: HP DL385 G2, 16GB RAM; 2x QC AMD Opteron 2356 Barcelona
STORAGE TYPE / DISK NUMBER / RAID LEVEL: EMC Celerra NS-120 / 15x 146GB 15K 4Gb FC / 3x RAID 5 5146
SAN TYPE / HBAs: NFS
OTHER: Shared NetGear 1Gb SoHo Ethernet switch

New NFS Test After Configuring
NFS.SendBufferSize = 256 (this was set at the default of 264 which is not divisible by 32k)
NFS.ReceiveBufferSize = 128 (this was already at the default of 128)

VMware ESX Using EMC Celerra Storage Systems pages 73-74

Test Name | Avg. Response Time | Avg. I/O per Second | Avg. MB per Second
Max Throughput – 100% Read | 17.41 | 3,449.05 | 107.78
Real Life – 60% Rand / 65% Read | 20.41 | 2,807.66 | 21.93
Max Throughput – 50% Read | 18.25 | 3,247.21 | 101.48
Random 8K – 70% Read | 18.55 | 2,996.54 | 23.41
Slight change  
 
 
SERVER TYPE: Windows Server 2003 R2 VM ON ESXi 4.0 U1
CPU TYPE / NUMBER: VCPU / 1 / 1GB Ram (thin provisioned)
HOST TYPE: HP DL385 G2, 16GB RAM; 2x QC AMD Opteron 2356 Barcelona
STORAGE TYPE / DISK NUMBER / RAID LEVEL: EMC Celerra NS-120 / 15x 146GB 15K 4Gb FC / 3x RAID 5 5146
SAN TYPE / HBAs: NFS
OTHER: Shared NetGear 1Gb SoHo Ethernet switch

New NFS Test After Configuring IOMETER for 120 second Ramp Up Time

Test Name | Avg. Response Time | Avg. I/O per Second | Avg. MB per Second
Max Throughput – 100% Read | 17.28 | 3,472.43 | 108.51
Real Life – 60% Rand / 65% Read | 21.05 | 2,726.38 | 21.30
Max Throughput – 50% Read | 17.73 | 3,338.72 | 104.34
Random 8K – 70% Read | 17.70 | 3,091.17 | 24.15

Slight change

Due to the commentary received on the 120 second ramp up, I re-ran the swISCSI test to see if that changed things much.  To compare protocol performance fairly, the same parameters must be used across the board in the tests.
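To illustrate why the ramp-up window matters, here is a toy model (fabricated latency figures, nothing to do with the actual arrays): I/Os issued while caches are still cold inflate the averages unless a ramp-up period excludes them, which is what Iometer's ramp-up setting does.

```python
# Toy model: the first I/Os of a run are slow while caches warm up.
warmup = [40.0] * 100   # 100 cold I/Os at 40 ms
steady = [15.0] * 900   # 900 steady-state I/Os at 15 ms

no_rampup   = sum(warmup + steady) / 1000   # 0s ramp up: warm-up included
with_rampup = sum(steady) / len(steady)     # ramp up discards the warm-up

print(f"{no_rampup:.1f} ms vs {with_rampup:.1f} ms")  # 17.5 ms vs 15.0 ms
```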

SERVER TYPE: Windows Server 2003 R2 VM ON ESXi 4.0 U1
CPU TYPE / NUMBER: VCPU / 1 / 1GB Ram (thin provisioned)
HOST TYPE: HP DL385 G2, 16GB RAM; 2x QC AMD Opteron 2356 Barcelona
STORAGE TYPE / DISK NUMBER / RAID LEVEL: EMC Celerra NS-120 / 15x 146GB 15K 4Gb FC / 3x RAID 5 5146
SAN TYPE / HBAs: swISCSI
OTHER: Shared NetGear 1Gb SoHo Ethernet switch

New swISCSI Test After Configuring IOMETER for 120 second Ramp Up Time

Test Name | Avg. Response Time | Avg. I/O per Second | Avg. MB per Second
Max Throughput – 100% Read | 17.79 | 3,351.07 | 104.72
Real Life – 60% Rand / 65% Read | 14.74 | 3,481.25 | 27.20
Max Throughput – 50% Read | 12.17 | 4,707.39 | 147.11
Random 8K – 70% Read | 15.02 | 3,403.39 | 26.59

swISCSI still performs slightly better than NFS on the random reads; however, the margin is much closer.

At this point I am content, stroke, happy (borrowing UK terminology there) with NFS performance.  I am now moving on to ALUA, Round Robin, and PowerPath/VE testing.  I also set up NPIV with the Celerra over the weekend – look for a blog post on that coming up.

Thank you EMC and to the folks who replied in the comments below with your help tackling best practices and NFS optimization/tuning!


39 comments

  1. Jason,

    Thanks for the post. I’ll have to get engaged with the VMTN Storage Performance Thread.

    The test results you posted are very interesting and match some of what I see with my NS-480 (with CX-480).

    A few things really stood out in this data. The 100% read test results with FC were ~10X what was obtained with iSCSI and NFS. The latencies of NFS in the 60/65 mix tests were 10X those of FC & iSCSI. Both of these results were surprising. I was expecting the results to be more consistent across protocols (as we see with our gear).

    Do you have any plans to run tests with multiple VMs on a datastore? I think there is a fair amount of value in testing shared infrastructures as this is how most deploy.

    Thanks again for sharing the results of your work. I’ll get some of ours posted on the VMTN spreadsheet.

  2. jason says:

    I will certainly run more tests. The VMTN tests call for a single VM with pre-defined IOMETER settings. This was test step 1 of many.

    Being new to Celerra (and EMC), I cannot yet account for the performance differences in the NFS test. I wasn’t expecting it.

    For future tests, I will run many workloads and see what can be squeezed out. Unfortunately I do not have 4Gb fabric switches nor do I have 10Gb Ethernet NICs or switches. I’m limited to 2Gb fabric switches and 1Gb Ethernet. I will run into some infrastructure limitations which the Celerra was built to be faster than. The Celerra supports 4Gb fibre on the SPE (block storage) and 10Gb optical on the X-blades where the NAS Data Movers are.

  3. Jason,

    One more question. In your post you show that you have some disk drives identified as FC and others as NAS.

    Does this configuration mean that drives must be configured for either FC or NAS access, and if this is true, where does iSCSI reside?

  4. jason says:

    Vaughn,

    Fibre Channel drives were accessed via the SAN fabric (2Gb).

    I used the term NAS to include both iSCSI and NFS which could be incorrect but I’ve heard it used this way before. Access to the NAS drives goes through cge0 in the X-blades.

    Jas

  5. @Vaughn Stewart it seems to me that your last question was a tricky one for Jason since, as he wrote in this post, he is taking his first steps with this type of storage.

  6. jason says:

    I ran more tests on NFS and the results were fairly consistent with the first set of tests. This time I ran 5 concurrent IOMETER sessions. The Max Throughput test numbers stack, and the cumulative results are about the same as the single IOMETER test. The Random Read test numbers do not stack – running one IOMETER test produces roughly the same results as running 5 concurrent IOMETER tests, although the latency numbers did drift higher in the Random 8K – 70% Read test.

    Max Throughput – 100% Read Test (columns: Avg. Response Time / Avg. I/O per Second / Avg. MB per Second)
    IOMETER1 86 725 23
    IOMETER2 86 724 23
    IOMETER3 88 712 22
    IOMETER4 88 712 22
    IOMETER5 88 709 22

    Real Life – 60% Rand / 65% Read Test
    IOMETER1 122 490 4
    IOMETER2 124 482 4
    IOMETER3 122 490 4
    IOMETER4 121 496 4
    IOMETER5 121 496 4

    Max Throughput – 50% Read Test
    IOMETER1 58 1,063 33
    IOMETER2 59 1,049 33
    IOMETER3 59 1,043 33
    IOMETER4 58 1,047 33
    IOMETER5 57 1,069 33

    Random 8K – 70% Read Test
    IOMETER1 177 337 3
    IOMETER2 183 326 3
    IOMETER3 176 339 3
    IOMETER4 173 345 3
    IOMETER5 176 339 3

    Cloning a 270GB VM on FC and swISCSI takes about 16 minutes.
    Cloning a 270GB VM on NFS takes nearly an hour.
    DeDupe and Virtual (Thin) Provisioning are not enabled on any of the file systems.
    NFS goes through cge1 on the X-blade.
    swISCSI goes through cge0 on the X-blade.
    X-blades are configured in primary/standby mode.

  7. Jim O'Donald says:

    Jason,

    Do you know what version of DART code you are running on the Celerra? I ask because we have encountered a bug with NFS on an older version of DART code that slowed down performance on NFS.

  8. jason says:

    @Jim O’Donald:
    Version: T5.6.47.11

  9. Jim O'Donald says:

    Do your switches support jumbo frames? I wonder if that would make a difference.

  10. Jason:

    I too am interested in the source of the 10x difference in performance in the FC comparison. Based on the rated raw performance capacity of each of the 15 FC drives – even at RAID0 – the ceiling for an accelerated array should be between 5,800 and 6,500 IOPS. As a configuration of 3 groups of 5-disk RAID5 organizations, the realized raw performance would be measurably less.

    In a 15-drive RAID0 configuration, peak disk IOPS (read, sequential) could be estimated at 425 per drive. Striping across the full array would yield a full-stripe read somewhere south of 6,400 IOPS (disks alone). This indicates that some serious caching is going on between the FC controllers and HBAs… The average response times hint at this being the case.

    Organized as described (3x RAID5), the performance numbers on iSCSI are about what you would expect and very good. I’m wondering if some of the EMC’ers out there would care to elaborate on the source of the performance difference beyond my guesswork.

    Thanks for keeping the performance thread alive. I look forward to seeing more details in the future.

  11. make that “ceiling for an unaccelerated array” – the spell checker changed it for me :)

  12. jason says:

    @Jim O’Donald I doubt the NetGear switch supports jumbo frames. Thank you for the suggestion.

    @Collin C. MacMillan You are correct about the iSCSI/NFS drive layout. The aggregate number of drives is 15, however, these 15 drives are broken up into 3 RAID5 storage groups consisting of 5 drives each.

    On the fibre channel side, each storage processor (SP) has 598MB cache configured for 32MB Read and 566MB Write.

  13. Collin C MacMillan says:

    Jason – obviously there is something wrong with your NFS results as – by all accounts and performance testing – NFS and iSCSI should be within 5% of each other… Maybe it is a bug as suggested (given the difference).

    However, looking at the FC vs. iSCSI results, the IOPS difference jumps out as suspicious for the standard VMTN test set, which should exceed the 600MB cache. But at 35K IOPS and 1GB/sec, the results (100% read) far exceed both disk and HBA capacities (by a factor of 4-5x). Likewise, the response time on the 100% read test would indicate heavy caching. If the read cache is only 32MB, what accounts for the 4-5x performance bonus? Can you show a concurrent test w/FC as you did on the NFS follow-up?

    Thanks!

    The other tests seem very real world, but I’d have a lot of trouble when my results so far exceeded my expectations. Tried iozone to look at cache roll-offs?

  14. jason says:

    Thank you for your interest in the article and the comments! The FC fabric speed is 2Gb, not 1Gb (4Gb HBAs but only 2Gb fabric switches unfortunately). I will perform more tests on FC and post the results.

  15. Collin C MacMillan says:

    I understood the FC to be limited to 2Gbps – I was referring to the 1.10GB/sec (1,101.92MB/s) reported on the 100% read trial. Capacity of the 2Gbps connection is only 250MB/s (0.25GB/s), so that’s the 4x acceleration factor in bandwidth. The same is true of the IOPS report. Something to be investigated and explained.

  16. Jim O'Donald says:

    @Jason I was thinking about this and I was wondering how the cache was configured for the CX-120. That could affect the performance you are seeing. The EMC recommendation is 20% read and 80% write. I would be interested to see if that affects your results.

  17. Jason,

    Great post. I see the same performance re: large block file copies in NFS operations in my environment. Cloning and snapshot deletions take far longer on my 1Gb NFS volumes (which happen to be on a FAS3140) than on my 4Gb FC LUNs (which happen to be EMC CLARiiON). In daily operations, NFS performs comparably to FC. I always took this as the nature of NFS vs. block protocols, and the trade-off for ease of management/scalability. Thoughts?

  18. Jonathan Barley says:

    I had similar NFS performance issues with an NS-40 on DART 5.5 a couple of years ago; basically Celerra/NFS had trouble when the queue depth was greater than about 5. The solution was to use the “uncached” option when mounting the file system, e.g. “server_mount server_2 -o uncached Afs_00n /Afs_00n”, but I don’t know whether this is still relevant with DART 5.6.

  19. @Russ – You have a FAS3140 and you use VMware clones? Oh, you need to check out the Rapid Cloning Utility. Immediate, pre-deduplicated VM clones.

    http://blogs.netapp.com/virtualstorageguy/2009/12/preview-rapid-cloning-utility-30-vcenter-plug-in.html

    (Jason – sorry for the commercial)

    @Jason – The 4Gb FC numbers suggest you are caching a large amount of the I/O workload.

    With the NFS & iSCSI numbers also being on a 1GbE pipe, I would suggest that the only accurate testing you could demonstrate would be shared datastore scaling across a multinode cluster with a common block size (like 4KB, as in NTFS & EXT3).

    For high performance IO tests you could aggregate the 1 GbE links with iSCSI (& multi-TCP sessions & RR PSP) and compare those results to FC.

  20. jason says:

    @Vaughn – first of all, how are you doing? I hope your message indicates you’ve pulled through.

    No problem on the RCU. Please do share the information.

    I don’t have managed Gb Ethernet switches that will support aggregation so I’m stuck with the 1Gb pipe for now, which I’m fine with. Will need to note in the tests that performance numbers were performed over a single 1Gb NetGear switch (w/o aggregation).

    ps. I got a sweet package tonight. Might need two airline seats for PEX.

  21. Russ says:

    @Vaughan – I do use RCU, and I love it. I was speaking more along the lines of svmotion, specifically. Snapshots aren’t really an issue with SMVI, but in the event where someone does take a vmsnap in vsphere, commit can take hours depending upon deltas.

    @Jason – even if you used link aggregation, you’ll never get more than 1Gb of throughput from source to destination. Unless you know something I don’t. :) I’m pricing 10Gb blades for my 6509, which will make bandwidth (much) less of an issue.

  22. Russ says:

    P.S. I was referring to NFS on the 1Gb limitation, which seems to be the protocol in question. iSCSI performance looks pretty good on your NetGear. I’m curious to see what you find, Jason. I wish I had a spare Catalyst laying around for you….

  23. Leif Hvidsten says:

    While I’m not an EMC user and only have experience with NetApp, Jonathan’s post may be onto something. In the excellent multivendor post on NFS by Chad, Vaughn, and others, it is mentioned to:
    1. Enable the uncached write mechanism for all file systems (30%+ improvement).
    2. Disable the prefetch read mechanism for file systems consisting of VMs with small random access patterns.

    http://virtualgeek.typepad.com/virtual_geek/2009/06/a-multivendor-post-to-help-our-mutual-nfs-customers-using-vmware.html

    You could also check that your design follows some of their general best practices as well as EMC’s docs. Though, I would think that when performance testing just 1 VM’s I/O, a load-balanced link aggregation setup isn’t going to improve anything, since the traffic would be over 1 TCP session to 1 datastore – unless you’re connected to more than 1 datastore and generating I/O concurrently to each. However, your spindle setup would be a huge factor.

  24. jason says:

    Performance numbers updated in the blog post. NFS is a lot closer now to swISCSI after a few tweaks from EMC.

  25. Lars Troen says:

    You should also update the VMTN thread with your new finding. Everyone is not reading this blog yet. :)

  26. jason says:

    Yep I plan on it

  27. Chad Sakac says:

    Thanks Jason! NFS performance on Celerra is right in line with iSCSI in general (minor variances depending on I/O type and other factors), and both can meet a lot of customer needs. In turn, with small block I/O in the 4-64K range (which tends to be IOps bound), they perform nearly identically to FC. FC does pull away with larger I/O sizes (>64K), which tend to be bandwidth bound (MBps), unless one switches to 10GbE (which is supported across the EMC midrange portfolio). This is due to the single TCP session (for data) nature of the NFSv3 client PER NFS mount.

    The tweaks are important. I would HIGHLY recommend anyone using Celerra NFS to look at the steps we sent over to Jason, which Scott Lowe has posted here:

    http://blog.scottlowe.org/2010/01/31/emc-celerra-optimizations-for-vmware-on-nfs/

    The Celerra for VMware techbook is also important reading for any EMC Celerra NFS customer using VMware (there are also CLARiiON and Symmetrix with VMware techbooks). These are all publicly and openly available (and orderable in hardcopy if you so desire).

    One that is Celerra specific, and very important, is the “uncached” filesystem parameter. This means that the Data Mover doesn’t cache the I/O. Based on the hardware architecture of the Celerra, write/read caching is still done, but it’s done on the block part of the Celerra (underneath the filesystem). The Celerra filesystem read/write caching is tuned for general purpose NAS (a wide variety of files of all different sizes); in the NFS on VMware use case, the pattern is different – dominated by a relatively small number of very large files (VMDKs). This parameter (which applies only to the particular filesystem, not the Celerra as a whole) has a very significant effect on performance.

    A vcenter plugin is coming very shortly that automates all the filesystem creation and management (along with many, many other very cool things) on EMC Celerra NFS. Come to VMware partner exchange!

    :-)

  28. Russ says:

    Hey Jason,

    RE: “Cloning a 270GB VM on NFS takes nearly an hour.”

    Has cloning speed improved with these updates?

  29. jason says:

    Russ, slight improvement. New clone time of a 270GB VM is 45 minutes from and to NFS.

  30. Paul Aviles says:

    Jason, can you post the IOmeter configuration files for the test you conducted? I am trying to duplicate your performance and I cannot get any way closer to your numbers.

    Regards,

    Paul

  31. jason says:

    @Paul:
    I’ve uploaded the Iometer benchmark file and a description text file to the following location:
    http://boche.net/dropbox/iometer/

  32. jonathan says:

    Thanks. A great article which is clear and helpful. Actually, I also want to try to duplicate your performance result.

  33. Paul Aviles says:

    Jason, the link seems to be broken. Can you recheck it please?

    Thanks,

    Paul

  34. jason says:

    Which link specifically? All 4 links in the first paragraph are working. I verified this morning.

    Jas

