FreeNAS Performance Part 1: NFS Storage

EDIT 1/8/2013: This post should really be titled "FreeNAS: The performance you will get when you don't allocate enough RAM or enough disk resources."

These results are not a true representation of what FreeNAS can do. Here's a better example: FreeNAS Performance Part 2.

Following the Microsoft iSCSI vs. StarWind iSCSI comparison, I would also like to test another option for network storage: FreeNAS, which is FreeBSD based. It supports AFP, CIFS, NFS, and iSCSI, and it has a very user-friendly web GUI; further information is available at the FreeNAS website.

Test Specifications

The FreeNAS server ran on the same whitebox server used for the StarWind and Microsoft iSCSI tests: a 3.00 GHz Xeon, 3 GB RAM, a single 1 GbE interface, and a single 80 GB spindle holding both the OS and the NFS export.

OS Installation Performance

Let me put it this way: after 1 hour, none of the VMs had progressed beyond ~48% completion. Just short of 2 hours after the installs were initiated, one VM had successfully installed its OS, and the other two had failed setup with errors. Here's some of the built-in reporting from FreeNAS:

And CPU utilization:

The latency for the NFS datastore is terrible:

Running IOMeter on a single VM while the other two VMs were installing their OS (same IOMeter worker configuration as in the previous tests):

Hoping to improve performance, I powered down the other two VMs and ran the IOMeter test again:

The IOPS improved by only ~100, and VM disk I/O latency was still around 1,700+ ms. This is confirmed again by the terrible host datastore latency, with an overall average write latency above 100 ms:
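As a side note, if you want to watch this latency live rather than through the vSphere performance charts, here is a minimal sketch using esxtop. It assumes SSH access to the ESXi host; the output path is just an example.

```sh
# Interactive mode: press 'v' for the per-VM disk view, or 'u' for the disk
# device view (on recent ESXi builds NFS datastores appear here as well), and
# watch the DAVG/GAVG latency columns.
esxtop

# Batch mode: capture 10 samples at 5-second intervals to CSV for offline
# analysis (e.g. in perfmon or a spreadsheet). The output path is arbitrary.
esxtop -b -d 5 -n 10 > /tmp/esxtop-nfs.csv
```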

 

Conclusion

FreeNAS NFS storage, when configured the same way as in all of the previous experiments, performed worse than local storage.

4 thoughts on “FreeNAS Performance Part 1: NFS Storage”

  1. FreeNAS really needs more than 4 GB of usable RAM (so closer to 5 GB installed RAM) before its ZFS backend can really start to shine.

    Also, testing FreeNAS with a single disk as the backend storage is not realistic; a single disk is not the standard deployment scenario for FreeNAS, and it can only provide so many IOPS. If you were testing with more than one VM installation at a time, you were most likely overloading that single storage disk.
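
    For reference, a minimal sketch of the usual FreeBSD loader.conf tunables for ZFS memory (the tunable names are standard FreeBSD loader tunables; the values are purely illustrative for a box with ~8 GB of RAM, not this test rig):

    ```sh
    # Illustrative only: ZFS memory tunables on FreeBSD/FreeNAS.
    # Adjust the values for your actual installed RAM.
    cat >> /boot/loader.conf <<'EOF'
    vfs.zfs.arc_max="4G"   # cap the ARC so the OS and services keep some RAM
    vm.kmem_size="6G"      # kernel memory ceiling (needed on older FreeBSD/FreeNAS)
    EOF
    # Reboot for loader tunables to take effect.
    ```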

  2. Anon-

    This is true, but at the same time, it's an apples-to-apples comparison. Testing ANY storage platform against a single spindle is not a "standard deployment" at all. The point was that StarWind can front ANY storage with a high-speed RAM cache and inline dedupe, even a single disk. Imagine what it could do with eight 10K SAS drives…

  3. NFS on ZFS for VMware is not a good idea. The VMware NFS client works synchronously; NFS can work both ways, but ZFS is better at async writes (a dedicated log device (SLOG) for the ZIL should solve this problem; try to use ZFS v28).
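
    To make that suggestion concrete, a minimal sketch of the two usual remedies; the pool, dataset, and device names below (tank, tank/nfs, ada1) are placeholders:

    ```sh
    # Preferred fix: give the ZIL a fast dedicated log device (SLOG), e.g. an
    # SSD, so ESXi's synchronous NFS writes don't serialize on the data disks.
    zpool add tank log ada1

    # Blunt alternative for testing only: disable sync semantics on the dataset.
    # This risks data loss on power failure and should not be used in production.
    zfs set sync=disabled tank/nfs
    ```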

