Software-Based Storage: Thoughts and local storage tests

It has occurred to me that these comparisons are not exactly equal. While the VM configurations, test procedures, and testing hardware are all identical, there are certainly ways to improve performance. Some methods could be applied to every comparison (adding a storage controller card with battery-backed write cache and several 15K SAS spindles), and some are specific to the software presenting the storage (using an SSD to house the ZIL and/or L2ARC, which only applies to ZFS-based products; a quick sketch of that follows). In reality, these tests are performed using the absolute worst-case scenario: who in their right mind would use a single 7200 RPM, non-enterprise drive to house anything more than a music library?
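For the ZFS-based products, that SSD tuning amounts to two pool operations. These are standard zpool commands, but the pool name (tank) and the SSD partition names (ada2p1, ada2p2) are hypothetical placeholders for whatever devices you'd actually dedicate:

    # Add an SSD partition as a dedicated intent log (ZIL) to absorb sync writes
    zpool add tank log ada2p1
    # Add a second SSD partition as an L2ARC read cache
    zpool add tank cache ada2p2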

All that said, I wanted to take the network and the third-party storage providers out of the equation and repeat some tests using a local datastore on the ESX host. Here's what the datastore looked like during zeroing and OS install:
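A note on the zeroing step: it corresponds to provisioning the VMDKs as eager-zeroed thick disks, which writes zeros across the full disk up front. Assuming that disk format (the post doesn't state it outright), the equivalent step from the ESX command line would look something like the following, with the size and datastore path as placeholders:

    # Create a 40 GB eager-zeroed thick VMDK on the local datastore
    vmkfstools -c 40g -d eagerzeroedthick /vmfs/volumes/local-ds/vm1/vm1.vmdk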

During install, latency stayed right around 40ms. Complete installation for 3 VMs took just under 1 hour. Here’s the datastore latency during idle operation and the IOMeter test at the far right of the chart:
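If you want to watch these same numbers live on the host, esxtop is the usual tool. I can't say for certain it's what produced the charts here, but it exposes the same per-device latency:

    # On the ESX host: launch esxtop, then press 'u' for the disk device view.
    # DAVG/cmd is the average device latency per command, in milliseconds.
    esxtop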

Latency was just under 20ms during idle. The first test was a single VM running the standard IOMeter worker as in previous comparisons:

This shows local storage to be around 25 IOPS worse than the MS iSCSI target in the single IOMeter test. The 3 VM test is where DAS takes a big hit (with three VMs issuing I/O at once, a single 7200 RPM spindle spends most of its time seeking):

The best average I saw during the test:

So in the end, local storage is clearly not the way to go (except in some very specific use cases, but those involve gear that will NEVER be approved for a home lab, nor should it be).
