*NEW* StarWind V8 [BETA] is here, and it looks very promising!
For some small-to-medium-sized businesses, and even some home power users, shared storage is a must-have. Unfortunately, standard high-performance SANs carry a hefty price tag and massive administrative overhead - so an EMC or NetApp filer is often out of the question. What is the answer? Turn an everyday server into an iSCSI storage server using iSCSI target software.
Microsoft's iSCSI target software was originally part of the Windows Storage Server SKU and only available to OEMs. When Windows Storage Server 2008 was released, it was also included in TechNet subscriptions, making it readily available for businesses and home users to test and validate. Windows Storage Server 2008 R2 was different - the target was released as an installable package for any Server 2008 R2 install - but again, it was only available through TechNet. Then, in April of 2011, Microsoft released the iSCSI target software installer to the public (in this blog post).
Enter StarWind. They have had a software target solution around for quite some time - the current release version is 5.8. The free-for-home-use version is becoming very feature-rich - features include thin provisioning, a high-speed cache, and block-level deduplication. The paid versions add multiple server support, HA mirroring, and failover - the full version comparison can be found here.
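Block-level deduplication, in general terms, works by splitting data into fixed-size blocks, hashing each block, and storing only one copy of each unique block. StarWind's actual engine is proprietary, so this is only a minimal sketch of the general technique - the 4KB block size and SHA-256 hash here are illustrative assumptions, not StarWind's implementation:

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative block size; real targets vary


def dedup_store(data: bytes):
    """Split data into fixed-size blocks and keep only the unique ones.

    Returns (unique_blocks, recipe): a dict mapping hash -> block, plus the
    ordered list of hashes needed to reconstruct the original data.
    """
    unique_blocks = {}
    recipe = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        unique_blocks.setdefault(digest, block)  # store the first copy only
        recipe.append(digest)
    return unique_blocks, recipe


def dedup_restore(unique_blocks, recipe):
    """Reassemble the original byte stream from the block store."""
    return b"".join(unique_blocks[digest] for digest in recipe)


# A toy "disk" where most blocks are identical (think: many VMs sharing
# the same OS files) - 16KB logical, only 8KB actually stored
disk = b"A" * 8192 + b"B" * 4096 + b"A" * 4096
store, recipe = dedup_store(disk)
print(len(disk), "bytes logical,", len(store) * BLOCK_SIZE, "bytes stored")
```

This is why dedup should shine with multiple similar VMs on one datastore: identical OS blocks collapse into a single stored copy.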
The Test Setup
I first want to compare the two solutions side by side - testing general OS performance first, then more specialized workloads. All network traffic is carried on a Catalyst 2970G switch, on the native VLAN alongside all other traffic - this is not an optimal configuration, but I want to start with the basics and try to improve performance from there.
iSCSI Target Server
- Whitebox Intel Xeon 5160 Dual Core 3.0GHz
- 3GB RAM, single GbE link
- 60 GB iSCSI LUN presented from standalone 80GB 7200RPM SATA disk
ESXi 5.0 Host Server
- Whitebox AMD Athlon X2 3.0 GHz
- 8GB RAM, single GbE link
Windows 7 Test VM
- 2 vCPU, 2GB RAM
- Windows 7 SP1 x86
- 20GB system volume - *Thick provisioned, eager zeroed
Comparison 1: Installing OS
The StarWind target was installed, and a virtual volume was presented over iSCSI - 60GB, with deduplication turned on and a 1.5GB cache enabled. First impressions: OS installation was remarkably quick - I did not time it, but the speed was immediately apparent. During the install, the iSCSI server was clearly dedicating most of its resources to iSCSI operations - the single 1GbE link was saturated at 95%:
The high speed cache feature is very clearly a factor as it allocates the RAM immediately, and the CPU load is all from the StarWind process:
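For context, a quick back-of-the-envelope conversion shows what 95% of a single GbE link means in absolute terms (this ignores TCP and iSCSI protocol overhead, so the real payload rate is somewhat lower):

```python
LINK_GBPS = 1.0      # a single gigabit Ethernet link
UTILIZATION = 0.95   # observed during the StarWind OS install

# 1 Gbit/s = 125 MB/s of raw line rate (1000 Mbit / 8 bits per byte)
raw_mb_per_s = LINK_GBPS * 1000 / 8
effective = raw_mb_per_s * UTILIZATION
print(f"~{effective:.0f} MB/s on the wire")  # ~119 MB/s
```

In other words, the StarWind target was pushing roughly as much data as a single GbE link can physically carry - the network, not the target, was the bottleneck.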
The StarWind software was uninstalled and the Microsoft target software installed in its place. A 60GB LUN was presented to ESXi, and OS installation began (the VM was created with the same specs). Immediately, it was obvious that the installation was going much more slowly than with the StarWind target. The resources in use on the iSCSI server clearly show this:
Average network use is around 30%:
Same story with CPU use and allocated RAM:
Comparison 2: IOMeter Test
Here is the configuration used for the tests:
This test may be a bit one-sided due to the fact that this test VM is the only one running on this datastore, and thus the entire IOMeter test file is likely being served from the RAM cache. Either way, here are the results:
It will be interesting to see if this performance scales with more VMs (containing similar blocks - in the kernel, etc) and with more RAM in the iSCSI server.
The results while using the Microsoft software target:
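When reading IOMeter numbers, the three headline metrics - IOPS, throughput, and latency - are tied together by simple arithmetic, which is handy for sanity-checking results. A quick sketch (the 4KB block size, 5,000 IOPS, and queue depth of 16 below are illustrative example figures, not the actual test configuration):

```python
def throughput_mb_s(iops: float, block_size_bytes: int) -> float:
    """MB/s implied by an IOPS figure at a given block size."""
    return iops * block_size_bytes / 1_000_000


def avg_latency_ms(iops: float, outstanding_ios: int) -> float:
    """Little's law: average latency implied by IOPS and queue depth."""
    return outstanding_ios / iops * 1000


# Example: 5,000 IOPS of 4KB I/O at a queue depth of 16
print(throughput_mb_s(5000, 4096))  # 20.48 MB/s
print(avg_latency_ms(5000, 16))     # 3.2 ms
```

If a result's IOPS, MB/s, and latency do not agree by these relationships, something in the test configuration (block size or outstanding I/Os) is not what you think it is.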
Part 1 Results
IOMeter clearly shows a 10X performance advantage for the StarWind target - these results will need to be verified under a heavier load, but I expect that more RAM and faster backing disks will only improve them.
In Part 2, I will see whether these results scale with multiple server VM workloads - and also how effective the StarWind deduplication engine is.