13. March 2012 09:01
by Jake Rutski
In Part 2 of this series, we will look at the performance of the Software iSCSI targets under a heavier load - more specifically, 3 server VMs. While this may not seem like very much load, keep in mind that the backend storage is still just a single 7200RPM spindle, and all networking is over a single 1GbE link. All of the hardware from test 1 is the same, but here are the specs for the three new VMs:
Windows Server 2008R2 SP1 (3x)
- 1 vCPU, 2GB vRAM
- 18GB system drive - *thick provisioned, eager zeroed*
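Before digging into the graphs, it helps to have rough ceilings in mind for this rig. The numbers below are common rules of thumb, not measurements from this hardware - the protocol overhead fraction and average seek time in particular are assumptions:

```python
# Back-of-envelope ceilings for a 1GbE link and a single 7200RPM spindle.
# All inputs are assumptions/rules of thumb, not measured values.

link_gbps = 1.0
usable_fraction = 0.85  # assumption: TCP/iSCSI overhead eats ~15%
link_mb_per_s = link_gbps * 1000 / 8 * usable_fraction
print(f"Usable 1GbE throughput: ~{link_mb_per_s:.0f} MB/s")  # ~106 MB/s

rpm = 7200
avg_rotational_ms = (60_000 / rpm) / 2  # half a revolution on average
avg_seek_ms = 8.5                       # assumption: typical desktop drive
iops = 1000 / (avg_rotational_ms + avg_seek_ms)
print(f"Estimated random IOPS for one {rpm}RPM spindle: ~{iops:.0f}")  # ~79
```

In other words, for random I/O the single spindle should hit its limit long before the network does - which is worth remembering when reading the utilization graphs below.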
Create VMs, mount install ISO, begin installing OS, repeat two more times. During the install, here's what the iSCSI server looks like:
It's a bit hard to tell, but the green graph is iSCSI I/O Bytes/second, the red is iSCSI target disk latency, and the yellow is iSCSI requests/second. In the background, you can see that the CPU and RAM are fairly dormant.
The iSCSI server network adapter does not appear to utilize more than 30% of total available bandwidth. *Keep this network utilization graph in mind...the pattern will show up again...*
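That ~30% figure works out to well under what the link can carry. A quick calculation (the 30% is read off the graph; the rest are nominal 1GbE numbers):

```python
# Convert ~30% utilization of a 1GbE link into MB/s.
# The 30% figure is an estimate read off the utilization graph.
link_bytes_per_s = 1e9 / 8          # 1GbE raw capacity: 125 MB/s
observed = 0.30 * link_bytes_per_s
print(f"~30% of 1GbE ≈ {observed / 1e6:.1f} MB/s")  # ~37.5 MB/s
```

At ~37 MB/s the network has plenty of headroom, which suggests the bottleneck is elsewhere - most likely the single backend spindle.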
The next important bit of information: from the start of setup to a running Windows desktop took 35 minutes, 53 seconds for 3 VMs installing simultaneously.
First, one VM running an IOMeter test, while the other two are idle:
Once again, network usage does not appear to exceed ~30% during the test. IOMeter results for 1 VM (start of test):
Average after one minute:
Next, 3 VMs running the same IOMeter workers (same as in Part 1):
After one minute:
As you can see from the IOPS numbers, this may not be the best solution. In Part 2B we'll look at the same tests using a StarWind iSCSI target.