My lab consists of pretty basic hardware, but it's still interesting to see what performance differences various solutions yield. So I decided to find out whether there was any measurable difference between Windows Server 2008 R2 NFS and StarWind iSCSI as storage for VMs.
I had XenServer 5.6 FP1 running on an HP ProLiant ML115 G5. As the storage box I used an ML115 G1 with several disks in it; I decided to run all tests against a single basic 500 GB SATA disk (Samsung HD501LJ). Both machines were connected to an 8-port Gigabit switch (Netgear GS608). Nothing fancy: one subnet, no VLANs, no redundancy, no security 🙂 I just made sure there was no other traffic on the switch that could interfere.
I configured NFS on 2008 R2 and StarWind iSCSI with a 200 GB image disk. Then I installed a VM running 2008 R2 and used Copy VM to get an identical machine on both the NFS and the iSCSI storage. One at a time, I ran ATTO Disk Benchmark inside each VM to measure disk performance. I repeated the tests a few times to see whether the results varied, but they came out almost identical every time.
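For reference, attaching the two storage repositories from the XenServer console looks roughly like this (the IP address, share path, and target IQN below are placeholders, not my actual lab values; XenCenter's New Storage wizard does the same thing under the hood):

```shell
# Hypothetical server address and paths -- substitute your own.
# NFS SR backed by the 2008 R2 NFS share:
xe sr-create name-label="w2k8r2-nfs" type=nfs content-type=user \
    device-config:server=192.168.1.20 device-config:serverpath=/vmstore

# iSCSI SR backed by the StarWind target.
# Probe first to discover the SCSIid of the LUN:
xe sr-probe type=lvmoiscsi \
    device-config:target=192.168.1.20 device-config:targetIQN=<iqn>
xe sr-create name-label="starwind-iscsi" type=lvmoiscsi content-type=user \
    device-config:target=192.168.1.20 device-config:targetIQN=<iqn> \
    device-config:SCSIid=<scsi-id>
```

This is just a configuration sketch; it obviously needs a live XenServer host and reachable storage to actually run.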
From my tests, StarWind iSCSI appears to be about 2-3 times faster on writes, while NFS is slightly faster on reads. There are certainly more tests one could run to get more accurate results, but this should give you an idea if you're considering the same setup in your lab. Please comment on your experiences.
Result using Starwind iSCSI:
Result using 2008 R2 NFS:
Oh, and by the way, I ran hdparm -t /dev/iscsi/<path> to check the raw read speed as well (about 10 times), and it showed ~40-52 MB/s.
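Rather than eyeballing ten separate hdparm runs, you can loop and average them. A small sketch of what I mean (the device path and run count are assumptions; point DEV at your own iSCSI-backed device):

```shell
#!/bin/sh
# Average several "hdparm -t" runs instead of reading them one by one.
# DEV is a placeholder device path -- substitute your iSCSI disk.
DEV=/dev/sda
RUNS=10

# hdparm -t prints a summary line ending in "... = 49.83 MB/sec".
# Pull out the MB/sec figure from each run and print the mean.
avg_mbps() {
    awk '/MB\/sec/ { sum += $(NF-1); n++ }
         END { if (n) printf "avg: %.1f MB/sec over %d runs\n", sum / n, n }'
}

# Only attempt the timed runs if hdparm and the device actually exist.
if command -v hdparm >/dev/null 2>&1 && [ -b "$DEV" ]; then
    i=1
    while [ "$i" -le "$RUNS" ]; do
        hdparm -t "$DEV"
        i=$((i + 1))
    done | avg_mbps
fi
```

Note that hdparm -t only measures sequential buffered reads from dom0's point of view, so it's a sanity check rather than a substitute for the in-guest ATTO numbers.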