Mathboy2006
Contributor

Which is faster: iSCSI to a Linux box over 1Gb, or local SATA storage on an Intel ICH9?

Short version:

Is using iSCSI with a whitebox target generally faster than using local onboard non-RAID SATA storage (ICH9 on an Intel DP35DP board in my case)?

Long version:

I'm choosing between two hardware configurations for ESXi 3.5 Update 2:

1) One machine:

ESXi and storage all on one machine, using SATA drives connected to the ICH9 southbridge.

2) Two machines:

iSCSI target: DP35DP with 4x500GB drives in RAID-5 and a dedicated gigabit network connection for iSCSI, running Ubuntu 8.04 Server (see the config sketch below the list).

ESXi box: Athlon X2 4200+ booting from a USB key. The ESXi machine will have two NICs (one for iSCSI and one for everything else).
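
I haven't finalized the target software, but the iscsitarget (IET) package in Ubuntu 8.04 is the likely candidate. A minimal /etc/ietd.conf exporting the array might look something like this (the IQN and device path are placeholders):

    Target iqn.2008-09.local.storage:esxi.lun0
        Lun 0 Path=/dev/md0,Type=fileio

After editing, restart with /etc/init.d/iscsitarget restart and point the ESXi software iSCSI initiator at the target box's IP.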

I've tried (1), and was disappointed by the poor file transfer performance from a physical machine to a VM (about 8 to 9 MB/s); I was expecting at least 20 MB/s in real use. Is trying (2) worth the effort? Is it likely to be faster?

Thank you in advance for any help or suggestions! :)

kjb007
Immortal

Sounds like the least common denominator is still the SATA drives. Try taking the RAID out of the picture first and see how well that performs. It may be a problem with the controller rather than the connection medium.
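
A quick way to sanity-check the raw disks outside of ESX, assuming you can boot the box into Linux, is hdparm and dd (device and file paths are just examples):

    # buffered sequential read straight off the device
    hdparm -t /dev/sda
    # 1 GB sequential write, bypassing the page cache
    dd if=/dev/zero of=/mnt/test.bin bs=1M count=1024 oflag=direct

If those numbers look healthy, the bottleneck is more likely the controller/driver on the ESX side than the drives themselves.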

-KjB

Mathboy2006
Contributor

The 8-9 MB/s was without RAID; I'm currently testing (1) with just one 500GB drive, since ESXi doesn't support RAID on the ICH9.

I'm also wondering if the poor performance is related to ESXi using the ICH9 controller. I haven't seen any benchmarks with it, so I'm not sure what to expect. Ubuntu runs fine with it - much faster than 9 MB/s whether or not software RAID is used - which is why I was thinking iSCSI might be faster.
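
For reference, the RAID on the Ubuntu box is plain mdadm software RAID, built roughly like this (device names are examples from my setup):

    mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
    mkfs.ext3 /dev/md0    # filesystem only needed for NFS; iSCSI can export /dev/md0 raw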

kjb007
Immortal

Ultimately, you're putting a VMFS on it, and the vmkernel has to deal with the controller. If you're getting better results from Ubuntu, which uses the regular Linux drivers to talk to the controller, then yes, iSCSI, or even NFS in this case, will likely run faster. If your network is not congested, which it appears it is not, NFS may perform faster. And since you no longer have to worry about the controller, iSCSI will probably perform well too.
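
If you want to try NFS, the export is a single line in /etc/exports on the Ubuntu box (path and subnet are examples):

    /srv/vmstore  192.168.10.0/24(rw,no_root_squash,async,no_subtree_check)

Run exportfs -ra after editing. Note that async lets the server acknowledge writes before they hit disk - faster, but you risk data loss on a crash; sync is the safe setting.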

Hope that's clear,

-KjB

Mathboy2006
Contributor

OK, thanks - sounds like it's worth trying. :) I'll give it a go and post the results here.

Dave_Mishchenko
Immortal

Given that the ICH driver is not supported, I wouldn't expect great performance, so trying iSCSI as suggested is a good idea. Performance is further complicated by the fact that without a battery-backed write cache, ESX will not cache any writes, which slows things down.

Mathboy2006
Contributor

"Performance is further complicated by the fact that without a battery-backed write cache, ESX will not cache any writes, which slows things down."

Thanks - do you mean the lack of a write cache is a problem for the local SATA storage, but not for iSCSI? (I'm guessing the target machine handles all the read/write caching for iSCSI, so there's no problem as long as there's a dedicated network link?)

kjb007
Immortal

It will just further reduce your performance on either side. Making slow even slower. :) Both sides will be affected by the lack of cache, but it will be more pronounced on a system that is already performing poorly.

Any additional caching done on the target side, if your iSCSI target software does any, will help, and yes, it happens at the location where the disk actually lives. The dedicated network link will help as well, since there's no competition for the throughput.

-KjB

Dave_Mishchenko
Immortal

ESX will wait for the write to be committed in both cases, and with local storage ESX only gets that confirmation when the physical write has actually completed. With your iSCSI target, depending on its caching abilities, it may signal to ESX that a write has completed when in fact it has not yet hit the disk, so you may get a performance edge there (but with serious consequences if the cache is not battery-protected).
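
With the IET target sketched earlier, for example, which behaviour you get depends on the LUN type in /etc/ietd.conf (device path is just a placeholder):

    Lun 0 Path=/dev/md0,Type=fileio     # writes go via the server's page cache: acknowledged early, faster, volatile
    Lun 0 Path=/dev/md0,Type=blockio    # direct I/O to the block device: acknowledged only once the device accepts the write

fileio will tend to look faster in benchmarks for exactly the reason above.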

Mathboy2006
Contributor

Using a Gigabyte EP35-DS3L board and local SATA storage off the ICH9, I now have Ubuntu/Samba VMs transferring files to a Windows box at 15 MB/s (write to VM) and 27 MB/s (read from VM). I think that's good for an old Broadcom NIC and unsupported hardware.

Unfortunately I couldn't get hold of two ESXi-supported gigabit NICs - Intel 1000 GTs are all sold out in my city - so my tests with iSCSI were pretty useless. Using a single NIC in the ESXi box (shared between iSCSI and the VMs), I got a feeble 6 to 7 MB/s for both read and write.

It'd be awesome if anyone could point me to some benchmarks with whitebox ESXi and a whitebox (software) iSCSI target, but in the meantime I'll just go with local SATA storage.

Thanks very much for the help. :D

Mathboy2006
Contributor

I've finally been able to run some benchmarks using software iSCSI and NFS (unfortunately on different hardware from the tests above).

ESXi server: whitebox quad-core machine with 4GB RAM and dual gigabit NICs. Onboard ICH9 for local storage.

Storage: dual-core machine running Ubuntu 8.04 as the NFS/iSCSI host. Configurations tested: 4x500GB SATA II in RAID-5, 2x500GB SATA II in RAID-1, and a single 500GB SATA drive.

All tests were done by transferring files to and from a virtual machine running Samba under Ubuntu 8.04, with the other end being a native Windows Vista machine. I saw a lot of variation across repeated runs, so these numbers are only a very rough guide for my specific hardware. Switching between sync and async NFS exports didn't make much difference for me, even after removing the NFS shares from ESXi storage, rebooting both machines, and re-adding the shares.

NFSv4 to RAID-1

Read from VM: 30 to 45 MB/s

Write to VM: 11 to 15 MB/s

NFSv4 to single disk

Read from VM: 30 to 45 MB/s

Write to VM: 11 to 15 MB/s

iSCSI to RAID-5

Read from VM: 25 MB/s

Write to VM: 5 MB/s

iSCSI to single disk

Read from VM: 25 MB/s

Write to VM: 5 MB/s

Local storage on ESXi box

Read from VM: 44 MB/s

Write to VM: 25 to 40 MB/s

mauricev
Enthusiast

Aren't the numbers on the whitebox slow, like something is wrong?
