VMware Cloud Community
Ritmo2k
Enthusiast

iSCSI Throughput

I am trying to make sense of some older discussions on iSCSI throughput for VMFS datastores and how they apply to my situation.

If I understand correctly, there is a cap on the throughput you can get even with trunked interfaces. In my scenario I have a quad-port NIC teamed. Would my VMs be faster if they all boot from the SAN directly over the trunked interfaces, versus booting from a VMFS datastore shared over iSCSI?

Any opinions?

Thanks guys,

jlc

nick_couchman
Immortal

My ESX servers have 4 Gb FC interfaces to my SAN and GigE links between them. The SAN volume is a RAID 10 array. I have never had very good throughput on the VMFS datastore. I think it's a limitation of VMFS: it's designed as a clustered filesystem, so it has to maintain lock information across all of the hosts, which isn't very fast. It is perfectly adequate for running the VMs, but when I move files to the datastore, the speed is very, very slow.

Ritmo2k
Enthusiast

Nick,

I understood this was also the case, since you can effectively only add one VMkernel port.

I wonder if I would get better performance going the route I intended: booting from the SAN directly over iSCSI and not having any VMFS at all?

Thanks!

nick_couchman
Immortal

Yes, you would very likely get better performance this way; you'd just lose some of the benefits of VMFS, unless, of course, you kept a smaller VMFS volume for your VM config files, RDM mappings, and so on.

Dave_Mishchenko
Immortal

You could potentially see better performance if you used a software iSCSI initiator within the VM instead of the ESX iSCSI initiator. Have you considered a hardware iSCSI HBA instead? Also, with the software iSCSI initiator you'll by default make just one connection to your iSCSI SAN, and unless they have improved things lately, that limits you to link failover rather than aggregation. You can get around this in part by creating multiple iSCSI targets with different IP addresses.
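
On the target side, that workaround can be as simple as exporting separate LUNs under separate target names and having the ESX host discover each one through a different portal IP address on the storage box, so the sessions land on different physical NICs. A rough sketch using IET-style ietd.conf syntax (the IQNs and device paths are just placeholders):

    # /etc/ietd.conf -- two targets, each discovered via a different portal IP on the target server
    Target iqn.2008-04.com.example:vmstore1
        Lun 0 Path=/dev/sdb,Type=blockio
    Target iqn.2008-04.com.example:vmstore2
        Lun 0 Path=/dev/sdc,Type=blockio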

What sort of storage are you using and how many disk spindles do you have? Do you expect the drive system to be able to keep up with the full bandwidth of one or two NIC ports?

Ritmo2k
Enthusiast

Dave,

A lot of my VMs only use one disk since they don't host their own data, so I can't leverage a software initiator inside the guest. Do hardware iSCSI HBAs really make a difference? What should I use on the target side?

I am still deciding what to use for storage and have some good recommendations. We are an HP shop, so I will likely want to stick with that vendor. Can you elaborate on how multiple targets (with different IP addresses) help? Aside from segregating I/O from many VMs on one system, does this increase network throughput?

I will probably use RAID 10 for the VMs because of the large number of small I/O requests.

Thanks!

nick_couchman
Immortal

Yes, hardware HBAs really do make a difference. Hardware HBAs offload a lot of the iSCSI processing from the host CPU onto the card. This especially makes a difference inside VMs, where running a software initiator can eat CPU cycles that would otherwise be shared with other VMs. All that said, I still run software initiators inside my VMs pretty frequently :-).

As far as the target side, you can use many of the open source and/or freeware iSCSI target servers. Openfiler is a very popular one. Many of these run on standard server hardware, so you can purchase a server with 6-8 1TB drives, create a RAID array, and then use one of these pieces of software to provision the storage out into multiple volumes and share them on an iSCSI network.
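
On plain Linux hardware the provisioning step might look roughly like this (a hypothetical example with made-up device names: a software RAID 10 set carved into two logical volumes, each of which would then be exported as its own iSCSI target):

    # build a RAID 10 set from four drives (device names are examples)
    mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[b-e]
    # carve it into volumes, one per iSCSI target
    pvcreate /dev/md0
    vgcreate vg_iscsi /dev/md0
    lvcreate -L 500G -n vmstore1 vg_iscsi
    lvcreate -L 500G -n vmstore2 vg_iscsi

Openfiler does essentially the same thing from behind its web interface.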

Ritmo2k
Enthusiast

I have a fair bit of experience with IET, so I will use it.

I am still a bit uncertain how multiple targets with different IP addresses help. Would that involve routing connections over different physical NICs to gain performance?

Thanks!

chucks0
Enthusiast

It is important to remember that link bandwidth is seldom the limiting factor with normal workloads, which involve lots of random I/O. When working with iSCSI (or any storage array), the disk subsystem matters far more than the bandwidth of the GigE NICs.
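
Some rough back-of-the-envelope numbers (typical figures, not measurements from any particular array): a 15K RPM spindle handles somewhere around 150-180 random IOPS, and at 8 KB per I/O that works out to only about 1.2-1.5 MB/s per disk. Even eight spindles of purely random I/O would push on the order of 10 MB/s, while a single GigE link can carry roughly 110-120 MB/s. The spindles run out long before the wire does.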

Ritmo2k
Enthusiast

So given all this info: I see the NetApp S550 only does RAID 4 or RAID 6. I presume that would not be very good for an ESX server; is that correct?

Thanks!

nick_couchman
Immortal

That depends on how well the NetApp device buffers data and how good its sustained transfer rates are. If you're going to use VMFS, you'll probably be fine with RAID 4/5/6; VMFS doesn't seem to perform that well in my experience, even though I back it with RAID 10 and connect over 4 Gb FC. If you're using the NetApp via NFS, the RAID level may slow you down some if the NetApp can't sustain good speeds.

Ritmo2k
Enthusiast

Alright guys,

I think I have all the info I need! Is there some sort of benchmark I can run from the console to simply gauge throughput for general info? Can I run bonnie from the console?

Thanks for everything!

Dave_Mishchenko
Immortal

You'll want to run the tests from within a VM. See this thread for settings for IOMeter: http://communities.vmware.com/thread/73745?tstart=0&start=15.
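
If you just want a quick-and-dirty sequential number from a Linux guest, something like this also works (a rough sketch; run it in a directory on the virtual disk you actually want to measure, and keep in mind that sequential dd numbers say little about the random I/O a mix of VMs generates):

    # rough sequential write test from inside a Linux VM, bypassing the page cache
    dd if=/dev/zero of=ddtest bs=1M count=1024 oflag=direct
    # read the same file back
    dd if=ddtest of=/dev/null bs=1M iflag=direct
    rm ddtest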
