VMware Cloud Community
Useless1
Contributor

iSCSI throughput... Theoretical Performance Question

Hi,

Before everyone switches off... let me just say I have read up and have an idea of what I am talking about; I'm looking for clarification and guidance... let me start with what I have and what I know about it!

We have an EVA which is running nicely... it's running 4Gb FC, which, as I only have 4 disk shelves, is probably overkill, in the sense that I could not get the 800 MB/s (full duplex) that the fibre connections offer out of the array because of the lack of spindles within it (if that makes sense)...

Anyway, after a company merger it looks like we are going to pick up a VMware environment that a consultant built, which has an iSCSI SAN (a Dell box, I believe)...

Now what I am wondering is the following...

This box has 6 NIC connections on the back, however I believe only 3 are active at any one time... They have been linked at 1Gbps to a couple of switches and back to the ESX hosts... These are running ESX 3.5 with the software initiator...

Looking at this setup it appears that:

Because they are on 3.5 currently, the software initiator only supports one active path to a LUN at any one time (this changed in vSphere)

And because they have 3 active NICs, this gives around 375 MB/s theoretical throughput (based on 125 MB/s per link multiplied by 3)... however I have read this link http://blog.fosketts.net/2009/01/26/essential-vmware-esx-iscsi/ which mentions:

■ The most common configuration (ESX software iSCSI) is limited to about 160 MB/s per iSCSI target

Is this full duplex or something?
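For reference, here is the back-of-envelope maths behind my 375 MB/s figure, just so you can see how I got there (the protocol overhead percentage is purely my assumption, not something from the article):

```python
# Back-of-envelope link throughput, one direction only.
# Assumption (mine, not from the article): roughly 12% of the raw
# line rate is lost to Ethernet/IP/TCP/iSCSI framing overhead.

GBIT = 1_000_000_000            # 1 Gbps link, in bits per second
RAW_MBPS = GBIT / 8 / 1e6       # raw line rate in MB/s -> 125.0

overhead = 0.12                 # assumed protocol overhead
effective = RAW_MBPS * (1 - overhead)   # ~110 MB/s, one direction

nics = 3
aggregate = nics * RAW_MBPS     # 375 MB/s raw across all links

print(f"raw per link:       {RAW_MBPS:.0f} MB/s")
print(f"effective per link: {effective:.0f} MB/s (one direction)")
print(f"raw aggregate:      {aggregate:.0f} MB/s across {nics} NICs")
# But on ESX 3.5 with the software initiator, a single LUN only ever
# uses one active path, so one LUN tops out around the per-link
# figure, not the aggregate.
```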

Now, they have created multiple LUNs on this device, which I believe was done in an attempt to spread the load a bit, as they can then have multiple paths... as in this statement...

■ Adding multiple iSCSI targets adds performance across the board, but configurations vary by array

But how does this add performance? (Because you can control the paths and split them over your NICs?)

I also take it this lower throughput than fibre (I am talking 1Gb iSCSI here) is why the arrays tend to have SATA disks in them, as there is no point putting in blindingly quick disks if the controllers can't get the data to the ESX hosts at that speed...?

To come down to the nuts and bolts, I guess the question I am asking is: is there a rough guide (obviously dependent on the VMs, I guess) to how many VMs can run on a 1Gb iSCSI infrastructure? I know vSphere changes the iSCSI options a bit and lets you have multiple active paths to a target, but surely the limiting factor is always going to be how many entry points you have in, which is going to be determined by the number of NICs in the iSCSI box?

Andy_Banta
Hot Shot

Anyway, after a company merger it looks like we are going to pick up a VMware environment that a consultant built, which has an iSCSI SAN (a Dell box, I believe)...

Do you have the model? From what you've described later, it sounds like an EqualLogic PS.

■ The most common configuration (ESX software iSCSI) is limited to about 160 MB/s per iSCSI target

Is this full duplex or something?

Yes. The best in one direction will be around 110-120 MB/s, but if you have plenty of read and write traffic, the links will fill in both directions.

■ Adding multiple iSCSI targets adds performance across the board, but configurations vary by array

But how does this add performance, (Because you can control the paths and split them over your NICs??)

It adds performance because you'll establish one session for each target. These sessions might get spread across multiple host ports and are likely to be spread across multiple storage ports.
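To illustrate the idea, here's a toy sketch (not how the initiator actually assigns paths; the real placement depends on the initiator, the pathing policy and the array, and the NIC and target names below are hypothetical):

```python
# Toy illustration only: one iSCSI session per target, dealt out
# round-robin across the available host NICs. The real assignment is
# up to the initiator and array, but the effect is similar -- more
# targets means more sessions, and more sessions can use more links.

host_nics = ["vmnic2", "vmnic3", "vmnic4"]             # hypothetical names
targets = [f"iqn.example:lun{i}" for i in range(6)]    # hypothetical IQNs

sessions = {t: host_nics[i % len(host_nics)] for i, t in enumerate(targets)}

for target, nic in sessions.items():
    print(f"{target} -> session on {nic}")
# With a single target you would have a single session on a single
# NIC; six targets here spread their sessions over all three NICs.
```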

I also take it this lower throughput than fibre (I am talking 1Gb iSCSI here) is why the arrays tend to have SATA disks in them, as there is no point putting in blindingly quick disks if the controllers can't get the data to the ESX hosts at that speed...?

That's down to the storage box implementation. You'll find that many iSCSI storage systems use SAS, FC or SSD drives as well. Throughput can be increased significantly using multiple ports (and vSphere 4), and PS Series boxes can come close to saturating their links.

I guess the question I am asking is, is there a rough guide (obviously dependent on the VMs, I guess) to how many VMs can run on a 1Gb iSCSI infrastructure?

This largely depends on what applications you're running. Do you have I/O-intensive workloads? Chad's article that you reference has quite a bit of information on workloads, and his virtualgeek blog regularly discusses capacity planning.
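If it helps, here's a very rough back-of-envelope for a ceiling (the per-VM and array figures below are purely assumed placeholders for illustration; measure your real workloads before trusting any of it):

```python
# Very rough ceiling estimate only. The per-VM and array figures are
# assumptions for illustration, not measurements -- real workloads
# vary enormously (databases vs. file servers vs. mostly-idle VMs).

link_mb_s = 110          # assumed usable throughput of one 1 Gb link
active_links = 3         # links the storage can actually drive

avg_vm_mb_s = 5          # assumed average steady-state MB/s per VM
avg_vm_iops = 40         # assumed average IOPS per VM
array_iops = 1500        # assumed IOPS the spindles can deliver

ceiling_by_bandwidth = (link_mb_s * active_links) / avg_vm_mb_s
ceiling_by_iops = array_iops / avg_vm_iops

print(f"bandwidth ceiling: ~{ceiling_by_bandwidth:.0f} VMs")
print(f"IOPS ceiling:      ~{ceiling_by_iops:.0f} VMs")
# Whichever ceiling is lower is the one that bites first; for most
# mixed workloads it is IOPS/latency, not link bandwidth.
```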

Andy

J1mbo
Virtuoso

I also take it this lower throughput than fibre (I am talking 1Gb iSCSI here) is why the arrays tend to have SATA disks in them, as there is no point putting in blindingly quick disks if the controllers can't get the data to the ESX hosts at that speed...?

Just a price point thing. iSCSI devices are available with 15k SAS drives.

The selection depends on the workload, i.e. IOPS vs sequential throughput. For a database, IOPS and latency are likely to be much more important than sequential throughput, so fourteen 15k drives might not saturate a 1Gbps link on a normal workload, whereas a sequential-read app (video editing, maybe) might easily do just that with only two SATA drives.
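To put rough numbers on that (the per-drive figures here are typical ballpark assumptions, not specs for any particular model):

```python
# Ballpark arithmetic behind the example above. Per-drive figures are
# assumed typical values, not specs for any particular drive.

link_mb_s = 110                  # usable 1 Gbps link, one direction

# Random database-style workload: throughput = IOPS x I/O size
drives_15k = 14
iops_per_15k = 180               # assumed random IOPS per 15k drive
io_size_kb = 8                   # assumed small-block I/O size
random_mb_s = drives_15k * iops_per_15k * io_size_kb / 1024
print(f"14 x 15k drives, random 8 KB: ~{random_mb_s:.0f} MB/s "
      f"({'saturates' if random_mb_s >= link_mb_s else 'nowhere near'} the link)")

# Sequential streaming workload
drives_sata = 2
seq_mb_s_per_sata = 80           # assumed sequential MB/s per SATA drive
seq_mb_s = drives_sata * seq_mb_s_per_sata
print(f"2 x SATA drives, sequential:  ~{seq_mb_s:.0f} MB/s "
      f"({'saturates' if seq_mb_s >= link_mb_s else 'nowhere near'} the link)")
```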
