VMware Cloud Community
mcdouglas
Contributor

iSCSI performance

Hi,

I'm testing a brand new MSA2324i G2 with a DL360 G6 server. I'm having trouble reaching the gigabit maximum with my setup: I'm seeing a consistent 35 MB/s write and 48 MB/s read, nowhere near the limit of the gigabit link (128 MB/s).

I tried creating a vdisk consisting of 24 10k SAS drives in a RAID 50 (3x8) configuration, and I'm now testing with a vdisk of 5 10k SAS drives in RAID 0. I tried both MPIO (all 4 Ethernet ports connected through 2 separate switches, with 2 VMkernel ports) and, currently, a single Ethernet link to one of the ports. Every time I get the same performance figures.

Any idea what could be wrong?
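For reference, a rough back-of-the-envelope of what one gigabit link can realistically carry for iSCSI; the overhead fraction below is an assumption, not a measurement from this setup:

    # Ceiling of a single gigabit iSCSI link (rough sketch, assumed overhead).
    line_rate = 1_000_000_000 / 8 / 1_000_000   # 125 MB/s raw line rate
    overhead = 0.08                              # ~8% for Ethernet/IP/TCP/iSCSI headers (assumed)
    realistic = line_rate * (1 - overhead)
    print(f"raw line rate : {line_rate:.0f} MB/s")   # 125 MB/s
    print(f"realistic max : {realistic:.0f} MB/s")   # ~115 MB/s per link

So the 35-48 MB/s figures above leave a lot of headroom even on a single link.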

Dave_Mishchenko
Immortal

Your post has been moved to the Performance & VMmark forum. What sort of performance tool and test profile are you using?




Dave

VMware Communities User Moderator


RParker
Immortal

"I'm having trouble achieving the gigabit maximums with my setup, seems to be consistent 35MB/s write and 48MB/s read performance, nowhere near the limits of the gigabit link (128MB)."

A network is ONLY as fast as the SLOWEST link. NIC cards are seldom (if ever) the bottleneck. Disks, switches, the type of application, the type of data transmitted, the protocol used, and the backplane of the machines all play a part.

Disk I/O is usually the single biggest limiting factor on ANY network, including the SAN, so the source AND the target must be able to keep up with the speed of the NIC. I suspect your disk RAID / controller cannot move bits fast enough to fill the NIC bandwidth.

You would need 12 or 14 REALLY fast drives in a good RAID level (RAID 10 or RAID 0) to keep up... and the same on the OTHER end (the target) as well. So you have to consider ALL the points along the network, not JUST the NIC.
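To put rough numbers on that, here is a small sketch; the per-spindle sequential figure is an assumed ballpark for 10k SAS drives, not a measured value:

    # Rough disk-side vs. network-side sequential throughput (all figures assumed).
    PER_SPINDLE_MB_S = 70        # assumed sequential MB/s for one 10k SAS drive
    GIGABIT_LINK_MB_S = 125      # 1 Gbit/s = 125 MB/s before protocol overhead

    for spindles in (5, 10, 14, 24):
        disk_side = spindles * PER_SPINDLE_MB_S
        links = disk_side / GIGABIT_LINK_MB_S
        print(f"{spindles:2d} spindles ~ {disk_side:4d} MB/s sequential "
              f"(could fill about {links:.1f} gigabit links)")

By that rough estimate, even a handful of 10k spindles can keep a single gigabit link busy on sequential I/O.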

mcdouglas
Contributor

Well, as you can see in my first post, I have 24 10k SAS disks in this enclosure, so I guess that fulfills the disk requirements. The protocol is iSCSI and the servers are powerful G6 servers from HP.

plauterbach
Contributor

How are you driving the workload? What benchmark tools are you using? Please share other details as well (thread count, block size, etc.).

mcdouglas
Contributor

I made some progress with testing; however, I'm still not satisfied.

I created a RAID 0 array (10 SAS drives) and a RAID 50 array (14 SAS drives). Both vdisks are owned by controller A on the MSA. I created one volume on each vdisk and added them as datastores to the ESX server.

I created one hard disk in each datastore and attached them to a virtual machine running Windows Server 2008 R2.

Also, in ESXi I enabled multipathing by adding 2 VMkernel ports on separate IP subnets and binding the correct vmnic to each VMkernel port, and I set the path selection policy to Round Robin. The vmnics are connected directly (no switch) to 2 of the MSA's ports, and those ports have separate IPs in the corresponding subnets. I'm not using controller B and its ports right now.
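For context, on ESX/ESXi 4.x that kind of software-iSCSI port binding and path policy is usually set from the command line along these lines; the vmhba and vmk names and the NAA device ID below are placeholders, not the actual values from this host:

    # Bind each iSCSI VMkernel port to the software iSCSI adapter
    # (vmhba33, vmk1 and vmk2 are placeholders for this host).
    esxcli swiscsi nic add -n vmk1 -d vmhba33
    esxcli swiscsi nic add -n vmk2 -d vmhba33
    esxcli swiscsi nic list -d vmhba33          # verify both vmknics are bound

    # Set the path selection policy for the MSA LUN to Round Robin
    # (replace naa.xxxxxxxx with the real device identifier).
    esxcli nmp device setpolicy --device naa.xxxxxxxx --psp VMW_PSP_RR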

I'm now testing with Iometer: 256 KB request size, 100% sequential writes, with 2 worker threads (one for each disk).

I'm seeing 50-55 MB/s on both worker threads, so that's around 100-110 MB/s for the VM. However, with load balancing across the 2 gigabit links it should be much more than this (double, I think). It can't be a disk spindle problem, because both the RAID 50 and the RAID 0 arrays produce the same MB/s values. I also tested reads, and those are about the same (60 MB/s).

I can see the load balancing working, because both vmnics are generating traffic; however, it looks kind of strange, not continuous. See the attached picture.
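One commonly suggested tweak for exactly this bursty, alternating pattern is lowering the Round Robin IOPS limit, which by default only switches paths every 1000 commands. This is a sketch for ESX/ESXi 4.x with a placeholder device ID, and is worth testing rather than treating as a guaranteed fix:

    # Switch paths after every command instead of every 1000 commands.
    # naa.xxxxxxxx is a placeholder; use the MSA LUN's real identifier.
    esxcli nmp roundrobin setconfig --device naa.xxxxxxxx --iops 1 --type iops
    esxcli nmp roundrobin getconfig --device naa.xxxxxxxx   # verify the setting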

Any ideas? Is it an ESX configuration problem or something wrong with the MSA? (Though there aren't too many options to mess up.)
