VMware Cloud Community
scottmcgurk
Contributor

iSCSI SAN Performance

Hi all - quick question, just wanted to run a few things past you guys to make sure I'm not missing something obvious!

At present, we have 2 ESX servers connected over a gigabit bond to an iSCSI SAN.

The SAN is backed by ten 146GB 10k SAS drives (RAID 5) and an HP E500 controller with 256MB cache in a 50/50 read/write configuration.

We have roughly 50 XP VMs on this datastore, and unsurprisingly are experiencing slow performance under high I/O loads.

I was going to recommend upgrading to an HP array (still iSCSI; management will not buy into Fibre Channel) backed by 12 300GB 15k SAS disks (RAID 5) with an HP P600 controller (512MB cache). I'd also move the two servers from the software iSCSI initiator to one iSCSI HBA in each server.

My question is: would my recommended scenario above ease the load on the network, or would you recommend another solution?

Thanks in advance for any help!

Scott

3 Replies
jbruelasdgo
Virtuoso

It would not hurt to read this post:

http://virtualgeek.typepad.com/virtual_geek/2009/01/a-multivendor-post-to-help-our-mutual-iscsi-cust...

A RAID layout designed for performance also helps.

Tip: take a look at the ESX NIC teaming configuration.
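On a classic ESX host, a quick way to check the NIC teaming situation is from the service console. A minimal sketch (the vSwitch layout shown by the commands is your own; nothing here changes configuration):

```shell
# List the physical NICs with their link state and speed
esxcfg-nics -l

# List vSwitches with their uplinks and port groups; confirm the
# port group carrying iSCSI traffic has more than one active uplink
esxcfg-vswitch -l
```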

if the info is useful, do not forget to assign points accordingly

Jose B Ruelas

http://aservir.wordpress.com
mike_laspina
Champion

Hi,

I would say your bottleneck is most definitely NOT the 10-disk SAS array. You're looking at a maximum of roughly 75MB/s per ESX host using the software iSCSI stack. You should consider using multiple hardware iSCSI cards with a Round Robin (RR) multipathing policy to widen the pipe and offload the pressure on your ESX host resources.
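For reference, on ESX 4.x the path selection policy can be switched to Round Robin from the service console. A hedged sketch (the `naa.*` device identifier below is a placeholder; substitute your own LUN's ID from the list command):

```shell
# List storage devices and the path selection policy currently applied
esxcli nmp device list

# Switch a LUN to Round Robin so I/O is spread across all active paths
# (naa.60060160... is a placeholder for your LUN's identifier)
esxcli nmp device setpolicy --device naa.60060160xxxxxxxx --psp VMW_PSP_RR
```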

The array can deliver ~100MB/s write and ~300MB/s read, which is much more than a single ESX host can drive in this case.

http://blog.laspina.ca/

vExpert 2009
DwayneL
Enthusiast

Hello

I think your storage should be fine for 50 XP VMs. What is your MTU set to? Do you have MPIO working? Maybe it's a queue depth problem. We are an EqualLogic shop and we have 50 VMs running, but we do have 14 drives working.
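Checking the MTU is quick from the ESX service console. A sketch of enabling jumbo frames (`vSwitch1` is a placeholder for whichever vSwitch carries iSCSI; the physical switches and the array must also be configured for 9000-byte frames end to end):

```shell
# Show current vSwitch configuration; check the MTU column
esxcfg-vswitch -l

# Raise the MTU on the iSCSI vSwitch to enable jumbo frames
# (vSwitch1 is a placeholder for your iSCSI vSwitch)
esxcfg-vswitch -m 9000 vSwitch1

# The VMkernel port used for iSCSI needs a matching MTU; list them here
esxcfg-vmknic -l
```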

-Dwayne Lessner