VMware Cloud Community
shrcol
Contributor

Performance issues with iSCSI SAN

Afternoon all,

Firstly, let me say that I am fairly new to VMware VI3 and am on a steep learning curve!

We have recently implemented VI3 using two Intel-based (professionally custom-built) servers with a StoreVault S550 SAN for storage. I am aware that hardware compatibility is important; however, the setup was done on a fairly tight budget and the servers had already been ordered before their use was designated. The SAN is listed as VI compatible.

For the most part everything has been fine; however, we are finding that one of the virtual machines (a file server, probably the most utilized) is at times exceedingly slow to respond when opening documents from shared network drives. We have had some issues with other hosted file servers, but these have been minimal. The setup of the hosts and the SAN is fairly straightforward: Gigabit Ethernet (Cat 5) between the hosts and the SAN via a dedicated switch, which then links to the main LAN. The SAN is configured as an iSCSI target for the software initiators on the ESX hosts. Other, less disk-oriented VMs work fine and without issue.

Having looked at the utilization of the VM that's having the problems, it shows exceedingly high disk read times and average disk queue length a good percentage of the time, which tells me the host is communicating with the SAN at poor speeds. We have sanity-checked the configuration and the comms as far as we can, and are now thinking that switching to dedicated Fibre Channel might be the way to go to get better speeds. However, as I am still a relative newbie to the product, I don't want to recommend costly solutions without being fairly sure of their benefit! I am also more than aware that this is a fairly complex product, and I'm sure there are areas of tuning that could yield performance increases.
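
In case it helps anyone reproduce the symptom, below is a rough Python sketch I've been thinking of running inside the guest to put a number on the slow reads (assuming Python is available in the VM; the share path is just a placeholder, not our real one). It times small reads at scattered offsets in a file on a shared drive and prints the average and worst latency:

# Rough read-latency probe: times small reads at scattered offsets in a file
# so the OS cache doesn't hide the underlying storage latency.
import os
import sys
import time

PATH = sys.argv[1] if len(sys.argv) > 1 else r"\\fileserver\share\testfile.dat"  # placeholder path
BLOCK = 4096      # bytes per read
SAMPLES = 200     # number of reads to time

size = os.path.getsize(PATH)
latencies = []
f = open(PATH, "rb")
for i in range(SAMPLES):
    # Jump around the file rather than reading sequentially.
    offset = (i * 7919 * BLOCK) % max(size - BLOCK, 1)
    f.seek(offset)
    start = time.time()
    f.read(BLOCK)
    latencies.append((time.time() - start) * 1000.0)  # milliseconds
f.close()

latencies.sort()
print("reads: %d  avg: %.1f ms  median: %.1f ms  worst: %.1f ms" % (
    len(latencies),
    sum(latencies) / len(latencies),
    latencies[len(latencies) // 2],
    latencies[-1]))

If the worst-case numbers from inside the VM line up with the high queue lengths we're seeing, at least I'll know the problem sits below the guest rather than in the file sharing itself.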

Any thoughts most welcome - please say if you need further information.

Howard.

5 Replies
jrenton
Hot Shot

You will get better throughput if you use hardware-initiated iSCSI rather than the software initiator.

Purchase some iSCSI HBAs for your servers and then configure the LUNs using the hardware initiator.

jrenton
Hot Shot

Also, your vSwitch configuration could affect the performance of your software iSCSI.

Do you have a dedicated vSwitch for the VMkernel?

How many physical NICs are attached to this vSwitch?

shrcol
Contributor

Thanks for your quick response.

There is only one vSwitch for everything, with two NICs attached (currently one active and one standby), but all of this can be changed.

Howard.

Lightbulb
Virtuoso

Also check how the LUNs are provisioned on the SAN. Is it all one big RAID 5 array?

Improper LUN setup on the back end often gives rise to storage performance issues. You might want to create LUNs based on the performance demands of the VMs, e.g. one RAID 5 array for regular VMs and one RAID 1+0 array for I/O-intensive VMs.
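
To illustrate why the RAID level matters, here is a rough back-of-envelope Python sketch (illustrative numbers only, assuming roughly 120 IOPS per SATA spindle and the usual write penalties of 4 back-end I/Os per write for RAID 5 and 2 for RAID 1+0):

# Back-of-envelope front-end IOPS for a given spindle count and read/write mix.
# Assumptions: ~120 IOPS per 7.2k SATA spindle, write penalty of 4 for RAID 5
# and 2 for RAID 1+0. Numbers are illustrative, not measured.

SPINDLE_IOPS = 120.0

def effective_iops(spindles, read_fraction, write_penalty):
    # Front-end IOPS the array can sustain before the spindles saturate.
    raw = spindles * SPINDLE_IOPS
    write_fraction = 1.0 - read_fraction
    return raw / (read_fraction + write_fraction * write_penalty)

# Example: 8 spindles, 60% read / 40% write file-server style workload.
for name, penalty in (("RAID 5", 4), ("RAID 1+0", 2)):
    print("%-9s ~%.0f IOPS" % (name, effective_iops(8, 0.6, penalty)))

With a write-heavy file server the RAID 1+0 group comes out well ahead of the same spindles in RAID 5, which is why putting the I/O-intensive VMs on their own RAID group/LUN is often worth it.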

FC will increase your speed, but setting it up properly with redundancy is very costly.

Check backend storage design before committing to new hardware.

Also check out this post

shrcol
Contributor

To be honest, the whole solution was put in over a fairly short time period, so it's definitely worth looking at the SAN/LUN config again. The linked site makes interesting reading; I'm just having a trawl through it now. Thanks for your help.
