VMware Cloud Community
ERN7
Contributor

How to boost Performance in iSCSI SAN

Folks,

I am currently testing the solution below, with the hardware and software details provided, and I am not getting the maximum throughput I believe I should.

I have tried everything and read all the manuals and recommendations that VMware offers, but still no luck...

Multipathing has helped, and I was able to gain about 150 MB/s for the overall pool, but a single VM only gains a fraction of it. I tried up to four paths, and there was no additional benefit beyond two!

Any suggestion would be highly appreciated.

==========================

iSCSI Target:

Dual quad-core Intel Xeon E5620 @ 2.4 GHz, 24 GB DDR3 RAM

7x SSD in RAID 5

RAID controller:

LSI MegaRAID SAS 9280-24i4e

BBU for RAID is installed.

iSCSI target software: Open-E DSS V6

HOST :

Dual Intel Xeon X5660 @ 2.8 GHz, 96 GB DDR3 RAM

Software iSCSI initiator is the built-in VMware one.

MTU is set to 9000 (jumbo frames) everywhere.

===========================
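On the MTU 9000 point: it is worth confirming that jumbo frames actually pass end to end, since a single mis-configured switch port can silently cap throughput. A minimal check from the ESXi shell (a sketch; 10.0.0.50 is a placeholder for the iSCSI target's IP):

```shell
# Payload = 9000-byte MTU - 20-byte IP header - 8-byte ICMP header = 8972.
# vmkping's -d flag sets "don't fragment", so the ping only succeeds if
# every hop on the path genuinely forwards jumbo frames.
payload=$((9000 - 20 - 8))
echo "on the ESXi host, run: vmkping -d -s $payload 10.0.0.50"
```

If the ping fails at 8972 bytes but works at the default size, something in the path is not passing jumbo frames.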

Regards

Edward

11 Replies
ERN7
Contributor

Folks,

We have just verified that our network is running at full capacity, so the network is not what is limiting our SAN performance. In fact, the resource pool always stays at around 800 MB/s for both reads and writes, even with multipathing!

I assume the VMkernel is the limiting factor, but I am not sure how to confirm or fix that.

Any suggestion is highly appreciated.

Regards

Edward

Josh26
Virtuoso

Hi,

Adding paths to a multipathing setup isn't expected to increase performance for a single VM - it's about sharing load across multiple LUNs.

If you're actively maxing out a single network connection then moving up to a faster network type is the only option to increase a single VM's performance.

That said, I'd be more interested in considering the SAN as a limiting factor - it doesn't look to be a professional unit.

ERN7
Contributor

Hi Josh26,

Thanks for your feedback and the details you provided.

FYI, we currently have a 10G network connection, and I was able to verify the bandwidth by installing iperf and measuring almost 98% of the 10G line rate!

I am not really sure the SAN can be considered the limiting factor; it may be, but it's hard to tell. I am still suspicious of the VMkernel for some reason.

Thanks.

Regards

Josh26
Virtuoso

Hi,

Getting line-rate throughput with iperf doesn't mean your network isn't a bottleneck.

For example, flow control can make a huge difference to SAN throughput without changing iperf results at all.

There's not a lot that can go wrong in the VMkernel that would slow down iSCSI, unless you come back to multipathing issues.

Specific multipathing settings can have a significant impact on throughput - I would ask what your recommended SATP and PSP are, and whether Round Robin IOPS settings are relevant, but that comes back to whether your SAN is actually a supported SAN.
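For reference, the current SATP and PSP for each device can be read with `esxcli storage nmp device list` (assuming ESXi 5.x esxcli syntax). A quick filter, shown here against a captured sample so it can be demonstrated anywhere; the naa ID is a placeholder:

```shell
# Sample of `esxcli storage nmp device list` output, captured inline.
# On the host you would pipe the real command into the same grep.
sample='naa.600144f0000000000000000000000001
   Storage Array Type: VMW_SATP_DEFAULT_AA
   Path Selection Policy: VMW_PSP_FIXED'
echo "$sample" | grep -E 'Storage Array Type:|Path Selection Policy:'
```

A result of VMW_PSP_FIXED would explain extra paths adding nothing: only VMW_PSP_RR (Round Robin) spreads I/O across paths.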

rickardnobel
Champion

Could you put some numbers on what kind of throughput you get for a single VM? What tool do you use to test this?

My VMware blog: www.rickardnobel.se
ERN7
Contributor

Thanks for your input.

Please check below.

Tests were done using Iometer for 5 minutes with a 60 second ramp. The NetFlow statistics are the average sustained rate from the switch port connected to the NAS.

1 Host, 1 VM, 256k read: ESXTOP for vmhba34 shows 725 MB/s read; NetFlow 6.1 Gbps

1 Host, 2 VMs, 256k read: ESXTOP for vmhba34 shows 875 MB/s read; NetFlow 7.3 Gbps

2 Hosts, 1 VM each, 256k read: ESXTOP for vmhba34 shows 520-580 MB/s read on both; NetFlow 9.3 Gbps
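As a sanity check on these figures, converting the ESXTOP MB/s readings to Gbps shows they line up with the NetFlow rates; the gap is protocol overhead, which NetFlow counts but ESXTOP does not:

```shell
# Gbps = MB/s x 8 bits/byte / 1000 (decimal units throughout).
# 1100 approximates the two-host case (2 x ~550 MB/s).
for mbs in 725 875 1100; do
  awk -v m="$mbs" 'BEGIN { printf "%d MB/s ~= %.1f Gbps\n", m, m*8/1000 }'
done
```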

Let me know please.

Regards

Edward

ERN7
Contributor

Thanks for your feedback again.

I am going to look into it and will come back with some details.

Thanks.

Regards

Edward

rickardnobel
Champion

ERN7 wrote:

1 Host, 1 VM, 256k read: ESXTOP for vmhba34 shows 725 MB/s read; NetFlow 6.1 Gbps

1 Host, 2 VMs, 256k read: ESXTOP for vmhba34 shows 875 MB/s read; NetFlow 7.3 Gbps

2 Hosts, 1 VM each, 256k read: ESXTOP for vmhba34 shows 520-580 MB/s read on both; NetFlow 9.3 Gbps

Even if numbers like 875 MB/s could in theory be even larger, I would say this is already extremely high throughput!

Are you having performance problems in your system or do you just want to optimize/tweak it to highest possible rate?

My VMware blog: www.rickardnobel.se
ERN7
Contributor

Hi,

Thanks again for your input,

I am currently running some tests changing the IOPS value for the Round Robin path selection policy, and will get back to you with the outcome; it's pretty tricky!

Everything I have at the moment is at the default settings and configuration.

Regards

Edward

ERN7
Contributor

Folks,

Thanks for all your great feedback and hints...

Today I changed the Round Robin IOPS limit from the default of 1,000 down to 9, and surprisingly I have achieved tremendous numbers!

(Of all the values I tried, 9 was the best.)

(i.e. Path Selection Policy Device Config: Policy=iops, iops=9, and ........)
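For reference, this setting can be applied per device from the host CLI; a sketch assuming ESXi 5.x esxcli syntax, with naa.xxxx as a placeholder for the LUN's device ID:

```shell
# Set the Round Robin policy to switch paths after every 9 I/Os:
esxcli storage nmp psp roundrobin deviceconfig set \
    --type=iops --iops=9 --device=naa.xxxx

# Confirm the change took effect:
esxcli storage nmp psp roundrobin deviceconfig get --device=naa.xxxx
```

This is a host-CLI configuration fragment and only runs on an ESXi host; the change applies per device, so repeat it for each LUN.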

Please see the numbers below; these are the results I measured afterwards:

1 Host, 2 VMs, 256k read on one and 256k write on the other: ESXTOP for vmhba34 shows 440 MB/s read, 560 MB/s write; NetFlow 8.8 Gbps

2 Hosts, 1 VM each, 256k read on one and 256k write on the other: ESXTOP for vmhba34 shows 660 MB/s read, 570 MB/s write; NetFlow 10.3 Gbps

NetFlow is the combined input/output sustained average.

I am now good to go with the above numbers and can take the project to the next level. That is awesome, almost unbelievable.

Thanks again.

Regards

Edward

joshuatownsend
Enthusiast

It might be worth looking at this: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=100259....

I have seen Delayed ACK cause issues on more arrays than those listed in this KB.
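For checking the current setting, ESXi 5.x exposes the iSCSI adapter's advanced parameters (Delayed ACK among them) through esxcli. A sketch with vmhba34 as a placeholder adapter name; the exact parameter key name is an assumption, so verify it against the KB:

```shell
# List the software iSCSI adapter's advanced parameters; look for the
# DelayedAck row in the output (key name assumed; confirm against the KB).
esxcli iscsi adapter param get --adapter=vmhba34
```

This is a host-CLI fragment and only runs on an ESXi host.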

If you found this or other information useful, please consider awarding points for "Correct" or "Helpful". Please visit http://vmtoday.com for News, Views and Virtualization How-To's. Follow me on Twitter - @joshuatownsend