VMware Cloud Community
mauser_
Enthusiast

iSCSI Multipath with ESXi 5.1

Hello,
I have a lab with:
2 VMware ESXi 5.1 hosts
1 HP ProCurve 1810-24G --> link aggregation configured as a dynamic trunk to my QNAP
1 QNAP 259 Pro --> RAID 1 set of 1 TB WD hard disks, configured with link aggregation

I have configured my VMware environment according to this guide: http://www.vmware.com/files/pdf/techpaper/vmware-multipathing-configuration-software-iSCSI-port-bind...
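
For anyone else following along, the port binding part of that guide comes down to a few esxcli commands. This is only a rough sketch of what I ran; the adapter name (vmhba33) and VMkernel ports (vmk1/vmk2) are examples from my lab and may differ on your host:

# bind both iSCSI VMkernel ports to the software iSCSI adapter
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
# verify the bindings
esxcli iscsi networkportal list --adapter=vmhba33

Each vmk port sits on its own port group with exactly one active uplink, as described in the paper.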

When running a speed test in a VM I get the following result (see attachment).


Is this good performance, or could it be better with this hardware?
Will a faster QNAP (TS-469/669) result in better performance?

Is there a full guide available on how to configure iSCSI multipath in combination with QNAP storage?

I hope I can compare tests with some other lab users, or maybe do some TeamViewer sessions!

Thx

Razorblade12
Contributor

Hello mauser_,

I also have a setup with vSphere 5.1 and an MPIO iSCSI (2x 1Gb/s) connection to a NAS.

My transfer speeds are 105MB/s read and 104MB/s write (according to CrystalDiskMark) on my RAID 1 LUN.

On a second LUN - a single 2.5" 5,400 rpm disk - I get transfer speeds of around 30MB/s read/write.

So I can say that transfer speeds depend heavily on the number and kind of spindles in a LUN.

I don't think a faster QNAP will result in higher transfer speeds, but adding disks to your LUN and creating a RAID 5/6 LUN will.

mauser_
Enthusiast

Hi,

What do you think of my speed results? Read is much higher than write.

How did you configure your iSCSI multipath settings?

The path selection policy that works best for me is "Most Recently Used" instead of "Round Robin".

I don't know why... do you?
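
For reference, this is roughly how I check and switch the policy from the ESXi shell (a sketch only; naa.xxxxxxxx is a placeholder for the QNAP LUN's device ID):

# show the current policy and paths per device
esxcli storage nmp device list
# switch the LUN to Round Robin (use VMW_PSP_MRU for Most Recently Used)
esxcli storage nmp device set --device=naa.xxxxxxxx --psp=VMW_PSP_RR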

I am thinking about a QNAP TS-469 or TS-669 for better iSCSI performance.

Any recommendations?

Razorblade12
Contributor

Hey mauser_,

I think your transfer speeds are okay for your configuration. Write is usually slower than read.

I guess you have two 5,400 rpm 3.5" disks. Using two 7,200 rpm disks should increase transfer speeds.

My MPIO settings are nearly the same as described in your how-to, but I have two separate vSwitches, one for each physical NIC, with Round Robin.

You could also try Jumbo Frames with an MTU > 1500, provided your infrastructure supports them.
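
If you want to try it, something like the following should enable it on the host side (just a sketch; vSwitch1 and vmk1/vmk2 are example names, and the physical switch and the QNAP have to be set to MTU 9000 as well):

esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
esxcli network ip interface set --interface-name=vmk1 --mtu=9000
esxcli network ip interface set --interface-name=vmk2 --mtu=9000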

In my case MPIO did not help to increase transfer speeds - before using MPIO I already had around 100MB/s. That's okay, though, because I only have one LUN with two spindles, so there's not much to gain from MPIO in my case.

I can't say anything about your Round Robin/Most Recently Used issue, but I've heard that Round Robin is not the best choice in every case.

Also, I do not use LAGG with my NAS. I just set up a portal with two NICs/IP addresses.

As I said, increasing the number of spindles and LUNs will help increase your transfer speeds. It also depends on how the RAID is configured in your QNAP NAS (sector size, caching policy...).

mauser_
Enthusiast

Hi Razor,

I am using 7,200 rpm disks --> 2 x WD1002FBYS, 1 TB

What performance increase did you get with Jumbo Frames?

Why use two separate NICs instead of LAGG?

I am now running a test with one Samsung 256 GB SSD in my NAS as a performance check. What do you think the results will be?

Do you have the sector size and caching policy settings from your QNAP?

Thx

mauser_
Enthusiast

Hi Razor,

I have done some speed tests on the QNAP with one Samsung SSD:

LACP

SSD with LACP on switch and NAS, MPIO "Round Robin": read 99.76 MB/s, write 103.67 MB/s --> in the QNAP NIC interface, writes go over Ethernet 1 and reads over Ethernet 2

Same setup, but with "Most Recently Used": write 66.55 MB/s, read 108.29 MB/s, same NIC usage shown on the QNAP

Balance RR (no LACP)

Write 48.61 MB/s, read 88.03 MB/s, both NICs carry part of the traffic

2 NICs with different IP addresses, without LACP, Round Robin

Write 74.98 MB/s, read 91.29 MB/s

With "Most Recently Used"

Write 70.56 MB/s, read 108.55 MB/s

My conclusion:

- Not much higher performance with the SSD; the NICs seem to be the bottleneck. Will a QNAP with 4/5 disks perform any better?

- In my configuration "Most Recently Used" performs better than "Round Robin" most of the time. Only with LACP enabled does "Round Robin" perform better --> this is my best result

So the best performance seems to come from:

- SSD (or 4/5 hard disks in RAID 5/6)

- Round Robin (but also check Most Recently Used; see the tweak below)

- LACP enabled between the switch and the QNAP
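
One Round Robin tweak I still want to try: by default Round Robin only switches paths after 1000 I/Os, so lowering the IOPS limit is often suggested to spread a single VM's load over both NICs. Just a sketch, the device ID is a placeholder and I have not verified the gain on the QNAP yet:

esxcli storage nmp psp roundrobin deviceconfig set --device=naa.xxxxxxxx --type=iops --iops=1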

But... will a NAS like the TS-459/469 perform that much better for the money?

It seems around 100 MB/s is the maximum for one VM, not 200 MB/s. Maybe traffic from more VMs to different LUNs would push the total higher, but is that worth it in a lab environment?

Or... are there other possibilities to get the maximum (200 MB/s) for a single VM?

Anyone with suggestions?

Thx

Razorblade12
Contributor

Hi mauser_,

Your numbers look reasonable for an iSCSI NAS, and I think they're okay.

With Jumbo Frames I gained a few additional MB/s but had to disable them for the sake of reliability. In general, Jumbo Frames reduce CPU utilization because there are fewer packets and fewer interrupts for the CPU to handle, and the payload per frame is larger.

So if the CPU is the bottleneck, I would suggest using Jumbo Frames.

I do not use a QNAP NAS, just a different NAS with iSCSI enabled, so I can't say anything about the QNAP models you mentioned.

Also, I do not use LAGG because my two NICs are connected directly to my vSphere server - without any switch.

Keep in mind that a single gigabit NIC tops out at roughly 100-120MB/s (1Gb/s is about 125MB/s in theory)! So you need to make sure you run two different streams at the same time, each at around 100MB/s, to benefit from MPIO.
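
To check whether both paths really carry traffic at the same time, you can list the paths and watch the vmnics in esxtop (press 'n' for the network view). A sketch only, the device ID is a placeholder:

esxcli storage core path list --device=naa.xxxxxxxx
esxtop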

mauser_
Enthusiast

Hi Razor,

it seems an "upgrade" to another QNAP with more disks is not worth it, because the NIC is the bottleneck?

Another thing:

I did a test against an NFS share with the same VM:

Results:

Read: 99.94 MB/s

Write: 87.44 MB/s

It seems an NFS share can have better performance than iSCSI.

But... the NFS share's bandwidth is shared over one connection, while iSCSI with MPIO is not?

Razorblade12
Contributor

Hi mauser_,

NFS as primary storage for VMs is not recommended because of its write cache policy and overhead. NFS performance will also decrease with many small files.

As soon as you have more VMs, your NIC won't be the bottleneck anymore, because you will have more data streams to deal with. That's where iSCSI MPIO comes in handy, and where you need more disks or RAID volumes to serve the different data streams.
