VMware Cloud Community
Mohammad1982
Hot Shot

iSCSI Multipathing --interesting questions

Hi All,

Here is the scenario: I have an ESXi 4.1 environment with 4 ESXi servers and a vCenter server. I need to configure iSCSI, and I am using a NetApp box for this. I create a vSwitch, on which I create a VMkernel port and assign an IP that is in the same subnet as the NetApp box. I give 4 NICs to this vSwitch, all active, then I enable the software iSCSI initiator. I am able to communicate with the NetApp box. Now the question is: through which NIC am I communicating with the iSCSI box?

What happens if one of the vmnics of this vSwitch fails?

Can I configure link aggregation for this setup with only one VMkernel port, and will it load balance in that case?

If you found this information useful, please consider awarding points for "Correct" or "Helpful". Thanks!!! Regards, Mohammad Wasim
8 Replies
RR9
Enthusiast

Using link aggregation is one option.

You can configure link aggregation using these NICs on the physical switch. Check these KBs for more information:

http://kb.vmware.com/kb/1004048

http://kb.vmware.com/kb/1001938

AndreTheGiant
Immortal

For iSCSI I do not recommend using link aggregation. Instead, configure the network in the right way (as suggested by the storage vendor for vSphere) in order to use multipathing.

Which NIC is working? Usually one for each LUN... (so more LUNs are a good idea).

Andre

Andrew | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro
RR9
Enthusiast

Andre, just curious to know if there are disadvantages to using link aggregation.

vmroyale
Immortal

Hello.

Also make sure to check out NetApp's TR-3428 document.

Good Luck!

Brian Atkinson | vExpert | VMTN Moderator | Author of "VCP5-DCV VMware Certified Professional-Data Center Virtualization on vSphere 5.5 Study Guide: VCP-550" | @vmroyale | http://vmroyale.com
AndreTheGiant
Immortal

It's related to how iSCSI itself works.

It's better to have the fault handled at the application level than at the IP level.

You can lose a packet... can IP handle this? How?

At the application level you can control it in a better way.

Andre

Andrew | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro
Mohammad1982
Hot Shot

Hi Andre'

Thanks for your answer. I have some more questions as follows; kindly clarify.

So if I am using link aggregation, will it load balance? As per the statement:

"Which NIC is working? Usually one for each LUN... (so more LUNs are a good idea)."

If this is true, then is it better than binding the software iSCSI adapter to the vmnics? Why is it recommended to bind the adapter and configure multipathing?

If you found this information useful, please consider awarding points for "Correct" or "Helpful". Thanks!!! Regards, Mohammad Wasim
rickardnobel
Champion

Andrew M wrote:

You can lose a packet... can IP handle this? How?

IP (the protocol) cannot handle a packet loss, since that is not its responsibility, but TCP will detect and resend all lost packets.

My VMware blog: www.rickardnobel.se
pfuller
Contributor

Look at section 9.1 in NetApp's TR-3428, starting on page 42. Multipathing is better because you can have more connections to the iSCSI target, which means you can send more commands at one time to the disk subsystem. Also, since you can force a VMkernel port (vmk) to use a specific physical adapter, you can in theory utilize more bandwidth than just one link.
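For reference, the binding described above is done from the CLI on ESXi 4.x. A minimal sketch, assuming the software iSCSI adapter is vmhba33 and the VMkernel ports end up as vmk1/vmk2 (the adapter name, port group names, and IPs are examples, so substitute your own):

```shell
# Create one VMkernel port group per physical NIC on the iSCSI vSwitch
esxcfg-vswitch -A iSCSI-1 vSwitch1
esxcfg-vmknic -a -i 192.168.1.11 -n 255.255.255.0 iSCSI-1
esxcfg-vswitch -A iSCSI-2 vSwitch1
esxcfg-vmknic -a -i 192.168.1.12 -n 255.255.255.0 iSCSI-2

# In the vSphere Client, override NIC teaming on each port group so it has
# exactly one active vmnic and the rest are set to Unused.

# Bind each VMkernel NIC to the software iSCSI adapter
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33

# Verify the bindings
esxcli swiscsi nic list -d vmhba33
```

After a rescan, each LUN should show one path per bound vmk, and you can set the path selection policy (e.g. Round Robin) per LUN.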
