VMware Cloud Community
netKid
Contributor

iSCSI - ESXi and SAN - Quick and dirty topology for highest bandwidth

I am setting up a couple of Dell servers as ESXi hosts, connected to a Dell SAN for LAB purposes.

I'm new to VMware, but have a fair grip on networking. For now, what I want to accomplish is the best possible performance. There's only one switch anyway, so redundancy at the port level is just a bonus in this case.
ESXi hosts have 2 NICs that can be used for iSCSI traffic, so I'm thinking 2 Gbps links to the switch. The SAN will be configured with only one controller.
I've looked into how to set up multipathing, but I can't say that I fully understand what the best and simplest solution is in this case.
From my networking experience I immediately thought of LACP for port bundling, but is this a requirement?

🙂

simple topology:

[Attached image: simple_topology.PNG]

4 Replies
Josh26
Virtuoso

Firstly, LACP is only supported with the Enterprise Plus license, on a distributed switch. It's therefore out for a lot of users, especially any "quick and dirty".

NIC teaming is not as efficient as people want to think it is. Your one host talking to one SAN will only ever use one NIC unless you use multipathing.

There are plenty of multipathing guides out there easily googled. That's the solution you want.

netKid
Contributor

Thanks, Josh. I actually have the correct license for LACP, but I wanted to keep it simple and get the best performance, so I'll stick to multipathing.

hstagner
VMware Employee

There is not enough information here to tell you definitively what is the correct design for your environment. However, here are some general tips.

  • You can get better performance in most cases by enabling Jumbo Frames. However, they need to be enabled end-to-end: from the array to the physical switch to the vSwitch to the VMkernel port. Support for Jumbo Frames depends on the array and the physical switching infrastructure, so you'll have to test.
  • Multipathing (vs. teaming) with iSCSI requires multiple VMkernel ports. Each VMkernel port must be "claimed" (bound) to the iSCSI adapter.
  • Since you are using an Equallogic array, take a look at the Multipathing Extension Module from Dell. Here is a good place to start: http://www.dellstorage.com/WorkArea/DownloadAsset.aspx?id=3064
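To make the tips above concrete, here is a rough sketch of the relevant esxcli commands from the ESXi shell. The vSwitch, VMkernel, and adapter names (vSwitch1, vmk1, vmk2, vmhba33) are placeholders for illustration; list your own with the corresponding `list` commands before running anything.

```shell
# Set MTU 9000 on the iSCSI vSwitch and on each iSCSI VMkernel interface
# (Jumbo Frames must also be enabled on the physical switch and the array)
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
esxcli network ip interface set --interface-name=vmk1 --mtu=9000
esxcli network ip interface set --interface-name=vmk2 --mtu=9000

# Bind ("claim") each VMkernel port to the software iSCSI adapter
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

# Verify the port bindings
esxcli iscsi networkportal list --adapter=vmhba33
```

Each VMkernel port should map to exactly one physical NIC (override the teaming policy so one uplink is active and the other unused) so that the storage stack, not the network stack, chooses the path.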

I hope this helps.

Don't forget to mark this answer "correct" or "helpful" if you found it useful (you'll get points too).

Regards,

Harley Stagner

VCP3/4, VCAP-DCD4/5, VCDX3/4/5

Website: http://www.harleystagner.com Twitter: hstagner
HarryJohn
Contributor

Hi Netkid,

Some of the best practices have already been mentioned here. However, it is also worth using VMware's Round Robin multipathing policy rather than Fixed if you are not already licensed to use Dell's MEM module.
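For reference, switching a datastore's path selection policy to Round Robin can be done from the ESXi shell roughly like this. The device identifier below is a placeholder; list your own devices first.

```shell
# List devices and their current path selection policy
esxcli storage nmp device list

# Set Round Robin for a specific device (replace the naa. identifier)
esxcli storage nmp device set --device=naa.6090a0XXXXXXXXXX --psp=VMW_PSP_RR

# Optionally make Round Robin the default for the EqualLogic SATP,
# so new volumes pick it up automatically
esxcli storage nmp satp set --satp=VMW_SATP_EQL --default-psp=VMW_PSP_RR
```

The same policy change can also be made per-datastore in the vSphere Client under Manage Paths.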

From your diagram it looks like controller B on the EqualLogic is not used. Bear in mind that the controllers on the PS6100 are active/passive, so it is worth connecting it to the switch to protect you from a controller failure. There is no need to configure the second controller; it mirrors the first controller's configuration.

In case you are still looking for best practice ideas when working with EqualLogic PS6100 arrays, have a look at this guide on EqualLogic VMware iSCSI setup best practices. There are only 3 parts to the guide currently, but they should be the most relevant parts for you.

If you are considering adding a second switch in the future, there are some important aspects to consider, especially with respect to Inter-Switch Links (stacking/LAG), mentioned in the guide above.

Hope this helps,

Harry

VCP5-DCV
