VMware Cloud Community
madware
Contributor

iSCSI binding, teaming, etc?

I'm new to VMware and virtual architecture, and I'm wondering whether there is more optimization I can do on the network between my remote storage (where the virtual machines reside) and my ESXi hosts.

I'm also not sure whether there is more I can do with teaming on the vCenter Server side of things.

I am aware of jumbo frames but was going to tackle that last after the teaming and binding were finalized. Here is what I have so far.

NAS STORAGE (SYNOLOGY DS1513+)

Bond 1 - LAN 1,2,3

10.10.10.175

3000 Mbps, Full Duplex, MTU 1500

connected to

MANAGED SWITCH (D-LINK DGS-1500-20)

NAS Bond 1 -> LAN 1,2,3 (LACP Group 1)

ESXi 5.1 Host -> LAN 11,12,13 (LACP Group 2)

ESXi 5.1 Host -> LAN 14,15,16 (LACP Group 3)

connected to

ESXi 5.1 HOST #1

LACP Group 2 (from managed switch) -> vmnic3, vmnic4, vmnic5

10.10.10.184

MTU 1500, Route based on IP Hash, vmnic3,4,5 all active adapters, everything else factory default - set on both the vSwitch and iSCSI port

ESXi 5.1 HOST #2

LACP Group 3 (from managed switch) -> vmnic3, vmnic4, vmnic5

10.10.10.194

MTU 1500, Route based on IP hash, vmnic3,4,5 all active adapters, everything else factory default - set on both the vSwitch and iSCSI port

Below is a screenshot from HOST #1. The left pane shows the vSwitch and the NAS iSCSI datastore port; as noted above, the settings were applied to both the vSwitch and the port.


Host 1 layout.png

10 Replies
vfk
Expert

You should move this post to the VMware ESXi 5 forum; it is more active and you will get a quicker response...

--- If you found this or any other answer helpful, please consider the use of the Helpful or Correct buttons to award points. vfk Systems Manager / Technical Architect VCP5-DCV, VCAP5-DCA, vExpert, ITILv3, CCNA, MCP
madware
Contributor

Okay thanks!

JPM300
Commander

Hey madware,

Well, for starters, I don't believe iSCSI can actually do LACP, or if it can it's rare and I haven't seen it. iSCSI for the most part uses MPIO (Multipath I/O) to load balance the traffic, much like Fibre Channel.

Also, I noticed you only have one NIC for your VM Network and Management on the ESXi host, so if you ever lose this NIC you will not only lose management of the host but all your VMs will drop off the network as well.

With that said, I would set up your iSCSI like this:

vSwitch 0

VMNetwork        - vmnic0  - Active Adapters vmnic0 and vmnic3

Management      - vmnic3  - Active adapter vmnic3, standby vmnic0

vSwitch1

NAS iSCSI1 Datastore - 10.10.10.184 - vmnic4 - Active adapter vmnic4, unused adapter vmnic5
NAS iSCSI2 Datastore - 10.10.10.185 - vmnic5 - Active adapter vmnic5, unused adapter vmnic4

Go into the iSCSI Software Initiator and bind vmnic4 and vmnic5 to iSCSI. Then, once you get your LUNs/datastores attached, set your multipathing to Round Robin or to whatever your SAN recommends.
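
If you prefer to do that part from the command line instead of the vSphere Client, here is a rough esxcli sketch for ESXi 5.x. The adapter name vmhba33, the VMkernel ports vmk1/vmk2, and the naa. device ID are just placeholders; check your own names first.

# find the software iSCSI adapter name (often vmhba3x)
esxcli iscsi adapter list

# bind the two iSCSI VMkernel ports to the software iSCSI adapter
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2

# after a rescan, set Round Robin on each datastore device (repeat per LUN)
esxcli storage nmp device set --device naa.xxxxxxxx --psp VMW_PSP_RR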

However, since you have a bond on your Synology NAS, you will need to look at its configuration / setup recommendations and see how it wants the iSCSI NICs / bonds to work.

However, if you were to use NFS, your configuration is fine, other than the single point of failure with the one NIC you have in vSwitch0.

vervoortjurgen
Hot Shot

One more comment for your iSCSI:

make sure you use MTU 9000

set it on vSwitch1 and on your Synology teaming and the physical switches
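
For reference, the ESXi side of that can also be done with esxcli; a rough sketch, assuming the iSCSI vSwitch is vSwitch1 and the iSCSI VMkernel ports are vmk1 and vmk2 (substitute your own names). The Synology bond and the switch ports need jumbo frames enabled as well, end to end.

# raise the MTU on the iSCSI vSwitch and on each iSCSI VMkernel port
esxcli network vswitch standard set --vswitch-name vSwitch1 --mtu 9000
esxcli network ip interface set --interface-name vmk1 --mtu 9000
esxcli network ip interface set --interface-name vmk2 --mtu 9000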

kind regards Vervoort Jurgen VCP6-DCV, VCP-cloud http://www.vdssystems.be
madware
Contributor

Thanks for the feedback guys, will definitely give your suggestions a go!

JPM300
Commander

If you have any other questions, madware, let us know.


Also, if you have all the NICs on your Synology in an LACP trunk/EtherChannel, you will probably have to break two NICs out of the LACP group and just give them their own IP addresses to do the iSCSI setup / MPIO properly.

Just figured I would mention this.

Cheers

King_Robert
Hot Shot

iSCSI multipathing

In order to be able to use iSCSI multipathing in vSphere, we need to create two VMkernel ports, bind them to two different uplinks, and attach them to the software iSCSI HBA.
My ESXi host has 4 NICs. Two are assigned to vSwitch0, which has the Management VM port group and three VMkernel ports: one for management and two for iSCSI. The following picture shows vSwitch0 in the networking tab of the vSphere Client:

 

The management traffic is untagged; iSCSI traffic is on VLAN 1000. As I also wanted to use jumbo frames (yes, ESXi supports jumbo frames, despite official documentation claiming otherwise for a long time), I had to create the VMkernel ports from the CLI. The binding of the iSCSI VMkernel ports to the software iSCSI HBA must also be done from the CLI. ESXi does not have a service console, therefore the first step is to install vMA (vSphere Management Assistant), which replaces the service console.

vSphere Management Assistant (vMA)

The VMware approach to a service-console-less hypervisor is based on the following rationale. If we have many ESX hosts in a datacenter, each has its own service console, which needs to be maintained and patched, and which consumes host resources. The patching often requires host restarts, which means we have to vMotion workloads or accept downtime. vMA basically offers the same functionality as the service console, is detached from the host (in fact, it can run virtual or physical), and can control many hosts. vMA comes as a 560 MB OVF package that can be downloaded from the VMware website. I deployed it in VMware Workstation running on my laptop. It is a Red Hat Enterprise Linux 5 VM which takes about 2.5 GB of hard drive space. The setup is quite straightforward, with network and password questions.

There are various ways to connect vMA to an ESXi host. I decided to use vSphere FastPass. First we define the connection, and then we can initialize it anytime with one command.

sudo vifp addserver esxi.fojta.com --username root --password <password>
vifpinit esxi.fojta.com 
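
With the FastPass session initialized, the VMkernel port creation and binding can be run against the host. A rough sketch only: the IP address, netmask, port group name and adapter name below are illustrative, the binding syntax shown is the ESXi 5.x one, and option spellings can differ slightly between vicfg-vmknic and esxcfg-vmknic versions.

# create an iSCSI VMkernel port with jumbo frames on port group iSCSI1
vicfg-vmknic -a -i 10.10.10.50 -n 255.255.255.0 -m 9000 iSCSI1

# bind the new VMkernel port (e.g. vmk1) to the software iSCSI adapter
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1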

madware
Contributor

First of all, thanks for all the feedback. Two weeks ago I had never heard of vmnics, VMkernel ports, iSCSI binding, etc. I'm getting a huge crash course on all of this, and the only help I have is Google and you guys.

Anyway, here is what I have now. As well, all the MTU settings have been set to 9000 across the board.

vSwitch0 screenshot

Untitled1.png

vSwitch1 screenshot (I didn't show all the VMkernel port properties, but each has its own vmnic with the other two unused, following suit with the one I am showing)

Untitled2.png

Under Storage Adapters and my iSCSI software adapter, my connected targets and paths have increased a LOT, with Round Robin selected.

Untitled3.png

Untitled4.png
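
For anyone who would rather verify this from the command line than from the screenshots, these are standard ESXi 5.x esxcli commands (no assumptions beyond the software iSCSI adapter being configured):

# list devices with their current path selection policy (should show VMW_PSP_RR)
esxcli storage nmp device list

# list every path so the per-LUN path count can be confirmed
esxcli storage core path list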

JPM300
Commander

That all looks good; there are only two things you could change if you want:

1.)  vSwitch0: I would put all the NICs in active/active so you can round-robin with originating port ID to utilize the other NICs more. Or you can have the other port groups use the other NIC(s) as active with one or two of the others on standby, kind of like what you did with the iSCSI, except instead of putting the other two NICs in unused, put them in standby (see the esxcli sketch at the end of this post).

example:

vSwitch0

Management - Active VMNIC0 - Standby VMNIC1,VMNIC2

VM Network - Active VMNIC1, VMNIC2 - Standby VMNIC0

2.)  The way you have iSCSI set up is perfect; however, there is a bit of a taboo subject on whether or not you should bind your iSCSI NICs to iSCSI if they are on separate networks like you have them. VMware's stance is: if your iSCSI is on separate networks, don't bind the vmk's to iSCSI; if they are on the same network (i.e., all three iSCSI VMkernel ports are on 192.168.2.x), then bind them so VMware knows how to multipath it. However, many vendors' installation documentation tells you to put the iSCSI on different networks but still bind it, so?? From my testing, the only difference between non-bound and bound when on separate networks is the speed at which a rescan happens.  Source: VMware KB: Considerations for using software iSCSI port binding in ESX/ESXi

With that said, I would say your setup looks great! I would leave your iSCSI the way it is unless Synology states otherwise, but your pathing all looks good: you have 11 LUNs with 33 paths, which means each LUN is getting 3 paths for MPIO and failover. :)

Great job!
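
As referenced in suggestion 1, the active/standby order can also be set with esxcli; a rough sketch, assuming the port groups are literally named "Management Network" and "VM Network" (substitute your actual port group and vmnic names):

# Management: vmnic0 active, vmnic1 and vmnic2 on standby
esxcli network vswitch standard portgroup policy failover set --portgroup-name "Management Network" --active-uplinks vmnic0 --standby-uplinks vmnic1,vmnic2

# VM Network: vmnic1 and vmnic2 active, vmnic0 on standby
esxcli network vswitch standard portgroup policy failover set --portgroup-name "VM Network" --active-uplinks vmnic1,vmnic2 --standby-uplinks vmnic0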

fpineau
Contributor


I JUST wrote a step-by-step blog about doing this exact thing with a 2-host infrastructure and a Synology iSCSI NAS if you're interested.  You can find it here:

How to set up VMware ESXi, a Synology iSCSI NAS, and Active/Active MPIO | Frank's Tech Supp...
