MartinPasquier
Contributor

iSCSI network configuration


Hi everyone,

I currently have two ESXi 5.1 hosts configured with an iSCSI SAN, and they are working fine. The current network configuration is as follows:

MD3200i: 4 of 8 ports configured across the two controllers; two ports on one subnet (VLAN x) and two on another subnet (VLAN y).

ESXi hosts: two separate vSwitches on each host, each with one vmnic and one VMkernel port.

Everything is running fine, with multipathing configured.

Now I want to improve storage performance by using the last four ports on the MD3200i and two more NICs per host.

My question is: should I create a new VMkernel port for each new NIC, or can I just add the NICs to my existing vSwitches?

Maybe I've misunderstood the documentation I've read, but doesn't port binding only improve HA rather than performance?

[Attachment: iSCSI-Config.png]

Thanks in advance

Martin

ClintColding
Enthusiast

On the ESXi side, I would simply add the NICs to your existing vSwitches, with a new VMkernel port for each. I assume you have a dual-controller SAN with four ports per controller. Just make sure you put each newly added NIC on the correct subnet for its controller, and everything will be fine.
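As a rough sketch of those steps from the CLI on ESXi 5.x, assuming a hypothetical vSwitch name (vSwitch1), uplink (vmnic3), port group (iSCSI-3), VMkernel interface (vmk3), and subnet addressing (adjust all of these to your environment):

```shell
# Attach the new physical NIC to the existing vSwitch (names are assumptions)
esxcli network vswitch standard uplink add --vswitch-name vSwitch1 --uplink-name vmnic3

# Create a dedicated port group and VMkernel port for the new path
esxcli network vswitch standard portgroup add --vswitch-name vSwitch1 --portgroup-name iSCSI-3
esxcli network ip interface add --interface-name vmk3 --portgroup-name iSCSI-3

# Give the new VMkernel port an address on the controller's subnet (example IP)
esxcli network ip interface ipv4 set --interface-name vmk3 \
    --ipv4 192.168.130.21 --netmask 255.255.255.0 --type static
```

The same pattern repeats for each additional NIC/VMkernel pair, one per storage subnet.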

MartinPasquier
Contributor

Hi ClintColding,

Thanks for your answer.

So I simply add NICs, create new VMkernel ports, and configure new IP addresses on the two controllers of my storage array? Will the ESXi hosts then automatically make use of the additional bandwidth?

Don't I have to configure port binding?

a_p_
Leadership

Unless something has changed lately, you need a one-to-one relationship between VMkernel port groups and vmnics. It doesn't matter whether you configure this in a single vSwitch, setting the vmnics as Active/Unused, or by using one vSwitch per VMkernel port group.

André
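A minimal sketch of that one-to-one mapping and the iSCSI port binding it enables, using hypothetical names throughout (port group iSCSI-3, uplink vmnic3, VMkernel port vmk3, software iSCSI adapter vmhba33):

```shell
# Override teaming so this port group uses exactly one active uplink;
# all other uplinks on the vSwitch become Unused for this port group
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name iSCSI-3 --active-uplinks vmnic3

# Bind the VMkernel port to the software iSCSI adapter (adapter name varies per host)
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk3

# List the bound ports to confirm
esxcli iscsi networkportal list --adapter vmhba33
```

The Active/Unused override is what makes the binding valid; a VMkernel port with more than one active uplink cannot be bound to the iSCSI adapter.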

MartinPasquier
Contributor

Hi a.p,

Thanks for your reply. OK, I've read a lot of documentation today, maybe too much I think ;-)

Here is what I've now configured on each host:

One vSwitch with four NICs and four VMkernel ports.

I bound the VMkernel ports to my iSCSI storage adapter and overrode the NIC teaming so that each port group has one NIC Active and the others Unused.

I also changed the path selection policy to Round Robin.
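For reference, a sketch of that last step from the CLI; the device identifier below is a placeholder, not a real one (look yours up with the list command first):

```shell
# Show each device and which path selection policy it currently uses
esxcli storage nmp device list

# Set the path selection policy to Round Robin for one device (ID is a placeholder)
esxcli storage nmp device set --device naa.60012345 --psp VMW_PSP_RR
```

This has to be repeated per device (or the default PSP changed for the array's SATP) on each host.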

Is there anything else I should know about this?

Thank you

Martin

ClintColding
Enthusiast

Near the end of this doc, it shows the best practices for using single or multiple vSwitches with multiple subnets: VMware vSphere 5.1

a_p_
Leadership

>>> Is there anything else I should know about this?

Sounds OK to me. If the storage system supports Round Robin, that's basically all you need to do. In addition, you may want to check whether Jumbo Frames are supported by your switches and the storage system; depending on your workload, you may see a performance increase with them.

André
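If you do try Jumbo Frames, a sketch of the ESXi side (vSwitch and VMkernel names are assumptions, and the MTU must also be raised on the physical switches and the storage array), plus the usual end-to-end check with vmkping:

```shell
# Raise the MTU on the vSwitch and on the iSCSI VMkernel port (names assumed)
esxcli network vswitch standard set --vswitch-name vSwitch1 --mtu 9000
esxcli network ip interface set --interface-name vmk3 --mtu 9000

# Verify end to end against a storage port IP (example address):
# -d forbids fragmentation, 8972 = 9000 minus 28 bytes of IP/ICMP headers
vmkping -d -s 8972 192.168.130.101
```

If the vmkping fails while a plain vmkping succeeds, some device in the path is still at MTU 1500.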

MartinPasquier
Contributor

OK, everything seems to be working fine with the current configuration. Thanks for the answers.

I have one last question about Jumbo Frames. My core switch stack currently carries management, data, and iSCSI traffic. Is there any issue with changing the MTU to 9000 on the trunk between the two switches? Will there be problems with traffic that converges on it at an MTU of 1500?

thanks in advance

Martin

ClintColding
Enthusiast

Each iSCSI subnet should be on its own switch; there shouldn't be a need for them to talk across the inter-switch trunk.
