Hi everyone,
I currently have two ESXi 5.1 hosts configured with an iSCSI SAN, and they are working fine. The current network configuration is as follows:
MD3200i: 4 of the 8 ports are configured across the two controllers: two on one subnet on VLAN x, and two on another subnet on VLAN y.
ESXi hosts: two separate vSwitches on each host, each with one vmnic and one VMkernel port.
Everything is running fine, with multipathing configured.
Now I want to improve storage performance by using the remaining four ports on the MD3200i and two additional NICs per host.
My question is: should I create a new VMkernel port for each new NIC, or can I just add the NICs to my existing vSwitches?
Maybe I've misunderstood the documentation I've read, but doesn't port binding only improve availability (HA) rather than performance?
Thanks in advance
Martin
Unless something has changed lately, you need a one-to-one relationship between VMkernel port groups and vmnics. It doesn't matter whether you configure this in a single vSwitch (setting the vmnics to Active/Unused per port group) or with one vSwitch per VMkernel port group.
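To illustrate the Active/Unused override in the single-vSwitch case, here is a sketch using esxcli on the host. All names (vSwitch1, iSCSI-A/iSCSI-B, vmnic2/vmnic3) are examples; substitute your own:

```shell
# Assumed setup: vSwitch1 has uplinks vmnic2 and vmnic3, with one iSCSI
# port group per uplink. Each port group gets exactly ONE active uplink;
# any uplink not listed as active or standby becomes Unused.
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name iSCSI-A --active-uplinks vmnic2
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name iSCSI-B --active-uplinks vmnic3

# Verify the per-port-group override took effect
esxcli network vswitch standard portgroup policy failover get \
    --portgroup-name iSCSI-A
```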
André
On the ESXi side, I would simply add the NICs to your existing vSwitches, with a new VMkernel port for each. I assume you have a dual-controller SAN and each controller has 4 ports. Just make sure you add the new NICs to the correct subnet for their controller and everything will be fine.
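For reference, adding a NIC plus a dedicated VMkernel port to an existing vSwitch can be sketched as below. All names and addresses are examples, not your actual values:

```shell
# Add the new physical NIC as an uplink to the existing vSwitch
esxcli network vswitch standard uplink add --uplink-name vmnic3 \
    --vswitch-name vSwitch1

# Create a dedicated port group and VMkernel port for it
esxcli network vswitch standard portgroup add --portgroup-name iSCSI-B \
    --vswitch-name vSwitch1
esxcli network ip interface add --interface-name vmk3 --portgroup-name iSCSI-B

# Give the new VMkernel port an IP on the same subnet as the
# matching controller ports (example address/netmask)
esxcli network ip interface ipv4 set --interface-name vmk3 \
    --ipv4 10.10.2.13 --netmask 255.255.255.0 --type static
```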
Hi ClintColding,
Thanks for your answer.
So I simply add the new NICs to my VMkernel ports and configure new IP addresses on the two controllers of my storage array? Will the bandwidth then automatically be multiplied across the ESXi hosts?
Don't I have to configure port binding?
Hi a.p,
Thanks for your reply. OK, I've read a lot of documentation today, maybe too much I think.
Here is what I've now configured (on each host):
One vSwitch with four NICs and four VMkernel ports.
I bound the VMkernel ports to my iSCSI storage adapter and overrode the NIC teaming so that each port group has ONE active NIC, with the others set to Unused.
I also changed the path selection policy to Round Robin.
Is there anything else I should know about this?
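For reference, the port binding and path policy steps above can also be done from the command line. The adapter name and device ID below are placeholders; look yours up first:

```shell
# Find your software iSCSI adapter name (vmhba33 below is just an example)
esxcli iscsi adapter list

# Bind each VMkernel port to the software iSCSI adapter
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2

# Set Round Robin on a LUN (the naa.* device ID is a placeholder)
esxcli storage nmp device set --device naa.600a0b80xxxxxxxx --psp VMW_PSP_RR

# Check the path selection policy and paths per device
esxcli storage nmp device list
```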
Thank you
Martin
Near the end of this document, it shows the best practices for using single or multiple vSwitches with multiple subnets: VMware vSphere 5.1
>>> Is there anything else I should know about this?
Sounds OK to me. If the storage system supports Round Robin, that's basically all you need to do. In addition, you could check whether jumbo frames are supported by your switches and the storage system. Depending on your workload, you may see a performance increase with jumbo frames.
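Enabling jumbo frames on the host side might look like this. vSwitch1, vmk1, and the portal IP are examples; the MTU has to be raised end to end (vSwitch, VMkernel ports, physical switches, and the storage array):

```shell
# Raise the MTU on the iSCSI vSwitch and on each iSCSI VMkernel port
esxcli network vswitch standard set --vswitch-name vSwitch1 --mtu 9000
esxcli network ip interface set --interface-name vmk1 --mtu 9000

# Verify with a don't-fragment ping of 8972 bytes (9000 minus 28 bytes
# of IP/ICMP headers) to one of the array's iSCSI portals
vmkping -d -s 8972 10.10.1.100
```

If the vmkping fails at that size, some device along the path is still at MTU 1500.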
André
OK, it's working fine with the current configuration. Thanks for the answers.
I have one last question about jumbo frames. My core switch stack currently carries management, data, and iSCSI traffic. Is there any issue with changing the MTU to 9000 on the trunk between the two switches? Will I have problems with other traffic converging on it at an MTU of 1500?
thanks in advance
Martin
Each iSCSI subnet should be on its own switch. There shouldn't be a need for them to talk across the switches.