VMware Cloud Community
air-wolf
Contributor

IP-hash NIC teaming

Folks,

I need some help here, as I don't have a clear answer on NIC teaming.

My ESX host has a 10 Gb fiber NIC and 4 physical onboard 1 Gb NICs. I bonded/trunked the 4 interfaces on the HP ProCurve as a single 4 Gb trunk.

My question is about the ESX vSwitch and management interface. I want to use IP-hash, but how should I set these NICs? I want active and passive for failover: the 10 Gb as the primary and the 4 onboard NICs as standby.

Should I put the 10 Gb NIC as primary, put all 4 onboard NICs in standby, and ignore the compliance warning from ESX that says "all NICs must be active"?
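In other words, something like this is what I have in mind (the vmnic names are only examples; I think on newer ESXi the equivalent esxcli would look roughly like the line below, but please correct me if that mapping is wrong):

esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --load-balancing=iphash --active-uplinks=vmnic4 --standby-uplinks=vmnic0,vmnic1,vmnic2,vmnic3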

Thanks for all your help.

22 Replies
RanjnaAggarwal
VMware Employee

All NICs must be active.

Regards, Ranjna Aggarwal
rickardnobel
Champion

Jim wrote:

My ESX host has a 10 Gb fiber NIC and 4 physical onboard 1 Gb NICs. I bonded/trunked the 4 interfaces on the HP ProCurve as a single 4 Gb trunk.

Do only the four 1 Gb NICs go into the HP ProCurve switch? What does the configuration look like for these?

Have you done any setup inside the ESXi host, i.e. with vSwitches and vmnics?

My VMware blog: www.rickardnobel.se
anujmodi1
Hot Shot

The standby adapters will not participate in IP-hash NIC teaming; only active NICs can participate in the teaming. As you mentioned, you have one 10 Gb NIC, but you can add one or two of the 1 Gb NICs as active/active in the NIC team to do the testing.

Regards,

AnujM

Anuj Modi. If you found my answer to be useful, feel free to mark it as Helpful or Correct. The latest blogs and articles on virtualization: anujmodi.wordpress.com
weinstein5
Immortal

With an active/standby configuration you are not going to take advantage of IP hash, because IP hash examines every outbound packet and selects the outbound NIC based on the originating and destination IP addresses. With only a single active NIC, IP hash will not be used, hence the warning.

I would select Route Based on Originating Virtual Port ID for the configuration you are looking at implementing.
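Roughly speaking, the uplink choice is a hash of the two IP addresses modulo the number of active uplinks. A worked example with illustrative numbers only: a VM at 192.168.5.10 talking to a client at 192.168.5.200 through 5 active uplinks gives

0xC0A8050A xor 0xC0A805C8 = 0xC2 = 194
194 mod 5 = 4, so that conversation is pinned to uplink index 4

A different client IP hashes to a different uplink, which is why IP hash only pays off when a VM talks to many different IP peers.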

If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful
air-wolf
Contributor

I did set it up in ESX both with all NICs active, and with the 10 Gb active and the 4 bonded NICs as passive. Both tested fine, but with active/passive I received a configuration warning.

rickardnobel
Champion

Jim wrote:

I did set it up in ESX both with all NICs active, and with the 10 Gb active and the 4 bonded NICs as passive. Both tested fine, but with active/passive I received a configuration warning.

Could you describe the setup and configuration on the physical switch side? And on the vSwitch? Are all five vmnics part of the same IP-hash NIC team?

My VMware blog: www.rickardnobel.se
air-wolf
Contributor

On the HP side, nothing was done to the fiber NIC other than assigning the VLAN.

As for the 4 onboard NICs, I had to trunk them on the HP side as TRUNK instead of LACP, since HP LACP trunking is not supported by VMware. After that, I assigned the VLAN for that trunk.

On the ESX side, I put all 4 onboard NICs and the fiber NIC as active for both the vSwitch and the management console, then changed the load balancing to Route based on IP hash.
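To make it concrete, this is roughly what it looks like (port and vmnic numbers are only examples, and I set the IP-hash policy itself through the vSphere Client rather than from the command line):

On the ProCurve (static trunk, not LACP):
trunk 21-24 trk1 trunk
vlan 3
   tagged trk1

On the ESX host (all five uplinks on the same vSwitch, vmnic4 being the 10 Gb fiber NIC in this example):
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic1 vSwitch0
esxcfg-vswitch -L vmnic2 vSwitch0
esxcfg-vswitch -L vmnic3 vSwitch0
esxcfg-vswitch -L vmnic4 vSwitch0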

rickardnobel
Champion

Jim wrote:

On the HP side, nothing was done to the fiber NIC other than assigning the VLAN.

As for the 4 onboard NICs, I had to trunk them on the HP side as TRUNK instead of LACP, since HP LACP trunking is not supported by VMware. After that, I assigned the VLAN for that trunk.

On the ESX side, I put all 4 onboard NICs and the fiber NIC as active for both the vSwitch and the management console, then changed the load balancing to Route based on IP hash.

Unfortunately, this will not be a working solution. If you have five interfaces on the vSwitch side doing IP hash, going out to one single port plus four other ports connected to an HP Trunk (static link aggregation), this will lead to multiple problems as you start to load the host with VMs. It is not OK for the physical switch to see the same MAC address on both the single 10 Gbit port and on the static trunk at the same time.

My VMware blog: www.rickardnobel.se
air-wolf
Contributor

If that is the problem, then please explain how you would do it. As I said, this NIC teaming is a bit unclear to me, and ESX wants all NICs to be active if using IP-hash. During my test, I brought the fiber NIC and 3 of the 1 Gb NICs down and was still able to contact the ESX host with no loss of connectivity.

Regards,
rickardnobel
Champion

Could you describe some more of your intended setup: do you plan to use vMotion? IP-based storage through iSCSI or NFS?

How many VMs do you plan to run on the host?

Do you have a specific reason for going with "IP hash" as opposed to "Port ID"? The default Port ID might actually be better for this situation.

My VMware blog: www.rickardnobel.se
air-wolf
Contributor

Yes, I'm using IP-based storage, both iSCSI and NFS. We want to set this up for a vCloud environment. The following is what I currently have:

vMotion is a must-have.

Esx1 has 10 NICs trunked on the HP switch as a single TRUNK

Esx2 has 10 NICs trunked on the HP switch as a single TRUNK

Esx3 has 1 fiber 10 Gb NIC and 4 onboard NICs, trunked on the HP switch as a single trunk

vCenter with 2 NICs trunked on the HP switch as a single trunk

NetApp storage

There are dedicated VLANs for each storage protocol, as well as a users VLAN.

In total I have about 40 server VMs.

I'm not sure what the best approach is and whether I should use IP hash or Port ID. I want the best performance possible, and I want the traffic on Esx3 on the 10 Gb NIC, because Esx2 and Esx1 will operate at 10 Gb after trunking.

Let me know your suggestion for the setup. How would you set up the vSwitch NICs and the management interface?

rickardnobel
Champion

Jim wrote:

Esx1 has 10 NICs trunked on the HP switch as a single TRUNK

Esx2 has 10 NICs trunked on the HP switch as a single TRUNK

Do you only have one physical switch? Which model of HP ProCurve?

There are dedicated VLANs for each storage protocol, as well as a users VLAN.

So the iSCSI and NFS traffic have their own portgroups on the same vSwitch as all the user VLANs?

I want the traffic on Esx3 on the 10 Gb NIC, because Esx2 and Esx1 will operate at 10 Gb after trunking.

Actually, the 10 x 1 Gbit ports in an HP Trunk (like a Cisco static EtherChannel) will not be the same as one 10 Gbit interface. With the 10 trunked interfaces you could in some situations get more than 1 Gbit/s for a VM, but even that is not certain.

My VMware blog: www.rickardnobel.se
air-wolf
Contributor

Do you only have one physical switch? Which model of HP ProCurve?

Yes, one physical modular switch, model 5412zl.

So the iSCSI and NFS traffic have their own portgroups on the same vSwitch as all the user VLANs?

Yes, each traffic type has its own portgroup on the same vSwitch.

E.g., under the vSwitch I have the following:

Office - VLAN 2 - 192.168.5.0/24 subnet for all users

NFS - VLAN 3 - 192.168.6.0/24 subnet for NFS

iSCSI - VLAN 4 - 192.168.7.0/24 subnet for iSCSI

iSCSI - VLAN 5 - 192.168.9.0/24 subnet for backup
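(These were created with the usual esxcfg-vswitch syntax, roughly as below; the vSwitch name and the name of the backup portgroup are just placeholders:)

esxcfg-vswitch -A Office vSwitch0
esxcfg-vswitch -p Office -v 2 vSwitch0
esxcfg-vswitch -A NFS vSwitch0
esxcfg-vswitch -p NFS -v 3 vSwitch0
esxcfg-vswitch -A iSCSI vSwitch0
esxcfg-vswitch -p iSCSI -v 4 vSwitch0
esxcfg-vswitch -A Backup vSwitch0
esxcfg-vswitch -p Backup -v 5 vSwitch0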

rickardnobel
Champion

Jim wrote:

Yes, one physical modular switch, model 5412zl.

That is a very good, high-quality switch, and if you have redundant power into the chassis it should be fine. With a lower-end switch it is typically recommended to have at least two physical switches to be able to handle switch failures. (Or a switch reboot during a firmware upgrade; still watch out for that.)

Yes, each traffic type has its own portgroup on the same vSwitch.

E.g., under the vSwitch I have the following:

Office - VLAN 2 - 192.168.5.0/24 subnet for all users

NFS - VLAN 3 - 192.168.6.0/24 subnet for NFS

iSCSI - VLAN 4 - 192.168.7.0/24 subnet for iSCSI

iSCSI - VLAN 5 - 192.168.9.0/24 subnet for backup

In my opinion this is not really an optimal setup, since the traffic will be placed (somewhat) randomly on the different adapters. If you are unlucky, the most bandwidth-consuming VMs could end up on the same physical vmnic as the one carrying the NFS traffic to the NetApp. You also risk vMotion landing on the same vmnic as some critical VM, or even as the storage network access.

The "IP hash" NIC teaming can be of some use for VMs with very many client sessions coming in, IF there are actually high bandwidth demands, but for iSCSI and NFS "IP hash" will gain you almost nothing. It is also important to note that even with "IP hash", between two IP nodes (like a VM server and a specific client) you can never get more than 1 Gbit.

My VMware blog: www.rickardnobel.se
depping
Leadership

Before you go down a rat-hole: mixing 10 Gbps and a 4 x 1 Gbps trunk, with all 5 active in an IP-hash config, is going to kill your network. I've seen it before and, believe me, it is not a pretty sight. You will have an insane amount of MAC flapping, as ESX is not aware of the "4 x 1 trunk" and just load balances traffic across all 5. Not recommended at all.

The config by itself is already a bit flaky, to be honest: single switch... single 10G link. Not sure why you selected this. But anyway, I would place all of them in a single "virtual port ID" load-balanced vSwitch and set up "active/standby" for some of the portgroups. For instance, your storage network and vMotion could go over the 10G link, with a 1 Gb link set as standby just in case anything happens.
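As a rough sketch (ESXi 5.x esxcli syntax; the portgroup and vmnic names below are only placeholders, swap in your own):

esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --load-balancing=portid
esxcli network vswitch standard portgroup policy failover set --portgroup-name=NFS --active-uplinks=vmnic4 --standby-uplinks=vmnic0
esxcli network vswitch standard portgroup policy failover set --portgroup-name=vMotion --active-uplinks=vmnic4 --standby-uplinks=vmnic1
esxcli network vswitch standard portgroup policy failover set --portgroup-name=Office --active-uplinks=vmnic0,vmnic1,vmnic2,vmnic3 --standby-uplinks=vmnic4

That way storage and vMotion ride the 10G link with a 1 Gb uplink as backup, and the VM traffic uses the 1 Gb links with the 10G as backup.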

air-wolf
Contributor

OK, so the configuration I have, with the vSwitch and all 5 NICs active, is fine, as opposed to 10 Gb active and 4 x 1 Gb standby?

There isn't a need for any VM to have more than 1 Gb of bandwidth. All I need is for the ESX host to have as much bandwidth as possible.

So the IP-hash or Port ID route really doesn't make any difference in my configuration, and therefore I can use either type?

vMotion is on the same vSwitch with a different portgroup and a different VLAN.

And yes, my HP has 4 redundant power supplies. :)

air-wolf
Contributor

Depping,

I did not put 5 NICs (the 10 Gb and the 4 x 1 Gb) in the same HP trunk group. The 10 Gb NIC is not trunked on the HP switch; only the 4 x 1 Gb NICs are trunked on the HP switch.

All I did was place all 5 NICs as active in the vSwitch and set the policy to IP-hash:

active: 10 Gb and 4 x 1 Gb, with IP-hash

instead of

active: 10 Gb

standby: 4 x 1 Gb

With active and standby, ESX complains that all NICs must be active.

The 4 x 1 Gb NICs are there in case the 10 Gb goes down, so what is flaky about it? Please explain... it is not like there is only 1 NIC. I want the 10 Gb to carry the primary traffic, but it doesn't hurt to spread it out.

depping
Leadership

Let me try to be clear here:

1) If you have 4 x 1 Gb as "standby" and your 10 Gb fails, only 1 of those 1 Gb links will become active.

  --> In other words, there is no point in having this setup.

2) You already have all the NICs, so why not just use them? Use the 1 Gb links for management traffic and VM traffic, for instance.

3) Redundant power supplies are nice, but if the switch fails, it fails...

rickardnobel
Champion

Jim wrote:

I did not put 5 NICs (the 10 Gb and the 4 x 1 Gb) in the same HP trunk group. The 10 Gb NIC is not trunked on the HP switch; only the 4 x 1 Gb NICs are trunked on the HP switch.

All I did was place all 5 NICs as active in the vSwitch and set the policy to IP-hash.

As I already noted in comment number 8, this will really not work. In a link aggregation team (call it HP Trunk, Cisco EtherChannel, or VMware IP hash), all involved ports on both sides must be logically bound together. This is very important, since otherwise the MAC address flapping will break your network.
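If you want an IP-hash team at all, it has to contain exactly the vmnics that are in the HP Trunk and nothing else, roughly along these lines (port and vmnic numbers are only examples):

On the ProCurve, a static trunk of the four 1 Gb ports only:
trunk 21-24 trk1 trunk

On the ESX host, an IP-hash vSwitch with only those four vmnics (keep the 10 Gbit NIC on a separate vSwitch, or in a team using Port ID):
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch1 --load-balancing=iphash --active-uplinks=vmnic0,vmnic1,vmnic2,vmnic3

(The esxcli line is ESXi 5.x syntax; on older ESX the same policy is set in the vSphere Client.)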

My VMware blog: www.rickardnobel.se