VMware Cloud Community
BobD01
Contributor

Cross Host Fencing Problem with vCloud 5.1

Hi,

I have six ESXi 5.1 hosts in my vCloud resource cluster. Everything works fine except that when I create a fenced vApp and either the vShield Edge device or the VM lands on a certain host, I can't talk to the VM. If both the Edge device and the VM land on the suspect host together, it's fine. All the other hosts work OK.

I'm using VLAN 220 for the Network Pool, so I thought it might not be presented to the dodgy host, but that is not the case. I'm pulling my hair out over this one; can anyone assist me here?

Thank you

3 Replies
IamTHEvilONE
Immortal
Accepted Solution

If your Network Pool is VCDNI-based, you need to ensure a few things:

1. VLAN 220 actually exists in the network fabric.

2. VLAN 220 is trunked to all NIC ports associated with the distributed switch that backs the Network Pool.

3. Your network supports an MTU of 1524 (VCDNI adds a 24-byte encapsulation header to every frame). If it doesn't, set the MTU of the Network Pool to 1476 and recreate anything that has used that Network Pool so far, so it picks up the new MTU. Quick checks for points 2 and 3 are sketched below.
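
A quick way to test both from a host's ESXi shell (the IP address is a placeholder for a vmkernel port on VLAN 220 on another host; I'm assuming you have, or can temporarily create, such a port on each host):

# Show the distributed switches this host participates in, including their MTU
esxcli network vswitch dvs vmware list

# Check that full-size VCDNI frames survive the physical path end to end.
# -d sets "don't fragment"; -s is the ICMP payload size, so a 1524-byte
# IP packet needs 1524 - 20 (IP header) - 8 (ICMP header) = 1496 bytes.
vmkping -d -s 1496 192.168.220.12

If the large ping fails between two hosts while a plain vmkping works, the fabric is dropping the oversized frames and you're looking at the MTU case in point 3.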

_morpheus_
Expert

Try unpreparing and re-preparing the host in vCloud Director.
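
If you'd rather script it, the vCloud 5.1 Admin Extension API exposes prepare/unprepare actions on the host object. This is only a rough sketch from memory, with placeholder credentials, hostname and host ID, so check it against the API docs for your build (and the host has to be disabled in vCD before you unprepare it, if I remember right):

# Log in; the session token comes back in the x-vcloud-authorization header
curl -ki -X POST -u 'administrator@System:password' \
  -H 'Accept: application/*+xml;version=5.1' \
  https://vcloud.example.com/api/sessions

# Unprepare the suspect host (the host ID here is illustrative)
curl -k -X POST \
  -H 'Accept: application/*+xml;version=5.1' \
  -H 'x-vcloud-authorization: <token-from-login>' \
  https://vcloud.example.com/api/admin/extension/host/<host-id>/action/unprepare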

BobD01
Contributor

Thank you.  I did ask whether the VLAN was trunked down and was told that it was; as it turned out, it was not configured further up the network stream. It worked on all the other servers because their primary vmnic was vmnic4 on the dvSwitch; the dodgy host had vmnic5 as primary, and it was that side that was down on the enclosure.
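
For anyone who hits the same thing: the physical link state of each uplink is visible straight from the ESXi shell, so comparing this output across hosts would have exposed the dead side quickly (vmnic numbering obviously varies per setup):

# Link status, speed and duplex for every physical NIC on this host
esxcli network nic list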

This did highlight one thing, though: the default NIC teaming for a dvSwitch port group is active/standby (not load balanced). Because the Network Pool spawns these port groups on the fly, you have no control over the teaming config (it does not inherit from the switch layer the way a standard switch does), so all traffic flows through the same NIC on all servers.
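
You can actually watch this on a host with esxtop:

# Run on the host, then press 'n' to switch to the network panel;
# the TEAM-PNIC column shows the physical uplink each port is using
esxtop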

Thanks again for your time in answering.
