VMware Cloud Community
mcirrici
Contributor

Can't ping directly connected ESXi 6.5 host

I currently have two ESXi 6.5 hosts in the same cluster, managed by a vCenter 6.5 instance, and I recently added a directly connected 1G link between the two hosts.

I assigned the vmnics to vSwitch0, created a new VMkernel port on each host, and gave those VMkernel ports IPs in the same subnet.

When I log in to each ESXi host I can ping its own VMkernel port, but I cannot ping the other side. Am I missing a configuration option here?

ESXi-1                           ESXi-2
vSwitch0                         vSwitch0
vmk1 - 10.250.139.141            vmk1 - 10.250.139.142
vmnic2                           vmnic0
   |____________________________|
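
For reference, this is roughly how I check the current configuration from the ESXi shell on each host (interface and switch names are taken from the diagram above):

    # List all VMkernel interfaces and the port groups / vSwitches they are attached to
    esxcli network ip interface list
    # Show the IPv4 address and netmask on the new VMkernel port (vmk1 here)
    esxcli network ip interface ipv4 get -i vmk1
    # Confirm which uplink (vmnic2 on ESXi-1, vmnic0 on ESXi-2) is attached to vSwitch0
    esxcli network vswitch standard list -v vSwitch0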

diegodco31
Leadership

Hi

Did you use vmkping?

VMware Knowledge Base

Diego Oliveira
LinkedIn: http://www.linkedin.com/in/dcodiego
ChrisFD2
VMware Employee

Regular ping should work, but as mentioned above it's always better to use vmkping and specify the source vmk.
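
For example (assuming the new VMkernel port is vmk1 on both hosts, as in the diagram above):

    # From ESXi-1: source the ping from the directly connected vmk
    vmkping -I vmk1 10.250.139.142
    # Optionally add don't-fragment and a standard-MTU payload to rule out MTU issues
    vmkping -I vmk1 -d -s 1472 10.250.139.142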

Is the directly connected network routable at all via the management vmk interface?

Finally, is the cable you used a crossover? I don't know whether ESXi or the NICs will auto-negotiate the pairs (auto MDI-X), so it's always best to use a crossover to be on the safe side.

Regards,
Chris
VCIX-DCV 2024 | VCIX-NV 2024 | vExpert 6x | CCNA R&S
a_p_
Leadership

Welcome to the Community,

what exactly are you trying to achieve with the direct connection?

If I understand your setup correctly, this is likely a "hit or miss" configuration, depending on the failover settings.

André

mcirrici
Contributor

The standard ping command should work.

mcirrici
Contributor

Yes, the connection is a crossover cable.

Yes, I can ping the new vmk port from the management vmk interface but can't reach the other side.

mcirrici
Contributor

I am trying to speed up vMotion by using a direct connection, because at the moment the only way to run vMotion is over the management vmkernel port, and that network is complex and slow.

diegodco31
Leadership

Check if the following VMware KB article helps: VMware Knowledge Base

Diego Oliveira
LinkedIn: http://www.linkedin.com/in/dcodiego
a_p_
Leadership

I've never used direct connect for vMotion myself, but it should basically work.

What you can do is create a new vSwitch on each host with a vMotion VMkernel port group and the vmnics connected to that vSwitch. Use IP addresses from a dedicated subnet, i.e. one that's not in use yet (e.g. 192.168.139.0/24), and ensure that "vMotion" is enabled only on these new port groups.

Please note that in this case you must enter only the IP address and the subnet mask for the vMotion port groups. Do not edit the default gateway address!!!
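
As a rough sketch from the ESXi shell (vSwitch1, vmk2 and the 192.168.139.x addresses are only examples, adjust the uplink per host):

    # New vSwitch that only contains the direct-connect uplink (vmnic2 on ESXi-1, vmnic0 on ESXi-2)
    esxcli network vswitch standard add -v vSwitch1
    esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic2
    # vMotion port group and VMkernel interface
    esxcli network vswitch standard portgroup add -v vSwitch1 -p vMotion
    esxcli network ip interface add -i vmk2 -p vMotion
    # IP address and netmask only - do not set a gateway here
    esxcli network ip interface ipv4 set -i vmk2 -I 192.168.139.1 -N 255.255.255.0 -t static
    # Enable vMotion on the new interface
    esxcli network ip interface tag add -i vmk2 -t VMotion

On the second host use 192.168.139.2 and its own uplink (vmnic0).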

André

ChrisFD2
VMware Employee

Direct-attached links do work for vMotion, vSAN, etc.

Use a separate vSwitch so the traffic can't leave the host except on that interface. Create the vmk/port group on that vSwitch and tag it for vMotion traffic.
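
Something like this, assuming the new vMotion interface is vmk2 and the management interface is vmk0:

    # Check which services each VMkernel interface carries
    esxcli network ip interface tag get -i vmk2
    esxcli network ip interface tag get -i vmk0
    # If vMotion is still enabled on the management vmk, remove it so the traffic uses the direct link
    esxcli network ip interface tag remove -i vmk0 -t VMotion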

vMotion is fine on 1 Gbit, but really benefits from 10 Gbit.

Regards,
Chris
VCIX-DCV 2024 | VCIX-NV 2024 | vExpert 6x | CCNA R&S