VMware Cloud Community
dmented
Enthusiast

Multi-home ESXi 5.1 not working.

Hi,

I'm having trouble making multi-home work in my ESXi 5.1 setup.

Here's my config:

vmk0 - Management interface: 10.0.103.11/24

vmk3 - Isolated/storage interface: 192.168.0.161 (NFS storage is at 192.168.0.10)

Default Gateway: 10.0.103.1

Now, I can't reach my storage with this config; the host keeps trying to use vmk0 to reach it. I checked esxcfg-route and there is a route to the 192.168.0.0/24 network, so I'm not sure why it still goes out the management interface.
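For reference, this is roughly what I'm running from the ESXi shell (if I remember right, the -I option to bind vmkping to a specific interface exists on 5.1):

esxcfg-route -l                  # list the VMkernel routing table
vmkping -I vmk3 192.168.0.10     # force the ping out of vmk3 specifically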

Please let me know if I missed anything.

Thanks

<see screenshot>

11 Replies
a_p_
Leadership

Is it the same when you use vmkping instead of ping?

André

tomtom901
Commander

Can you do a ping from the NFS storage to the vmk3 interface?

OscarDavey
Hot Shot

ESXi usually does this when it fails to connect via the other vmk. Please try pinging vmk3 to check whether you have basic connectivity, and let me know the result so we can analyze what's happening.

Best regards

Yours, Oscar

dhanarajramesh

10.0 and 192.168 are different subnets. You need to separate these two networks by VLAN; otherwise the host will always use the default gateway. It is best practice to have only one VMkernel port configured for each network or VLAN.
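For example, you can tag a port group into its own VLAN like this (the port group name and VLAN ID here are only placeholders, adjust to your setup):

esxcfg-vswitch -p "Storage" -v 20 vSwitch0     # tag port group "Storage" on vSwitch0 with VLAN 20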

dmented
Enthusiast

a.p.

vmkping to the vmk3 interface itself (192.168.0.161) works fine, BUT vmkping to the NFS storage fails.

tomtom901

It fails. However, I've tried attaching a Windows VM to the isolated network, and it can ping the NFS/storage endpoint fine.

dhanarajramesh

I'm sorry, but that's the whole point. We want isolated back-end NFS traffic; that's why 10.0.103.0/24 has no route to 192.168.0.0/24.

... and yes, that's why I have two VMkernel ports (vmk0 and vmk3; see screenshot) serving two different networks.

tomtom901
Commander

@dhanarajramesh: Not to be rude or anything, but do you have a basic understanding of networking, or only of VMware? If you have 2 IP addresses in the same subnet, you do not need to route that traffic; it stays within the subnet and uses broadcasts (ARP) to discover things like MAC addresses.

@dmented:

Have you ruled out physical cabling or physical switch config? I don't know if this is a production server where you cannot easily test things, but could you give the output of the following command:

esxcli network nic list


Could you also share a screenshot of the Configuration -> Networking page in the vSphere client? Seeing as a Windows VM (on the same host?) can ping the NFS storage, I'm thinking along the lines of an (accidental) misconfiguration.


Hope this helps,




a_p_
Leadership

I agree, this looks like a network configuration issue, most likely in combination with the physical network. From how I understand it so far, you have both VMkernel ports on the same vSwitch!? This may work with e.g. VST (virtual switch tagging) and the physical switch port(s) being configured as 802.1Q (trunk/tagged) ports, or in a flat network where everything runs in the same VLAN (or where there's no VLAN configuration at all).

So please provide some details about the virtual switch and port group configuration, as well as your physical network (e.g. port configuration, VLANs, ...).
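If it's easier, most of this can be pulled straight from the ESXi shell (standard vSwitch commands on 5.x; names will differ in your setup):

esxcli network vswitch standard list               # vSwitches with uplinks and port groups
esxcli network vswitch standard portgroup list     # port groups and their VLAN IDs
esxcli network ip interface ipv4 get               # IP configuration of the vmk interfaces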

André

dmented
Enthusiast

Thanks for the help so far guys!

a.p./ tomtom901

Let me clarify my config (this is a home lab - sorry if it appears a bit complicated):

Physical ESXi: one NIC only

vSwitch0: -vmnic0: directly connected to 192.168.0.0/24

vmk0 - management of PhysicalESXi

vyatta router-Eth0

vSwitch1: no NIC uplink

- vyatta router Eth1 (serves as internal router; 4095 portgroup)

- other VLANs tagged in each port group as config.

See vyatta config:

help2.png

- I can route within 10.0.0.0/8 fine.

- my SNAT config is sketchy... feel free to correct it / suggest a good approach here (thanks!)

======

ESXi01 as a VMware guest:

here's the config:

help3.png

* you can see the observed IP range is fine.

As a troubleshooting step:

I've set up a Windows machine connected to the same port group and networking as my nested ESXi, and it worked fine. :(

11-11-2013 11-12-25 PM.png

One thing I noticed: when I go to the port group of the physical ESXi host, no IPs are shown in the VM list. Maybe that's because vmk3 (192.168.0.0/24) is not a management port?

help4.png

* you can see the test win2k8 VM here (192.168.0.25)

tomtom901
Commander

Hi,


Could you enable promiscuous mode (set to Accept) on the physical ESXi port group to which the virtual ESXi is connected via vmnic5? By default this is disabled, and in a nested ESXi setup it can cause problems like this.

Screen Shot 2013-11-11 at 16.30.48.png
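If you'd rather do it from the command line of the physical host, something like this should be equivalent (the port group name here is just an example; use the one your nested ESXi is actually attached to):

esxcli network vswitch standard policy security set -v vSwitch0 --allow-promiscuous=true
esxcli network vswitch standard portgroup policy security set -p "NestedESXi" --allow-promiscuous=true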

Hope this helps,

dmented
Enthusiast

tomtom901

wow! thanks sir! that worked like a charm. *internet clap*

Any idea why this caused the issue? An internet link explaining it, or anything else, would be helpful.

Thanks again!

tomtom901
Commander

I don't know the technical reason; I just know that I had the exact same issue in my lab a few years ago. Enabling this fixed it, and since then I've seen people come and go with the same problem.
