VMware Cloud Community
ptouch1
Contributor

Windows Guest cannot connect to another guest on a different host

I have two identical machines running ESXi 5.1 with 6-7 guest operating systems on each one.  Most of the guests are Windows Server 2008 R2 Enterprise, with a Win7 guest and Linux guests as well.  All Windows machines are part of a Windows domain and have DNS and static IPv4 addresses configured correctly.  All machines can be pinged from a separate physical machine, and all machines can ping the gateway, the internet, and other VMs on the same host with no problems.  All firewalls are turned off.  The machines are all on the same switch and the same subnet.  The switch is a Cisco SG200-26 with recent firmware, running the default VLAN 1 with "admit all" packet settings.

Here is the issue: both a Windows 7 machine and a Linux machine on Host1 cannot ping or connect to a Windows 2008 R2 server on Host2.  The same server on Host2 cannot connect to the Win7 machine or to the Linux machine on Host1, either.  I used a port scanner on the Windows 7 guest to scan the ports on the 2008 R2 server on Host2, but it came back saying that all ports are filtered.  The same scan of the 2008 R2 server run from a third physical machine shows multiple ports open.  I've been researching and testing for days to try to find a solution, but I'm still baffled as to the exact cause.  I'm open to ideas on what could be causing the lack of connectivity between the two VMs.

a_p_
Leadership

Did you configure any settings on the port groups (e.g. VLAN-ID, policies, ...)?

What does the configuration of the physical switch ports look like? Please post the configuration of one of the ports.
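
If it's easier, those settings can also be pulled straight from the ESXi shell on each host. A quick sketch, assuming standard vSwitches:

    # list every port group with its vSwitch and VLAN ID
    esxcli network vswitch standard portgroup list

Comparing the VLAN ID column for the VM port groups on Host1 and Host2 would be a good first check.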

André

ptouch1
Contributor

Here is a document with screenshots of the settings.  I hope this helps; let me know if there is any other information you are looking for.  Some of the ports are currently set to General on the Cisco switch, but that was done as part of the troubleshooting.  The issue remains with the switch ports set to Trunk.

a_p_
Leadership

I'm not 100% sure about the "General" setting for the switch ports. I'd suggest you configure them as "Access" ports. Think of this as a switch-to-switch connection rather than VM-to-switch.
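
On an IOS-based switch the equivalent would look roughly like the sketch below. The SG200 is managed through its web GUI, so treat this purely as an illustration of what "Access" means here (the interface name is made up):

    interface GigabitEthernet0/10
     description uplink to ESXi host
     switchport mode access
     switchport access vlan 1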

André

ptouch1
Contributor

I tried setting the switch ports to "Access" mode, but that doesn't make a difference.  Could it have anything to do with the fact that I am using the default VLAN 1 on the Cisco switch while the ESXi port groups are set to a VLAN ID of 0 (none)?

Also, these servers have multiple NICs that are used for iSCSI on different networks.  The iSCSI NICs are all on their own vSwitches, connected to dedicated physical NICs to separate the iSCSI traffic.  I am able to ping from the server to the iSCSI target on the other VM host, but still not able to ping from the primary IP of the server to the primary IP of the iSCSI server, nor to the Win7 machine.  The iSCSI NICs are not teamed, while the management NICs are teamed.  However, I broke the team on one of the hosts, and that made no difference either.
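
For what it's worth, here is how I'm double-checking the vSwitch and uplink layout from the ESXi shell on both hosts (a sketch; the vSwitch names are whatever was defined at setup):

    # show each vSwitch with its uplinks and attached port groups
    esxcli network vswitch standard list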

a_p_
Leadership

... while the management NICs are teamed.

How did you team them? EtherChannel, LACP, ...?

With the default setting "Route based on originating port ID", the physical switch ports may not be teamed, but have to be configured as access ports with spanning-tree set to portfast. Is there any chance you could provide the switch port settings from the CLI, i.e. using the show run command? The web GUI might be a nice tool, but you usually get a better overview from the CLI.
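
You can also confirm which load-balancing policy is actually active from the ESXi shell. A sketch, assuming the VM traffic sits on vSwitch0:

    # show the load balancing and failover settings for the vSwitch
    esxcli network vswitch standard policy failover get --vswitch-name=vSwitch0

If that reports "Load Balancing: srcport", you are on the default "Route based on originating port ID", and the physical ports must not be bundled into any channel group.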

André

ptouch1
Contributor

I looked at the docs and did some googling on accessing the SG200 switches via the command line; however, the docs say that Telnet and SSH are unsupported, and they don't have a serial port.  I didn't make any special settings on these ports on the SG200 except for turning off spanning tree and hard-coding the ports to 1000/full, with matching configurations on the ESXi boxes.

I'm going to revert to single gigabit links on the ESXi management network and see if that makes a difference when I do it on both machines.  I should be able to do that shortly and report back.

Thanks!

ptouch1
Contributor

I reverted all of the teams to single gig links, but that made no difference.

However, I found VMware KB article 1556, which covers Microsoft NLB on ESXi.  All of the weird connectivity issues are centered around the web servers running Microsoft NLB.  We are using Microsoft NLB in unicast mode across two different VM hosts, which looks like it is unsupported.  Does VMware support Microsoft NLB across two different hosts if it is run in multicast mode?
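
In case it helps anyone following along, the current NLB mode can be checked from PowerShell on one of the cluster nodes. A sketch, assuming the NLB feature and its NetworkLoadBalancingClusters module are installed on the 2008 R2 nodes:

    # import the NLB cmdlets and show the cluster's operation mode
    Import-Module NetworkLoadBalancingClusters
    Get-NlbCluster | Format-List Name, IPAddress, OperationMode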

Any input on this is much appreciated.  Thanks!

ptouch1
Contributor

I changed the NLB mode to multicast instead of using unicast, and that has corrected the connectivity issues that I was seeing.  Thanks to all who responded!
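
For anyone hitting the same thing, the mode change itself can be scripted. A sketch using the same PowerShell module (run on a cluster node; expect a brief interruption while the cluster converges):

    # switch the cluster to multicast operation mode
    Import-Module NetworkLoadBalancingClusters
    Get-NlbCluster | Set-NlbCluster -OperationMode multicast

Note that with multicast NLB, some routers and L3 switches won't learn the cluster's multicast MAC from ARP and may need a static ARP entry added by hand.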
