Hi all. It's me again.
Progressively adding VMs while working through a "training kit", I have installed two new member servers in a domain which has been up for a while. The servers are assigned addresses in the range of the DC sponsoring the DNS space, and I find no errors in addressing. I've run `ipconfig /displaydns` and `ipconfig /all`. The machines see the correct DNS server as well as the default gateway. They can ping all nodes on my network, but they can't ping each other. The PC hosting the VMware DC / DNS server is connected to a Linksys wireless router. My virtual lab is spread out over 1 laptop and 2 PCs.
The PC hosting the virtual DNS server cannot ping either of the two member servers in question, but they can ping the DNS server, the default gateway (Linksys), and all real or virtual nodes on the network.
In short, the two most recently installed VMs cannot see each other. They cannot be seen by other nodes, but they can see all other nodes (real or virtual) on the network.
Pings "from them" succeed at 100%, but pings "to them" or between them always time out.
A Remote Desktop connection can be established going from the problem machines to the DNS server. Yet ICMP packets from the DNS server to them never succeed.
Again... I'm not employed in IT, but studying. I've only seen this type of connectivity (ICMP) issue with misconfigured routing.
I hope my description is not too convoluted. Can anybody help?
Thanks for all the views
It was a simple inbound firewall rule. I never had to create a new inbound ICMPv4 rule for other VMs on my lab network, so I had totally ruled out the idea. But just for fun, I did it. Problem solved. I only wish I knew why it happened.
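For anyone else who hits this, here is a rough sketch of the kind of inbound ICMPv4 rule that fixed it for me. The rule name is just something I made up; run these in an elevated prompt on the machines that time out, and adjust to taste for your Windows Server version:

```shell
# Classic netsh syntax (works on most Windows Server versions).
# icmpv4:8,any = ICMP type 8 (Echo Request), any code - i.e. allow inbound ping.
netsh advfirewall firewall add rule name="Allow ICMPv4 Echo In" protocol=icmpv4:8,any dir=in action=allow

# PowerShell equivalent on Server 2012 / Windows 8 and later.
New-NetFirewallRule -DisplayName "Allow ICMPv4 Echo In" -Protocol ICMPv4 -IcmpType 8 -Direction Inbound -Action Allow
```

Either command alone should do it; the built-in "File and Printer Sharing (Echo Request - ICMPv4-In)" rule can also simply be enabled instead of creating a new one.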