VMware Cloud Community
pdx99
Contributor

Network problem

I have two Windows 2003 R2 Enterprise VMs running as a Microsoft cluster on an ESX 3i host. Until a little while ago these VMs could reach the domain controller in the domain they are joined to, could ping the default gateway, had outside connectivity, and the cluster worked properly. Now they have none of those capabilities, although they can still ping each other. They use the default VM switch, which uses the hardware NIC in the ESX 3i host machine, and the cluster network name resource doesn't come online.

No patches or changes were made by me, although I did experience a problem where the static IP address info configured on the VM NICs was no longer present and the NICs had reverted to DHCP clients. When I attempted to reapply the static IP info, I received the following error:

"The IP address xxx.xxx.xxx.xxx you have entered for this network adapter is already assigned to another adapter 'VMware Accelerated AMD PCNet Adapter'. 'VMware Accelerated AMD PCNet Adapter' is hidden from the Network Connections folder because it is not physically in the computer. If the same address is assigned to both adapters and they become active, only one of them will use this address. This may result in incorrect system configuration."

I found the following solution online; after applying it I was able to bring the cluster network name resource online:

To resolve this problem, follow these steps to make the ghosted network adapter visible in the Device Manager and uninstall the ghosted network adapter from the registry:

1. Select Start > Run.

2. Enter cmd.exe and press Enter.

3. At the command prompt, run this command:

set devmgr_show_nonpresent_devices=1

4. Enter start devmgmt.msc and press Enter to start Device Manager.

5. Select View > Show Hidden Devices.

6. Expand the Network Adapters tree (select the plus sign next to the Network adapters entry).

7. Right-click the dimmed network adapter, and then select Uninstall.

8. Close Device Manager.
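For reference, the command-line portion of the steps above boils down to two commands. They must run in the same Command Prompt session, because the environment variable is only inherited by programs launched from that console:

```shell
:: Windows cmd.exe -- run both commands in the SAME console session:
:: devmgr_show_nonpresent_devices is only visible to processes
:: launched from this window, so Device Manager is started with
:: "start" here rather than from the Run box.
set devmgr_show_nonpresent_devices=1
start devmgmt.msc

:: Then, in Device Manager: View > Show Hidden Devices, expand
:: Network adapters, right-click the dimmed (ghosted) adapter,
:: and choose Uninstall.
```

Setting the variable system-wide (via System Properties) also works, but the per-session form shown above matches the steps in this thread.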


My two questions are:

1) Why did the static IP info disappear and does this happen often in ESX 3i?

2) How do I get full network connectivity back to the NICs on the two VMs?

Thanks

3 Replies
Lightbulb
Virtuoso

Answers, sort of

Question 1: It is possible these "missing" NICs are leftovers from a past reconfiguration of the VMs' hardware (see http://kb.vmware.com/kb/1513). Since this is a cluster, that seems highly likely: whoever built the cluster added the shared storage after the NICs had been assigned on the system, which pushed one NIC on each host off the bus, so to speak. You can usually ignore the warnings about these phantom devices.

Question 2: Recreate your cluster NICs as needed. I am a little unclear on which NICs are offline. In a standard MSCS cluster each node has one external NIC and one heartbeat NIC. The heartbeat NIC is for cluster communication and needs no external network access (it should have a static IP that is not routable to other systems; I put heartbeat NICs on a separate vSwitch). The external interfaces are where the cluster resources interact with the external network (these should have static IPs that are routable to other systems).
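As an illustration only (the adapter names and addresses below are made up for the example, not taken from this thread), that split could be applied on a Windows 2003 node from the command line with netsh:

```shell
:: Hypothetical adapter names and IPs -- adjust to your environment.

:: Heartbeat NIC: static IP on a non-routable subnet, NO default
:: gateway, so cluster traffic stays on its own vSwitch.
netsh interface ip set address "Heartbeat" static 10.10.10.1 255.255.255.0

:: Public NIC: static IP on the routable subnet, with a default
:: gateway (192.168.1.1) and gateway metric 1.
netsh interface ip set address "Public" static 192.168.1.21 255.255.255.0 192.168.1.1 1
```

The key point the example shows is the asymmetry: only the public interface gets a gateway, which is why the heartbeat network never routes off-host.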

pdx99
Contributor

Thanks for the info.

I built the cluster, and I did add some shared storage after the NICs had been configured, so that clarifies the phantom NIC issue.

The private/heartbeat and public cluster NICs are set up as you mentioned (static IPs, different subnets, different vSwitches, no gateway on the private NICs, unnecessary configuration removed, etc.) and all was working as it should until recently. The problem now is that the public NICs don't have any outside connectivity/route: neither node can successfully ping the gateway or the domain controller, browse to the DC, etc.

I'll either re-create the cluster NICs, or maybe just evict the nodes and start over, or maybe, since this is a lab environment, start with fresh VMs.

Lightbulb
Virtuoso

If it is a lab environment, you might as well start over. I have done the same thing myself.

I use this as a guideline whenever I need to throw up an MSCS cluster:

http://exchangeexchange.com/blogs/bkeane/archive/2007/07/30/mscs-clustering-in-vmware.aspx
