Hi,
I have an org VDC with an edge gateway and a routed organization network. VMs on the routed org network cannot get out past the edge gateway.
The gateway has the following details:
External IP - 10.7.190.246
Internal IP - 10.10.20.1
There are no firewalls on either the edge gateway or the org VDC network.
From a VM in the org, connected to the routed org network, I cannot ping the 10.10.20.1 address.
From the edge gateway appliance (the VM itself), I cannot ping the IP of the VM in the org VDC (10.10.20.2).
I have disconnected all the VMs in the org from this network, deleted it and redeployed it from scratch. No luck.
I have also created a couple of new VMs in the org which correctly get an IP address, but still cannot reach the gateway IP (10.10.20.1)
Can anyone give me some advice on where I should look from here?
TIA
Ok, there's more to this than I first thought.
It appears that the affected VMs cannot get network connectivity at all.
If I take a VM and attach it to a standard dvSwitch port group outside of vCloud completely, and give it a LAN IP address, it still can't ping anything. Anywhere.
I've even gone as far as removing the VM's NIC and adding a new one. No dice.
What does vCD 5.1 do to VMs???? (or vShield for that matter...)
What kind of network pool are you using?
We're using a VCD-NI pool.
This morning things are looking even worse. We're in a situation now where we can't connect to any VMs in our vCloud environments at all.
When a VM is connected to a network backed by VCDNI network pool, and the VM is powered on, then some dvfilter attributes are added to the VM's .vmx file. You can't just take the VM and then connect it to some other portgroup and expect it to work (because of those attributes). Stopping the vApp should clear those attributes from the .vmx file.
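If you want to check a suspect VM for this, the entries follow the usual dvfilter pattern in the .vmx. Roughly like the fragment below; the slot number, filter name, and parameter values here are purely illustrative and vary by version, so don't treat them as the exact keys:

```
ethernet0.filter0.name = "fence"
ethernet0.filter0.param0 = "..."
ethernet0.filter0.param1 = "..."
```

If entries like these are still present after you've moved the VM to a plain portgroup, that would explain the total loss of connectivity; stopping the vApp (not just disconnecting the NIC) is what clears them.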
Ah, ok. That's interesting, I didn't know that.
I'm still at a loss though. The vShield edges seem to be working fine, from the LAN I can ping both the external and internal interfaces of them.
When it comes to the VMs inside the organisations, though, I can't ping anything. They can't ping each other or the vShield gateway.
Sounds like a VCDNI issue then. The common problems I see are that the VLAN used for VCDNI is not trunked on the physical switches down to the hosts, or the hosts are not prepared correctly with the vCD host agent. I'd start with the VLAN.
Hi,
Our pool doesn't have a VLAN associated with it. The VLAN ID field in the Network Pool Settings is empty (and greyed out).
I should also mention, that this environment was working 100% yesterday.
There was some maintenance performed overnight, which entailed putting all ESX hosts into maintenance mode and moving their management interfaces to standard vSwitches (we've had some fun with the dvSwitches over the last few weeks...). I don't know if that would have affected things?
From the VCD Hosts list, I see all hosts as being OK. There are no errors there.
Try putting the hosts into maintenance mode one at a time: unprepare, reboot, re-prepare, and take each one out of maintenance mode.
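For reference, the unprepare/re-prepare part of that cycle can also be driven through the vCloud Admin Extension API instead of clicking through the UI. This is only a sketch: the base URL, host ID, and credentials are placeholders, and the endpoint paths are as I remember them from the 5.1 API docs, so verify against your own environment before scripting anything:

```python
# Sketch of driving host unprepare/prepare via the vCloud Admin Extension
# API (vCD 5.1-era paths; placeholders throughout -- verify before use).

VCD = "https://vcd.example.com"          # placeholder vCD endpoint
API = VCD + "/api/admin/extension"

def host_action_url(host_id, action):
    """Build the URL for a host action (e.g. "prepare" or "unprepare")."""
    return "%s/host/%s/action/%s" % (API, host_id, action)

def prepare_host_body(username, password):
    """XML body the prepare action expects (ESXi host credentials)."""
    return (
        '<vmext:PrepareHostParams '
        'xmlns:vmext="http://www.vmware.com/vcloud/extension/v1.5">'
        "<vmext:Username>%s</vmext:Username>"
        "<vmext:Password>%s</vmext:Password>"
        "</vmext:PrepareHostParams>" % (username, password)
    )

# Per-host workflow (actual HTTP POSTs omitted; pair each URL with your
# HTTP client and a valid vCD session):
#   1. put the host into maintenance mode in vCenter
#   2. POST host_action_url(host_id, "unprepare")
#   3. reboot the host
#   4. POST host_action_url(host_id, "prepare") with prepare_host_body(...)
#   5. take the host out of maintenance mode
```

The maintenance-mode steps themselves still happen on the vCenter side; the API only covers the vCD prepare/unprepare piece.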
Ok, I'll give that a go. Thanks.
Thanks _Morpheus_, it looks like your advice has resolved the issue. I put all the hosts into maintenance mode, unprepared, rebooted, re-prepared, and took them out of maintenance mode.
Things are back to normal now. Which leads to the obvious question, are there any known circumstances that would cause this issue to happen?
Thanks
If it was working right up until you messed with the management portgroup, then who knows what will happen to the dvfilter processing. VCDNI encapsulates packets and sends them out the management portgroup using the management IP address. I really don't know what happens if you have established VCDNI networks and then do something that disturbs the management layer. I guess users shouldn't do this type of thing, since it seems to break VCDNI.