feadin
Contributor

Cannot connect to VM after migration from VMware Server 1.6

Hello everyone,

we were running a Citrix Access Gateway 4.5 on VMware Server 1.6. My job is to set up ESXi 5.1 on new hardware and migrate the VM to that server.

Migrating the old VM to the new server using Converter Standalone worked just fine; the VM can be started and configured without problems.

However, when I try to connect to the Citrix Access Gateway using the IP by which the old VM was reachable, I get no answer from the VM.

The IP configuration of the server is correct: I can ping the physical NIC that should be used to connect to the VM (it is a different NIC than the one carrying the management IP of the ESXi host and lies in another IP network).

Unfortunately, I cannot manage the CAG using a console or CLI, so I have no way to check whether the IP configuration of the CAG is still the same as before the migration.

Does anyone have an idea what could be wrong?

weinstein5
Immortal

Does the CAG have an IP address on the 172.10.x.x subnet, or is it different? Using the vSphere Client and accessing the VM console, are you able to ping the gateway used by the CAG box?

If you find this or any other answer useful, please consider awarding points by marking the answer correct or helpful
feadin
Contributor

172.16.10.9 is the address of the physical NIC that is to be used to communicate with the CAG from the DMZ; I can ping that address.

The CAG itself has the IP 172.16.0.101, and that address does not answer pings.

weinstein5
Immortal

Networking within ESXi is different than it is in VMware Server. With VMware Server you can configure networking so that it acts like NAT, allowing you to use the IP address of the VMware Server host and have traffic carried to the virtual machine; with ESXi it is different. In its simplest sense you have a virtual switch (in your screenshot you have 2 configured). There are three types of virtual switch:

  1. Internal only - a virtual switch with no physical NICs, which allows you to create an internal-only network
  2. vSwitch with one physical NIC - allows for creating a virtual network that can also connect to a physical network
  3. vSwitch with 2 or more physical NICs - allows for creating a virtual network that can connect to a physical network and provides load balancing and fault tolerance

To the virtual switch you can connect a VMkernel port, which is a virtual NIC used for management communication to the ESXi host. The VMkernel port is assigned an IP address.

You can also have a VM port group, which is how the VMs communicate with the virtual network and, if available, out to the physical network.
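
If it helps to see that layout from the command line, here is a rough sketch of the kind of checks I mean, run in the ESXi shell or over SSH on the host (this is just a generic illustration, not taken from your setup):

    # list the standard vSwitches with their uplink NICs and port groups
    esxcli network vswitch standard list

    # list all port groups and the vSwitch each one belongs to
    esxcli network vswitch standard portgroup list

    # show the VMkernel interfaces and their IP configuration
    esxcli network ip interface ipv4 get

That should tell you which physical NIC the CAG's port group is actually using and which IP the management VMkernel port has.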

So with this configuration I am betting there is no route to the 172.16.0.x network from the 172.16.10.x network configured on the firewall.

If you find this or any other answer useful, please consider awarding points by marking the answer correct or helpful
feadin
Contributor

Thank you for your answer.

At the moment, the server and the client from which I manage it are not connected to the production network, which means there is no firewall at all between these two computers. The NICs are linked directly with two crossover cables.

I have added some routes on the client and tried many different IP settings on the client NIC, but nothing has helped so far.

I can't remember whether I have tested a route from 172.16.10.x to 172.16.0.x yet; I'll give it a try.
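
For reference, the route tests on the client look roughly like this (assuming a Windows management client; the gateway address below is just an example, since finding the right one is exactly the open question):

    rem show the client's current routing table
    route print

    rem add a route to the CAG's subnet via an example gateway
    route add 172.16.0.0 mask 255.255.255.0 172.16.10.9

    rem then test reachability of the CAG
    ping 172.16.0.101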

feadin
Contributor

I finally found the solution.

The CAG had two virtual NICs with adapter type "Flexible", so I tried adding one with type "E1000" - after that it worked!
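
For anyone who runs into the same problem: the adapter type is also visible in the VM's .vmx file. A rough sketch of what the entry for the added NIC looks like (the ethernet2 index and the "VM Network" port group name are just examples from a generic configuration, not necessarily what you will see):

    ethernet2.present = "TRUE"
    ethernet2.virtualDev = "e1000"
    ethernet2.networkName = "VM Network"

The old "Flexible" adapters typically have no virtualDev line at all, which is one way to spot them.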

Thanks again for your help :)
