We are just setting up our simple system: we have a PowerEdge 2900 with ESXi 4.0 installed (so one host). When we connect to it directly with the vSphere Client, the connection to the host stays open and everything is fine. When we connect via vCenter using the vSphere Client (an eval download of vCenter), it always times out within a couple of minutes. We have adjusted the visible timeout fields, but that doesn't seem to fix the problem.
Looking for guidance. Is this a feature of the eval vCenter?
I am thinking this may be related to network traffic. If the server initially stays connected to vCenter for a few minutes, that means the TCP connection to the server is OK; however, the return communication on UDP port 902 may not be able to get back to the system.
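To narrow this down, a quick reachability check from the vCenter box can at least confirm the TCP side of port 902. Here is a minimal sketch in Python; the host IP is a placeholder, substitute your own:

```python
import socket

def tcp_port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical host address -- substitute your ESXi host's IP:
# print(tcp_port_open("192.168.0.10", 902))
```

Note this only proves TCP reachability. The UDP 902 heartbeats are connectionless, so they can't be confirmed with a connect test like this; a packet capture on the vCenter side is the more conclusive way to see whether the heartbeats are actually arriving.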
Do you have any firewalls / routers / etc. separating the host service console from vCenter?
The vCenter server goes to the same switch as the host, so no firewalls or routing layers. Again, it works OK when I connect the client directly; it's just when it goes through the vCenter server. Note that both the vCenter server and the vSphere Client are on the same box.
Do you see any errors in the vpxd log file on the vCenter server? I think you will see the error there if it's a network issue. Otherwise, try restarting the VC service. I had seen this issue once and restarting the VC service resolved it. I am sure this is not a feature of eval mode.
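If the log is large, a small script can pull out the suspicious lines. This is just a sketch; the log path and the keyword list are assumptions (the vpxd log location varies by vCenter version and install, so check your server for the actual vpxd-*.log path):

```python
import re
from pathlib import Path

def scan_vpxd_log(log_path, keywords=("error", "timeout", "disconnect")):
    """Return log lines containing any of the given keywords (case-insensitive)."""
    pattern = re.compile("|".join(map(re.escape, keywords)), re.IGNORECASE)
    text = Path(log_path).read_text(errors="replace")
    return [line for line in text.splitlines() if pattern.search(line)]

# Hypothetical path -- locate the real vpxd-*.log on your vCenter server:
# for line in scan_vpxd_log(r"C:\path\to\vpxd.log"):
#     print(line)
```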
I had some problems some time ago with a PowerEdge 2950. Connectivity was very weird. The problem was the network interface. The PowerEdge has 2 different NICs, one Broadcom and one NVidia. The Broadcom worked fine, while the NVidia NIC lost connection from time to time, so you might try using the second NIC.
I'm just encountering exactly the same thing on one of our ESX 4.0 servers. Using a DIRECT connection to the host works fine; using vCenter Server, it keeps losing the connection to the first host every 2 minutes, regardless of user interaction, e.g. changing tabs or settings.
The second host has the same configuration and does not lose the connection.
Did anyone find a solution for this?
We ended up solving it. We had vCenter running as a VM on top of VMware Workstation on a desktop. When we installed it on the actual host (not within a VM), the problem was not there.
I think I have a solution for the problem. But first, a description of our configuration:
We have 2 ESX 4.0 servers running on 2 Dell PowerEdge 2950 III machines, both equipped with 2 quad-core Xeons and 36 GB of RAM, hosting 8 virtual machines. One of them (placed on the first machine, which had the comm errors) is a Win 2003 Server 64-bit with vCenter Server installed.
Before the upgrade from 3.5 to version 4.0, the vCenter Server was running on a dedicated physical machine. As VMware announced that running vCenter Server virtualized is supported for production, I decided to follow this and installed the server as mentioned above. The network settings were kept simple: all machines, including the vCenter Server AND the service console of the host, were bridged to all 6 physical onboard NICs of the PowerEdge.
And that's the point where the mess started. The host started to flip away within 2 minutes, regardless of what I tried to do. It seems the vswitch has a problem routing the traffic correctly from the service console to the vCenter Server through one vswitch. So I decided to set up a second service console on a dedicated NIC and connect through that one to the host - and "voila" - things worked like a charm. I removed the first service console (don't forget to unbind the gateway settings of the sc, otherwise you won't be able to remove it), reassigned the IP settings to the second one, and everything still works fine.
For testing purposes I assigned several different NICs to the sc, and all of them were working - I only mention this because of schepp's post in this thread. Our server has 2 onboard NICs and one PCI-E card with 4 NICs, and all of them work, as long as you keep the service console separated from the NICs the vCenter Server communicates over.