jtcdesigns
Contributor

VI client not connecting... no access to ESX machine through Browser

Not sure what has happened here, but I seem to have lost the connection... My virtual machines are still running on the system and I still have SSH access to it. It seems as if a firewall setting on that machine got messed up, because I have an identical machine that works completely fine. I'm currently on ESX 3.0.3. When I try to connect with the VI client, I get an error message that pops up:

Exception from HRESULT: 0x8004012C

Is there a service that needs to be restarted? I have access to the running machines through VNC, but otherwise I am kind of lost here.

7 Replies
Troy_Clavell
Immortal

You can try to restart hostd.

At the service console, type:

service mgmt-vmware restart

jayolsen
Expert

What does the output of "service mgmt-vmware status" say if you run it via your SSH session (without the quotes)? If it says stopped, try running "service mgmt-vmware start".

If it does say running, try "service mgmt-vmware restart".
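
Roughly this sequence from your SSH session (exact output will vary; this is just a sketch of the check-then-start flow):

service mgmt-vmware status     # reports whether vmware-hostd is running or stopped
service mgmt-vmware start      # if it reported stopped
service mgmt-vmware restart    # if it reported running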

jtcdesigns
Contributor

I did status and it said vmware-hostd is stopped, so I tried "service mgmt-vmware start" and I get this:

Starting VMware ESX Server Management services:

VMware ESX Server Host Agent (background)

Availability report startup (background)

So then I check the status again and it still says vmware-hostd is stopped. So for craps and giggles I try "service mgmt-vmware restart" and I get:

Stopping VMware ESX Server Management services:

VMware ESX Server Host Agent Watchdog

VMware ESX Server Host Agent

Starting VMware ESX Server Management services:

VMware ESX Server Host Agent (background)

Availability report startup (background)

And once again vmware-hostd is stopped when I check the status. Seems like something is shutting it down right away.

jayolsen
Expert

Might check your logs:

/var/log/messages

/var/log/vmware/hostd.log
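
Tailing them from your SSH session is probably the quickest way to see the most recent entries; a minimal sketch (the line counts are arbitrary):

tail -n 50 /var/log/messages            # recent service console messages, including the watchdog entries
tail -n 100 /var/log/vmware/hostd.log   # what hostd logged on its most recent start attempt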

jtcdesigns
Contributor

This is what I get when I go to /var/log/messages

# nano messages

GNU nano 1.2.1 File: messages

Dec 9 11:03:55 esx01 VMware[init]: HOSTINFO: This machine has 2 physical CPUS, 2 total cores, and 4 logical CPUs.

Dec 9 11:03:55 esx01 VMware[init]:

Dec 9 11:03:56 esx01 VMware[init]: + Segmentation fault (core dumped) setsid $CMD

Dec 9 11:03:56 esx01 VMware[init]: connect: No such file or directory.

Dec 9 11:03:56 esx01 watchdog-hostd: Executing cleanup command '/usr/sbin/hostd-support'

Dec 9 11:03:57 esx01 VMware[init]: connect: No such file or directory.

Dec 9 15:57:12 esx01 watchdog-hostd: PID file /var/run/vmware/watchdog-hostd.PID not found

Dec 9 15:57:12 esx01 watchdog-hostd: Unable to terminate watchdog: Can't find process

Dec 9 15:57:13 esx01 watchdog-hostd: PID file /var/run/vmware/watchdog-hostd.PID not found

Dec 9 15:57:13 esx01 VMware[init]: Begin '/usr/sbin/vmware-hostd -u', min-uptime = 60, max-quick-failures = 5, max-total-failures = 1000000

Dec 9 15:57:13 esx01 VMware[init]: connect: No such file or directory.

Dec 9 15:57:13 esx01 VMware[init]: connect: No such file or directory.

Dec 9 15:57:13 esx01 VMware[init]: HOSTINFO: Seeing Intel CPU, numCoresPerCPU 1 numThreadsPerCore 2.

Dec 9 15:57:13 esx01 VMware[init]:

Dec 9 15:57:13 esx01 VMware[init]: HOSTINFO: This machine has 2 physical CPUS, 2 total cores, and 4 logical CPUs.

Dec 9 15:57:13 esx01 VMware[init]:

Dec 9 15:57:13 esx01 watchdog-hostd: Executing cleanup command '/usr/sbin/hostd-support'

Dec 9 15:57:14 esx01 VMware[init]: + Segmentation fault (core dumped) setsid $CMD

I get this in /var/log/vmware/hostd.log

GNU nano 1.2.1 File: hostd.log

Log for VMware ESX Server, pid=2284, version=3.0.3, build=build-107381, option=Release, section=2

Current working directory: /var/log/vmware

HOSTINFO: Seeing Intel CPU, numCoresPerCPU 1 numThreadsPerCore 2.

HOSTINFO: This machine has 2 physical CPUS, 2 total cores, and 4 logical CPUs.

Thread info: Min Io, Max Io, Min Task, Max Task, Max Thread, Keepalive, thread kill, max fds: 2, 200, 2, 10, 25, 8, 600, 2048

Setting system limit of 2048

Set system limit to 2048

Closing stdout and stderr.

VMServices Plugin initializing

System libcrypto.so.0.9.7 library is older than our library (90701F < 90709F)

vmxLoaderPath="/usr/lib/vmware/bin/vmkload_app"

vmxFilePath="/usr/lib/vmware/bin/vmware-vmx"

vmxFilePathDebug="/usr/lib/vmware/bin-debug/vmware-vmx"

VM refresh disabled

Vmsvc Object registered

VMServices Plugin initialized

Fetch all physical NICs....

FILEIO: Failed to write to new lock file /etc/vmware/esx.conf.MULTILOCK (No space left on device).

An error occurred while fetching physical NIC: Failed to lock: /etc/vmware/esx.conf: No space left on device.

FILEIO: Failed to write to new lock file /etc/vmware/esx.conf.MULTILOCK (No space left on device).

An error occurred while fetching physical NIC: Failed to lock: /etc/vmware/esx.conf: No space left on device.

FILEIO: Failed to write to new lock file /etc/vmware/esx.conf.MULTILOCK (No space left on device).

An error occurred while fetching physical NIC: Failed to lock: /etc/vmware/esx.conf: No space left on device.

FILEIO: Failed to write to new lock file /etc/vmware/esx.conf.MULTILOCK (No space left on device).

An error occurred while fetching physical NIC: Failed to lock: /etc/vmware/esx.conf: No space left on device.

I'm not really sure what any of this means, but seeing "No space left on device" is a bit odd...

jayolsen
Expert

I'd have to bet that your issue is related to the "no space left" messages. You can type "vdf -h" to get a listing of your disk space / usage. Might be time to call support though.
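
Just as a sketch of what to look at (vdf is ESX's wrapper around df that also lists the VMFS volumes):

vdf -h     # check the Use% column; a 100% full / would explain hostd failing to write its lock file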

jtcdesigns
Contributor

Unfortunately we don't have support anymore... But I just ran that command and I have one partition, /dev/sda2, that is at 100% use and mounted on /. I'll have to look into that; I'm not completely sure what's the matter. We are planning to get ESX 3.5 along with 2 new servers anyway, since it isn't clear whether our current hardware is supported for 3.5.
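
Given the "Segmentation fault (core dumped)" lines earlier in /var/log/messages, one guess is that core files or oversized logs are what filled /dev/sda2. A rough sketch for narrowing it down (the specific paths are assumptions; verify before deleting anything):

du -sx /* 2>/dev/null | sort -n | tail          # largest top-level directories on the root filesystem, in 1K blocks
ls -lh /var/core /var/log/vmware 2>/dev/null    # possible locations for core dumps and hostd logs
# remove only files you have confirmed are safe to delete, then re-run "vdf -h" and restart mgmt-vmware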
