dneuhaeuser
Contributor

ESXi4: "No free memory for file data"???

Hello everybody,

I have a problem with a newly installed ESXi 4 machine here.

These are my hardware specs:

MB: Supermicro X8SIL-F (Intel 3420 Chipset)

CPU: Intel Core i3 530

RAM: Kingston 4 GB ECC

HDD: one SATA WD2502ABYS

Approx. 6-7 minutes after boot, the system starts to produce messages like this on the log console (Alt-F12):

"cpuX:4160(BC: 3582: Failed to flush xx buffers of size 8192 each for object 'sel.raw' 1 4 9b3 0 0 0 0 0 0 0 0 0 0 0: No free memory for file data"

These messages then appear every 15 seconds or so.

I can start virtual machines and they run fine.

But when I try to change any configuration entry, vSphere Client says:

"Error interacting with configuration file /etc/vmware/esx.conf: Write failed during Unlock. This is likely due to a full or read-only filesystem."

...

"Unable to write to file /etc/vmware/esx.conf.nrXz5e while saving /etc/vmware/esx.conf operation aborted. It is likely this was caused by a Full Disk."

But the disk cannot be full!
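(For anyone who wants to check this from the unsupported tech support console: a minimal sketch, assuming the busybox df and touch utilities are present on this ESXi build; the "writetest" filename is just a placeholder.)

    # Show how full the partitions/ramdisks are, as ESXi sees them
    df -h
    # Check whether the configuration area is actually writable
    touch /etc/vmware/writetest && rm /etc/vmware/writetest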

I have already installed the system three times from scratch, but the problem persists.

The messages even appear before any VM is created.

I just boot the machine and leave it alone for 6-7 minutes.

I tried another hard drive, just to be sure.

I patched the system to the latest available build (244038).

The problem remains.

Any idea is appreciated!

--Dennis

7 Replies
RParker
Immortal

I am only going to point out one thing: the HCL, the Hardware Compatibility List. It designates which machines and hardware are designed to work with ESX.

If you look at the list and this MB is not on it, that could be the source of the problem.

golddiggie
Champion

It's also not on the UWB mobo listing...

VMware VCP4


dneuhaeuser
Contributor

Meanwhile I discovered that the problem has something to do with the onboard IPMI on the mainboard.

First I deactivated the IPMI (BMC) feature by jumper.

I booted up ESXi and let it run for a few hours. No more error messages. No more problems.

Then I deactivated the CIM feature in ESXi and re-enabled the mainboard IPMI.

Now the server has been running for 48 hours without any problems.

The setting is on the Configuration tab, under Advanced Settings:

UserVars > UserVars.CIMEnabled

The default was 1 and I set it to 0.

Now ESXi can no longer show sensor status, but I can live with that for the moment.
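For anyone who wants to toggle this from the tech support console instead of the vSphere Client, here is a minimal sketch; it assumes the vim-cmd hostsvc/advopt commands are available on this ESXi 4.x build, and the option name is the one from the Advanced Settings dialog above:

    # Show the current value of the CIM toggle
    vim-cmd hostsvc/advopt/view UserVars.CIMEnabled
    # Set it to 0 (disabled); a restart of the management/CIM agents or the host may still be needed
    vim-cmd hostsvc/advopt/update UserVars.CIMEnabled long 0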

Perhaps some firmware update or ESXi update will sort this out sometime in the future...

--Dennis

Hatclub
Contributor

FYI, this board is working perfectly under ESX 4.1.0. I have not had a chance to test ESXi 4.1.0, but you may well find this board working properly with the latest ESXi.

I am using the dedicated IPMI port, however, since sharing/failover/etc. is undesirable for my infrastructure anyway. I had sensors working, but my network interfaces would crash all the time; that is now fixed.

ccostan
Contributor

Dennis,

Did you ever resolve the issues with the SuperMicro board and the IPMI? I am also experiencing this issue. Disabling the hardware sensors is a good workaround, though.

Thanks.

Carlo.

Carlo Costanzo | http://www.VMwareInfo.com | @ccostan
vinzent
Contributor

I have the same problem here (build 348481), but disabling CIM (UserVars.CIMEnabled = 0) didn't help.

After a while the error appears again.

Does anyone have a deeper understanding of what is going on?

Greetings

Hermann

TheDarren
Contributor

The logs in /var/log/vmware/vpxa are filling the small ESXi partition. Since we have SSH enabled on the ESXi hosts, log in and remove the old logs from this directory (cd /var/log/vmware/vpxa && rm *.gz).

We have to do this prior to making config changes to free up space, but it is easier than a reboot.
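A slightly fuller sketch of the same cleanup, assuming the busybox df and ls in the ESXi shell; the path and the *.gz pattern are the ones mentioned above:

    # Check partition/ramdisk usage before and after
    df -h
    # List the rotated vpxa logs that accumulate over time
    ls -l /var/log/vmware/vpxa/*.gz
    # Remove only the rotated (compressed) logs, leaving the current log in place
    rm -f /var/log/vmware/vpxa/*.gz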
