VMware Cloud Community
wgreb
Contributor

Strange clomd errors

Hi Everyone,

Yesterday I noticed a problem: error lines appeared in clomd.log on all ESXi nodes. Output below.

2014-12-13T20:40:57Z clomd[555227]: W110: Failed to process CMMDS update u:15b52954-feed-d001-26d9-0cc47a120f1a:t:23:o:5425b04f-0edf-65f6-fdff-0cc47a120f1a(Hlth:Healthy:2):r:7241(

2014-12-13T20:41:16Z clomd[555227]: W110: Failed to process CMMDS update u:91fe3454-661c-7c0e-aa51-0cc47a120f1a:t:23:o:5425b04f-0edf-65f6-fdff-0cc47a120f1a(Hlth:Healthy:2):r:6388(

2014-12-13T20:45:16Z clomd[555227]: W110: Failed to process CMMDS update u:4c013554-3aee-2349-eb53-0cc47a120f1a:t:23:o:5425b04f-0edf-65f6-fdff-0cc47a120f1a(Hlth:Healthy:2):r:6388(

2014-12-13T20:47:00Z clomd[555227]: W110: Failed to process CMMDS update u:3afe2a54-c41d-4986-d7f1-0cc47a120f1a:t:23:o:5425b04f-0edf-65f6-fdff-0cc47a120f1a(Hlth:Healthy:2):r:7143(

2014-12-13T20:48:11Z clomd[555227]: W110: Failed to process CMMDS update u:81fe2a54-7286-edc3-2b79-0cc47a120f1a:t:23:o:5425b04f-0edf-65f6-fdff-0cc47a120f1a(Hlth:Healthy:2):r:7143(

2014-12-13T20:48:16Z clomd[555227]: W110: Failed to process CMMDS update u:39fe3454-9c96-895a-1204-0cc47a120f1a:t:23:o:5425b04f-0edf-65f6-fdff-0cc47a120f1a(Hlth:Healthy:2):r:6389(

2014-12-13T20:48:57Z clomd[555227]: W110: Failed to process CMMDS update u:c29a2954-e0f9-82a9-fb20-0cc47a120f1a:t:23:o:5425b04f-0edf-65f6-fdff-0cc47a120f1a(Hlth:Healthy:2):r:7249(

2014-12-13T20:50:01Z clomd[555227]: I120: CLOMFetchDOMEvent:Read 576 bytes of 792 bytes from DOM

2014-12-13T20:50:01Z clomd[555227]: I120: CLOMFetchDOMEvent:Read an additional 216 bytes from DOM

2014-12-13T20:50:01Z clomd[555227]: I120: CLOM_PostWorkItem:Posted a work item for 79a68c54-58f3-bca9-55c7-0cc47a120f1a Type: PLACEMENT delay 0 (Success)

2014-12-13T20:50:01Z clomd[555227]: W110: Object size 43285303296 bytes with policy: (("proportionalCapacity" (i0 i100)) ("hostFailuresToTolerate" i1) ("spbmProfileId" "629c9562-9

2014-12-13T20:50:01Z clomd[555227]: W110: Cannot create SSD  database: Out of memory

2014-12-13T20:50:01Z clomd[555227]: W110: Couldn't generate base tree: Out of memory

2014-12-13T20:50:01Z clomd[555227]: I120: CLOMVobConfigError:CLOM experienced an unexpected error.  Try restarting clomd.

2014-12-13T20:50:01Z clomd[555227]: CLOMGenerateNewConfig:Failed to generate a configuration: Out of memory

2014-12-13T20:50:01Z clomd[555227]: W110: Failed to generate configuration: Out of memory

2014-12-13T20:50:07Z clomd[555227]: W110: Failed to process CMMDS update u:e5f02a54-5e41-1a23-5af9-0cc47a120f1a:t:23:o:5425b04f-0edf-65f6-fdff-0cc47a120f1a(Hlth:Healthy:2):r:7147(

2014-12-13T20:50:07Z clomd[555227]: I120: CLOMFetchDOMEvent:Read 576 bytes of 792 bytes from DOM

2014-12-13T20:50:07Z clomd[555227]: I120: CLOMFetchDOMEvent:Read an additional 216 bytes from DOM

2014-12-13T20:50:07Z clomd[555227]: I120: CLOM_PostWorkItem:Posted a work item for 7fa68c54-9839-18af-7f9d-0cc47a120f1a Type: PLACEMENT delay 0 (Success)

2014-12-13T20:50:07Z clomd[555227]: W110: Object size 43285303296 bytes with policy: (("proportionalCapacity" (i0 i100)) ("hostFailuresToTolerate" i1) ("spbmProfileId" "629c9562-9

2014-12-13T20:50:07Z clomd[555227]: W110: Cannot create SSD  database: Out of memory

2014-12-13T20:50:07Z clomd[555227]: W110: Couldn't generate base tree: Out of memory

2014-12-13T20:50:07Z clomd[555227]: I120: CLOMVobConfigError:CLOM experienced an unexpected error.  Try restarting clomd.

I started investigating the log files after a Veeam backup job failed for one of my virtual machines.

I resolved the problem by restarting the clomd service on all ESXi hosts and then restarting the virtual machine that had lost its vmdk disk.
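
In case it helps anyone else, the restart itself was done from the ESXi shell roughly like this (the status check is only there to confirm the daemon came back up; output may differ slightly between builds):

/etc/init.d/clomd restart

/etc/init.d/clomd status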

Has anyone experienced a similar problem? More importantly, why did it happen, and how can it be prevented?

ESXi version: 5.5.0, build-2068190

I would appreciate any help.

Cheers,

Wieslaw

CHogan
VMware Employee

This is a known issue, unfortunately.

See http://kb.vmware.com/kb/2097906 for more details and other symptoms.

My understanding is that this will be fixed in the next update.
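
Once the update is available, you should be able to confirm which version and build a host is running from the ESXi shell with:

vmware -vl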

http://cormachogan.com
wgreb
Contributor

Thanks Hogan!

I do hope the problem won't cause data loss on the vSAN datastore.
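
In the meantime, one way to check that no vSAN objects have gone inaccessible is RVC's vsan.check_state command run against the cluster; the datacenter and cluster names below are just placeholders for wherever your cluster sits in the RVC tree:

vsan.check_state /localhost/MyDatacenter/computers/MyCluster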

WGreb
