After about 49 days of uptime, the system process of my ESXi 6.7 U1 host (free version) is consuming a lot of CPU time:
ID GID NAME NWLD %USED %RUN %SYS %WAIT %VMWAIT %RDY %IDLE
1 1 system 190 39.67 784.10 0.01 17919.68 - 45.31 0.00
By expanding GID 1 in esxtop, I find that a process called "OCFlush" is responsible for the high load.
Searching the web for OCFlush yields a reddit post where people suspect this might be some kind of counter overflow, since 2^32 ms ≈ 49 days and 17 hours.
Is that a known bug? What is the purpose of OCFlush?
Having the same problem here.
ESXi: 6.7.0 Update 1 (Build 10302608)
ID GID NAME NWLD %USED %RUN %SYS %WAIT %VMWAIT %RDY %IDLE %OVRLP %CSTP %MLMTD %SWPWT
1 1 system 152 45.82 382.54 0.00 14532.68 - 54.70 0.00 16.19 0.00 0.00 0.00
Having the same problem here. ESXi: 6.7.0 Update 1 (Build 10302608). It seems to be hardware-dependent: I have the same build installed on 5 different machines and this issue occurs only on one of them, an ASRock E3C226D2I motherboard with an Intel Haswell Core i3-4170 3.7 GHz CPU and 16 GB RAM.
Maybe we could compare the device drivers the different hosts are using, to narrow down which one is causing this kind of overflow?
Same issue here,
HPE Microserver Gen10, Opteron X3421 and 16GB RAM, ESXi 6.7 U1.
Current uptime is 64 days and I can tell the system process peaks are making it heat up a bit judging from the fan throttling up and down.
Same problem on my side too ... I'm also stuck running ESXi 6.7 Update 1, on an Intel NUC8i7BEH.
Could that be the reason why, after some days/weeks, CPU consumption from the "system" process rises slowly until the server heats up and the fans spin up?
NB : Even when I switch off all the VMs residing on this ESXi host, "system" process continues to consume CPU for no obvious reason.
After a reboot, everything gets back to normal.
I'm planning on switching to 6.7 Update 2 to see if things get better ... or same ... or worse maybe ?
Can someone please advise on this if you have any information from VMware?
Thanks
I only got this working with 6.5.0 Update 1 (Build 5969303)! Since in my case it's a homelab too, this is a sufficient workaround for me.
We have the same issue with ESXi 6.7U1 build 10764712. We have this version running on different hardware generations (Xeon E5-2600v4, plus older and newer generations), and all suffer from the same issue. Latency to VMs increases a lot after 49 days of uptime.
We also raised a case with VMware Support, they sent it to engineering but so far no solution. However, after some testing (which takes 49 days per iteration...) we reached the preliminary conclusion that the issue is not present in ESXi 6.7U2 build 13473784.
Did you ever get an answer on this?
I am getting this on a new host built with v7; it's causing the guest machines to get laggy, or at least I think that's the cause.
PCPU USED(%): 9.7 46 6.7 58 51 3.0 32 25 38 18 23 34 17 15 16 31 10 36 30 11 21 22 29 17 9.7 33 18 19 33 5.7 4.1 42 NUMA: 26 21 AVG: 24
PCPU UTIL(%): 8.9 39 6.2 50 45 3.5 27 22 34 16 21 30 15 13 14 27 8.9 31 27 11 18 19 25 15 8.4 28 16 16 28 4.9 3.5 36 NUMA: 23 18 AVG: 21
CORE UTIL(%): 47 55 46 49 48 49 28 40 39 36 37 40 36 32 33 39 NUMA: 45 37 AVG: 41
ID GID NAME NWLD %USED %RUN %SYS %WAIT %VMWAIT %RDY %IDLE %OVRLP %CSTP %MLMTD %SWPWT
61240 61240 esxtop.2106034 1 5.90 5.82 0.00 94.27 - 0.00 0.00 0.00 0.00 0.00 0.00
1 1 system 648 2.41 2532.76 0.00 61594.78 - 670.92 0.00 2.53 0.00 0.00 0.00
Forgot to mention, this is all certified hardware: a Dell R740 running the Dell EMC ESXi image.