We have a slightly strange performance issue with some of our VMs, which appears to be linked to specific hosts.
We have 7 hosts and 7 VMs, each VM running an equal share of the same workload, one per host. The VMs running on 2 of the hosts show some differences:
- Within the guest OS, load average and CPU usage are far higher than shown in vCenter (for the other VMs the figures match up).
- Slower transaction response times.
- Ready time is under 1% (generally lower than on the other VMs).
- Lower CPU usage within vCenter than the other VMs.
- If we swap the VMs round, whichever 2 end up on these hosts start exhibiting the same behaviour.
- The 2 hosts are not overloaded; if anything they have lower CPU usage than the other hosts (all generally around 40%), though we do have a virtual-to-physical CPU ratio of about 3:1.
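On the ready-time point, for anyone wanting to sanity-check the sub-1% figure: vCenter's real-time charts report CPU ready as a summation in milliseconds per sample, so converting to a percentage is just division by the sample interval. A quick sketch, assuming the default 20-second real-time interval; the 150 ms reading is a hypothetical example:

```shell
#!/bin/sh
# Convert a vCenter "CPU Ready" summation value (milliseconds) into a
# percentage: ready% = 100 * ready_ms / interval_ms.
ready_ms=150        # hypothetical value read off the real-time chart, per vCPU
interval_ms=20000   # default real-time stats interval: 20 s = 20000 ms
pct=$(awk -v r="$ready_ms" -v i="$interval_ms" 'BEGIN { printf "%.2f", 100 * r / i }')
echo "CPU ready: ${pct}%"
# prints: CPU ready: 0.75%
```

The same arithmetic applies to historical charts, just with the longer rollup intervals (e.g. 300 s for past-day stats).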
We are running lots of other VMs; some show minor signs of the same behaviour, but it's one specific workload (which uses a lot of CPU) that is being affected the most.
All our hosts are running the same ESXi build, which (along with vCenter) is the latest bar one patch level.
There is some variety in hardware, but we have 2 other hosts with identical hardware that are not showing this behaviour.
I've run out of ideas. Does anyone have any suggestions as to what could be causing this?
Hi @jonl123 - Interesting one. It sounds like it could be related to the storage networking on those hosts. Is the network config the same on each host, and are the switch ports they are connected to configured identically to those on the hosts that work fine? Dare I say Jumbo Frames? Although I doubt that could cause high CPU.
If they are Linux guests, can you check the 'wa' value in 'top' to see whether the high CPU is down to waiting on IO?
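For a quick number without watching top interactively, the same iowait figure can be read from /proc/stat. A minimal sketch, assuming a Linux guest (field order per the proc(5) man page: user, nice, system, idle, iowait, ...):

```shell
#!/bin/sh
# Sample the aggregate "cpu" line from /proc/stat twice, one second apart,
# and report the share of that interval spent in iowait (top's 'wa' column).
read -r _ u1 n1 s1 i1 w1 _ < /proc/stat
sleep 1
read -r _ u2 n2 s2 i2 w2 _ < /proc/stat
total=$(( (u2 + n2 + s2 + i2 + w2) - (u1 + n1 + s1 + i1 + w1) ))
iowait=$(( w2 - w1 ))
echo "iowait over last second: $(( 100 * iowait / total ))%"
```

A persistently high iowait on just those 2 VMs would point back at the storage path on their hosts rather than at the workload itself.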