What do you mean by tuning? Sure, you can configure CPU affinity and lock a VM down to specific cores, but the VMkernel does a damn good job of scheduling the CPU when and where needed. So my advice would be to stay away from stuff like that.
The general recommendation is: unless directed by VMware support, or there is a functional requirement to change kernel parameters, don't change the default settings. The defaults work pretty well in most environments. The vSphere CPU scheduler is already well optimized and doesn't require further tuning.
What are you intending to achieve?
Most of the actual CPU tuning you would want to do inside VMware would be keeping things within your NUMA boundaries, plus reservations, shares, and/or affinity rules if push comes to shove.
The only other thing people normally "tune" in the VMkernel is the HBA queue depth for your SAN.
You will need to ask your SAN provider what value they recommend. There is also a VMware KB floating around out there, but I can't find it at the moment; it has some of the common numbers for the common SANs, i.e. VNX, EqualLogic, HP, etc.
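For reference, here's roughly what adjusting the HBA queue depth looks like from the ESXi shell. The module name and parameter below are the ones used by QLogic drivers; Emulex uses a different module (lpfc) and parameter, and the value of 64 is only a placeholder; use whatever number your SAN vendor actually recommends:

```shell
# Show the current parameters for the HBA driver module
# (module name varies by vendor -- qla2xxx here is the QLogic driver)
esxcli system module parameters list -m qla2xxx

# Set the LUN queue depth -- ql2xmaxqdepth is QLogic-specific,
# and 64 is just an example value, not a recommendation
esxcli system module parameters set -m qla2xxx -p "ql2xmaxqdepth=64"
```

The host needs a reboot for the module parameter to take effect, so plan this as a maintenance-window change.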
Thanks for responding
In your experience, have you ever seen a case where modifying the VMkernel improved CPU performance?
In the case of CPU affinity, is there any documentation that explains how to configure it?
Thanks for your time
Thanks for responding
Is this recommendation not to change the VMkernel documented in a VMware KB somewhere?
In your experience, have you ever seen anyone (or any VMware engineer recommend) needing to modify the VMkernel for better performance?
Actually, I'm trying to get to the next point. At my work I'm competing with Hyper-V, and the staff where I work asked me to research whether there is anything that improves CPU handling in the VMkernel in vSphere.
Here is a brief write-up on it:
It's old documentation, but it hasn't changed much, or at all really, in 5.x.
You are hard-coding a VM to specific cores on a physical CPU, or to hyperthreaded cores if you have hyperthreading on. What this does is essentially take the scheduler out of the equation, but at a cost: it affects DRS, resource pools, and how other VMs share the CPU, as the VM with affinity is always going to have first access. I would say the affinity rules are for very niche use cases, but if you find yourself in one of those you can always test it out. In the many troubleshooting cases I've seen, it's been rare that CPU affinity was the fix, but the option is there.
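If you do end up testing it, affinity is just a per-VM setting. In the vSphere Client it's under the VM's settings, Resources > Advanced CPU > Scheduling Affinity; the same thing can be expressed as a .vmx entry. A sketch, where the pin set "0,1" is only an example:

```
sched.cpu.affinity = "0,1"
```

Remember that a VM pinned this way can no longer be vMotioned or balanced by DRS in the normal way, which is exactly the cost described above.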
Here is a blog on the VMware CPU scheduler vs the Hyper-V CPU scheduler.
In most cases Hyper-V is still playing catch-up with VMware, as VMware is a much more mature product. That said, I wouldn't even have recommended putting Hyper-V into a production environment until Server 2012 launched, as before that it didn't have an in-house NIC teaming option. Even so, more and more companies are looking at Hyper-V to "save costs," as it comes free with the Windows license versus paying for both the VMware license and the Windows license to run on top of it. As far as performance goes, I haven't looked at any hard numbers on VMware vs Hyper-V 2012, but I did do a LOT of research on VMware vs Hyper-V 2008 R2 back when 2008 R2 was new. To make a long story short, the gains in "how fast can we make this VM" are pretty well topped out, and you will see very similar numbers in that respect. Where you see VMware leap ahead in bounds is in maturity.
What does this maturity bring to the table:
- More vendors working with the virtualization platform, and they have had longer access to the APIs. The best analogy I can think of is the phone market: look at the Apple App Store and Google Play, then look at the apps available on the Windows 8 App Store.
- More cluster options: HA, DRS, Storage vMotion, vCOps, vCAC; the list goes on and on
- Overall management time. Typically Microsoft Hyper-V clusters take a lot longer to provision and/or manage than VMware, though this again is a matter of opinion.
- Community support
- VMware support. I find VMware support more enjoyable than Microsoft's, but again, to each their own.
Anyhow, I hope this has helped