I am a Senior Software Engineer and haven't dealt much with VMware ESX Server. We have a custom-built app that creates threads and runs them on multiple processors (basically a custom EAI/ESB). There can be anywhere from 10 to 1000 of these running at a given time; some of the threads finish quickly, some run long (5 seconds to 15 minutes). Our test environment is not a VM: it's a quad-core machine with 4 GB of RAM.

When we put the app into production (an ESX farm), our admins told us that 1 CPU would be adequate and that VMware ESX would use multiple cores in the back end and spread out all of these threads, even though the guest OS has only 1 CPU. Under full load the performance is horrible, and at certain times the app basically grinds to a halt (the quad-core physical machine handles this load with no issues; when I ran the same load through it, it processed everything in about 45 minutes).

I have requested at least 2 cores on the guest, as I suspect context switching between 1000 threads is causing some of the issues, but the infrastructure group insists that ESX automatically handles spreading out the threads behind the scenes, that even though we have only 1 core on the guest it really has access to all the cores on the host machine, and that this is a code issue. (Our ESX farm is 8 quad/quad-core Xeon hosts with Hyper-Threading and 32 GB RAM; data is on a SAN.)
So my question is: is this true? I read through all the white papers, feature sheets, etc., and can't find a clear answer. From what I understand of the scheduling white papers I have read, this is not the case, but I wanted to throw the question out to the community to make sure I wasn't missing something.
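One quick way to settle part of this empirically: the guest OS can only schedule threads onto the vCPUs presented to it, and you can check how many that is from inside the guest. A minimal sketch (Python here just for illustration; run it inside the VM):

```python
import os

# The guest OS schedules threads only onto the vCPUs the hypervisor
# presents to it; user-space threads cannot "reach" the other host cores.
# On a 1-vCPU guest this reports 1, no matter how many cores the host has.
visible_cpus = os.cpu_count()
print(f"CPUs visible to the guest OS: {visible_cpus}")
```

If this reports 1 in your production guest, then all 1000 threads are time-slicing a single core, regardless of what the ESX host has behind it.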
In this case you may benefit from 2 vCPUs. A vCPU presents to the guest as 1 core on 1 socket, and ESX cannot give a single-vCPU box more CPU time than one physical core can deliver. So if your vCPU is backed by a 2.66 GHz processor, that is the maximum it will ever get.
I was told that assigning more than one vCPU to a VM results in the VM waiting until the assigned number of physical cores (one per vCPU) is available before it gets scheduled for processing.

For example, if you have a 4-core host and assign 4 vCPUs to a VM, that VM waits until all 4 physical cores are free before it is scheduled. On a heavily loaded ESX server the chances of that are low, while a VM configured with fewer vCPUs can get scheduled much more often.

As a result, a VM with a single vCPU may get physical processing time far more often than a VM with multiple vCPUs, and so end up with more physical processing time in total.
Please correct me if I'm wrong!
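The intuition above can be illustrated with a toy Monte Carlo model. This is not the real ESX scheduler (which uses relaxed, not strict, co-scheduling in later versions); it is just a sketch of why needing N cores simultaneously free hurts on a busy host. All the parameters (`p_busy`, tick model) are assumptions for illustration:

```python
import random

def run_chance(n_vcpus, cores=4, p_busy=0.5, trials=100_000, seed=1):
    """Toy model: each physical core is independently busy with
    probability p_busy at a scheduling tick. A strictly co-scheduled
    VM with n_vcpus only runs when that many cores are free at once.
    Returns the fraction of ticks the VM could be scheduled."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        free = sum(rng.random() > p_busy for _ in range(cores))
        if free >= n_vcpus:
            hits += 1
    return hits / trials

# A 1-vCPU VM only needs any one core free (~94% of ticks here),
# while a 4-vCPU VM needs all four free at once (~6% of ticks).
print(run_chance(1), run_chance(4))
```

Under these assumptions the single-vCPU VM gets a scheduling opportunity roughly fifteen times as often, which is the trade-off the answer above describes.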
This is (mostly) true, which is why you should be careful about multi-vCPU VMs and avoid them if possible. It doesn't sound like your app lends itself to avoiding them, though, and it does appear to use multiple cores well, so you should try it. It will, of course, load up your physical hardware faster.
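Whatever vCPU count you end up with, capping the app's own concurrency can also reduce the context-switching overhead of 1000 raw threads. A minimal sketch of a bounded worker pool (Python for illustration; `handle` and the pool-sizing heuristic are hypothetical stand-ins, not your app's actual code):

```python
import concurrent.futures
import os

def handle(msg):
    # Placeholder for one unit of integration work (the real app's
    # message processing would go here).
    return msg * 2

# Size the pool relative to the CPUs the guest actually sees, instead
# of spawning one thread per in-flight message. The 4x multiplier is
# an assumed starting point for I/O-heavy work; tune it under load.
max_workers = (os.cpu_count() or 1) * 4

with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as pool:
    # 100 stand-in messages; map preserves input order in its results.
    results = list(pool.map(handle, range(100)))

print(results[:5])  # -> [0, 2, 4, 6, 8]
```

A bounded pool keeps the number of runnable threads near what the guest can actually execute, which matters most when the guest has only 1 or 2 vCPUs.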
VCP, vExpert, Unix Geek