joeyCon
Contributor

Variable IIS performance depending on load from other ESX VMs

I am running a web server on an ESX box.

When my web server is the only VM running on the ESX box, I get a throughput of 40 requests per second, with no queues building up.

When I place 4 other VMs on the box (under heavy load, both CPU and network I/O), the throughput of my web server drops to 20 requests per second.

All VMs have a single vCPU. All are Windows 2003 with IIS 6.

This happens whether I pin each VM's vCPU to a specific core or allow them to roam. When I pin to a specific core I also turn off hyperthreading.
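
In case the exact mechanism matters: the pinning is done with the per-VM scheduler options in the .vmx, roughly like this (the core number is just an example from my layout):

    sched.cpu.affinity = "2"
    sched.cpu.htsharing = "none"

The first line pins the VM's vCPU to physical CPU 2, and as I understand it the second is the per-VM way to keep the VM from sharing a hyperthreaded core.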

There are several spare cores to allow spare CPU capacity for virtual switches, etc.

Has anyone else had these issues? Am I missing something?

I'd be very grateful for any assistance.

Joe

weinstein5
Immortal

What is the configuration of your ESX host? How many CPUs/cores? How much memory? That is normal behavior if you are constrained by resources.
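
A quick way to confirm that is esxtop from the service console while the load is running:

    esxtop    # then press c for the CPU panel

Watch the %RDY column for each VM - if ready time sits above a few percent, the vCPUs are queuing for the scheduler, which would explain the drop.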

If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful
joeyCon
Contributor

All VMs have a single vCPU and 2 GB of memory - memory is not the bottleneck, and there are no disk queues.

I am thinking the issue may be down to the creation of virtual switches; however, I would have expected other people's web servers to have experienced this phenomenon.

There are spare cores for the host to utilize - does ESX work this way?

I believe that it is recommended to split heavily loaded web servers across ESX boxes - do you know the reason for this?

weinstein5
Immortal

ESX schedules each vCPU onto a single core, so each of your VMs will be scheduled to a core of its own. ESX does this very well, so the best practice is not to set affinity.

How is your networking configured? Are you using a NIC team? I know you say memory is not a bottleneck, but how much memory does each host have?
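
If you are not sure how the networking is laid out, these list it from the service console:

    esxcfg-vswitch -l    # vSwitches, port groups and their uplink NICs
    esxcfg-nics -l       # physical NICs with driver, link state and speed

That will show whether all five VMs are going out over a single uplink.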

If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful
joeyCon
Contributor

I am undertaking this activity to ascertain performance metrics (maximum throughput) for our IIS web server.
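
For context, the load test does the equivalent of the sketch below (a simplified stand-in, not our actual harness - the URL, thread count, and duration are placeholders):

    # Minimal throughput probe: several worker threads hammer one URL
    # and report requests per second at the end.
    import threading, time, urllib.request

    URL = "http://webserver/test.aspx"   # placeholder target page
    THREADS = 10                         # concurrent clients (placeholder)
    DURATION = 30                        # seconds to run (placeholder)

    lock = threading.Lock()
    completed = 0

    def worker(stop_at):
        global completed
        while time.time() < stop_at:
            try:
                # issue one request and read the body so the send completes
                urllib.request.urlopen(URL, timeout=10).read()
                with lock:
                    completed += 1
            except OSError:
                pass  # failed requests simply do not count toward throughput

    stop_at = time.time() + DURATION
    threads = [threading.Thread(target=worker, args=(stop_at,)) for _ in range(THREADS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print("%.1f requests/sec" % (completed / DURATION))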

I ran with affinity set so that I could compare performance between a physical box (single core) and an ESX box pinned to a single core. I also ran the test without affinity set and got the same performance.

We have 2 GB of memory per VM; during peak load there is more than a gig free on each VM.

There is one NIC shared across 14 ESX boxes. When I place the other VMs on a separate ESX box using the same NIC, the problem goes away - so I can't see how it can be NIC related.

The web server's CPU is at 100% in both tests, so it doesn't appear that the network is bottlenecking the web server.

If the other VMs on the ESX box are not network intensive but are high on CPU (a factorial calculation on each), there is no problem!

Is it normal practice to have multiple servers with heavy network I/O (and simultaneously high CPU) on a single ESX box? (Note: even when there are several spare cores on the ESX box.)
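
One more thing I can watch while the test runs is the network panel in esxtop:

    esxtop    # then press n for the network panel

If the %DRPTX / %DRPRX columns show drops on the VMs' ports, the contention is in the virtual switch path rather than on the physical NIC.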

joeyCon
Contributor

For those of you who have been reading this thread: the problem has been identified as a dodgy VM.

This behaviour does not happen when a different VM is used - even though that VM was cloned from the same image, has identical software installed, and runs an identical test.

Has anyone else encountered radical differences in performance characteristics between cloned VMs?
