I have a unique environment, separate from our production vSphere environment. Short version: it is a water table modeling environment. We feed these systems variables and conditions along with past historical scenarios, it crunches the numbers for a few days, and then it outputs a set of results. Basically, we are trying to predict the future! :smileysilly:
Anyway, this workload is extremely CPU-intensive (runs at 100% when launched) and runs for a number of days at a time. Due to very expensive licensing of the software package, we can only run two of these machines at a time. The physical environment is a DL580 with (4) 10-core CPUs and 512GB RAM. The virtual environment is 2 VMs on this host, located on shared storage (EqualLogic); each VM also has a dedicated physical drive in the host for paging (900GB 10K SAS each).
Ideally, we could just assign each VM 20 of the 40 available cores, but we also need to account for the overhead of the host server itself. Here are my thoughts on configuring this to maximize performance without creating %ready time for the VMs. Any comments/suggestions are welcome, as we would like to get as much performance out of this host/VMs as possible.
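To keep an eye on the %ready concern, here is a rough sketch using pyVmomi (the vCenter hostname, credentials, and the "WATERMODEL-VM1" VM name are placeholders I made up). It pulls the latest realtime cpu.ready sample and converts it to a per-vCPU percentage comparable to esxtop's %RDY:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab only; use real certs otherwise
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

def find_vm(name):
    """Locate a VM by display name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == name)
    view.Destroy()
    return vm

def cpu_ready_percent(vm):
    """Latest realtime cpu.ready sample as a per-vCPU percentage.

    Realtime samples cover a 20-second window; the aggregate instance sums
    ready time (ms) across all vCPUs, so divide by the vCPU count to get a
    figure comparable to esxtop's per-vCPU %RDY.
    """
    pm = content.perfManager
    key = next(c.key for c in pm.perfCounter
               if c.groupInfo.key == "cpu" and c.nameInfo.key == "ready"
               and c.rollupType == "summation")
    spec = vim.PerformanceManager.QuerySpec(
        entity=vm, maxSample=1, intervalId=20,
        metricId=[vim.PerformanceManager.MetricId(counterId=key, instance="")])
    ready_ms = pm.QueryPerf(querySpec=[spec])[0].value[0].value[0]
    return 100.0 * ready_ms / 20000.0 / vm.config.hardware.numCPU

print(cpu_ready_percent(find_vm("WATERMODEL-VM1")))
```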
In short, I am thinking of setting CPU affinity rules for each VM. For example, VM1 would have a CPU affinity of 0-18 and VM2 a CPU affinity of 19-37. The host would still have CPUs 38-39 for its own functions. Sound reasonable? Can I set affinity for the host so it only tries to use 38-39?
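Here is how I am planning to script the affinity piece, reusing the connection and find_vm() helper from the snippet above (the VM names are the same placeholders). vSphere exposes scheduling affinity per VM via vim.vm.AffinityInfo; I have not found a host-level equivalent for pinning the VMkernel itself, hence the question above:

```python
def set_cpu_affinity(vm, cpu_ids):
    """Restrict the VM's vCPU worlds to the given physical CPU IDs.
    Safest to apply while the VM is powered off."""
    spec = vim.vm.ConfigSpec()
    spec.cpuAffinity = vim.vm.AffinityInfo(affinitySet=cpu_ids)
    vm.ReconfigVM_Task(spec=spec)   # completes asynchronously

set_cpu_affinity(find_vm("WATERMODEL-VM1"), list(range(0, 19)))   # CPUs 0-18
set_cpu_affinity(find_vm("WATERMODEL-VM2"), list(range(19, 38)))  # CPUs 19-37
```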
I am also thinking of over-committing the memory. Since these VMs will be doing similar tasks because they are running the same modeling software, I think it is safe to assume they will be able to share much of the same physical RAM. We will have to watch vCenter to see how memory is actually being utilized, but I am thinking of starting off with 384GB for each VM.
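And a sketch for the watching part, again reusing the connection from above; these quickStats fields are what vCenter's VM summary reports. One caveat I am aware of: depending on the ESXi version, inter-VM transparent page sharing may be salted off by default, and large pages can keep sharing low until the host comes under memory pressure, so the shared number may start out small:

```python
def memory_report(vm):
    """Print the memory quickStats relevant to an over-commit decision."""
    qs = vm.summary.quickStats
    print(f"{vm.name}: active {qs.guestMemoryUsage} MB, "
          f"consumed {qs.hostMemoryUsage} MB, "
          f"shared {qs.sharedMemory} MB, "
          f"ballooned {qs.balloonedMemory} MB, "
          f"swapped {qs.swappedMemory} MB")

for name in ("WATERMODEL-VM1", "WATERMODEL-VM2"):
    memory_report(find_vm(name))

Disconnect(si)
```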