JCS725
Contributor

DRS and Resource Pool Advice Needed

I've been running ESX for almost 3 years now and am just now looking to enable DRS. My servers are getting populated enough that I think it will help.

I've set up 4 freshly loaded ESX 3.5 U3 servers (fully patched as of this morning) and created a cluster. I've set the cluster to fully automated.

These boxes are only going to host production guest VMs. All CPU and memory should be available for them.

Between the 4 boxes I have 70,865 MHz and 117,148 MB of memory.

How should I set this up?

Thanks for any advice.

0 Kudos
19 Replies
Jasemccarty
Immortal

What's the breakdown of the guest types?

OS's? Workloads? etc?

Jase McCarty

http://www.jasemccarty.com

Co-Author of VMware ESX Essentials in the Virtual Data Center

(ISBN:1420070274) from Auerbach

Please consider awarding points if this post was helpful or correct

Jase McCarty - Field SA at PureStorage - @jasemccarty
0 Kudos
JCS725
Contributor

85% Windows

15% Linux

My ESX hosts average 40% CPU and maybe 30% memory usage.

0 Kudos
Jasemccarty
Immortal

Hmmm...

85% Windows (all the same build?)

15% Linux (all same distro?)

Jase McCarty

http://www.jasemccarty.com

Co-Author of VMware ESX Essentials in the Virtual Data Center

(ISBN:1420070274) from Auerbach

Please consider awarding points if this post was helpful or correct

Jase McCarty - Field SA at PureStorage - @jasemccarty
0 Kudos
JCS725
Contributor

Sorry no,

The Windows servers are

90% Windows 2003 (most are SP2, but we have some that can't be patched)

5% Windows 2000 (SP level varies)

5% Windows 2008

Of the Windows servers, I would say 50% have a DB of some type on them. The rest are application, file, or web servers.

The Linux servers are split 50/50 between AS4 and AS5.

0 Kudos
weinstein5
Immortal

I would start simple, with no resource pools, particularly if we are not talking about a large number of VMs. If you want to group them together, then create two resource pools, leaving shares at Normal with no reservations or limits.

If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful

kjb007
Immortal

Resource pools are not always set up for technical reasons; financial and political implications come into play as well. Do you have separate business areas or customers? Who paid for the hardware? Who paid for the licenses? Everything comes into the mix. If all contributed equally, then you may want to consider administrative boundaries as well. Are they all managed by the same set of users?

I have set up configurations based on SLA (resource based) as well as financial/political splits: this group gets x% of shares and that one gets y%. I've also split pools up by OS before even getting to the other factors. Take everything into account early, so you don't have to go back and fix things later.

-KjB

vExpert/VCP/VCAP vmwise.com / @vmwise -KjB
JCS725
Contributor

We charge a flat fee for virtual servers that we add to the customer's contract. After we get to an 8:1 consolidation ratio, we've covered our costs.

Anything beyond 2 GB of RAM and 50 GB of disk space, they pay extra for. We don't have any CPU limitations or restrictions in the contract, although we have had to cap a couple of DB servers because of poorly written code from the vendor.

All the VMs are managed by me. Right now I have 10 ESX servers with about 170 guests. I'm adding these 4 new hosts and want to make sure I do it right.

If I don't establish a resource pool and I put a host in maintenance mode, will it shift the VMs off? I'm trying to make my life a little easier on patch days as well.

Any more options or suggestions are greatly appreciated

0 Kudos
kjb007
Immortal

Maintenance mode will work just fine.

I would keep it simple then. Create a global pool, and under it create a pool for each customer. That way each customer is given the same priority, and one will not clobber another. If in the future one were to pay for a higher SLA, their shares can be increased at that level. Inside the customer level, I'd create separate pools that allow me to raise priority based on application or server type; that way a DB server can get more than a web server, if needed. They should all be expandable. Also, I would not overcommit RAM unless I was pressed for it, which hopefully you aren't in a prod environment. That leaves disk and CPU: disk should be allocated as needed anyway, and CPU, when contended for, will at least be satisfied at the same level for each customer.
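To see why equal per-customer shares keep one customer from clobbering another, here's a tiny sketch of proportional-share division. This is not the ESX scheduler, just the arithmetic behind shares; the pool names and share values are hypothetical:

```python
# Simplified model of proportional-share allocation under contention.
# NOT the actual ESX scheduler -- just an illustration of how shares
# divide a parent pool's capacity among sibling pools when demand
# exceeds supply. Pool names and share values below are made up.

def allocate(capacity_mhz, pools):
    """Split capacity among sibling pools proportionally to their shares.

    pools: list of (name, shares) tuples.
    """
    total_shares = sum(shares for _, shares in pools)
    return {name: capacity_mhz * shares / total_shares
            for name, shares in pools}

# Global pool holds the whole cluster; each customer pool gets equal
# shares, so no customer can starve another when CPU is contended.
cluster_mhz = 70865  # the total quoted in this thread for the 4 hosts
customers = [("customer-a", 1000), ("customer-b", 1000), ("customer-c", 1000)]
per_customer = allocate(cluster_mhz, customers)

# Inside one customer's pool, weight DB servers above web servers.
inner = allocate(per_customer["customer-a"], [("db", 2000), ("web", 1000)])
print(per_customer)
print(inner)
```

Because the split is by ratio, adding hosts (more MHz) or more customer pools changes everyone's slice automatically, with no fixed values to maintain.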

-KjB

vExpert/VCP/VCAP vmwise.com / @vmwise -KjB
0 Kudos
JCS725
Contributor

Do you have a link to some documentation that explains how to do this?

Doing this by customer will be difficult. While one agreement might have 1 or 2 VMs, the next might have 50 or more. Also, I have many agreements that are adding their first virtual servers soon, and even more that are adding additional ones. How much time would I be spending assigning machines to resource pools?

Maybe I'm not understanding pools correctly. The way I'm imagining it, I would have to assign 10,000 MHz of CPU and 20 GB of RAM to group 1, X MHz and X RAM to group 2, etc. That seems like a PITA to keep up with. Is it?

Right now I have 2 racks of ESX hosts. In Rack A I have 7 Dell 2950s. In Rack B I have 3 Dell 2950s. In VirtualCenter I have one datacenter. Under that DC I have folders for Rack A and Rack B with the corresponding ESX hosts underneath. Under the hosts I keep the guests organized myself.

I've added my 4 new servers to Rack B and created a cluster that contains those 4. My ultimate goal is to create Cluster Rack A and Cluster Rack B with 7 ESX hosts in each. I want VirtualCenter to balance the load, because every once in a while I get a rogue server that churns away, and it's OK if I can move it to a less populated host.

Also, doing maintenance in my current setup is a whipping of the highest order. I have to move 10 machines off a host to this one, 10 to that one; once everything is off, run Update Manager and reboot. Then start the same dance for the next host.

Also, I'm in a prod environment. I work for a university hospital, so many of these systems are related to patient care, and I'm only allowed very small change windows once a month.

0 Kudos
kjb007
Immortal

Do you really mean to separate your clusters solely by rack? I don't think I would separate them in this manner. If they are performing different functions, prod/pre-prod, then well and good, but Rack A and Rack B seems like it would not be the most efficient use of those resources. If they can be part of the same cluster, i.e., they meet VMotion requirements, then I would keep them in one cluster and have DRS manage those resources for me at the cluster level.

For managing CPU, you can create Normal and High pools under a global pool. Those can be weighted differently using shares. I wouldn't use limits or set a fixed value in MHz. Let the pools be expandable, but use shares to manage contention. Put servers that require more CPU cycles in the High pool and the rest in the Normal pool. You can also create a third pool with Low shares for your problem hosts, so if one starts going crazy it will have lower priority than the Normal or High pools.
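If I recall the defaults correctly, resource pool CPU shares at the High/Normal/Low levels are 8000/4000/2000, a 4:2:1 ratio. A quick sketch of how that ratio would carve up contended CPU (figures are illustrative; real entitlement also depends on each pool's actual demand and any reservations):

```python
# Sketch of how High/Normal/Low share levels split contended CPU.
# Assumes the common 8000/4000/2000 (4:2:1) default CPU share values
# for resource pools; verify against the Resource Management Guide.

SHARE_LEVELS = {"high": 8000, "normal": 4000, "low": 2000}

def contended_split(capacity_mhz, pool_levels):
    """pool_levels: {pool_name: 'high'|'normal'|'low'} -> MHz per pool."""
    total = sum(SHARE_LEVELS[lvl] for lvl in pool_levels.values())
    return {pool: capacity_mhz * SHARE_LEVELS[lvl] / total
            for pool, lvl in pool_levels.items()}

# Hypothetical three-pool layout from this thread, splitting the
# cluster's 70,865 MHz in a 4:2:1 ratio when everything is busy.
pools = {"High": "high", "Normal": "normal", "Problem": "low"}
print(contended_split(70865, pools))
```

Note that when there is no contention, these ratios never kick in; every pool simply gets what its VMs demand.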

Hope that makes sense.

-KjB

vExpert/VCP/VCAP vmwise.com / @vmwise -KjB
0 Kudos
JCS725
Contributor

Well, right now I've got 2 different SANs. Rack A and the 3 older servers in Rack B are connected to both the old SAN and the new SAN. I'm slowly using Storage VMotion to migrate guest VMs off the old SAN and onto the new one. Then we're taking the old SAN hardware to our DR site.

The 4 new hosts in Rack B are only connected to the new SAN. Once I get all my guests migrated from the old SAN to the new one, I can VMotion them to the new servers I've installed.

I just thought it was best to divide the hosts so as not to have too many SAN datastores. Right now I've got 24 LUNs on the old SAN and 13 on the new SAN, so my original 10 ESX servers have 27 SAN-attached datastores ranging in size from 300 GB to 1 TB (the new SAN LUNs are all 500 GB). I've talked with a couple of people, and they told me I should keep the number of LUNs per pool to 16-20. That's the only reason I was going to create 2 clusters.

So I guess my point is that, right now, VMotion will not work across all hosts because the new servers can't see the old SAN.

Do you have a link to a document that can walk me through the configuration of share-based resource pools?

0 Kudos
kjb007
Immortal

The Resource Management Guide is the best place to start.

I'm not sure I agree with the LUNs-per-pool concept. Since they're separate LUNs, you shouldn't have issues making those LUNs visible to all of your servers. If you were going over 20 hosts, I'd think about separate clusters. ESX can drive a lot more I/O than that, so I'm not sure what the basis of that statement was. You'll be able to use all of your resources more efficiently if you have them in a single pool, especially if there isn't some other reason for the demarcation between the two.

-KjB

vExpert/VCP/VCAP vmwise.com / @vmwise -KjB
0 Kudos
JCS725
Contributor

5196_5196.jpg

KJB,

I really appreciate your help. I've attached a picture of my cluster. I've added 3 resource pools as you recommended. The majority of my machines will go in the Normal pool, problem children will go in the Low pool, and if need be the ones needing more power will go in the High pool. Does this look right? When I get ready to move a VM over, do I just drag it to the resource pool rather than to the host?

Would you recommend I do something in addition to the above or something different?

What about when I add another host to this cluster? Will it automatically become part of the pool?

I really appreciate your help

0 Kudos
kjb007
Immortal

I would create a global pool and put all of these resource pools as children of the global pool.

As you add additional hosts, the global pool will grow, and the resources will be updated automatically. That is the good thing about using shares and not specifying fixed values.

Yes, when you want to add VMs, you just drag them into the pool you want. If you have many VMs, remember that if you click the datacenter on the left-hand side of the VI Client and select the Virtual Machines tab on the right, you can select multiple VMs at the same time with Shift-click or Ctrl-click, just like in Windows Explorer.

-KjB

vExpert/VCP/VCAP vmwise.com / @vmwise -KjB
0 Kudos
JCS725
Contributor

KJB,

I just sent you a private message.

Thanks again for the help

0 Kudos
kjb007
Immortal

No problem. Hope all is working as expected.

-KjB

vExpert/VCP/VCAP vmwise.com / @vmwise -KjB
0 Kudos
kalex
Enthusiast

One more thing: if I remember correctly from reading the Resource Management Guide, shares are used only if there is contention on the cluster. So if your cluster is not overloaded, your shares will not kick in; unless resource contention occurs, your VMs will get the same CPU and memory resources. Once contention starts, the cluster will begin allocating resources based on pool shares.

Alex

0 Kudos
weinstein5
Immortal

That is correct

Sent from my iPhone


If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful
0 Kudos
kjb007
Immortal

You are partially correct. Yes, shares matter only when contention occurs. Until that time, each VM will get what it asks for, not the same CPU and memory resources. When there is contention, CPU cycles will be given out based on shares, and in the case of memory overcommitment, memory will be swapped and provided based on shares. This is why I don't overcommit memory if I can avoid it, especially in a prod scenario.
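A toy illustration of that distinction, with made-up numbers: below capacity every VM simply gets its demand, and only once total demand exceeds capacity does the grant become share-weighted. (Real ESX also redistributes any grant above a VM's demand back to the others; this sketch ignores that refinement.)

```python
# Toy model of the "shares only matter under contention" point.
# All VM names, demands, and share values are hypothetical.

def entitle(capacity_mhz, vms):
    """vms: list of (name, demand_mhz, shares). Returns granted MHz per VM."""
    total_demand = sum(d for _, d, _ in vms)
    if total_demand <= capacity_mhz:
        # No contention: everyone gets exactly what they ask for.
        return {name: float(d) for name, d, _ in vms}
    # Contention: grant capacity in proportion to shares.
    total_shares = sum(s for _, _, s in vms)
    return {name: capacity_mhz * s / total_shares for name, _, s in vms}

vms = [("db", 3000, 2000), ("web", 3000, 1000)]
print(entitle(8000, vms))  # {'db': 3000.0, 'web': 3000.0} -- demand met, shares idle
print(entitle(4500, vms))  # {'db': 3000.0, 'web': 1500.0} -- 2:1 share-weighted
```

This is why equal usage before contention says nothing about the share settings: they are a contention policy, not a static carve-up.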

-KjB

vExpert/VCP/VCAP vmwise.com / @vmwise -KjB
0 Kudos