VMware Cloud Community
valicon
Contributor

vSphere 5 deployment and server build question

I am planning to deploy vSphere 5 in our environment and will be storing the VMs on an HP SAN. The original server build was spec'd out as a Dell R510 with two Intel Xeon X5670 2.93 GHz processors and 64GB of RAM. The server has twelve 600GB drives. As I did not spec this server out, I believe it is overkill in the drive department, since I will be storing my VMs on the SAN. My plan is to use two servers running ESXi 5 (one being the failover). I plan to virtualize 3-5 boxes and would like to have the vMotion and high availability options, so I have been told to purchase either the vSphere 5 Enterprise edition or the vSphere 5 Acceleration Kit. What is the best recommendation in terms of hardware for these two servers, keeping in mind that I will be using an iSCSI HP SAN for VM storage? I was also told to install vCenter as a VM on the failover server - thoughts on that?

Finally, does vSphere give me the ability to virtualize desktops as well? Any help would be great. Thank you in advance.

1 Solution

Accepted Solutions
weinstein5
Immortal

Yes, that is correct - the more resources you can provide to your ESX servers, the more VMs you can run. Those resources are memory, CPU, disk I/O and network bandwidth. In my experience the resources people run out of first are:

1) Memory - not having enough memory in the host

2) Disk I/O - not planning for enough disk bandwidth - e.g. using only a single HBA, or placing disk-intensive VMs on the same LUN

Also keep in mind the two technologies: VMware DRS will ensure the VMs receive the appropriate resources, and VMware HA will ensure enough resources are available for failover - so there is no need to keep one of your hosts idle; it can be used for production while still ensuring proper functioning of your VMs.

And in my experience I have been able to place 40-50 VMs on a host with 128 GB with no performance issues.

If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful

View solution in original post

13 Replies
weinstein5
Immortal

You are right - that is WAY overkill on the drives since you are storing your VMs on the SAN. The Acceleration Kit is really the same licensing just bundled with vCenter - since you are only running 2 servers, you might look at buying separate vSphere Enterprise licenses with a vCenter Standard license.

The hardware you mentioned is more than enough for just 5 virtual machines - in fact, depending on the load of the virtual machines and how you configure them, you could run 7-8 times that number on that hardware.

There is no problem running vCenter as a VM.

vSphere does give you the ability to virtualize desktops, but I would take a look at VMware View to help virtualize and manage your virtual desktop environment.

If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful
valicon
Contributor

Thanks for the reply. The second server is just going to be used as a failover server. How can I get 10 VMs out of one server? What build would I need?

AureusStone
Expert

Rather than running one server at around 100% utilisation and keeping the other one as a failover, you would typically run both hosts at under 50% utilisation. That way, if either host failed, you would have enough capacity to fail over the VMs.

This configuration would give you 64GB of usable RAM while maintaining HA. So if you were running 10 VMs, you could use on average 6.4GB of RAM per VM. In reality it would be uncommon to need an average of 6.4GB per VM; you would typically use much less.
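To make that math concrete, here is a minimal sketch of the N+1 calculation using the figures quoted above (Python is just for illustration; nothing here is VMware-specific):

```python
# Rough N+1 capacity check for a two-host cluster, using the numbers from this
# thread (two hosts with 64GB each, about 10 VMs). Purely illustrative arithmetic.

host_ram_gb = 64            # RAM installed in each host
host_count = 2
vm_count = 10

# With one-host failover (N+1), one host's worth of RAM must stay free,
# so the usable pool is (host_count - 1) * host_ram_gb.
usable_ram_gb = (host_count - 1) * host_ram_gb

avg_ram_per_vm_gb = usable_ram_gb / vm_count
print(f"Usable RAM with one host reserved for failover: {usable_ram_gb} GB")
print(f"Average RAM available per VM:                   {avg_ram_per_vm_gb:.1f} GB")
```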

valicon
Contributor

I see. So if I bump the RAM up to 128GB, I will be able to provision more VMs, yes?

AureusStone
Expert

Pretty much.

Most organisations run out of RAM before CPU. So if you are running a very CPU-heavy workload, you will be CPU bound and adding extra memory will not help. That is pretty uncommon, though.

With 128GB per host in a 2-node cluster you will have 128GB of RAM to use (the second host's capacity is effectively reserved for failover). So you will have to ensure that whatever license you choose has sufficient vRAM for your requirements.
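If it helps, here is a minimal sketch of that vRAM check. The per-CPU entitlement below is an assumption for illustration only - verify the figure for whichever edition you buy, since VMware revised the entitlements after the vSphere 5 launch:

```python
# Hypothetical vRAM pool check for a 2-host, 2-socket-per-host cluster.
# The 64GB-per-CPU-license entitlement is ASSUMED (check your edition);
# the VM sizes below are made-up examples.

cpu_licenses = 2 * 2                     # 2 hosts x 2 sockets each
vram_per_license_gb = 64                 # assumed entitlement per CPU license

vram_pool_gb = cpu_licenses * vram_per_license_gb

# Configured vRAM of the powered-on VMs counts against the pool.
vm_vram_gb = [8, 8, 6, 6, 4, 4, 4, 4, 2, 2]   # e.g. 10 VMs (hypothetical sizes)
configured_vram_gb = sum(vm_vram_gb)

print(f"Pooled vRAM entitlement: {vram_pool_gb} GB")
print(f"Configured vRAM in use:  {configured_vram_gb} GB")
print("Within entitlement" if configured_vram_gb <= vram_pool_gb
      else "Over entitlement - more licenses or smaller VMs needed")
```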

valicon
Contributor

Thanks, now I am understanding much better :) One more question: if I am going to store the VMs on the SAN, can I not just use two 146GB drives in each server and RAID 1 them? I don't think I will need any more drives than that, as I won't be storing any data on the servers. What is your take on that? What is best practice in a case like this? Thanks again for your help!

weinstein5
Immortal

Yes, that is correct - the more resources you can provide to your ESX servers, the more VMs you can run. Those resources are memory, CPU, disk I/O and network bandwidth. In my experience the resources people run out of first are:

1) Memory - not having enough memory in the host

2) Disk I/O - not planning for enough disk bandwidth - e.g. using only a single HBA, or placing disk-intensive VMs on the same LUN (a rough placement check is sketched at the end of this post)

Also keep in mind the two technologies: VMware DRS will ensure the VMs receive the appropriate resources, and VMware HA will ensure enough resources are available for failover - so there is no need to keep one of your hosts idle; it can be used for production while still ensuring proper functioning of your VMs.

And in my experience I have been able to place 40-50 VMs on a host with 128 GB with no performance issues.
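On the disk I/O point, a rough placement check like the one below can help before several disk-intensive VMs land on one LUN. Every number here is a made-up placeholder - measure your actual workloads and get realistic per-LUN figures from the SAN vendor:

```python
# Hypothetical IOPS budgeting for a single iSCSI LUN. All figures are
# placeholders, not measurements from this environment.

lun_iops_budget = 1500          # assumed sustainable IOPS for one LUN

planned_vms = {                 # VM name -> estimated steady-state IOPS
    "file-server": 300,
    "sql-server": 900,          # the disk-intensive one
    "print-server": 50,
    "web-server": 150,
}

total_iops = sum(planned_vms.values())
print(f"Planned load: {total_iops} IOPS against a budget of {lun_iops_budget} IOPS")

if total_iops > lun_iops_budget:
    print("Over budget - spread the disk-intensive VMs across separate LUNs")
else:
    print("Within budget for this LUN")
```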

If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful
valicon
Contributor

Do you all think I could use two servers, each with two 146GB drives in RAID 1 and no other drives, since we are using the SAN for the VMs? Is this a feasible plan?

JimPeluso
Enthusiast

Yes, you can do that. That's fine for installing the hypervisor and running it, especially since you are storing everything on the SAN, as reiterated above.

"The only thing that interferes with my learning is my education." If you found this information useful, please consider awarding points for "Correct" or "Helpful"
newtovms
Contributor

Valicon,

I'm deploying my first VM environment as we speak (the hardware looks all nice and pretty in the rack) and I've gone with a very similar approach to what you are looking at. I went with two Dell R710s with 48GB of RAM, dual 6-core Xeons, and only two 146GB 15k SAS drives in RAID 1 just to host the hypervisor. The VMs will live on a Dell MD3200i with twelve 600GB 15k SAS drives. I'm planning on grabbing a vSphere Essentials Plus kit and running 3 VMs on each host server with HA and vMotion, so if one host dies the 3 VMs on it move over to the other one seamlessly. Like you, I only really have a need for 6 VMs at the moment.

valicon
Contributor

Newtovms,

Your situation is a lot like mine. Just curious - why did you go with a Dell SAN? I have looked at the HP LeftHand and the HP 2000; both seem good, but they are very expensive in relation to the amount of storage you get. If you don't mind me asking, what was the price and how many TB do you have usable? We have been told to purchase the Acceleration Kit - any reason why you went with Essentials? My setup is pretty much going to mirror what you are doing, so I would be extremely interested in hearing more. Thanks

newtovms
Contributor

Short Answer - price

Same with the Essentials Plus kit. I wanted to ensure as much uptime/redundancy as possible, and the Essentials Plus kit contains vMotion etc. So my plan is to run 3 VMs on one physical R710 and 3 more on another. In the event that a physical box has issues, the VMs running on that box will seamlessly port over to the other physical server... well... according to the product literature ;) The R710s have two 6-core Xeons and 48GB of RAM, so it's suitable for the Essentials kit under 5.0. And if I ever decide to deploy a third physical server, I am covered for that as well.

From memory the MD3200i with twelve 600GB 15k SAS drives was around $8k or so - but then I had to order the second EMM as I forgot to add it to the initial config, so add another $4k.

With a bit of help from this forum I am figuring out the correct network setup to get all this working together as we speak... I have about 9 blog and KB articles, 3 deployment guides/best practices from Dell, and a couple of Excel tables scattered on my desk right now, lol... slowly getting my head around it. Figuring out the IP addressing of the vmkernel interfaces in ESXi right now, and the vSwitch stuff...
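For what it's worth, a simple table like the sketch below is one way to keep the vmkernel IP plan straight while working through the vSwitch setup. All switch names, port groups, subnets and VLANs here are made-up examples, not recommendations for any particular environment:

```python
# Hypothetical vmkernel networking plan for a small iSCSI/vMotion setup.
# Everything below (names, subnets, VLANs) is a made-up example.

ip_plan = [
    # (traffic type, vSwitch,    port group,   subnet,            VLAN)
    ("Management",   "vSwitch0", "Management", "192.168.10.0/24", 10),
    ("vMotion",      "vSwitch0", "vMotion",    "192.168.20.0/24", 20),
    ("iSCSI-A",      "vSwitch1", "iSCSI-1",    "192.168.30.0/24", 30),
    ("iSCSI-B",      "vSwitch1", "iSCSI-2",    "192.168.30.0/24", 30),
]

for traffic, vswitch, portgroup, subnet, vlan in ip_plan:
    print(f"{traffic:12} {vswitch:9} {portgroup:12} {subnet:17} VLAN {vlan}")
```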

valicon
Contributor

I am doing the exact same thing. I will have 4-5 VMs on each box. My entire dilemma is the SAN. I also need to store video that does not have to sit on high-speed drives, so do I get a device that can do tier 1 and tier 2 storage, or get two separate devices? As far as the servers go, I will most likely purchase the exact same config, maybe with the RAM bumped up, as one of my VMs will be I/O intensive.
