VMware Cloud Community
icampbell79
Contributor

Best servers to use for ESX cluster

We are planning to set up an ESX cluster in each of our two primary sites. Each site will run a number of production VMware servers but will also mount replicated VMs from the other production site for failover. The clusters will be SAN-attached to EMC CLARiiON arrays, and we'll be replicating the VMs and data using EMC RecoverPoint.

We think we want at least 2, and maybe 3, ESX servers in each site's cluster. We want the ability to use VMotion and plan on having about 4 VMs initially, moving up to around 15 or 20 within 12 months.

Initially, the VMware servers will be Exchange (50 users), BES, CRM and a web server; further down the track will come accounting servers and SQL DB servers.

Now, we already have a number of suitable DL360 G5 and DL380 G5 servers that we could use for this, but we are wondering what the best option is for our requirements. The other option we are considering is buying a couple of DL585s.

I'd appreciate anyone's advice on the best option, but my main question is:

What is the best practice for networking? If we want to use VMotion, is it best practice to have a dedicated NIC on each ESX host for VMotion?

Also, is it best practice to keep the service console NIC separate from the data NIC? If we went with the DL360s, which have dual HBAs, it would not be possible to install an additional NIC, so we'd only have the 2 onboard NICs.

Would a single data NIC be OK for our requirements, or do we really need to install additional NIC cards?

Appreciate any responses.

Thanks,

Ian

1 Solution

Accepted Solutions
VirtualNoitall
Virtuoso

As it sounds like you already buy HP, I would strongly look at their blade offerings if you are already in that game or are thinking about it. Outside of those, though, the 585 will have all the horsepower and PCI slots you could need. I think that will be the determining factor. If you can get by with fewer PCI slots, go with the DL365 or 385. If not, go with the 585.

"What is the best practice for networking? If we want to use vmotion is it best practice to have a dedicated NIC on each ESX for VMotion?"

Yes, that is a best practice and I would recommend you follow it.

"Also, is it best practice to keep the service NIC seperate from the data NIC? If we went with the DL360s what have dual HBA's it would not be possible to install an additional NIC so we'd only have the 2 x on board NICs. "

Yes, that is a best practice, but we don't do that. Our service console traffic and VM data share the same vSwitch, and that vSwitch has multiple adapters for load balancing and failover. If we split it out we likely wouldn't have service console redundancy, so we made a choice to double up.

Could you make do with a dual-ported HBA?

"Would a single data NIC be OK for our requirements, or do we really need to install additional NIC cards?"

Depends on your traffic and whether you want redundancy. We teamed them because we were putting so many virtual machines on each host that we were worried about both load and a failure. Each case is a little different, but the decision comes down to network load and the requirement for redundancy.
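If it helps, here is roughly what that layout looks like from the ESX 3.x service console. This is only a sketch: the vmnic numbers, port group names and IP address are made-up examples, and the esxcfg flags are as I remember them, so double-check them against the man pages on your build before running anything.

```python
# Dry-run sketch only: prints the ESX 3.x service console commands for one
# vSwitch carrying Service Console + VM traffic on two teamed uplinks, plus
# a second vSwitch with a dedicated uplink and VMkernel port for VMotion.
# The vmnic numbers, port group names and IP below are made-up examples.

commands = [
    # vSwitch0 already carries the Service Console on vmnic0 after install;
    # add a second uplink and a port group for the guests so both are teamed.
    "esxcfg-vswitch -L vmnic1 vSwitch0",
    "esxcfg-vswitch -A 'VM Network' vSwitch0",

    # vSwitch1: dedicated VMotion network on its own NIC.
    "esxcfg-vswitch -a vSwitch1",
    "esxcfg-vswitch -L vmnic2 vSwitch1",
    "esxcfg-vswitch -A VMotion vSwitch1",
    "esxcfg-vmknic -a -i 192.168.50.11 -n 255.255.255.0 VMotion",
]

for cmd in commands:
    print(cmd)  # review, then paste into the service console yourself;
                # VMotion is then enabled on that VMkernel port in the VI Client
```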

8 Replies
epping
Expert

hi

How many VMs are you looking at running in total? I think you said 20-25 by the end of the year.

It does not sound to me like you should go blades; that only makes sense if you are going to fill at least one enclosure, and that could easily run 150-250 VMs. Also, I think the DL585s are way overkill and too expensive for your environment. I would stick with the DL380 or DL385; each one should easily be able to handle 10-15 VMs with the kind of workload you have. Go with 2 in a cluster to start off, and if you need more juice add a 3rd node.

3 x DL385s are cheaper than 1 DL585!

Good luck.

icampbell79
Contributor

Thanks for the responses. I'm starting to think that 2 x DL385s would be a good starting point and should be enough for our environment. In regards to the network configuration, there will be 2 x HBAs, leaving 1 spare slot for a 2-port PCI NIC card.

Would this be the best network configuration:

1 x PCI NIC port + 1 x onboard NIC port - teamed for virtual machines

1 x PCI NIC port - dedicated VMotion/backups

1 x onboard NIC port - Service Console

We are planning on clustering the two DL385s. Would the above configuration leave a split-brain situation for failover, as we only have a single Service Console port per box?

Could this be resolved by installing a 4-port PCI NIC, or could we share the VMotion port as a second Service Console port, given that we probably won't be using VMotion regularly?

Thanks,

Ian

VirtualNoitall
Virtuoso

Hello,

You can't go wrong with the DL385 :)

I would do something that looks like this:

1 x PCI NIC port + 1 x onboard NIC port - teamed for virtual machines and Service Console. Set one as active for the virtual machines and one as active for the Service Console. Then they will only share the same port if there is a failure. This works unless you need the combined bandwidth all the time for your virtual machines. (If that is the case, double up your Service Console and VMotion instead and set each to active on one of the adapters.)

1 x PCI NIC port + 1 x onboard NIC port - dedicated VMotion/backups. Now you have redundancy here as well.

I don't like teaming up VMotion with other services if I don't have to.
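If you want to convince yourself this avoids the split-brain worry, here is a throwaway sketch (nothing VMware-specific, and all the NIC/port group names are hypothetical) that just walks the proposed active/standby layout and checks that every port group still has an uplink after any single NIC failure:

```python
# Toy model of the layout above (hypothetical NIC and port group names):
# verify that every port group keeps at least one uplink if any one NIC fails.

layout = {
    "vSwitch0": {
        "Service Console": {"active": ["onboard-1"], "standby": ["pci-1"]},
        "VM Network":      {"active": ["pci-1"],     "standby": ["onboard-1"]},
    },
    "vSwitch1": {
        "VMotion/Backups": {"active": ["onboard-2"], "standby": ["pci-2"]},
    },
}

def survives_single_nic_failure(layout):
    nics = {nic for pgs in layout.values()
                for pg in pgs.values()
                for nic in pg["active"] + pg["standby"]}
    ok = True
    for failed in nics:
        for vswitch, pgs in layout.items():
            for name, pg in pgs.items():
                remaining = [n for n in pg["active"] + pg["standby"] if n != failed]
                if not remaining:
                    print("%s / %s goes dark if %s fails" % (vswitch, name, failed))
                    ok = False
    return ok

if survives_single_nic_failure(layout):
    print("Every port group keeps an uplink through any single NIC failure.")
```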

Monoman
Enthusiast

Keeping it generic. We recently ordered four 2-way quad-core servers with 16GB RAM and 10 NIC ports. We were originally looking at two 4-way dual-core servers with 32GB RAM.

I think you get more for your money with dual quad-core. VI3 is licensed by the number of CPUs, not the number of cores.
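Just to put rough numbers on that (example configurations only, no prices):

```python
# Rough socket-vs-core comparison for VI3's per-socket licensing.
# Both options below give the same 8 cores; the dual quad-core box
# needs half as many socket licenses. Example configurations only.

configs = [
    ("2-way quad-core, 16GB RAM", 2, 4),   # (name, sockets, cores per socket)
    ("4-way dual-core, 32GB RAM", 4, 2),
]

for name, sockets, cores_per_socket in configs:
    total_cores = sockets * cores_per_socket
    print("%s: %d cores total on %d licensed sockets" % (name, total_cores, sockets))
```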

Also, big memory DIMMs are very expensive. You reach a certain point where doubling the RAM costs the same as another server.

There have been plenty of threads discussing using a few bigger servers vs. more smaller servers.

ngrundy
Enthusiast

For something like what you're after, we would be running a config as follows. Note we are a Dell shop, so I'll speak in those terms.

2x 2950 (VI3 Enterprise)

- 2x QLE2460 single-port PCIe HBAs

- 1x Intel dual-port GigE PCIe NIC

- 16GB RAM

- 2x Intel Core 2 Duo 1.6GHz

We use Hitachi SANs and go with the AMS200 in this case, as it gives us FC-AL/SATA intermix.

You might need to add a third node to your config.

Our general experience to date is that memory runs out before any other resource in the box.

We have four quad-CPU, single-core Dell 6850s running 61 VMs. Each host has ~11GB of RAM in use. CPU usage hovers around 15% with the occasional spike to 40%.

I read a lot of comments here about people buying dual quad-core boxes with 16GB of RAM. I typically find that for 16GB you only need 2-4 cores. There was a VMware whitepaper I read a while back that stated 8GB per CPU core was a "best practice" ballpark to aim for. I've found this to be spot on the mark.

Unfortunately memory prices are about twice what you want them to be to get that 8:1 ratio.

If it helps, our last round of pricing for a set of VMware upgrades saw 16GB as the sweet spot for dual-socket servers (Dell 1950 and 2950) and 32GB for four-socket servers (Dell 6850/6950).
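As a quick way to sanity-check a candidate box against that ballpark (the configurations below are examples only, and the 8GB/core figure is just the rule of thumb mentioned above):

```python
# Rough RAM-per-core check against the ~8GB-per-core rule of thumb above.
# The candidate configurations are examples only.

TARGET_GB_PER_CORE = 8

candidates = [
    ("Dual-socket quad-core, 16GB", 2 * 4, 16),   # (name, cores, GB of RAM)
    ("Dual-socket dual-core, 16GB", 2 * 2, 16),
    ("Quad-socket dual-core, 32GB", 4 * 2, 32),
]

for name, cores, ram_gb in candidates:
    gb_per_core = ram_gb / float(cores)
    verdict = "likely memory-bound" if gb_per_core < TARGET_GB_PER_CORE else "balanced"
    print("%-30s %d cores, %2dGB -> %.1fGB/core (%s)"
          % (name, cores, ram_gb, gb_per_core, verdict))
```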

If you plan on using 4-way systems, I'd suggest using AMD CPUs.

It might seem strange to have a CPU mix in your VMware environment, but the Intel Core architecture has much better CPU-to-memory performance, while once you're in the 4-way space AMD's HyperTransport takes off.

Hope the above helps in some way.

RobBuxton
Enthusiast

We use DL385s, but a couple of things to note:

The DL585 has a lot more memory slots, so by using smaller memory DIMMs you may be able to put together a cheaper server.

You may also want to consider using a 4 Port NIC, the price difference isn't that great and you get a bit more flexibility.

Our network setup is:

NICs 1 & 2: teamed and used for Service Console and VMkernel (VMotion)

NICs 3, 4 & 5: teamed and used for VM guests

NIC 6: used for DMZ guests

MattG
Expert

Remember, while the DL385 G1s only had 3 PCI slots, the G2s have 4 (one is half-height).

The extra slot can come in handy.

-MattG
