pesinet
Contributor

Choose between a Blade Server or a DL Server for ESXi 4.0?


Hello people,

We plan to buy 2 more servers and an iSCSI SAN. We started looking at blade servers for the simple reason that we only need 2 HDDs (mirrored) to install vSphere, since the VMs will reside on the SAN, and the HP ProLiant BL460c G6 looks like a good option, but we have never worked with a blade server before. We have 4 DL380 G5 QC servers with 64 GB RAM and the performance is beautiful. We don't want to make a decision before getting some feedback from you guys, so any input will be appreciated.

Thanks

1 Solution

Accepted Solutions
golddiggie
Champion

From what I've seen, blade chassis draw significantly more power than 1U and 2U servers. Plus, not only do you HAVE to get the chassis, you then need to worry about filling it with blades, unless you get them all at the same time. If you're looking for just a few servers (many companies are not looking to purchase 8-16 servers at one time), then the entire blade configuration makes no sense. With VMware, you can use three decent servers to take the place of 45+ physical servers.

At my previous job there were two IBM blade chassis, holding up to 12 blades each. The chassis drew over 8000W of power even when populated with only 10 blades. With three R710 servers, even after all the servers I shifted over to them, I had room left to take all the servers from the blades into the VMware hosts. Far less power, far fewer physical boxes to manage, etc.
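To put those numbers in perspective, here's a rough power-per-VM comparison. The 8000W figure for a chassis with 10 blades comes from the post above; the ~500W per R710 and the per-host VM counts are illustrative assumptions on my part, not measured values:

```python
# Rough power-per-VM comparison based on the numbers quoted above.
# 8000 W for a 10-blade chassis is from the post; ~500 W per R710
# and the VM counts are assumptions for illustration only.

def watts_per_vm(total_watts, vm_count):
    """Average wall power attributed to each VM."""
    return total_watts / vm_count

# Blade setup: one chassis with 10 blades, assuming 5 VMs per blade.
blade_watts = 8000
blade_vms = 10 * 5

# Rack setup: three R710 hosts at an assumed ~500 W each, ~15 VMs per host.
rack_watts = 3 * 500
rack_vms = 3 * 15

print(watts_per_vm(blade_watts, blade_vms))  # W per VM, blade chassis
print(watts_per_vm(rack_watts, rack_vms))    # W per VM, rack servers
```

Even if the assumed numbers are off by a fair margin, the gap is large enough that the conclusion (the lightly-populated chassis burns far more power per VM) holds.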

You REALLY need to be careful about selecting blades. Yes, there will be times when they make sense; you just better be 200% sure that they are the right option now, and for the next 3-5 years. With technology changing so much over that many years, you better get a chassis whose back-end connections CAN be (easily) upgraded. Otherwise you'll be looking to rip everything out and change the entire configuration when you need to adopt new technologies. That's unlike 2U (or 1U) servers, which you can (typically) extend with at least a few PCI/PCIe cards. Or you can simply swap them out at a much lower cost, and only as many of them as you wish.

I can only imagine how many servers I could run on an ESX environment that was powered by 5 R710 servers (taking up the same 10U as the empty blade chassis). With a decent/properly configured SAN (which you'll need for those blades too) you can do a lot.

VMware VCP4

Consider awarding points for "helpful" and/or "correct" answers.

Hosted Systems Engineer IV (VMware environment)
Brewing beer again!


9 Replies
Formatter
Enthusiast

Blades are nice for extremely high consolidation of VMs, and if you don't buy the chassis full of blades it's easily expandable later. However, the cost is prohibitive. I prefer the DL servers because they cost far less than a blade system. You don't really gain performance with blades; the only things you gain are higher cost and a smaller footprint in the server rack. In my opinion, stick with the DL380 G6 or something like that; Dell also has some nice systems like the T710.

JaredT
Contributor

I have 64 BL490c G6s and 2 EVA 4400 SANs. I love them, and I boot them diskless from boot LUNs on the SAN. Only 8 are set up so far; we're awaiting power to be able to fire them all up.

They are expensive; however, I love the Onboard Administrator in the blade chassis and all of the add-on modules for the Fibre Channel cards and switches. The integration is very tight and easy to manage.

golddiggie
Champion

If you have an existing rack with space in it, I would go with 2U servers as the hosts. Look at the Dell R710 servers. You can load them up with (up to) four PCI/PCIe cards and gobs of RAM, get them with just two SAS drives (mirrored) for ESX/ESXi to reside upon, and be good to go. With the onboard DRAC controller you can do BIOS-level (and more) remote administration of the hosts. I would opt for the Dell-customized ESX/ESXi installation on the hosts too. That provides additional functionality and reporting, and isn't as problematic as the HP customizations seem to have become. If you don't have any open rack space, then get the tower configuration. I would go with at least dual E5520 Xeons inside the hosts, if not the 5600-series Xeons... Lots of power still in the 5520 (and above) series...

I used three of the R710's in an environment, first with one (then a second) EqualLogic iSCSI SAN... Coupled with a new Gb switch (a 2900-series ProCurve, with a second added later for redundancy), everything ran like a top. Even with only 24GB of RAM on each host, they were able to handle over 30 VMs (now with an Exchange 2010 server in the mix) without skipping a beat. HA was configured for a single host failure, but we could lose two and everything would still run on the remaining host. The servers were powered by dual E5520 Xeons... I would add at least a dual-port Intel Gb NIC to the configuration (giving you six network ports right off the bat) to make sure everything is running properly. I added another quad-port card in preparation for making all connections redundant.
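The "lose two hosts and still run" claim can be sanity-checked with back-of-the-envelope math. This sketch uses the cluster described above (3 hosts, 24GB each, ~30 VMs); the per-VM active-memory figure is my assumption, and real vSphere HA admission control also accounts for reservations and overhead, so treat this as a rough check, not a sizing tool:

```python
# Back-of-the-envelope failover check for a 3-host, 24 GB/host,
# ~30 VM cluster. Per-VM active memory is an assumed value; vCenter's
# actual admission control uses reservations and overhead instead.
import math

def hosts_needed(vm_count, gb_per_vm, gb_per_host):
    """Minimum hosts required to hold the VMs' active memory."""
    total_gb = vm_count * gb_per_vm
    return math.ceil(total_gb / gb_per_host)  # partial hosts don't exist

total_hosts = 3
vms = 30
gb_per_host = 24
gb_per_vm = 0.75   # assumed average active memory per VM

need = hosts_needed(vms, gb_per_vm, gb_per_host)
print(need)                # hosts required just for the VMs
print(total_hosts - need)  # host failures the cluster could absorb
```

With ~0.75 GB active per VM, one 24GB host holds all 30 VMs and the cluster survives two failures, matching the post; bump the assumption to 1.5 GB per VM and you need two hosts, so only one failure is survivable. The conclusion is sensitive to real memory usage.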

VMware VCP4

Consider awarding points for "helpful" and/or "correct" answers.

Hosted Systems Engineer IV (VMware environment)
Brewing beer again!
pesinet
Contributor

Thanks for all the input. We are still weighing the decision between blades and DLs. We have had very good results with HP servers, so we will continue with them.

milo2009
Enthusiast

Hi golddiggie ,

I was wondering about your setup of 30 VMs and Exchange 2010 with 24GB of RAM. Is this a production or test environment?

golddiggie
Champion

That's a production environment... The three (Dell R710) ESX hosts have no issues with all that's on them (including a Backup Exec server for Exchange 2010)...

I'm actually picking up some old 1U servers (well, older; most of them are from the 2007 time frame) to convert into iSCSI SANs (up to three)... I'm also pricing out new ESX/ESXi host servers and managed to get an R710 down to the $2700-3500 range (depending on a few options selected). When it comes time to set up the new hosts, I'm giving serious consideration to booting from USB flash drives. I did my initial testing (yesterday) with my XPS 720 tower and was able to run ESXi 4u1 just fine on it. I knew the SATA RAID controller was software-based, so the drives don't show up as part of a RAID array. It doesn't matter, since I'll be placing all my VMs on the SAN this week. I'll test out the 720 system to see if the processor has what it takes to make it usable as a secondary host (for CPU and RAM, since that's really all that matters) and have a hardware-based cluster (instead of going ghetto and using one nested ESXi host). The other option is to use my PWS490 tower, but I like to use that one for my day-to-day activities, especially after upgrading the processors to a pair of E5345's... :smileygrin:

VMware VCP4

Consider awarding points for "helpful" and/or "correct" answers.

Hosted Systems Engineer IV (VMware environment)
Brewing beer again!
Josh26
Virtuoso

I'm a fan of blades, but you don't typically buy two of them.

HP's smallest blade chassis is the c3000, which has eight slots. The more cost-effective option is the c7000, which has 16 slots. If you buy the infrastructure now, you will love how easy and cheap it is to just add blades later.

If you plan on buying the infrastructure now for just two blades, and don't foresee any future purchases, this will become a very costly exercise.
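The "costly exercise" point is just amortization: the chassis price gets spread over however many blades you actually install. All prices in this sketch are placeholders I made up for illustration, not HP quotes:

```python
# Why a chassis bought for only two blades is expensive: the shared
# chassis cost is amortized over the installed blades. All prices
# below are invented placeholders, not vendor quotes.

def cost_per_server(chassis_cost, blade_cost, blades_installed):
    """Effective price of each server once chassis cost is spread out."""
    return (chassis_cost + blade_cost * blades_installed) / blades_installed

chassis = 5000      # assumed enclosure price
blade = 3000        # assumed per-blade price
rack_server = 3500  # assumed comparable 2U rack server price

print(cost_per_server(chassis, blade, 2))   # two blades: chassis dominates
print(cost_per_server(chassis, blade, 16))  # full chassis: overhead shrinks
print(rack_server)                          # rack server has no shared cost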

JaredT
Contributor

I currently have 8 BL490c's running ESXi 4U1 in a c3000 chassis. The power-consumption page of the Onboard Administrator shows a max limit of 4378 Watts AC; we are currently using 1354 Watts AC. All 8 servers are running and serving VMs.
