I have been asked to draft a hardware spec requirement for a new VMware Virtual Infrastructure 3.01 that will host around 10 VMs. Most of them are going to be 64-bit OSes. My company prefers HP servers, and I am researching the system compatibility guide to select a server for VI 3.01. For the initial stage, we are going to have only 4 VMs on the server, and later a SAN is going to be used for additional VMs. Can anyone tell me an HP server best suited for my case?
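Before picking a model, a rough back-of-envelope sizing can help. The sketch below is only illustrative: the per-VM vCPU/RAM figures, the overcommit ratio, and the hypervisor overhead are all assumptions, not numbers from this thread — plug in your own workload data (e.g. from HP's sizer or CapacityPlanner).

```python
# Rough ESX host sizing sketch. All per-VM figures below are assumed
# averages for illustration only; replace them with measured values.
VCPUS_PER_VM = 2        # assumed average vCPUs per VM
RAM_GB_PER_VM = 4       # assumed average RAM (GB) per VM
OVERCOMMIT = 2          # assumed vCPU-to-physical-core overcommit ratio
HYPERVISOR_RAM_GB = 2   # assumed ESX / Service Console overhead

def host_requirements(num_vms):
    """Return (physical cores, RAM in GB) a host would need for num_vms."""
    # Ceiling division: round partial cores up.
    cores = -(-num_vms * VCPUS_PER_VM // OVERCOMMIT)
    ram_gb = num_vms * RAM_GB_PER_VM + HYPERVISOR_RAM_GB
    return cores, ram_gb

print(host_requirements(4))   # initial stage: 4 VMs
print(host_requirements(10))  # eventual target: 10 VMs
```

With these assumed figures, 4 VMs fit comfortably on a dual-socket dual-core box, while 10 VMs push you toward more cores and RAM — which is where the SAN-plus-second-host (or bigger-host) discussion below comes in.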
The HP DL385G2 with AMD Opterons is a great server at a great price. I've been using DL585 and DL385 servers, just got a new DL385G2 with SAS disks, and have been very impressed with it.
Here are some links that may help...
VMware ESX Server: A comprehensive guide to how ESX virtualizes HP ProLiant servers - http://h20331.www2.hp.com/ActiveAnswers/downloads/vmwareESXserver_virtualize_ProLiant_1005.pdf
HP ProLiant server sizer for VMware ESX Server - http://h71019.www7.hp.com/activeanswers/cache/120132-0-0-0-121.html
IT Consolidation using VMware CapacityPlanner on HP ProLiant servers - http://h71019.www7.hp.com/ActiveAnswers/cache/70314-0-0-225-121.html?jumpid=reg_R1002_USEN
I agree with Eric, the DLx85G2 are great workhorses.
If you have the budget, go for the DL585G2s - they are more expandable than the smaller ones.
Don't the x80s (Intel series) have nested paging and other virtualization support in the processors that the AMDs don't have yet? I remember something vaguely about it from VMworld 2006.
When quad-core AMD Barcelona CPUs are released for the DL385, hopefully as soon as September (???), the 385 will be the de facto standard for HP AMD servers:
- 8 cores
- 16 GB of inexpensive memory (32 GB+ of more expensive memory)
- 4 PCI slots for NIC and HBA redundancy options
The DL585 will have its place, but I think you will see fewer people buying them.
I think you will see blades overtake standard rack-mount servers in medium-to-large VI deployments. The HP BL685s are already a very sweet platform with 8 cores (4 x dual-core) and 32 GB of RAM. They also have 2 onboard NICs plus 3 mezzanine slots for HBA and NIC redundancy.
Actually, the BL685c has 4 onboard network cards (as does the other full-height server, the BL480c).
From the QuickSpec:
Four (4) integrated network adapters consisting of:
Two (2) embedded NC373i Multifunction Gigabit Server Adapters with TCP/IP Offload Engine, including support for Accelerated iSCSI through optional ProLiant Essentials Licensing Kits
Plus one (1) additional 10/100 NIC dedicated to iLO 2 management
Is HW redundancy a non-issue with blades?
I am still leery of literally putting all of my servers in one basket.
Individual servers give redundancy at the server hardware level, and I can easily move a server to another rack for rack/power redundancy.
I could see entertaining blades if I were going to buy 16+ servers at once, but adding a maximum of fewer than 8 hosts a year does not seem like a good fit. Also, you really need to buy and configure a second enclosure to get better redundancy.
We have two separate power runs to each rack regardless of whether it holds standard rack-mount servers or blades, so power redundancy is a non-issue.
The chassis are fully redundant and offer better power consumption and lower cooling needs compared to equivalent rack-mount servers.
Cable consolidation through consolidated iLO ports and integrated Ethernet and Fibre Channel switches can also amount to large savings, reducing the drain on core switch ports.
We worked very hard with the vendor on pricing and our blade environment is cheaper than a standard rack mount environment.
Also, have a look at the Virtual Connect modules. There is some very cool functionality there with the virtualization of MACs and WWNs.
Basically, a rack for us is a series of 4 chassis that fill up as needed. As I said before, it is not for small shops if you just don't have the capacity requirements, but for most others you will see a move in this direction.
Also, have a look at HP's datacentre consolidation project. They are moving from something like 86 datacentres to 6 that will be built on a foundation of blades + SAN + VMware. That was the ultimate validation for us and our direction once we saw we were doing exactly what they were.
I tend to agree.
When it comes down to it, I don't think HP is any better than DELL. It's just *better* than DELL.
People who have had the opportunity to mess with both brands know what I'm talking about.