VMware Cloud Community
komanek
Enthusiast

servers from two vendors in one 5.x cluster

Hi all,

I have a cluster of four IBM x3650 M2 hosts currently running vSphere 4.1 Enterprise Plus. It will soon be upgraded to vSphere 5.0. We also need to span it across two buildings with some redundancy, so we decided to use VMware Site Recovery Manager. That means splitting the cluster into two smaller ones and adding two more host servers, so that each cluster has three hosts. The problem is that my employer wants to save some money, so he is not willing to buy the two new servers from IBM; he got a better offer (in the sense of price) on Dell PowerEdge R710 servers.

If that happens, I will be forced to have cluster A with 3x x3650 M2 (one Intel E5520 CPU per server, QLogic iSCSI HBA) and cluster B with 1x x3650 M2 (one Intel E5520 CPU, QLogic iSCSI HBA) + 2x R710 (one Intel E5645 CPU per server, Broadcom NIC with TOE for iSCSI). Both clusters will be using the same datastore infrastructure to be able to switch VMs from one cluster to the other via Site Recovery Manager.

I understand I will need to switch on EVC in cluster B, and I think the Broadcom NICs are useless for iSCSI in our case because they lack support for the jumbo frames we are using in our SAN. But are there other drawbacks to this solution, and is it even supported to mix hardware from different vendors in one vSphere cluster? The alternative for me would be two IBM x3650 M3 servers with Intel E5620 CPUs (or better, with E5520s, or with an upgrade to E5620 on the third, older host) with no hardware support for iSCSI, but because of the higher costs I need arguments for it.

Thanks in advance for opinions.

David
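For reference, here is a minimal PowerCLI sketch of the jumbo-frame side of this (the vCenter, host, vSwitch, and vmkernel names are placeholders, not names from the original setup): the 9000-byte MTU has to be set on both the vSwitch carrying iSCSI and the iSCSI vmkernel port itself, on every host.

```
# Minimal sketch; server, host, vSwitch, and vmk names are assumptions.
Connect-VIServer -Server vcenter.example.local

$esx = Get-VMHost -Name "esx01.example.local"

# Raise the MTU on the vSwitch that carries iSCSI traffic...
Get-VirtualSwitch -VMHost $esx -Name "vSwitch1" |
    Set-VirtualSwitch -Mtu 9000 -Confirm:$false

# ...and on the iSCSI vmkernel interface bound to it.
Get-VMHostNetworkAdapter -VMHost $esx -VMKernel -Name "vmk1" |
    Set-VMHostNetworkAdapter -Mtu 9000 -Confirm:$false
```

The physical switches in between need jumbo frames enabled end-to-end as well, otherwise the larger MTU does more harm than good.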

4 Replies
golddiggie
Champion

You might need to enable EVC in both clusters for SRM to function 100%. I'm no SRM expert, by any stretch, so I would recommend reaching out to your VMware rep, or tech support for the info.
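If it helps, here is a rough PowerCLI sketch of what enabling EVC looks like (cluster names are placeholders, and it assumes a PowerCLI build that exposes the EVCMode parameter). Since the E5520 is Nehalem and the E5645 is Westmere, the mixed cluster would run at the intel-nehalem baseline:

```
# Sketch only; vCenter and cluster names are assumptions.
Connect-VIServer -Server vcenter.example.local

# Check the current EVC mode of both clusters...
Get-Cluster "ClusterA", "ClusterB" | Select-Object Name, EVCMode

# ...and put the mixed E5520/E5645 cluster on the Nehalem baseline.
Set-Cluster -Cluster (Get-Cluster "ClusterB") -EVCMode "intel-nehalem" -Confirm:$false
```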

As for the mix of IBM and Dell servers... I think you'll be just fine there. I like Dell servers, especially the R710 model, over the IBM x3650 line. IMO/IME, the IBM servers are more of a PITA and take longer to boot/reboot. With the higher Xeon processor in the R710, I would expect them to perform better than the IBM servers with their processors. I would advise getting enough NICs into the hosts, though; just the onboard ones, IMO/IME, really aren't enough with iSCSI in the mix. I would add at least a single quad-port Intel NIC to each host... (see the sketch below for checking what you currently have).
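As a quick sanity check before ordering cards, something like this (vCenter name is a placeholder) shows how many physical NICs each host currently has:

```
# Sketch: count physical NICs per host to plan the quad-port additions.
Connect-VIServer -Server vcenter.example.local

Get-VMHost | ForEach-Object {
    $nics = Get-VMHostNetworkAdapter -VMHost $_ -Physical
    "{0}: {1} physical NICs" -f $_.Name, $nics.Count
}
```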

komanek
Enthusiast

Thank you very much, that sounds good.

I have heard or read some rumours about complications with Dell drivers for VMware. Is it true that the latest versions are not included on the stock ESXi installation media, so that a customized image has to be deployed every time an ESXi host update is applied?

Thanks again,

David

golddiggie
Champion

I would try using the general release of ESXi first. As long as you don't put any weird hardware in the hosts, or you make sure the drivers are 'inbox' via the HCL, you'll be fine. This is why I would get actual Intel-branded NICs (not a Dell- or IBM-modified model) to add to the hosts.
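A quick way to see which driver each NIC actually loads, so you can check it against the HCL (vCenter name is a placeholder; the Driver field comes from the underlying vSphere API object):

```
# Sketch: list each physical NIC and the ESXi driver behind it.
Connect-VIServer -Server vcenter.example.local

Get-VMHost | Get-VMHostNetworkAdapter -Physical |
    Select-Object VMHost, Name,
        @{ Name = "Driver"; Expression = { $_.ExtensionData.Driver } }
```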

I had no issue using the general release for ESX/ESXi 4.x on the R710 servers.

komanek
Enthusiast

Thank you very much. I will ask our suppliers to compare prices for the R710s with Intel NICs against the x3650 M3s plus a CPU upgrade for the x3650 M2, and something will come out of that comparison. The best thing is that there is no big need to worry about mixing hardware from different vendors under vSphere.
