sgng
Contributor

IBM BladeCenter HS20, HP ProLiant DL, and HBAs

We have an IBM BladeCenter HS20 with a couple of blades, plus an HP ProLiant DL580 G2, all running ESX 3.0.1. Currently they all run on local storage, and future expansion is now an issue.

We are still working with NetApp on a couple of iSCSI SAN storage models for pricing and review, but in the meantime we would like to know some HBA choices for the IBM blades and the HP ProLiant DL.

A few requirements for the HBA:

1. Copper iSCSI, not fiber

2. For servers that require very high availability, dual-port cards (multipath I/O) instead of single-port

3. Support for booting from SAN

4. Driver support for Windows 2003 x32/x64, and possibly Vista

Between Emulex and QLogic, which one is the better fit? Also, does the IBM BladeCenter need a special HBA form factor to fit its chassis?

Is it reasonable for a light-usage server to rely only on 1 Gbps Ethernet connectivity (a standard NIC) instead of an HBA?

Thanks. Advice appreciated!

7 Replies
bertdb
Virtuoso

The IBM blades do not have regular PCI slots, so you'll need a specific daughtercard from IBM (or use software iSCSI over regular Ethernet connections). For the BladeCenter chassis, you'll also need Ethernet switch modules to carry the iSCSI/Ethernet traffic to the outside world. That's where your "copper, not fiber" requirement comes into play.

The IBM blades have four connections to the backplane, and each of those connections goes to a different switch module in the back of the chassis. Getting redundancy out of that is not difficult.
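
For example, if you go the software iSCSI route, wiring two of those backplane connections into a redundant VMkernel setup from the ESX 3.x service console looks roughly like this (a sketch only; the vmnic numbers, port group name, and IP are placeholders, check yours with esxcfg-nics -l):

    esxcfg-vswitch -a vSwitch1                    # create a vSwitch for iSCSI traffic
    esxcfg-vswitch -L vmnic2 vSwitch1             # uplink going to one switch module
    esxcfg-vswitch -L vmnic3 vSwitch1             # second uplink, to a different switch module
    esxcfg-vswitch -A "iSCSI" vSwitch1            # add a port group for the VMkernel interface
    esxcfg-vmknic -a -i 192.168.10.10 -n 255.255.255.0 "iSCSI"   # VMkernel port the initiator will use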

Booting from SAN is only possible with hardware iSCSI cards, not with software iSCSI over a NIC. But do you really want to boot ESX from iSCSI LUNs? You have local disks in all the servers, so that seems unnecessary.

If you're talking about booting virtual machines that are stored on the iSCSI SAN, that's never a problem; it has nothing to do with hardware support.

Driver support in Windows is irrelevant, as you'll be running ESX, right? Windows will see a VM environment, including a VM network card and a VM SCSI card. It's ESX that needs to talk to the iSCSI storage.

sgng
Contributor

Thanks bertdb.

The reason we want the boot-from-SAN option and W2K3 driver support is that we have a lot of blades and HP ProLiant DLs running W2K3 Enterprise/Standard. But you are right: if the servers have local storage, I just can't see the need to boot from SAN. Maybe there is another need for SAN booting; I need to talk to the others in my group.

So I guess we should be able to connect both the IBM blades and the HP ProLiant DL series to a centralized iSCSI SAN, as long as we find the right HBAs for them?

Thanks again.

bertdb
Virtuoso

Yes, that should be perfectly possible. Get HBAs from the hardware vendor that are supported by the software you run natively on the server.

mreferre
Champion

Note that, as already mentioned, you can also use the ESX 3.0.x software initiator if you don't want to buy new hardware.

A hardware HBA will give you slightly better performance and slightly lower host CPU utilization (but we are not talking +100%... not even close), so you may want to weigh that benefit against buying new hardware, installing it, verifying it, etc.

The software initiator is just a couple of mouse clicks away... :-)
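
If you'd rather script it than click, the equivalent from the ESX 3.x service console is roughly (a sketch; the adapter name vmhba40 and the target IP are placeholders for your setup):

    esxcfg-firewall -e swISCSIClient              # open the firewall for the software iSCSI client
    esxcfg-swiscsi -e                             # enable the software iSCSI initiator
    vmkiscsi-tool -D -a 192.168.10.100 vmhba40    # add the array as a send-targets discovery address
    esxcfg-swiscsi -s                             # rescan for new LUNs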

Massimo.

Massimo Re Ferre' VMware vCloud Architect twitter.com/mreferre www.it20.info
sgng
Contributor

Thanks King.

Is there any benchmark out there that I can refer to for evaluating the SW initiator?

If I intend to put 10 VMs on an IBM blade (two single-core 3 GHz Xeon CPUs + 9 GB of RAM), will there be any performance hit if we go the SW-initiator route?

Thanks.

mreferre
Champion

No benchmarks that I am aware of. If I remember correctly, I have seen a very informal document (from VMware) reporting some data on the differences between hardware iSCSI, software iSCSI, and NFS. I noticed that the gap among the three was wider in CPU utilization than in throughput, and even that was in the range of 10-15% at most.

It is difficult to say whether you will have issues. Rather than the number of VMs and the specs of the servers, the interesting thing is their usage pattern. If they are going to be application servers eating up memory and CPU but doing very little (disk) I/O, you won't even notice the difference among FC, SCSI, NFS, hardware iSCSI, and software iSCSI.

On the other hand, if they hammer the disk even moderately (i.e. mail servers, DB servers, etc.), you might start to appreciate the difference between the various technologies. As always with performance... it depends.
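
If you want to move beyond "it depends", measure the actual disk load first. From the service console, esxtop in batch mode dumps counters you can review later in a spreadsheet (the interval and sample count below are just examples):

    esxtop -b -d 10 -n 60 > iostats.csv          # 60 samples, 10 seconds apart, saved as CSV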

Massimo.

Massimo Re Ferre' VMware vCloud Architect twitter.com/mreferre www.it20.info
hp_Bladefanatic
Enthusiast

On HP DL/BL servers you always have multifunction NICs (NC373) onboard, so you have the capability to do TCP/IP offloading for iSCSI (not currently possible with ESX), or even to boot from SAN in the future!

my 2 cents

Michael
