VMware Cloud Community
Subversive
Contributor

Is using the ESX iSCSI initiator rather than a physical HBA a viable solution?

Hi, we are planning to implement ESX in the near future. The configuration will be as follows: two PowerEdge 2950s as ESX hosts, and a Dell AX150i for shared storage.

As the PE 2950s are rack mount, they only have two PCI-X slots. I will be putting two dual-port NICs in each one, but I don't want to waste a slot on an HBA. Is using the built-in ESX iSCSI initiator going to be an okay solution, or will the performance hit be too large?

11 Replies
Faustina
Enthusiast

Software iSCSI initiators are slower than hardware iSCSI initiators, so you will take a hit on disk I/O performance.

Think again.

Paul_Lalonde
Commander

Actually, this isn't necessarily accurate. Depending on the server, of course, software iSCSI usually performs just as well as a hardware iSCSI HBA. We're only talking about 1 Gbps of throughput with Gigabit Ethernet, and software iSCSI can easily hit that.

For an apples-to-apples comparison, treat software iSCSI as being as fast as hardware iSCSI. Where the real difference is felt is in CPU utilization: a hardware iSCSI HBA will barely burden the server's CPU, whereas you can expect anywhere from 10-40% utilization of one CPU core with software iSCSI.
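For reference, here's roughly what enabling the software initiator looks like from the ESX 3.x service console. This is only a sketch: the vSwitch, port group name, vmnic number, and IP addresses are made up for illustration, and you can do all of it from the VI Client instead.

    # VMkernel port for iSCSI traffic (names, NIC, and IP are examples)
    esxcfg-vswitch -a vSwitch1
    esxcfg-vswitch -L vmnic2 vSwitch1
    esxcfg-vswitch -A "iSCSI" vSwitch1
    esxcfg-vmknic -a -i 192.168.50.11 -n 255.255.255.0 "iSCSI"

    # Open the software iSCSI client in the service console firewall, then enable the initiator
    # (I believe the service name is swISCSIClient on 3.0.x; esxcfg-firewall -s lists the known names)
    esxcfg-firewall -e swISCSIClient
    esxcfg-swiscsi -e
    esxcfg-swiscsi -q    # confirm it is enabled

Then point the initiator at the AX150i's target IP under Storage Adapters > Dynamic Discovery in the VI Client and rescan the software iSCSI adapter (vmhba40 on 3.0.x, if memory serves).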

Paul

Subversive
Contributor

So if the ESX hosts are screaming fast, like these (dual 3.0 GHz Xeons with a 1333 MHz FSB), could it still be a viable solution?

The main reason I don't want to give up a slot for an HBA is that I need each host to talk to three separate subnets, plus a connection for the service console, so I'm going to be pretty tight for NICs.

Jae_Ellers
Virtuoso

From my testing of iSCSI I can say that you can get very reasonable throughput with software initiators. However, if your disk I/O loads are high you will generate high CPU overhead on the host. Pushing all the data I could (~110 MB/s) resulted in up to 35% CPU overhead with four VMs.

Below is a chart of host CPU during IOmeter testing with four VMs: FC is 2 Gb Fibre Channel on two HBAs, "itoe" is iSCSI through a 4052 HBA, "inic" is iSCSI via the software initiator on one e1000 NIC, and "ld" is local disk.
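If anyone wants to reproduce this kind of measurement, esxtop in batch mode from the service console is an easy way to capture host CPU while IOmeter runs in the guests. The interval and sample count below are just examples:

    # Sample the host every 5 seconds, 120 times (~10 minutes), and dump to CSV
    esxtop -b -d 5 -n 120 > iometer_run.csv

Pull the CSV into perfmon or a spreadsheet afterwards and watch the physical CPU counters while the IOmeter workers are running.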

-=-=-=-=-=-=-=-=-=-=-=-=-=-=- http://blog.mr-vm.com http://www.vmprofessional.com -=-=-=-=-=-=-=-=-=-=-=-=-=-=-
glynnd1
Expert

Is there a reason behind not using the PCIe slot? The 2950 can be configured with either 3 PCIe slots or 2 PCI-X slots and 1 PCIe slot.

Are you using trunking on your connections to the three subnets?

Could you post your planned configuration for all of your network connections?

Subversive
Contributor

Okay, I did not realize there was a 3rd slot. This might solve the issue. I've been stuck at a client for the past two weeks while trying to put this plan together, so I was relying on memory.

As far as my network configuration goes, we are not using trunking; the subnets are all on separate switches. So I need at least one physical NIC in each host to connect to each subnet. I also need a NIC for the service console (I had thought one of the onboard NICs would be good for that purpose). Do I also need a direct connection between the hosts for VMotion?

However, if I can have two dual-port NICs as well as a physical HBA, that gives me effectively six physical NICs per host (so I could do some NIC teaming for the subnets that could use it), plus connectivity to the iSCSI SAN. Does that sound logical?
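Once the hardware is in I can at least confirm what ESX sees from the service console, something like this (going from memory, so treat it as a sketch):

    # List the physical NICs ESX has detected (driver, speed, link state)
    esxcfg-nics -l

    # List the vSwitches and which vmnics are uplinked to each
    esxcfg-vswitch -l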

glynnd1
Expert

You have two NICs on board, so with the two dual-port NIC cards you have six, and still one free slot.

For VMotion you do require a gigabit connection. Some people share this link with other traffic, but given that you have just two hosts I would just run a crossover between the two and keep that traffic off the network.

So it looks like you need five NICs plus the iSCSI connectivity, but should you lose a switch or a NIC you'll have some downtime; you can't have everything.
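If it helps, the crossover setup is just a dedicated vSwitch with a VMkernel port on each host. From the service console it is roughly the following (the NIC number, names, and IPs are examples, and you still tick the "Use this port group for VMotion" box in the VI Client):

    # Dedicated vSwitch uplinked to the NIC on the crossover cable
    esxcfg-vswitch -a vSwitch2
    esxcfg-vswitch -L vmnic0 vSwitch2
    esxcfg-vswitch -A "VMotion" vSwitch2

    # VMkernel port on a private subnet (use .2 on the second host)
    esxcfg-vmknic -a -i 10.10.10.1 -n 255.255.255.0 "VMotion"

With a back-to-back crossover on a single subnet there is no VMkernel default gateway to worry about, so you can skip esxcfg-route.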

Subversive
Contributor

So I could do a crossover for VMotion, using one NIC on each host, and use the second onboard NIC on each host for the service console. That leaves me four physical connections for the three subnets, so I could pick the subnet with the most traffic and team two NICs for it. The HBA would take care of the iSCSI traffic, and I should be good to go!
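To make sure I have the layout straight, per host it would come out roughly like this (the vmnic numbers and port group names are placeholders, and the installer will already have created the Service Console vSwitch, so I'd adjust that rather than recreate it):

    # Second onboard port carries the service console
    esxcfg-vswitch -L vmnic1 vSwitch0

    # Busiest subnet gets the two-NIC team
    esxcfg-vswitch -a vSwitch1
    esxcfg-vswitch -L vmnic2 vSwitch1
    esxcfg-vswitch -L vmnic3 vSwitch1
    esxcfg-vswitch -A "Subnet-A" vSwitch1

    # The other two subnets get one NIC each
    # (vSwitch2 is the VMotion crossover from the earlier post)
    esxcfg-vswitch -a vSwitch3
    esxcfg-vswitch -L vmnic4 vSwitch3
    esxcfg-vswitch -A "Subnet-B" vSwitch3

    esxcfg-vswitch -a vSwitch4
    esxcfg-vswitch -L vmnic5 vSwitch4
    esxcfg-vswitch -A "Subnet-C" vSwitch4

    # vmnic0 (first onboard) goes to the VMotion crossover vSwitch, and the
    # iSCSI HBA talks to the AX150i directly, so no vSwitch is needed for storage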

doubleH
Expert

Just be careful with the crossover cable for VMotion. Even though it will work, VMware's documentation states that this type of setup is for demo or testing purposes only.

If you found this or any other post helpful, please consider using the Helpful/Correct buttons to award points.
Subversive
Contributor

Well, I could just set up a two-port VLAN on a gigabit switch and plug the two hosts into it. I should have plenty of free ports on the switch, so it wouldn't be a big deal from that perspective. That would be an okay way to do it as well, yes?

doubleH
Expert

yepper

If you found this or any other post helpful, please consider using the Helpful/Correct buttons to award points.