VMware Cloud Community
ws_6
Contributor

AX4-5i and ESX

Ok, I am trying to find documentation for ESX and the EMC AX4-5i (iSCSI). I can find information for ESX 3.0.1 but not for 3.5.

We are looking at setting up an AX4-5i with 3 VMware ESX servers. Here is our current setup:

-Right now we have 2 LAN switches, each running a different subnet (10.3.1.0/25 and 10.3.1.128/25). One port from each SP goes to each subnet.

-We want a fully redundant link from each ESX server to each subnet, using either 2 dual-port iSCSI HBAs or 2 NICs (one NIC connected to each subnet).
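Roughly, this is the layout we're aiming for:

Switch 1 - subnet 10.3.1.0/25:   SP-A port 0, SP-B port 0, and one iSCSI port from each ESX host
Switch 2 - subnet 10.3.1.128/25: SP-A port 1, SP-B port 1, and one iSCSI port from each ESX host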

Ok, here are the questions:

-I can't find the documentation for ESX 3.5 (only 3.0.1 and older). Does ESX 3.5 support 2 iSCSI HBAs per server with an AX4-5i?

-Will VMware support 2 iSCSI accelerator cards instead of using a software initiator?

kjb007
Immortal

To start, you can only have 2 iSCSI HBAs, so only 1 dual-port card would fit. Unless you plan to boot from SAN using the iSCSI HBAs, you don't really NEED them, and regular NICs will suffice.

If you have enough NICs, you can have two per segment and get the redundancy you need, but you can only have 1 sw iSCSI hba, so you won't be able to dedicate them solely to iSCSI. As long as your iSCSI segments are routable, you can use those two pNICs for iSCSI; 4 would be overkill in that scenario.

Just to add, you can also use your iSCSI hw initiators for one segment and then use sw for the other, if you don't route your iSCSI network.
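If you go the sw route, the service console setup is roughly along these lines. This is only a sketch: I'm assuming vmnic2 and vmnic3 are the two ports you set aside for iSCSI, that 10.3.1.10 and 10.3.1.11 are free on your first SP subnet, and the SP target address (10.3.1.5) is made up; adjust names and IPs to your environment.

esxcfg-vswitch -a vSwitch2                                 # vSwitch for iSCSI
esxcfg-vswitch -L vmnic2 vSwitch2                          # first iSCSI uplink
esxcfg-vswitch -L vmnic3 vSwitch2                          # second iSCSI uplink
esxcfg-vswitch -A iSCSI vSwitch2                           # VMkernel port group
esxcfg-vmknic -a -i 10.3.1.10 -n 255.255.255.128 iSCSI     # VMkernel IP on the SP subnet
esxcfg-vswitch -A "Service Console 2" vSwitch2             # 3.x sw iSCSI also needs console access to the iSCSI net
esxcfg-vswif -a vswif1 -p "Service Console 2" -i 10.3.1.11 -n 255.255.255.128
esxcfg-swiscsi -e                                          # enable the sw iSCSI initiator
vmkiscsi-tool -D -a 10.3.1.5 vmhba32                       # send-targets discovery to an SP port; your vmhba number may differ

Then rescan from the VI Client and the LUNs should show up.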

-KjB

Message was edited by: kjb007 : added hw iscsi info

vExpert/VCP/VCAP vmwise.com / @vmwise -KjB
ws_6
Contributor

Thanks for the info, but I still have some questions (I am new to iSCSI, VMware, and SANs). We are not planning to boot from SAN. How much of a speed hit will we take from using regular NICs instead of HBAs in ESX? Right now we are planning on using one internal NIC for VMotion, one for the console, getting a 4-port card for VM network traffic, and then using the other 2 PCIe slots for SAN (either HBAs or NICs). The SAN is sitting in its own subnets (each SP is in a separate subnet; Visios of each are attached).

Right now we have the SAN attached to 2 Windows servers (not VMs). Both are attached to the SAN using 2 2-port NICs; one NIC is active and the other is a hot spare (the AX4-5i will only run active/passive on the SPs). We are seeing less than 10% of the theoretical speed from the SAN right now (I have a case open with EMC, but no answers yet). Other people I have talked to think it is something to do with the MS iSCSI initiator and that I need HBAs. I can't get the Intel NICs to offload TCP (I have cases open with both Intel and Dell with no response; the servers are AMD Opteron based).

What about something like this as an alternative to HBAs or regular NICs: these would count as NICs to VMware, right? Do you think they would work better than the Intel NICs we have now? I have a call in to them to see if it will work in ESX.

kjb007
Immortal

The speed on sw and hw is pretty much the same; you won't see a noticeable difference. If you don't need the boot-from-SAN option, then that's one HBA benefit that goes away. TOE doesn't work altogether that well yet, so that won't help much there either. The one thing you get with hw that you don't get with sw is that with sw, the ESX service console has to issue the SCSI commands, which causes additional load on the service console, but again, it's not much of a hit, and you won't necessarily see a speed difference in I/O.

I've seen good performance with the MS iSCSI initiator as well, so I'm not sure that is your problem. If you can, try to take the switches out of the picture, meaning plug directly into the array. I'm not sure about the SEN1800; you'll have to check the HCL for that.

I would team the two internals together and run them active/active for both the console and VMotion: in the teaming section, set the failover order to NIC 1 then NIC 2 for the service console port group, and NIC 2 then NIC 1 for VMotion.
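Roughly, that looks like this (vmnic0/vmnic1 being the two onboard ports; the names are just examples):

vSwitch0: uplinks vmnic0 + vmnic1 (both active)
  Service Console port group -> failover order: vmnic0, then vmnic1
  VMkernel (VMotion) port group -> failover order: vmnic1, then vmnic0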

I wouldn't put all of my traffic on a 4-port NIC; if that card fails, you lose all of your VM traffic. I would use the additional slots to add NICs, and then team ports from different PCI slots to maximize redundancy.
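For example, something like this from the service console (assuming vmnic2 is a port on the quad-port card and vmnic6 is a port on a card in a different slot; check esxcfg-nics -l for the actual names on your hosts):

esxcfg-vswitch -a vSwitch1                  # vSwitch for VM traffic
esxcfg-vswitch -L vmnic2 vSwitch1           # uplink from the quad-port card
esxcfg-vswitch -L vmnic6 vSwitch1           # uplink from a card in another slot
esxcfg-vswitch -A "VM Network" vSwitch1     # port group the VMs attach to

That way a single card failure doesn't take out the whole vSwitch.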

-KjB

vExpert/VCP/VCAP vmwise.com / @vmwise -KjB