This topic could have been placed under ESXi 4.x as well, but I figured I would start here.
Has anyone performed extensive testing on the performance difference between, say, 2 physical NICs with MPIO (EqualLogic in this case) versus 4 physical NICs, with and without MPIO? Our ESX hosts (18 of them) are set up with 2 NICs shared between SAN iSCSI and guest iSCSI traffic, using the EqualLogic MPIO module for the SAN side. We used to dedicate 2 NICs to SAN iSCSI and 2 to guest iSCSI, but the utilization did not justify the amount of cabling in the rack (all hosts in one rack, split PDUs), so I reduced it to 2 NICs to clean up the cabling. Traditionally we have seen only 25-40% utilization across the iSCSI/SAN vSwitch. However, now that we are deploying larger hosts with 2 x 6-core CPUs and 288 GB of memory, I am planning to increase the physical NIC count for iSCSI/SAN traffic back to 4, ideally with MPIO enabled. My concern is the limit on concurrent iSCSI connections per EqualLogic pool: MPIO consumes a considerable number of connections when enabled on each host.
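To put rough numbers on the connection-limit concern, here is a minimal back-of-the-envelope sketch. The host count (18) is from the post; the volume count and sessions-per-volume values are assumptions (the EqualLogic MEM module's actual session count per volume depends on its configuration and firmware limits), so substitute your own figures:

```python
# Rough estimate of iSCSI connections consumed per EqualLogic pool
# when the EqualLogic MPIO (MEM) module is enabled on each host.
# Volume count and sessions-per-volume below are assumed placeholders.

def pool_connections(hosts, volumes, sessions_per_volume_per_host):
    """Total iSCSI sessions the pool must support."""
    return hosts * volumes * sessions_per_volume_per_host

# 18 hosts (from the post), an assumed 20 volumes in the pool,
# and an assumed 2 vs 4 MPIO sessions per volume per host.
two_nic = pool_connections(18, 20, 2)
four_nic = pool_connections(18, 20, 4)
print(two_nic, four_nic)  # → 720 1440
```

If MEM scales sessions with NIC count, doubling the NICs roughly doubles the sessions against the pool limit, which is exactly the trade-off I am weighing.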
So in essence, I am wondering if someone has already done the legwork of determining whether 4 NICs versus 2, with and without MPIO, offers a considerable performance improvement. I am mainly interested in random I/O latency and response times rather than sustained throughput, which really isn't an issue with our iSCSI traffic (at least in our environment, and I suspect in most).
I am going to try to run some tests myself, but I wanted to ask before spending a considerable amount of time on something that may have already been done.
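For anyone curious what I have in mind for the testing, something along these lines from inside a test guest would exercise random-I/O latency rather than throughput. This is just a sketch, not a finished methodology: the device path, block size, queue depth, and runtime are all assumptions to be tuned per environment:

```shell
# Random-read latency test with fio (run inside a test VM).
# /dev/sdb, 4k blocks, iodepth 32, 60 s runtime are placeholder values.
fio --name=rand-latency \
    --filename=/dev/sdb \
    --ioengine=libaio \
    --direct=1 \
    --rw=randread \
    --bs=4k \
    --iodepth=32 \
    --runtime=60 \
    --time_based \
    --group_reporting
```

I would run the same job against the 2-NIC and 4-NIC configurations (with MPIO on and off) and compare the reported completion-latency percentiles rather than the bandwidth numbers.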