I have some HP ProLiant DL380 G5-G7 servers. They have two or four internal Gbit ports (Broadcom NetXtreme II BCM5708 or NC382i adapters).
I also have two Intel NICs in each server (two 82571EB dual-port or two 82576 quad-port).
I'll enable jumbo frames when migrating to esxi 5.
Which should I choose for software iSCSI traffic, the internal ports or the Intel NICs?
Should I use the internal and Intel ports at the same time?
For failover reasons, should I spread the ports across at least two NICs?
The internal adapter on the G6 is a quad-port device, so for failover I shouldn't rely on it alone?
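For context, the jumbo-frame and port-binding setup I'm planning looks roughly like this (ESXi 5 esxcli syntax; the vSwitch, vmknic, and vmhba names below are placeholders for my environment):

```shell
# Raise the MTU on the iSCSI vSwitch (vSwitch1 is a placeholder name)
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000

# Raise the MTU on the iSCSI VMkernel interfaces to match
esxcli network ip interface set --interface-name=vmk1 --mtu=9000
esxcli network ip interface set --interface-name=vmk2 --mtu=9000

# Bind each VMkernel port to the software iSCSI adapter
# (vmhba33 is a placeholder; check with: esxcli iscsi adapter list)
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
```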
I had an incident where all onboard cards failed at the same time; a week later, all ports on a physical PCI quad-port card failed as well.
So I believe that for performance, using the internal onboard card might be better, but for high availability, use one internal port and one from the PCI slot.
Best of luck,
Reference: personal experience.
All those NICs sit on a PCIe bus, so the bus itself is not the bottleneck given that they are all 1 Gbit, and I see no performance problem in mixing onboard and add-on NICs on a vSwitch.
On redundancy, however, I agree: I have seen chipsets fail many times and take a vSwitch down because all of its NICs came from the same chipset.
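As a sketch of that recommendation, assuming an iSCSI vSwitch named vSwitch1 with one onboard port (say vmnic0) and one port from a PCIe card (say vmnic4) — all names are examples only — the uplinks can be mixed like this in ESXi 5:

```shell
# Attach one onboard and one PCIe-card uplink to the same vSwitch,
# so a single failing chipset cannot take down every uplink at once
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic0
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic4

# Verify which physical NICs now back the vSwitch
esxcli network vswitch standard list
```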
Regards,
Luca.
It's good to use the Intel NICs for virtual machine network traffic if they support Intel VMDq. You will get better network throughput, since packet sorting is offloaded to the NIC. You can find more details on this technology in the following video.
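For what it's worth, ESXi exposes VMDq through its NetQueue feature, which is enabled by default; if I recall the ESXi 5 option name correctly, you can confirm it like this:

```shell
# NetQueue (the mechanism that uses the NIC's VMDq queues) is on by default;
# this lists the kernel setting controlling it (option name from memory)
esxcli system settings kernel list -o netNetqueueEnabled
```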