Hi all, I'm in the process of setting up new ESXi blade servers with a dual-port 10Gb Emulex card capable of multichannel (effectively making 8 virtual NICs). When I enable multichannel, 8 vNICs show up in VMware, but they are all displaying as 10Gb connections, and I have confirmed the switch is set to autonegotiate. As I understand it, each of the 8 NICs should really be 2.5Gb, but that's not what's displaying.
I was wondering if each of these NICs is capable of using 10Gb worth of bandwidth if it's available, or are they actually capped at 2.5Gb each? (Is there any way to test this?)
(Side note: I know I'm supposed to adjust the bandwidth % in the BIOS/UEFI, but when I do that it forces me to tag a different VLAN on each function, which breaks my management connection and disconnects my server from the VDS. I'm still trying to figure that one out, but I was mainly asking about the throughput capability while the % is set to 0, like it is now.)
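For what it's worth, the usual way to test this is to run a throughput tool like iperf between two VMs pinned to the vNIC in question and see whether the transfer exceeds 2.5Gb/s. As a minimal sketch of the measurement idea only (plain Python sockets over loopback, so the numbers here reflect the local machine, not the vNIC; all names below are my own):

```python
import socket
import threading
import time

PAYLOAD = b"x" * 65536            # send in 64 KiB chunks
TOTAL_BYTES = 256 * 1024 * 1024   # 256 MiB bulk transfer

def _receiver(srv, result):
    # Accept one connection and drain the whole transfer.
    conn, _ = srv.accept()
    got = 0
    while got < TOTAL_BYTES:
        data = conn.recv(1 << 20)
        if not data:
            break
        got += len(data)
    conn.close()
    result["bytes"] = got

def measure_throughput_gbps(host="127.0.0.1"):
    """Time a bulk TCP transfer and return throughput in Gb/s."""
    srv = socket.socket()
    srv.bind((host, 0))
    srv.listen(1)
    port = srv.getsockname()[1]
    result = {}
    t = threading.Thread(target=_receiver, args=(srv, result))
    t.start()

    cli = socket.create_connection((host, port))
    start = time.perf_counter()
    sent = 0
    while sent < TOTAL_BYTES:
        cli.sendall(PAYLOAD)
        sent += len(PAYLOAD)
    cli.close()
    t.join()
    srv.close()
    elapsed = time.perf_counter() - start
    return (result["bytes"] * 8) / elapsed / 1e9

if __name__ == "__main__":
    print(f"throughput: {measure_throughput_gbps():.2f} Gb/s")
```

In practice you'd point a real tool like iperf at a VM on another host, so the traffic actually crosses the vNIC and switch; if the result sits right around 2.5Gb/s under no other load, the vNIC is likely capped rather than just sharing.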
So it sounds like you are talking about an IBM BladeCenter with a Virtual Fabric Adapter, or VFA. Please let me know if it's not a VFA(II) adapter. But if it is, I think you are seeing the maximum speed of the vNIC because the physical port can go up to 10Gb. In my opinion it would be nice if it displayed the bandwidth set from switch independent or virtual fabric mode instead. Each physical 10GbE port presents 4 vNICs to the OS; since you have two ports, you see 8 vNICs. The bandwidth for each vNIC can be configured in increments of 100Mb, up to a maximum of the physical port's bandwidth. The vNICs on a given physical port share that port's 10Gb of bandwidth.
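The sharing rule above (4 vNICs per port, minimums set in 100Mb increments, everything bounded by the 10Gb physical port) can be sketched as a quick sanity check. This is just an illustration of the arithmetic; the function and constants are my own, not from any Emulex or IBM tool:

```python
PORT_CAPACITY_MB = 10_000  # one 10GbE physical port, in Mb
VNICS_PER_PORT = 4         # each physical port presents 4 vNICs
INCREMENT_MB = 100         # bandwidth is set in 100Mb steps

def validate_allocation(minimums_mb):
    """Check a per-vNIC minimum-bandwidth plan for one physical port."""
    if len(minimums_mb) != VNICS_PER_PORT:
        raise ValueError(f"expected {VNICS_PER_PORT} vNICs per port")
    for bw in minimums_mb:
        if bw % INCREMENT_MB != 0:
            raise ValueError(f"{bw} Mb is not a multiple of {INCREMENT_MB} Mb")
        if not 0 <= bw <= PORT_CAPACITY_MB:
            raise ValueError(f"{bw} Mb exceeds the {PORT_CAPACITY_MB} Mb port")
    if sum(minimums_mb) > PORT_CAPACITY_MB:
        raise ValueError("minimums oversubscribe the physical port")
    return True

# An even split guarantees each vNIC 2.5Gb of the shared 10Gb port.
validate_allocation([2_500, 2_500, 2_500, 2_500])
```

The point is that the configured values are guaranteed shares of one 10Gb pipe, which is why the OS still reports the 10Gb link speed for every vNIC.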
Maybe this guide can help. Starting on page 22 it talks about the architecture.