Roveer
Enthusiast

ESXi 6.0 on a Dell R710 with a Mellanox MNPA19-XTR 10Gb ConnectX-2 Card

So I'm pretty new to ESXi, but I've been making my way around setting up all kinds of VMs.

I wanted to do some 10Gb testing, so I threw 2 MNPA19-XTR 10Gb Mellanox ConnectX-2 NICs into my Dell R710 that's running ESXi 6.0.

They were identified by ESXi, and when I assigned them to VMs I had to use VMXNET3 as the adapter type.  That's what they show up as in the guest OS as well.

Am I doing this correctly?
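
From what I've read, VMXNET3 is the paravirtual adapter the guest is supposed to see, so the guest never shows the Mellanox card itself.  On the host side, my understanding is you can confirm ESXi actually picked the cards up at 10Gb with something like this (the vmnic names are just examples, yours will differ):

    esxcli network nic list    # the ConnectX-2 ports should show up as vmnicX with a 10000 Mbps link
    esxcfg-nics -l             # same info the older way: driver, link state, and speed per NIC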

I set up the 2 cards, each assigned to its own Win10 VM, each with its own 8GB RAM disk.  Transferring a 6GB file between them over the 10Gb cards only yielded 250-300 MB/s, far short of the 600-800 MB/s I've seen in videos.  iperf between the machines only shows 2.98 Gbps, but iperf -P 5 (five simultaneous streams) gave me 9.61 Gbps.  I understand that trying to do this solely with VMs might be part of the problem, but I want to make sure I'm configuring things correctly.  I've done the driver tuning (settings) per some videos I've watched, which did bump things up, but not by much.
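
For reference, the iperf runs were nothing fancy, roughly like this (iperf2 syntax; the IP is just a placeholder for the other VM's 10Gb address):

    iperf -s                     # on the receiving VM
    iperf -c 10.0.10.2           # single stream - this was the ~2.98 Gbps result
    iperf -c 10.0.10.2 -P 5      # five parallel streams - this is what got up to ~9.61 Gbps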

I noticed there are ESXi drivers for Mellanox cards, but the drivers for the ConnectX-2 cards say they are for ESXi 5.5.  I installed them on ESXi, but they don't show up anywhere.
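
If it helps anyone troubleshoot the same thing, my understanding is you can check whether the driver package actually took with something like this (vmnic4 is just a placeholder for whichever port the card came up as):

    esxcli software vib list | grep -i mlx    # the Mellanox package (mlx4 family) should be listed if it installed
    esxcli network nic get -n vmnic4          # the "Driver" line shows which driver that port is really using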

What is the correct method for getting 10Gb drivers into ESXi?  I've seen posts where people say to just use VMXNET3 and it's fine, but I'm not seeing anywhere near the throughput I was hoping for (I understand this is only a lab test).
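
For what it's worth, the way I understand driver offline bundles normally get installed is something like this, followed by a reboot (the datastore path and bundle name are just placeholders for wherever the Mellanox download ends up):

    esxcli software vib install -d /vmfs/volumes/datastore1/MLNX-ConnectX2-offline-bundle.zip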

--- edit ---

I just tried the same test between Win10 and Server 2012 R2 (same setup on both VMs).  At first I was only getting 1Gb speeds and noticed the traffic was going across the 1Gb NICs.  I made a hosts entry and mapped the drive using it (I had originally mapped using the 10Gb IP address, figuring it would use that NIC, but apparently it didn't).  Once I did the hosts/map thing I got the result in the screenshot attached below.  Now that's what I'm talking about!!!
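
For anyone trying to reproduce this, the hosts/mapping trick was just the usual thing, along these lines (the name, address, and share are made-up examples):

    # added to C:\Windows\System32\drivers\etc\hosts on the client VM
    10.0.10.2    vm2-10g

    # then mapped the share by name instead of by IP
    net use Z: \\vm2-10g\ramdisk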

--- edit 2 ---

I just put the Win10 VM back in place of the Server 2012 R2 (so now it's Win10 on both sides), did my little hosts thing, and basically got the same 800+ MB/s speeds.  Very strange.  Map the drive using the IP address and it goes slow, but map it using the host name and it runs at full line speed?  Not sure I understand that.

--- edit 3 ---

Just went back and looked with it mapped via the direct IP.  It wasn't using the 10Gb NIC at all; it was using the 1Gb.  But how was it getting 300+ MB/s over gig?  That maxes out at 125 MB/s.  Compression in SMB 3.0?  Either way, if I use the host name to map the drive I see the traffic on the 10Gb NIC and it's flying!!!
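
My best guess on the 300+ MB/s over gig: both VMs live on the same host, so traffic between them on the same vSwitch isn't actually limited to the 1Gb NIC's line rate, and SMB 3.0 Multichannel can also spread a copy across more than one NIC.  If anyone wants to see which interfaces SMB is really using, running these in PowerShell on the client during a transfer shows it:

    Get-SmbMultichannelConnection    # lists the client/server interfaces SMB is actually using
    Get-SmbConnection                # shows the SMB dialect negotiated for each connection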

[screenshot attachment: 10GB.jpg]

Thanks,

Roveer
