VMware Cloud Community
EricNg98
Contributor

End-to-End NIC Teaming on ESXi with Red Hat Linux guest OS

I am currently running into a lack of bandwidth for my backup server running inside the VM environment. To increase the bandwidth coming from a number of backup clients to my backup server on ESXi 5, I would like to set up 2 x NIC bonding inside my Red Hat server and configure NIC teaming using Route based on IP hash on my ESXi host.

Is that a workable way of providing an end-to-end 2Gb connection to the outside world for my Red Hat backup media server, or is there another way to do it?
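
For reference, this is roughly the in-guest bonding setup I had in mind; just a sketch using RHEL 6-style ifcfg files, with the interface names, IP addresses, and bonding mode as placeholders:

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.10.50
NETMASK=255.255.255.0
BONDING_OPTS="mode=balance-xor miimon=100"

# /etc/sysconfig/network-scripts/ifcfg-eth0  (and the same for ifcfg-eth1)
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes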

2 Replies
SteveFuller2011
Enthusiast

I don't believe you need to add a second NIC to the Red Hat VM that's acting as your backup server, as a single vNIC can carry well over 1Gbps of load.

I've got a couple of Red Hat servers, both with a single VMXNET NIC. The following shows what ethtool reports for the NIC on one of those servers.

[sfuller@rhel7 ~]$ sudo ethtool eth0
Settings for eth0:
        Supported ports: [ TP ]
        Supported link modes:   1000baseT/Full
        Supports auto-negotiation: No
        Advertised link modes:  Not reported
        Advertised auto-negotiation: No
        Speed: 1000Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 0
        Transceiver: internal
        Auto-negotiation: off
        Link detected: yes

When I run a TCP transfer (using thrulay) from one server to the other, you can see it averages roughly 2.7Gbps over the 10-second transfer.

[sfuller@rhel7 ~]$ thrulay -t10 rhel6
# local window = 219136B; remote window = 219136B
# block size = 8192B
# Path MTU = 1500B, MSS = 1448B
# test duration = 10s; reporting interval = 1s
SID     begin,s  end,s  Mb/s     RTT,ms: min   avg   max
(0)     0.000    1.000 2550.656    0.286    0.540    7.141
(0)     1.000    2.000 2421.206    0.290    0.656   20.833
(0)     2.000    3.000 2828.721    0.300    0.542    8.399
(0)     3.000    4.000 2844.359    0.298    0.541    8.061
(0)     4.000    5.000 2727.592    0.280    0.567   14.584
(0)     5.000    6.000 2072.464    0.303    0.734  202.781
(0)     6.000    7.000 2872.825    0.283    0.529    1.413
(0)     7.000    8.000 2846.425    0.287    0.535    5.918
(0)     8.000    9.000 2869.543    0.299    0.536    1.436
(0)     9.000   10.000 2841.009    0.294    0.532   12.221
(0)#    0.000   10.000 2687.479    0.280    0.566  202.781


So this shows we can get more than 1Gbps of throughput on a single "GE NIC" in a Red Hat VM.

What you will need to do is, as you mention, change the load balancing policy on the ESXi host the Red Hat VM is running on to Route based on IP hash. That allows the VM's traffic to be spread across more than a single pNIC on the host. You also need multiple clients whose IP addresses hash to different uplinks: the policy XORs the least significant bytes of the source and destination IP addresses and takes the result modulo the number of uplinks, so two clients only land on different pNICs if that calculation gives different answers. You've stated you've got multiple clients, so unless you're really unlucky with the IP addresses those clients have, you should be good there.
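
To make that concrete, here's a rough worked example of the hash with made-up addresses, assuming a team of two uplinks and a backup server whose IP ends in .50 (and bear in mind that Route based on IP hash also requires a static EtherChannel / port-channel on the physical switch ports):

$ # uplink = (last octet of client IP XOR last octet of server IP) mod 2
$ for client in 21 22 23 24; do
>   echo "client .$client -> uplink $(( (client ^ 50) % 2 ))"
> done
client .21 -> uplink 1
client .22 -> uplink 0
client .23 -> uplink 1
client .24 -> uplink 0

As long as the results spread across both uplinks like this, the aggregate traffic from many clients can exceed what a single 1Gbps pNIC could carry.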

Hope this helps and good luck.

MKguy
Virtuoso

Like Steve said, you don't need a 2nd vNIC in the guest, as vNICs aren't bound by regular physical limitations and can even exceed the bandwidth presented by the virtual link.

If you aren't doing so already with your VM, I highly suggest switching to vmxnet3. It presents a 10Gbit link to the guest and can significantly reduce the CPU load generated by heavy traffic.
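
A quick way to check which vNIC type the guest is currently using (an illustrative session; your interface name may differ) is ethtool -i:

[user@rhel ~]$ ethtool -i eth0
driver: vmxnet3
(remaining output trimmed; if this shows e1000 or vmxnet instead, the VM is not on vmxnet3 yet)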

Using iperf, I've seen results of 25+Gbit/s between RHEL VMs on the same host (so with one "10Gbit vNIC" obviously), similar to this:

http://twitpic.com/66wbdo
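
If you want to run a similar test yourself, a basic iperf run between two VMs looks roughly like this (the hostname is a placeholder; results will obviously depend on your hosts and vNIC type):

# on the backup server VM
[user@rhel ~]$ iperf -s

# on a client VM
[user@rhel ~]$ iperf -c backup-server -t 10 -P 4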

-- http://alpacapowered.wordpress.com