VMware Cloud Community
iNik26
Enthusiast

Nested ESXi 6.7 poor network performance on 10G network

Hello,

I'm trying to set up a nested lab on two physical hosts with ESXi 6.7 U1 installed. I've configured a DVS and a distributed port group (dpg) dedicated to the nested ESXi hosts.

To enable MAC learning on that dpg I followed this post (thanks, William):

https://www.virtuallyghetto.com/2018/04/native-mac-learning-in-vsphere-6-7-removes-the-need-for-prom...
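
For reference, the enable step from that post looks roughly like this (a sketch assuming the MacLearn.ps1 functions from the article are already dot-sourced into the PowerCLI session):

# Enable MAC learning on the nested-ESXi trunk port group; promiscuous
# mode can stay off, since MAC learning replaces it
Set-MacLearn -DVPortgroupName @("vds01 - vESXi-Trunk") `
    -EnableMacLearn $true `
    -EnablePromiscuous $false `
    -EnableForgedTransmit $true `
    -EnableMacChange $false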

Checking if it's enabled:

PS C:\Users\Administrator.LAB> Get-MacLearn -DVPortgroupName @("vds01 - vESXi-Trunk")

DVPortgroup            : vds01 - vESXi-Trunk
MacLearning            : True
NewAllowPromiscuous    : False
NewForgedTransmits     : True
NewMacChanges          : False
Limit                  : 4096
LimitPolicy            : DROP
LegacyAllowPromiscuous : True
LegacyForgedTransmits  : True
LegacyMacChanges       : False

Seems good.

But when I test network performance (starting with two nested hosts on the same physical host), the results are quite poor:

iperf client:

[root@m-esxi01:/usr/lib/vmware/vsan/bin] ./iperf3  -i -t300 -c 172.28.101.2 -fm
Connecting to host 172.28.101.2, port 5201
[  4] local 172.28.101.1 port 59316 connected to 172.28.101.2 port 5201
iperf3: getsockopt - Function not implemented
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-10.00  sec  2.02 GBytes  1738 Mbits/sec    0   0.00 Bytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  2.02 GBytes  1738 Mbits/sec    0             sender
[  4]   0.00-10.00  sec  2.02 GBytes  1738 Mbits/sec                  receiver
iperf Done.
[root@m-esxi01:/usr/lib/vmware/vsan/bin]

iperf server (this runs a renamed copy of the binary, iperf3.copy, because ESXi won't start the stock iperf3 in server mode from its original path):

[root@m-esxi02:/usr/lib/vmware/vsan/bin] ./iperf3.copy -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 172.28.101.1, port 44099
[  5] local 172.28.101.2 port 5201 connected to 172.28.101.1 port 59316
iperf3: getsockopt - Function not implemented
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-1.00   sec   232 MBytes  1.94 Gbits/sec
iperf3: getsockopt - Function not implemented
[  5]   1.00-2.00   sec   183 MBytes  1.54 Gbits/sec
iperf3: getsockopt - Function not implemented
[  5]   2.00-3.00   sec   208 MBytes  1.75 Gbits/sec
iperf3: getsockopt - Function not implemented
[  5]   3.00-4.00   sec   212 MBytes  1.78 Gbits/sec
iperf3: getsockopt - Function not implemented
[  5]   4.00-5.00   sec   198 MBytes  1.66 Gbits/sec
iperf3: getsockopt - Function not implemented
[  5]   5.00-6.00   sec   203 MBytes  1.71 Gbits/sec
iperf3: getsockopt - Function not implemented
[  5]   6.00-7.00   sec   199 MBytes  1.67 Gbits/sec
iperf3: getsockopt - Function not implemented
[  5]   7.00-8.00   sec   200 MBytes  1.67 Gbits/sec
iperf3: getsockopt - Function not implemented
[  5]   8.00-9.00   sec   217 MBytes  1.82 Gbits/sec
iperf3: getsockopt - Function not implemented
[  5]   9.00-10.00  sec   197 MBytes  1.65 Gbits/sec
iperf3: getsockopt - Function not implemented
[  5]  10.00-10.10  sec  23.0 MBytes  1.92 Gbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-10.10  sec  0.00 Bytes  0.00 bits/sec                  sender
[  5]   0.00-10.10  sec  2.02 GBytes  1.72 Gbits/sec                  receiver
-----------------------------------------------------------

Are these "normal" results for a nested environment? If not, what can I check to improve performance?

Thanks, kind regards,

Nicola

1 Reply
sjesse
Leadership

I believe it is. I did some research on past forum posts recently, and people believe it's an undocumented limit in vSphere and Workstation. Technically there is no limit, and things like slow disks and full CPU queues can slow it down, but some people could never get the bandwidth they expected.
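
For what it's worth, the CPU-queue side is easy to sanity-check from PowerCLI by pulling CPU ready time for the nested ESXi VMs while iperf runs (a sketch; the VM names are taken from the prompts above and may differ in your inventory):

# Realtime CPU ready for the nested ESXi VMs. Values are milliseconds
# summed over each 20-second sample, so %RDY = Value / 20000 * 100.
Get-Stat -Entity (Get-VM -Name "m-esxi01","m-esxi02") `
    -Stat cpu.ready.summation -Realtime -MaxSamples 15 |
    Select-Object Entity, Timestamp, Value

As a rough rule of thumb, sustained ready time above a few percent per vCPU would point at CPU scheduling rather than the network path.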
