VMware Networking Community
vSohill
Expert

Edge Services Gateway maximum bandwidth capacity

Hi,

Is there a bandwidth capacity limitation for the Edge Services Gateway? For instance, with an X-Large Edge (6 vCPU and 8 GB RAM) on a host with a 25 Gb uplink, will the Edge bandwidth be 25 Gbps? And the same question for a larger uplink: if I have an ESXi host with a 100 Gb uplink, will the Edge bandwidth be 100 Gbps?

Thank you

4 Replies
sk84
Expert

There are recommended configuration maximums and the maximum throughput is mainly limited by firewall performance and load balancer performance.

See:

https://docs.vmware.com/en/VMware-NSX-for-vSphere/6.4/NSX%20for%20vSphere%20Recommended%20Configurat...

https://anthonyspiteri.net/nsx-bytes-updated-nsx-edge-feature-and-performance-matrix-2/

---
Regards, Sebastian
VCP6.5-DCV // VCP7-CMA // vSAN 2017 Specialist
Please mark this answer as 'helpful' or 'correct' if you think your question has been answered correctly.
vSohill
Expert

Thank you,

If we assume that there are no limitations from the firewall and load balancer, will the Edge throughput be almost equal to that of the physical network card?

What does "With a trunk, an ESG can have up to 200 subinterfaces" mean? Is it about the links between the Edge and vDS port groups, or about the DLR?

sk84
Expert

If we assume that there are no limitations from the firewall and load balancer, will the Edge throughput be almost equal to that of the physical network card?

NSX components use the vmxnet3 adapter for their vNICs, so it depends on a number of factors. Basically, the vmxnet3 adapter presents itself to the guest operating system as a 10GBASE-T network card, so the operating system is generally limited to a reported maximum speed of 10 Gbps. However, within the same ESXi host, different virtual machines can communicate much faster, because physical signaling restrictions do not apply between VMs on the same host in a virtualized environment.

In addition, with vSphere 6.0 VMware introduced a feature that lets the vmxnet3 adapter use 40 Gbit network cards more efficiently. To increase the maximum speed and throughput of a single vNIC, it can now use multiple hardware queues instead of just one. This is enabled by a special setting in the .vmx file, and the physical network card must support RSS for it to work (see the sketch at the end of this answer).

See: Network Improvements in vSphere 6 Boost Performance for 40G NICs - VMware VROOM! Blog - VMware Blogs

And this functionality has been further enhanced in vSphere 6.7:

https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/performance/whats-new-vs...

But to my knowledge there is currently no broad support for virtual 40 and 100 Gbit network cards. ESXi hosts can handle such uplinks, but a single vNIC can only make use of them in a limited way.
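If you want to experiment with the multi-queue feature mentioned above, the .vmx entries look roughly like this (a minimal sketch for a vNIC at index 0; the option names are taken from the linked blog post and performance papers and should be verified for your vSphere version):

ethernet0.pnicFeatures = "4"
ethernet0.ctxPerDev = "1"

The first entry is the setting described in the 40G NIC blog post; it allows that vNIC to use multiple hardware queues, provided the physical NIC supports RSS. The second is a related transmit-thread tuning option discussed in the newer performance papers and is included here only as an assumption to check against those documents. Both entries have to be added while the VM is powered off, and they are tuning knobs, not a guarantee of 40/100 Gbit line rate for a single vNIC.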

What does "With a trunk, an ESG can have up to 200 subinterfaces" mean? Is it about the links between the Edge and vDS port groups, or about the DLR?

You can set the type of an ESG interface to "Trunk" and add sub-interfaces to that interface. These sub-interfaces have the same capabilities as normal interfaces and can be connected to a logical switch, a VLAN, or a distributed port group. This allows you to connect your ESG to up to 200 logical switches or distributed port groups instead of a maximum of 10.

See: Add a Sub Interface
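If you prefer to script this, the Python sketch below shows roughly how a sub-interface could be added through the NSX-v REST API. It is only an illustration: the endpoint path, the XML fields, and all names, IDs and credentials are assumptions/placeholders and should be checked against the NSX API guide for your version.

# Sketch only: add a sub-interface to an ESG trunk vNIC via the NSX-v REST API.
# The URL path and XML schema below are assumptions based on the NSX 6.x API guide;
# verify them before use. All names, IDs and credentials are placeholders.
import requests

NSX_MANAGER = "https://nsx-manager.lab.local"   # placeholder NSX Manager address
EDGE_ID = "edge-1"                               # placeholder ESG identifier
TRUNK_VNIC_INDEX = 1                             # index of the vNIC whose type is "trunk"

# Minimal sub-interface definition: tunnel 1, attached to a logical switch, with one IP.
payload = """
<subInterfaces>
  <subInterface>
    <name>sub-if-example</name>
    <tunnelId>1</tunnelId>
    <logicalSwitchId>virtualwire-101</logicalSwitchId>
    <isConnected>true</isConnected>
    <addressGroups>
      <addressGroup>
        <primaryAddress>10.10.1.1</primaryAddress>
        <subnetMask>255.255.255.0</subnetMask>
      </addressGroup>
    </addressGroups>
  </subInterface>
</subInterfaces>
"""

response = requests.post(
    f"{NSX_MANAGER}/api/4.0/edges/{EDGE_ID}/vnics/{TRUNK_VNIC_INDEX}/subinterfaces",
    auth=("admin", "password"),                  # placeholder credentials
    headers={"Content-Type": "application/xml"},
    data=payload,
    verify=False,                                # lab only; use proper certificates in production
)
response.raise_for_status()
print("Sub-interface created, HTTP status:", response.status_code)

In practice most people simply add the sub-interface in the vSphere Web Client as described in the linked documentation; scripting is mainly useful if you need to create many of the up to 200 sub-interfaces in an automated way.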

---
Regards, Sebastian
VCP6.5-DCV // VCP7-CMA // vSAN 2017 Specialist
Please mark this answer as 'helpful' or 'correct' if you think your question has been answered correctly.
vSohill
Expert

Thank you, Sebastian.
