Hi,
Is there a capacity limitation for the Edge Services Gateway? For instance, with an X-Large ESG (6 vCPU, 8 GB RAM) on a host with a 25 Gb uplink, will the bandwidth be 25 Gb? And the same question for an ESXi host with a 100 Gb uplink: will the Edge bandwidth be 100 Gb?
Thank you
If we assume there are no limitations from the firewall and load balancer, will the Edge throughput be almost equal to that of the physical network card?
NSX components use the vmxnet3 adapter for their vNICs, so it depends on a number of factors. The vmxnet3 adapter presents itself to the guest operating system as a 10GBASE-T network card, so the operating system is generally limited to a nominal speed of 10 Gbps. However, virtual machines on the same ESXi host can communicate much faster than that, because physical signaling restrictions do not apply to traffic between VMs on the same host.
In addition, since vSphere 6.0 VMware has offered a feature that lets the vmxnet3 adapter use 40 Gbit network cards more efficiently: a single vNIC can now use multiple hardware queues instead of one, which increases its maximum speed and throughput. This is enabled by a special setting in the vmx file, and the physical network card must support RSS for it to work.
See: Network Improvements in vSphere 6 Boost Performance for 40G NICs - VMware VROOM! Blog - VMware Blogs
And this functionality has been further enhanced in vSphere 6.7:
But to my knowledge there is currently no broad support for virtual 40 and 100 Gbit network cards. ESXi hosts can handle such uplinks, but a single vNIC can only use them in a limited way.
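As a rough sketch, the per-vNIC vmx setting described in the linked VROOM blog post looks like the fragment below. The key name (`ethernetX.pnicFeatures`, where X is the vNIC index) is taken from that post and should be verified against your vSphere version before use; the physical NIC must also support RSS.

```
# vmx file of the Edge VM (sketch, verify against the VROOM blog post).
# Enables multiple hardware queues (RSS) for vNIC 0:
ethernet0.pnicFeatures = "4"
```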
What does "With a trunk, an ESG can have up to 200 subinterfaces" mean? Does it refer to the links between the Edge and the vDS port groups, or to the DLR?
You can set the type of an ESG interface to "Trunk" and add subinterfaces to it. These sub-interfaces have the same capabilities as normal interfaces and can be connected to a logical switch, a VLAN, or a distributed port group. This allows you to connect your ESG to up to 200 logical switches or distributed port groups instead of a maximum of 10.
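Conceptually this works like 802.1Q subinterfaces on any router: one trunk link carries many tagged VLANs, and each sub-interface terminates one tag with its own IP interface. As a rough analogy only (plain Linux iproute2 commands, not NSX; the interface names and addresses are made up for illustration):

```
# eth0 plays the role of the ESG trunk interface; each tagged VLAN
# subinterface plays the role of one ESG sub-interface.
ip link add link eth0 name eth0.100 type vlan id 100
ip link add link eth0 name eth0.200 type vlan id 200
ip addr add 10.0.100.1/24 dev eth0.100
ip addr add 10.0.200.1/24 dev eth0.200
```

In NSX the same idea is configured through the ESG UI or API ("Add a Sub Interface"), not with shell commands.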
See: Add a Sub Interface
There are recommended configuration maximums, and the maximum throughput is mainly limited by firewall and load balancer performance.
See:
https://anthonyspiteri.net/nsx-bytes-updated-nsx-edge-feature-and-performance-matrix-2/
Thank you,
Thank you Sebastian,