VMware Networking Community
niceguy001
Enthusiast

NSX-T: why use a bare metal edge?

The direct answer is high performance, which is mentioned in the official documentation.

The second reason, I believe, is to simplify the networking between transport nodes and the edge transport nodes.

But I wonder what kind of workloads are recommended for physical edge nodes: container services, or traditional services such as web and database?

In the NSX for vSphere environment there was only one form factor for deploying the edge node, and the virtual appliance seemed able to handle all the N-S traffic.

So, does anyone know in what kind of situation the throughput bottleneck matters enough for a datacenter to use bare metal edges?

10 Replies
mauricioamorim
VMware Employee

It is not only throughput that might drive the choice of a bare metal Edge. Some services, like load balancing and VPN, also influence it. If you have high demands on these services, bare metal edges might be necessary. Take a look at the configuration maximums and you will see that the numbers for these services are considerably higher on bare metal.

https://configmax.vmware.com/guest?vmwareproduct=VMware%20NSX-T&release=NSX-T%20Data%20Center%202.4....

daphnissov
Immortal

The other design reason for using bare-metal edges is convergence time. If one edge fails, the other will take over in under a second (in the neighborhood of 900 ms). If the edges are virtual machines, the best they can do is around 3 seconds.
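For context on where those numbers come from: BFD detection time is simply the negotiated transmit interval multiplied by the detect multiplier (per RFC 5880). A minimal sketch of that arithmetic, assuming the NSX-T 2.x minimum timers implied by the figures in this thread (300 ms on bare metal, 1000 ms on VM edges, multiplier of 3); verify against your version's documentation:

```python
# BFD detection time = transmit interval x detect multiplier (RFC 5880).
# The interval values below are assumptions based on the figures quoted
# in this thread (NSX-T 2.x era); check your version's documented minimums.

def bfd_detection_time_ms(interval_ms: int, multiplier: int = 3) -> int:
    """Worst-case time to declare a BFD peer down."""
    return interval_ms * multiplier

print(bfd_detection_time_ms(300))   # bare-metal edge: 900 ms, "under a second"
print(bfd_detection_time_ms(1000))  # VM edge: 3000 ms, the ~3 seconds figure
```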

oziltener
VMware Employee
[Accepted Solution]

Hi Niceguy001

Another good aspect is that it might be easier to integrate two bare-metal (BM) edge nodes than to build a dedicated vSphere cluster to run VM-based edge nodes, in case you prefer purpose-built vSphere clusters instead of a collapsed-cluster approach.

A dedicated vSphere cluster typically requires some kind of shared datastore, and when you deal with vSAN, your vSphere cluster requires 4 hosts (3 is the minimum, but 4 hosts give you some operational flexibility). On top of that, you need networks for vMotion, vSAN or IP storage, etc.

In the case of a BM edge node, you can simply use local SSDs in RAID 1.

And BM edge nodes will most likely give you an additional level of design flexibility: you can deploy them as part of a Tier-0 Gateway in active/standby (A/S) mode, because BM edge nodes typically come with 40 Gbps interfaces, and a single 40 Gbps interface might be enough for all North-South traffic. When your Tier-0 Gateway runs in A/S mode, you can enable Tier-0 stateful services, for example an edge firewall. With VM-based edge nodes, I very often see the edges deployed in active/active (A/A) mode, i.e. leveraging ECMP, to provide the required North-South bandwidth. For example, to provide 40 Gbps of North-South bandwidth you might need 2 or more VM-based edge nodes in A/A mode. That design choice of A/A with ECMP does not let you run stateful services on the Tier-0 Gateway.
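To make that concrete: the HA mode is a property of the Tier-0 Gateway itself. Below is a minimal sketch of setting it through the NSX-T Policy API with Python; the manager address, credentials, and Tier-0 ID are placeholders, and the exact fields should be checked against the API reference for your NSX-T version.

```python
import requests

# Placeholder values -- replace with your NSX Manager and Tier-0 gateway ID.
NSX_MANAGER = "https://nsx-mgr.example.com"
TIER0_ID = "t0-gateway-01"
AUTH = ("admin", "VMware1!")  # use proper credential handling in practice

# Active/Standby allows stateful services (edge firewall, NAT, ...);
# Active/Active (ECMP) trades those for aggregate North-South bandwidth.
payload = {
    "ha_mode": "ACTIVE_STANDBY",        # or "ACTIVE_ACTIVE" for ECMP
    "failover_mode": "NON_PREEMPTIVE",  # avoid a second failover on recovery
}

resp = requests.patch(
    f"{NSX_MANAGER}/policy/api/v1/infra/tier-0s/{TIER0_ID}",
    json=payload,
    auth=AUTH,
    verify=False,  # lab only; verify certificates in production
)
resp.raise_for_status()
```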

Hope this helps

Oliver

niceguy001
Enthusiast

Hi oziltener

Thanks for the detailed reply! It helped a lot!

After you mentioned the A/A mode of VM edge nodes, which run ECMP and therefore cannot run stateful services, I went back to the documentation and found that the installation guide (2.2 or 2.4) says "in active-standby mode the gateway can also provide stateful services".

Is there a difference between my understanding and your experience? Or is the installation guide basically correct, but throughput would be affected by stateful services?

Thanks! :)

comahony
VMware Employee

Hi niceguy001, when the T0 Gateway is running in A/A (i.e. ECMP) mode for the Service Router (SR), you cannot run stateful services on the Service Router. In A/A mode we are only able to run stateless services, such as a stateless edge firewall. Please share the link to the passage about stateful services; I assume it is a documentation error, and I will open a documentation bug.

Cheers Oliver

harikrishnant
Contributor

NSX-T doesn't have hardware VTEP support as of now. So for use cases like overlay-to-VLAN bridging, where the overlay workloads demand higher data transfer rates to external storage on a VLAN network, it's good to do the bridging on bare-metal edges leveraging DPDK acceleration.

Secondly, when you use NSX load balancers with SSL offloading, it's good to have bare-metal edges, as they support a higher TPS (transactions per second).

egekara
Contributor

I suggest you read the VMware vCloud NFV 3.0 reference architecture for the full picture:

https://docs.vmware.com/en/VMware-vCloud-NFV/3.0/vmware-vcloud-nfv-30.pdf

Copied from this document:

Table 8-1. Edge Node Options

VM form factor:

  • Production deployments with centralized services like NAT, Edge firewall, and load balancer.
  • Workloads that can tolerate acceptable performance degradation with virtual edges.
  • Can tolerate slower failure convergence using BFD (3 seconds).
  • Lower-cost option compared to dedicated bare-metal nodes.
  • Proofs of concept and trial setups.

Bare metal form factor:

  • Production deployments with centralized services like NAT, Edge firewall, and load balancer.
  • Higher throughput (more than 10 Gbps).
  • Faster failure convergence using BFD (less than 1 second).
  • Mainly NFV workloads, e.g. mobile networks with real-time high-availability requirements.

regards,

George

harikrishnant
Contributor

I published a blog post on the NSX-T Edge form factor comparison, bare metal vs. VM. I covered all the details I had; let me know if it helps.

https://vxplanet.com/2019/06/13/nsx-t-edges-baremetal-vs-vm-comparison/

niceguy001
Enthusiast

Hi guys,

Thanks for the answers!

Every one of you gave me beautiful answers; I appreciate it, and I'm currently choosing the best one by rolling a die.

I have one last question and hope someone can help clarify:

What is the minimum number of NIC ports required for a bare-metal edge server? Is it the same as a VM edge, which is 2 NIC ports?

(I do know that it is not recommended to use fewer than 4 NIC ports; I'm just curious.)

Thanks for the replies and for all the answers!

egekara
Contributor

Hello,

The minimum you will need:

  • one NIC dedicated to the management plane (eth0),
  • one for the overlay network, to reach the transport hosts,
  • one for VLAN uplink connectivity to the ToR.

That is a total of 3. If you require high availability for the overlay and the VLAN uplink, you will need two NICs instead of one for each, for a total of 5, as the sketch below spells out. I haven't seen a bare-metal example in which the overlay and the uplink share the same pNIC. Go with the recommended configuration.
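Just to spell out that arithmetic, here is a throwaway sketch of the rule of thumb above (the function and its defaults are illustrative only, not an official sizing tool):

```python
def min_edge_pnics(redundant_overlay: bool = False,
                   redundant_uplink: bool = False) -> int:
    """Minimum pNICs for a bare-metal edge per the rule of thumb above:
    1 dedicated management NIC, plus 1 (or 2 for HA) for the overlay,
    plus 1 (or 2 for HA) for the VLAN uplink to the ToR."""
    management = 1
    overlay = 2 if redundant_overlay else 1
    uplink = 2 if redundant_uplink else 1
    return management + overlay + uplink

print(min_edge_pnics())            # 3 -- the bare minimum
print(min_edge_pnics(True, True))  # 5 -- HA for both overlay and uplink
```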

Also check the following; there are specific hardware requirements for bare-metal edges (this is 2.3-related, so check the appropriate documentation for your version):

NSX Edge Installation

George