VMware Networking Community
LukaszDziwisz
Hot Shot

NSX on UCS throughput/latency issue

 

Hello Everyone,

 

My organization is in the process of deploying NSX-T. We have stood up a completely new vCenter for that purpose and are currently migrating VMs from the old vCenters to the new one. Below is not a fully detailed design, but hopefully it gives you an idea of what is involved:

 

LukaszDziwisz_0-1616793376948.png

 

 

In Site A we only have a single Catalyst 4510, and in Site B we have two Nexus 9K switches with a vPC peer link between them. The whole deployment runs on Cisco UCS blade servers. What is not shown in the picture is that between the Edges and the TORs we have FI 64108s and Nexus 5K switches, in case that matters.

Our vCenter consists of a Management cluster built out of 3 hosts from Site A and 3 hosts from Site B, for a total of 6 hosts. They all share stretched Fibre Channel storage on Pure FlashBlades. The Management cluster hosts the Edge Nodes, NSX Managers, and vCenter, and is not NSX-prepared. It has two vDS switches: one for Management, vMotion, and the Transit VLAN that hosts the TEPs, and another for the Edge uplinks, which are simply trunk port groups.

Next, we have two compute clusters that are NSX-prepared, one for the hosts in Site A and one for the hosts in Site B. Storage is not shared in this case.

We have two Tier-0 gateways so that workloads can egress through their local sites.

As far as UCS is concerned, we didn’t really do anything with it besides enabling Jumbo Frames using the following article:

https://www.cisco.com/c/en/us/support/docs/servers-unified-computing/ucs-b-series-blade-servers/1176...

We are currently on 4.1(2a) firmware across the board. Our B-Series blades are Cisco UCS B200 M4s with VIC 1340 adapters.

OK, I hope that's a good start as far as details go; now to my main question.

 

As we've been migrating VMs, we have discovered a problem at Site B with very low throughput and additional latency. We are using VMware-provided iperf VMs for testing and troubleshooting.

At Site A, if I measure throughput between a VM on a segment and a VM on the Transit VLAN, or even a VM that lives in my old vCenter, I get 5-8 Gbit/s on average; doing the same test in Site B, I barely get 2-3 Gbit/s. We are also noticing that when pinging between an NSX segment and something outside of NSX in the same datacenter, Site A shows below 1 ms latency, which is what I would expect, while in Site B it is higher; see the examples below:

Latency

Site A

LukaszDziwisz_1-1616793376953.png

 

Site B

LukaszDziwisz_2-1616793376958.png

Throughput:

Site A

LukaszDziwisz_3-1616793376975.png

 

 

Site B

LukaszDziwisz_4-1616793376987.png

 

 

Additional tests:

  • Throughput between VLANs 1510 and 1511 at Site B yields roughly 9.5-9.8 Gbit/s
  • Throughput between two segments at Site B is roughly 7-8 Gbit/s
  • Throughput on my old vCenter at Site B, running on the same UCS and connected to the same physical switches, is roughly 7-9 Gbit/s
  • I added a VLAN-backed segment at the same site, put a test machine in it, and measured 7-8 Gbit/s to VLAN 1511 and VLAN 1510
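
For context (not part of the original post), the throughput numbers above come from iperf runs between test VMs. Below is a minimal sketch of how such a test can be scripted and parsed; it assumes iperf3 is installed on both test VMs, that the far side is already running "iperf3 -s", and the server IP shown is a placeholder:

import json
import subprocess

# Placeholder: the VM running "iperf3 -s" on the far side of the path under test.
SERVER_IP = "192.0.2.10"

def run_iperf(server: str, seconds: int = 10, streams: int = 4) -> float:
    """Run an iperf3 client test and return the receiver throughput in Gbit/s."""
    result = subprocess.run(
        ["iperf3", "-c", server, "-t", str(seconds), "-P", str(streams), "-J"],
        capture_output=True, text=True, check=True,
    )
    data = json.loads(result.stdout)
    return data["end"]["sum_received"]["bits_per_second"] / 1e9

if __name__ == "__main__":
    print(f"Throughput to {SERVER_IP}: {run_iperf(SERVER_IP):.2f} Gbit/s")

Running it from a VM on a segment and then from a VM on the transit VLAN makes the segment-versus-VLAN and site-to-site differences easy to compare over repeated runs.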

I have support tickets open with both Cisco and VMware. Cisco has verified that everything looks good on our Nexuses and that they are not the problem. As for VMware, we have already spent a couple of hours on troubleshooting and settings validation and cannot find the problem. We redeployed B-EN03 thinking there might be something wrong with it, and it didn't help. We also deleted and recreated the whole Tier-0 Green, to no avail.

The UCS clusters are identical at both sites: the firmware is the same and the configurations are the same as well. Hosts are all at the same ESXi version and running the same enic drivers. The only difference appears to be the pair of Nexus 9Ks at Site B versus the Catalyst 4510 at Site A.

I have found a case study fairly similar to our situation, covering NSX on UCS and Nexus 9K, but it is for NSX-V and I'm not sure how much of it applies to our case. The only difference I see is that we have configured the MTU at 9000 on the Nexus interfaces, while the document calls for 9214.
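
As an aside (again, not from the original post), one quick way to check the effective end-to-end MTU between two test endpoints, regardless of whether the switches are set to 9000 or 9214, is a don't-fragment ping sweep. A rough sketch for Linux test VMs follows; the target address is a placeholder, and remember that ICMP/IP headers add 28 bytes on top of the payload:

import subprocess

TARGET = "192.0.2.10"  # placeholder: a test VM/interface on the far side of the path

def df_ping(target: str, payload: int) -> bool:
    """Send one ping with the don't-fragment bit set; True if it made it through."""
    # -M do : set the DF bit (Linux iputils ping)
    # -s    : ICMP payload size; the on-wire IP packet is payload + 28 bytes of headers
    result = subprocess.run(
        ["ping", "-M", "do", "-c", "1", "-W", "1", "-s", str(payload), target],
        capture_output=True,
    )
    return result.returncode == 0

if __name__ == "__main__":
    # 1472 -> 1500-byte packets (standard MTU), 8972 -> 9000-byte packets (jumbo)
    for payload in (1472, 8000, 8972):
        ok = df_ping(TARGET, payload)
        print(f"payload {payload} ({payload + 28} bytes on the wire): {'OK' if ok else 'dropped'}")

If the 8972-byte probe fails anywhere along the path, some hop is still at a lower MTU than the endpoints expect.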

I'm hoping someone here has run into the same issue and found a solution, or can point us to where the problem might be. If anybody needs more details, please let me know and I will provide them to the best of my ability.

 

Thank you in advance

7 Replies
Sreec
VMware Employee

1. What is the latency from the Edge in Site B to the BGP peer?

2. From your explanation, overlay connectivity in Site B is reporting optimal latency; is that correct?

3. Do you have HSRP configured on the vPC?

4. Are the BGP transit VLANs in the vPC or in a static port channel?

Cheers,
Sree | VCIX-5X| VCAP-5X| VExpert 6x|Cisco Certified Specialist
Please KUDO helpful posts and mark the thread as solved if answered
LukaszDziwisz
Hot Shot

@Sreec 

 

1. What is the latency from the Edge in Site B to the BGP peer?

This is sub-1 ms. The 2 ms was seen when the traffic passed through the Edge node.

2. From your explanation, overlay connectivity in Site B is reporting optimal latency; is that correct?

Yes, the overlay latency is fine. It is only the latency when we cross through the Edge node SR.

3. Do you have HSRP configured on the vPC?

For this one we started with HSRP on VLANs 1510 and 1511; during troubleshooting we took it out and are now peering with the VLAN interfaces directly. For VLAN 1510 we peer with the left TOR, and for VLAN 1511 we peer with the VLAN interface on the right TOR. In both scenarios the latency and bandwidth degradation is present.

4. Are the BGP transit VLANs in the vPC or in a static port channel?

They're in the vPC.

Please let me know if you have any additional questions.

Sreec
VMware Employee

Thanks for the clarity.

If feasible, you should do the test below:

1. Take the BGP transit VLANs out of the vPC configuration, create a static port channel, and test the latency.

Cheers,
Sree | VCIX-5X| VCAP-5X| VExpert 6x|Cisco Certified Specialist
Please KUDO helpful posts and mark the thread as solved if answered
LukaszDziwisz
Hot Shot

We can certainly do that, but it will require some additional changes and time to complete.

From the way I understand your suggestion, you want us to take VLANs 1510 and 1511 out of the vPC and instead put them in a static port channel, correct?

Both of those VLANs are part of the large port channel that carries pretty much all VLANs, so it sounds like we would need to run separate cabling from the 5Ks to the 9Ks, add those new physical ports to the static port channel, and allow those two VLANs on it.

 

Is this what you are suggesting to try?

shank89
Expert

It is generally not recommended to have the ports facing a host in a vPC or port channel, as it can cause some odd issues with BGP and packet flow. This could be one of the symptoms of doing so, e.g. BFD using one port while BGP peers over another.

It makes the behaviour a little less predictable.

Shashank Mohan

VCIX-NV 2022 | VCP-DCV2019 | CCNP Specialist

https://lab2prod.com.au
LinkedIn https://www.linkedin.com/in/shankmohan/
Twitter @ShankMohan
Author of NSX-T Logical Routing: https://link.springer.com/book/10.1007/978-1-4842-7458-3
Sreec
VMware Employee

Yes, that is exactly what I want to test. Also, you can create an SVI/VRF on the 5K and test the latency from the 5K to the 9K.

Cheers,
Sree | VCIX-5X| VCAP-5X| VExpert 6x|Cisco Certified Specialist
Please KUDO helpful posts and mark the thread as solved if answered
LukaszDziwisz
Hot Shot

Hello Everyone,

 

I apologize for the delay with the update, but I'm happy to say that the issue has been resolved.

The whole problem turned out to be our Edge Node TEPs and Host TEPs living in two different VLANs. Being a multisite deployment, the VLAN master was on our TOR at the primary Site A, so the issue was not observed at Site A but was present all the time at Site B. Since the TEPs were on different VLANs, traffic passing through the Edge node had to be routed to reach the Host TEPs and always traveled through the VLAN master, which again was at Site A. We were able to resolve this by simply putting all Edge Node TEPs and Host TEPs on the same stretched VLAN, eliminating the need for that routing.
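
Not part of the original reply, but for anyone chasing the same symptom, a quick sanity check is to confirm that every Edge TEP and Host TEP actually sits in the same (stretched) TEP subnet before suspecting the fabric. A minimal sketch using Python's ipaddress module; the subnet and addresses below are made-up placeholders you would replace with the TEP IPs taken from the NSX Manager UI or API:

import ipaddress

# Placeholders: replace with your actual TEP subnet and TEP addresses.
TEP_SUBNET = ipaddress.ip_network("172.16.50.0/24")
TEPS = {
    "host": ["172.16.50.11", "172.16.50.12", "172.16.50.13"],
    "edge": ["172.16.50.21", "172.16.51.22"],  # second one deliberately off-subnet
}

def check_teps(subnet, teps):
    """Flag any TEP outside the expected subnet (i.e. TEP-to-TEP traffic would be routed)."""
    for role, addresses in teps.items():
        for addr in addresses:
            inside = ipaddress.ip_address(addr) in subnet
            status = "ok" if inside else "OUTSIDE TEP subnet -> TEP-to-TEP traffic gets routed"
            print(f"{role:4s} TEP {addr}: {status}")

if __name__ == "__main__":
    check_teps(TEP_SUBNET, TEPS)

In the scenario described above, that routed TEP-to-TEP path is exactly the extra hop that hairpinned through the VLAN gateway at Site A.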

 

Thanks to VMware, who were able to do some deep packet tracing, which is what led us to it. At the VM level we would never have seen this as an extra hop.
