ngkin2010
Contributor

Transition from traditional physical network to NSX-T 3.0


Hi,

I am a traditional networking guy and have started studying NSX-T. After a few weeks of study, I am now familiar with the NSX-T components and able to build an NSX-T environment from scratch.

However, I am still struggling with how to move step-by-step from traditional networking to NSX-T 3.0 with minimal impact on the existing traditional infrastructure.

The 'minimal impact' means:

  • we start with a minimal change to the existing infrastructure, and expand step-by-step in later stages.
  • no need to change the IP addresses & VLAN IDs of the existing subnets (shared by bare metal & VMs)
  • no big-bang migration; VMs / bare-metal servers migrate from the traditional network to NSX-T batch by batch.

I'd be glad if anyone can share some documents / ideas about it.

 

Here is my thinking; I don't know if it's valid.

 

1) Regarding the 1st requirement:

  • we start with a minimal change to the existing infrastructure, and expand step-by-step in later stages.
      
    1. We create the smallest possible POC environment on the existing infrastructure

      1. Install three NSX Manager VMs on existing ESXi hosts, and form an NSX Manager cluster
      2. Choose one of the existing ESXi hosts (with production workload) on the DMZ network
      3. Configure the selected host as a host transport node
      4. Also deploy an Edge node on the selected host
      5. Create a new overlay segment (with an unused IP subnet)
      6. Form a BGP neighborship with the existing infrastructure's router
      7. Now we have a POC environment on one of the ESXi hosts (with production workload).


    2. Scale out the POC by configuring more ESXi hosts as host transport nodes

      1. Decrease the test VMs' MTU to 1400 or lower
      2. Now we can test GENEVE tunnel connectivity


    3. Scale out the POC by deploying more Edge nodes, and form an Edge cluster.

      1. Now we can test ECMP (active/active cluster)



    4. At a later stage, when everything is working smoothly on NSX-T

      1. We change the infra's MTU from 1500 to 1600
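The MTU arithmetic behind steps 2.1 and 4.1 can be sketched quickly. This is just a back-of-the-envelope Python sketch: the header sizes are the standard fields of GENEVE-over-IPv4 encapsulation, and 1600 is the commonly recommended starting point for the underlay.

```python
# Why the underlay MTU must grow (or the VM MTU shrink) for GENEVE.
GENEVE_OVERHEAD = (
    20    # outer IPv4 header
    + 8   # outer UDP header
    + 8   # GENEVE base header (options add more, in 4-byte multiples)
    + 14  # inner Ethernet header carried inside the tunnel
)  # = 50 bytes

def underlay_mtu_needed(vm_mtu: int, geneve_options: int = 0) -> int:
    """Minimum underlay (physical) MTU to carry a full-size VM frame."""
    return vm_mtu + GENEVE_OVERHEAD + geneve_options

# With the underlay still at 1500, a 1400-byte VM frame fits: 1400 + 50 = 1450
# (the reasoning behind step 2.1 -- though see the replies on its limits).
assert underlay_mtu_needed(1400) == 1450
# To keep the default VM MTU of 1500, the underlay needs at least 1550;
# 1600 leaves headroom for GENEVE options, hence step 4.1.
assert underlay_mtu_needed(1500) == 1550
```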

2) Regarding the 2nd & 3rd requirements:

  • no need to change the IP addresses & VLAN IDs of the existing subnets (shared by bare metal & VMs)
     
    1. We create a segment (same IP subnet as the existing physical network) on an NSX-T T1-DR
    2. We create a CSP (Centralized Service Port) for bridging the segment to the physical subnet
    3. We shut down the VLAN interface on the physical router
    4. We migrate the VMs' vSphere port group to the N-VDS segment
    5. We migrate the default gateway on the bare-metal servers to the NSX-T T1-DR's address
    6. Advertise the segment via the Edge node's BGP
    7. Now we have migrated one subnet from the traditional network to NSX-T
    8. Repeat until all subnets are migrated
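For what it's worth, step 1 can also be expressed against the NSX-T Policy API. This is only a sketch: the segment ID, Tier-1 path, and transport-zone path are hypothetical placeholders, and the field names are taken from the 3.x Policy API Segment schema as I understand it, so verify them against your manager's API documentation before using anything like this.

```python
import json

def build_segment_payload(gateway_cidr: str, t1_path: str, tz_path: str) -> dict:
    """Body for PATCH /policy/api/v1/infra/segments/<segment-id> (sketch)."""
    return {
        "display_name": "migrated-subnet",
        "connectivity_path": t1_path,      # attach the segment to the Tier-1
        "transport_zone_path": tz_path,    # the overlay transport zone
        "subnets": [{"gateway_address": gateway_cidr}],  # same GW IP as the old VLAN
    }

payload = build_segment_payload(
    gateway_cidr="192.168.1.1/24",                      # reused physical gateway
    t1_path="/infra/tier-1s/t1-migration",              # placeholder ID
    tz_path=("/infra/sites/default/enforcement-points/default/"
             "transport-zones/overlay-tz"),             # placeholder ID
)
print(json.dumps(payload, indent=2))

# The call itself would look roughly like (requests, admin credentials):
#   requests.patch(f"https://{nsx_mgr}/policy/api/v1/infra/segments/migrated-subnet",
#                  json=payload, auth=(user, password), verify=False)
```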

 

 

I am not sure whether my thinking is valid. I'd be glad if anyone can share their experience 🙂

 

2 Solutions

Accepted Solutions
Sreec
VMware Employee

1) Regarding the 1st requirement:

  • we start with a minimal change to the existing infrastructure, and expand step-by-step in later stages.
      
    1. We create the smallest possible POC environment on the existing infrastructure

      1. Install three NSX Manager VMs on existing ESXi hosts, and form an NSX Manager cluster
      2. Choose one of the existing ESXi hosts (with production workload) on the DMZ network
      3. Configure the selected host as a host transport node
      4. Also deploy an Edge node on the selected host
      5. Create a new overlay segment (with an unused IP subnet)
      6. Form a BGP neighborship with the existing infrastructure's router
      7. Now we have a POC environment on one of the ESXi hosts (with production workload).

        Since this is a collapsed cluster for a POC, you can deploy a single-node NSX Manager (starting from NSX-T 3.1) if needed.
    2. Scale out the POC by configuring more ESXi hosts as host transport nodes

      1. Decrease the test VMs' MTU to 1400 or lower
      2. Now we can test GENEVE tunnel connectivity

        May I know the reason for decreasing the VM MTU?
    3. Scale out the POC by deploying more Edge nodes, and form an Edge cluster.

      1. Now we can test ECMP (active/active cluster)
        Yes, but no stateful firewall services on the T0
    4. At a later stage, when everything is working smoothly on NSX-T

      1. We change the infra's MTU from 1500 to 1600
        The infra MTU must be large enough to support the extra encapsulation overhead; this is a prerequisite. So either start with 1600 or 9000.

2) Regarding the 2nd & 3rd requirements:

  • no need to change the IP addresses & VLAN IDs of the existing subnets (shared by bare metal & VMs)
     
    1. We create a segment (same IP subnet as the existing physical network) on an NSX-T T1-DR
    2. We create a CSP (Centralized Service Port) for bridging the segment to the physical subnet
    3. We shut down the VLAN interface on the physical router
    4. We migrate the VMs' vSphere port group to the N-VDS segment
    5. We migrate the default gateway on the bare-metal servers to the NSX-T T1-DR's address
    6. Advertise the segment via the Edge node's BGP
    7. Now we have migrated one subnet from the traditional network to NSX-T
    8. Repeat until all subnets are migrated

        So in this testing you are routing the bridged subnet; I hope that is the exact requirement. If bridging is the only use case, you don't need to route those subnets. I would also prefer VDS-backed NSX-T integration instead of N-VDS; the N-VDS NSX-T host switch will be deprecated in a future release: https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.0/rn/VMware-NSX-T-Data-Center-30-Release-Notes...

 

 

 

 

Cheers,
Sree | VCIX-5X| VCAP-5X| VExpert 6x|Cisco Certified Specialist
Please KUDO helpful posts and mark the thread as solved if answered

Sreec
VMware Employee
  • May I know the reason for decreasing the VM MTU?
    • The MTU in the physical network is 1500, so we chose to decrease the MTU on the VMs to 1400 first for the POC.
    • Once we confirm everything is okay in the POC, we can arrange downtime for changing the physical network's MTU (enabling jumbo frames on the infra usually needs a long downtime...).

In the majority of cases we can change the MTU on the fly; there are a few limitations with server profiles (blade servers from specific vendors) and legacy stacked switches. The MTU change is a must in an NSX environment: we don't support fragmentation of packets, so decreasing the VM MTU won't help. There is also a transit VNI path between the T1 and T0, and tunnel connectivity will show as down. The DVS MTU -> ToR MTU -> L3 (if we are terminating overlay networks there) should all have a consistent MTU.
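The consistency point can be made concrete with a small Python sketch: since GENEVE packets cannot be fragmented, the tunnel path is only as good as its smallest link. The hop names below are made up for illustration.

```python
GENEVE_OVERHEAD = 50  # outer IPv4 + outer UDP + GENEVE base + inner Ethernet

def undersized_hops(hop_mtus: dict, vm_mtu: int = 1500) -> list:
    """Return the hops whose MTU cannot carry vm_mtu plus encapsulation."""
    needed = vm_mtu + GENEVE_OVERHEAD
    return [hop for hop, mtu in hop_mtus.items() if mtu < needed]

# One legacy switch left at 1500 breaks the whole tunnel path,
# even after the DVS and the routed core are raised to 9000:
hops = {"dvs": 9000, "tor-a": 9000, "legacy-stack": 1500, "l3-core": 9000}
print(undersized_hops(hops))  # -> ['legacy-stack']
```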

    Bridging
    Yes, we can use a DVS port group mapped to a VNI using the bridge profile. Ensure the VLAN tagging is correct across the transit path and hypervisor nodes.
Cheers,
Sree | VCIX-5X| VCAP-5X| VExpert 6x|Cisco Certified Specialist
Please KUDO helpful posts and mark the thread as solved if answered

4 Replies
ngkin2010
Contributor

Dear Sree,

Thanks so much for your reply!! 🙂

Sounds like my idea is basically valid. But do you have some real-world examples of how enterprises migrate to NSX-T from a traditional network? It would be great if there is another, more practical way to do it...

Regarding your response:

  • Since this is a collapsed cluster for a POC, you can deploy a single-node NSX Manager (starting from NSX-T 3.1) if needed
    • Cool, good to know that a single instance in version 3.1 can handle both the Edge & host node.

 

  • May I know the reason for decreasing the VM MTU?
    • The MTU in the physical network is 1500, so we chose to decrease the MTU on the VMs to 1400 first for the POC.
    • Once we confirm everything is okay in the POC, we can arrange downtime for changing the physical network's MTU (enabling jumbo frames on the infra usually needs a long downtime...).

 

  • So in this testing you are routing the bridged subnet; I hope that is the exact requirement. If bridging is the only use case, you don't need to route those subnets. I would also prefer VDS-backed NSX-T integration instead of N-VDS; the N-VDS NSX-T host switch will be deprecated in a future release
    • Usually enterprises want to migrate their bare-metal servers / physical appliances to NSX-T without changing IPs.
    • Here is my idea:
    • subnet-migration.png
    • The segment 192.168.1.0/24 is in the overlay transport zone, but the T1 router's gateway (192.168.1.1) needs to be shared with the physical appliances.
    • Could a VDS help in this situation, or do I need a virtual-switch VM as follows?
    • subnet-migration-2.png

 

 

ngkin2010
Contributor
Contributor

Hi Sree,

Thanks a lot for answering my questions!! 🙂

I think I have to read up in more detail on bridging (the port-group-to-VNI mapping).

Thanks again, and have a nice day~

 
