VMware Networking Community
thgreyprnc
Enthusiast

Some VXLAN integration questions

Hello

I am new to NSX and have a few practical questions 🙂

Here is the scenario:

     -We have 2 physical datacenters, DC-A and DC-B, uplinked via Layer 2 Ethernet links.

     -At DC-A we have the OLD_CLUSTER hosts: a traditional layout of 4 hosts attached to a SAN.

     -Spanning DC-A and DC-B we have the NEW_CLUSTER hosts: 6 hyperconverged hosts with vSAN enabled.

     -On top of that, we have 1 vCenter managing both OLD_CLUSTER and NEW_CLUSTER.

     -Finally, a vCloud Director instance manages that vCenter.

The goal is to migrate the vCloud Director VM workload currently running on OLD_CLUSTER to NEW_CLUSTER.

Part of this migration plan is to enable VXLAN on both clusters, hence some questions:

     1) Do I have to use the same VLAN for VXLAN on OLD_CLUSTER and NEW_CLUSTER, keeping in mind that, physically, all hosts are connected to the same trunked Layer 2 network segment? I guess it has to be the same VLAN, since all related VMs within vCloud will be connected to the same VXLAN network pool?

     2) What about the logical switches in NSX? Are they required for VXLAN? I saw that enabling VXLAN does the following:

               -adjusts the MTU of the selected DVS to 1600

               -creates a port group on that same DVS

               -creates, in this port group, one VXLAN vmkernel interface per host

          In parallel, I understand that creating a "logical switch" within NSX creates nothing other than a port group on the distributed switch.

          Besides, enabling VXLAN on a cluster apparently does not require any logical switch configuration... Still, in several places I have read people mixing the logical switch and VXLAN topics, which confuses me.

          So: is there any link between VXLAN and NSX logical switches? At which level?

     3) Down to the physical layer: my assumption is that if the VXLAN vmkernel is sending/receiving Ethernet frames of 1600 bytes, all my physical uplinks also need to be configured with an MTU of 1600. Do I understand this correctly?

     4) Finally, is it possible to enable VXLAN with DHCP and afterwards define a static IP on the corresponding VXLAN vmkernels? Once VXLAN is enabled, I don't see any way to change the IP allocation setting from DHCP to IP Pool.

Thanks 🙂

4 Replies
Sreec
VMware Employee

1) Do I have to use the same VLAN for VXLAN on OLD_CLUSTER and NEW_CLUSTER, keeping in mind that, physically, all hosts are connected to the same trunked Layer 2 network segment? I guess it has to be the same VLAN, since all related VMs within vCloud will be connected to the same VXLAN network pool?

From your explanation, what I understood is that you have a mix of traditional storage and vSAN, which is absolutely fine, and I hope vSAN is not stretched in this case? Does VCD use both the traditional storage cluster and the vSAN cluster? I cannot really say whether you should use the same VLAN for the old and new clusters. If your Org VDC spans multiple clusters (OLD + NEW) and VMs on both clusters need access to the same external network through one of the network connection types (vApp, routed, direct, etc.), you certainly need the same VLAN, or you can use a new VLAN and create a new external network. Also, if we have a unique Org-VDC-to-cluster mapping, they can still share the same external network. The diagram below is just a sample with one external network; all we need is to add the respective VLAN IDs in the vCD external network configuration.

[diagram: sample layout with multiple Org VDCs sharing one VLAN-backed external network]

     2) What about the logical switches in NSX? Are they required for VXLAN? I saw that enabling VXLAN does the following:

               -adjusts the MTU of the selected DVS to 1600

               -creates a port group on that same DVS

               -creates, in this port group, one VXLAN vmkernel interface per host

          In parallel, I understand that creating a "logical switch" within NSX creates nothing other than a port group on the distributed switch.

          Besides, enabling VXLAN on a cluster apparently does not require any logical switch configuration... Still, in several places I have read people mixing the logical switch and VXLAN topics, which confuses me.

          So: is there any link between VXLAN and NSX logical switches? At which level?

You are right, enabling VXLAN does the things above (it adjusts the MTU only if it is currently below 1600). From a VCD perspective, you have the option to map the right network pool to each Org VDC so that VMs can leverage the underlying network virtualization technology; in your case, a VXLAN network pool. A VXLAN network pool is created when you create a provider virtual datacenter (PVDC), and you can use the same network pool across multiple Org VDCs.

Logical switches, VNIs, virtual wires, VXLAN switches: these are all the same thing, and from a vSphere perspective they are port groups. With VCD you can also back external networks with VXLAN switches instead of normal vSphere port groups.
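To illustrate the "logical switch = port group" point, the port groups NSX creates for logical switches can usually be recognized by name on the DVS. The naming convention below is an assumption based on typical NSX-v deployments (verify it against your own DVS), and all port-group names are hypothetical:

```python
# NSX-v typically names the port groups backing logical switches (virtual
# wires) along the lines of "vxw-dvs-<id>-virtualwire-<n>-sid-<vni>-<name>".
# This naming convention is assumed here; confirm it in your environment.
def is_logical_switch_pg(pg_name: str) -> bool:
    """Heuristic check: does this DVS port-group name look like an NSX virtual wire?"""
    return pg_name.startswith("vxw-") and "virtualwire" in pg_name

# Hypothetical port-group names as they might appear on a prepared DVS:
port_groups = [
    "vxw-dvs-21-virtualwire-3-sid-5001-web-tier",  # NSX logical switch
    "VM Network",                                  # ordinary port group
    "vxlan-vmknic-pg",                             # VTEP vmkernel port group
]
logical_switches = [pg for pg in port_groups if is_logical_switch_pg(pg)]
print(logical_switches)  # ['vxw-dvs-21-virtualwire-3-sid-5001-web-tier']
```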

     3) Down to the physical layer: my assumption is that if the VXLAN vmkernel is sending/receiving Ethernet frames of 1600 bytes, all my physical uplinks also need to be configured with an MTU of 1600. Do I understand this correctly?

Yes, the larger MTU is needed end to end if we are using VXLAN.
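To make the arithmetic behind the 1600-byte figure concrete, here is a small sketch of the VXLAN encapsulation overhead (assuming an IPv4 outer header and no inner VLAN tag):

```python
# A guest frame is wrapped as: inner Ethernet frame -> VXLAN -> UDP -> outer IPv4.
# The physical uplink MTU must carry the whole encapsulated IP packet.
INNER_ETHERNET = 14   # inner Ethernet header (no inner VLAN tag assumed)
VXLAN_HEADER = 8
OUTER_UDP = 8
OUTER_IPV4 = 20

def required_uplink_mtu(guest_mtu: int) -> int:
    """Minimum physical MTU needed to carry an unfragmented VXLAN frame."""
    return guest_mtu + INNER_ETHERNET + VXLAN_HEADER + OUTER_UDP + OUTER_IPV4

# A standard 1500-byte guest MTU needs at least 1550 bytes on every physical
# hop; NSX's 1600 default leaves headroom (e.g. for inner VLAN tags or IPv6).
print(required_uplink_mtu(1500))  # 1550
```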

     4) Finally, is it possible to enable VXLAN with DHCP and afterwards define a static IP on the corresponding VXLAN vmkernels? Once VXLAN is enabled, I don't see any way to change the IP allocation setting from DHCP to IP Pool.

vCloud Director supports three types of networks:

External networks

Organization VDC networks

vApp networks

There are multiple ways to leverage DHCP. From a VM perspective, you can use the NSX Edge DHCP feature, or you can connect the VCD VM all the way through to an external network and have a DHCP server there, so that the VM receives an IP during the bootstrap process. With the first option, connectivity would be: VM (vApp) to NSX Edge (DHCP) connected to the external network; with the second, the VM is directly connected to the external network.


Cheers,
Sree | VCIX-5X| VCAP-5X| VExpert 6x|Cisco Certified Specialist
Please KUDO helpful posts and mark the thread as solved if answered
thgreyprnc
Enthusiast

Hello, thanks a lot for those answers!

1)

VMs will run either on the OLD or on the NEW cluster, never in between. Once all VMs from an Org VDC have been migrated to NEW_CLUSTER, that's it: they are not going back to the other side.

vSAN is configured as a stretched cluster, yes (3 hosts in DC-A, 3 in DC-B, and a witness at Site-C), and both datacenters have redundant 10 GbE connectivity. (By the way, all vSAN vmkernels will be on a dedicated DVS, and each host will be connected to this DVS by 2x10 GbE links. The idea is to totally isolate the vSAN traffic from everything else.) Why did you hope it was not stretched?

While the migration is ongoing, yes, VCD will have some VMs on OLD_CLUSTER with their data on the old SAN, and other VMs on NEW_CLUSTER with their data on vSAN. (In order to do the migration from SAN to vSAN, one host in NEW_CLUSTER has an HBA through which it can see the SAN datastores in addition to vSAN.)

Now I am confused about the relation between the VLAN used for VXLAN, the external network, and the fact that the VXLAN VLAN needs to be trunked. At the moment all Org VDCs are using vCDNI (I didn't do the configuration at the time), and I don't see the VLAN it uses anywhere outside VMware. I feel I am missing some important part here.

2)

OK, I understand that all I need to do is enable VXLAN on a cluster; a corresponding port group gets created and that's it.

3)

OK 🙂

4)

I expressed myself badly. What I meant is that in Networking & Security > Installation > Host Preparation, when you enable VXLAN you can choose where the VXLAN vmknic gets its IP from. For OLD_CLUSTER I chose "DHCP", but I am not able to change it back to "IP Pool" anywhere. So, instead of unconfiguring VXLAN on that production cluster, can't I just give it a static IP?

Sreec
VMware Employee

1)

VMs will run either on the OLD or on the NEW cluster, never in between. Once all VMs from an Org VDC have been migrated to NEW_CLUSTER, that's it: they are not going back to the other side.

vSAN is configured as a stretched cluster, yes (3 hosts in DC-A, 3 in DC-B, and a witness at Site-C), and both datacenters have redundant 10 GbE connectivity. (By the way, all vSAN vmkernels will be on a dedicated DVS, and each host will be connected to this DVS by 2x10 GbE links. The idea is to totally isolate the vSAN traffic from everything else.) Why did you hope it was not stretched?

This is a perfect design for vSAN. Since you never mentioned stretched vSAN, I concluded it was a single-site vSAN 😉

While the migration is ongoing, yes, VCD will have some VMs on OLD_CLUSTER with their data on the old SAN, and other VMs on NEW_CLUSTER with their data on vSAN. (In order to do the migration from SAN to vSAN, one host in NEW_CLUSTER has an HBA through which it can see the SAN datastores in addition to vSAN.)

Now I am confused about the relation between the VLAN used for VXLAN, the external network, and the fact that the VXLAN VLAN needs to be trunked. At the moment all Org VDCs are using vCDNI (I didn't do the configuration at the time), and I don't see the VLAN it uses anywhere outside VMware. I feel I am missing some important part here.

External networks will ideally be mapped to a single VLAN-backed vSphere port group (as far as I know we still can't trunk these; see the VMware Documentation Library). So we will have to create unique VLAN-backed port groups for the external networks, and VMs in vApps can connect to them directly, which is one method.

vCDNI provides almost the same kind of isolation as VXLAN, but the overall technology, deployment, and configuration are totally different. VLANs are optional for vCDNI, so I believe you are not using a VLAN, which is why none is shown for vCDNI.

You definitely need to migrate them to VXLAN; you can follow the steps in Migrating VCDNI Networks to VXLAN Networks in vCloud Director (2148381) | VMware KB. Ensure that you migrate them prior to the host/vCenter upgrade to 6.5.

VCDNI – Tom Fojta's Blog

4)

I expressed myself badly. What I meant is that in Networking & Security > Installation > Host Preparation, when you enable VXLAN you can choose where the VXLAN vmknic gets its IP from. For OLD_CLUSTER I chose "DHCP", but I am not able to change it back to "IP Pool" anywhere. So, instead of unconfiguring VXLAN on that production cluster, can't I just give it a static IP?

I don't fully remember whether there is a straightforward way to achieve that. It would be slightly tricky since VCD is also involved; most likely an API call will be required to make this change.
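For reference, a change like this would likely go through the NSX Manager REST API. The sketch below only builds the request; the endpoint path and XML schema are assumptions based on the NSX-v network-fabric API, and every identifier (manager address, cluster MoRef, pool ID) is hypothetical. Verify all of it against the NSX API guide for your release before attempting anything like this:

```python
import urllib.request

def build_vxlan_ip_pool_payload(cluster_moid: str, ip_pool_id: str) -> str:
    """Assumed XML shape for re-pointing a cluster's VTEP addressing at an IP pool.

    The element names follow the NSX-v host-preparation API style, but the
    exact schema for changing DHCP to IP Pool must be checked in the API guide.
    """
    return (
        "<nwFabricFeatureConfig>"
        "<featureId>com.vmware.vshield.vsm.vxlan</featureId>"
        "<resourceConfig>"
        f"<resourceId>{cluster_moid}</resourceId>"
        '<configSpec class="clusterMappingSpec">'
        f"<ipPoolId>{ip_pool_id}</ipPoolId>"
        "</configSpec>"
        "</resourceConfig>"
        "</nwFabricFeatureConfig>"
    )

def apply_config(nsx_manager: str, payload: str) -> None:
    """Send the config to NSX Manager (not invoked here; lab use only)."""
    req = urllib.request.Request(
        f"{nsx_manager}/api/2.0/nwfabric/configure",  # assumed endpoint
        data=payload.encode(),
        headers={"Content-Type": "application/xml"},
        method="PUT",
    )
    # Real code needs basic-auth credentials and certificate validation:
    # urllib.request.urlopen(req)

# Hypothetical cluster MoRef and pool ID, for illustration only:
xml = build_vxlan_ip_pool_payload("domain-c42", "ipaddresspool-1")
print(xml)
```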

Cheers,
Sree | VCIX-5X| VCAP-5X| VExpert 6x|Cisco Certified Specialist
Please KUDO helpful posts and mark the thread as solved if answered
thgreyprnc
Enthusiast

Hello

Thanks for the previous answers, they helped a lot 🙂

Just updating this topic to let people know how this was finally done.

1) I used the same VLAN ID for the VXLAN preparation of my hosts.

2) I'm not sure where my confusion between the logical switches and VXLAN came from; logical switches have nothing to do with simply using VXLAN between hosts.

3) Of course, this was more or less obvious.

4) In the end I decided to go with the IP Pool option. Since there is no option to switch from DHCP to IP Pool, I first had to unprepare my VXLAN-prepared hosts. Even that was not straightforward: for some reason I could not remove my cluster from the default VXLAN pool (the option was greyed out), so I had to delete the pool and create a new one. I then re-associated my hosts, which received IPs from the new pool.
