VMware Networking Community
MihirP
Enthusiast

IP selection for Workload VMs in NSX

Hello All,

I am new to NSX and do not have production experience with it. I worked on a vCloud with NSX-V implementation as a POC, which I somehow managed to complete. Currently I have to work on a POC implementing VCF with NSX-T.

 

I have read NSX articles and have posted queries on the VMware forum before, and received useful replies, but I have not been able to clear my doubt, and I want a clear, "permanent" understanding of the issue below. So I am hoping for better guidance here.

 

Right now I am going with basic concepts, where I am considering two domains:

 

1st > Management Domain

2nd > Workload Domain

 

All Management Domain VMs will have routable IPs in the 10.x.x.x range, so they will be accessible from outside.

NOTE: By "outside" I do not mean the Internet here, just the company network.

 

My confusion is with the Workload Domain: >> Which IP range should be allocated to the Workload Domain VMs, and how should it be allocated? <<

If routable IPs are allocated to these VMs, then I don't see the point of using NSX. In every example I have read, the 172.x.x.x or 192.x.x.x ranges are given, which are non-routable IPs; if those ranges are used, the VMs will not be accessible within the company network, and only the VMware console can be used to reach them.

I might have asked something very silly, but as I am a newbie to NSX, I am getting very confused here.

 

I hope I have asked my question clearly. I would appreciate an expert's help here.

 

Thanks.

5 Replies
p0wertje
Hot Shot

Hi,

Your question is not always easy to answer; I see your point. You just have to ask yourself what it is you want:

- If the IPs of the VMs are not routable but you still need access to them, you could, for example, use DNAT.
- If you still want a way to manage the VMs, you could create a segment with a stepstone (jump) VM on routable IP space and hop from there to the other segments.
- You could create a separate management network via the service interface on the Tier-1.
- You could choose to make the VM IPs routable within your company network. You mention that "then I don't think there is any meaning of using NSX"; I disagree, because you still benefit from the easy programmability of NSX.
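To make the first option concrete, here is a minimal sketch of what a DNAT rule on a Tier-1 gateway could look like as an NSX-T Policy API payload. The gateway ID `t1-workload`, the rule ID, the manager hostname, and all IP addresses are illustrative assumptions, not taken from this thread; only the payload shape follows the Policy API's NAT rule pattern.

```python
# Hypothetical sketch (not from the thread): building the JSON body for a
# DNAT rule that maps a routable (company-network) IP to a workload VM's
# non-routable segment IP, in the style of the NSX-T Policy API.

def dnat_rule(rule_id, external_ip, internal_ip):
    """Return a DNAT rule body: traffic sent to external_ip is
    translated to internal_ip on the workload segment."""
    return {
        "id": rule_id,
        "action": "DNAT",
        "destination_network": external_ip,  # routable IP reachable from the company network
        "translated_network": internal_ip,   # VM's private (RFC 1918) segment IP
        "enabled": True,
    }

def rule_url(nsx_manager, t1_id, rule_id):
    """Policy API path for a user-defined NAT rule on a Tier-1 gateway."""
    return (f"https://{nsx_manager}/policy/api/v1/infra/"
            f"tier-1s/{t1_id}/nat/USER/nat-rules/{rule_id}")

# Example: expose the VM 192.168.10.5 behind the routable address 10.20.30.40.
payload = dnat_rule("web-vm-dnat", "10.20.30.40", "192.168.10.5")
url = rule_url("nsx.corp.local", "t1-workload", "web-vm-dnat")
# A PUT/PATCH of `payload` to `url` with admin credentials would create the rule.
```

With a rule like this, clients on the company network reach the VM via the routable address while the VM itself keeps its private IP.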

How we do it in our company: we use RFC 1918 IP space (10.x, 172.16.x, 192.168.x) on the VMs. All the IPs are routable within our company network.
On the Tier-1 we use no-NAT between the networks and have an SNAT rule that translates all other 'internet' traffic to a public IP.
We chose to do that on the Tier-1 because we have multiple customers, but we still use one Tier-0 with ECMP to connect everything.
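The pattern above can be sketched as a pair of NAT rules: a NO_SNAT exemption so traffic between the internal RFC 1918 networks stays untranslated, plus a lower-priority catch-all SNAT that hides everything else behind one public IP. All names, addresses, and sequence numbers here are illustrative assumptions; only the rule actions and ordering follow the setup described above.

```python
# Hypothetical sketch (illustrative values): a NO_SNAT exemption evaluated
# before a catch-all SNAT rule, mirroring the no-NAT-between-internal-networks
# plus SNAT-to-public-IP pattern described in the reply above.

def no_snat_rule(rule_id, source_cidr, destination_cidr):
    """Exempt internal-to-internal traffic from source NAT."""
    return {
        "id": rule_id,
        "action": "NO_SNAT",
        "source_network": source_cidr,
        "destination_network": destination_cidr,
        "sequence_number": 10,  # evaluated before the catch-all SNAT below
        "enabled": True,
    }

def snat_rule(rule_id, source_cidr, public_ip):
    """Translate all remaining outbound traffic to one public IP."""
    return {
        "id": rule_id,
        "action": "SNAT",
        "source_network": source_cidr,
        "translated_network": public_ip,
        "sequence_number": 100,  # lower priority than the NO_SNAT exemption
        "enabled": True,
    }

rules = [
    # Traffic from workload segments to the rest of the company network: no NAT.
    no_snat_rule("keep-internal", "192.168.0.0/16", "10.0.0.0/8"),
    # Everything else (e.g. internet-bound): hide behind a public IP.
    snat_rule("internet-snat", "192.168.0.0/16", "203.0.113.10"),
]
```

The ordering matters: because the NO_SNAT rule has the lower sequence number, internal traffic matches it first and never reaches the catch-all SNAT.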

It all depends on your use case and requirements to get a clear answer to your question.

 

Cheers,
p0wertje | VCIX6-NV | JNCIS-ENT | vExpert
Please kudo helpful posts and mark the thread as solved if solved
Romaguera
Contributor

Hi All,

I'm considering implementing NSX; however, we're also doing ACI very soon, and I am wondering if anyone out there has done both in their environment, and what the benefits are of doing both or not doing both. If you've done ACI, why not NSX as well?

 

Thanks !

 

 

 


p0wertje
Hot Shot

Hi,

I think they can complement each other. I am not an ACI guy, though; maybe @shank89 is more into the Cisco side 🙂
ACI can provide the underlay and is also very good at connecting bare-metal servers, etc.
On top of that you can run NSX-T just fine. NSX has a lot of power inside the ESXi hypervisor, where ACI does not.
I found a blog about integration of ACI and NSX https://www.mvankleij.nl/post/aci-nsxt/


Cheers,
p0wertje | VCIX6-NV | JNCIS-ENT | vExpert
Please kudo helpful posts and mark the thread as solved if solved
Sreec
VMware Employee

In a nutshell, you have the option of running a single IP fabric (ACI), which can be extended further if required. If your physical network still follows the legacy approach, it will become tedious, especially in a multi-tenant platform. I have worked on many ACI+NSX projects, and this has always been an interesting topic to discuss. I answered a similar topic here -> https://communities.vmware.com/t5/VMware-NSX-Discussions/SDN-Cisco-ACI-Vs-NSX/td-p/1426645 . For NSX, all we need is IP connectivity and MTU 🙂 If you are switching from legacy networking to SDN, there will be a good number of physical migrations in scope, and they need careful planning.

Cheers,
Sree | VCIX-5X| VCAP-5X| VExpert 6x|Cisco Certified Specialist
Please KUDO helpful posts and mark the thread as solved if answered
shank89
Expert

Hi,

As mentioned, they can complement each other, but done poorly they will cause you grief.

A lot of the time the ACI underlay ends up becoming a glorified spine-and-leaf design.

If you are running automation or anything else on the VMware stack, you'll see the most benefit from running the majority of your networking in NSX.

 

This design guide might be useful for you: https://nsx.techzone.vmware.com/sites/default/files/resource/design_guide_deploying_nsx_data_center_...

Shashank Mohan

VCIX-NV 2022 | VCP-DCV2019 | CCNP Specialist

https://lab2prod.com.au
LinkedIn https://www.linkedin.com/in/shankmohan/
Twitter @ShankMohan
Author of NSX-T Logical Routing: https://link.springer.com/book/10.1007/978-1-4842-7458-3