VMware Cloud Community
benjamin000
Enthusiast

$$$ Offered for vExpert

We need our VIO and NSX configured to NOT use NAT. I have an existing, working VIO with NSX deployment that uses NAT, but now I want to move away from NAT so that instances are assigned public IPs directly on their interfaces rather than internal IPs.

If you are experienced in this and actually KNOW how to complete it, then I am happy to pay someone to get this done.

You can DM me and I will provide additional details as well as my other contact information.

Regards Ben McGuire

Accepted Solutions
admin
Immortal

Hi Ben,

There is no negative impact on the currently running VMs (edges); they will continue to operate without any issues.

The exact tasks to perform are the following:

-----------------------------------------------------------------------------------------

1) Using SSH, log into VMware Integrated OpenStack Manager.

2) From VMware Integrated OpenStack Manager, use SSH to log into one of the controller nodes (e.g. controller01).

ssh controller01

3) Switch to root user.

sudo su - 

4) Edit the /etc/neutron/plugins/vmware/nsxv.ini file, changing the bind_floatingip_to_all_interfaces = False parameter to bind_floatingip_to_all_interfaces = True, then save the file (a scripted sketch of this step follows these steps).

5) Restart VIO-Controller-0 from vCenter Server:

a. Navigate to the vSphere Web Client.

b. In the Inventories tab, click VMware Integrated OpenStack.

c. Click OpenStack deployments.

d. Click on your deployment.

e. Select VIO-Controller-0.

f. From the All Actions dropdown, click on Restart Services.

6) Once VIO-Controller-0 has completely restarted, make the same change to the /etc/neutron/plugins/vmware/nsxv.ini file on controller02 and restart VIO-Controller-1.

7) Make sure that ingress and egress rules are correctly configured in the Security Groups associated with the involved instances (illustrative rules follow these steps).

8) If floating IP addresses were already associated with the involved instances, disassociate and re-associate them.

9) Test whether the instances are able to communicate with each other using their floating IP addresses.

-----------------------------------------------------------------------------------------
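A minimal scripted sketch of step 4 (assuming the parameter is present in the file exactly as shown; verify with the grep before and after editing):

sudo sed -i 's/^bind_floatingip_to_all_interfaces = False$/bind_floatingip_to_all_interfaces = True/' /etc/neutron/plugins/vmware/nsxv.ini

grep '^bind_floatingip_to_all_interfaces' /etc/neutron/plugins/vmware/nsxv.ini

For step 7, illustrative Security Group rules allowing ICMP and SSH ingress (the group name my-secgroup is a placeholder):

openstack security group rule create --protocol icmp --ingress my-secgroup

openstack security group rule create --protocol tcp --dst-port 22 --ingress my-secgroup

And for step 8, the floating IPs can be re-associated from the CLI (instance name and IP are placeholders):

openstack server remove floating ip <INSTANCE> <FLOATING IP>

openstack server add floating ip <INSTANCE> <FLOATING IP>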

The change applies to existing VMs as well as new ones.

Cheers,

Domenico

32 Replies
admin
Immortal

Hi Benjamin,

I don't know if I have understood the scenario correctly, but one idea could be simply to create one external network for each external IP address and play with firewall rules to define ALL -> ALL.

Cheers,

Domenico

benjamin000
Enthusiast

Hello

Thank you for your response.

Your advice may sound good in theory, but I don't think it will work, as routing needs to be configured. BGP needs to be set up in NSX along with a few other things, which is above my pay grade.

Regards Ben McGuire
admin
Immortal

Hi Benjamin,

Is your goal to let instances (with floating IPs) communicate through their external IP addresses rather than their internal ones?

Cheers,

Domenico

benjamin000
Enthusiast

Hello

Yes, my goal is to allocate a public IP to the interface of the instance and NOT use NAT at all.

OVH uses this setup in their cloud service, where they assign an IP address directly to the instance and do not use floating IPs at all. Instead of the private NATted IP, they assign a public IP, which is what I am trying to achieve.

I have another post where someone tries to explain it; however, they did not elaborate enough.

You can view the post here: VIO public IPs and NOT use NAT

Regards Ben McGuire
admin
Immortal

Hi Benjamin,

I took some time before replying to you, in order to gather more ideas :).

To my understanding, in VIO you cannot attach an instance directly to an external network; it has to go through a router.

With that in mind, Logical Routing in VIO can be used in three different modes:

A) Centralized-Shared

Different tenants share the same NSX Edge-VM

Pros: limits the number of NSX Edge VMs

Technical Note:

A brand new Shared Edge is created (from the backup-xx edges) in case of:

• A new Logical Router has IP subnets overlapping with existing subnets on the Shared Edge

• The Shared Edge has reached 10 interfaces

• Static routes are configured on the Logical Router

--------------------------------------------------------------------

  B) Centralized-Exclusive

Each tenant has its own dedicated NSX Edge-VM

Pros: Guaranteed performance + Load Balancing

-------------------------------------------------------------------

C) Distributed

Each tenant has its own dedicated NSX DLR + Edge-VM

Pros: Guaranteed performance + DistL3

Considering the three modes above (hoping they were useful), here is the first thing you can try in order to let VMs on the same internal network communicate with each other using their public/floating IPs:

1) SSH to controller node

2) Make sure you have root privileges (sudo su)

3) Navigate to /etc/neutron/plugins/vmware

4) Edit the following parameter to True in the nsxv.ini file (currently it shows False, as per the VIO logs you uploaded):

bind_floatingip_to_all_interfaces = True
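For reference, after the change the relevant part of nsxv.ini should look roughly like this (a sketch; I am assuming the option sits in the [nsxv] section, as is usual for NSX-v plugin options, and the surrounding options will vary per deployment):

[nsxv]

bind_floatingip_to_all_interfaces = True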

Please try this in a test environment first and let me know if it can be a possible solution.

Cheers,

Domenico

benjamin000
Enthusiast

Hi Dom

You are correct in saying that VIO instances cannot be attached directly to external networks.

Currently we use a Centralized/Shared router, which all instances share.

From my understanding, BGP needs to be configured within NSX. Even when I disable SNAT in VIO and create a new network just for the public subnet, the instances do get assigned a public IP on their interface; however, there is no external access, due to the instance not knowing where the next hop needs to go. This is where BGP comes in, and also the DLR. I am not privy to the workings of BGP in NSX, but from my own research BGP is required, as dynamic routing is needed for a no-NAT network.
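(For reference, I disable SNAT on the router with the standard OpenStack CLI; the network and router names below are placeholders:)

openstack router set --external-gateway <EXTERNAL NETWORK> --disable-snat <ROUTER>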

I am moving to Exclusive routers when all this is done, as this will provide better performance, as you state.

I was a little confused by your statement regarding instances communicating with each other. This is something that I do not want, as the whole purpose of NSX is segmentation, and if I want instances to be able to talk I can always use security groups in OpenStack.

I posted the same request on LinkedIn and received a couple of replies, but naturally they need a network diagram, and as complex as my network is, it will take me a few days to do. I am surprised that no one is able to provide a definitive answer, as I am sure that a NAT setup in OpenStack is not ideal. Even OVH in their public cloud does not use NAT; I have a couple of VMs on their network, so I know that it is possible. However, they do not use VIO, so it is like comparing apples with oranges, but the underlying issue is the same.

4) Edit the following parameter to True in the nsxv.ini file (currently it shows False, as per the VIO logs you uploaded):

bind_floatingip_to_all_interfaces = True

I am a little hesitant to try this, as I currently have about 200 VMs running, so this may affect the running VMs... or maybe not?

The problem is not having a dev setup, as the number of servers needed for a VIO/NSX setup would not warrant it; maybe in a year or so I will be able to see the value, but paying another $2000 per month for servers is not within the budget. It would be great if I could somehow test on VMware's HOL, which may work, but it is highly unlikely.

Let me know if you know any VIO/NSX guys who want some work, as I know that for someone who knows what they are doing the setup is not that difficult. If I were not running live VMs I would try it myself, but as I said, I do not have the luxury of a dev setup.

Thanks Dom, I appreciate you taking the time to respond and provide some guidance; it is always appreciated.

Regards Ben McGuire
admin
Immortal

Before going deeper into your last reply, I forgot to ask you before:

is OVH using OpenStack or vCloud Director?

Because if they use vCloud Director... it is easy to associate an external IP address with a VM ;)

Cheers,

Domenico

benjamin000
Enthusiast

OVH uses OpenStack for their Public Cloud, but they do use vCloud Director as well, since they bought out the vCloud arm of VMware.

So we are not confused: that question does not apply to me, I am guessing, as I merely lease OVH servers. I just mentioned OVH using OpenStack because I use their public cloud VMs for my own websites, so I keep them separate from my own VIO installation. It sounds funny, but I do that so that if my site is ever DDoSed, which has been attempted before, it does not affect my VIO, as my VIO is on a private network anyhow. Anyway, that is enough about the workings of my network, as I have given too much away in a public forum for my liking. :)

Regards Ben McGuire
admin
Immortal

Hi Benjamin,

Last time I was not so precise... sorry about that.

When I wrote about instances communicating with each other, I meant that by default two VMs on the same internal network will not be able to communicate via their floating IPs.

This is because the NAT rule implementing the floating IP is created on the Edge's external interface. When we try to connect to the floating IP from an internal VM, only the Edge's internal interface is checked for DNAT rules, not the external interface, since the external interface is not being used as an ingress point to the Edge.

So in order to allow the internal VMs to communicate via their floating IPs we need to create a hairpin NAT, which is done by duplicating the SNAT and DNAT rules on the Edge's internal interface.

The problem is that we cannot do this through VIO; it would need to be done directly in NSX, meaning that if a change were later made to the Edge through VIO, those manual NSX settings would be overwritten.

For this reason, I recommended:

bind_floatingip_to_all_interfaces = True

which should allow VMs connected to the same internal network to communicate with each other using their public/floating IPs.
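To make the hairpin idea concrete, the duplicated rules on the internal interface would conceptually look like this (an illustration only, not actual NSX rule syntax):

DNAT on internal interface: destination = VM-B's floating IP -> translate to VM-B's internal IP

SNAT on internal interface: source = VM-A's internal IP -> translate to VM-A's floating IP (so VM-B's reply returns via the Edge rather than going directly VM to VM)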

I have the possibility to test it; I'll let you know, no worries!

Cheers,

Domenico

benjamin000
Enthusiast

Wow, that would be really helpful if you could test it, as it appears so simple if it works.

I will reply in more detail tomorrow as it is late here.

Thanks again :)

Regards Ben McGuire
admin
Immortal

Hi Benjamin,

I just finished my tests and I can confirm it is working!

As previously mentioned, if you set "bind_floatingip_to_all_interfaces = True" in nsxv.ini, VMs are able to communicate through their floating IP addresses.

The following are my tests:

We have two instances:

NikoVM-2:

Internal 192.168.200.52

Floating x.x.x.221

NikoVM-1:

Internal 192.168.200.51

Floating x.x.x.220

From the external network (this checks the floating IP on NikoVM-1):

1) Accessing x.x.x.220 from the outside:

login as: root

root@x.x.x.220's password:

Welcome to Ubuntu 14.04.5 LTS (GNU/Linux 3.13.0-108-generic x86_64)

Testing internal network:

2) Pinging NikoVM-2 internally:

root@nikovm-1:~# ping 192.168.200.52

PING 192.168.200.52 (192.168.200.52) 56(84) bytes of data.

64 bytes from 192.168.200.52: icmp_seq=1 ttl=64 time=0.584 ms

64 bytes from 192.168.200.52: icmp_seq=2 ttl=64 time=0.507 ms

Testing a floating IP address from an internal VM:

3) Pinging NikoVM-2 externally:

root@nikovm-1:~# ping x.x.x.221

PING x.x.x.221 (x.x.x.221) 56(84) bytes of data.

64 bytes from x.x.x.221: icmp_seq=1 ttl=63 time=0.894 ms

64 bytes from x.x.x.221: icmp_seq=2 ttl=63 time=1.05 ms

Testing connecting through the floating IP address:

4) Accessing NikoVM-2 via SSH:

root@nikovm-1:~# ssh root@x.x.x.221

The authenticity of host 'x.x.x.221 (x.x.x.221)' can't be established.

ECDSA key fingerprint is 51:8d:e0:e9:cc:38:03:fa:0b:12:9d:2c:15:1c:76:50.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'x.x.x.221' (ECDSA) to the list of known hosts.

root@x.x.x.221's password:

Welcome to Ubuntu 14.04.5 LTS (GNU/Linux 3.13.0-108-generic x86_64)

Please let me know what you think about it ;)

Cheers,

Domenico

benjamin000
Enthusiast

Hello. It sounds promising.

What IP is displayed on the instance interface?

Am I correct in thinking that there will be no NATted traffic, so all traffic directed to the public IP will hit the interface of the instance?

Does this do away with NAT?

It's late here, so I'll test tomorrow.

If this works you have provided a simple solution to an otherwise difficult problem.

Regards Ben McGuire
admin
Immortal

These are the IPs displayed:

Instance Name: NikoVM-1

IP Address: 192.168.200.51

Floating IP: x.x.x.220

NAT cannot be removed, considering an instance cannot be directly attached to an external network.

This trick allows VMs that are part of the same network to communicate using their floating IP addresses, rather than (as per the design) using the internal addresses only.

As I mentioned in my last comment:

Two VMs:

NikoVM-2:

Internal 192.168.200.52

Floating x.x.x.221

NikoVM-1:

Internal 192.168.200.51

Floating x.x.x.220

one NSX edge connected to:

- external network (to x.x.x.y/20)

- internal network (to 192.168.200.0/24)

- metadata
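As a side note, the internal/floating IP mapping can also be double-checked with the standard OpenStack CLI:

openstack floating ip list

openstack server show NikoVM-1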

Hoping the above helps you to solve your problem.

Cheers,

Domenico

benjamin000
Enthusiast

Hello

I think the confusion is that I do not want communication between VMs, as it would not be wise to have two customers able to talk to one another.

I am a little confused when you say that NAT cannot be removed. From what I have read this is possible, as many other providers are already doing it. I know OVH's public cloud doesn't use floating IPs at all; they attach public IPs where the usual internal NAT IP would be, which allows the public IP to be assigned to the instance interface. Also, OpenStack has the function to disable SNAT, which from what I have read needs to be disabled. I have created a new network and router with NAT disabled, and by having public subnets on the external network they do get assigned to the instance, but as you say this should not be possible.

I have an NSX expert who is helping with this, as I think BGP needs to be set up in NSX to allow for external routing.

In any event, I'll test your setup tomorrow and let you know, as it might be exactly what is needed.

Regards Ben McGuire
ZeMiracle
Enthusiast

Hello,

We have the exact same problem with our VIO deployment.

Our customer asks us to enable communication between project instances via their floating IPs, so I'm very happy to discover there is an NSXv plugin setting that allows binding the NAT to all interfaces:

bind_floatingip_to_all_interfaces = True

Can you tell me if this setting will be applied to existing tenant routers, or do we need to redeploy from VIO?

Cedric.

ZeMiracle
Enthusiast

I don't know if it can help, but in our VIO deployment we are able to provide "fully" routed IPs to our projects, though it needs some NSX configuration.

In VIO, we create an IP pool with a routed subnet.

We create a network and a subnet using this pool, and use NSX to manually (or with a script) attach this network to a DLR.

This way, instances get an IP from the routed IP pool.

Cedric.

benjamin000
Enthusiast

Hello

I believe the solution of binding floating IPs to all interfaces will work, but I have yet to test it, as I need to work out the impact such a change will have on existing instances. Maybe if you make the change you can advise?

In VIO, we create an IP pool with a routed subnet.

We create a network and a subnet using this pool, and use NSX to manually (or with a script) attach this network to a DLR.

Could you please elaborate on this? I too have read that subnet pools will work, but again I am not game to try it in a live environment.

I have been looking for a guaranteed solution to this for about 6 months, so any advice from someone who has actually implemented such a setup, like you have, would be most helpful.

Regards Ben McGuire
ZeMiracle
Enthusiast

To achieve a fully routed environment, we create a DLR with NSX and manually plug the VIO virtual network into it.

Here is the command to create an IP pool in VIO (you need an up-to-date CLI installed):

openstack subnet pool create --pool-prefix <NETWORK CIDR> --description "<description>" --default-prefix-length 28 <IP POOL NAME>

When creating networks and subnets, you have to specify that you want to take the subnet from the IP pool.

openstack network create --project <PROJECT NAME> --project-domain <DOMAIN> <NETWORK NAME>

Here is the command to create the subnet:

openstack subnet create <SUBNET NAME> --subnet-pool <SUBNET POOL NAME> --network <NETWORK NAME>

When the network is created, you have to manually (or with a script) plug the network into the DLR...

In this configuration, you use no floating IPs at all.
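For example, with illustrative values (the pool name, project, and CIDR below are made up; the manual DLR attachment in NSX still has to be done afterwards):

openstack subnet pool create --pool-prefix 203.0.113.0/24 --description "routed public pool" --default-prefix-length 28 routed-pool

openstack network create --project demo --project-domain Default routed-net

openstack subnet create routed-subnet --subnet-pool routed-pool --network routed-net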

Cedric.

admin
Immortal

Hi Cedric,

I'm so glad this setting has been useful for you!

Once applied, you do not have to redeploy anything; the setting is automatically applied to all the edges.

Cheers,

Domenico