Jeremy_VE
Enthusiast

IP Assignment with vCAC External Network Profile


I've integrated vCAC and NSX and am noticing that the Edge Service Router deployed as part of a multi-machine blueprint receives 2 IPs on its single "uplink" interface from the External network profile.  My setup and the observed behavior are described below.  Any help in understanding why this happens would be appreciated.  It's not a huge deal in this learning lab, but before I implement this in a production environment I need to know whether this is expected behavior or whether something is wrong, since it effectively cuts the number of deployable networks in half.  This "transport network" (the segment between the manually deployed Edge Gateway and the dynamic Edge Service Routers) exists entirely within the vSphere environment and can be as large as a class A network if needed, but this is still a large waste of IP space that I'd like to resolve if possible.
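To put a number on the "cuts the possible networks in half" point, here is a minimal sketch of the transport-segment arithmetic. The subnet and reserved-address counts are illustrative assumptions, not values from this thread:

```python
import ipaddress

# Hypothetical transport ("external") network between the static Edge
# Gateway and the dynamically deployed Edge Service Routers.
transport = ipaddress.ip_network("192.168.13.0/24")  # example subnet only

usable_hosts = transport.num_addresses - 2  # minus network/broadcast
reserved = 1                                # uplink IP of the static Edge Gateway

# IPs each vCAC deployment consumes on the transport segment:
ips_per_esr_expected = 1  # one uplink IP per ESR
ips_per_esr_observed = 2  # uplink IP plus the second IP seen in the lab

expected = (usable_hosts - reserved) // ips_per_esr_expected
observed = (usable_hosts - reserved) // ips_per_esr_observed

print(f"Deployments possible at 1 IP per ESR: {expected}")
print(f"Deployments possible at 2 IPs per ESR: {observed}")
```

With a /24 transport network this works out to 253 versus 126 deployments, which is the halving described above.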

Topology:

I have an NSX Edge and virtual wire (NSX L2 switch) already deployed in the environment.  Within vCAC I have:

- a reservation that is linked to the dvPortGroup created by the NSX L2 switch

- an External network profile that is used to configure the uplink port of dynamically deployed NSX Edge Service Routers to connect to the LAN segment between the dynamic ESR and the already deployed NSX Edge/L2 switch

- a 1-Many NAT network profile that is used to configure the virtual machines deployed from vCAC blueprints

- a vCAC vSphere VM blueprint pointing to a snapshot of a VM within the vSphere environment (linked clone deployment)

- a vCAC Multi-Machine blueprint that contains the above blueprint, assigns a network interface to the VM, and uses the 1-Many NAT network profile to configure the IP settings on the VM.  The MM blueprint contains only a single VM, for the purpose of testing the dynamic network creation and IP assignment features/integration between vCAC and NSX.

Resulting Topology once VM is deployed:

    NSX Edge Gateway (manually deployed)
                     |
                     v
    NSX L2 Switch/Virtual Wire (manually deployed)
                     |
                     v
    NSX Edge Service Router (deployed as part of vCAC blueprint deployment)
                     |
                     v
    Virtual Machine (deployed as part of vCAC blueprint deployment)

In theory, when I request a resource from the MM blueprint:

1. The ESR is deployed with 2 interfaces: one for the External network, configured with an available IP from the corresponding subnet, and one for the internal NAT network, configured with the default gateway IP defined in the NAT network profile.

2. Rules for NAT and traffic handling are automatically configured within the ESR

3. The VM is deployed and configured with a NIC with the appropriate IP configurations as specified within the NAT network profile.


What actually happens:
1. The ESR is deployed with 2 NICs: one uplink NIC for the External network, which gets 2 IPs from the 13 subnet (instead of 1), and one NIC for the NAT'd network, configured with the default gateway IP from the NAT network profile.  Steps 2 and 3 still occur as expected.


Accepted Solutions
GrantOrchardVMw
Commander

Hi Jeremy,

This is expected behaviour. If you were to deploy an Edge manually, you would be asked for a "management IP", and then an IP for use by the uplink. This is where the second IP comes from. If you were to use one-to-one NAT, you would get an additional IP for every VM on the NAT'd segment.

Cheers,

Grant

Grant http://grantorchard.com


6 Replies

Jeremy_VE
Enthusiast

Thanks Grant.

I kind of figured that it was hardcoded in the MM Blueprint deployment code when I saw in the Recent Tasks section within vCAC the execution of the "Add secondary IP to NIC" workflow call to vCO.  This was confirmed when I looked at the edge router and saw the NAT rules using the secondary IP as the "translated to" IP address.
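The "translated to" addresses can also be pulled out programmatically from the Edge's NAT configuration. A minimal parsing sketch follows; the XML shape is an assumption modeled on what the NSX-v API returns for an Edge's NAT config, and the addresses are made-up examples:

```python
import xml.etree.ElementTree as ET

# Sample NAT config roughly in the shape the NSX-v API returns for an
# Edge's NAT rules -- structure and addresses are illustrative
# assumptions; field names may differ by NSX version.
SAMPLE_NAT_CONFIG = """
<nat>
  <natRules>
    <natRule>
      <action>dnat</action>
      <originalAddress>10.0.13.21</originalAddress>
      <translatedAddress>192.168.100.10</translatedAddress>
    </natRule>
    <natRule>
      <action>snat</action>
      <originalAddress>192.168.100.0/24</originalAddress>
      <translatedAddress>10.0.13.21</translatedAddress>
    </natRule>
  </natRules>
</nat>
"""

def translated_addresses(nat_xml: str) -> list[str]:
    """Return the 'translated to' address of every NAT rule."""
    root = ET.fromstring(nat_xml)
    return [rule.findtext("translatedAddress")
            for rule in root.iter("natRule")]

print(translated_addresses(SAMPLE_NAT_CONFIG))
```

Checking which of the Edge's uplink IPs shows up in these translated addresses confirms whether the secondary IP is the one carrying NAT traffic.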

If I were to create such a workflow manually, is the best practice to use the initially configured IP, or is a secondary IP recommended for performance reasons?

Jeremy

GrantOrchardVMw
Commander

Take a look at the NAT rules, and figure out which IP it is using (I'd assume the secondary). The management IP should not be used for traffic handling.

Hope that helps!

Grant

Grant http://grantorchard.com
Jeremy_VE
Enthusiast

It is using the secondary.  I was wondering about the design considerations for this "external" network, especially if the requirement is to dynamically deploy a large number of NAT'd networks, as this could require a large IP space (> /24) and could be an issue in environments where IP ranges are at a premium.  Not something I need solved, just good info to know.
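For planning purposes, the required size of the transport network can be estimated from the deployment count. A small sketch, assuming IPv4, 2 IPs consumed per ESR (as observed above), and one address reserved for the static Edge Gateway's uplink:

```python
import math

def transport_prefix_for(deployments: int, ips_per_esr: int = 2,
                         reserved: int = 1) -> int:
    """Smallest IPv4 prefix length whose address count covers the
    transport-segment demand (assumptions: 2 IPs per ESR, one reserved
    static Edge uplink, plus network/broadcast addresses)."""
    needed = deployments * ips_per_esr + reserved + 2  # +network/broadcast
    return 32 - math.ceil(math.log2(needed))

for n in (100, 500, 2000):
    print(n, "deployments ->", f"/{transport_prefix_for(n)}")
```

Under these assumptions, 100 deployments still fit in a /24, but 500 already need a /22 and 2000 a /20, which illustrates why the secondary IP matters when address ranges are at a premium.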

Jeremy

GrantOrchardVMw
Commander

Yeah, I see what you're saying. I need to delve into the changes in NSX 6.1 and the use of Logical Routers to understand what other options we have. Inherently we still have the limitation of NAT; it would just be nice if we could identify a different interface/IP range for the management IP of the Edge.

Grant

Grant http://grantorchard.com
Jeremy_VE
Enthusiast

It would be nice to have more granular control in linking the virtual interfaces on the Edge to a physical NIC on the virtual machine.  If that were possible, you could inherently increase security by placing the management interface on a separate, dedicated network.

Jeremy
