VMware Cloud Community
faillax23
Enthusiast

VIO-NSX Deployment

I'm migrating from a VDS deployment to an NSX deployment.

I installed NSX 6.2 in the same VIO management cluster with 3 controllers, and everything is OK.

When I deploy VIO 2 I receive an error like "NSX failed on the first controller", reason: timeout.

I read the thread "VIO deployment with NSX in SoftLayer fails to start neutron", but I've already installed the 6.2 revision... so what is my problem?

Have you found anything else?

KarolSte
Enthusiast

Hi,

This error message usually means that there was an error when the neutron driver tried to communicate with NSX. Please log in to the first VIO controller VM and check the logs under /var/log/neutron/ (or use viocli deployment getlogs) to see what the problem was; feel free to paste the log here.
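If it helps, a quick scan like this (a minimal sketch; it only assumes the standard /var/log/neutron/ location mentioned above) pulls out the ERROR and TRACE lines so you don't have to read the whole files:

import glob

# Scan every neutron log on the controller and print only the error lines.
for path in sorted(glob.glob('/var/log/neutron/*.log')):
    with open(path) as log:
        for line in log:
            if ' ERROR ' in line or ' TRACE ' in line:
                print('%s: %s' % (path, line.rstrip()))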

Best Regards,

Karol

faillax23
Enthusiast

Here is my log, thank you very much.

VIO-manager log

2015-11-24 18:49:22,174 p=337 u=jarvis |  TASK: [config-controller | start neutron on first controller] *****************

2015-11-24 18:49:22,393 p=337 u=jarvis |  changed: [172.18.50.178]

2015-11-24 18:49:22,394 p=337 u=jarvis |  TASK: [config-controller | wait for neutron to start on first controller for NSX] ***

2015-11-24 19:04:22,709 p=337 u=jarvis |  failed: [172.18.50.178] => {"elapsed": 900, "failed": true}

2015-11-24 19:04:22,709 p=337 u=jarvis |  msg: Timeout when waiting for 127.0.0.1:9696

2015-11-24 19:04:22,709 p=337 u=jarvis |  ...ignoring

2015-11-24 19:04:22,710 p=337 u=jarvis |  TASK: [config-controller | stop neutron if port 9696 is not ready] ************

2015-11-24 19:04:23,051 p=337 u=jarvis |  changed: [172.18.50.178]

2015-11-24 19:04:24,082 p=337 u=jarvis |  ok: [172.18.50.179]

2015-11-24 19:04:24,093 p=337 u=jarvis |  TASK: [config-controller | fail if port 9696 is not ready] ********************

2015-11-24 19:04:24,127 p=337 u=jarvis |  failed: [172.18.50.178] => {"failed": true}

2015-11-24 19:04:24,127 p=337 u=jarvis |  msg: the neutron server start failed

2015-11-24 19:04:24,133 p=337 u=jarvis |  failed: [172.18.50.179] => {"failed": true}

2015-11-24 19:04:24,133 p=337 u=jarvis |  msg: the neutron server start failed

2015-11-24 19:04:24,143 p=337 u=jarvis |  FATAL: all hosts have already failed -- aborting
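For reference, that "wait for neutron to start" task is essentially a TCP probe of the neutron API port; this minimal sketch (my own hand-run equivalent, not the playbook's actual code) confirms that port 9696 never opens on the first controller:

import socket

# Probe the port the playbook waits for; neutron-server listens on 9696.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(5)
try:
    sock.connect(('127.0.0.1', 9696))
    print('neutron API port 9696 is accepting connections')
except socket.error as err:
    print('port 9696 not ready: %s' % err)
finally:
    sock.close()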

A detail from the controller01 log (900 seconds of the same error repeating):

2015-11-24 18:49:25.539 24957 DEBUG vmware_nsx.neutron.plugins.vmware.vshield.edge_utils [-] Failed to deploy Edge for router backup-a917bd3e-53bb edge_deploy_result_s$

2015-11-24 18:49:25.546 24957 ERROR vmware_nsx.neutron.plugins.vmware.vshield.edge_appliance_driver [-] NSXv: deploy edge failed.

2015-11-24 18:49:25.546 24957 TRACE vmware_nsx.neutron.plugins.vmware.vshield.edge_appliance_driver Traceback (most recent call last):

2015-11-24 18:49:25.546 24957 TRACE vmware_nsx.neutron.plugins.vmware.vshield.edge_appliance_driver   File "/usr/local/lib/python2.7/dist-packages/vmware_nsx/neutron/p$

2015-11-24 18:49:25.546 24957 TRACE vmware_nsx.neutron.plugins.vmware.vshield.edge_appliance_driver     async=False)[0]

2015-11-24 18:49:25.546 24957 TRACE vmware_nsx.neutron.plugins.vmware.vshield.edge_appliance_driver   File "/usr/local/lib/python2.7/dist-packages/vmware_nsx/neutron/p$

2015-11-24 18:49:25.546 24957 TRACE vmware_nsx.neutron.plugins.vmware.vshield.edge_appliance_driver     return self.do_request(HTTP_POST, uri, request, decode=False)

2015-11-24 18:49:25.546 24957 TRACE vmware_nsx.neutron.plugins.vmware.vshield.edge_appliance_driver   File "/usr/local/lib/python2.7/dist-packages/vmware_nsx/neutron/p$

2015-11-24 18:49:25.546 24957 TRACE vmware_nsx.neutron.plugins.vmware.vshield.edge_appliance_driver     headers, encodeParams)

2015-11-24 18:49:25.546 24957 TRACE vmware_nsx.neutron.plugins.vmware.vshield.edge_appliance_driver   File "/usr/lib/python2.7/dist-packages/retrying.py", line 68, in $

2015-11-24 18:49:25.546 24957 TRACE vmware_nsx.neutron.plugins.vmware.vshield.edge_appliance_driver     return Retrying(*dargs, **dkw).call(f, *args, **kw)

2015-11-24 18:49:25.546 24957 TRACE vmware_nsx.neutron.plugins.vmware.vshield.edge_appliance_driver   File "/usr/lib/python2.7/dist-packages/retrying.py", line 223, in$

2015-11-24 18:49:25.546 24957 TRACE vmware_nsx.neutron.plugins.vmware.vshield.edge_appliance_driver     return attempt.get(self._wrap_exception)

2015-11-24 18:49:25.546 24957 TRACE vmware_nsx.neutron.plugins.vmware.vshield.edge_appliance_driver   File "/usr/lib/python2.7/dist-packages/retrying.py", line 261, in$

2015-11-24 18:49:25.546 24957 TRACE vmware_nsx.neutron.plugins.vmware.vshield.edge_appliance_driver     six.reraise(self.value[0], self.value[1], self.value[2])

2015-11-24 18:49:25.546 24957 TRACE vmware_nsx.neutron.plugins.vmware.vshield.edge_appliance_driver   File "/usr/lib/python2.7/dist-packages/retrying.py", line 217, in$

2015-11-24 18:49:25.546 24957 TRACE vmware_nsx.neutron.plugins.vmware.vshield.edge_appliance_driver     attempt = Attempt(fn(*args, **kwargs), attempt_number, False)

2015-11-24 18:49:25.546 24957 TRACE vmware_nsx.neutron.plugins.vmware.vshield.edge_appliance_driver   File "/usr/local/lib/python2.7/dist-packages/vmware_nsx/neutron/p$

2015-11-24 18:49:25.546 24957 TRACE vmware_nsx.neutron.plugins.vmware.vshield.edge_appliance_driver     return client(method, uri, params, headers, encodeParams)

2015-11-24 18:49:25.546 24957 TRACE vmware_nsx.neutron.plugins.vmware.vshield.edge_appliance_driver   File "/usr/local/lib/python2.7/dist-packages/vmware_nsx/neutron/p$

2015-11-24 18:49:25.546 24957 TRACE vmware_nsx.neutron.plugins.vmware.vshield.edge_appliance_driver     raise cls(uri=uri, status=status, header=header, response=respo$

2015-11-24 18:49:25.546 24957 TRACE vmware_nsx.neutron.plugins.vmware.vshield.edge_appliance_driver RequestBad: Request https://172.18.50.200/api/4.0/edges is Bad, res$

2015-11-24 18:49:25.546 24957 TRACE vmware_nsx.neutron.plugins.vmware.vshield.edge_appliance_driver
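The body of that 400 response is truncated in the trace. Replaying the request by hand shows the NSX side of it; here is the rough sketch I use with the requests library (the manager address and credentials are placeholders; a GET on the same URI at least confirms the API is reachable, and the full error text names the validation failure):

import requests

# Placeholders: substitute your NSX Manager address and admin credentials.
NSX_MANAGER = 'https://172.18.50.200'
AUTH = ('admin', 'secret')

# Listing the existing edges on the same URI the plugin POSTs to.
resp = requests.get(NSX_MANAGER + '/api/4.0/edges', auth=AUTH, verify=False)
print(resp.status_code)
print(resp.text[:2000])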

KarolSte
Enthusiast

Is there anything else in the neutron log that would say why that request was bad?

faillax23
Enthusiast

Hi Karol, my problem is solved... in my NSX configuration I forgot to set up the Segment ID Pool.

Now it works!
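For anyone else who lands here: the pool I was missing is configured under Installation > Logical Network Preparation > Segment ID in the NSX UI. As far as I know it can also be created through the NSX API; a hedged sketch (endpoint and XML shape as I understand them from the NSX 6.x API reference, and the range values are only examples):

import requests

# Example range only; pick values that fit your environment.
SEGMENT_RANGE = (
    '<segmentRange>'
    '<name>vio-segment-pool</name>'
    '<begin>5000</begin>'
    '<end>5999</end>'
    '</segmentRange>'
)

resp = requests.post(
    'https://172.18.50.200/api/2.0/vdn/config/segments',
    data=SEGMENT_RANGE,
    headers={'Content-Type': 'application/xml'},
    auth=('admin', 'secret'),  # placeholder credentials
    verify=False,
)
print(resp.status_code, resp.text)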

But now I have a new problem related to instances (with the VDS deployment I didn't have this problem).

Everything works very well (security groups, DHCP, and so on), but the instances are unable to fetch metadata from the HTTP endpoint; they can fetch metadata only from the config drive.

The metadata routing VM exists!
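To narrow it down, this is the probe I run from inside a guest (just the standard OpenStack metadata endpoint, nothing VIO-specific); with NSX the request should be proxied through that metadata edge:

import urllib2  # Python 2 inside the guest; on Python 3 use urllib.request

# Standard OpenStack metadata endpoint; independent of config-drive.
try:
    resp = urllib2.urlopen('', timeout=5)
    print(resp.read())
except Exception as err:
    print('metadata endpoint unreachable: %s' % err)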
