midoshokry
Contributor

VIO 7.1 Deployment MariaDB helm chart failed

Hi,

I'm building a VIO lab for testing and I've run into an issue with the networking part (most likely a misunderstanding on my side). Briefly, these are the network subnets I'm using:

pod_cidr: 10.0.0.0/16
service_cidr: 172.16.0.0/24
management network: 192.168.1.241 - 192.168.1.246 (/24)
private OpenStack endpoint: 192.168.1.222
api_network: 192.168.1.225 - 192.168.1.238 (/28)
main home network (where vCenter, ESXi and DNS are connected): 192.168.1.0/24
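
In case it helps, this is how I plan to double-check which CIDRs the cluster is actually running with (standard kube-apiserver / kube-controller-manager flags; the pod names are taken from the pod listing further down):

# Effective service CIDR, from the kube-apiserver static pod:
kubectl -n kube-system get pod kube-apiserver-vio-manager.lab.local -o yaml | grep service-cluster-ip-range
# Effective pod CIDR, from the kube-controller-manager static pod:
kubectl -n kube-system get pod kube-controller-manager-vio-manager.lab.local -o yaml | grep cluster-cidr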

 

After deploying the VIO manager, I created a sample deployment, but the MariaDB chart fails due to connection issues. Below are the logs from one of the MariaDB Galera cluster pods:

 

root@vio-manager [ ~ ]# kubectl logs -n openstack mariadb-server-2
2021-06-27 17:26:52,951 - OpenStack-Helm Mariadb - INFO - This instance hostname: mariadb-server-2
2021-06-27 17:26:52,951 - OpenStack-Helm Mariadb - INFO - This instance number: 2
2021-06-27 17:26:56,006 WARNING Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7f2ce0f55400>: Failed to establish a new connection: [Errno 113] No route to host',)': /version/
2021-06-27 17:26:59,078 WARNING Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7f2ce0f55518>: Failed to establish a new connection: [Errno 113] No route to host',)': /version/
2021-06-27 17:27:02,150 WARNING Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7f2ce0f555f8>: Failed to establish a new connection: [Errno 113] No route to host',)': /version/
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/urllib3/connection.py", line 157, in _new_conn
    (self._dns_host, self.port), self.timeout, **extra_kw
  File "/usr/local/lib/python3.6/dist-packages/urllib3/util/connection.py", line 84, in create_connection
    raise err
  File "/usr/local/lib/python3.6/dist-packages/urllib3/util/connection.py", line 74, in create_connection
    sock.connect(sa)
OSError: [Errno 113] No route to host
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/urllib3/connectionpool.py", line 672, in urlopen
    chunked=chunked,
  File "/usr/local/lib/python3.6/dist-packages/urllib3/connectionpool.py", line 376, in _make_request
    self._validate_conn(conn)
  File "/usr/local/lib/python3.6/dist-packages/urllib3/connectionpool.py", line 994, in _validate_conn
    conn.connect()
  File "/usr/local/lib/python3.6/dist-packages/urllib3/connection.py", line 334, in connect
    conn = self._new_conn()
  File "/usr/local/lib/python3.6/dist-packages/urllib3/connection.py", line 169, in _new_conn
    self, "Failed to establish a new connection: %s" % e
urllib3.exceptions.NewConnectionError: <urllib3.connection.VerifiedHTTPSConnection object at 0x7f2ce0f556d8>: Failed to establish a new connection: [Errno 113] No route to host
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/tmp/start.py", line 57, in <module>
    kubernetes_version = kubernetes.client.VersionApi().get_code().git_version
  File "/usr/local/lib/python3.6/dist-packages/kubernetes/client/apis/version_api.py", line 55, in get_code
    (data) = self.get_code_with_http_info(**kwargs)
  File "/usr/local/lib/python3.6/dist-packages/kubernetes/client/apis/version_api.py", line 124, in get_code_with_http_info
    collection_formats=collection_formats)
  File "/usr/local/lib/python3.6/dist-packages/kubernetes/client/api_client.py", line 334, in call_api
    _return_http_data_only, collection_formats, _preload_content, _request_timeout)
  File "/usr/local/lib/python3.6/dist-packages/kubernetes/client/api_client.py", line 168, in __call_api
    _request_timeout=_request_timeout)
  File "/usr/local/lib/python3.6/dist-packages/kubernetes/client/api_client.py", line 355, in request
    headers=headers)
  File "/usr/local/lib/python3.6/dist-packages/kubernetes/client/rest.py", line 231, in GET
    query_params=query_params)
  File "/usr/local/lib/python3.6/dist-packages/kubernetes/client/rest.py", line 205, in request
    headers=headers)
  File "/usr/local/lib/python3.6/dist-packages/urllib3/request.py", line 76, in request
    method, url, fields=fields, headers=headers, **urlopen_kw
  File "/usr/local/lib/python3.6/dist-packages/urllib3/request.py", line 97, in request_encode_url
    return self.urlopen(method, url, **extra_kw)
  File "/usr/local/lib/python3.6/dist-packages/urllib3/poolmanager.py", line 330, in urlopen
    response = conn.urlopen(method, u.request_uri, **kw)
  File "/usr/local/lib/python3.6/dist-packages/urllib3/connectionpool.py", line 762, in urlopen
    **response_kw
  File "/usr/local/lib/python3.6/dist-packages/urllib3/connectionpool.py", line 762, in urlopen
    **response_kw
  File "/usr/local/lib/python3.6/dist-packages/urllib3/connectionpool.py", line 762, in urlopen
    **response_kw
  File "/usr/local/lib/python3.6/dist-packages/urllib3/connectionpool.py", line 720, in urlopen
    method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
  File "/usr/local/lib/python3.6/dist-packages/urllib3/util/retry.py", line 436, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='172.16.0.1', port=443): Max retries exceeded with url: /version/ (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7f2ce0f556d8>: Failed to establish a new connection: [Errno 113] No route to host',))

 

I have two questions:

  • Why are the Galera members trying to connect to "172.16.0.1"? That is the ClusterIP of the default kubernetes service. What could be wrong in my network configuration? (A connectivity check I have in mind is sketched right after this list.)
  • Are there any routing requirements between the pod_cidr, the management network and the API network?
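
For question 1, this is the check I'm considering (a rough sketch; it assumes the curlimages/curl image can be pulled in this lab, and it reuses one of the controller node names from the pod listing below): run a throwaway pod pinned to a controller node and see whether the default kubernetes Service VIP is reachable from the pod network, then compare with the same request made from the node itself.

# Hypothetical check: is 172.16.0.1:443 reachable from a pod scheduled on a controller node?
kubectl run net-debug --rm -it --restart=Never --image=curlimages/curl \
  --overrides='{"apiVersion":"v1","spec":{"nodeName":"controller-q5bzjvlnf8"}}' \
  --command -- curl -sk https://172.16.0.1/version

# Same request from the controller node itself (over SSH), to separate a pod-network
# problem from a node-level routing problem:
curl -sk https://172.16.0.1/version

# kube-proxy should have programmed a NAT rule for the Service VIP on every node:
iptables -t nat -L KUBE-SERVICES -n | grep 172.16.0.1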

Below is the pod status after launching the sample deployment:

 

root@vio-manager [ ~ ]# kubectl get po -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
cluster-api-system cluster-api-controller-manager-0 1/1 Running 1 5h17m 10.0.38.230 vio-manager.lab.local <none> <none>
cluster-api-system cluster-api-provider-controller-manager-0 1/1 Running 1 5h17m 10.0.38.226 vio-manager.lab.local <none> <none>
default vio-api-app-0 1/1 Running 1 5h16m 10.0.38.231 vio-manager.lab.local <none> <none>
default vio-docker-registry-8898b4bdc-jxbl5 1/1 Running 1 5h19m 10.0.38.228 vio-manager.lab.local <none> <none>
default vio-docker-registry-ca-9h7s5 1/1 Running 1 5h19m 10.0.38.221 vio-manager.lab.local <none> <none>
default vio-docker-registry-ca-9rs27 1/1 Running 0 3h40m 10.0.74.193 controller-q5bzjvlnf8 <none> <none>
default vio-docker-registry-ca-f8xgc 1/1 Running 1 3h39m 10.0.95.203 controller-2zfrpw9xpd <none> <none>
default vio-docker-registry-ca-jmgnd 1/1 Running 0 3h42m 10.0.28.7 controller-k4dfztcrzl <none> <none>
default vio-helm-repo-75f6cddff5-9fl7g 1/1 Running 1 5h19m 10.0.38.237 vio-manager.lab.local <none> <none>
default vio-ingress-cntl-nginx-ingress-controller-589dbdc9b4-slp7d 1/1 Running 1 5h16m 10.0.38.233 vio-manager.lab.local <none> <none>
default vio-ingress-cntl-nginx-ingress-default-backend-56f58c97b5-l7nl9 1/1 Running 1 5h16m 10.0.38.222 vio-manager.lab.local <none> <none>
default vio-operator-64b8c94d68-lkxv2 1/1 Running 1 5h16m 10.0.38.223 vio-manager.lab.local <none> <none>
default vio-swagger-app-0 1/1 Running 1 5h16m 10.0.38.227 vio-manager.lab.local <none> <none>
default vio-webui-6bdccf74c5-zgkp7 1/1 Running 1 5h17m 10.0.38.224 vio-manager.lab.local <none> <none>
default vio-webui-auth-proxy-0 1/1 Running 1 5h17m 10.0.38.225 vio-manager.lab.local <none> <none>
kube-system calico-kube-controllers-897c76cbb-2lktw 1/1 Running 1 5h19m 10.0.38.232 vio-manager.lab.local <none> <none>
kube-system calico-node-cwhll 1/1 Running 0 3h40m 192.168.1.243 controller-q5bzjvlnf8 <none> <none>
kube-system calico-node-frx6b 1/1 Running 0 3h45m 192.168.1.248 vio-manager.lab.local <none> <none>
kube-system calico-node-w4zw2 1/1 Running 1 3h39m 192.168.1.242 controller-2zfrpw9xpd <none> <none>
kube-system calico-node-xd8sl 1/1 Running 0 3h42m 192.168.1.241 controller-k4dfztcrzl <none> <none>
kube-system coredns-589sh 2/2 Running 0 3h41m 192.168.1.241 controller-k4dfztcrzl <none> <none>
kube-system coredns-9hn55 2/2 Running 0 3h40m 192.168.1.243 controller-q5bzjvlnf8 <none> <none>
kube-system coredns-h5hzj 2/2 Running 2 5h19m 192.168.1.248 vio-manager.lab.local <none> <none>
kube-system coredns-xjhs4 2/2 Running 2 3h39m 192.168.1.242 controller-2zfrpw9xpd <none> <none>
kube-system etcd-vio-manager.lab.local 1/1 Running 1 3h44m 192.168.1.248 vio-manager.lab.local <none> <none>
kube-system kube-apiserver-vio-manager.lab.local 1/1 Running 1 3h44m 192.168.1.248 vio-manager.lab.local <none> <none>
kube-system kube-controller-manager-vio-manager.lab.local 1/1 Running 0 3h44m 192.168.1.248 vio-manager.lab.local <none> <none>
kube-system kube-proxy-7ppmx 1/1 Running 0 3h42m 192.168.1.241 controller-k4dfztcrzl <none> <none>
kube-system kube-proxy-csj94 1/1 Running 1 5h20m 192.168.1.248 vio-manager.lab.local <none> <none>
kube-system kube-proxy-hb5pp 1/1 Running 1 3h39m 192.168.1.242 controller-2zfrpw9xpd <none> <none>
kube-system kube-proxy-r5f4n 1/1 Running 0 3h40m 192.168.1.243 controller-q5bzjvlnf8 <none> <none>
kube-system kube-scheduler-vio-manager.lab.local 1/1 Running 1 3h44m 192.168.1.248 vio-manager.lab.local <none> <none>
kube-system tiller-deploy-6ddb5b6b8d-vxwvp 1/1 Running 1 5h19m 10.0.38.229 vio-manager.lab.local <none> <none>
openstack cluster-controller-6fb766b55-pvhxr 1/1 Running 1 5h17m 10.0.38.219 vio-manager.lab.local <none> <none>
openstack create-objects-neutron-neutron1-jp9rtwdbvh-sjqjw 0/1 Completed 0 3h46m 10.0.38.204 vio-manager.lab.local <none> <none>
openstack create-viocluster-viocluster1-c687c2ab-33be-429c-becb-5e33zxlqv 0/1 Completed 0 3h45m 10.0.38.239 vio-manager.lab.local <none> <none>
openstack disc-vcenter-vcenter1-3858524e-aa86-484a-a546-572320200cb15kcb2 0/1 Completed 0 3h46m 10.0.38.203 vio-manager.lab.local <none> <none>
openstack helm-mariadb-mariadb1-svx5xhcmvs-mwx9c 0/1 Error 0 152m 10.0.74.226 controller-q5bzjvlnf8 <none> <none>
openstack helm-memcached-memcached1-wf76z5nq98-dxj48 0/1 Error 0 146m 10.0.28.38 controller-k4dfztcrzl <none> <none>
openstack helm-memcached-memcached1-wf76z5nq98-tb44j 1/1 Terminating 0 152m 10.0.95.240 controller-2zfrpw9xpd <none> <none>
openstack helm-rabbitmq-rabbitmq1-54k5xvhtvj-qk4j9 0/1 Error 0 146m 10.0.74.228 controller-q5bzjvlnf8 <none> <none>
openstack helm-rabbitmq-rabbitmq1-54k5xvhtvj-s267w 1/1 Terminating 0 152m 10.0.95.239 controller-2zfrpw9xpd <none> <none>
openstack license-controller-5f75f6b7d7-6lkp4 1/1 Running 1 5h17m 10.0.38.220 vio-manager.lab.local <none> <none>
openstack mariadb-ingress-85d965c899-cbf6d 0/1 Init:0/1 0 3h2m 10.0.74.204 controller-q5bzjvlnf8 <none> <none>
openstack mariadb-ingress-85d965c899-qqtjt 0/1 Init:0/1 0 3h2m 10.0.28.24 controller-k4dfztcrzl <none> <none>
openstack mariadb-ingress-error-pages-7dd75d8dfd-b4mjg 0/1 CrashLoopBackOff 63 3h2m 10.0.74.253 controller-q5bzjvlnf8 <none> <none>
openstack mariadb-ingress-error-pages-7dd75d8dfd-fjmht 0/1 CrashLoopBackOff 63 3h2m 10.0.28.63 controller-k4dfztcrzl <none> <none>
openstack mariadb-server-0 1/1 Terminating 0 3h2m 10.0.95.210 controller-2zfrpw9xpd <none> <none>
openstack mariadb-server-1 0/1 CrashLoopBackOff 15 58m 10.0.28.53 controller-k4dfztcrzl <none> <none>
openstack mariadb-server-2 0/1 CrashLoopBackOff 15 58m 10.0.74.243 controller-q5bzjvlnf8 <none> <none>
openstack mariadb1-etcd-79f4fbc576-6wvkx 0/1 Pending 0 146m <none> <none> <none> <none>
openstack mariadb1-etcd-79f4fbc576-mrzkk 0/1 Running 0 3h2m 10.0.28.23 controller-k4dfztcrzl <none> <none>
openstack mariadb1-etcd-79f4fbc576-p4nfk 1/1 Terminating 0 3h2m 10.0.95.209 controller-2zfrpw9xpd <none> <none>
openstack mariadb1-etcd-79f4fbc576-zxjh4 0/1 Running 0 3h2m 10.0.74.203 controller-q5bzjvlnf8 <none> <none>
openstack node-config-manager-s8c4h 1/1 Running 0 175m 192.168.1.241 controller-k4dfztcrzl <none> <none>
openstack node-config-manager-sdxkz 1/1 Running 0 175m 192.168.1.242 controller-2zfrpw9xpd <none> <none>
openstack node-config-manager-vbgmw 1/1 Running 0 175m 192.168.1.243 controller-q5bzjvlnf8 <none> <none>
openstack openstack-controller-849f99fdd5-wsh7n 1/1 Running 1 5h17m 10.0.38.236 vio-manager.lab.local <none> <none>
openstack osdeployment-nfv-vio 0/1 Completed 0 151m 10.0.38.244 vio-manager.lab.local <none> <none>
openstack patching-controller-6d6cfc5d6-8k7rb 1/1 Running 1 5h18m 10.0.38.234 vio-manager.lab.local <none> <none>
openstack rabbitmq1-rabbitmq-0 0/1 Terminating 0 155m <none> controller-2zfrpw9xpd <none> <none>
openstack rnb-controller-67cc7fc69c-bpnkt 2/2 Running 2 5h17m 10.0.38.238 vio-manager.lab.local <none> <none>
openstack status-controller-564886c86d-vbj2m 2/2 Running 2 5h17m 10.0.38.235 vio-manager.lab.local <none> <none>
openstack valid-barbican-barbican1-w6sv2sxbdz-l92sf 0/1 Completed 0 3h47m 10.0.38.250 vio-manager.lab.local <none> <none>
openstack valid-cinder-cinder1-9jbpghg88m-cpzn6 0/1 Completed 0 3h47m 10.0.38.208 vio-manager.lab.local <none> <none>
openstack valid-glance-glance1-w8r8z7bcz5-zhqsp 0/1 Completed 0 3h47m 10.0.38.210 vio-manager.lab.local <none> <none>
openstack valid-heat-heat1-ts7v4d8bvn-xc852 0/1 Completed 0 3h47m 10.0.38.209 vio-manager.lab.local <none> <none>
openstack valid-horizon-horizon1-rngphjxfwb-qx99l 0/1 Completed 0 3h47m 10.0.38.252 vio-manager.lab.local <none> <none>
openstack valid-keystone-keystone1-tnnjwvbqc7-w94v6 0/1 Completed 0 3h47m 10.0.38.253 vio-manager.lab.local <none> <none>
openstack valid-mariadb-mariadb1-wtxfd969n2-rptvp 0/1 Completed 0 3h47m 10.0.38.213 vio-manager.lab.local <none> <none>
openstack valid-memcached-memcached1-66dmh9pd2m-txr29 0/1 Completed 0 3h47m 10.0.38.215 vio-manager.lab.local <none> <none>
openstack valid-neutron-neutron1-4885z8qttw-pndfk 0/1 Completed 0 3h47m 10.0.38.205 vio-manager.lab.local <none> <none>
openstack valid-nova-nova1-p4nzkj8h4d-rws2h 0/1 Completed 0 3h47m 10.0.38.254 vio-manager.lab.local <none> <none>
openstack valid-novacompute-compute-bf680b8f-c46-qqtlpbndlz-vq2gb 0/1 Completed 0 3h47m 10.0.38.207 vio-manager.lab.local <none> <none>
openstack valid-octavia-octavia1-xbsvbnlnqh-7ps97 0/1 Completed 0 3h47m 10.0.38.200 vio-manager.lab.local <none> <none>
openstack valid-openvswitch-openvswitch1-2mcwhw28rm-bg98p 0/1 Completed 0 3h47m 10.0.38.216 vio-manager.lab.local <none> <none>
openstack valid-osdeployment-nfv-vio-vdv2cp96jq-87fcb 0/1 Completed 0 3h46m 10.0.38.211 vio-manager.lab.local <none> <none>
openstack valid-placement-placement1-xd8lrdcnk2-gxcc6 0/1 Completed 0 3h47m 10.0.38.199 vio-manager.lab.local <none> <none>
openstack valid-rabbitmq-rabbitmq1-jvmjqwvl6m-tlf98 0/1 Completed 0 3h47m 10.0.38.217 vio-manager.lab.local <none> <none>
openstack valid-vcenter-vcenter1-v6cf2mv7nn-dxflf 0/1 Completed 0 3h46m 10.0.38.198 vio-manager.lab.local <none> <none>
openstack valid-viocluster-viocluster1-c687c2ab-33be-429c-becb-5e33fpzjzt 0/1 Completed 0 3h45m 10.0.38.240 vio-manager.lab.local <none> <none>
openstack valid-vioingress-vioingress1-w76gfp7wns-79t94 0/1 Completed 0 3h47m 10.0.38.249 vio-manager.lab.local <none> <none>
openstack valid-viomachineset-controller1-cwz24cmrrt-tqzz6 0/1 Completed 0 3h47m 10.0.38.251 vio-manager.lab.local <none> <none>
openstack valid-viomachineset-manager1-tmr8nq7bd4-27tnx 0/1 Completed 0 3h47m 10.0.38.247 vio-manager.lab.local <none> <none>
openstack valid-viosecret-viosecret1-wq8gpf4mmf-9llr6 0/1 Completed 0 3h50m 10.0.38.243 vio-manager.lab.local <none> <none>
openstack valid-vioshim-vioadmin1-bsspd92558-cllvj 0/1 Completed 0 3h46m 10.0.38.197 vio-manager.lab.local <none> <none>
openstack valid-vioutils-vioutils1-pg67f8hhwn-js6tc 0/1 Completed 0 3h46m 10.0.38.193 vio-manager.lab.local <none> <none>
root@vio-manager [ ~ ]# helm ls -a
NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE
cluster-api 1 Sun Jun 27 14:22:31 2021 DEPLOYED cluster-api-7.1.0+17987093 cluster-api-system
mariadb1 1 Sun Jun 27 16:37:18 2021 FAILED mariadb-7.1.0+17987093 openstack
rabbitmq1 1 Sun Jun 27 16:43:53 2021 FAILED rabbitmq-7.1.0+17987093 openstack
vio-api 1 Sun Jun 27 14:23:19 2021 DEPLOYED vio-api-7.1.0+17987093 default
vio-ingress-cntl 1 Sun Jun 27 14:23:37 2021 DEPLOYED nginx-ingress-7.1.0+17987093 0.24.1 default
vio-lcm 1 Sun Jun 27 14:22:22 2021 DEPLOYED vio-lcm-controllers-7.1.0+17987093 1.0 openstack
vio-operator 1 Sun Jun 27 14:23:30 2021 DEPLOYED vio-operator-7.1.0+17987093 default
vio-patching-controller 1 Sun Jun 27 14:21:33 2021 DEPLOYED vio-patching-controller-7.1.0+17987093 openstack
vio-webui 1 Sun Jun 27 14:22:43 2021 DEPLOYED vio-webui-7.1.0+17987093 default
vioutils1 1 Sun Jun 27 16:45:00 2021 DEPLOYED vioutils-7.1.0+17987093 1.0 openstack
root@vio-manager [ ~ ]# viocli get deployment
PUBLIC VIP PRIVATE VIP HIGH AVAILABILITY
nfv-vio (192.168.1.223) 192.168.1.222 Enabled
NODE NAME ROLE VALIDATION STATUS IP
controller-2zfrpw9xpd Controller Success NotRunning 192.168.1.242
controller-k4dfztcrzl Controller Success Running 192.168.1.241
controller-q5bzjvlnf8 Controller Success Running 192.168.1.243
vio-manager.lab.local Manager Success Running 192.168.1.248
SERVICE CONTROLLER READY FAILURES
barbican pending... - -
cinder pending... - -
glance pending... - -
heat pending... - -
horizon pending... - -
ingress pending... - -
keystone pending... - -
mariadb mariadb-server 0/3 -
mariadb-ingress 0/2 -
mariadb-ingress-error-pages 0/2 -
mariadb1-etcd 0/3 -
memcached pending... - -
neutron pending... - -
nova pending... - -
nova-compute pending... - -
octavia pending... - -
openvswitch pending... - -
placement pending... - -
rabbitmq rabbitmq1-rabbitmq 0/1 -
vioshim pending... - -
vioutils node-config-manager 2/2 -
OpenStack Deployment State: PROVISIONING
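
For completeness, these are the diagnostics I plan to gather next (plain kubectl / helm v2 commands, nothing VIO-specific):

# Probe and scheduling details for one of the crashing Galera members, plus recent events:
kubectl -n openstack describe pod mariadb-server-2
kubectl -n openstack get events --sort-by=.metadata.creationTimestamp | tail -n 40

# Helm (v2 / tiller) view of the failed release:
helm status mariadb1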

 

Shokry,
Telco Cloud Architecture and Container orchestration addict