VMware Cloud Community
thgreyprnc
Enthusiast

vCloud 8.20 integration of 6.5U1 hosts

Hello everyone,

I am currently migrating from old 5.5 hosts to new 6.5U1 hosts and have an issue with the host preparation within vCloud. So:

-vCloud Director is updated to the latest 8.20 release.

-Within vCenter, I created a new cluster and joined my 6 new 6.5U1 hosts to it.

-New and old hosts are joined to the same distributed switch and have exactly the same inter-VLAN access rules on the physical network: the vCloud Director appliance (on VLAN X) is able to reach the new hosts (on VLAN Y) over the network (I even temporarily changed the inter-VLAN rule to any/any). By the way, the old hosts are of course also in VLAN Y.

Now, here comes the issue: in my Provider VDC, I added the new cluster as a resource pool. Then:

-The first thing that struck me during that step is this warning: "At least one external network that is considered accessible to the Provider VDC is not accessible by the selected resource pool." >>> no way! I quadruple-checked from end to end; the network config is exactly the same for old and new hosts. (I noticed, within vCloud/Provider VDC, that a distributed port group in the external networks doesn't have the same name as in the distributed switch in vCenter: maybe it's only that?)

-Anyway, my main problem is this error message during the host preparation: "Invalid argument "version" for remote call to host. A specified parameter was not correct: version"

I rebooted the hosts, checked that SSH is open, tried the same operation with the hosts in maintenance mode, and rechecked the interop matrix between vCloud Director 8.20 and ESXi 6.5U1 > it's all good, but still: always the same error.

Maybe I'm missing something obvious here? Thanks for your precious help 🙂

12 Replies
thgreyprnc
Enthusiast

Nobody ?

Just thinking about it: could it be linked to the fact that I am using vSAN 6.6 on those new hosts?

On the interop matrix page, when checking vCloud Director 8.20 against vSAN, the latest "green checked" version of vSAN is 6.2.

When making the same check with vCloud Director 9, vSAN 6.6 is green checked.

So it might be a good idea to upgrade vCloud Director to version 9 anyway, but I doubt it will solve the issue.

thgreyprnc
Enthusiast

I just ran a test to rule out the vSAN angle: I completely disabled vSAN, rebooted the hosts, and retried adding them to vCloud: exact same error message.

So I'm beginning to doubt that vSAN is the cause of this particular error.

Nobody has a clue? At least, does someone have vCloud Director 8.20 running with ESXi 6.5U1 hosts?

thgreyprnc
Enthusiast

Hello

Really? 120 views and 0 replies? Wow O_o

Meanwhile, I made the following discoveries and ran the following tests:

-Took 2 of the 6 hosts and tried the following releases of ESXi: 6.5U1 Dell custom, 6.5 Dell custom, 6.5U1 VMware, 6.0U3 Dell custom.

-With 6.0U3 the hosts get prepared as usual without any issue. With any version past 6.0U3, the preparation process fails.

-Cloned the underlying infra and upgraded vCD to 9.0 > same issue.

The issue seems definitely related to the ESXi release. So why does the compatibility matrix state, as of version 8.20, that vCD is compatible with ESXi 6.5 and ESXi 6.5U1?

Furthermore, version 9.0 of vCD even officially supports vSAN 6.6.1...

How could this be written down for 2 different versions of vCD if it's not the case? And even better: nobody on communities.vmware.com has a clue???

I opened a support case on October 11th; same result: no answer!!!

...I am beginning to wonder whether any companies apart from us are even using vCD? Seriously? Has nobody tried to integrate ESXi 6.5 with vCD for almost a year???

thgreyprnc
Enthusiast

Wait a second. Just thinking about it: what if the compatibility matrix was established based on hosts that were upgraded from 6.0U3 to 6.5 or 6.5U1?

Maybe it works because the vCD agent is already installed on the hosts?

mhampto
VMware Employee

What is the Support request number opened for this?

If you start one of the hosts at 6.0 (if supported) and upgrade to 6.5, does the same issue appear?

thgreyprnc
Enthusiast

17595407610 is the request number.

Yes, as I suspected: starting from 6.0U3, integrating into vCD 8.20, and finally upgrading to 6.5U1 works!

But... because in such situations we all know very well that there is almost always a "but": vCD network isolation no longer works on hosts upgraded to 6.5U1, and... we are using it.

thgreyprnc
Enthusiast
(Accepted Solution)

Updating this topic for anybody who may come across the same issue.

Long story short, as stated in the official compatibility matrix, vCD 8.20 is perfectly compatible with vSphere 6.5U1.

The reason I wasn't able to join a freshly installed 6.5U1 host was that I was still using a "vCloud Director Network Isolation" (vCDNI) network pool.

vCDNI pools are not compatible with vSphere 6.5.

Since my vCenter was already running 6.5 (meaning the "migrate to VXLAN" function was no longer supported), I had to manually migrate all vDC networks to the VXLAN pool (power off all vApps > de-associate every vDC network from the VMs/vApp itself > delete the vCDNI vDC networks > switch the vDC to VXLAN > recreate the vDC networks > assign them to the VMs > power on the vApps) and finally delete the vCDNI pool, which allowed me to join the freshly installed hosts.

Of course, prior to doing that, I had to enable VXLAN on my hosts. That basically consists, via Networking & Security, of adjusting the MTU to 1600 between hosts, both virtually (on the VDS) and physically, and creating a VXLAN vmkernel interface on each related host.
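As a sanity check after bumping the MTU, you can verify that 1600-byte frames actually pass between hosts end to end. A minimal sketch from an ESXi shell; the vmkernel interface name (vmk1) and the destination host IP are placeholders for your environment:

```shell
# Send a non-fragmentable 1572-byte ICMP payload from the VXLAN
# vmkernel interface (1572 bytes payload + 28 bytes of ICMP/IP
# headers = 1600 bytes on the wire).
# If this fails while a plain ping works, some device in the path
# is still at the default 1500 MTU.
vmkping -I vmk1 -d -s 1572 192.168.10.12
```

Run it in both directions between each pair of hosts before trusting the fabric.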

Pay attention: at the step where you delete your old vCDNI vDC networks, if any IPsec tunnel is defined on the vDC's associated edge gateway, it gets deleted without any error or warning!!! So unlike the annoying "avoiding-to-cut-the-branch-you-are-sitting-on" error message, which doesn't allow you to delete any vDC network without first de-associating any linked vApps/VMs, vCloud Director takes no notice whatsoever of active IPsec tunnels: they are simply deleted <_< Which, for me, doesn't make any sense, but OK, I guess I had to learn it... "be ready for any", as VMware says 😉 Anyway, simply be aware of it; it may save you in case you end up finding out that you actually have no up-to-date documentation of that tunnel configuration... 😉
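Before deleting anything, it's worth dumping the edge gateway configuration (IPsec settings included) so it can be recreated afterwards. A hedged sketch against the vCloud REST API: the hostname, credentials, and edge gateway ID below are placeholders, and the Accept version header (27.0 corresponds to vCD 8.20) should be adjusted to your release:

```shell
# Log in (user@org format) and note the x-vcloud-authorization
# token returned in the response headers.
curl -ksi -X POST 'https://vcd.example.com/api/sessions' \
  -u 'administrator@System' \
  -H 'Accept: application/*+xml;version=27.0'

# Save the full edge gateway config XML, which includes the
# GatewayIpsecVpnService section, for safekeeping.
curl -ks 'https://vcd.example.com/api/admin/edgeGateway/<edge-id>' \
  -H 'Accept: application/*+xml;version=27.0' \
  -H 'x-vcloud-authorization: <token>' > edge-backup.xml
```

The saved XML is not directly re-importable as-is, but it preserves every tunnel parameter you would otherwise lose.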

I hope this will help someone 🙂

ArchangelDF
Contributor

Thanks for the info, I have just run into this issue.

From what I understand, you are saying that if we add new hosts, then before prepping them in vCD 9.0 we have to make sure that VXLAN and the NSX host preparation are set up?

We have not set up NSX (or vShield in earlier versions) with vCD, and we have not run into this issue until now, when upgrading from vCD 5.5 to 9.0.

Thanks in advance.

sur_sumit
Contributor

@thgreyprnc

I am also facing exactly the same issue. We have a vCDNI pool which is not VXLAN-backed, but we have already migrated to 6.5. Now we want to add new ESXi 6.5 hosts to a new cluster, but the "prepare host" step fails in vCD.

I read your post with the solution and it was helpful, but I was not able to understand exactly how to manually migrate from the current pool to a VXLAN-backed pool.

It would be very helpful if you could explain it a little.

Thanks,

Sumit

thgreyprnc
Enthusiast

@sur_sumit

No problem. It has happened so often that I was helped on the internet without a good opportunity to give something back; I thought this was the right one 🙂

What I suggest you do is:

1) First create a new test org VDC, in which you create a vApp with 2 VMs, to which you associate a test network backed by your existing vCDNI pool.

2) Be sure to manually place each of those VMs on a separate physical host (the goal is to validate that VXLAN packets flow between the physical hosts).

3) Run a continuous ping (ping -t) between the two VMs and validate that everything is working.

4) Power off that vApp.

5) From the org VDC > My Cloud > vApp > virtual machine > Properties > Hardware > Network > None

6) From the org VDC > My Cloud > vApp > Networking > delete the org VDC network

7) From the org VDC > Administration > Org VDC networks > delete the network

8) From System > Manage > Organization VDCs > right-click > Properties > Network Pool & Services > switch the network pool to your VXLAN pool

9) From the org VDC > Administration > Org VDC networks > recreate the network

10) From the org VDC > My Cloud > vApp > virtual machine > Properties > Hardware > Network > assign the new org VDC network object

11) Power on the vApp and confirm that network connectivity is OK.

Then I suggest you properly document, for each org VDC, all network settings and repeat the previous steps; once all your org VDCs are migrated to VXLAN, you will be able to delete the vCDNI pool, which in turn will allow you to join your 6.5 hosts cleanly. Like I said, pay particular attention if you are using IPsec tunnels, since they will be deleted the moment you finish deleting your vCDNI org networks (at least that was my case).
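If it helps to see the ordering at a glance, the steps above can be sketched as a simple checklist. This is purely illustrative: the step names are mine, not vCloud Director API calls.

```python
# The vCDNI -> VXLAN migration order, encoded so the ordering
# constraints are explicit. In particular, the network-pool switch is
# only possible once the vCDNI-backed networks in that vDC are gone,
# and deleting the org VDC network is the point where any edge
# gateway IPsec tunnels silently disappear.
MIGRATION_STEPS = [
    "power_off_vapp",
    "detach_network_from_vms",
    "delete_vapp_network",
    "delete_org_vdc_network",   # WARNING: edge IPsec tunnels vanish here
    "switch_pool_to_vxlan",
    "recreate_org_vdc_network",
    "reattach_network_to_vms",
    "power_on_vapp",
]

def ordered_before(first: str, second: str) -> bool:
    """True if `first` must be completed before `second`."""
    return MIGRATION_STEPS.index(first) < MIGRATION_STEPS.index(second)
```

Repeat the whole sequence per org VDC, and only delete the vCDNI pool itself once every vDC has gone through it.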

thgreyprnc
Enthusiast

@ArchangelDF

Just saw your reply now. It depends on what you want to do.

From a general perspective, the last version of vCenter supported by vCloud Director for using the vCDNI > VXLAN "migrate" function is 6.0U3.

But of course, in order to have this migrate function available in the first place, you might have to upgrade vCD to an intermediate version, one which still supports the vSphere version you are currently running 🙂

If possible, I would: upgrade to vCD 8.20 > use the migration function on a test org VDC to migrate from vCDNI to VXLAN > apply it to production > finally update vCD to 9.1.

So the first thing to do is check the different interop matrices in order to know from where to where you can go, and compatible with what: VMware Product Interoperability Matrices.

sur_sumit
Contributor

With both of the hosts on 5.5, right?
