VMware Horizon Community
srodenburg
Expert

Upgrade View 7.2 to 7.9 CloudPod

Hello everybody,

I would like to verify whether a certain behaviour we are seeing during an upgrade is "normal/by design" or not.

We are upgrading a 4-POD 7.2 system to 7.9.

After POD1 was done, it looked good at first glance. But we started noticing that users who had a desktop session in POD1 were not reconnected to their desktops but got new sessions instead (so they now had two sessions: one in POD1 and a new one in some not-yet-upgraded POD).

We also noticed that when doing a global search for a user, the not-yet-upgraded PODs 2, 3 and 4 could not see a user's session in POD1. POD1, however, could see user sessions in the "older" PODs, but as said, not the other way around.

In other words, POD1 behaved like an island: it understood "itself + the others", but PODs 2, 3 and 4 had their own picture of the world which did not include what happens inside POD1.

I know VMware recommends upgrading all Cloud Pods as soon as possible. Is this "partial" visibility one of the reasons?


2 Replies
srodenburg
Expert

Solved. The problem was caused by a bug in the Connection Server 7.9 installer. It upgrades connection brokers just fine, but it leaves the Inter-Pod API firewall rules in the Windows firewall disabled. Well, it does not actually disable the rules, it forgets to re-enable them. Let me explain:

- The installer uninstalls the old Connection Server version.

- It also uninstalls all the firewall rules that came with it. This includes the Inter-Pod rules that had to be manually enabled in the first place to allow the PODs to communicate with each other.

- It then installs the new version (7.9 in this case).

- It installs the firewall rules again, incl. the Inter-Pod rules, but DOES NOT ENABLE THEM.

- So directly after the upgrade, this POD is unreachable by all the other PODs (a quick way to verify the rule state is sketched below).
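
To see what the installer left behind, the rule state can be checked with the built-in NetSecurity cmdlets. A minimal sketch; the "*Inter-pod*" display-name filter is an assumption, so verify the actual rule names on your connection servers:

    # List the Horizon inter-pod firewall rules and their enabled state.
    # NOTE: the display-name pattern is an assumption; if it matches nothing,
    # list everything with: Get-NetFirewallRule | Sort-Object DisplayName
    Get-NetFirewallRule -DisplayName "*Inter-pod*" |
        Select-Object DisplayName, Enabled, Direction, Action |
        Format-Table -AutoSize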

Manually re-enabling the Inter-Pod rules on that connection server solves the problem: the POD is reachable by the other PODs again and all is good. Repeat for every connection broker in every POD being upgraded.
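
Re-enabling could then look like the sketch below, under the same display-name assumption. The Inter-Pod API listens on TCP 8472 by default, so filtering by local port is an alternative way to find the right rules:

    # Re-enable the inter-pod rules on this connection server.
    Get-NetFirewallRule -DisplayName "*Inter-pod*" | Enable-NetFirewallRule

    # Alternative: locate candidate rules via their local port (8472 is the
    # default Inter-Pod API port) and enable whatever matches.
    Get-NetFirewallPortFilter | Where-Object { $_.LocalPort -eq "8472" } |
        Get-NetFirewallRule | Enable-NetFirewallRule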

I regard this as a bug. It would be easy for the installer to see that this POD is part of a Cloud Pod infrastructure and, knowing this, enable the Inter-Pod rules to avoid this problem. But it does not.

DCasota
Expert

After an installation, it often remains helpful to keep monitoring the pods.

Pod federation health, site info, pod endpoint health and local pod status are all exposed through the Horizon APIs. You may have a look at this masterpiece: PowerCLI-Example-Scripts/VMware.HV.Helper.psm1 at master · vmware/PowerCLI-Example-Scripts · GitHub.

Local pod status equals false in a non-pod constellation, while in a pod federation it reads enabled (not true!).
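
A minimal sketch of such a query, assuming the VMware.VimAutomation.HorizonView PowerCLI module and a placeholder connection server name; the call pattern follows the View API's PodFederation and Pod services as used in VMware.HV.Helper, and property names may differ slightly between Horizon versions:

    # Connect to a connection server and read the pod federation state.
    Import-Module VMware.VimAutomation.HorizonView

    $hvServer = Connect-HVServer -Server cs1.example.com   # placeholder name
    $services = $hvServer.ExtensionData

    # localPodStatus is a string ("ENABLED"/"DISABLED"), not a boolean.
    $podFed = $services.PodFederation.PodFederation_Get()
    $podFed.Data.LocalPodStatus

    # List the pods this pod knows about (assumed per VMware.HV.Helper usage).
    $services.Pod.Pod_List() | Select-Object DisplayName, LocalPod

    Disconnect-HVServer -Server $hvServer -Confirm:$false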

Some years ago I put some work into this, based on the source above: VMwareHorizonPoolsReport/Get-VMwareHorizonPoolsReport.ps1 at master · dcasota/VMwareHorizonPoolsRepo.... The repo contains some fake output as well.

A VIPA (View Interpod API) port that is off should be detectable by processing the pod information.
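
As a complementary network-level check, reachability of the Inter-Pod API port could also be probed directly from a connection server in another pod. A sketch assuming the default VIPA port of TCP 8472 and a placeholder host name:

    # Probe the Inter-Pod API (VIPA) port on a remote connection server.
    $probe = Test-NetConnection -ComputerName cs-pod1.example.com -Port 8472
    if (-not $probe.TcpTestSucceeded) {
        Write-Warning "Inter-pod port 8472 is not reachable - check the firewall rules."
    }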
