Good news: I was able to roll back my lab and re-run the updateSSOConfig.py and UpdateLsEndpoint.py scripts, only to find that the /psc URL did indeed load successfully on both nodes with the NetScaler load balancing in place. So at least I know that the correct behaviour is that /psc should open on both appliances.
By examining my snapshots at different stages I have now been able to identify a difference between the original migration node and the clean appliance:
When you run the updateSSOConfig.py Python script to repoint the SSO URL to the load-balanced address, it reports that hostname.txt and server.xml were modified:
# python updateSSOConfig.py --lb-fqdn=psc-ha-vip.sbcpureconsult.internal
script version:1.1.0
executing vmafd-cli command
Modifying hostname.txt
modifying server.xml
Executing StopService --all
Executing StartService --all
I was able to locate hostname.txt files (containing the load balancer address) in:
/etc/vmware/service-state/vmidentity/hostname.txt
/etc/vmware-sso/keys/hostname.txt (missing on node 2, but contained the local name on node 1)
/etc/vmware-sso/hostname.txt
As noted above, this second hostname file (the one under /etc/vmware-sso/keys) was missing on the second node. Why is this? My guess is that it is only used transiently during the script run, to inject the correct value into server.xml.
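If you want to check these on your own appliances, something like the following should track them down and show what each one contains (a minimal sketch, run as root; paths may vary slightly between builds):
# find /etc -name hostname.txt 2>/dev/null
# for f in /etc/vmware/service-state/vmidentity/hostname.txt /etc/vmware-sso/keys/hostname.txt /etc/vmware-sso/hostname.txt; do echo "== $f =="; cat "$f" 2>/dev/null || echo "(missing)"; done
The first command lists every hostname.txt under /etc; the second prints the contents of the three paths above and flags any that are absent.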
The server.xml file itself is located at:
/usr/lib/vmware-sso/vmware-sts/conf/server.xml
My faulty node contained the following certificate entries under the connector definition:
..store="STS_INTERNAL_SSL_CERT"
certificateKeystoreFile="STS_INTERNAL_SSL_CERT"..
My working node contained:
..store="MACHINE_SSL_CERT"
certificateKeystoreFile="MACHINE_SSL_CERT"..
So I was able to simply copy the server.xml file from the working node (overwriting the original on the faulty node) and remove the /etc/vmware-sso/keys/hostname.txt file to match that configuration. Following a reboot, my first SSO node now responds correctly, redirecting https://hosso01.sbcpureconsult.internal/psc to https://psc-ha-vip.sbcpureconsult.internal/websso to obtain its SAML token before ultimately displaying the PSC client UI.
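For anyone in the same position, the repair boils down to something like the following, run on the faulty node (a sketch only: hosso02.sbcpureconsult.internal stands in for whatever your working node is called, root SSH between the appliances is assumed, and take a snapshot or backup first):
# cp /usr/lib/vmware-sso/vmware-sts/conf/server.xml /usr/lib/vmware-sso/vmware-sts/conf/server.xml.bak
# scp root@hosso02.sbcpureconsult.internal:/usr/lib/vmware-sso/vmware-sts/conf/server.xml /usr/lib/vmware-sso/vmware-sts/conf/server.xml
# rm /etc/vmware-sso/keys/hostname.txt
# reboot
After the reboot, opening https://hosso01.sbcpureconsult.internal/psc in a browser is the simplest way to confirm that the redirect to the load-balanced /websso URL is working again.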
As a follow-up, by examining the STS_INTERNAL_SSL_CERT store I can see that its certificate was issued by the original Windows vCenter Server 5.5 SSO CA to the subject name:
ssoserver,dc=vsphere,dc=local
This store is not present on the other node, so the load-balancing certificate replacement must somehow be skipped by one of the upgrade scripts when this scenario occurs (5.5 SSO upgraded to a 6.5 PSC).
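If you want to check your own nodes for this rogue store, the VECS command-line tool on the appliance will list the stores and dump the certificate so you can read the subject and issuer (a minimal sketch; vecs-cli lives under /usr/lib/vmware-vmafd/bin):
# /usr/lib/vmware-vmafd/bin/vecs-cli store list
# /usr/lib/vmware-vmafd/bin/vecs-cli entry list --store STS_INTERNAL_SSL_CERT --text | grep -E 'Subject:|Issuer:'
On my faulty node the second command shows the ssoserver,dc=vsphere,dc=local subject issued by the old 5.5 SSO CA; on the healthy node the store simply doesn't appear in the store list.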
Hope that helps someone else one day!