Hi,
I have the following configuration.
I have a Windows Server 2016 machine with the AD and DNS roles, IP 192.168.10.10.
First, I imported the vCSA OVA file into Workstation with the following configuration:
Address Family : ipv4
Mode : static
IP Address : 192.168.10.12
Prefix : 24
Gateway : 192.168.10.1
DNS Server : 192.168.10.10
Host Identity : vCsa.vdns.local (A and PTR records created in the Windows Server DNS).
Once the OVA file is imported into Workstation, I am asked to access the URL https://vcsa.ad.local:5480.
When I try to access the address via Chrome/Edge, I get "192.168.10.12 refused to connect", which means Stage 2 doesn't start at all, as I have seen in many videos.
So I enabled SSH on the vCSA and accessed it. Using the command service-control --status, I found that the services vpxd and vsphere-ui are not running.
I get the following response :
Command> service-control --status
Stopped:
applmgmt lwsmd pschealth vmafdd vmcad vmcam vmdird vmdnsd vmonapi vmware-analytics vmware-cis-license vmware-cm vmware-content-library vmware-eam vmware-imagebuilder vmware-mbcs vmware-netdumper vmware-perfcharts vmware-postgres-archiver vmware-rbd-watchdog vmware-rhttpproxy vmware-sca vmware-sps vmware-statsmonitor vmware-sts-idmd vmware-stsd vmware-updatemgr vmware-vapi-endpoint vmware-vcha vmware-vmon vmware-vpostgres vmware-vpxd vmware-vpxd-svcs vmware-vsan-health vmware-vsm vsan-dps vsphere-client vsphere-ui
Then I ran the following command: service-control --start --all
I received the following errors:
Command> service-control --start --all
Operation not cancellable. Please wait for it to finish...
Performing start operation on service lwsmd...
2019-02-08T14:25:24.580Z Failure setting accounting for lwsmd. Err Failed to set unit properties on lwsmd.service: Unit lwsmd.service is not loaded.
Successfully started service lwsmd
Performing start operation on service vmafdd...
2019-02-08T14:25:25.083Z Failure setting accounting for vmafdd. Err Failed to set unit properties on vmafdd.service: Unit vmafdd.service is not loaded.
Successfully started service vmafdd
Performing start operation on service vmdird...
2019-02-08T14:25:28.180Z Failure setting accounting for vmdird. Err Failed to set unit properties on vmdird.service: Unit vmdird.service is not loaded.
Successfully started service vmdird
Performing start operation on service vmcad...
2019-02-08T14:25:32.415Z Failure setting accounting for vmcad. Err Failed to set unit properties on vmcad.service: Unit vmcad.service is not loaded.
Successfully started service vmcad
Performing start operation on service vmware-sts-idmd...
2019-02-08T14:25:33.942Z Failure setting accounting for vmware-sts-idmd. Err Failed to set unit properties on vmware-sts-idmd.service: Unit vmware-sts-idmd.service is not loaded.
Successfully started service vmware-sts-idmd
Performing start operation on service vmware-stsd...
2019-02-08T14:25:46.581Z Failure setting accounting for vmware-stsd. Err Failed to set unit properties on vmware-stsd.service: Unit vmware-stsd.service is not loaded.
2019-02-08T14:26:01.076Z RC = 1
Stdout =
Stderr = Job for vmware-stsd.service failed because the control process exited with error code. See "systemctl status vmware-stsd.service" and "journalctl -xe" for details.
2019-02-08T14:26:01.077Z {
"problemId": null,
"resolution": null,
"detail": [
{
"translatable": "An error occurred while invoking external command : '%(0)s'",
"id": "install.ciscommon.command.errinvoke",
"args": [
"Stderr: Job for vmware-stsd.service failed because the control process exited with error code. See \"systemctl status vmware-stsd.service\" and \"journalctl -xe\" for details.\n"
],
"localized": "An error occurred while invoking external command : 'Stderr: Job for vmware-stsd.service failed because the control process exited with error code. See \"systemctl status vmware-stsd.service\" and \"journalctl -xe\" for details.\n'"
}
],
"componentKey": null
}
Error executing start on service vmware-stsd. Details {
"problemId": null,
"resolution": null,
"detail": [
{
"translatable": "An error occurred while starting service '%(0)s'",
"id": "install.ciscommon.service.failstart",
"args": [
"vmware-stsd"
],
"localized": "An error occurred while starting service 'vmware-stsd'"
}
],
"componentKey": null
}
Service-control failed. Error: {
"problemId": null,
"resolution": null,
"detail": [
{
"translatable": "An error occurred while starting service '%(0)s'",
"id": "install.ciscommon.service.failstart",
"args": [
"vmware-stsd"
],
"localized": "An error occurred while starting service 'vmware-stsd'"
}
],
"componentKey": null
}
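(For reference, the error message itself points at two follow-up commands. A minimal sketch of running them from the appliance BASH shell — this assumes shell access has been enabled from the appliancesh prompt; the unit name is taken from the error output above:)

```shell
# Inspect why the vmware-stsd control process exited, as the error suggests.
# --no-pager keeps the output scrollable in an SSH session.
systemctl status vmware-stsd.service --no-pager
journalctl -u vmware-stsd.service --no-pager | tail -n 50
```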
Could someone advise me on what is going on?
Thank You
This site can’t be reached
vcsa.dc.local’s server IP address could not be found.
Where does this "vcsa.dc.local" come from? I thought the FQDN was vcsa.vdns.local?
And if you configure your vCenter with DNS and FQDNs, the corresponding DNS servers must be reachable by the clients, especially if it is a .local zone that is not publicly resolvable. If your laptop is using the Google DNS server, there will be problems; likewise if vCenter cannot resolve these FQDNs because the DNS servers for these entries are not reachable.
This is especially true during deployment, but also later during operation. Alternatively, you can deploy vCenter with IP addresses only, instead of a resolvable FQDN, and configure the ESXi hosts with IP addresses as well. In that case DNS is less important, but you will never be able to change the IP address of the vCenter.
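To rule out the resolver as the variable, you can query the lab DNS server directly so a wrong default resolver (e.g. 8.8.8.8) cannot mask the problem. A quick sketch, using the addresses from this thread (adjust for your own setup); nslookup accepts the DNS server as an optional second argument on both Windows and Linux:

```shell
# Forward (A) lookup, forced against the AD/DNS box at 192.168.10.10
nslookup vcsa.vdns.local 192.168.10.10
# Reverse (PTR) lookup for the appliance IP
nslookup 192.168.10.12 192.168.10.10
```

Run the same two commands both from the client machine and from the vCSA shell; they should return matching answers in both places.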
If they're not running, the installation failed and you should trash it and start again. Specify a gateway this time, even if it's a host-only network.
Thanks for the reply,
I have added a gateway and tried again; the result above (from when I wrote this post) is from the fresh installation.
They are on a host-only network.
Looks like your deployment wasn't successful and VCSA is broken. But since you didn't specify a gateway during deployment, the question is whether your PC/laptop is in the same subnet (192.168.30.0/24)?
And which VCSA version do you use?
Were both stages successful during installation? Usually all services should be started in stage 2.
I did specify a gateway, the above SSH response is from the fresh install with gateway added.
The PC can ping the interface of VCSA.
I have tried both VMware-vCenter-Server-Appliance-6.7.0.21000-11726888_OVF10 and VMware-vCenter-Server-Appliance-6.5.0.23000-10964411_OVF10.
Same issue.
Stage 2 DOES NOT BEGIN. As I have learned, when I access the Web UI via https://vcsa.vdns.local:5480, the page that shows up allows you to start Stage 2. This doesn't happen at all, as the page says 192.168.30.12 refused to connect.
Thank You
Stage 2 automatically starts in the installer after stage 1 has finished successfully. You don't have to log in to the VAMI after stage 1.
Just look at this short video tutorial: VMware vCSA 6.7 Clean Installation - YouTube
So, if stage 2 doesn't start automatically, the problem is in stage 1 or in the firstboot process.
Maybe this will help:
Collect Deployment Log Files for the vCenter Server Appliance
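As a rough shortcut once SSH is enabled, you can scan the appliance-side logs for the first failing step. A sketch, assuming the usual VCSA 6.x log locations (/var/log/firstboot and /var/log/vmware — adjust if your build differs):

```shell
# Surface the first error/failure lines across the firstboot and VMware logs;
# 2>/dev/null hides noise from directories that don't exist on this build.
grep -riE 'error|fail' /var/log/firstboot/ /var/log/vmware/ 2>/dev/null | head -n 20
```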
Appreciate the guidance.
I got to the logs; the vsphere-client-update.log and vsphere-ui-update.log have no data in them.
The file cloudvm.log has the following content. Frankly, I did not find anything that points to an error, although some components were skipped; the vSphere Client and vSphere UI are installed.
wrapping usermod with python script
2019-02-08T14:15:04.623Z: Fix for systemd-journald SIGABRT core during heavy I/O
2019-02-08T14:15:04.637Z: Restarting systemd-journald service
2019-02-08T14:16:00.653Z: Updating configuration state based on install params
disabling ssh
passwd: password expiry information changed.
setting shell to appliancesh
shell .DEFAULT
shell2 .DEFAULT
Allow ssh connections from all hosts
Disabling sshd.
Note: Forwarding request to 'systemctl disable sshd.service'.
Removed symlink /etc/systemd/system/multi-user.target.wants/sshd.service.
2019-02-08T14:16:03.101Z: Updating network configuration
2019-02-08T14:16:03.467Z Getting value for install-parameter: appliance.net.addr.family
2019-02-08T14:16:03.468Z ipv4
2019-02-08T14:16:03.468Z Getting value for install-parameter: appliance.net.mode
2019-02-08T14:16:03.468Z appliance.net.mode is set to 'static'
2019-02-08T14:16:03.468Z Getting value for install-parameter: appliance.net.addr
2019-02-08T14:16:03.468Z Getting value for install-parameter: appliance.net.prefix
2019-02-08T14:16:03.468Z Getting value for install-parameter: appliance.net.gateway
2019-02-08T14:16:03.468Z appliance.net.addr is set to '192.168.10.12'
2019-02-08T14:16:03.468Z appliance.net.prefix is set to '24'
2019-02-08T14:16:03.468Z appliance.net.gateway is set to '192.168.10.1'
2019-02-08T14:16:03.469Z Executing command: ['/opt/vmware/share/vami/vami_set_network', 'eth0', 'STATICV4', '192.168.10.12', '255.255.255.0', '192.168.10.1']
2019-02-08T14:16:03.469Z Running command: ['/opt/vmware/share/vami/vami_set_network', 'eth0', 'STATICV4', '192.168.10.12', '255.255.255.0', '192.168.10.1']
2019-02-08T14:16:09.953Z Done running command
2019-02-08T14:16:09.953Z Stdout: net.ipv6.conf.eth0.disable_ipv6 = 1
Network parameters successfully changed to requested values
2019-02-08T14:16:09.953Z Getting value for install-parameter: appliance.net.mode
2019-02-08T14:16:09.953Z Getting value for install-parameter: appliance.net.addr
2019-02-08T14:16:09.953Z Getting value for install-parameter: appliance.net.prefix
2019-02-08T14:16:09.953Z Getting value for install-parameter: appliance.net.gateway
2019-02-08T14:16:09.953Z appliance.net.addr is set to '192.168.10.12'
2019-02-08T14:16:09.953Z appliance.net.prefix is set to '24'
2019-02-08T14:16:09.953Z appliance.net.gateway is set to '192.168.10.1'
2019-02-08T14:16:09.953Z Prefix is not 32, no need to update /etc/systemd/network/10-eth0.network
2019-02-08T14:16:09.953Z Getting value for install-parameter: appliance.net.dns.servers
2019-02-08T14:16:09.953Z DNS servers: 192.168.10.10
PNID configuration failure /etc/vmware/systemname_info.json {[Errno 2] No such file or directory: '/etc/vmware/systemname_info.json'}
PNID configuration failure /etc/vmware/systemname_info.json {[Errno 2] No such file or directory: '/etc/vmware/systemname_info.json'}
2019-02-08T14:16:10.360Z Getting value for install-parameter: appliance.net.dns.searchlist
2019-02-08T14:16:10.360Z DNS search:
2019-02-08T14:16:10.360Z Getting value for install-parameter: appliance.net.mode
2019-02-08T14:16:10.360Z Found static mode. No need to wait for IP.
2019-02-08T14:16:10.360Z Getting value for install-parameter: appliance.net.pnid
2019-02-08T14:16:10.360Z appliance.net.pnid is set to 'vCSA.vdns.local'
2019-02-08T14:16:10.361Z PNID 'vCSA.vdns.local' appears to be an FQDN; using it for the hostname
2019-02-08T14:16:10.361Z Running command: ['/opt/vmware/share/vami/vami_set_hostname', 'vCSA.vdns.local']
2019-02-08T14:16:10.475Z Done running command
2019-02-08T14:16:10.476Z Stdout: == set_ipv4 ==
DEFULT_INT: eth0
DEFAULT_IPV4: 192.168.10.12
HN: vCSA
DN: vdns.local
==============
== set_ipv6 ==
DEFULT_INT: eth0
DEFAULT_IPV6:
HN: vCSA
DN: vdns.local
==============
Host name has been set to vCSA.vdns.local
2019-02-08T14:16:10.476Z Getting value for install-parameter: appliance.ntp.servers
2019-02-08T14:16:10.476Z Getting value for install-parameter: appliance.time.tools-sync
2019-02-08T14:16:10.476Z Running command: ['/usr/bin/systemctl', 'stop', 'ntpd']
2019-02-08T14:16:10.482Z Done running command
2019-02-08T14:16:10.482Z Running command: ['/usr/bin/systemctl', 'disable', 'ntpd']
2019-02-08T14:16:10.548Z Done running command
2019-02-08T14:16:10.548Z Stderr: Removed symlink /etc/systemd/system/multi-user.target.wants/ntpd.service.
2019-02-08T14:16:10.549Z Running command: ['/opt/vmware/share/vami/vami_ip6_addr', 'eth0']
2019-02-08T14:16:10.572Z Done running command
2019-02-08T14:16:10.572Z Running command: ['/opt/vmware/share/vami/vami_ip_addr', 'eth0']
2019-02-08T14:16:10.584Z Done running command
2019-02-08T14:16:10.584Z Stdout: 192.168.10.12
2019-02-08T14:16:10.584Z Running command: ['groupadd', '-r', 'dnsmasq']
2019-02-08T14:16:10.669Z Done running command
2019-02-08T14:16:10.669Z Running command: ['useradd', '-r', '-g', 'dnsmasq', 'dnsmasq']
2019-02-08T14:16:10.771Z Done running command
2019-02-08T14:16:10.771Z Running command: ['service', 'dnsmasq', 'stop']
2019-02-08T14:16:10.798Z Done running command
2019-02-08T14:16:10.798Z Running command: ['systemctl', 'enable', 'dnsmasq']
2019-02-08T14:16:10.884Z Done running command
2019-02-08T14:16:10.884Z Stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/dnsmasq.service to /usr/lib/systemd/system/dnsmasq.service.
2019-02-08T14:16:10.902Z Running command: ['service', 'systemd-resolved', 'restart']
2019-02-08T14:16:10.995Z Done running command
2019-02-08T14:16:11.024Z Running command: ['service', 'dnsmasq', 'restart']
2019-02-08T14:16:11.071Z Done running command
2019-02-08T14:16:11.495Z: Recreate Swap
2019-02-08T14:16:11.498Z Disk Util: INFO: Recreating swap
Physical volume "/dev/sdc" changed
1 physical volume(s) resized / 0 physical volume(s) not resized
Setting up swapspace version 1, size = 25 GiB (26835152896 bytes)
no label, UUID=54421216-3b67-4891-908b-8c47781f5d59
2019-02-08T14:16:11.809Z Disk Util: INFO: Swap resizing done
2019-02-08T14:16:11.812Z: Initializing storage
2019-02-08T14:16:11.927Z Disk Util: INFO: core_vg on /dev/sdd setup started
mke2fs 1.42.13 (17-May-2015)
2019-02-08T14:16:13.100Z Disk Util: INFO: core_vg on /dev/sdd setup completed
2019-02-08T14:16:13.176Z Disk Util: INFO: log_vg on /dev/sde setup started
mke2fs 1.42.13 (17-May-2015)
2019-02-08T14:16:13.598Z Disk Util: INFO: log_vg on /dev/sde setup completed
2019-02-08T14:16:13.660Z Disk Util: INFO: db_vg on /dev/sdf setup started
mke2fs 1.42.13 (17-May-2015)
2019-02-08T14:16:14.010Z Disk Util: INFO: db_vg on /dev/sdf setup completed
2019-02-08T14:16:14.072Z Disk Util: INFO: dblog_vg on /dev/sdg setup started
mke2fs 1.42.13 (17-May-2015)
2019-02-08T14:16:14.416Z Disk Util: INFO: dblog_vg on /dev/sdg setup completed
2019-02-08T14:16:14.506Z Disk Util: INFO: seat_vg on /dev/sdh setup started
mke2fs 1.42.13 (17-May-2015)
2019-02-08T14:16:14.898Z Disk Util: INFO: seat_vg on /dev/sdh setup completed
2019-02-08T14:16:14.974Z Disk Util: INFO: netdump_vg on /dev/sdi setup started
mke2fs 1.42.13 (17-May-2015)
2019-02-08T14:16:15.108Z Disk Util: INFO: netdump_vg on /dev/sdi setup completed
2019-02-08T14:16:15.186Z Disk Util: INFO: autodeploy_vg on /dev/sdj setup started
mke2fs 1.42.13 (17-May-2015)
2019-02-08T14:16:15.536Z Disk Util: INFO: autodeploy_vg on /dev/sdj setup completed
2019-02-08T14:16:15.630Z Disk Util: INFO: imagebuilder_vg on /dev/sdk setup started
mke2fs 1.42.13 (17-May-2015)
2019-02-08T14:16:16.036Z Disk Util: INFO: imagebuilder_vg on /dev/sdk setup completed
2019-02-08T14:16:16.120Z Disk Util: INFO: updatemgr_vg on /dev/sdl setup started
mke2fs 1.42.13 (17-May-2015)
2019-02-08T14:16:16.561Z Disk Util: INFO: updatemgr_vg on /dev/sdl setup completed
2019-02-08T14:16:16.665Z Disk Util: INFO: archive_vg on /dev/sdm setup started
mke2fs 1.42.13 (17-May-2015)
2019-02-08T14:16:17.082Z Disk Util: INFO: archive_vg on /dev/sdm setup completed
2019-02-08T14:16:17.093Z Disk Util: INFO: All filesystems created. Mounting all
2019-02-08T14:16:17.487Z: Setting up log, core symlinks
2019-02-08T14:16:17.628Z: Doing initial package configuration ...
2019-02-08T14:16:18.420Z: Deployment type is embedded
Installing : VMware-TlsReconfigurator-6.7.0-11726888.x86_64.rpm
Installing : cis-upgrade-runner-6.7.0-11726888.x86_64.rpm
Installing : VMware-jmemtool-6.7.0-11726888.x86_64.rpm
Installing : VMware-commonjars-6.7.0-11726888.x86_64.rpm
Skip Installing : VMware-vc-support-6.7.0-11726888.x86_64.rpm
Installing : vmware-pyvmomi-6.7.0-0.0.11727104.noarch.rpm
Skip Installing : applmgmt-6.7.0-11726888.x86_64.rpm
Installing : vmware-snmp-1.2.6.rpm
Installing : rvc_1.6.0-8979020_x86_64.rpm
Skip Installing : vc-deploy-6.7.0-11726888.x86_64.rpm
Installing : VMware-unixODBC-2.3.2.vmw.2-6.7.0.x86_64.rpm
Installing : vmware-lwis-6.2.0-9958279.x86_64.rpm
Skip Installing : VMware-visl-integration-6.7.0-11726888.x86_64.rpm
Installing : vmware-afd-6.7.0.4549-11338770.x86_64.rpm
Installing : vmware-directory-client-6.7.0.3781-11338774.x86_64.rpm
Installing : vmware-directory-6.7.0.3781-11338774.x86_64.rpm
Installing : vmware-ic-deploy-6.7.0.3090-11338776.x86_64.rpm
Installing : vmware-certificate-client-6.7.0.4582-11338783.x86_64.rpm
Installing : vmware-certificate-server-6.7.0.4582-11338783.x86_64.rpm
Installing : vmware-identity-sts-6.7.0.4892-11338777.noarch.rpm
Installing : VMware-pod-6.7.0-11726888.x86_64.rpm
Installing : vmware-dns-client-1.0.0-11338780.x86_64.rpm
Installing : vmware-dns-server-1.0.0-11338780.x86_64.rpm
Installing : dbcc-1.0.0-1.noarch.rpm
Skip Installing : VMware-Log-Insight-Agent-4.5.0-5626690.noarch.rpm
Installing : VMware-vmon-6.7.0-11726888.x86_64.rpm
Installing : VMware-rhttpproxy-6.7.0-11726888.x86_64.rpm
Installing : VMware-analytics-6.7.0-9393109.x86_64.rpm
Installing : vmware-cm-6.7.0-11726888.x86_64.rpm
Installing : VMware-cis-license-6.7.0-9193020.x86_64.rpm
Installing : vmware-psc-health-6.7.0.1529-11338778.x86_64.rpm
Installing : vmware-sca-6.7.0.656-11338765.noarch.rpm
Installing : vmware-esx-netdumper-6.7.0-0.0.11726888.i386.rpm
Installing : VMware-applmon-cloudvm-6.7.0-11726888.x86_64.rpm
Installing : applmgmt-cloudvm-6.7.0-11726888.x86_64.rpm
Skip Installing : VMware-vapi-6.7.0-11726888.x86_64.rpm
Installing : VMware-Postgres-osslibs-9.6.9.0-8615968.x86_64.rpm
Installing : VMware-Postgres-osslibs-server-9.6.9.0-8615968.x86_64.rpm
Installing : VMware-Postgres-libs-9.6.9.0-8615968.x86_64.rpm
Installing : VMware-Postgres-9.6.9.0-8615968.x86_64.rpm
Installing : VMware-Postgres-server-9.6.9.0-8615968.x86_64.rpm
Installing : VMware-Postgres-pg_rewind-9.6.9.0-8615968.x86_64.rpm
Installing : VMware-Postgres-extras-9.6.9.0-8615968.x86_64.rpm
Installing : VMware-Postgres-cis-visl-scripts-9.6.9.0-8615968.x86_64.rpm
Installing : VMware-Postgres-pg_top-9.6.9.0-8615968.x86_64.rpm
Installing : VMware-Postgres-odbc-9.6.9.0-8615968.x86_64.rpm
Installing : VMware-Postgres-contrib-9.6.9.0-8615968.x86_64.rpm
Installing : VMware-Postgres-client-jdbc-9.6.9.0-8615968.noarch.rpm
Installing : VMware-dbconfig-6.7.0-11726888.x86_64.rpm
Installing : VMware-Postgres-pg_archiver-9.6.9.0-8615968.x86_64.rpm
Installing : VMware-vpxd-svcs-6.7.0-11726888.x86_64.rpm
Skip Installing : VMware-certificatemanagement-6.7.0-11726888.x86_64.rpm
Skip Installing : VMware-hvc-6.7.0-11726888.x86_64.rpm
Skip Installing : VMware-trustmanagement-6.7.0-11726888.x86_64.rpm
Installing : VMware-vpxd-6.7.0-11726888.x86_64.rpm
Installing : VMware-vpxd-agents-eesx-6.7.0-11726888.x86_64.rpm
Installing : vmware-virgo-6.7.0-11726888.x86_64.rpm
Installing : VMware-cloudvm-vimtop-6.7.0-11726888.x86_64.rpm
Installing : VMware-content-library-6.7.0-11726888.x86_64.rpm
Installing : vmware-eam-6.7.0-11726888.x86_64.rpm
Installing : vmware-imagebuilder-6.7.0-11726888.x86_64.rpm
Installing : VMware-mbcs-6.7.0-11726888.x86_64.rpm
Installing : VMware-sps-6.7.0-11726888.x86_64.rpm
Installing : ipxe-1.0.0-1.5578189.vmw.i686.rpm
Installing : vmware-autodeploy-6.7.0-0.0.11727104.noarch.rpm
Installing : VMware-UpdateManager-6.7.0-10164201.x86_64.rpm
Installing : VMware-vcha-6.7.0-11726888.x86_64.rpm
Installing : vmware-cam-6.7.0.614-11338766.x86_64.rpm
Installing : VMware-vsan-dps-6.7.0-0.0.11397883.x86_64.rpm
Installing : VMware-vsan-health-6.7.0-11397901.x86_64.rpm
Installing : VMware-vsanmgmt-6.7.0-0.1.11397901.x86_64.rpm
Installing : vmware-vsm-6.7.0-11726888.x86_64.rpm
Installing : vsphere-client-6.7.0-11727122.noarch.rpm
Installing : VMware-perfcharts-6.7.0-11726888.x86_64.rpm
Installing : vsphere-ui-6.7.0.20000-11727124.noarch.rpm
2019-02-08T14:18:49.497Z: Initial RPM install done.
2019-02-08T14:18:49.910Z: Copying files from visl
2019-02-08T14:18:49.962Z: Doing initial configuration ...
** Vpxd VA Post-install script started
Setting up log, core symlinks
Symlinks already setup.
/usr/sbin/usermod.bk -a -G coredump root
soft/hard limit for # of processes for postgres user
soft/hard limit for # of open files for postgres user
Configuring apache
Configuring core dumps
Configuring semaphores and shared pages
Configuring TCP buffer size
Turning off memory overcommit. Controlling overcommit using ratio
Increase neighbour table size
Increase somaxconn
Enable gratuitous ARP
Decrease netfilter nf_conntrack_tcp_timeout_time_wait
Add heartbeat port to reserved
net.ipv6.conf.all.dad_transmits = 0
net.ipv6.conf.default.dad_transmits = 0
net.ipv6.conf.all.max_addresses = 1
net.ipv6.conf.default.max_addresses = 1
kernel.core_uses_pid = 0
kernel.core_pattern = /var/core/core.%e.%p
kernel.shmmni = 4096
kernel.sem = 800 50000 100 250
net.core.rmem_default = 8388608
net.core.rmem_max = 8388608
net.core.wmem_default = 8388608
net.core.wmem_max = 8388608
vm.overcommit_memory = 2
vm.overcommit_ratio = 99
net.ipv4.neigh.default.gc_thresh1 = 1024
net.ipv4.neigh.default.gc_thresh2 = 8192
net.ipv4.neigh.default.gc_thresh3 = 10240
net.ipv6.neigh.default.gc_thresh1 = 1024
net.ipv6.neigh.default.gc_thresh2 = 8192
net.ipv6.neigh.default.gc_thresh3 = 10240
net.core.somaxconn = 2048
net.ipv4.conf.all.arp_notify = 1
net.ipv4.tcp_early_retrans = 0
net.ipv4.tcp_ecn = 0
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 60
net.ipv4.ip_local_reserved_ports = 902
Enable performance logs
Note: Forwarding request to 'systemctl enable sysstat.service'.
Failed to execute operation: No such file or directory
Executing the runonce scripts
Blacklisting vpxd hartbeat port from the ones rpcbind uses
Configure ssh
/usr/sbin/usermod.bk -a -G shellaccess root
Add system logging to /sbin/ifup
Setting global java options
Fixing likewise configuration
SUCCESS
SUCCESS
Patch slow vami cd mounting
patching file vami_ovf_process
Hunk #1 succeeded at 308 with fuzz 1 (offset 59 lines).
Setting up subsequent boot script
2019-02-08T14:18:51.394Z: Upating deployment node type embedded
I ran through the installer .log file and found this :
2019-02-08T14:15:54.581034+00:00 photon-machine vami-lighttp[1440]: Starting vami-lighttpd:Extracting SSL certificate from VECS
2019-02-08T14:15:54.581623+00:00 photon-machine vami-lighttp[1440]: /opt/vmware/share/lighttpd/pre-start.sh: line 27: /usr/lib/vmware-vmafd/bin/vecs-cli: No such file or directory
2019-02-08T14:15:54.581904+00:00 photon-machine vami-lighttp[1440]: Failed to retrieve certificate from VECS
2019-02-08T14:15:54.597180+00:00 photon-machine systemd[1]: Reloading.
2019-02-08T14:15:54.597429+00:00 photon-machine vaos[863]: Created symlink from /etc/systemd/system/multi-user.target.wants/applmgmt.service to /usr/lib/systemd/system/applmgmt.service.
2019-02-08T14:15:54.630915+00:00 photon-machine vami-lighttp[1440]: 2019-02-08 14:15:54: (/build/mts/release/bora-9049398/studio/src/vami/apps/lighttpd/1.4.45/src/network.c.273) warning: please use server.us
e-ipv6 only for hostnames, not without server.bind / empty address; your config will break if the kernel default for IPV6_V6ONLY changes
2019-02-08T14:15:54.633392+00:00 photon-machine vami-lighttp[1440]: [ OK ]
2019-02-08T14:15:54.654669+00:00 photon-machine systemd[1]: Started LSB: Lightning fast webserver with light system requirements.
2019-02-08T14:15:54.694228+00:00 photon-machine systemd[1]: Stopping DCUI...
2019-02-08T14:15:54.696868+00:00 photon-machine systemd[1]: Stopped DCUI.
2019-02-08T14:15:54.709214+00:00 photon-machine systemd[1]: Started DCUI.
2019-02-08T14:15:54.714869+00:00 photon-machine vaos[863]: Starting service!
2019-02-08T14:15:54.748003+00:00 photon-machine systemd[1]: Started Appliance Management Service..
2019-02-08T14:15:54.753180+00:00 photon-machine vaos[863]: Created symlink from /etc/systemd/system/multi-user.target.wants/vmware-firewall.service to /usr/lib/systemd/system/vmware-firewall.service.
2019-02-08T14:15:54.753910+00:00 photon-machine systemd[1]: Reloading.
2019-02-08T14:15:54.786571+00:00 photon-machine applmgmt-systemd.launcher[1525]: kill: not enough arguments
2019-02-08T14:15:54.823188+00:00 photon-machine vaos[863]: Enable lighttpd in systemd
2019-02-08T14:15:54.824941+00:00 photon-machine vaos[863]: Reloading firewall...
2019-02-08T14:15:55.358981+00:00 photon-machine vaos[863]: Restart lighttpd
2019-02-08T14:15:55.404931+00:00 photon-machine systemd[1]: Stopping LSB: Lightning fast webserver with light system requirements...
2019-02-08T14:15:59.866331+00:00 photon-machine dcui[1509]: Traceback (most recent call last):
2019-02-08T14:15:59.866644+00:00 photon-machine dcui[1509]: File "/usr/lib/applmgmt/base/bin/vherdrunner", line 8, in <module>
2019-02-08T14:15:59.866819+00:00 photon-machine dcui[1509]: vherdrunner.start(vherdrunner.directories)
2019-02-08T14:15:59.866985+00:00 photon-machine dcui[1509]: File "/usr/lib/applmgmt/base/bin/vherdrunner.py", line 129, in start
2019-02-08T14:15:59.867149+00:00 photon-machine dcui[1509]: exec(code, childGlobals)
2019-02-08T14:15:59.867311+00:00 photon-machine dcui[1509]: File "/usr/lib/applmgmt/dcui/dcui.py", line 8, in <module>
2019-02-08T14:15:59.867476+00:00 photon-machine dcui[1509]: import vmkctl
2019-02-08T14:15:59.867637+00:00 photon-machine dcui[1509]: File "/usr/lib/applmgmt/dcui/vmkctl.py", line 16, in <module>
2019-02-08T14:15:59.867798+00:00 photon-machine dcui[1509]: from util import RunCommand, ConfigureIPv6State
2019-02-08T14:15:59.867960+00:00 photon-machine dcui[1509]: File "/usr/lib/applmgmt/dcui/util.py", line 21, in <module>
2019-02-08T14:15:59.868183+00:00 photon-machine dcui[1509]: from identity.vmkeystore import VmKeyStore
2019-02-08T14:15:59.868361+00:00 photon-machine dcui[1509]: ImportError: No module named 'identity'
2019-02-08T14:15:59.886970+00:00 photon-machine systemd[1]: getty@tty2.service: Main process exited, code=exited, status=1/FAILURE
2019-02-08T14:15:59.887283+00:00 photon-machine systemd[1]: getty@tty2.service: Unit entered failed state.
2019-02-08T14:15:59.887494+00:00 photon-machine systemd[1]: getty@tty2.service: Failed with result 'exit-code'.
2019-02-08T14:16:00.461398+00:00 photon-machine vami-lighttp[1582]: Stopping vami-lighttpd:
2019-02-08T14:16:00.464016+00:00 photon-machine systemd[1]: Stopped LSB: Lightning fast webserver with light system requirements.
2019-02-08T14:16:00.482210+00:00 photon-machine systemd[1]: Starting LSB: Lightning fast webserver with light system requirements...
2019-02-08T14:16:00.510357+00:00 photon-machine vami-lighttp[1606]: Starting vami-lighttpd:Extracting SSL certificate from VECS
2019-02-08T14:16:00.510831+00:00 photon-machine vami-lighttp[1606]: /opt/vmware/share/lighttpd/pre-start.sh: line 27: /usr/lib/vmware-vmafd/bin/vecs-cli: No such file or directory
2019-02-08T14:16:00.511001+00:00 photon-machine vami-lighttp[1606]: Failed to retrieve certificate from VECS
2019-02-08T14:16:00.525976+00:00 photon-machine vami-lighttp[1606]: 2019-02-08 14:16:00: (/build/mts/release/bora-9049398/studio/src/vami/apps/lighttpd/1.4.45/src/network.c.273) warning: please use server.us
e-ipv6 only for hostnames, not without server.bind / empty address; your config will break if the kernel default for IPV6_V6ONLY changes
2019-02-08T14:16:00.528281+00:00 photon-machine vami-lighttp[1606]: [ OK ]
************
2019-02-08T14:16:10.362005+00:00 photon-machine vaos[863]: 2019-02-08T14:16:10.360Z Getting value for install-parameter: appliance.net.dns.searchlist
2019-02-08T14:16:10.362345+00:00 photon-machine vaos[863]: 2019-02-08T14:16:10.360Z DNS search:
2019-02-08T14:16:10.362622+00:00 photon-machine vaos[863]: 2019-02-08T14:16:10.360Z Getting value for install-parameter: appliance.net.mode
2019-02-08T14:16:10.362830+00:00 photon-machine vaos[863]: 2019-02-08T14:16:10.360Z Found static mode. No need to wait for IP.
2019-02-08T14:16:10.363034+00:00 photon-machine vaos[863]: 2019-02-08T14:16:10.360Z Getting value for install-parameter: appliance.net.pnid
2019-02-08T14:16:10.363238+00:00 photon-machine vaos[863]: 2019-02-08T14:16:10.360Z appliance.net.pnid is set to 'vCSA.vdns.local'
2019-02-08T14:16:10.363435+00:00 photon-machine vaos[863]: 2019-02-08T14:16:10.361Z PNID 'vCSA.vdns.local' appears to be an FQDN; using it for the hostname
2019-02-08T14:16:10.363701+00:00 photon-machine vaos[863]: 2019-02-08T14:16:10.361Z Running command: ['/opt/vmware/share/vami/vami_set_hostname', 'vCSA.vdns.local']
2019-02-08T14:16:10.454916+00:00 photon-machine systemd-resolved[1906]: System hostname changed to 'vCSA'.
2019-02-08T14:16:10.463043+00:00 photon-machine root: vami_set_hostname, line 247: Host name has been set to vCSA.vdns.local
2019-02-08T14:16:10.472212+00:00 photon-machine root: vami_set_hostname, line 98: Inaccessible file: vami_access -rw /etc/mailname failed.
2019-02-08T14:16:10.476943+00:00 photon-machine vaos[863]: 2019-02-08T14:16:10.475Z Done running command
2019-02-08T14:16:10.477203+00:00 photon-machine vaos[863]: 2019-02-08T14:16:10.476Z Stdout: == set_ipv4 ==
2019-02-08T14:16:10.477390+00:00 photon-machine vaos[863]: DEFULT_INT: eth0
2019-02-08T14:16:10.477553+00:00 photon-machine vaos[863]: DEFAULT_IPV4: 192.168.10.12
2019-02-08T14:16:10.477715+00:00 photon-machine vaos[863]: HN: vCSA
2019-02-08T14:16:10.477873+00:00 photon-machine vaos[863]: DN: vdns.local
2019-02-08T14:16:10.478033+00:00 photon-machine vaos[863]: ==============
2019-02-08T14:16:10.478189+00:00 photon-machine vaos[863]: == set_ipv6 ==
2019-02-08T14:16:10.478348+00:00 photon-machine vaos[863]: DEFULT_INT: eth0
2019-02-08T14:16:10.478502+00:00 photon-machine vaos[863]: DEFAULT_IPV6:
2019-02-08T14:16:10.478674+00:00 photon-machine vaos[863]: HN: vCSA
2019-02-08T14:16:10.479724+00:00 photon-machine vaos[863]: DN: vdns.local
2019-02-08T14:16:10.479896+00:00 photon-machine vaos[863]: ==============
2019-02-08T14:16:10.480038+00:00 photon-machine vaos[863]: Host name has been set to vCSA.vdns.local
2019-02-08T14:16:10.480200+00:00 photon-machine vaos[863]: 2019-02-08T14:16:10.476Z Getting value for install-parameter: appliance.ntp.servers
2019-02-08T14:16:10.480351+00:00 photon-machine vaos[863]: 2019-02-08T14:16:10.476Z Getting value for install-parameter: appliance.time.tools-sync
2019-02-08T14:16:10.481095+00:00 photon-machine vaos[863]: 2019-02-08T14:16:10.476Z Running command: ['/usr/bin/systemctl', 'stop', 'ntpd']
2019-02-08T14:16:10.481279+00:00 photon-machine systemd[1]: Stopping Network Time Service...
2019-02-08T14:16:10.481442+00:00 photon-machine ntpd[841]: ntpd exiting on signal 15 (Terminated)
2019-02-08T14:16:10.481589+00:00 photon-machine systemd[1]: Stopped Network Time Service.
2019-02-08T14:16:10.483707+00:00 photon-machine vaos[863]: 2019-02-08T14:16:10.482Z Done running command
2019-02-08T14:16:10.483996+00:00 photon-machine vaos[863]: 2019-02-08T14:16:10.482Z Running command: ['/usr/bin/systemctl', 'disable', 'ntpd']
2019-02-08T14:16:10.487931+00:00 photon-machine systemd[1]: Reloading.
Any thoughts?
Yes. The hostname "photon-machine" is a clear sign that the deployment configuration was not applied correctly, and it indicates that there could be a problem with the DNS configuration.
See: DNS Requirements for the vCenter Server Appliance and Platform Services Controller Appliance
So, did you specify an FQDN for the VCSA during deployment configuration? Have you set this hostname in the DNS zone, and does the PTR record for the VCSA IP address also exist? Is this FQDN resolvable from your client machine? Are the DNS servers that can resolve this FQDN reachable from the network where you want to deploy the VCSA?
I think a screenshot of step 8 in stage 1 (network configuration) would be helpful to determine if all settings are correct.
Please double-check/clarify your setup.
According to your initial post you configured "vCsa.vdns.local" with an IP 192.168.30.11/24.
However, your log shows 192.168.10.12/24 !?
Please run
nslookup vCsa.vdns.local
nslookup 192.168.30.11
and check whether both resolve as expected.
André
Apologies,
The actual vCSA IP is 192.168.10.12, the DNS server is 192.168.10.10, the 192.168.30.11 here is a typographical error.
I tested the nslookup from a Windows client, and the AD DNS server resolves both forward and reverse lookups.
C:\Users\testPC>nslookup vcsa.vdns.local
Server: ad.vdns.local
Address: 192.168.10.10
Name: vcsa.vdns.local
Address: 192.168.10.12
C:\Users\testPC>nslookup 192.168.10.12
Server: ad.vdns.local
Address: 192.168.10.10
Name: vCSA.vdns.local
Address: 192.168.10.12
However, even the client can't access the vCSA from its browser.
Thank You
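Since the Windows client resolves the name but the browser still can't connect, it may also be worth checking name resolution from the appliance itself. A minimal sketch, assuming the vCSA should be using the AD DNS server 192.168.10.10 (type `shell` at the `Command>` prompt to get a BASH shell; on some builds you first need `shell.set --enabled true`):

```shell
# The appliance's resolver configuration should point at the AD DNS server:
grep nameserver /etc/resolv.conf        # expect: nameserver 192.168.10.10

# Forward and reverse lookups, queried explicitly against the AD server:
nslookup vcsa.vdns.local 192.168.10.10
nslookup 192.168.10.12 192.168.10.10
```

If the appliance cannot resolve its own FQDN, the vpxd and vsphere-ui services will typically fail to start even when the Windows client resolves it fine.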
I have a question.
I'm trying to nslookup from the host machine (my laptop) and the name is not resolved, because its DNS server is 8.8.8.8. I had tested an earlier setup (not vCSA) that did resolve from my host machine. My understanding is that the DNS queries from my browser are being resolved by Google and NOT by the AD DNS server, as they are supposed to be.
On a similar note, even the Windows client can't access the vCSA from its own browser.
Any thoughts?
Thank You
Maybe it's worth trying to deploy the vCSA from the domain controller to see whether this works!?
André
Thought of it and tested it as well, works fine.
However, the issue here is that I can't use my host machine to access the vCSA even if it is installed in the DC; I must use the browser from within the DC to access it.
What do the host machine's network settings look like?
Are you at least able to ping the vCSA from the host machine?
André
Yes, the host can ping and SSH into the vCSA; only the web UI can't be accessed through the host browser.
So now I can access the address 192.168.10.12:443, but when I click the HTML5 client link it does not find the IP.
This site can’t be reached
vcsa.dc.local’s server IP address could not be found.
What I have noted is that once I click HTML5, the browser nearly opens up the SSO sign-in page before saying the address cannot be found.
Where does this "vcsa.dc.local" come from? I thought the FQDN was vcsa.vdns.local?
And if you configure your vCenter with DNS and FQDNs, the corresponding DNS servers must be accessible by the clients. Especially if it is a .local zone that is not publicly resolved. If your laptop is using the Google DNS server, there will be problems. Or if vCenter cannot resolve these FQDN because the DNS servers for these entries are not reachable.
This is especially true during deployment, but also later during operation. Alternatively, you can deploy vCenter only with IP addresses instead of a resolvable FQDN and configure the ESXi hosts also with IP addresses instead of FQDNs. In this case DNS is not so important, but you will never be able to change the IP address of the vCenter in that case.
Nice, with this reply we now have three different names: vcsa.dc.local, vcsa.ad.local, and vCsa.vdns.local ;-)))
Anyway, what you may do on the host system is to add an entry to the hosts file, so that the name resolution works.
192.168.10.12 vcsa.<whatever-your-domain-is>.local
In Windows the hosts file can be found at C:\Windows\System32\drivers\etc. Make sure you add an empty line at the end of the file.
André
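For the lab, the workaround André describes would look like this, assuming the FQDN in use is vcsa.vdns.local (the sketch below writes to a temporary file for demonstration; on the real client the line goes into C:\Windows\System32\drivers\etc\hosts, edited as Administrator):

```shell
# The line to append to the client's hosts file: IP, whitespace, FQDN.
# Demo target is a temporary file; on Windows, append to the real hosts file.
echo "192.168.10.12 vcsa.vdns.local" >> /tmp/hosts.demo

# Quick check that the entry is present and well-formed:
grep -qE '^192\.168\.10\.12[[:space:]]+vcsa\.vdns\.local$' /tmp/hosts.demo \
  && echo "hosts entry OK"
```

Remember that the name in the hosts file must match the FQDN the vCSA redirects to exactly, otherwise the browser will still fail on the HTML5 client redirect.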
Anyway, what you may do on the host system is to add an entry to the hosts file, so that the name resolution works.
Do you recommend that for a production setup? :smileygrin:
One more thing: You need to set this hosts file entry on each client from which you want to access the vCenter.
OK, so the FQDN changes are because I was first working on a DC named ad.local (thus vcsa.ad.local), and then I was testing on a DC named dc.local (thus vcsa.dc.local); sorry for the confusion.
The Google DNS is obtained from DHCP; I have not set it manually. Plus, I am accessing ESXi via my host browser, so no issues there. Currently the issue is that I can access 192.168.10.12 via the browser, but when I click on the HTML5 link for SSO sign-in it says the address is NOT found.
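If the goal is for the host's browser to resolve the vCSA FQDN, the DHCP-assigned 8.8.8.8 would have to be overridden so that queries go to the AD DNS server instead. A configuration sketch for the Windows host (the adapter name "Ethernet" is an assumption; list the real names with `netsh interface show interface`; run from an elevated prompt):

```shell
:: Point the host's DNS at the AD DNS server (lab only -- this breaks
:: public name resolution unless 192.168.10.10 forwards external queries).
netsh interface ip set dns name="Ethernet" static 192.168.10.10

:: Flush the resolver cache afterwards:
ipconfig /flushdns
```

This is the lab-style alternative to the hosts file entry: every client that must reach the vCSA by name needs either this DNS setting or its own hosts entry.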
When setting up the vCSA I had used both the FQDN and the IP; I understand the FQDN was optional, but I used it nevertheless.
I tried a deployment with just an IP address, same problem: the host cannot access the vCSA when it is installed in Windows Server. I tried a deployment with the vCSA installed on ESXi with AD; did not work, same problem. I tried a deployment by importing the vCSA directly into Workstation, same problem: I can ping and SSH, but can't open it from the host browser.
I could use the hosts file method, but I understand your point about this approach in a production environment.