rwk1982's Posts

Our vIDM appliance is now at 3.3.4; it was updated several times and still has the disk layout from the initial 3.3.2 installation. The pre-check for 3.3.5 now fails, complaining: "Disk space on /opt/vmware/horizon should be at least 7 GB for upgrade bundle upload. Current available space on /opt/vmware/horizon is 2 GB in node." So I decided to extend the disks and partitions to match the 3.3.5 vIDM disk settings. The steps are quite simple and can be done online:

Extend Hard disk 1 to 60 GB and Hard disks 2 and 3 to 20 GB in vCenter (Hard disk 4 can be ignored). After that is a good time to take a snapshot of the VM.

Log in to your appliance as "root" and let Linux rescan the disks for the size changes:

  echo 1 > /sys/class/block/sda/device/rescan
  echo 1 > /sys/class/block/sdb/device/rescan
  echo 1 > /sys/class/block/sdc/device/rescan

Check with dmesg that the new disk sizes were recognized:

  dmesg -T | grep "detected capacity change"

Now you can extend the root partition with:

  cfdisk /dev/sda

Select /dev/sda4, choose [ Resize ], type 12G (12G is what the 3.3.5 appliance uses), then [ Write ], confirm with "yes", and quit cfdisk. Now you must resize the file system with "resize2fs" to use the new space:

  [before resize2fs]
  df -h | grep "sda4"
  /dev/sda4       8.8G  3.9G  4.5G  47% /

  [run resize2fs]
  resize2fs /dev/sda4

  [after resize2fs]
  df -h | grep "sda4"
  /dev/sda4        12G  3.9G  7.3G  35% /

Next is to resize the two data partitions, which are LVM volumes, so the steps are different in this case. Use "pvresize" to let LVM know that the physical disk size has changed:

  [before pvresize]
  pvdisplay /dev/sdc /dev/sdb
  --- Physical volume ---
  PV Name               /dev/sdc
  VG Name               tomcat_vg
  PV Size               10.00 GiB / not usable 4.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              2559
  Free PE               0
  Allocated PE          2559
  PV UUID               546IZ8-0PYE-AAKg-f2WP-VGeG-L9ep-OTasC3

  --- Physical volume ---
  PV Name               /dev/sdb
  VG Name               db_vg
  PV Size               10.00 GiB / not usable 4.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              2559
  Free PE               0
  Allocated PE          2559
  PV UUID               ue30Dt-bPWI-JlYG-DktR-5mGT-AZ66-4KEK03

  [run pvresize]
  pvresize /dev/sdb /dev/sdc

  [after pvresize]
  pvdisplay /dev/sdc /dev/sdb
  --- Physical volume ---
  PV Name               /dev/sdc
  VG Name               tomcat_vg
  PV Size               <20.00 GiB / not usable 0
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              5119
  Free PE               2560
  Allocated PE          2559
  PV UUID               546IZ8-0PYE-AAKg-f2WP-VGeG-L9ep-OTasC3

  --- Physical volume ---
  PV Name               /dev/sdb
  VG Name               db_vg
  PV Size               <20.00 GiB / not usable 0
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              5119
  Free PE               2560
  Allocated PE          2559
  PV UUID               ue30Dt-bPWI-JlYG-DktR-5mGT-AZ66-4KEK03

Next step is to extend the logical volumes with "lvextend" to use the new space:

  [before lvextend]
  lvdisplay /dev/tomcat_vg/horizon /dev/db_vg/db
  --- Logical volume ---
  LV Path                /dev/tomcat_vg/horizon
  LV Name                horizon
  VG Name                tomcat_vg
  LV UUID                gw2MpR-PZgs-ehmA-Ebpc-vg52-97Nd-9t5YVW
  LV Write Access        read/write
  LV Creation host, time sc-a01-049-209, 2019-09-12 15:55:24 +0000
  LV Status              available
  # open                 1
  LV Size                <10.00 GiB
  Current LE             2559
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:1

  --- Logical volume ---
  LV Path                /dev/db_vg/db
  LV Name                db
  VG Name                db_vg
  LV UUID                WzGOjU-z4Qw-FI0u-aOVO-AWlB-AO1O-2fwSkN
  LV Write Access        read/write
  LV Creation host, time sc-a01-049-209, 2019-09-12 15:55:23 +0000
  LV Status              available
  # open                 1
  LV Size                <10.00 GiB
  Current LE             2559
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:0

  [run lvextend]
  lvextend -l +100%FREE /dev/tomcat_vg/horizon
  lvextend -l +100%FREE /dev/db_vg/db

  [after lvextend]
  lvdisplay /dev/tomcat_vg/horizon /dev/db_vg/db
  --- Logical volume ---
  LV Path                /dev/tomcat_vg/horizon
  LV Name                horizon
  VG Name                tomcat_vg
  LV UUID                gw2MpR-PZgs-ehmA-Ebpc-vg52-97Nd-9t5YVW
  LV Write Access        read/write
  LV Creation host, time sc-a01-049-209, 2019-09-12 15:55:24 +0000
  LV Status              available
  # open                 1
  LV Size                <20.00 GiB
  Current LE             5119
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:1

  --- Logical volume ---
  LV Path                /dev/db_vg/db
  LV Name                db
  VG Name                db_vg
  LV UUID                WzGOjU-z4Qw-FI0u-aOVO-AWlB-AO1O-2fwSkN
  LV Write Access        read/write
  LV Creation host, time sc-a01-049-209, 2019-09-12 15:55:23 +0000
  LV Status              available
  # open                 1
  LV Size                <20.00 GiB
  Current LE             5119
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           254:0

And finally use "resize2fs" to extend the file systems on the two LVs:

  [before resize2fs]
  df -h | grep "mapper"
  /dev/mapper/db_vg-db            9.8G  1014M  8.3G  11% /db
  /dev/mapper/tomcat_vg-horizon   9.8G   7.1G  2.2G  78% /opt/vmware/horizon

  [run resize2fs]
  resize2fs /dev/tomcat_vg/horizon
  resize2fs /dev/db_vg/db

  [after resize2fs]
  df -h | grep "mapper"
  /dev/mapper/db_vg-db             20G  1019M   18G   6% /db
  /dev/mapper/tomcat_vg-horizon    20G   7.2G   12G  39% /opt/vmware/horizon

That's all... though I assume this is not supported by VMware.
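Before re-running the 3.3.5 pre-check you can confirm the free space from the shell. This is only a rough sketch: check_space is a hypothetical helper name (not part of the appliance), the 7 GB threshold comes from the pre-check message above, and it assumes GNU df with --output support:

```shell
#!/bin/sh
# check_space MOUNTPOINT REQUIRED_GIB
# Succeeds if MOUNTPOINT has at least REQUIRED_GIB gibibytes available.
# (Hypothetical helper, assumes GNU df with the --output option.)
check_space() {
  avail=$(df -BG --output=avail "$1" 2>/dev/null | tail -n 1 | tr -dc '0-9')
  [ "${avail:-0}" -ge "$2" ]
}

# The 3.3.5 pre-check wants at least 7 GB free on /opt/vmware/horizon:
if check_space /opt/vmware/horizon 7; then
  echo "enough space for the upgrade bundle upload"
else
  echo "not enough space - extend the volume first"
fi
```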
This should be possible with the latest release: Deploying and Managing Virtual Machines in vSphere with Tanzu 
/bin/esxcfg-info -w -F xml | grep -E 'bios-version|bios-releasedate|bmc-version'

or you could parse the XML from: https://YourESXiHost/cgi-bin/esxcfg-info.cgi?xml
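If you go the cgi-bin route, a sed one-liner can turn name/value pairs into something scriptable. Note: the XML snippet below is made up for illustration (I have not verified the exact element names esxcfg-info emits), so adjust the pattern to the real output:

```shell
#!/bin/sh
# Hypothetical esxcfg-info-style XML; the real element/attribute names may
# differ, so treat this purely as a pattern-matching example.
xml='<value name="bios-version">P89 v2.76</value>
<value name="bmc-version">2.61</value>'

# Turn  <value name="X">Y</value>  lines into  X=Y
printf '%s\n' "$xml" | sed -n 's/.*name="\([^"]*\)">\([^<]*\)<.*/\1=\2/p'
```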
You can remove the Plugin with the vCenter Extension Manager-> https://kb.vmware.com/s/article/1025360  
Hello! Properties are not sortable in JavaScript. I use this workaround (only tested in vRA 7.x):

  var sortme = ["t1@tier-1","t5@tier-5","t3@tier-3","t2@tier-2","t4@tier-4"];
  var val2return = [];
  for each (item in sortme.sort()) {
      var prop = new Properties();
      prop.put('label', item.split("@")[0]);
      prop.put('value', item.split("@")[1]);
      val2return.push(prop);
  }
  return val2return;
Maybe under "Deployments"? I have no 7.6 Installation to check...
Hello! Sure... Tab "Items" -> "Machines" -> Click on the VM -> Select "Reconfigure" on the right -> on the tab "Properties" you can add/edit/remove properties. Robert
Hello! For new Deployments you can set the Custom Property "Snapshot.Policy.Limit" on the "vSphere vCenter Machine" item in the Blueprint. For existing VMs, add the Property on the machine itself. This works on 7.4 and should also work on 7.6 -> https://docs.vmware.com/en/vRealize-Automation/7.6/com.vmware.vra.prepare.use.doc/GUID-491153BB-5B6A-4FFA-8632-443F5C024D0B.html Robert
Hello! You cannot change the type of the NIC in the GUI. There you can only delete it and add a new one with the correct type, but this will also generate a new MAC address for your VM. To change the NIC type while preserving the MAC address, I always use PowerCLI/PowerShell -> PowerShell Gallery | VMware.PowerCLI 12.0.0.15947286

Open PowerShell and install the PowerCLI modules:

  Install-Module -Name VMware.PowerCLI

Now you can connect to your vCenter with:

  Connect-VIServer -Server yourvcenter.fqdn

And change the NIC type of your VM (the VM must be powered off) - in your case:

  Get-VM -Name SDWAN_vESXi | Get-NetworkAdapter | Set-NetworkAdapter -Type Vmxnet3

Hope this helps a little bit
You can download the ISO from here: https://my.vmware.com/group/vmware/patch Just make sure you get the *-all-* version: VMware-VCSA-all-6.7.0-15808844.iso
Maybe a "Re-Trust With Identity Manager" on your vRA Deployment could fix your issue?
We also get the same "Collection Failed" after the update to 8.1. Can you check the log /storage/var/loginsight/plugins/vsphere/li-vsphere.log for "Error running vSphere WCP collection", like:

  [2020-02-30 25:00:00.00+0000] ["pool-10-thread-1"/127.0.0.1 ERROR] [com.vmware.loginsight.scheduled.ScheduledPluginService] [Error running vSphere WCP collection]
  java.lang.Exception: vCenter API your.vcenter.fqdn is not available, response code: 403, message: {"type":"com.vmware.vapi.std.errors.unauthorized","value":{"messages":[{"args":[],"default_message":"Unable to authorize user","id":"vapi.security.authorization.invalid"}]}}
      at com.vmware.loginsight.scheduled.VSphereWCPConnector.validateVCenterVersion(VSphereWCPConnector.java:143)
      at com.vmware.loginsight.scheduled.VSphereWCPConnector.fetchEvents(VSphereWCPConnector.java:163)
      at com.vmware.loginsight.scheduled.ScheduledPluginService$ScheduledPluginServiceImpl.fetchEventsFromWCP(ScheduledPluginService.java:619)
      at com.vmware.loginsight.scheduled.ScheduledPluginService$ScheduledPluginServiceImpl.lambda$executeVsphereCollection$2(ScheduledPluginService.java:517)
      at java.util.concurrent.FutureTask.run(FutureTask.java:266)
      at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
      at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
      at java.lang.Thread.run(Thread.java:748)

The WCP collection gathers data from Kubernetes clusters and is enabled by default. It can be disabled by changing the parameter "wcp-collection-enabled" from

  <config>
  ...
    <vsphere>
      <wcp-collection-enabled value="true" />
  ...
  </config>

to

  <config>
  ...
    <vsphere>
      <wcp-collection-enabled value="false" />
  ...
  </config>

in the internal configuration options of vRLI: https://kb.vmware.com/s/article/2123058 We set the parameter yesterday and got no error mail today.
Hello! If you get this error, there is a chance that you use IPs from the 172.16/12 subnet (172.16.0.0 - 172.31.255.255) in your network. In our case the vRA appliance has a 172.17.16.x address. The vCO docker image has an unused docker0 interface with the IP 172.17.0.1/16, so it breaks routing and the container cannot reach anything between 172.17.0.1 and 172.17.255.254. The workaround is to remove the interface from the image. Connect to the vRA appliance with SSH and run:

cat << EOF > Dockerfile
FROM vco-polyglot-runner_private:latest
RUN mkdir -p /etc/docker
RUN printf '{\n "iptables": false,\n "bridge": "none"\n}\n' > /etc/docker/daemon.json
EOF
docker build -t vco-polyglot-runner_private:latest .
/opt/scripts/backup_docker_images.sh

After that you can hit the "Retry" button in vRLCM and the deployment should finish. The same issue exists in the vRO standalone appliance; to apply the workaround there, run:

/opt/scripts/deploy.sh --onlyClean
cat << EOF > Dockerfile
FROM vco-polyglot-runner_private:latest
RUN mkdir -p /etc/docker
RUN printf '{\n "iptables": false,\n "bridge": "none"\n}\n' > /etc/docker/daemon.json
EOF
docker build -t vco-polyglot-runner_private:latest .
/opt/scripts/backup_docker_images.sh
/opt/scripts/deploy.sh

An official KB should be released soon. @VMware GSS: Thanks for the fix
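If you want to double-check the printf escaping before baking it into the image, you can run the same printf on its own; it should emit exactly the daemon.json content that disables docker's iptables handling and the default bridge:

```shell
#!/bin/sh
# Same printf as in the Dockerfile RUN line; running it locally shows the
# daemon.json content that ends up in /etc/docker inside the image.
printf '{\n "iptables": false,\n "bridge": "none"\n}\n'
# prints:
# {
#  "iptables": false,
#  "bridge": "none"
# }
```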
You can try:
- Update the iLO to the latest version (always a good idea)
- NAND-format the iLO
- Power off the server, boot it from the HPE Custom ESXi ISO, and do an "offline" update
The update takes a while, just wait. /dev/sdc1 is a mounted ISO as far as I know.
Found the solution... if the root disk (Hard disk 1) is smaller than 20 GB, the setup cannot create the partition table.
Hello! I just installed the Assessment Tool a second time and it reports no issues. But the Update from 7.5 to 8.0 fails...
Hello! Look at /var/log/vmware-imc/toolsDeployPkg.log for errors. Regards, Robert
Hello Jono! "virtualMachineAddOrUpdateProperties" is just an output parameter (type: Properties) of the workflow. Robert
Hello! We just migrated from vRO 7.4 to 7.6 but did it a bit differently than the documentation (Migrate an External vRealize Orchestrator Appliance 6.x and Later to vRealize Orchestrator 7.6):

- Rename the old appliance in vCenter
- Take a snapshot of the appliance (always a good idea)
- Deploy the new 7.6 appliance with the same name and IP settings, but do not power it on
- Stop the source Orchestrator services:
     service vco-server stop
     service vco-configurator stop
- Add the line "listen_addresses = '*'" to /var/vmware/vpostgres/current/pgdata/postgresql.conf
- Add the line "host all all 0.0.0.0/0 md5" to /var/vmware/vpostgres/current/pgdata/pg_hba.conf
- Add a new temporary IP to eth0 (on the same network/VLAN). For example:
     ip addr add 192.168.0.222/24 dev eth0
- Close the SSH session and log in to the temporary IP
- Remove the old IP from eth0. For example:
     ip addr del 192.168.0.111/24 dev eth0
- Restart vPostgres:
     service vpostgres restart
- Start the newly deployed appliance
- Go to the "Migrate" tab in the VAMI of the new appliance and, instead of the host name of the source Orchestrator appliance, use the temporary IP
- Wait until the wizard has completed
- Shut down the old appliance and never power it on again
- Do the post-migration steps
- Done
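The two postgresql.conf/pg_hba.conf edits from the list above can be scripted. This is only a sketch: append_once is my own helper name, and PGDATA defaults to the appliance path but can be overridden; it also skips the append if the line is already present, so it is safe to re-run:

```shell
#!/bin/sh
# Sketch of the two vPostgres config edits from the migration steps.
# PGDATA defaults to the appliance path; override it when testing elsewhere.
PGDATA="${PGDATA:-/var/vmware/vpostgres/current/pgdata}"

# append_once FILE LINE - append LINE to FILE only if it is not already there
append_once() {
  grep -qxF "$2" "$1" 2>/dev/null || printf '%s\n' "$2" >> "$1"
}

if [ -d "$PGDATA" ]; then
  append_once "$PGDATA/postgresql.conf" "listen_addresses = '*'"
  append_once "$PGDATA/pg_hba.conf"     "host all all 0.0.0.0/0 md5"
else
  echo "PGDATA $PGDATA not found - run this on the appliance"
fi
```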