
Hi, I am planning to install a DNS server on the base NanoPi T6 (Rockchip RK3588) running VMware ESXi + FreeBSD, but I see that the RK3399 is not supported, only the PINE64 Quartz64 Model-A (Rockchip RK3566). Maybe the information in the documentation is out of date? I know ESXi is very sensitive to hardware and I wouldn't want to make a mistake. Thanks if anyone can help me or wants to join cyber security projects. Email: 1149477(at)gmail.com Alexander
You mean like this?

@{N="VMDK Size";E={[math]::Round(($vm.extensiondata.layoutex.file|?{$_.name -contains $harddisk.filename.replace(".","-flat.")}).size/1GB)}},

Like I said earlier, this Get-VMGuestDisk doesn't always work. There are many prerequisites for this cmdlet and VMware Tools to return values.
For this line, I tried multiple ways to get it working like the other lines, unfortunately without success:

@{N="VMDK Size";E={($vm.extensiondata.layoutex.file|Where-Object{$_.name -contains $harddisk.filename.replace(".","-flat.")}).size/1GB}}

By the way, do you know why the details below are missing for some VMs? Is it related to the guest OS? VMware Tools?

GuestDiskPath GuestCapacityGB GuestFreeGB GuestDiskType
I'm no data recovery expert, but it may be worth a try to use gdisk (https://sourceforge.net/projects/gptfdisk/) to test the 1TB virtual disk. Maybe it can find and fix the issue. In any case, make sure that you take another snapshot prior to trying to fix things, so that you can always revert to the current state if required. André
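As background for why a tool like gdisk can diagnose a damaged disk at all: a GPT disk carries an 8-byte "EFI PART" signature at LBA 1 (plus a backup header at the last LBA), and gdisk compares the two to detect damage. The sketch below only illustrates that on-disk signature with a scratch file; the file name is invented for the demo and this is not the actual recovery procedure.

```shell
# Build a tiny scratch "disk" and stamp the GPT header signature at LBA 1
# (byte offset 512), then read it back -- the same marker gdisk looks for.
dd if=/dev/zero of=/tmp/gpt-demo.img bs=512 count=8 2>/dev/null
printf 'EFI PART' | dd of=/tmp/gpt-demo.img bs=1 seek=512 conv=notrunc 2>/dev/null

# Read the 8 signature bytes back from LBA 1
sig=$(dd if=/tmp/gpt-demo.img bs=1 skip=512 count=8 2>/dev/null)
echo "GPT signature at LBA 1: $sig"
```

On a real disk, a missing or corrupt primary signature with an intact backup is exactly the situation gdisk's recovery menu is designed to repair.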
Oh - well that would help!!  Let's see if this attachment works for the logs.
This thread pops up in the search results for VCSA 8 certificate errors, so I thought I would add my experience for future reference in case it helps point others to their solution.

I was trying to upgrade 7.0.3.01700 (22357613) to 8.0.2 (22617221), but it was failing with weak signature algorithm errors. I know about the main support article: https://kb.vmware.com/s/article/89424. However, it didn't address all the issues and potential troubleshooting steps. I would suggest testing your certificates with the vsphere8_upgrade_certificate_checks.py Python script at the bottom of that article, since you can make changes and re-test quickly without going through the upgrade process.

2023-11-08 10:24:58.823Z ERROR #################### Errors Found ####################
2023-11-08 10:24:58.823Z ERROR
2023-11-08 10:24:58.823Z ERROR Support for certificates with weak signature algorithms has been removed in vSphere 8.0. Weak signature algorithm certificates must be replaced before upgrade. Refer to the vSphere release notes and VMware KB 89424 for more details. Correct the following 2 issues before proceeding with upgrade.
2023-11-08 10:24:58.823Z ERROR
2023-11-08 10:24:58.823Z ERROR 1. The certificate with subject '/C=GB/ST=Greater Manchester/L=Salford/O=Comodo CA Limited/CN=AAA Certificate Services' in VECS store MACHINE_SSL_CERT has weak signature algorithm sha1WithRSAEncryption. The certificate thumbprint is D1:EB:23:A4:6D:17:D6:8F:D9:25:64:C2:F1:F1:60:17:64:D8:E3:49. The certificate Subject Key Identifier is A0:11:0A:23:3E:96:F1:07:EC:E2:AF:29:EF:82:A5:7F:D0:30:A4:B4.
2023-11-08 10:24:58.823Z ERROR
2023-11-08 10:24:58.823Z ERROR 2. The certificate with subject '/C=GB/ST=Greater Manchester/L=Salford/O=Comodo CA Limited/CN=AAA Certificate Services' in VECS store TRUSTED_ROOTS has weak signature algorithm sha1WithRSAEncryption. The certificate thumbprint is D1:EB:23:A4:6D:17:D6:8F:D9:25:64:C2:F1:F1:60:17:64:D8:E3:49.
The certificate Subject Key Identifier is A0:11:0A:23:3E:96:F1:07:EC:E2:AF:29:EF:82:A5:7F:D0:30:A4:B4. Caution: Verify that any certificates signed by the problematic certificate are not in use by vCenter Server.
2023-11-08 10:24:58.823Z ERROR
2023-11-08 10:24:58.823Z ERROR ######################################################

Our leaf certificate was issued by "InCommon ECC Server CA" (https://crt.sh/?id=12722102), which was issued by "USERTrust ECC Certification Authority" (https://crt.sh/?id=1282303296), which was issued by "AAA Certificate Services" (https://crt.sh/?id=331986). The last one is the problem, because its signature algorithm is sha1WithRSAEncryption. The "USERTrust ECC Certification Authority" is also a problem, because it's issued by the bad root.

[*] Store : TRUSTED_ROOTS
Alias: d1eb23a46d17d68fd92564c2f1f1601764d8e349
Signature Algorithm: sha1WithRSAEncryption
Issuer: C=GB, ST=Greater Manchester, L=Salford, O=Comodo CA Limited, CN=AAA Certificate Services
Subject: C=GB, ST=Greater Manchester, L=Salford, O=Comodo CA Limited, CN=AAA Certificate Services
Subject Key Identifier: A0:11:0A:23:3E:96:F1:07:EC:E2:AF:29:EF:82:A5:7F:D0:30:A4:B4

[*] Store : TRUSTED_ROOTS
Alias: ca7788c32da1e4b7863a4fb57d00b55ddacbc7f9
Signature Algorithm: sha384WithRSAEncryption
Issuer: C=GB, ST=Greater Manchester, L=Salford, O=Comodo CA Limited, CN=AAA Certificate Services
Subject: C=US, ST=New Jersey, L=Jersey City, O=The USERTRUST Network, CN=USERTrust ECC Certification Authority
Subject Key Identifier: 3A:E1:09:86:D4:CF:19:C2:96:76:74:49:76:DC:E0:35:C6:63:63:9A

Based on @BrianCunnie's reply and website, I knew I needed to remove not only the root certificate, but also remove & replace the "USERTrust ECC Certification Authority" at the next level down with its newer self-signed version (https://crt.sh/?id=2841410) that expires in 2038.

At that point, I used the common commands to list, unpublish, and publish.
/usr/lib/vmware-vmafd/bin/dir-cli trustedcert list
/usr/lib/vmware-vmafd/bin/vecs-cli entry list --store TRUSTED_ROOTS --text

# This is the AAA Certificate Services root
/usr/lib/vmware-vmafd/bin/dir-cli trustedcert get --id A0110A233E96F107ECE2AF29EF82A57FD030A4B4 --outcert /certs/A0110A233E96F107ECE2AF29EF82A57FD030A4B4.pem
/usr/lib/vmware-vmafd/bin/dir-cli trustedcert unpublish --cert /certs/A0110A233E96F107ECE2AF29EF82A57FD030A4B4.pem

# This is the USERTrust ECC Certification Authority issued by AAA Certificate Services
/usr/lib/vmware-vmafd/bin/dir-cli trustedcert get --id 3AE10986D4CF19C29676744976DCE035C663639A --outcert /certs/3AE10986D4CF19C29676744976DCE035C663639A.pem
/usr/lib/vmware-vmafd/bin/dir-cli trustedcert unpublish --cert /certs/3AE10986D4CF19C29676744976DCE035C663639A.pem

I then uploaded the new self-signed "USERTrust ECC Certification Authority" (https://crt.sh/?id=2841410) through the vSphere Certificate Manager GUI. I had to do that after the above, because it has the same Subject Key Identifier as the other version; otherwise vSphere would complain that it was already in the store.

At this point, I was still having problems. The VCSA 8 certificate check was still failing. Hmmmm??? I started looking and remembered about /etc/vmware-rhttpproxy/ssl/rui.crt and /etc/vmware-vpx/ssl/rui.crt. These files had the old intermediate+root chain in them, so I removed that (i.e., the old "-----BEGIN CERTIFICATE-----" sections), added the new certificate information to them, and restarted the services. I went back to the GUI and got an error: "Error occurred while fetching machine certificates: com.vmware.vcenter.certificate_management.vcenter.tls". This was solved with a full VCSA reboot; for some reason stopping and starting the services wouldn't fix it.

After the reboot, everything looks great. The correct root is there and no errors in VCSA. BUT!!!
The VCSA 8 certificate check still fails with: "The certificate with subject '/C=GB/ST=Greater Manchester/L=Salford/O=Comodo CA Limited/CN=AAA Certificate Services' in VECS store MACHINE_SSL_CERT has weak signature algorithm sha1WithRSAEncryption." WHY???!!!

I figured that somewhere the old root information was still in VCSA, but I had replaced everything. Not so fast. Whenever you upload a new leaf certificate, VMware tells us to append the full chain to the end of that certificate. So when the check says the problem is in MACHINE_SSL_CERT, this appended chain is what it's talking about. But this isn't mentioned anywhere in the notes, and you can't easily troubleshoot it; at least I couldn't. I thought the easiest fix would be to create a new file that contained the old/current leaf, but with the new root chain appended. But VCSA won't let you do that, because: "MACHINE_SSL_CERT certificate replacement failed. SerialNumber and Thumbprint not changed after replacement, certificates are same before and after." I understand the error, because the leaf is not changing. But the chain is changing. I kind of feel like I should be able to perform this action.

While reviewing https://kb.vmware.com/s/article/83276, I found the procedure for extracting the current certificate and private key from MACHINE_SSL_CERT. When I did that, I confirmed that the "__MACHINE_CERT" alias contained the WHOLE certificate chain (leaf, intermediates, root). So I created a new file that contained the old leaf, intermediate, and NEW root chain. I deleted and recreated "__MACHINE_CERT" and restarted the VCSA services. That finally fixed it! The upgrade certificate check script succeeds.
/usr/lib/vmware-vmafd/bin/vecs-cli entry getcert --store MACHINE_SSL_CERT --alias __MACHINE_CERT --output ~/entry__MACHINE_CERT-getcert.txt
/usr/lib/vmware-vmafd/bin/vecs-cli entry getkey --store MACHINE_SSL_CERT --alias __MACHINE_CERT --output ~/entry__MACHINE_CERT-getkey.txt

# Confirm the private key still matches the leaf certificate (the two hashes must be identical)
openssl pkey -in entry__MACHINE_CERT-getkey.txt -pubout -outform pem | sha256sum
openssl x509 -in leaf_MACHINE_CERT.pem -pubkey -noout -outform pem | sha256sum

I manually created my own leaf_chain_MACHINE_CERT.pem with the right certificates.

/usr/lib/vmware-vmafd/bin/vecs-cli entry delete --store MACHINE_SSL_CERT --alias __MACHINE_CERT
/usr/lib/vmware-vmafd/bin/vecs-cli entry create --store MACHINE_SSL_CERT --alias __MACHINE_CERT --cert leaf_chain_MACHINE_CERT.pem --key entry__MACHINE_CERT-getkey.txt

No more errors with the certificate checks.
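For anyone assembling their own leaf+chain file this way, two plain-openssl sanity checks can catch problems before the bundle goes anywhere near vecs-cli: (1) no certificate in the bundle should report a SHA-1 signature algorithm, and (2) the leaf must verify against the root you intend to publish. This is a sketch with throwaway certificates; the demo-root/demo-leaf names and /tmp paths are invented for illustration, not part of the original procedure.

```shell
# Throwaway root + leaf, standing in for the real chain and the
# MACHINE_SSL_CERT leaf.
openssl req -x509 -newkey rsa:2048 -nodes -days 2 -subj "/CN=demo-root" \
  -keyout /tmp/root.key -out /tmp/root.crt 2>/dev/null
openssl req -newkey rsa:2048 -nodes -subj "/CN=demo-leaf" \
  -keyout /tmp/leaf.key -out /tmp/leaf.csr 2>/dev/null
openssl x509 -req -in /tmp/leaf.csr -CA /tmp/root.crt -CAkey /tmp/root.key \
  -CAcreateserial -days 1 -out /tmp/leaf.crt 2>/dev/null

# Check 1: print each certificate's signature algorithm; anything showing
# sha1WithRSAEncryption would fail the vSphere 8 upgrade check.
openssl x509 -in /tmp/root.crt -noout -text | grep -m1 'Signature Algorithm'

# Check 2: assemble the bundle (leaf first, then the chain) and verify the
# leaf against the intended root before loading it into the store.
cat /tmp/leaf.crt /tmp/root.crt > /tmp/leaf_chain.pem
openssl verify -CAfile /tmp/root.crt /tmp/leaf.crt
```

Running check 1 against every "-----BEGIN CERTIFICATE-----" block in the assembled file would have flagged the stale AAA Certificate Services root hiding inside __MACHINE_CERT.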
Never seen that error before, I have to admit. Not sure what this "... from TES" means. I would suggest opening an SR.
Yes, that is why I added GB to the property name. Try it like this:

@{N='GuestCapacityGB';E={($guestHD.CapacityGB.foreach{[math]::Round($_)}) -join '|'}},
@{N='GuestFreeGB';E={($guestHD.FreeSpaceGB.foreach{[math]::Round($_)}) -join '|'}},
Appreciate it.  And just realized my snippet was missing a "!" in the if test...
Afaik there is no property indicating whether an object was created due to linked vCenters. That only comes into play when the object is created, I suspect. So yes, testing, like your snippet does, might be the best solution.
Yes, the Server parameter was unhelpful for separating roles in linked vCenters, because the one role is shared across the linkage. Creating a second one from the second server creates a second role with the same name but a unique ID, also shared across the linkage. Since I can't see any attribute that indicates whether a vCenter is linked or not, I think the only way is to check every time.
I can't see any logs. André
Did you already try the Server parameter, pointing to one specific vCenter, on the New-VIRole cmdlet?
Thanks LucD, it's working fine. For VMDK Size & Drive Size, they are in GB, right? I made a change in order to get the data without a ".", but unfortunately GuestCapacityGB and GuestFreeGB are empty:

@{N='GuestCapacityGB';E={([Math]::Round($guestHD.CapacityGB)/1GB,0) -join "`n"}},
@{N='GuestFreeGB';E={([Math]::Round($guestHD.FreeSpaceGB)/1GB,0) -join "`n"}},
Or is there nothing more to do than:

$TestForRole = Get-VIRole | Where-Object {$_.Name -eq $vInventory.SelectNodes($XpathRoles).Name}
if ($TestForRole) {
     #create role
} else {
     echo "Skipping $($vInventory.SelectNodes($XpathRoles).Name) because there was a duplicate in $vc"
}
Only the source and destination hosts need an Enterprise Plus license, not all hosts. We had 6.5 Enterprise (non-Plus) clusters as the source and 7.0 Enterprise Plus clusters as the destination. Fortunately we had two spare 6.5 Enterprise Plus licenses, so we re-licensed two hosts in our 6.5 environment. Then, to cross-site vMotion, we first had to move VMs within the source site to one of those two hosts. This works well for a migration.
Understood. Thanks for your feedback. 
I think I've asked this question before, but can't seem to find if there was ever a good solution. The environment has ~40 vCenters; some are linked, some are not. I'm running a modified version of Alan and Luc's role/permission script from 2010 (yes, it's still useful today!). The problem is that when I log into a linked vCenter, a role gets created from each vCenter login, so I end up with duplicate roles. When you later go to set permissions using 'the role', it fails because there are two. So the larger question is whether there is any way to tell that you're logged into a linked vCenter, so that automation like this doesn't run twice. Or is the only real solution to do a duplication check before role creation or whatever else you're trying to do?
Hi LucD,

The script worked fine until I upgraded my VCSA from 7.x to 8.0 U2a. I'm getting the below error while executing the command:

Exception calling "CreateObjectScheduledTask" with "2" argument(s): "A general system error occurred: Error while getting Persistable Token for Session User from TES"
At line:65 char:5
+     $scheduledTaskManager.CreateObjectScheduledTask($vm.ExtensionData ...
+     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [], MethodInvocationException

Please help with this, as we get a lot of snapshot tickets daily.
Deleting a VMFS-L partition was resolved quickest by booting into a RHEL <any-REL> boot USB flash device (<5m).

Use Rufus to create the boot image from the RHEL ISO on a bootable USB flash drive.
Select the recovery shell, then:
fdisk /dev/sdc
d to delete the partition
w to write/save the changes

This worked well when the ESXi tools were failing (ESXi Host Client, partedUtil, esxi fdisk, esxi dd, vmkfstools and objtool).
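The same create/delete cycle can be rehearsed safely on a scratch image before touching a real device. This sketch uses sfdisk (scriptable, from util-linux) instead of interactive fdisk, and the image path is invented for the demo; /dev/sdc from the steps above would take its place on the real system.

```shell
# Scratch-image rehearsal of the fdisk "d" + "w" steps, using sfdisk.
truncate -s 4M /tmp/esxi-demo.img

# Create a label with one Linux partition, then delete partition 1
# (the equivalent of fdisk's "d" followed by "w").
echo 'type=83' | sfdisk -q /tmp/esxi-demo.img 2>/dev/null
sfdisk -q --delete /tmp/esxi-demo.img 1 2>/dev/null

# Dump the table: no partition lines should remain.
parts=$(sfdisk -d /tmp/esxi-demo.img 2>/dev/null | grep -c '^/tmp' || true)
echo "partitions remaining: $parts"
```

Once the dry run behaves as expected, the identical delete against the real disk node is a one-liner, which is handy when you only have a recovery shell.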