Ski473's Posts

I found the cause: the boot loader was executing the SUSE environment, which was 4.8, instead of the Photon one. By manually changing the boot to Photon, 8.1.1 successfully booted. Changing the boot loader to use Photon as the default via YaST2 corrected the issue, and the VM now reboots into Photon and 8.1.1. Perhaps this is a bug in the upgrade path?
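For anyone wanting to do the same fix by hand instead of through YaST2: the default entry is just GRUB's default menu entry, chosen by title or index. A small sketch (the grub.cfg snippet and entry titles here are made up, since the appliance's real entries will differ) showing how to spot which index the Photon entry holds:

```python
import re

# Hypothetical grub.cfg content; on the appliance the real file lives
# under /boot/grub2/ and the entry titles will differ.
grub_cfg = """
menuentry 'SLES 11 SP4' { linux /boot/vmlinuz }
menuentry 'Photon' { linux /boot/photon/vmlinuz }
"""

# List each menu entry with its index; GRUB_DEFAULT in /etc/default/grub
# (or the YaST2 boot loader screen) should point at the Photon one.
titles = re.findall(r"menuentry '([^']+)'", grub_cfg)
for index, title in enumerate(titles):
    print(index, title)
```

With the snippet above, the Photon entry sits at index 1, so GRUB_DEFAULT=1 (or the entry title itself) would make it the default.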
Really weird one. I upgraded from 4.6 to 4.7 to 4.8, then to 8.1.1. After the upgrade, 8.1.1 was displaying fine and I could see data and the marketplace. It was definitely running as 8.1.1; I remember looking at the version and seeing more options under admin. However, after a reboot the appliance displays 4.8. Now for the interesting part: when I try to do the upgrade again it says it's already complete. The Log Insight version says 4.8 on the console as well as the web page, so I'm a bit confused. Any ideas?
If your problem is similar to mine, I found a solution. Check out the following: Remove Machine from VRA 7.2. In essence, following the KB and downloading the script should help you out here.

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2144269
Success! I went back to revisit the KB again:

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2144269

and tried to see why the additional step of logging into Postgres and updating the table didn't actually run the way it should have. As a side note, the readme file for the KB doesn't include the semicolon ( ; ) at the end of the command line, which can throw you if you're new to updating things in Postgres.

This time I used the machine name from the expiry email instead of the actual VM name which was listed in vRA. So for each of the machines listed in the email I did the following:

SELECT * FROM cat_resource WHERE name='Windows 10-99832872';

Notice the status is PENDING_APPROVAL. Run the update command as per the KB (I ignored the tenant bit), and the status is updated to DELETED. Once I completed that on all machines, running the select again shows the six original machines, with request_id nulled out from my previous commands but the status now set to DELETED. This status must stop whatever background expiry process is trying to occur; once they were changed, the emails stopped. Yippee!

So perhaps the whole exercise of nulling the ID and trying to delete the references was a bit pointless; following the KB (with a bit more info) looks to solve the issue. Thanks everybody for your input. I hope this post helps other people with similar issues in the future.
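For anyone following along, the fix boils down to a select-then-update on cat_resource. Here is a minimal sketch using SQLite in place of the appliance's vPostgres; the table is a stand-in with just the columns mentioned in these posts, not the real vRA schema:

```python
import sqlite3

# Toy stand-in for cat_resource with only the columns discussed above.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE cat_resource (name TEXT, status TEXT, request_id TEXT)")
con.execute("INSERT INTO cat_resource VALUES ('Windows 10-99832872', 'PENDING_APPROVAL', NULL)")

# Step 1: confirm the stuck resource really is PENDING_APPROVAL first,
# as the KB says, before touching anything.
row = con.execute(
    "SELECT status FROM cat_resource WHERE name='Windows 10-99832872'"
).fetchone()
print(row[0])  # PENDING_APPROVAL

# Step 2: flip it to DELETED so the background expiry job stops retrying it.
con.execute(
    "UPDATE cat_resource SET status='DELETED' WHERE name='Windows 10-99832872'"
)
```

The key detail from the post applies here too: use the machine name from the expiry email, not the VM name shown in vRA.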
So this is how it looks when I try to remove it.

SELECT * FROM cat_resource WHERE name='Windows 10-66940198';

The ID then shows as 79a1b805-5ffc-459e-bbc1-feab66fe16b3, so I try to delete that:

vcac=# delete FROM cat_request where resource_id = '79a1b805-5ffc-459e-bbc1-feab66fe16b3';
ERROR:  update or delete on table "cat_request" violates foreign key constraint "cat_requestevent__request_id__cat_request__id__fkey" on table "cat_requestevent"
DETAIL:  Key (id)=(49411b21-2e55-48a4-aad3-4f11d2e99e67) is still referenced from table "cat_requestevent".

So I check that out and try to delete those:

vcac=# delete from cat_requestevent where request_id = '49411b21-2e55-48a4-aad3-4f11d2e99e67';
ERROR:  update or delete on table "cat_requestevent" violates foreign key constraint "cat_requestevent_details__requestevent_id__cat_requestevent__id" on table "cat_requestevent_details"
DETAIL:  Key (id)=(96b0b8d0-f774-4f88-a048-9541158922e2) is still referenced from table "cat_requestevent_details".

vcac=# select * from cat_requestevent_details where requestevent_id = '96b0b8d0-f774-4f88-a048-9541158922e2';

The row that comes back (the console output wraps badly, so I've trimmed it) has severity ERROR and this system message:

[Error code: 20145 ] - [Error Msg: Request initialization failed: Rejecting blueprint request [9bd303ba-a357-403b-803d-9a0e6fa0b460]. There are other active requests on the corresponding deployment.]

So I can see the error message here that corresponds to it. Let's delete it:

vcac=# delete from cat_requestevent_details where requestevent_id = '96b0b8d0-f774-4f88-a048-9541158922e2';
DELETE 1

OK, let's go back and delete the others:

vcac=# delete from cat_requestevent where request_id = '49411b21-2e55-48a4-aad3-4f11d2e99e67';
DELETE 2
vcac=# delete FROM cat_request where resource_id = '79a1b805-5ffc-459e-bbc1-feab66fe16b3';
ERROR:  update or delete on table "cat_request" violates foreign key constraint "cat_requestevent__request_id__cat_request__id__fkey" on table "cat_requestevent"
DETAIL:  Key (id)=(1b8a848e-430f-492d-9d77-705db4e4403d) is still referenced from table "cat_requestevent".
vcac=#

So now it references a completely different key. ARRRGGGG!!
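The chain of errors above is just foreign keys doing their job: each table references the one above it, so deletes have to run child-first, innermost table upward. A minimal sketch with SQLite standing in for vPostgres (a toy three-table schema wired the same way as cat_request -> cat_requestevent -> cat_requestevent_details, not the real vRA one):

```python
import sqlite3

# Toy schema mimicking the reference chain seen in the psql session.
con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")
con.execute("CREATE TABLE cat_request (id TEXT PRIMARY KEY)")
con.execute("""CREATE TABLE cat_requestevent (
    id TEXT PRIMARY KEY,
    request_id TEXT REFERENCES cat_request(id))""")
con.execute("""CREATE TABLE cat_requestevent_details (
    requestevent_id TEXT REFERENCES cat_requestevent(id))""")
con.execute("INSERT INTO cat_request VALUES ('req-1')")
con.execute("INSERT INTO cat_requestevent VALUES ('evt-1', 'req-1')")
con.execute("INSERT INTO cat_requestevent_details VALUES ('evt-1')")

# Deleting the parent first fails, just as in the psql session.
try:
    con.execute("DELETE FROM cat_request WHERE id = 'req-1'")
except sqlite3.IntegrityError as e:
    print("blocked:", e)

# Child-first order succeeds: details, then events, then the request.
con.execute("DELETE FROM cat_requestevent_details WHERE requestevent_id = 'evt-1'")
con.execute("DELETE FROM cat_requestevent WHERE request_id = 'req-1'")
con.execute("DELETE FROM cat_request WHERE id = 'req-1'")
```

It also explains the "different key each time" frustration: every remaining child row blocks the parent in turn, and Postgres only reports the first one it hits.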
Yeah, I've tried this as well; however, I keep getting circular references saying the key is in use in another table:

ERROR:  update or delete on table "cat_requestevent" violates foreign key constraint "cat_requestevent_details__requestevent_id__cat_requestevent__id" on table "cat_requestevent_details"
DETAIL:  Key (id)=(fe1e143d-fbe1-4c60-8502-56283c07eb6f) is still referenced from table "cat_requestevent_details".

I can dig further down and delete them from the lower tables, but when I attempt to remove from cat_request I seem to get a different key each time - very frustrating! I'll post a more thorough description later. The problem ones, I believe, have been nulled by the commands (and are still showing as PENDING_APPROVAL), but I can't delete them anymore.
The VMs don't show up anywhere anymore, but I still get the expire message!
Thanks for the reply, but I tried that already - the command initially ran and said it was successful but didn't actually clean anything up. I've since managed to remove the actual machines from the managed machines side via the script in the VMware KB; however, I'm still receiving emails saying the expire job has failed. I tried running it again for good measure but now just get "machine not found".
Found what I think is the same issue - no replies yet though. Re: vra 7.1 - Expired Blueprint - Request to expire a deployment and all constituent resources failed
I have the same issue - if I find out how to delete these references I'll let you know.
Hi, VMware has now updated the KB, and the script to remove the VMs is now present:

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2144269

I've run the script according to the instructions and the machines no longer show under managed machines, so they are cleaned up in that respect. The section on Postgres cleanup, however, didn't work, as nothing shows when I run the select command:

Run the following commands:
    su - postgres
    cd /opt/vmware/vpostgres/current/bin
    ./psql
In the psql console run:
    \c vcac
    SELECT * FROM cat_resource WHERE name='<machine name>' and tenant_id='<tenant name>';
(make sure we see the machine, then execute)
    update cat_resource set status = 'DELETED' where name = '<machine name>' and tenant_id='<tenant name>';

In any event, although it's cleared up from the managed machines side, I'm still receiving the emails about the expire request failure. Upon looking at the actual request I see the following in the request details:

Request initialization failed: Rejecting blueprint request [bb833a6b-63d5-43c0-8ecf-155fb788a46a]. There are other active requests on the corresponding deployment.

Request Information
    Resource Action: Expire
    Requested by: System User
    Request date: 1/3/17 11:20 AM (Australian Eastern Standard Time (Victoria))
    Description: Resource Lease has expired.
    Reason for request: Resource Lease has expired.

Resource Information
    Name: Windows 10-99832872
    Type: Deployment
    Created On: 12/7/16 7:50 PM (Australian Eastern Standard Time (Victoria))
    Lease: 1 day
    Archive Days: 5

Note the name of the resource isn't the VM name, which we have since cleaned up, but rather another name. Any ideas as to how to target this now? Thanks for all your input so far.
Cheers, that changed the status to Off. However, in managed machines when I click Destroy it still doesn't remove the VM. Note these machines never actually built, so there is no actual VM associated with them. Is there a way to remove them now? I'll leave some feedback on the KB as well - thanks.
Hi, I have a few VMs listed as missing under Managed Machines. Originally these particular VMs didn't build properly and were in progress; however, they never proceeded. I've tried a few things to get rid of them (even changing them in SQL to be missing) but constantly get these types of emails every few hours. In managed machines I can't directly destroy them either; the option is there but does nothing when clicked.

I should add that the request is a fail message with the following subject line, and the request ID keeps increasing:

Your Request #99 for "Expire" has failed

Request Information
    Resource Action: Expire
    Requested by: System User
    Request date: 12/8/16 7:53 PM (Australian Eastern Standard Time (Victoria))
    Description: Resource Lease has expired.
    Reason for request: Resource Lease has expired.

Resource Information
    Name: Windows 10-99832872
    Type: Deployment
    Created On: 12/7/16 7:50 PM (Australian Eastern Standard Time (Victoria))
    Lease: 1 day
    Archive Days: 5

I've had a look here and tried doing the CloudClient force unregister; it shows as successful but doesn't actually remove the machine:

https://kb.vmware.com/kb/2144269

The KB mentions another script to remove them, but it's not on the KB to download:

To use the stored procedure:
    Extract the contents of the attached KB2144269_RemoveVMFromVRA7.zip file.
    Follow the instructions located in the readme.

Any ideas to force remove them?
Yeah, I was probably just going to migrate the database it holds onto another machine, but thought I'd see if anyone else had come across this first. Thanks for your input though.
Hi, I have a VM which is fine when snapshotted; however, when the snapshot is consolidated, the VM hard powers down and won't start back up, with the error:

Reason: The parent virtual disk has been modified since the child was created

I can correct this by modifying the VMX file and changing the start disk back to the base rather than the delta. (I would have thought consolidation would complete this as part of the process, but it does look to fail in the logs, so I guess it never happens.)

Dec 14 05:27:01.436: vmx| SNAPSHOT: Consolidating from '/vmfs/volumes/48bc3694-28959c38-5d31-001a64c32812/VeeamBackup/deadVM(64)/deadVM-000002.vmdk' to '/vmfs/volumes/48bc3694-28959c38-5d31-001a64c32812/VeeamBackup/deadVM(64)/deadVM.vmdk'.
Dec 14 05:27:01.438: vmx| DISKLIB-VMFS : "/vmfs/volumes/48bc3694-28959c38-5d31-001a64c32812/VeeamBackup/deadVM(64)/deadVM-flat.vmdk" : open successful (24) size = 53687091200, hd = 4015828. Type 3
Dec 14 05:27:01.438: vmx| DISKLIB-DSCPTR: Opened [0]: "deadVM-flat.vmdk" (0x18)
Dec 14 05:27:01.438: vmx| DISKLIB-LINK  : Opened '/vmfs/volumes/48bc3694-28959c38-5d31-001a64c32812/VeeamBackup/deadVM(64)/deadVM.vmdk' (0x18): vmfs, 53687091200 sectors / 26214400 Mb.
Dec 14 05:27:01.438: vmx| DISKLIB-LIB   : Opened "/vmfs/volumes/48bc3694-28959c38-5d31-001a64c32812/VeeamBackup/deadVM(64)/deadVM.vmdk" (flags 0x18). 89C4BB4
Dec 14 05:27:01.440: vmx| DISKLIB-VMFS : "/vmfs/volumes/48bc3694-28959c38-5d31-001a64c32812/VeeamBackup/deadVM(64)/deadVM-000002-delta.vmdk" : open successful (24) size = 52430848, hd = 4167381. Type 8
Dec 14 05:27:01.440: vmx| DISKLIB-DSCPTR: Opened [0]: "deadVM-000002-delta.vmdk" (0x18)
Dec 14 05:27:01.440: vmx| DISKLIB-LINK  : Opened '/vmfs/volumes/48bc3694-28959c38-5d31-001a64c32812/VeeamBackup/deadVM(64)/deadVM-000002.vmdk' (0x18): vmfsSparse, 53687091200 sectors / 26214400 Mb.
Dec 14 05:27:01.441: vmx| DISKLIB-VMFS_SPARSE :Can't create deltadisk node 3f96d5-deadVM-000002-delta.vmdk failed with error The operation completed successfully
Dec 14 05:27:01.441: vmx| DISKLIB-LIB   : Opened "/vmfs/volumes/48bc3694-28959c38-5d31-001a64c32812/VeeamBackup/deadVM(64)/deadVM-000002.vmdk" (flags 0x18). 88E2604
Dec 14 05:27:01.441: vmx| DISKLIB-LINK  : Attach: Content ID mismatch (fcf6ef35 != 0d5c7eaa).
Dec 14 05:27:01.441: vmx| DISKLIB-CHAIN : failed to attach
Dec 14 05:27:01.441: vmx| DISKLIB-LIB   : Failed to attach 88E2604 to 89C4BB4: Chain->attach failed.
Dec 14 05:27:01.441: vmx| DISKLIB-VMFS : "/vmfs/volumes/48bc3694-28959c38-5d31-001a64c32812/VeeamBackup/deadVM(64)/deadVM-000002-delta.vmdk" : closed.
Dec 14 05:27:01.441: vmx| SNAPSHOT: SnapshotCombineDisks failed: 5
Dec 14 05:27:01.441: vmx| DISKLIB-VMFS : "/vmfs/volumes/48bc3694-28959c38-5d31-001a64c32812/VeeamBackup/deadVM(64)/deadVM-flat.vmdk" : closed.
Dec 14 05:27:01.441: vmx| SNAPSHOT: SnapshotConsolidate failed 5
Dec 14 05:27:01.441: vmx| SNAPSHOT: Consolidate failed 5
Dec 14 05:27:01.446: vmx| VMXVmdb_LoadRawConfig: Loading raw config
Dec 14 05:27:01.450: vmx| Checkpoint_Unstun: vm stopped for 110291 us

I particularly like this line:

Can't create deltadisk node 3f96d5-deadVM-000002-delta.vmdk failed with error The operation completed successfully

This only appears to occur for this VM, and changing the VMX allows it to boot again. However, this presents a problem: I can't snapshot it, and hence can't actually get a backup via our Veeam backup process. On a side note, this VM has had issues with the Veeam backup process in the past; recently, when trying to replicate it, the VM powered off and I had to recreate the VMDK descriptor file. It did actually back up last night but then died, I gather due to the snapshot consolidate operation. Any ideas?
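The "Content ID mismatch (fcf6ef35 != 0d5c7eaa)" line is the telling one: a snapshot chain only attaches if the child descriptor's parentCID matches the parent descriptor's CID. A hedged sketch of checking that on descriptor text (the two descriptors below are synthetic, mimicking the mismatch in the log; real ones sit alongside the -flat/-delta extents and carry more fields):

```python
import re

def read_cid_fields(descriptor_text):
    """Pull the CID and parentCID fields out of a VMDK descriptor's text."""
    fields = {}
    for key in ("CID", "parentCID"):
        m = re.search(rf"^{key}=([0-9a-fA-F]+)", descriptor_text, re.MULTILINE)
        if m:
            fields[key] = m.group(1).lower()
    return fields

# Synthetic descriptors reproducing the mismatch from the log above.
parent = "# Disk DescriptorFile\nversion=1\nCID=fcf6ef35\nparentCID=ffffffff\n"
child  = "# Disk DescriptorFile\nversion=1\nCID=12345678\nparentCID=0d5c7eaa\n"

p, c = read_cid_fields(parent), read_cid_fields(child)
if c["parentCID"] != p["CID"]:
    print(f"Content ID mismatch: parent CID {p['CID']} != child parentCID {c['parentCID']}")
```

If the base really hasn't been written to since the snapshot, one documented repair is editing the child's parentCID to match the parent's current CID; but if the base has genuinely changed, the delta's contents no longer line up with it and that edit can corrupt data, so check first.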