XIA Configuration provides several vCenter reports covering VMs, resources, and specifications such as RAM, CPU, disk, and OS. Find out more here: XIA Configuration - VMware Reporting Tool
Have you ensured that the IP address range configured for the service mesh does not overlap with the VMware Cloud on AWS management subnet CIDR block, or with any other IP range already in use for services in VMware Cloud on AWS?
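One quick way to sanity-check this before deploying the service mesh is to compare the ranges programmatically. A minimal Python sketch using the standard `ipaddress` module; all CIDR values below are purely illustrative, so substitute your own planned range and in-use ranges:

```python
import ipaddress

# Illustrative values only -- substitute your own planned and in-use ranges.
service_mesh_range = ipaddress.ip_network("10.2.32.0/20")
in_use_ranges = [
    ipaddress.ip_network("10.2.0.0/16"),    # e.g. SDDC management subnet CIDR
    ipaddress.ip_network("192.168.0.0/22"), # e.g. an existing compute segment
]

conflicts = [r for r in in_use_ranges if service_mesh_range.overlaps(r)]
if conflicts:
    print("Overlaps with:", ", ".join(map(str, conflicts)))
else:
    print("No overlap detected")
```

With these example values the sketch would flag the overlap with the hypothetical management CIDR, since 10.2.32.0/20 sits inside 10.2.0.0/16.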
@wilgee, No, the appliances in the cloud are different from the on-prem ones; you need the appliances on both sides. As I mentioned, you can take a backup of the on-prem appliance in case something happens, and even send the backup to the cloud; once on-prem is back, you can restore it there.
@wilgee, No, you do not need to replicate the VRM appliance, as you will always have one on each side. What you can do is take a backup of the on-prem appliance so you have a copy of the server.
Hi, I got this information for you; I hope it helps you solve your issue.

If your on-premises VRM appliance goes down and you do not have a replicated copy of it, you will not be able to use it to fail over your replicated servers. In that scenario, you would need a plan for recovering your VRM appliance or deploying a new one.

If you have replicated your SRM servers along with your other servers, you can use the replicated copy of SRM to initiate the failover process. The replicated SRM server can communicate with the replicated VMs in the VMware Cloud on AWS environment and start the failover.

However, it's important to note that SRM and VRM work together to facilitate disaster recovery, and both are essential components of a complete disaster recovery solution. It is therefore recommended to have a plan in place for replicating both the SRM and VRM appliances. In case of a disaster, you will then be able to use the replicated copies of both appliances to fail over your replicated servers. Thanks
Hi dear community members, we are having an issue with our federated setup in VMware Cloud services. An additional domain was added to the federated setup, but the wizard failed on step 3, and on second thought we would now like to revert it. The problem is that the domain now shows up as a linked domain even though the setup wizard was never completed. Non-federated accounts that were already defined (with that same domain name suffix) can no longer log in, because they keep being redirected to the federation. There seems to be no way to remove the settings, only to complete the federated setup. Any help would be greatly appreciated.
Hi, I am trying to deploy HCX and bring up the tunnel between the Interconnect VMs through Direct Connect. At the destination, my SDDC is a member of an SDDC Group; the vTGW has a peering with the TGW, which is connected to on-premises via DX (see the attached schema). I have modified my network profile so that my service mesh is deployed with internal IP addresses in VMC, but the tunnel is always down. Is there something I forgot? Thanks for your help!
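One thing worth checking in a setup like this is whether the uplink IPs in the HCX network profile actually fall inside prefixes that are routed over the vTGW/TGW/DX path; if they don't, the tunnel will never come up. A small Python sketch of that check, with all addresses and prefixes hypothetical:

```python
import ipaddress

# Hypothetical values -- replace with your HCX network profile uplink IPs
# and the prefixes your vTGW/TGW route tables advertise over Direct Connect.
uplink_ips = ["10.72.5.10", "10.72.5.11", "172.16.9.20"]
advertised_prefixes = [ipaddress.ip_network(p) for p in ["10.72.0.0/16"]]

for ip_str in uplink_ips:
    ip = ipaddress.ip_address(ip_str)
    routed = any(ip in prefix for prefix in advertised_prefixes)
    print(f"{ip}: {'routed over DX' if routed else 'NOT routed -- tunnel would stay down'}")
```

Beyond routing, it is also worth confirming that the UDP ports HCX uses for its tunnels (UDP 4500 in typical deployments) are allowed end-to-end on the gateway firewalls and any on-prem firewalls in the path.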
Hello, I accidentally reset my ESXi host. (Yeah, I know.) I reconfigured the network and so on, and after it was back online I noticed that one of my two storage devices was offline: one NVMe SSD is working fine, while one SATA SSD just shows up as a datastore with one partition and 0 bytes free. Any attempt to edit this datastore or device produces an error; I can't even delete it. After some digging I discovered that the SSD, a Samsung 870 EVO, shows a status of "FROZEN." Not only does the usual trick of power-cycling the drive fail to un-freeze it, it's also showing up as a RAID drive?! So standard tools to view the partition information will not work. I can see the old partition info and the files with a RAID reconstructor, and could probably recover them if I had more than the freeware version, but that won't get the drive out of its frozen state. Does anyone have advice on getting this back to a usable state? Thank you.
On the AWS portal, after completing the forms for the onboarding process to VMC, I do not receive the email that should give me access to create the SDDC. When I renew the request, I get an error message: "Onboarding task has been submitted for this account." I can't renew my request!
These two docs should help you:
https://bluexp.netapp.com/blog/aws-fsxo-blg-configure-hybrid-cloud-with-fsx-for-netapp-ontap-and-vmware-cloud-on-aws-sddc-using-vmware-hcx
https://docs.netapp.com/us-en/netapp-solutions/ehc/aws/aws-migrate-vmware-hcx.html#takeaways
SRM and vSphere Replication should be powered on at both sides. By design, vCenter, SRM, vSphere Replication, vROps, and the other management components are isolated from the production environment, in the management cluster.
Here's the question: we are replicating from on-prem to VMware Cloud on AWS using SRM and VRM. We don't replicate the VRM appliance. My question is: if our on-prem environment goes kaput, how will we be able to power on our replicated servers, given that VRM is on-prem and goes down with it? Should we also replicate VRM so we can power it up, or do we also need SRM to be up?
The clusters do not share a datastore. This is VMware Cloud on AWS, so I can't create traditional on-prem VM and host groups; I can only create rules based on tags.
Are these two clusters sharing the same datastores? I can't imagine a VM could still be migrated between clusters otherwise. I have an on-prem environment and nothing even close to this has happened. Create a host group from the hosts in Cluster A and then a rule that the VMs must/should run only on those hosts. As far as I'm concerned, that would close/solve this topic.
Hello, we have two clusters under the same SDDC (VMC on AWS). We need to put in place a restriction that prevents VMs in Cluster 2 from being migrated to Cluster 1 (not via DRS, but manually). We configured a host-VM affinity rule based on tags, but this didn't work: we can still manually migrate a VM from one cluster to the other. Any ideas appreciated!