Hi, for VMs in VMC on AWS to access S3 via the AWS backbone (not via the public internet) there are two options: a Gateway Endpoint and an Interface Endpoint. As I understand it, the Gateway Endpoint is used when you need access from the same VPC, whilst the Interface Endpoint also covers access from a different VPC. Since VMC on AWS sits in a different, VMware-managed VPC, doesn't this mean that the only option is the Interface Endpoint? What does "Service Access - S3 Enabled" really mean under the hood? And what is the SDDC ENI: is it the only interface between the SDDC and the Connected VPC, or are there more? I recall seeing diagrams showing each ESXi host having an interface in the Connected VPC, which confused me as to which is used when.
Indeed, this is the key part: "It is best not to modify these routes manually, and also a best practice to dedicate the selected subnet for the SDDC, by deploying any native services in different subnets within the VPC. For this reason, make sure to size the VPC sufficiently large to accommodate current and future AWS native workloads that will interact with the SDDC."
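As a rough way to sanity-check that sizing advice, here is a minimal sketch using only Python's stdlib `ipaddress` module. The CIDR values are made-up examples, not anything from a real SDDC; the constant reflects that AWS reserves five addresses in every subnet.

```python
import ipaddress

# AWS reserves 5 addresses per subnet: network address, VPC router,
# DNS, one "future use" address, and the broadcast address.
AWS_RESERVED = 5

def usable_ips(cidr: str) -> int:
    """Addresses AWS actually leaves usable in a subnet of this size."""
    return ipaddress.ip_network(cidr).num_addresses - AWS_RESERVED

# Hypothetical example: a dedicated /24 for the SDDC link
print(usable_ips("10.0.0.0/24"))  # 251
print(usable_ips("10.0.0.0/26"))  # 59
```

This makes it easy to see why a small connected subnet fills up quickly once native services start landing in it.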
Hi, when deploying an SDDC, a Connected VPC and subnet are chosen and linked. For creating native services in the Connected VPC (e.g. an S3 Interface Endpoint), do I need to create a new subnet in the same AZ so that I don't consume IPs from the SDDC-connected subnet? Thanks
Do you think the below is caused by the recently expired ESXi certificate, or could it be something else? The error is not clear. (The vSphere Client still connects from another Windows machine, just not from a new one.) Should I generate a new certificate via the CLI? Thanks
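Before regenerating anything, it may help to confirm from the failing machine that the certificate really has lapsed, e.g. by pulling its end date with `openssl s_client -connect <host>:443 | openssl x509 -noout -enddate`. Here is a small stdlib-only Python helper that turns that `notAfter=` line into days remaining (a sketch; the date shown is a made-up example of an expired cert):

```python
import ssl
import time

def days_left(not_after_line, now=None):
    """Convert a 'notAfter=Jun  1 12:00:00 2024 GMT' line into days remaining
    (negative means the certificate has already expired)."""
    date_str = not_after_line.split("=", 1)[-1].strip()
    expires = ssl.cert_time_to_seconds(date_str)  # stdlib cert-date parser
    return (expires - (now if now is not None else time.time())) / 86400.0

# Hypothetical expired certificate: prints a negative number of days
print(days_left("notAfter=Jun  1 12:00:00 2024 GMT"))
```

If the value is negative, the expired-certificate theory holds and regenerating via the ESXi CLI is a reasonable next step.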
@Lalegre - I am wondering if Advanced Cross vCenter vMotion would also work, especially if I had to move all workloads off vSAN onto FC SAN storage. If, out of the current six-host cluster, I start by removing two hosts and registering them to the new vCenter (with the same dvSwitches/LAG/port groups pre-configured), would I be able to migrate VMs to the new vCenter and then gradually do the rest of the hosts/VMs until the whole cluster is moved? Basically, the destination storage and network would remain the same when migrating VMs across vCenters (source and destination hosts will be in the same DC, and actually in the same rack). Let me know your thoughts, as this could be a way of removing the requirement to reconfigure host networking (i.e. moving all connectivity to Standard Switches/port groups), apart from giving the possibility of migrating VMs to the new VCSA in a granular way. Thanks
I have HCI Bench and also a test VM on an HCX L2-extended network, but both can only reach the SDDC VCSA on ICMP and HTTPS. MON is enabled on the extended L2 networks, and the necessary firewall rules are in place, both outbound on the Compute Gateway and inbound on the Management Gateway. Any idea, or is this a known issue? (This matters for HCI Bench, as it needs to reach the ESXi hosts on HTTPS as part of its validation.)
@Lalegre yes, I understand that the VDS can be exported from the source vCenter and imported into the destination vCenter; I'm just not sure whether this changes anything about the migration plan to the new vCenter and whether it would help me avoid the VSS alternative. Let me know if I understood this wrong and there is a simple way to just disconnect/unregister a host from the source vCenter and connect/register it to the destination vCenter once the dVS switch(es) and port groups have all been exported from source to destination. It would be great if the above could be done while Management/vMotion/vSAN and VM traffic are all connected via dVSs, with the host simply re-attached to the imported dVSs/port groups on the destination vCenter; even better if no VM downtime is expected. For example, the following link still says to create a Standard Switch and move the host to it after exporting the VDS, so I'm not sure how the VSS route can be avoided: https://kb.vmware.com/s/article/1029498
Thanks @Lalegre. So from what I see, a good part of it will be creating Standard Switches (vSSs) for each service and for VM traffic, along with the respective port groups, and then moving the uplinks, VMkernels and VM traffic onto them. Then, on the new VCSA, import the dVSs and port groups from the backups and revert the above work, moving back from the vSSs to the Distributed Switches (dVSs). I was also hoping there would be some solution that lets you move the management plane from one VCSA to a new one without having to move all connectivity via vSSs, but it seems there isn't.
I need to move all hosts in vSphere 7 vSAN clusters to a new vCenter, to be set up in another region/network. By move I mean their management plane, as the hosts themselves will remain the same. From what I see, one of the requirements is to move anything running on a dVS over to a vSS: uplinks, VMkernels, port groups, etc. (It's all on dedicated dVSs, with redundant uplinks for Management/vMotion/vSAN and VM traffic.) Is there a best-practice solution/approach to move the vSAN clusters to vSSs, remove them from the current VCSA and join them to the new VCSA? Thanks
OK, so there is no NFC without vMotion? (Both are needed even when doing just a clone, with no vMotion?) I thought they were different, and that a clone operation uses only NFC. I also think it should be bi-directional, as there is an import-VM function that can be launched from the destination VCSA.
Hi, with regard to ports, can you confirm that for a VM export there is no need to open TCP/8000, and that the below are enough? TCP/902 between the management interfaces of the source and destination hosts, and TCP/443 between the source VCSA and the destination VCSA. From what I understand, the source VCSA will create an encrypted connection to the destination VCSA and use it to transfer the VMDK(s) from the source host to the destination host using the NFC protocol; please confirm, as I did not see a detailed traffic flow. Also, will the NFC traffic consume all the uplink bandwidth? Is there a way to throttle it, since it traverses multiple firewalls? Thanks
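To rule out the firewall path before testing the export itself, TCP reachability on those two ports can be checked with a short stdlib-only Python sketch. The hostnames in the commented usage are placeholders, not real endpoints:

```python
import socket

def tcp_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port completes within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical endpoints matching the flows described above:
# print(tcp_open("dst-vcsa.example.com", 443))    # source VCSA -> destination VCSA
# print(tcp_open("dst-esxi01.example.com", 902))  # host-to-host NFC path
```

A `False` on either path points at a firewall in between rather than at the export workflow itself.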
If there is no solution, I guess I need to fall back to enabling Retreat Mode for this cluster? I wanted to avoid this, as the cluster is to be decommissioned, so the removal of the vCLS VMs should happen automatically when all hosts in the cluster are placed in Maintenance Mode.
Yes, they powered off automatically, but I expected them to get deleted automatically as well. Can we please confirm the expected behaviour, and what to do/check in such a situation?
Hi, vCLS VMs are supposed to get deleted automatically when the last host in the cluster is placed in Maintenance Mode, right? They just powered down but are not getting deleted, and I need to unmount the SAN datastore they reside on. Any reason why? I know I can enable Retreat Mode at the cluster level to force their deletion, but can anyone explain why they are not deleted? (Version: 7.0.3, build 20036589) Thanks, Andrea
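For reference, Retreat Mode is driven by a per-cluster advanced setting on the vCenter Server object (Configure > Advanced Settings). The domain ID below is a placeholder; the real one can be read from the cluster's URL in the vSphere Client:

```
# Hypothetical example -- replace domain-c1234 with your cluster's domain ID
config.vcls.clusters.domain-c1234.enabled = False
```

Setting it to False deletes the cluster's vCLS VMs; setting it back to True redeploys them.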
Can you share a screenshot of the respective reset rule that would trigger a notification when host connectivity is restored? Otherwise, the link is a generic one and does not seem to address my question specifically.
Hi, this is the trigger/notification for "Host Not Responding". How do I reset this alarm so that I receive an email notification? The configuration below is not sending emails when host connectivity is restored. Thanks
Apologies, my bad, as I have another set of servers which do have ESXi installed on SD. Ultimately I think I got the answer, which is that a vSAN cluster will always need a VMFS datastore (shared, or per host) to function and follow best practice.
@IRIX201110141 - It's a BOSS-S1, so it is possible to delete and create two virtual disks as per your comment. But in any case this requires deleting the existing ESXi install, which is not ideal, so it's something that should be done first thing, prior to installing ESXi. Actually, these are vSAN Ready Nodes, so they came with ESXi pre-installed, now that I remember. I wonder why these do not come pre-partitioned into two virtual disks, knowing that vSAN requires a VMFS datastore? Bottom line: adding a VMFS datastore via the BOSS card might be possible, but it's not straightforward in this case.