I wrote a lengthy paper on the topic years ago: https://core.vmware.com/resource/vmware-vsphere-metro-storage-cluster-vmsc
What happens from a VMware perspective depends on where the VMs are located. Assuming the VMs' compute placement aligns with the ownership from a storage point of view, NONE of the VMs will need to be restarted. If, however, your VMs are spread all over the place, then VMs will be restarted as soon as vSphere HA can acquire a lock on the datastore.
In other words, FOLLOW the guidance provided by NetApp and VMware to prevent ending up in a situation where VMs need to be restarted in this scenario. (https://kb.vmware.com/s/article/2031038)
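In practice that compute/storage alignment is usually handled with DRS host groups and "should run" rules. Below is a minimal pyVmomi sketch, assuming DRS groups named "SiteA-VMs" and "SiteA-Hosts" already exist; the group names, cluster name and vCenter details are placeholders, not part of the original guidance:

```python
# Minimal sketch: add a soft ("should run") VM-to-host rule so VMs stay on the
# site that owns their storage. All names below are placeholders.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
import ssl

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="***", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Look up the stretched cluster by name (placeholder name)
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "Stretched-Cluster")
view.Destroy()

# A soft rule (mandatory=False), so HA can still restart VMs cross-site if needed
rule = vim.cluster.VmHostRuleInfo(
    name="SiteA-VMs-should-run-on-SiteA-Hosts",
    enabled=True,
    mandatory=False,
    vmGroupName="SiteA-VMs",
    affineHostGroupName="SiteA-Hosts")

spec = vim.cluster.ConfigSpecEx(
    rulesSpec=[vim.cluster.RuleSpec(operation="add", info=rule)])
cluster.ReconfigureComputeResource_Task(spec, modify=True)

Disconnect(si)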
But disabling HA and Admission Control doesn't prevent what is going to happen in this situation.
Your VMs are replicated between locations, and you have VMs running in both the preferred and the secondary site. When the ISL goes down, the preferred location will bind itself with the witness, and the secondary will lose access to the witness. That means ALL virtual machines in the secondary site instantly lose access to storage and will be marked inaccessible and killed by vSAN.
NORMALLY they would be restarted by HA in the preferred location, but in this case, if you disable HA, nothing would happen.
I would highly recommend moving ALL the virtual machines to the preferred location before doing the maintenance.
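If you want to script that move, a rough pyVmomi sketch along these lines could work; the host naming prefix, vCenter details and the round-robin placement are just assumptions, and DRS will rebalance within the site afterwards anyway:

```python
# Rough sketch: vMotion every VM that is not already on a preferred-site host.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
import ssl

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="***", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Assumed naming convention: preferred-site hosts start with "esx-sitea-"
host_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
preferred = [h for h in host_view.view if h.name.startswith("esx-sitea-")]
host_view.Destroy()

vm_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
for i, vm in enumerate(vm_view.view):
    if vm.config is None or vm.config.template or vm.runtime.host in preferred:
        continue
    # Simple round-robin placement across the preferred-site hosts
    spec = vim.vm.RelocateSpec(host=preferred[i % len(preferred)])
    vm.RelocateVM_Task(spec)
vm_view.Destroy()

Disconnect(si)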
That feature is called VM Monitoring, and you will still need a cluster for it to work, as you enable it at the cluster level. But if it helps, you could create a cluster with a single host.
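For reference, enabling it through the API would look roughly like this pyVmomi sketch; the cluster name and vCenter details are placeholders, and vSphere HA itself has to be enabled for VM Monitoring to do anything:

```python
# Sketch: turn on HA with VM Monitoring for a (single-host) cluster.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
import ssl

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="***", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "Single-Host-Cluster")
view.Destroy()

das = vim.cluster.DasConfigInfo(
    enabled=True,                      # vSphere HA must be enabled
    vmMonitoring="vmMonitoringOnly")   # or "vmAndAppMonitoring"
cluster.ReconfigureComputeResource_Task(
    vim.cluster.ConfigSpecEx(dasConfig=das), modify=True)

Disconnect(si)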
You do not need vSAN to achieve this; there's a free replication solution called vSphere Replication which you can use to replicate your VMs from Site A to Site B. This is asynchronous replication. If you need synchronous replication, then you could talk to the storage team to see what the SANs in both locations support (assuming they are a similar make and model, of course).
It is pretty straightforward:
Disable the performance service
Make sure all VMs are migrated off the vSAN datastore (see the check after this list)
Delete disk groups one by one
Disable vSAN Service
Remove VMkernel interfaces
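For the migration check, a quick pyVmomi sketch like the one below can help; the datastore name "vsanDatastore" is the default and the vCenter details are placeholders:

```python
# Sketch: list any VMs still registered on the vSAN datastore.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
import ssl

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="***", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)
vsan_ds = next(d for d in view.view if d.name == "vsanDatastore")
view.Destroy()

leftovers = [vm.name for vm in vsan_ds.vm]
if leftovers:
    print("Still on vSAN, migrate these first:", leftovers)
else:
    print("No VMs left on the vSAN datastore")

Disconnect(si)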
Yes, you should be able to use it. A vTPM requires a key provider, and the Native Key Provider is part of vCenter Server at all license levels. VM Encryption is what requires Enterprise Plus, but if you only use a vTPM with the Native Key Provider you don't need that.
https://core.vmware.com/vtpm-questions-answers
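As a rough illustration, adding the vTPM through the API could look like the pyVmomi sketch below; the VM name and vCenter details are assumptions, the VM needs EFI firmware and should be powered off, and a key provider such as the Native Key Provider must already be configured:

```python
# Sketch: add a vTPM device to a powered-off, EFI-based VM.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim
import ssl

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="***", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "win11-vm")   # placeholder VM name
view.Destroy()

dev_spec = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
    device=vim.vm.device.VirtualTPM())
vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=[dev_spec]))

Disconnect(si)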
From a vSAN point of view, checksums are enabled by default unless explicitly disabled, so I'm guessing this is indeed an Aria issue, and there's no need to worry about the vSAN aspect. (But indeed, this needs to be fixed.)
Also, a witness only works with:
A two-node cluster
A stretched cluster
As mentioned by thebobkin, 3 nodes are sufficient for RAID-1, and with vSAN ESA also sufficient for RAID-5 (ESA can use a 2+1 erasure coding scheme on small clusters).
@anandgopinath wrote:
so we have VMs with both a replication policy (storage has 2 copies, one in each failure domain) and a local site policy (storage is only in 1 failure domain)
for VMs with the local site policy (storage is only in 1 failure domain),
why should the option "full data migration" or "ensure accessibility" not work if the other failure domain has storage capacity?
I don't pin these VMs with "must run" rules anymore.
Because you specified in which fault domain the data needs to reside. If you specify Preferred Site, the data can only move to another host in that fault domain, which you don't have.
With a cross-over cable I would just use 1 link, to be honest; the chances of it failing are extremely low, and it simplifies the setup. And I would just separate it from the rest, which also limits the chances of making mistakes, operationally speaking.
When you Google Proactive HA, the first hit is:
https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.avail.doc/GUID-3E3B18CC-8574-46FA-9170-CF549B8E55B8.html
Now, you need to configure this in conjunction with a vendor component. If, for instance, you have HPE, the document to use would be:
https://support.hpe.com/hpesc/public/docDisplay?docId=sd00001259en_us&page=GUID-C0C5F3B4-B952-454A-89C8-631F2B992F08.html
If you have Dell, you could use:
https://www.dell.com/support/manuals/en-us/openmanage-integration-vmware-vcenter/omivv_5.1_ug/hardware-component-redundancy-health%E2%80%94proactive-ha?guid=guid-b1ecfbfc-b756-4e6f-ba7d-26748c75923b&lang=en-us
The vSAN VMkernel would not be used for witness traffic; you would create a secondary VMkernel interface (or use the management one) and tag it for witness traffic. This is discussed here: https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.vsan-planning.doc/GUID-03204C22-C069-4A18-AD96-26E1E1155D21.html
The difference is:
Virtual witness has the license included
Physical witness needs to be licensed
Virtual is not allowed to run VMs
Virtual can run on ANY storage solution
Physical is allowed to run VMs, but most customers don't do this
Physical needs to have vSAN certified controllers/spindles/flash
Compute Only nodes just need to be on the vSphere VCG, including their NICs of course; none of those have to be on the vSAN list.
Personally I would keep the Compute Only nodes the same from a RAM perspective, as it is easier from an HA/DRS point of view, but it could even be lower, as fewer vSAN-related processes will be running there.
It could also be Compute Only HCI Mesh indeed; it would use less memory as well, as only the "client" part, and not the rest, would need to run there.
You cannot change that; ESXi always uses UTC.
https://communities.vmware.com/t5/VMware-vSphere-Discussions/Change-time-zone-on-esxi-6-7-host/td-p/482251
https://communities.vmware.com/t5/ESXi-Discussions/esxi-7-0-modify-time-zone/td-p/2920955
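Since the host always reports UTC, the usual workaround is to convert on the client side instead. A small Python sketch; the timestamp and time zone are just examples:

```python
# Example: convert a UTC timestamp from an ESXi log to local time on the client.
from datetime import datetime
from zoneinfo import ZoneInfo

utc_stamp = "2023-05-04T14:30:00Z"                          # as seen in vmkernel.log
dt_utc = datetime.fromisoformat(utc_stamp.replace("Z", "+00:00"))
print(dt_utc.astimezone(ZoneInfo("Europe/Amsterdam")))      # pick your own zone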