Put the host in maintenance mode and then reboot it. Alternatively, you can migrate the VMs manually to a different host and ignore the warning message about the VMs being managed by SRM.
Take a look at the following blog post and let me know if it helps: After installing or upgrading the environment to vSphere Replication 6.5, VR appears to be missing from the web client. …
The following VMware KB article includes important information to review before upgrading to vSphere 6.5: VMware Knowledge Base. I also recommend you take a look at the following blog post: vSphere 6.5 Upgrade Considerations Part-1 - VMware vSphere Blog
You can try migrating to vCSA 6.0 Update 2m first; see the following blog post for additional details: vCenter Server Migration Tool: vSphere 6.0 Update 2m - VMware vSphere Blog
You should always count the secondary VMs as well; you can then have a maximum of 8 primary VMs (and 8 secondary VMs), assuming your VMs have 2 vCPUs each. Here is an example of the VM distribution: green squares are primary VMs and yellow squares are secondary VMs.
Now I want to know: if I had a cluster with 8 ESXi hosts, does that mean it can run a total of 32 VMs with 64 vCPUs? Yes, you are correct; just keep in mind that the VMs can be either primary (protected) or secondary (shadow).
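To make the arithmetic explicit, here is a minimal sketch (illustrative only; the per-host numbers are the default FT limits discussed in this thread, not values read from any API):

```python
# Cluster-wide FT capacity under the default per-host limits:
# 4 FT VMs and 8 FT vCPUs per host (primary and secondary VMs both count).

hosts = 8
max_ft_vms_per_host = 4
max_ft_vcpus_per_host = 8

total_ft_vms = hosts * max_ft_vms_per_host      # primary + secondary VMs
total_ft_vcpus = hosts * max_ft_vcpus_per_host  # vCPUs across all FT VMs

print(total_ft_vms, total_ft_vcpus)  # 32 64
```

With 2 vCPUs per VM, those 32 FT VMs split into 16 primary and 16 secondary VMs across the cluster.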
You can split your vSphere ESXi licenses and upgrade to version 6.0 only the licenses that you want to use on the new hosts. See the following VMware KB article: VMware Knowledge Base
Sometimes, after the backup job, the backup solution fails to delete the snapshot created during the backup process. You can check the following VMware KB article on how to consolidate snapshots, and if it still fails to consolidate, let us know. Link to VMware KB article: VMware Knowledge Base
This is the expected behavior, since the replica is an exact copy of the source virtual machine. If you don't want those alarms, you can disable them. See the following Veeam KB article for details: KB1669: "VM MAC address conflict" and "VM instance UUID conflict" alarms
Hello Igor, first of all, please avoid creating duplicate posts; the answer to this question was added to your other post: Re: Redimensionar um vmdk. Back to the subject: the supported way is to use VMware vCenter Converter, and I'll leave here two links that show, step by step (in English), how to shrink a disk using VMware vCenter Converter: Shrink a VMDK using VMware Converter and How to shrink a VMDK using VMware Converter - TechRepublic. If, instead of resizing the virtual disk, you just want to move the VM from one host to another, a great alternative is Veeam with Quick Migration; see this guide: Mover uma VM VMware de um host para outro sem o vMotion. In your other post I asked whether you use vCenter; confirm whether you do or not, and I can suggest other alternatives.
1 - First, is it correct that the maximum number of vCPUs a VM can have with FT enabled is 4? Yes, see: 2 - Do you mean primary = protected VM and secondary = shadow of the protected VM? Yes. 3 - You said "the maximum number of FT VMs (primary or secondary) is 4 and the sum of vCPUs should be no more than 8"; does that mean only 1 VM with 4 vCPUs, or 2 VMs with 2 vCPUs, or 4 VMs with 1 vCPU can be placed on an ESXi host and protected? No, you can also have 2 VMs with 4 vCPUs each, or 4 VMs with 2 vCPUs each, and the VMs can be either primary (protected) or secondary (shadow). It's just math: the maximum number of FT VMs per host is 4 and the sum of their vCPUs is 8, no matter how you configure the VMs; of course, you can't assign more than 4 vCPUs per VM.
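The "it's just math" rule above can be captured in a small sketch (this is not an official VMware tool; the function name and the hard-coded limits are just the default values discussed in this thread):

```python
# Check whether a set of FT VMs placed on one ESXi host respects the default
# limits: at most 4 FT VMs per host (primary + secondary), at most 8 vCPUs
# in total across them, and at most 4 vCPUs per FT-protected VM.

MAX_FT_VMS_PER_HOST = 4
MAX_FT_VCPUS_PER_HOST = 8
MAX_VCPUS_PER_FT_VM = 4

def is_valid_ft_placement(vcpus_per_vm):
    """vcpus_per_vm: list with one entry per FT VM (primary or secondary)."""
    return (
        len(vcpus_per_vm) <= MAX_FT_VMS_PER_HOST
        and sum(vcpus_per_vm) <= MAX_FT_VCPUS_PER_HOST
        and all(1 <= v <= MAX_VCPUS_PER_FT_VM for v in vcpus_per_vm)
    )

print(is_valid_ft_placement([4, 4]))           # 2 VMs x 4 vCPUs -> True
print(is_valid_ft_placement([2, 2, 2, 2]))     # 4 VMs x 2 vCPUs -> True
print(is_valid_ft_placement([4, 4, 2]))        # 10 vCPUs total  -> False
print(is_valid_ft_placement([2, 2, 2, 2, 2]))  # 5 VMs           -> False
```

Any mix of primary and secondary VMs is fine as long as the placement passes both limits.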
My only roadblock is this: if I upgrade one ESXi 5.5 host to 6.5, does it cause any problem to have the rest of the ESXi hosts running ESXi 5.5 in the same cluster? The best practice is to have all hosts within the cluster running the same version and build number, but there is no problem having different ESXi versions during an upgrade. Does it cause any problem with vDS version 5.5? Your ESXi 6.5 host will support the vDS 5.5, since vDS 5.5 is supported by ESXi 5.5 and later. After you upgrade all the hosts connected to the vDS, you can upgrade the vDS version.
You're still missing that you cannot set the VLAN on both the physical switch and at the virtual switch level. Since you defined VLAN 50 as the native VLAN on the physical switch, you need to remove that VLAN from the virtual switch port group. See the explanation from the VMware KB article again:
Yes, just remember to change the native VLAN of the trunk ports to a VLAN different from 10, 50, 99, 1007... you can, for instance, configure the native VLAN as VLAN 1.
So, from what you are saying, if I want to assign VLANs at the vSphere level, the physical switch ports my hosts connect to need to be configured as trunk ports instead of access ports? You can configure the physical switch port as trunk (with the native/default VLAN different from 10) and assign VLAN 10 to the TESTNETWORK port group and to the Management Network as well.
The problem is that you cannot assign the VLAN on both sides (physical switch and virtual switch port group); you can find that in the following VMware KB article: VMware Knowledge Base. Caution: Native VLAN ID on ESXi/ESX VST Mode is not supported. Do not assign a VLAN to a port group that is same as the native VLAN ID of the physical switch. Native VLAN packets are not tagged with the VLAN ID on the outgoing traffic toward the ESXi/ESX host. Therefore, if the ESXi/ESX host is set to VST mode, it drops the packets that are lacking a VLAN tag.
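The KB's caution can be illustrated with a toy simulation (this is not VMware code; the function names and the native VLAN value are made up for the example, matching the VLAN 50 scenario in this thread):

```python
# Toy model of why a port group in VST mode must not use the physical
# switch's native VLAN: the switch sends native-VLAN frames untagged on the
# trunk, and a VST port group drops frames that lack a VLAN tag.

NATIVE_VLAN = 50  # assumed native VLAN configured on the physical switch

def physical_switch_egress(frame_vlan):
    """Return the 802.1Q tag the switch puts on the wire (None = untagged)."""
    return None if frame_vlan == NATIVE_VLAN else frame_vlan

def vst_portgroup_accepts(wire_tag, portgroup_vlan):
    """A VST-mode port group only accepts frames tagged with its own VLAN."""
    return wire_tag == portgroup_vlan

# Port group set to VLAN 50 (same as native): frames arrive untagged -> dropped.
print(vst_portgroup_accepts(physical_switch_egress(50), 50))  # False
# Port group set to VLAN 10 (tagged on the trunk): accepted.
print(vst_portgroup_accepts(physical_switch_egress(10), 10))  # True
```

This is why either the VLAN tag belongs on the physical switch (access port, no tag on the port group) or on the port group (trunk port, with a different native VLAN), but never on both.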
Where are you assigning VLAN 10, on the vSphere side or at the physical switch? And do you have different vSwitches for management and virtual machines? If you are using the same virtual switch and assigning the VLAN on the vSphere side, can you confirm whether you assigned VLAN 10 to the virtual machine port group? See: Please, if possible, post a screenshot of your vSphere virtual switch and port group settings.
1 - "The maximum number of fault tolerance VMs allowed on a host in the cluster. Both primary VMs and secondary VMs count towards this limit. The default value is 4." Does that mean the maximum number of VMs that can be FT-protected on an ESXi host is 4 VMs with 1 vCPU each? No, it means that the maximum number of FT VMs (primary or secondary) is 4, and the sum of their vCPUs should be no more than 8. For instance, you can have 4 primary VMs with 2 vCPUs each, or, in another scenario, 2 primary VMs and 2 secondary VMs, each VM with up to 2 vCPUs. 2 - "Maximum number of vCPUs aggregated across all fault tolerance VMs on a host. vCPUs from both primary VMs and secondary VMs count toward this limit. The default value is 8." Does that mean that when we enable FT protection on VM1, which has 4 vCPUs, its shadow (secondary VM) will be created on the other host with 4 vCPUs? The shadow VM will always have the exact configuration of the primary VM. But what item #2 says is that the sum of vCPUs across the FT VMs running on a host should be at most 8. For instance, you can have 2 primary VMs and 2 secondary VMs, each with up to 2 vCPUs, running on a particular host, and the total number of vCPUs in use by FT VMs is 8.