DavidDAVE's Posts

Thanks everybody for your answers, all of them very useful. Case closed. Kind regards, David
Hi there everybody, I'm trying to keep vMotion disabled for some VMs, so I followed frankdenneman.nl's tip (removing the powered-on vMotion permission). But I don't want the system moving them with DRS either, so I disabled the DRS automation level for them. Now I'm worried about my anti-affinity rule, defined to keep these VMs from running on the same ESXi host: is that rule also disabled when I mark the VM as disabled in the "DRS - Virtual Machine Options (automation level)" tab?

I'm asking this specific question because I'm afraid of what happens if one of the involved ESXi hosts crashes. Will HA honor the DRS anti-affinity rule for those VMs, keeping them on separate hosts, once it starts restarting machines? The wording "automation level" makes me guess that I'm only disabling automatic vMotion, not the affinity rule used when placing the VM on a new host, but... I'm not sure at all. A possible workaround would be disabling HA on those VMs, to make sure they never end up on the same host, but I would like to avoid that.

Scenario wanted:
- VM A - running on ESXi1
- VM B - running on ESXi2
- Anti-affinity rule for those VMs
- vMotion-disabled role for those VMs
- DRS automation level disabled for those VMs

Thanks in advance to everybody, I hope I have been sufficiently clear. David
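To make the question concrete, here is a toy Python sketch of what "honoring an anti-affinity rule at restart time" would mean. This is not VMware code and does not reflect the actual HA/DRS implementation; the host names, VM names, and the rule model are all invented for illustration. It just shows the placement constraint itself: a failed VM's peer hosts are excluded from the restart candidates.

```python
# Toy model of VM placement under an anti-affinity rule. NOT the VMware
# HA/DRS algorithm -- an illustrative sketch only, assuming a rule means
# "VMs in the same group must run on distinct hosts".

def candidate_hosts(vm, hosts, placements, anti_affinity_groups):
    """Return hosts where `vm` could be restarted without violating
    any anti-affinity group it belongs to."""
    blocked = set()
    for group in anti_affinity_groups:
        if vm in group:
            # Hosts already running a peer from the same group are blocked.
            blocked |= {placements[peer] for peer in group
                        if peer != vm and peer in placements}
    return [h for h in hosts if h not in blocked]

hosts = ["esxi1", "esxi2", "esxi3"]
placements = {"vmB": "esxi2"}   # vmA's host just failed; vmB still runs
groups = [{"vmA", "vmB"}]       # anti-affinity: keep A and B apart

print(candidate_hosts("vmA", hosts, placements, groups))
# esxi2 is excluded because vmB already runs there
```

Whether HA actually applies this filter when the per-VM DRS automation level is disabled is exactly the open question of the post.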
Yes, the customer is finally coming around to that idea: shutting down everything, unregistering, unmounting and remounting, registering and powering on. I was just trying to avoid the downtime. Five datastores, huge ones: 3 TB for the smallest, 15 TB for the biggest, with lots of VMs inside each of them. Thanks a lot!
By the way, I understand that the best option would be to mount a completely new and different NFS datastore, migrate everything there, unmount and remount the original NFS datastores, and migrate back. But there is not enough free space for that. I apologize for the double post, but I couldn't find the "edit" button.
Hi there! As stated in KB 1005930 (and several other places, such as these communities), the UUID of an NFS datastore is generated from a hash of the server and folder. In my case I have all my VMs on NFS datastores mounted by IP. Now I need everything mounted by FQDN, and for that purpose I have modified the hosts file on the ESXi. Once I mount the NFS share by FQDN, it gets named with a "(1)" at the end, as expected.

Now, my plan is to Storage vMotion everything. I have an empty ESXi host with the NFS share mounted by FQDN, and so named "NFS01 (1)". It is the same volume on the NAS as NFS01, but in VMware they have different UUIDs. My fear is whether this Storage vMotion could cause any locking or corruption, since it is effectively moving from a datastore to itself. I want to try it this way because I'd prefer to avoid a cold migration (stopping all the VMs). Working on 5.1 build 799733. Any tip? Thank you very much in advance for your time. David
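The identity clash described above can be sketched in a few lines of Python. This does not reproduce ESXi's actual NFS UUID algorithm (which is not replicated here); the hash function, server names, and export path are all illustrative. The point is only that an identifier derived from the (server, export path) pair differs when the same volume is mounted by IP versus by FQDN.

```python
import hashlib

# Illustration only: NOT the real ESXi NFS UUID algorithm. It just shows
# that an ID hashed from (server, export path) changes when the server
# string changes, even though the underlying NAS volume is the same.

def fake_nfs_uuid(server: str, path: str) -> str:
    digest = hashlib.sha1(f"{server}:{path}".encode()).hexdigest()
    return digest[:16]  # hypothetical shortened identifier

by_ip   = fake_nfs_uuid("192.168.1.10", "/vol/nfs01")       # invented IP
by_fqdn = fake_nfs_uuid("nas01.example.com", "/vol/nfs01")  # invented FQDN

print(by_ip != by_fqdn)  # True: same volume, different datastore identity
```

That mismatch is why vSphere treats "NFS01" and "NFS01 (1)" as two distinct datastores even though they are backed by one NAS volume.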