anandgopinath's Posts

Dear Community, we plan to migrate VMs online from an old 7.0.3 ESXi cluster to a new 7.0.3 ESXi cluster using the same storage. Both clusters are in the same vCenter. Is sharing datastore clusters / standalone datastores across multiple ESXi clusters like this fully supported? Can the VMs be migrated online via vMotion in this setup? Any caveats to be prepared for? Thanks in advance for the support.
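In case it helps anyone checking the same thing: before migrating, it may be worth confirming that hosts in both clusters actually mount the shared datastore, so the vMotion can stay compute-only (no Storage vMotion). A minimal sketch — the datastore name "SharedDS01" is a placeholder, not from my environment:

```shell
# Run on a host in the NEW cluster (e.g. via SSH) and confirm the shared
# datastore appears; repeat on a host in the OLD cluster and compare UUIDs.
esxcli storage filesystem list | grep SharedDS01
```

If the same datastore UUID shows up as mounted on hosts in both clusters, a standard online vMotion between the clusters should only need to move compute, not storage.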
@depping: Got it now, thanks. So basically, for these options to work, we need more than 1 host per failure domain. Sorry for all the questions; we have been testing various failure / maintenance scenarios like this, and at times what you read / understood from the documentation before gets lost. Thanks for taking time out of your busy schedule to support the community. Much appreciated.
@depping @TheBobkin: Thanks for the quick response, as always. I am a bit lost here. We have VMs with both a replication policy (storage has 2 copies, one in each failure domain) and a local-site policy (storage is only in 1 failure domain).

For VMs with the local-site policy (storage is only in 1 failure domain), why should the option "Full data migration" or "Ensure accessibility" not work if the other failure domain has storage capacity? I don't pin these VMs with "must run" rules anymore.

The same goes for VMs with the replication policy (storage has 2 copies, one in each failure domain): why should "Full data migration" or "Ensure accessibility" not work if the other failure domain has storage capacity?
@TheBobkin: Thanks for the quick help. We have the same issue of maintenance mode not working when we choose "Ensure accessibility" as well as "Full data migration". The behaviour is the same even if we disable the "must run" rules for the VMs pinned to each site. The only option that works is "No data migration". Is this also a limitation of the 2-node stretched cluster? Thanks in advance for your continued support & guidance.
@depping: It seems the issue is with the HA admission control setting below. If HA admission control is disabled, the host can enter maintenance mode. So does this mean that HA admission control is not compatible with a 2-node stretched cluster?

CPU reserved for failover: 50%
Memory reserved for failover: 50%
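For what it's worth, the arithmetic behind this behaviour can be sketched as below (all figures are illustrative assumptions, not from our cluster): with 50% of cluster resources reserved for failover, evacuating one of the two hosts leaves only half of the surviving host available for VMs, so admission control blocks the evacuation whenever demand exceeds that.

```shell
#!/bin/sh
# Illustrative arithmetic only (assumed figures): why a 50% failover
# reservation can block maintenance mode on a 2-node cluster.
hosts=2
per_host=100      # arbitrary capacity units per host
reserve_pct=50    # "CPU/Memory reserved for failover: 50%"
vm_demand=80      # assumed combined demand of the running VMs

remaining=$(( (hosts - 1) * per_host ))              # capacity once 1 host evacuates
usable=$(( remaining * (100 - reserve_pct) / 100 ))  # what admission control leaves

if [ "$vm_demand" -gt "$usable" ]; then
  echo "maintenance mode blocked: demand $vm_demand > usable $usable"
else
  echo "maintenance mode allowed"
fi
```

With one host gone, 100 units remain, admission control keeps 50 of them in reserve, and a demand of 80 cannot fit — which matches the observation that disabling admission control lets the host enter maintenance mode.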
Thanks @depping for the help. Very good point; I will check this and revert.
@TheBobkin: There is no issue with the VM vMotion itself. When we power off the ESXi host in question, the VM is restarted on the other ESXi host. It is only when we try to enter maintenance mode that the VM is not moved.
@TheBobkin: Thanks for the quick help. As mentioned in my post, even with the "No data migration" option we cannot get the host into maintenance mode, as the host cannot migrate a VM with replicated storage and a "should run" rule to the other host. Not sure what we are doing wrong.
Dear Community, we have a vSAN stretched cluster as below (VMware ESXi 7.0.3, build 21313628): 2 data nodes (1 in each site) and a witness host. Most of the VMs have their storage replicated; we use "should" rules to spread them across the 2 sites. We also have some VMs which are pinned to a specific site with the policies below. We use "must" rules for them, and we can afford downtime on them during maintenance activities.

Site disaster tolerance: None - keep data on Secondary (stretched cluster) / Failures to tolerate: No data redundancy
Site disaster tolerance: None - keep data on Preferred (stretched cluster) / Failures to tolerate: No data redundancy

When we try to put a data node in either site into maintenance mode via "Ensure accessibility" or "No data migration", the operation fails. We even tried powering off the VMs using local storage on the impacted data node, but even then the vSAN cluster is not able to migrate the VM with replicated storage to the other data node. Is this expected behaviour? Are we doing something wrong? Appreciate your help & guidance as always.
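In case it is useful for reproducing this: the same maintenance-mode options can be driven from the ESXi host's CLI, which sometimes surfaces a clearer error message than the vSphere Client. A sketch (run on the data node being evacuated):

```shell
# vSAN-aware maintenance mode from the host CLI. --vsanmode accepts:
# ensureObjectAccessibility | evacuateAllData | noAction
esxcli system maintenanceMode set --enable true --vsanmode ensureObjectAccessibility

# Confirm the resulting state (Enabled / Disabled)
esxcli system maintenanceMode get
```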
Dear Community, we have a stretched cluster with 2 data nodes (one in each site) + a witness, as below. We have 2 disk groups per host, all-flash, no storage efficiency or encryption. For VMs pinned to each site with the policies below, will we lose a VM for ever (recoverable only from backups) in case of a cache disk, capacity disk, or full disk group failure? The stretched cluster guide says the VM will survive a disk or disk group failure by moving data to the other disk group / disks. Is this true? I am lost; appreciate your quick response.

Site disaster tolerance: None - keep data on Secondary (stretched cluster) / Failures to tolerate: No data redundancy
Site disaster tolerance: None - keep data on Preferred (stretched cluster) / Failures to tolerate: No data redundancy
@TheBobkin: Thanks for the quick response as always. Just one last query: does this hold good if we reduce the redundancy of the VM Home / Swap objects as well? I.e., my scenario is that for a VM "pinned" to local-site storage, is it OK if I apply the same vSAN storage policy (Site disaster tolerance: preferred site) to the VM Home / Swap?
Dear Community, is it OK to change the VM storage policy for VM Home / Swap (which by default is the vSAN Default Storage Policy) to the custom storage policy assigned to the respective VM, to make life simple? Any issues foreseen with this?
@depping: Thanks a lot once again. Proud to be part of this great community.
@depping: Thanks for the quick help, much appreciated. I am still a bit lost on the VMs pinned to each site. Please can you confirm if my understanding below is correct w.r.t. the 2 scenarios?

1. VMs pinned to one site with "must" rules
==============================
- VM to be recovered from backup during a disk / disk group failure.
- If disks are intact and the host / site goes down, VM becomes available once the site is available again.
- For planned events, disable the must rules, change the VM policy to point to the other site's storage, and vMotion the VM to the other site.

2. VMs pinned to one site with "should" rules
==============================
- VM to be recovered from backup during a disk / disk group failure.
- If disks are intact and the host / site goes down, VM becomes available once the site is available again.
- For planned events, change the VM policy to point to the other site's storage and vMotion the VM to the other site.
Dear Community, we have a stretched cluster as below. We have a couple of queries related to HA settings & DRS rules for some VMs pinned to specific fault-domain storage. Appreciate your quick response.
============
On both data nodes:
- the management VMK now has 2 tags (management + vSAN witness) and a management VLAN X IP
- the vSAN VMK has just the vsan tag with the default gateway (the gateway does not matter since the network is L2-stretched) and a vSAN data VLAN Y IP

On the witness:
- management vmk0 has the management tag and a management VLAN A IP
- vsan vmk1 has the vsan tag and the vSAN witness VLAN Z IP (routable with VLAN X)
==========
1. For the HA settings, is it OK if we set just a single HA isolation address (i.e. the gateway of VLAN Y) via das.isolationAddress0, instead of the 2 recommended by VMware?
2. We plan to pin some VMs to specific fault domains via the vSAN policy settings below. Our requirement is that during a planned maintenance activity (other than a disk failure) we are able to move the VM storage to the other site temporarily. For such VMs, which type of DRS rule is recommended: should or must?

Site disaster tolerance: None - keep data on Secondary (stretched cluster) / Failures to tolerate: No data redundancy
Site disaster tolerance: None - keep data on Preferred (stretched cluster) / Failures to tolerate: No data redundancy
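On question 1, whatever address ends up in das.isolationAddress0, it may be worth confirming beforehand that it answers pings from the vSAN vmkernel interface on each data node, since that is the path HA would use. A sketch — the interface name and gateway address are placeholders for your VLAN Y setup:

```shell
# From each data node: ping the candidate isolation address (gateway of
# VLAN Y here, address is a placeholder) out of the vSAN VMK specifically.
vmkping -I vmk2 192.168.0.1
```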
Dear @TheBobkin / All, the issue was related to a backend network configuration problem on the ESXi cluster hosting the witness appliance. Some ESXi hosts in that cluster did not have the witness vSAN VMK VLAN configured, and hence the witness was being isolated when it started on such hosts during the vSAN cluster stop / start. Thanks for your guidance.
@TheBobkin: The reason why we want to separate vSAN traffic on the witness onto vmk1 (and not vmk0) is to segregate management traffic and vSAN traffic on different VLANs on the witness. Is my understanding below of the traffic flow with our setup correct, or am I missing something? VLAN X & VLAN Z are routable with each other via their respective gateways.

From data nodes to witness
===============
Source VLAN: VLAN X (management + witness traffic VLAN on vmk0 of both data nodes, via WTS)
Destination VLAN: VLAN Z (vSAN traffic on vmk1 of the witness)

From witness to data nodes
===============
Source VLAN: VLAN Z (vSAN traffic on vmk1 of the witness)
Destination VLAN: VLAN X (management + witness traffic VLAN on vmk0 of both data nodes)

Please help.
Dear Team, we are facing a weird issue on our vSAN 7.0 U3 stretched cluster. Our setup is below.
============
On both data nodes:
- the management VMK now has 2 tags (management + vSAN witness) and a management VLAN X IP
- the vSAN VMK has just the vsan tag with the default gateway (the gateway does not matter since the network is L2-stretched) and a vSAN data VLAN Y IP

On the witness:
- management vmk0 has the management tag and a management VLAN A IP (routable with VLAN X)
- vsan vmk1 has the vsan tag and the vSAN witness VLAN Z IP (routable with VLAN X)
==========
Our issue is that the vSAN cluster goes into a partitioned state if we configure vmk1 on the vSAN witness with the "vsan" traffic tag. Routing is configured on our network between VLAN Z & VLAN X, and we even added static routes on the vSAN witness for vmk1 to reach VLAN X on the data nodes via the gateway on VLAN Z. The "override default gateway" option was also tried on vmk1, but to no effect. The moment we untag vSAN traffic from vmk1 and move it to vmk0 on the vSAN witness, everything works fine. I am lost and just want to understand whether this is expected behaviour or a bug (i.e. that with WTS we should use only 1 VMK on the witness?), as we cannot find any fault on our network side. Thanks in advance for any useful pointers.
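For reference, these are the kinds of checks that should narrow this down; all addresses and interface names below are placeholders for our VLAN Z (witness) / VLAN X (data node) setup, not actual values from the environment:

```shell
# 1. Confirm which VMKs carry vsan / witness traffic on each node
esxcli vsan network list

# 2. On the witness: verify the static route to VLAN X via the VLAN Z gateway
esxcli network ip route ipv4 list
esxcli network ip route ipv4 add -n 192.168.10.0/24 -g 192.168.20.1

# 3. Test reachability from the witness's vmk1 specifically
vmkping -I vmk1 192.168.10.11

# 4. On a data node: check membership / partition state of the vSAN cluster
esxcli vsan cluster get
```

If the vmkping out of vmk1 fails while a plain ping (which uses vmk0's stack and routes) succeeds, that would point at the vmk1 routing rather than the traffic tag itself.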
Thank you @TheBobkin. I much appreciate all your help.
@TheBobkin: Thank you. One last question: can I go ahead and remove vmk1 and the associated network adapter from the witness appliance OVA?