VMware Cloud Community
kgottleib
Enthusiast

VM shows physical mode RDMs as unreadable after DRS powers up VM on new host

Need some help with an issue:

A VM with pRDMs that is part of a VCS configuration was shut down and manually migrated to a new host; then, when it was powered on, DRS moved it to yet another host, and for some reason the RDMs are now unreadable.

Does anyone have any idea why this happened and how to resolve this?

Notes - the DRS setting for this virtual machine was set to "manual" while the greater cluster's DRS setting is fully automated. Given that it is set to manual, I wasn't the one who powered up the VM, but I assume the user who did was prompted to place the VM somewhere, and he chose to place it on a different host than the one he had just migrated it to.

Did this DRS recommendation and move to a new host affect the clustered VM's ability to see the pRDMs?

In the guest OS I ran diskpart and the disks are online but show up as unreadable.

Thanks for any tips that help us understand the behavior here and what could have caused this.

I am fully aware that VMware can't vMotion VMs with shared SCSI controllers. But it's not clear to me what the process is under the covers when DRS wants to move a VM somewhere during power on.

4 Replies
weinstein5
Immortal

DRS uses vMotion to move VMs between hosts, and it should fail if they are configured for shared SCSI - I am betting that when the VM was powered up it was placed on the wrong host, one that does not have access to the RDMs. If you power off the VM and start it on the correct host, do you have access to the RDMs?
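
If it helps, this is roughly what I mean - a quick pyVmomi sketch (untested; the vCenter, VM, and host names are placeholders for your environment): power the VM off, cold migrate it to a host that is zoned to the RDM LUNs, then power it on there.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="***", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    # Grab the first inventory object of the given type with a matching name
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.DestroyView()

vm = find_by_name(vim.VirtualMachine, "vcs-node1")        # placeholder VM name
host = find_by_name(vim.HostSystem, "esx01.example.com")  # host that can see the RDM LUNs

# Power off (shut the guest down cleanly first if you can)
if vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
    WaitForTask(vm.PowerOffVM_Task())

# Cold migration is fine even with the shared SCSI bus - only a live vMotion is blocked
WaitForTask(vm.RelocateVM_Task(vim.vm.RelocateSpec(host=host)))

# Power on, hinting the placement at the same host
WaitForTask(vm.PowerOnVM_Task(host=host))

Disconnect(si)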

If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful
LikeABrandit
Enthusiast

Someone correct me if I'm wrong, but I believe that DRS will run through its algorithm and select a host in the cluster during power on and not even ask the user (which, honestly, is what you'd typically prefer: fewer vMotions down the road). What you probably want to do here is configure a VM-to-Host affinity rule for that VM to keep that from happening.
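
Roughly like this with pyVmomi, off the top of my head (untested sketch; the group, rule, VM, and host names are made up, and you'd look the cluster, VM, and host objects up the same way as in weinstein5's script above):

from pyVmomi import vim

def pin_vm_to_hosts(cluster, vm, hosts):
    # Build a VM group, a host group, and a VM-to-Host "should run on" rule tying them together
    vm_group = vim.cluster.VmGroup(name="vcs-vm-group", vm=[vm])
    host_group = vim.cluster.HostGroup(name="rdm-host-group", host=hosts)
    rule = vim.cluster.VmHostRuleInfo(
        name="keep-vcs-vm-on-rdm-hosts",
        enabled=True,
        mandatory=False,               # "should" rule; set True for a hard "must" rule
        vmGroupName="vcs-vm-group",
        affineHostGroupName="rdm-host-group",
    )
    spec = vim.cluster.ConfigSpecEx(
        groupSpec=[
            vim.cluster.GroupSpec(info=vm_group, operation="add"),
            vim.cluster.GroupSpec(info=host_group, operation="add"),
        ],
        rulesSpec=[vim.cluster.RuleSpec(info=rule, operation="add")],
    )
    return cluster.ReconfigureComputeResource_Task(spec, modify=True)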

As far as fixing it right now, I agree with weinstein5: power off and get it back on the right host.

weinstein5
Immortal

DRS will do that if the VM is configured as partially automated or fully automated - if it is set to manual, then when powering on you will be presented with a prioritized list of the hosts in the DRS cluster on which to power the VM.
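
For reference, that per-VM behavior comes from the DRS override set on the VM in the cluster configuration, which you can also manage through the API - a rough pyVmomi sketch (untested; the cluster and VM objects would be looked up as in my earlier post):

from pyVmomi import vim

def set_vm_drs_override(cluster, vm, behavior="manual"):
    # behavior is one of "manual", "partiallyAutomated", "fullyAutomated"
    override = vim.cluster.DrsVmConfigInfo(vm=vm, enabled=True, behavior=behavior)
    spec = vim.cluster.ConfigSpecEx(
        drsVmConfigSpec=[vim.cluster.DrsVmConfigSpec(info=override, operation="add")]
    )
    return cluster.ReconfigureComputeResource_Task(spec, modify=True)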

If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful
LikeABrandit
Enthusiast

Ah yes, that jogged my memory; you're definitely right, it only does that with the automated settings. I didn't realize the list presented for manual was prioritized, though. Really good info, thanks!
