VMware Cloud Community
DaddyDee
Enthusiast

Setting RDM on each Host to “PerenniallyReserved” using PowerCLI not appearing to work!!

Hi Community,

I was wondering whether anyone out there has either experienced this issue or can at least point me in the right direction?

I'm running the following PowerCLI script against a 6.5 vCenter on a cluster consisting of 12 ESXi 6.0 U2 hosts, which I'm in the process of upgrading to 6.5 U2.

I'm experiencing slow hypervisor start-up at the "vmw_satp_alua loaded successfully" stage, which is apparently down to the RDM LUNs (used for MS Clustering) being scanned and slowing down the boot process.

My PowerCLI script:

----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Import-Module VMware.VimAutomation.Core

Connect-VIServer "VC" -User domain\DomainAdminAC

$RDMList = Get-VM -Location "Cluster" | Get-HardDisk -DiskType "RawPhysical", "RawVirtual" | Select Parent,Name,DiskType,ScsiCanonicalName

$RDMList = $RDMList.scsicanonicalname

Connect-VIServer "ESXIHost" -User root

$esxcli = Get-EsxCli

Foreach ($address in $RDMList)

{

$esxcli.storage.core.device.setconfig($false, $address, $true)

}

----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

The above has also been run directly on an ESXi host, having obtained the list of RDM LUNs:

Example:

Import-Module VMware.VimAutomation.Core

Connect-VIServer "ESXiHost" -Credential (Get-Credential)

$esxcli = Get-EsxCli

$esxcli.storage.core.device.setconfig($false, "(RDM Identifier Number)", $true)

$esxcli.storage.core.device.setconfig($false, "(RDM Identifier Number)", $true)

# ...one setconfig call per RDM LUN, repeated for each identifier

Disconnect-VIServer * -Confirm:$false | Out-Null

------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

I've SSH'd onto the given hosts and run the following against known RDM LUNs.

Example:

esxcli storage core device list -d (RDM Identifier Number)

The "Is Perennially Reserved: True" statement is as it should be.  So scratching my head as to what else to do?

I've observed on previous hosts upgraded from 6.0 to 6.5 in the same cluster (which also took forever to start up) that the flag has reverted back to "Is Perennially Reserved: false". I'm wondering whether the setting is not persisting between reboots?
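(For reference, a cluster-wide check of the flag, rather than SSH-ing into each node, might look something like the sketch below; "Cluster" is a placeholder for the cluster name used earlier, and the V2 esxcli interface is assumed.)

# Sketch only: report the flag for each RDM LUN on every host in the cluster,
# so it can be re-checked after a reboot without SSH-ing into each node.
$RDMList = Get-VM -Location "Cluster" |
    Get-HardDisk -DiskType "RawPhysical", "RawVirtual" |
    Select-Object -ExpandProperty ScsiCanonicalName -Unique

Get-Cluster -Name "Cluster" | Get-VMHost | ForEach-Object {
    $esxHost = $_
    $esxcli = Get-EsxCli -VMHost $esxHost -V2
    foreach ($naa in $RDMList) {
        # IsPerenniallyReserved comes back as the string "true" or "false"
        $device = $esxcli.storage.core.device.list.Invoke(@{device = $naa})
        [PSCustomObject]@{
            Host     = $esxHost.Name
            Device   = $naa
            Reserved = $device.IsPerenniallyReserved
        }
    }
}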

Any help or advice on the above would be greatly appreciated.

11 Replies
LucD
Leadership

I use the following script to set this.

Can you see if that works for you?

You will of course have to change the 'IBM Fibre Channel Disk' string to text that reflects your storage solution.
Check first on one ESXi node with the list method to find that text.

I'm only connected to the vCenter, not the individual ESXi nodes.

Get-VMHost | %{

    $esx = $_

    $esxcli = Get-EsxCli -VMHost $esx -V2

    $esxcli.storage.core.device.list.Invoke() | where{$_.DisplayName -match "^IBM Fibre Channel Disk"} | %{

        $sConfig = @{

            sharedclusterwide = $true

            device = $_.Device

            perenniallyreserved = $true

        }

        $esxcli.storage.core.device.setconfig.Invoke($sConfig) > $null

    }

}
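(For that "check first with the list method" step, a quick look on one node along these lines should show the DisplayName text to match on; this is only a sketch, and the property selection is just a suggestion.)

# Sketch: inspect one host to find the DisplayName pattern used by your storage
$esxcli = Get-EsxCli -VMHost (Get-VMHost | Select-Object -First 1) -V2
$esxcli.storage.core.device.list.Invoke() |
    Select-Object Device, DisplayName, IsPerenniallyReserved |
    Format-Table -AutoSize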


Blog: lucd.info  Twitter: @LucD22  Co-author PowerCLI Reference

DaddyDee
Enthusiast

Hi LucD,

I now won't be able to check until I'm back in the office tomorrow morning.

How would I incorporate a specific cluster, in this case "Cluster-B", into the script, as the slow start-up of hosts has only come about in that named cluster?

Cheers.

LucD
Leadership

Change the first line to

Get-Cluster -Name MyCluster | Get-VMHost | %{


Blog: lucd.info  Twitter: @LucD22  Co-author PowerCLI Reference

DaddyDee
Enthusiast

Cheers for that LucD,

Unfortunately, having run the script this morning while connected to the respective VC, I then SSH'd into one of the nodes and ran:

esxcli storage core device list -d

(screenshot: the esxcli output still shows "Is Perennially Reserved: false")

The same was true of other random RDM LUNs I picked which are connected to the cluster.

The only time this changes is when I run a separate script directly on the ESXi host:

Import-Module VMware.VimAutomation.Core

Connect-VIServer "ESXi_Host" -Credential (Get-Credential)

$esxcli = Get-EsxCli

$esxcli.storage.core.device.setconfig($false, "(RDM Identifier Number)", $true)

$esxcli.storage.core.device.setconfig($false, "(RDM Identifier Number)", $true)

# ...one setconfig call per RDM LUN, repeated for each identifier

I should mention at this point that the server hardware is HPE G7 and the storage vendor is 3PARdata. I'm starting to wonder, as G7s are not supported for ESXi 6.5, whether this may be the problem?

LucD
Leadership

Indeed strange; for me it works (I'm using an IBM SVC SAN).


Blog: lucd.info  Twitter: @LucD22  Co-author PowerCLI Reference

75vining75
Contributor

We just performed the same work on 5 ESXi 6.5 hosts. In my script, I had to connect to each host using Connect-VIServer before running the Get-EsxCli commands to update the RDM settings. I also checked each RDM's PerenniallyReserved flag before updating it to $true. Here is my code for this portion. There are variables that I set before running it, but you'll be able to see what I am doing...

    # Loop through each host, and each disk, and check whether it needs updating.
    Foreach ($VMHost in $VMHostsName) {

        Write-Host "Connecting to VMHost $VMHost...." -ForegroundColor Cyan

        # Get the ESXCLI connection for this VMHost in the cluster.
        $esxcli = Get-EsxCli -VMHost $VMHost

        Foreach ($RDMDisk in $RDMDisksNamesSorted) {

            # Check whether the RDM PerenniallyReserved flag is already set to true.
            # If it is true, then skip it.
            if (($esxcli.storage.core.device.list("$RDMDisk").IsPerenniallyReserved) -eq "true") {
                Write-Host "RDM Device $RDMDisk does not require updating." -ForegroundColor Green
            }
            else {
                # If the flag is set to false, set the configuration to true.
                Write-Host "Updating RDM Device $RDMDisk" -ForegroundColor Yellow

                # Set the configuration to "PerenniallyReserved".
                # setconfig method: void setconfig(boolean detached, string device, boolean perenniallyreserved)
                $esxcli.storage.core.device.setconfig($false, $RDMDisk, $true) | Out-Null
                Write-Host "RDM Device $RDMDisk configuration has been updated" -ForegroundColor Cyan
            }
        }
    }
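The Connect-VIServer and variable set-up mentioned above aren't shown in the snippet; they might look something along these lines (a sketch only, with $ClusterName and the credential handling as assumptions that mirror the variable names used above):

# Sketch: the variables referenced above, gathered from vCenter first,
# followed by a direct connection to each host.
$ClusterName = "Cluster-B"
$HostCred = Get-Credential    # root (or equivalent) credentials for the hosts

$VMHostsName = Get-Cluster -Name $ClusterName | Get-VMHost |
    Select-Object -ExpandProperty Name | Sort-Object

$RDMDisksNamesSorted = Get-VM -Location (Get-Cluster -Name $ClusterName) |
    Get-HardDisk -DiskType "RawPhysical", "RawVirtual" |
    Select-Object -ExpandProperty ScsiCanonicalName -Unique | Sort-Object

# Connect to each host directly so the Get-EsxCli calls above run against that host.
Foreach ($VMHost in $VMHostsName) {
    Connect-VIServer -Server $VMHost -Credential $HostCred | Out-Null
}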

DaddyDee
Enthusiast

I'd just like to thank you both for your input; I really do appreciate it.

Recap:

  • All PerenniallyReserved flags on the hosts are set to true
  • ESXTOP has been checked at both VM and HBA level, and nothing is showing out of the norm in terms of thresholds breached
  • RDMs are also present in another, non-impacted cluster (x15); no slow start on those HP G8 servers, upgraded with the HP custom ISO for ESXi 6.5 U2
  • The problem cluster is all HP G7 servers, RDMs (x16), with 6.5 U2 installed from customised HP ISOs (a combination of ESXi 6.0 U3 and ESXi 6.5 U2) built with Image Builder PowerCLI due to a known PSOD caused by a 6.5 U2 driver (a rough sketch of that kind of Image Builder flow is below)
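For anyone curious, a generic Image Builder flow for that kind of driver swap might look roughly like this. It's a sketch only: every depot path, profile name and package name below is hypothetical rather than the actual HPE/3PAR build used here.

# Sketch only: all file paths, profile names and package names are hypothetical.
Add-EsxSoftwareDepot "C:\Depots\HPE-ESXi-6.5U2-depot.zip"
Add-EsxSoftwareDepot "C:\Depots\driver-from-6.0U3-depot.zip"

# Clone the vendor profile so the original stays untouched.
New-EsxImageProfile -CloneProfile "HPE-ESXi-6.5.0-Update2" -Name "HPE-6.5U2-G7-custom" -Vendor "Custom"

# Swap the offending driver for the older, working version.
Remove-EsxSoftwarePackage -ImageProfile "HPE-6.5U2-G7-custom" -SoftwarePackage "problem-driver-vib"
Add-EsxSoftwarePackage -ImageProfile "HPE-6.5U2-G7-custom" -SoftwarePackage "older-driver-vib"

# Export the customised profile to a bootable ISO.
Export-EsxImageProfile -ImageProfile "HPE-6.5U2-G7-custom" -ExportToIso -FilePath "C:\ISO\HPE-6.5U2-G7-custom.iso"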

It really is a mystery!!

Unfortunately, it would appear that due to the HP G7 hardware not being supported with ESXi 6.5 U2, we've had to plod on with builds taking up to nearly 4 hrs per host!! Not ideal, I know... but we're down to the final 3 hosts from a cluster of 12... 🙂

svillar
Enthusiast

Hi LucD,

Is it detrimental to set PerenniallyReserved=True on RDMs which are not part of Microsoft clusters? I cannot find documentation on this. I used your script to change all the RDMs on my SVC, not understanding that it was only needed if the RDM is used for MSCS.

If so, I'd like to change them all back to False, identify the specific RDMs used only in MSCS, and reapply your script to those specific RDMs. Do you have revised scripts for this?

Thank you,

Scott.....

LucD
Leadership

Yes, set it only for RDMs used in an MSCS cluster; otherwise you might get the message documented in KB2040666.

The issue will be identifying the LUNs used in an MSCS cluster.

Is there a rule (a naming convention for the participating VMs, for example) to identify the VMs participating in an MSCS cluster?

If you have the names of the VMs that form the MSCS cluster, you could do something like this

# VMs participating in the MSCS

$vmMSCS = 'vm1', 'vm2'


# Get the LUNs used in a MSCS

$lun = @()

Get-VM -Name $vmMSCS | Get-HardDisk -DiskType "RawPhysical", "RawVirtual" |

ForEach-Object -Process {

   $lun += $_.ScsiCanonicalName

}


# Determine the vSphere cluster in which the MSCS VMs are located

$cluster = Get-VM -Name $vmMSCS | Get-Cluster


# Set the MSCS LUNS to perennially reserved

Get-VMHost -Location $cluster -PipelineVariable esx |

ForEach-Object -Process {

   $esxcli = Get-EsxCli -VMHost $esx -V2

   $esxcli.storage.core.device.list.Invoke() |

   Where-Object { $lun -contains $_.Device } |

   ForEach-Object -Process {

      $sConfig = @{

         sharedclusterwide   = $true

         device              = $_.Device

         perenniallyreserved = $true

      }

      $esxcli.storage.core.device.setconfig.Invoke($sConfig) > $null

   }

}


Blog: lucd.info  Twitter: @LucD22  Co-author PowerCLI Reference

Reply
0 Kudos
svillar
Enthusiast
Enthusiast

LucD,

Thanks for the information, but how do I revert the ones which are not MSCS? I can identify the RDMs in the SVC with their "naa...." identifiers. I have 2 hosts with many RDMs and I want to revert them back asap. A one-liner would help me out... something like:

Get-host "hostname"  | get-rdm "na...." | Set-perennially False

I am not a coder, so I realize the above does not exist. It would be the easiest and quickest way to revert. I don't mind running it multiple times with the same host and the correct "na...".

Thanks for the help.  I didn't realize that my data was at risk!

LucD
Leadership

To set all LUNs on a number of ESXi nodes back, you could do

Get-VMHost -Name esx1, esx2 |

ForEach-Object -Process {

   $esxcli = Get-EsxCli -VMHost $_ -V2

   $esxcli.storage.core.device.list.Invoke() |

   ForEach-Object -Process {

      $sConfig = @{

         sharedclusterwide   = $true

         device              = $_.Device

         perenniallyreserved = $false

      }

      $esxcli.storage.core.device.setconfig.Invoke($sConfig) > $null

   }

}

After that, you could use the previous snippet to only set the LUNs that are involved in the MSCS to perennially reserved.
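To double-check the end result afterwards, something along these lines could list which devices still have the flag set (again only a sketch; esx1 and esx2 are the same placeholder host names as above):

# Sketch: list devices that are still flagged as perennially reserved on each host.
Get-VMHost -Name esx1, esx2 | ForEach-Object {
    $esxHost = $_
    $esxcli = Get-EsxCli -VMHost $esxHost -V2
    $esxcli.storage.core.device.list.Invoke() |
        Where-Object { $_.IsPerenniallyReserved -eq 'true' } |
        Select-Object @{N = 'Host'; E = { $esxHost.Name }}, Device, DisplayName
}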


Blog: lucd.info  Twitter: @LucD22  Co-author PowerCLI Reference
