saitoz
Contributor

Replicated LUNs and RDMs

Hello there,

We have 2 nodes and 2 storage arrays in the same location. Storage 1 is replicated to storage 2. Initially we created 4 LUNs on storage 1 and added them as RDMs to the 2 nodes. If storage 1 goes down or is disconnected, we need to add the 2 replicated LUNs from storage 2 to the 2 nodes. When the system runs on storage 1 the LUN numbers are 100 and 101; when on storage 2 they are 110 and 111. We need to do this using PowerCLI.

Any help would be greatly appreciated.

LucD, I have read your amazing solutions/answers; would you please help me? Thank you.

LucD
Leadership

How, and where, will you detect that storage1 becomes unavailable?

Because that would be the place to trigger the script that will connect the LUNs on storage2 as RDMs.


saitoz
Contributor

Thank you for your quick response.

We will do this manually, with a script that will "unexport" the LUNs from storage 1 and "export" the LUNs from storage 2. This script will then trigger the PowerCLI script.
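For illustration, a minimal sketch of such a wrapper; the vendor CLI name and verbs, and the script path, are hypothetical placeholders:

# Hypothetical storage vendor CLI calls (placeholders, not a real tool)
& vendorcli unexport lun 100,101    # take the storage 1 LUNs offline
& vendorcli export lun 110,111      # present the storage 2 copies

# Then trigger the PowerCLI script that swaps the RDMs
& powershell.exe -File "C:\Scripts\SwitchRDMs.ps1"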

saitoz
Contributor

Hey Luc,

I am planning to use your solution below; do you think it will cover what I need?

http://communities.vmware.com/message/2057164

LucD
Leadership

If I understood the question correctly, the following shows how to find the backup LUNs and replace the hard disks connected to both nodes with RDMs pointing to the backup LUNs.

$luns = Get-VMHost | Get-ScsiLun -LunType disk
$lun1 = $luns | where {$_.LunID -eq 110}
$lun2 = $luns | where {$_.LunID -eq 111}

$node1 = Get-VM -Name node1
$node2 = Get-VM -Name node2

Get-HardDisk -VM $node1 -Name "Hard disk 2" | Remove-HardDisk -Confirm:$false
New-HardDisk -VM $node1 -DeviceName $lun1.ConsoleDeviceName -DiskType RawPhysical
Get-HardDisk -VM $node2 -Name "Hard disk 2" | Remove-HardDisk -Confirm:$false
New-HardDisk -VM $node2 -DeviceName $lun2.ConsoleDeviceName -DiskType RawPhysical

But like I said I'm not sure I understood the question correctly.


LucD
Leadership

Our answers must have crossed each other. :)

That should work, but the code above does more or less the same thing.

And the problem I mentioned in that other thread might have been solved in the latest PowerCLI build.

You'll have to try that out in your environment.


saitoz
Contributor

Your script assumes "Hard disk 2" has already been added from storage 1, so I think the script should be:

$luns = Get-VMHost | Get-ScsiLun -LunType disk
$lun1 = $luns | where {$_.LunID -eq 110}
$lun2 = $luns | where {$_.LunID -eq 111}

$node1 = Get-VM -Name node1 
$node2 = Get-VM -Name node2
Get-HardDisk -VM $node1 -Name "Hard disk 2" | Remove-HardDisk -Confirm:$false
Get-HardDisk -VM $node2 -Name "Hard disk 2" | Remove-HardDisk -Confirm:$false

At this point, the LUNs from storage 1 must be unexported and the LUNs from storage 2 exported.
Then rescan for the new LUNs and add the disks.

Get-VMHost | Get-VMHostStorage -RescanAllHBA

New-HardDisk -VM $node1 -DeviceName $lun1.ConsoleDeviceName -DiskType RawPhysical
New-HardDisk -VM $node2 -DeviceName $lun2.ConsoleDeviceName -DiskType RawPhysical

This looks correct. The only thing I don't know is whether I will lose the data on these disks when I add the 2 replicated disks from storage 2 using this script. What do you think?

Thank you.

LucD
Leadership

The unexport and export of the LUNs is from the point of view of the storage controllers, right?

And that is done via the management interface to these controllers?

In that case you will indeed need to do a scan to "see" the new LUNs.
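For example, a rescan from PowerCLI could look like this (the host name is a placeholder):

# Rescan all HBAs and VMFS volumes so the newly exported LUNs become visible
Get-VMHost -Name "esx01.example.com" | Get-VMHostStorage -RescanAllHba -RescanVmfs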

When storage1 becomes unavailable, is that done gracefully? In other words, is all IO finished before the underlying LUNs are unexported?

And I assume that the synchronisation to the LUNs on storage2 continues?

In that case there shouldn't be any data loss, provided the application(s) can cope with the removal of a disk.


saitoz
Contributor

Yes, the unexport and export of the LUNs is from the point of view of the storage controllers. Say a database runs on the nodes: after we shut down the database, we unexport the LUNs, export the replicated LUNs from storage 2, and start the database again. So yes, there is no I/O traffic to the LUNs at that time.

When we start to use storage 1 again, synchronisation continues; replication works in reverse, from storage 2 to storage 1.

I will try the script and let you know if it works or not. :)

Thank you.

saitoz
Contributor

Are you sure Get-ScsiLun has a "LunID" property? Because it returns nothing.

LucD
Leadership (Accepted Solution)

My mistake, I'm using the VIProperties for ScsiLun, and LunID is in there.

You can use this definition instead:

New-VIProperty -Name lunID -ObjectType ScsiLun -Value {
    param($lun)

    # Extract the LUN number from the RuntimeName (e.g. vmhba1:C0:T0:L110)
    [int](Select-String ":L(?<lunID>\d+)$" `
        -InputObject $lun.RuntimeName).Matches[0].Groups['lunID'].Value
} -Force


saitoz
Contributor

As you said, I need to define the LunID property first. I also found it on your web page: http://www.lucd.info/2011/07/26/the-making-of-a-new-viproperty-called-lunid/

:)

The final script is then:

New-VIProperty -Name lunID -ObjectType ScsiLun -Value {
  param($lun)
  [int](Select-String ":L(?<lunID>\d+)$" `
    -InputObject $lun.RuntimeName).Matches[0].Groups['lunID'].Value
} -Force | Out-Null

$esxName = "XXXX"  #used, becase there 5 hosts in cluster and 5 lines goes to LUN1
$luns = Get-VMHost -Name $esxName | Get-ScsiLun -LunType disk
$lun1 = $luns | where {$_.LunID -eq 110}
$lun2 = $luns | where {$_.LunID -eq 111}

$node1 = Get-VM -Name "NodeA"
$node2 = Get-VM -Name "NodeB"

Get-HardDisk -VM $node1 -Name "Hard disk 2" | Remove-HardDisk -Confirm:$false
Get-HardDisk -VM $node2 -Name "Hard disk 2" | Remove-HardDisk -Confirm:$false

Get-VMHost | Get-VMHostStorage -RescanAllHBA

New-HardDisk -VM $node1 -DeviceName $lun1.ConsoleDeviceName -DiskType RawPhysical
New-HardDisk -VM $node2 -DeviceName $lun2.ConsoleDeviceName -DiskType RawPhysical

Thank you very much dude, you are great!

saitoz
Contributor

In the script we need to add the same disk to both nodes. I mean, after adding the RDM to NodeA, I must add the same RDM to NodeB. Do you have a short solution for this?

LucD
Leadership

Do you mean that you need to share the RDM between 2 VMs?

If yes, then you will also have to create a new SCSI controller with one of the 2 SCSI bus sharing modes.
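For example, a minimal sketch, assuming the VM is powered off and the disk should move onto the new controller (VM and disk names are placeholders):

$vm = Get-VM -Name "NodeA"
$hd = Get-HardDisk -VM $vm -Name "Hard disk 2"
# Create a SCSI controller with virtual bus sharing and attach the disk to it
New-ScsiController -HardDisk $hd -Type VirtualLsiLogicSAS -BusSharingMode Virtual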


saitoz
Contributor

Yes, I need to share the RDM between two nodes. It says the nodes must be powered off.

So in my scenario, after removing the disk, I don't need to add/remove a SCSI controller to add the new disk; I need to use the existing SCSI controller.

New-VIProperty -Name lunID -ObjectType ScsiLun -Value {
  param($lun)
  [int](Select-String ":L(?<lunID>\d+)$" `
    -InputObject $lun.RuntimeName).Matches[0].Groups['lunID'].Value
} -Force | Out-Null


$esxName = "XXXX"
$luns = Get-VMHost -Name $esxName | Get-ScsiLun -LunType disk
$lun1 = $luns | where {$_.LunID -eq 100}

$vm1 = Get-VM -Name "NodeA"
$vm2 = Get-VM -Name "NodeB"

$disk = Get-VM $vm1 | Get-HardDisk | Select -First 5
$ctrl1=Get-ScsiController -HardDisk $disk | where {$_.BusSharingMode -eq "Virtual"}

$disk = Get-VM $vm2 | Get-HardDisk | Select -First 5
$ctrl2=Get-ScsiController -HardDisk $disk | where {$_.BusSharingMode -eq "Virtual"}

Get-HardDisk -VM $vm1 -Name "Hard disk 2" | Remove-HardDisk -Confirm:$false
Get-HardDisk -VM $vm2 -Name "Hard disk 2" | Remove-HardDisk -Confirm:$false

$DeviceName=$lun1.ConsoleDeviceName

$hd1 = New-HardDisk -VM $vm1 -DeviceName $DeviceName -DiskType RawVirtual
$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.DeviceChange += New-Object VMware.Vim.VirtualDeviceConfigSpec
....

..... Here, what do I do? :)

.....

$vm2.ExtensionData.ReconfigVM($spec)
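One possible completion, as an untested sketch: attach the pointer vmdk that New-HardDisk created on $vm1 to $vm2 through the API. The unit number and disk mode below are assumptions, not confirmed in this thread:

$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$devSpec = New-Object VMware.Vim.VirtualDeviceConfigSpec
$devSpec.Operation = "add"
$disk = New-Object VMware.Vim.VirtualDisk
$disk.ControllerKey = $ctrl2.ExtensionData.Key   # the shared-bus controller on $vm2
$disk.UnitNumber = -1                            # assumption: let vSphere assign a free unit
$backing = New-Object VMware.Vim.VirtualDiskRawDiskMappingVer1BackingInfo
$backing.FileName = $hd1.Filename                # the pointer vmdk created on $vm1
$backing.CompatibilityMode = "virtualMode"       # matches -DiskType RawVirtual
$backing.DiskMode = "independent_persistent"     # assumption
$disk.Backing = $backing
$devSpec.Device = $disk
$spec.DeviceChange += $devSpec
$vm2.ExtensionData.ReconfigVM($spec)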

LucD
Leadership

Afaik you have to start with a SCSI controller that allows bus sharing.

See Sharing a .vmdk between two VMs for an example script.
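That script isn't reproduced here, but the general pattern it follows looks roughly like this (a sketch; $lun, $vmA and $vmB stand for the LUN and the two nodes):

# Create the RDM on the first VM and put it on a controller with bus sharing
$hd1 = New-HardDisk -VM $vmA -DeviceName $lun.ConsoleDeviceName -DiskType RawVirtual
New-ScsiController -HardDisk $hd1 -Type VirtualLsiLogicSAS -BusSharingMode Virtual
# Attach the same pointer vmdk to the second VM, again on a shared-bus controller
$hd2 = New-HardDisk -VM $vmB -DiskPath $hd1.Filename
New-ScsiController -HardDisk $hd2 -Type VirtualLsiLogicSAS -BusSharingMode Virtual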


saitoz
Contributor

What I mean is:

While the two nodes were powered off, I added the SCSI controllers and the shared RDMs.

Then the two nodes were powered on and the disks were removed, but not the SCSI controllers. Now I need to add the replicated disk using the existing SCSI controllers.

saitoz
Contributor

Well,

$ctrl1 is on vm1 and $ctrl2 is on vm2. When I try

$hd1 = New-HardDisk -VM $vm1 -DeviceName $DeviceName -DiskType RawVirtual -Controller $ctrl1

New-HardDisk -VM $vm2 -DiskPath $hd1.Filename -Controller $ctrl2

it adds the disk on vm1, but gives the following error on vm2:

New-HardDisk : 16.04.2013 14:25:47    New-HardDisk        Incompatible device backing specified for device '0'.   
At line:31 char:13
+ New-HardDisk <<<<  -VM $vm2 -DiskPath $hd1.Filename -controller $ctrl2
    + CategoryInfo          : NotSpecified: (:) [New-HardDisk], InvalidDeviceBacking
    + FullyQualifiedErrorId : Client20_VirtualDeviceServiceImpl_AttachHardDisk_ViError,VMware.VimAutomation.ViCore.Cmdlets.Commands.VirtualDevice.NewHardDisk

:(

LucD
Leadership

I suspect this might be because the SCSI controller is not configured for sharing.

If you do a "Get-ScsiController", is the BusSharingMode showing Physical or Virtual?
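For instance (the VM name is a placeholder):

# List the VM's SCSI controllers together with their bus sharing mode
Get-ScsiController -VM (Get-VM -Name "NodeB") | Select-Object Name, Type, BusSharingMode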


saitoz
Contributor

The BusSharingMode is Virtual.

While the nodes were powered off I already added the SCSI controllers and the disk, then powered them on and removed the disk. I just want to add the same disk to the nodes using the existing SCSI controllers. When I look, I can see the controllers still exist on the nodes.

$disk = Get-VM "NodeA" | Get-HardDisk | Select -First 5
$ctrl1=Get-ScsiController -HardDisk $disk | where {$_.BusSharingMode -eq "Virtual"}
$disk = Get-VM "NodeB" | Get-HardDisk | Select -First 5
$ctrl2=Get-ScsiController -HardDisk $disk | where {$_.BusSharingMode -eq "Virtual"}

$esxName = "vmserver01.ibb.gov.tr"
$luns = Get-VMHost -Name $esxName | Get-ScsiLun -LunType disk
$lun1 = $luns | where {$_.LunID -eq 200}

$vm1 = Get-VM -Name "NodeA"
$vm2 = Get-VM -Name "NodeB"

$DeviceName=$lun1.ConsoleDeviceName

$hd1 = New-HardDisk -VM $vm1 -DeviceName $DeviceName -DiskType RawVirtual -controller $ctrl1

Up to here the script works. Now I need to add the same disk to NodeB; $hd1.Filename is [XXX_VMFS01] NodeA/NodeA_1.vmdk

Then I run this command in the script:

New-HardDisk -VM $vm2 -DiskPath $hd1.Filename -Controller $ctrl2

and here it gives the error.
