Butcha
Contributor

SCSIController Object Behaving Differently in vSphere 5 than vSphere 4.1

Hi guys!

About a month ago, I put some automation in place at a customer's site for adding RDM disks to replicated VMs being recovered on ESXi 4.1 hosts. All worked well before I left. However, during my absence, the customer upgraded the ESXi hosts to 5, and now certain steps in the script no longer function as expected. See the portion of code in question below:

Import-Csv "C:\TEMP\RDM_USRI.csv" -UseCulture | %{
    # Look up both VMs named in the current CSV row
    $vm = Get-VM -Name $_.VM
    $vm2 = Get-VM -Name $_.VM2
    # Grab the SCSI controller sitting on bus 1 of each VM
    $sc = Get-ScsiController -VM $vm | where {$_.ExtensionData.BusNumber -eq 1}
    $sc2 = Get-ScsiController -VM $vm2 | where {$_.ExtensionData.BusNumber -eq 1}

    # LUN canonical name for the RDM, taken from the CSV
    $vmLun = $_.CanonicalName #[$_.VM + "|" + $_.SCSIDevNum]
    $rdmPaths = @()

Here, I am declaring some variables for use in the script, the most relevant being $sc and $sc2. Basically, I am assigning the SCSI controller on bus 1 of each VM to these variables, but since the ESXi host upgrades, this no longer works: the variables end up evaluating to NULL despite a SCSI controller existing at that location. This problem doesn't cause the script to fail outright, because VMs can legitimately be recovered without a SCSI 1 device; in that case, there is code that adds the device to the VMs. But since we check the $sc variable later in order to decide whether or not to add SCSI 1 in the first place, this is troubling!
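For context, the later decision point hinges on that variable, roughly like this (a simplified sketch, not the production code; $someRdmPath is a hypothetical placeholder for the mapping-file path the real script computes):

if (-not $sc) {
    # No controller on bus 1 yet: create one while adding the first RDM.
    # New-ScsiController builds a new controller around the disk piped into it.
    New-HardDisk -VM $vm -DiskPath $someRdmPath |
        New-ScsiController -Type VirtualLsiLogicSAS -BusSharingMode Physical
}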

Another weird occurrence since the vSphere upgrade:

Foreach ($disk in $vm.HardDisks){
    if ($disk.DiskType -eq "RawPhysical") {
        # Remember the mapping-file path of each physical-mode RDM
        $rdmPaths += $disk.Filename
    }
}
Here, we are collecting the RDM file paths into an array for later processing, but for some reason $rdmPaths will not populate, even though I can verify that the $disk and $vm ($vm.HardDisks) variables ARE being populated with valid data.
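For anyone who wants to reproduce the check, a quick dump along these lines shows what each disk actually reports:

$vm.HardDisks | Select-Object Name, DiskType, Filename | Format-Table -AutoSize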

What differences between these vSphere versions would render these pieces of logic inoperable?

Better yet, would a vSphere upgrade alone cause this?

LucD
Leadership

You didn't say which type of SCSI controller is used, but from some quick tests it looks as if the BusNumber is still there in vSphere 5 and follows the same logic.

Are you sure there are 2 SCSI controllers on those VMs?

If not, you will not see a BusNumber 1.
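You can quickly list what is actually on the VM with something like this:

Get-ScsiController -VM $vm |
    Select-Object Name, Type, @{N='BusNumber'; E={$_.ExtensionData.BusNumber}}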

The same goes for the DiskType property; that hasn't changed in vSphere 5.

Are you sure you defined $rdmPaths as an array?

$rdmPaths = @()

Or are these RDMs perhaps of type RawVirtual?

I normally test for the 2 types to detect RDMs.
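For example, something along these lines catches both types:

Get-HardDisk -VM $vm | where {"RawPhysical","RawVirtual" -contains $_.DiskType}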


Blog: lucd.info  Twitter: @LucD22  Co-author PowerCLI Reference

Butcha
Contributor

Thanks for your timely reply, LucD!

Just for clarification, we are adding RDM disks in physical compatibility mode, on a VirtualLsiLogicSAS controller with physical bus sharing.
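In outline, the initial add on the first VM goes something like this (a simplified sketch, not the actual script; $esxHost is a placeholder for however the real script resolves the host):

# Resolve the LUN by its canonical name, then attach it in physical compatibility mode
$lun = Get-ScsiLun -VmHost $esxHost -CanonicalName $vmLun
New-HardDisk -VM $vm -DiskType RawPhysical -DeviceName $lun.ConsoleDeviceName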

So, I was able to solve issue A, populating my variables with the SCSI controller of my choosing. It was actually my own error... lol. Sorry to waste time with that.

However, the 2nd issue is most troubling, mostly because this was working like clockwork up until this week, when I tested it after my customer upgraded the related ESXi hosts from 4.1 to 5.

Below is the line in question:

I must reiterate that this WORKED like clockwork BEFORE the ESXi host upgrades from 4.1 to 5.

[Screenshot: powergui_array_contains_data.jpg]

**** I have cleansed customer-specific data from the images, so any gaps are not really present in the code ****

The $2ndVMhd variable assignment is where the error occurs now, complaining about an invalid backing. But as you can see, the variable holding the DiskPath parameter (of the New-HardDisk cmdlet) actually contains valid data that I pulled from the first VM, which already has this disk attached. $rdmPaths is indeed an array, initialized at the beginning of the script, and since this loops through, adding one disk at a time to the first VM, I am forcing only the last disk added during each loop to be used from $rdmPaths.
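Since I've cleansed the screenshots, here is roughly what that line looks like (a reconstruction, not the literal code; the real arguments differ):

# Reconstruction of the failing line: attach the existing mapping file to VM 2.
# Indexing the last path collected in this loop iteration (hypothetical indexing).
$2ndVMhd = New-HardDisk -VM $vm2 -DiskPath $rdmPaths[-1]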

Yet when this line executes, it bombs out with:

[Screenshot: powergui_error.jpg (the invalid backing error)]

To make matters even more interesting, I am able to add this very disk, along with the SCSI controller it should use, to the VM in question just fine via the VI client, using this same path (navigated to with the GUI browser function).

LucD
Leadership

Is that RDM still attached to the first VM when you run the New-HardDisk cmdlet?


Blog: lucd.info  Twitter: @LucD22  Co-author PowerCLI Reference

Butcha
Contributor

Yes it is, LucD, as I have verified using breakpoints in my IDE and constant checking in the VI client. This was the case even before the script's present malfunction...

Butcha
Contributor

LucD,

I was speaking with my customer about this, and it turns out that I did not have ALL of the relevant info regarding the scope of their ESXi upgrades. The protection site is actually still running these VMs on ESXi 4.1; however, we have been attempting to recover them on ESXi 5 hosts.

LOL... I had no idea about this. Can you shed any light on what kind of bind we are in, actually trying to do this?

Would the VMs being on down-level VMware Tools and/or virtual hardware break this?

Butcha
Contributor

Quick update:

After updating the VM's VMware Tools and upgrading its virtual hardware to version 8, it still fails in the exact same place.

My next test will be to create the same conditions on the source cluster, which is ESXi 4.1, and test there to verify whether or not it even makes a difference...

LucD
Leadership

I wouldn't expect that the VMware Tools version has any impact.

The HW version perhaps, but as you already demonstrated, it doesn't seem to do so.

That same message has been reported several times with SRM, it seems.

It could have something to do with the way the LUN is seen at the target datacenter.

It could also be that the .vmdk header file contains some erroneous info.


Blog: lucd.info  Twitter: @LucD22  Co-author PowerCLI Reference

Butcha
Contributor

I'm not sure about corruption, since the 1st VM can obviously make use of, first, the actual LUN during its initial add and, second, the VMDK file after its addition. Plus, the 2nd VM can add this mapping without error if I configure it manually via the VI client.

I plan on using Onyx to see what it spits out when I perform the operation manually again, just as an exhaustive measure... I'll definitely report back what I find.
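For reference, what Onyx should capture is roughly a ReconfigVM call carrying a raw-disk-mapping backing. A skeleton of that spec would look something like this (keys, unit numbers, and paths omitted or left as placeholders; this is not captured output):

# Build a device-change spec for adding an RDM (simplified skeleton)
$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$devChange = New-Object VMware.Vim.VirtualDeviceConfigSpec
$devChange.Operation = [VMware.Vim.VirtualDeviceConfigSpecOperation]::add

$backing = New-Object VMware.Vim.VirtualDiskRawDiskMappingVer1BackingInfo
$backing.CompatibilityMode = "physicalMode"
$backing.DeviceName = ""    # /vmfs/devices/disks/vml... path of the LUN (placeholder)
$backing.FileName = ""      # mapping .vmdk path (placeholder)

$disk = New-Object VMware.Vim.VirtualDisk
$disk.Backing = $backing
$devChange.Device = $disk

$spec.DeviceChange = @($devChange)
# $vm2.ExtensionData.ReconfigVM($spec)   # would apply the change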
