OK, now my ESX servers see the new LUNs but I am still unable to add any Raw Device Mappings. Is this because I am using an evaluation version of the ESX server and/or the VI client?
Yes, all of the 10GB and 4GB LUNs are the ones I want to add as RDM disks.
OK, one more thing: can you run fdisk -l?
This should show if any of those disks have any partitioning defined.
-KjB
Disk /dev/sda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1        9728    78149128+   7  HPFS/NTFS

Disk /dev/sdb: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1          13      104391   83  Linux
/dev/sdb2              14         650     5116702+  83  Linux
/dev/sdb4            9394        9729     2698920    f  Win95 Ext'd (LBA)
/dev/sdb5            9394        9462      554211   82  Linux swap
/dev/sdb6            9463        9716     2040223+  83  Linux
/dev/sdb7            9717        9729      104391   fc  Unknown

Disk /dev/sdc: 364.3 GB, 364353886720 bytes
255 heads, 63 sectors/track, 44296 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1   *           1       44296   355807556   fb  Unknown
Disk /dev/sdd: 10.7 GB, 10737418240 bytes
64 heads, 32 sectors/track, 10240 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Disk /dev/sdd doesn't contain a valid partition table
Disk /dev/sde: 20 MB, 20971520 bytes
1 heads, 40 sectors/track, 1024 cylinders
Units = cylinders of 40 * 512 = 20480 bytes
Disk /dev/sde doesn't contain a valid partition table
I/O error: dev 08:40, sector 40
Disk /dev/sdf: 10.7 GB, 10737418240 bytes
64 heads, 32 sectors/track, 10240 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Disk /dev/sdf doesn't contain a valid partition table
Disk /dev/sdg: 4294 MB, 4294967296 bytes
133 heads, 62 sectors/track, 1017 cylinders
Units = cylinders of 8246 * 512 = 4221952 bytes
Disk /dev/sdg doesn't contain a valid partition table
Disk /dev/sdh: 10.7 GB, 10737418240 bytes
64 heads, 32 sectors/track, 10240 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Disk /dev/sdh doesn't contain a valid partition table
Disk /dev/sdi: 4294 MB, 4294967296 bytes
133 heads, 62 sectors/track, 1017 cylinders
Units = cylinders of 8246 * 512 = 4221952 bytes
Disk /dev/sdi doesn't contain a valid partition table
Disk /dev/sdj: 10.7 GB, 10737418240 bytes
64 heads, 32 sectors/track, 10240 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Disk /dev/sdj doesn't contain a valid partition table
Disk /dev/sdk: 4294 MB, 4294967296 bytes
133 heads, 62 sectors/track, 1017 cylinders
Units = cylinders of 8246 * 512 = 4221952 bytes
Disk /dev/sdk doesn't contain a valid partition table
Disk /dev/sdl: 20 MB, 20971520 bytes
1 heads, 40 sectors/track, 1024 cylinders
Units = cylinders of 40 * 512 = 20480 bytes
Disk /dev/sdl doesn't contain a valid partition table
I/O error: dev 08:b0, sector 40
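Incidentally, a listing like the one above can be filtered down to just the disks fdisk flags as blank. A minimal, self-contained sketch (the sample input is abbreviated from the listing):

```shell
# Pull out the device names fdisk reports as having no partition table.
# Sample input abbreviated from the fdisk -l listing above.
fdisk_out="Disk /dev/sdd doesn't contain a valid partition table
Disk /dev/sde doesn't contain a valid partition table"

blank_disks=$(printf '%s\n' "$fdisk_out" |
  awk '/valid partition table/ {print $2}')
echo "$blank_disks"
```

On the real host you would pipe `fdisk -l 2>/dev/null` straight into the awk filter instead of using a sample string.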
I don't know what sde or sdl are. I did not create them.
/dev/sde and /dev/sdl look like the same LUN 7, but are appearing differently to the ESX host.
The rest of the disks look fine.
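To confirm that suspicion, the Linux device nodes can be mapped back to their vmhba paths. A minimal sketch, assuming the ESX 3.x service-console tool esxcfg-vmhbadevs (guarded so it no-ops on any other host):

```shell
# Sketch: map /dev/sdX nodes back to vmhbaC:T:L paths to check whether
# /dev/sde and /dev/sdl are one LUN reached through two different HBAs.
show_duplicate_paths() {
  # esxcfg-vmhbadevs prints "vmhbaC:T:L  /dev/sdX" pairs on ESX 3.x
  esxcfg-vmhbadevs | grep -e sde -e sdl
}
# Only meaningful on an ESX service console, so guard the call.
if command -v esxcfg-vmhbadevs >/dev/null 2>&1; then
  show_duplicate_paths
fi
```

If both nodes resolve to the same target:LUN on different adapters, it's one LUN seen down two paths.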
Can you mask LUN 7 away, and run a rescan on vmhba1 and vmhba2?
Then go to the Add Storage wizard and see if any LUNs appear. Of course, I know you're not trying to add VMFS, but check whether the LUNs show up. Also, in the storage adapters configuration section, click on vmhba1/2 and post that screenshot. I just want to confirm from the VC GUI what you see in the service console.
Last thing is to try adding the RDM manually using the VI client connected directly to the ESX host.
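On the service console, the mask-and-rescan step could be scripted roughly as follows. This is a sketch assuming ESX 3.x commands, and the adapter:target:LUN strings are assumed paths for LUN 7, so verify them against your own targets first:

```shell
# Sketch only: mask LUN 7 on both HBAs via the Disk.MaskLUNs advanced
# option, then rescan each adapter (ESX 3.x service console).
mask_and_rescan() {
  # "vmhba1:0:7;vmhba2:0:7" is an assumed path list -- check yours first.
  esxcfg-advcfg -s "vmhba1:0:7;vmhba2:0:7" /Disk/MaskLUNs
  esxcfg-rescan vmhba1
  esxcfg-rescan vmhba2
}
# Guarded so it no-ops on hosts without the ESX tools.
if command -v esxcfg-rescan >/dev/null 2>&1; then
  mask_and_rescan
fi
```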
-KjB
What do you mean by "mask LUN 7 away"? Meanwhile, below is the screenshot you requested.
The "add storage wizard" shows the LUNs correctly. I don't see any differences when connecting my VI client directly to the ESX Host. The option is still greyed out.
By the way, I found the type of HBA we have:
LSI Logic / Symbios Logic FC929 (rev 02)
Trying again with the screenshot...
Trying again. The attachment might show up better or try looking at http://i25.tinypic.com/15s09ox.jpg
Ok, I think I see the problem now. Your HBAs are seen as SCSI, not Fibre Channel or iSCSI. To use an RDM, you need an FC or iSCSI SAN; otherwise the disks appear as locally attached and cannot be used as an RDM.
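One way to double-check how the host classifies those paths, as a sketch assuming the ESX 3.x esxcfg-mpath tool and a standard lspci (guarded so it no-ops elsewhere):

```shell
# Sketch: inspect how the storage paths and the HBA itself are classified.
show_hba_transport() {
  # On a working FC setup each path should report FC, not a local transport.
  esxcfg-mpath -l
  # And the kernel's view of the adapter hardware itself:
  lspci | grep -i -e fibre -e scsi
}
# Guarded: only meaningful on an ESX service console.
if command -v esxcfg-mpath >/dev/null 2>&1; then
  show_hba_transport
fi
```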
-KjB
Hmm, that is interesting. The SAN is clearly Fibre Channel. I wonder why it isn't being seen as such?
I'm not sure if there is a way to change the presentation of the Fibre Channel array. Your disks look as if they are DAS instead of an FC SAN.
Is the array you are using on the HCL?
-KjB
This is the same HBA and SAN hardware we use in a different environment where this problem does not exist. The HBAs show up as SCSI in that environment too. I have been unable to find the HCL so far, but I expect it is supported based on previous use of similar hardware.
Are you able to add RDM in your other environment?
Did you say you've already tried a reboot? If not, I'd say give that a shot first, but it is strange.
-KjB
My other environment has VMs with Raw Device Mapping. I tried adding another but there are no available LUNs so the option is greyed out now. I've rebooted my ESX servers several times over the last couple of days with no improvement.
It may be time to open a case with VMware. You can see the disks, and they're available to add as storage, but not for RDM.
-KjB
Found the probable cause of this problem. Apparently it is not supported to connect the SAN directly to the HBA on the ESX server without going through a SAN switch. VMware sees this as direct-attached storage, I guess. Anyway, since I don't have a SAN switch nor the budget to buy one right now, I will just use VMFS volumes.