I am using Linux+SCST for many of my non-critical needs and it works great.
Recently, I built a new ESXi 5.0 cluster and can no longer add the iSCSI devices as a datastore. Mind you, I can see them under the software iSCSI adapter, showing "Mounted" under "Operational State".
I have exhausted my resources on the SAN side of the equation (even going so far as to clone an existing installation that works fine) with no luck.
Are there diagnostics or logs that I should be using inside of ESXi to get more info on why this is happening?
My issue turned out to be the partition table on the LUN that was exported. The LUN had previously been used as a 3TB VMFS5 datastore. The problem appears to be that an MBR partition table tops out at 2TB, and the old layout used some kludgy partitioning to stretch it to 3TB, which ESXi apparently could not parse.
In order to remedy the issue, I did the following:
1) Use 'fdisk -l' to find the disk. Mine was '/vmfs/devices/disks/eui.XXXXXXXXX'.
2) 'fdisk /vmfs/devices/disks/eui.XXXXXXXXX'
3) Use 'd' to delete any existing partitions (use 'p' to print partition info to the screen).
4) Use 'n' to create a new partition.
5) 'p' for primary partition, '1' for partition 1.
6) When prompted for the last cylinder, enter a size below 2TB (I used '+1000G' for 1TB).
7) Hit 't' to set the partition type, then enter 'fb' to mark the partition as VMFS.
8) Hit 'w' to write the new partition table.
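Put together, the fdisk session looks roughly like this (illustrative transcript only; 'eui.XXXXXXXXX' is a placeholder for your device, and this destroys all data on the LUN):

```
# fdisk /vmfs/devices/disks/eui.XXXXXXXXX
Command (m for help): p            <- print the current partition table
Command (m for help): d            <- delete the existing partition(s)
Command (m for help): n            <- create a new partition
   p                               <- primary
   1                               <- partition number 1
   First cylinder: <Enter>         <- accept the default start
   Last cylinder or +size: +1000G  <- keep it under 2TB
Command (m for help): t            <- change the partition type
   Hex code: fb                    <- fb = VMware VMFS
Command (m for help): w            <- write the table and exit
```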
Now, run the following command to format and mount the disk:
vmkfstools -C vmfs5 /vmfs/devices/disks/eui.XXXXXXXXX
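If you want to sanity-check things from the ESXi shell before and after formatting, these read-only commands show what the host sees (again, the eui.XXXXXXXXX path is a placeholder for your device):

```
# Show the partition table as ESXi parses it (label and partition list)
partedUtil getptbl /vmfs/devices/disks/eui.XXXXXXXXX

# After vmkfstools -C, confirm the new VMFS5 volume is mounted
esxcli storage filesystem list
```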
It will now appear in the vSphere client with a GUID for a name (rename it as you see fit). You can then extend the datastore to the size you want, above 2TB if needed.