VMware Cloud Community
Simon_H
Enthusiast

RDM settings to passthrough local disk to VM on ESXi 4.1

Hello

I'm setting up a Filer VM and for the data disk(s) (which I will never need to snapshot etc) I'm trying to pass through a locally attached SAS raw disk to the VM using RDM. I have just upgraded to ESXi 4.1U1 and was hoping the Advanced Settings->RdmFilter.HbaIsShared was going to be the answer...

Steps:

1) I have a free device (vmhba1:C0:T1:L0) that, if I went into "Add Storage" I could put a vmfs filesystem on. In case it is important, this previously was used for vmfs, though that Datastore was deleted.

2) When I first went into Advanced Settings->RdmFilter.HbaIsShared it was ticked. I unticked it (though I've actually tried both ticked and unticked)

3) I did Storage "Rescan All..."

4) When I try to add a disk to a VM the "Raw Device Mappings" radio button is greyed out - I was hoping I was going to be able to add a raw disk for vmhba1:C0:T1:L0 to a VM.

Am I missing something (e.g. some kind of extra line in hostd/config.xml to tell it to treat local storage like a SAN), or is this just not actually possible yet? (e.g. KB article 1017704 suggests it's greyed out if "You are using a SAS Direct Attached Storage (DAS) Array that is being represented as Local Storage".)

Notes:

  • we are talking about local storage, not SAN storage (unlike this SAN question: http://communities.vmware.com/message/1666865, though the very last, slightly cryptic post, is intriguing!)
  • this is the free ESXi version so I'm using the vSphere Client directly (no vCenter Server)

Thanks!

Simon

19 Replies
krishnaprasad
Hot Shot

Hello Simon,

Local storage cannot be used for creating Raw Device Mappings. Refer to VMware KB 1017530 for more details.

Thanks,

Krishnaprasad

Dave_Mishchenko
Immortal

The LUN also has to be presented with a LUN serial number, which is often not the case. This is unsupported, but there is a workaround for local storage RDMs - http://www.vm-help.com/esx40i/SATA_RDMs.php.

Simon_H
Enthusiast

Thanks for your replies Dave & Krishnaprasad - now that I know what I'm looking for I see I'm not the first.

Dave, thanks too for your SATA instructions. Your vm-help.com website is a great resource - thanks for the work you put into it.

Unfortunately I am getting the dreaded "Failed to create virtual disk: Invalid argument (1441801)" error. Now that I know what I'm looking for I can see that this is quite common when trying to set up RDM for local storage. This was when I ran the command to create the RDM .vmdk in the VM directory (which seemed a logical place) from the Tech Support Mode (TSM) console:

cd /vmfs/volumes/<vmfs-vol>/<vm-name>

vmkfstools -z /vmfs/devices/disks/vml.0000000000766d686261313a313a30 nexenta-data1-rdmp.vmdk -a lsilogic -v 20
DISKLIB-LIB   : CREATE: "nexenta-data1-rdmp.vmdk" -- vmfsPassthroughRawDeviceMap capacity=0 (0 bytes) adapter=lsilogic devicePath='/vmfs/devices/disks/vml.0000000000766d686261313a313a30'Failed to create virtual disk: Invalid argument (1441801).

When I look in the syslog I see:

Mar  7 09:56:25 shell[79469]: vmkfstools -z /vmfs/devices/disks/vml.0000000000766d686261313a313a30 nexenta-data1-rdmp.vmdk -a lsilogic -v 20
Mar  7 09:56:25 storageRM: Storage I/O Control: connection with vobd failed, error code: -1 errno: 2
Mar  7 09:56:29 storageRM: Storage I/O Control: connection with vobd failed, error code: -1 errno: 2

It seems to be important where you create the RDM files - someone on Dave's blog suggested it needed to be on the volume ESXi boots from. I have 3 Hypervisor directories under /vmfs/volumes and it doesn't let me create it under one of those (not that it would be ideal there). I tried creating a /rdm folder and putting it there, but that's not VMFS, which is presumably why this didn't work either:

vmkfstools -z /vmfs/devices/disks/vml.0000000000766d686261313a313a30 /rdm/test-m2-r0-rdmp.vmdk -a lsilogic -v 20
DISKLIB-LIB   : CREATE: "/rdm/test-m2-r0-rdmp.vmdk" -- vmfsPassthroughRawDeviceMap capacity=0 (0 bytes) adapter=lsilogic devicePath='/vmfs/devices/disks/vml.0000000000766d686261313a313a30'
DISKLIB-LIB   : Unable to get file system ID for filename "/rdm/test-m2-r0-rdmp.vmdk"
Failed to create virtual disk: Operation not permitted (65545).

System details:

This is an HP Proliant DL G5 with P400i RAID card. It is booting from an internal USB memory stick (not a local hard disk).

So near and yet so far...

Simon

Simon_H
Enthusiast

Oh-oh: rather worryingly I now seem to be getting this every 4 seconds in the syslog:

Mar  7 18:41:49 storageRM: Storage I/O Control: connection with vobd failed, error code: -1 errno: 2
Mar  7 18:41:53 storageRM: Storage I/O Control: connection with vobd failed, error code: -1 errno: 2
Mar  7 18:41:57 storageRM: Storage I/O Control: connection with vobd failed, error code: -1 errno: 2
Mar  7 18:42:00 Hostd: [2011-03-07 18:42:00.016 2ACAAB90 verbose 'Statssvc'] HostCtl exception Unable to complete Sysinfo operation.  Please see the VMkernel log file for more details.
Mar  7 18:42:00 Hostd: [2011-03-07 18:42:00.022 2ACAAB90 verbose 'Statssvc'] HostCtl exception Unable to complete Sysinfo operation.  Please see the VMkernel log file for more details.
Mar  7 18:42:01 storageRM: Storage I/O Control: connection with vobd failed, error code: -1 errno: 2
Mar  7 18:42:05 storageRM: Storage I/O Control: connection with vobd failed, error code: -1 errno: 2

I suspect it's something to do with the vmkfstools RDM command but am not totally sure. As far as I'm aware the VMkernel log in ESXi goes to syslog too, so I don't have a lot of info to go on. I'd rather not reboot the host if I don't have to.

Well, I'm really stumped now - a lot of people seem to have RDM vmdk mappings working (mostly SATA, though), so I can't work out if it's something about my configuration (e.g. firmware on the controller)...

Simon

PS. I did check the HP P400i SAS/SATA RAID controller (I'm using SAS disks only) and that is on the HCL http://www.vmware.com/resources/compatibility/detail.php?device_cat=io&device_id=586 (I hadn't checked it for a long time but it's a mainstream server)

Dave_Mishchenko
Immortal

In the initial command I would enclose the filename in quotes (or skip using hyphens): vmkfstools -z /vmfs/devices/disks/vml.0000000000766d686261313a313a30 nexenta-data1-rdmp.vmdk -a lsilogic -v 20

Simon_H
Enthusiast

KB article 1017530 says "It is mandatory that RDM candidates or devices support SCSI Inquiry VPD page code 0x83 to be used for RDMs" and "This capability is generally not possible or included on local controllers and their attached storage, although some controllers may have an implementation for this."

See http://en.wikipedia.org/wiki/SCSI_Inquiry_Command which shows "83h - Device Identification". So a raw disk will presumably pass back some kind of unique ID. I wonder whether, where many people have been successful, it is because they have a SATA controller that passes the ID straight through from the disk. For my P400i controller, on the other hand, the logical RAID0 device I'm trying to use is made from 2 different disks, so there's no obvious ID to use (though of course the controller could make one up if it chose to, or even make it adjustable in the UI/CLI for disk management).

Does anyone know whether, when a VMFS filesystem is created, the VPD 0x83 response is held in a file somewhere on ESXi? That might give a clue as to whether the problem is at the controller layer or something I'm doing...
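For reference, my understanding is that the 0x83 data isn't stored in a file anywhere - it's returned live by the device in response to a SCSI INQUIRY with the EVPD bit set. A rough sketch of what a response carrying an NAA identifier looks like, parsed with plain shell (the sample bytes below are entirely made up, not from a real P400i):

```shell
# Hypothetical VPD page 0x83 response: a 4-byte page header
# (peripheral qualifier/type, page code 0x83, page length) followed
# by one designation descriptor header (code set, designator type
# 3 = NAA, reserved, length 8) and then the 8-byte NAA identifier.
vpd="00 83 00 0c 01 03 00 08 60 05 08 b4 00 01 23 45"

set -- $vpd
page=$2          # expect 83 - the page code echoed back
shift 8          # skip the page header and the descriptor header
naa=$(echo "$*" | tr -d ' ')
echo "naa.$naa"  # the stable identifier ESXi wants for an RDM candidate
```

A controller that doesn't implement page 0x83 simply has nothing like that last field to offer, which fits the KB's explanation for why the RDM option is greyed out.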

Simon

PS. Perhaps this is why it's unsupported by VMware ;-) - perhaps SAN LUNs always have the 0x83 response set.

Simon_H
Enthusiast

Dave Mishchenko wrote:

In the initial command I would enclose the filename in quotes (or skip using hyphens): vmkfstools -z /vmfs/devices/disks/vml.0000000000766d686261313a313a30 nexenta-data1-rdmp.vmdk -a lsilogic -v 20

Thanks Dave - I think I've already tried most of the permutations, but removed the hyphens and quoted both names but no joy (still 1441801).

I'm also looking back at your first post where you said "The LUN also has to be presented with a LUN serial number which is often not the case." - do you think this serial number is the 0x83 Device Identification VPD?

Simon_H
Enthusiast

Well, I've been picking through the HP ORCA/ACU trying to see if there's any mention of a unique ID for logical disks, but I can't see anything. I suspect it's not implemented.

I do know you can often move disks between slots in a disk array and the controller will recognise the disk and the disk group it belongs to; that must be done with an ID on the disk itself (perhaps the same 0x83 one), with the logical disk knowing which disk IDs it contains. For a direct-attached controller I suppose there's no need for an ID on a logical disk, even though it presumably looks like a regular SCSI disk to the OS.

Unless anyone has a brainwave I suspect I'm going to have to concede defeat :-( and just slap on a VMFS filesystem with one big virtual disk in it. It's annoying though, as I can only see disadvantages to this approach for a Filer VM's data disks.

Thanks all the same to the thread contributors!

Simon

Saibot
Contributor

Hi Simon,

I had a similar problem trying to get RDM to work on an HP DL380 G7 with ESXi 4.1U1 booted from a USB stick, with 12 SAS disks connected to a P812 RAID controller configured as 3 logical units (4x3 disks). I had the virtual machine files stored on an NFS mount and that was the problem! When I configured a local VMFS store on a separate disk connected to a P410i controller and moved the virtual machine files there, it worked!

It seems ESXi was unable to create the files needed for the raw device mapping on NFS storage.

/Tobias

Simon_H
Enthusiast

Interesting. So the RDM files were created in VM directories on a VMFS filesystem on the disk connected to the P410i, but pointed at raw devices on the P812?

Although I do have this host connected to an NFS server, I don't run VMs from NFS shares, and I was definitely trying to create the RDM file for VMs on local storage (on logical volumes from the P400i controller). The RDM would have been pointing to a logical disk on the same P400i, though. I tried lots of permutations too! As it's an unsupported feature there's no documentation/HCL, so I don't suppose I'm going to get to the bottom of it...

VM0Sean
Enthusiast

I know this is an old thread, but with ESX 4.1 and 5 it seems to work perfectly fine to do as is described in http://www.vm-help.com/esx40i/SATA_RDMs.php -

vmkfstools -r /vmfs/devices/disks/vml.{id number} RDM1.vmdk -a lsilogic

This works and is even detected by the UI as a raw-mapped device, despite the UI not allowing you to do this directly. The excuse of the 83h unique ID seems fishy to me, since ATA supports something similar (a serial number, which is unique). If that's not unique enough, combine it with the size and model number.

Simon_H
Enthusiast

Errm - we're talking about SAS direct attached storage here, Sean, not SATA.

If anyone has had RDM working through a SAS controller (like the P400i/410i) I'd be very interested to hear.

VM0Sean
Enthusiast

Just to confirm: everything I have seen you quote uses the -z option instead of the -r option - have you tried -r?

Have you tried creating the .vmdk descriptor file manually?

Do you have the same list of devices in /dev/disks?
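On the manual-descriptor idea: vmkfstools normally writes a small text descriptor (e.g. nexenta-data1.vmdk) plus the raw mapping file it references (nexenta-data1-rdmp.vmdk). From memory the descriptor looks roughly like this - every value here is hypothetical (the size in sectors matches the 97855242240-byte device from the ls output earlier, but the geometry and even the VMFSRDM extent keyword should be copied from a descriptor vmkfstools has generated successfully rather than trusted from this sketch):

```
# Disk DescriptorFile
version=1
CID=fffffffe
parentCID=ffffffff
createType="vmfsPassthroughRawDeviceMap"

# Extent description
RW 191123520 VMFSRDM "nexenta-data1-rdmp.vmdk"

# The Disk Data Base
ddb.adapterType = "lsilogic"
ddb.geometry.cylinders = "11897"
ddb.geometry.heads = "255"
ddb.geometry.sectors = "63"
```

The createType string at least is certain, since it appears in the vmkfstools error output quoted above.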

Simon_H
Enthusiast

Thanks Sean. It's quite a while ago now but I did try various permutations - here are my notes:

~ # ls -l /dev/disks |grep "C0:T1:L0"
-rw-------    1 root     root        97855242240 Mar  7 08:53 mpx.vmhba1:C0:T1:L0
lrwxrwxrwx    1 root     root                 19 Mar  7 08:53 vml.0000000000766d686261313a313a30 -> mpx.vmhba1:C0:T1:L0

===> This is used to determine the VML identifier for the disk:
vml.0000000000766d686261313a313a30

# ls -l /vmfs/devices/disks/vml.0000000000766d686261313a313a30
lrwxrwxrwx    1 root     root                 19 Mar  7 09:09 /vmfs/devices/disks/vml.0000000000766d686261313a313a30 -> mpx.vmhba1:C0:T1:L0
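Incidentally, the hex tail of that vml name decodes to plain ASCII, which rather suggests the controller isn't providing a real unique ID at all - ESXi appears to have synthesised the name from the legacy device path (decoded here with xxd, which isn't in the ESXi busybox shell, so run it on a desktop):

```shell
# Decode the hex digits after the ten leading zeros in
# vml.0000000000766d686261313a313a30 - they are just the
# ASCII bytes of the legacy "vmhba1:1:0" path, i.e. a made-up
# name rather than a VPD 0x83 identifier from the device.
id="0000000000766d686261313a313a30"
decoded=$(printf '%s' "${id#0000000000}" | xxd -r -p)
echo "$decoded"
```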


# cd /vmfs/volumes/<vmfs-vol>/<vm-name>/
/vmfs/volumes/491711e1-314425c4-fe8c-00215aaae292/<vm-name> # ls -l
-rw-------    1 root     root         4294967296 Mar  6 21:05 nexenta-root-flat.vmdk
-rw-------    1 root     root                449 Mar  6 21:05 nexenta-root.vmdk


===> now create the RDM:

-z == --createrdmpassthru

===> couldn't get it to work

(and then I referenced this post)

I don't think I tried -r  => that seems to be described as:

-r , -createrdm /vmfs/devices/disks/...
    Map a raw disk to a file on a VMFS file system. Once the mapping
    is established, it can be used to access the raw disk like a
    normal VMFS virtual disk. The `file length` of the mapping is
    the same as the size of the raw disk that it points to.

whereas -z is:

-z , -createrdmpassthru /vmfs/devices/disks/...
    Map a passthrough raw disk to a file on a VMFS file system. This
    allows a virtual machine to bypass the VMKernel SCSI command
    filtering layer done for VMFS virtual disks. Once the mapping is
    established, it can be used to access the passthrough raw disk
    like a normal VMFS virtual disk.

Unfortunately I don't have a spare LUN on this server any more to try it on, but when I do I'll give it a go. In the meantime if anyone else is trying to use RDM to SAS DAS I'd be interested to hear if you have any success.

Simon

VM0Sean
Enthusiast

Yeah, -r basically just creates a Unix-filesystem-style link to the device that represents the hard drive. As I understand it, the -z command uses internal OS-level driver commands to bypass the filesystem entirely and go straight to the device. Although -r hypothetically adds some overhead, I would think it would be trivial, and it should work in cases where -z fails.

Knabsi
Contributor

I just went through this with a Dell MD3000/1000. I added RDMs for all the 2TB slices with the -z option but I could only see one of the 8 slices in Windows.

I then deleted all the RDMs and created them with the -r option... now I see all of my 2TB slices and the OS was able to open the entire filesystem.

EDIT: This is on ESXi 5 or whatever it is called now.

EDIT2: Had to go back to ESXI 4.1 because ESXi 5 has issues with the MD3000.  Kept getting messages that "I/O latency increased" and the guest would freeze for ~2 seconds under load.

DerekFlint
Contributor

Simon,

I'm experiencing the same results as you in trying to create an RDM to a local drive on a SAS controller. I'm on ESXi 5.0, trying to map a 320GB SATA drive that's on an Adaptec 4800SAS. I tried both the -z and -r options, but vmkfstools just throws the "Invalid argument (1441801)" error. If anyone has been successful in doing this, please post a response as to how you did it.

Thank You!!

DerekFlint
Contributor

Simon_H,

Adaptec's scsi-aacraid driver now supports local disk RDMs on a SAS controller. I installed it on my ESXi 5.0 (not Update 1) server and now "vmkfstools -z" successfully created an RDM for a locally attached SATA drive on an ASR-4800SAS controller. Too bad I hadn't found this sooner - like before setting up a 4TB RAID5 array on my file server using 2 datastores and an extent.

1. Driver name: scsi-aacraid

2. Driver version: 5.0.5.1.7.28700-1OEM.500.0.0.406165

3. Compatible ESX version(s): ESXi5.0

4. Dependencies e.g. NIC f/w version, Flex-10 versions etc: NA

5. Bugs fixed (compared to earlier release of driver):
- Fixed "SAS_CoreDump" certification test failure issue by including "vmklnx_scsi_register_poll_handler" in the driver.
- Included SCSI upstream driver patch (Patch: [SCSI] aacraid: fix File System going into read-only mode)
  http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=cacb6dc3d7fea751879a...
- Added RDM (raw device mapping) support.
  For RDM support, set "aac_wwn = 1" so the RAID controller responds to SCSI Inquiry VPD page 0x80 with a unique serial number.

- Changed sync. timeout from 30s to 300s for rx and src

6. New hardware supported: Added new hardware device 0x28b interface for PMC-Sierra's SRC based controller family.
- new src.c file for 0x28b specific functions
- new XPORT header required
- sync. command interface: doorbell bits shifted (SRC_ODR_SHIFT, SRC_IDR_SHIFT)
- async. Interface: different inbound queue handling, no outbound I2O queue available, using doorbell ("PmDoorBellResponseSent") and response buffer on the host ("host_rrq") for status
- changed AIF (adapter initiated FIBs) interface: "DoorBellAifPending" bit to inform about pending AIF, "AifRequest" command to read AIF, "NoMoreAifDataAvailable" to mark the end of the AIFs

7. Known Issues and Workarounds: No known issues/limitations

8. Additional configuration options supported by the driver (should be tested and supported by the partner): NA
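If that "aac_wwn = 1" setting is a driver module option (the release notes don't actually say how to apply it, so this is an assumption), on ESXi it would presumably be set with esxcfg-module and a reboot:

```shell
# Hypothetical: persistently pass aac_wwn=1 to the Adaptec driver so
# the controller starts answering VPD page 0x80 with a serial number.
# The module name "aacraid" is an assumption - check the real name
# on your host first with: esxcfg-module -l
esxcfg-module -s "aac_wwn=1" aacraid
```

ESXi-host-specific, so treat it as a sketch to verify against the driver's own documentation.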

Simon_H
Enthusiast

Ah ha - that is good to know, Derek - thanks for posting. So ultimately it's down to driver/firmware support for the controller card. I wonder whether the newer HP RAID cards, like the P420 (based on LSI, IIRC), are any better in this respect? (Though admittedly this is probably quite a rare use case.)

Note: on a separate thread (http://communities.vmware.com/message/2025131) I've been discussing the performance cost of going through VMFS, and apparently it isn't that much these days.
