VMware Cloud Community
dfir
Contributor

Help to do RDM to local RAID array

Hi guys,

I'm trying to get raw access to a RAID volume on my ESX server (connected to an Adaptec 5805 RAID controller), but it seems much more difficult than I expected. The array holds an NTFS partition. ESX4 is installed on a regular SATA disk (/dev/sdb).

Do you have any ideas as to what I'm doing wrong?

Let me start with some information about the array:

root@localhost test# fdisk -l

Disk /dev/sda: 39.6 GB, 39625687040 bytes

255 heads, 63 sectors/track, 4817 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System

/dev/sda1 1 4817 38692521 7 HPFS/NTFS <---- This is the NTFS partition that I would like to have raw access to.

Disk /dev/sdb: 160.0 GB, 160041885696 bytes <---- ESX4 is installed onto this separate hard disk, which is located on one of the motherboard SATA ports (not the Adaptec controller)

64 heads, 32 sectors/track, 152627 cylinders

Units = cylinders of 2048 * 512 = 1048576 bytes

Device Boot Start End Blocks Id System

/dev/sdb1 * 1 1100 1126384 83 Linux

/dev/sdb2 1101 1210 112640 fc VMware VMKCORE

/dev/sdb3 1211 152627 155051008 5 Extended

/dev/sdb5 1211 152627 155050992 fb VMware VMFS

Disk /dev/sdc: 8304 MB, 8304721920 bytes

255 heads, 63 sectors/track, 1009 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System

/dev/sdc1 1 117 939771 82 Linux swap / Solaris

/dev/sdc2 118 372 2048287+ 83 Linux

/dev/sdc3 373 1009 5116702+ 5 Extended

/dev/sdc5 373 1009 5116671 83 Linux

-


root@localhost test# ls -l /vmfs/devices/disks/

total 545021409

-rw------- 1 root root 39625687040 Aug 5 20:22 mpx.vmhba1:C0:T0:L0

-rw------- 1 root root 39621141504 Aug 5 20:22 mpx.vmhba1:C0:T0:L0:1

-rw------- 1 root root 160041885696 Aug 5 20:22 t10.ATA_____ST3160811AS_________________________________________6PT07MEP

-rw------- 1 root root 1153417216 Aug 5 20:22 t10.ATA_____ST3160811AS_________________________________________6PT07MEP:1

-rw------- 1 root root 115343360 Aug 5 20:22 t10.ATA_____ST3160811AS_________________________________________6PT07MEP:2

-rw------- 1 root root 158772232192 Aug 5 20:22 t10.ATA_____ST3160811AS_________________________________________6PT07MEP:3

-rw------- 1 root root 158772215808 Aug 5 20:22 t10.ATA_____ST3160811AS_________________________________________6PT07MEP:5

lrwxrwxrwx 1 root root 19 Aug 5 20:22 vml.0000000000766d686261313a303a30 -> mpx.vmhba1:C0:T0:L0

lrwxrwxrwx 1 root root 21 Aug 5 20:22 vml.0000000000766d686261313a303a30:1 -> mpx.vmhba1:C0:T0:L0:1

lrwxrwxrwx 1 root root 72 Aug 5 20:22 vml.010000000020202020202020202020202036505430374d4550535433313630 -> t10.ATA_____ST3160811AS_________________________________________6PT07MEP

lrwxrwxrwx 1 root root 74 Aug 5 20:22 vml.010000000020202020202020202020202036505430374d4550535433313630:1 -> t10.ATA_____ST3160811AS_________________________________________6PT07MEP:1

lrwxrwxrwx 1 root root 74 Aug 5 20:22 vml.010000000020202020202020202020202036505430374d4550535433313630:2 -> t10.ATA_____ST3160811AS_________________________________________6PT07MEP:2

lrwxrwxrwx 1 root root 74 Aug 5 20:22 vml.010000000020202020202020202020202036505430374d4550535433313630:3 -> t10.ATA_____ST3160811AS_________________________________________6PT07MEP:3

lrwxrwxrwx 1 root root 74 Aug 5 20:22 vml.010000000020202020202020202020202036505430374d4550535433313630:5 -> t10.ATA_____ST3160811AS_________________________________________6PT07MEP:5

-


root@localhost test# esxcfg-scsidevs -l

mpx.vmhba1:C0:T0:L0

Device Type: Direct-Access

Size: 37790 MB

Display Name: Local Adaptec Disk (mpx.vmhba1:C0:T0:L0)

Plugin: NMP

Console Device: /dev/sda

Devfs Path: /vmfs/devices/disks/mpx.vmhba1:C0:T0:L0

Vendor: Adaptec Model: RAID0 Revis: V1.0

SCSI Level: 2 Is Pseudo: false Status: on

Is RDM Capable: false Is Removable: false

Is Local: true

Other Names:

vml.0000000000766d686261313a303a30

Here's what I've tried:

root@localhost test# vmkfstools -z /vmfs/devices/disks/vml.0000000000766d686261313a303a30 /vmfs/volumes/Storage1/test/test123.vmdk --verbose 1

DISKLIB-LIB : CREATE: "/vmfs/volumes/Storage1/test/test123.vmdk" -- vmfsPassthroughRawDeviceMap capacity=0 (0 bytes) adapter=buslogic devicePath='/vmfs/devices/disks/vml.0000000000766d686261313a303a30'

Failed to create virtual disk: Invalid argument (1441801).

root@localhost test# vmkfstools -z /vmfs/devices/disks/vml.0000000000766d686261313a303a30:1 /vmfs/volumes/Storage1/test/test123.vmdk --verbose 1

DISKLIB-LIB : CREATE: "/vmfs/volumes/Storage1/test/test123.vmdk" -- vmfsPassthroughRawDeviceMap capacity=0 (0 bytes) adapter=buslogic devicePath='/vmfs/devices/disks/vml.0000000000766d686261313a303a30:1'

Failed to create virtual disk: Invalid argument (1441801).

root@localhost test# vmkfstools -z /vmfs/devices/disks/vml.0000000000766d686261313a303a30:0 /vmfs/volumes/Storage1/test/test123.vmdk --verbose 1

DISKLIB-LIB : CREATE: "/vmfs/volumes/Storage1/test/test123.vmdk" -- vmfsPassthroughRawDeviceMap capacity=0 (0 bytes) adapter=buslogic devicePath='/vmfs/devices/disks/vml.0000000000766d686261313a303a30:0'

DISKLIB-LIB : Only disks up to 2TB-512 are supported.

Failed to create virtual disk: The destination file system does not support large files (12).

root@localhost test# vmkfstools -z /vmfs/devices/disks/mpx.vmhba1\:C0\:T0\:L0 /vmfs/volumes/Storage1/test/test123.vmdk --verbose 1

DISKLIB-LIB : CREATE: "/vmfs/volumes/Storage1/test/test123.vmdk" -- vmfsPassthroughRawDeviceMap capacity=0 (0 bytes) adapter=buslogic devicePath='/vmfs/devices/disks/mpx.vmhba1:C0:T0:L0'

Failed to create virtual disk: Invalid argument (1441801).

root@localhost test# vmkfstools -z /vmfs/devices/disks/mpx.vmhba1\:C0\:T0\:L0\:0 /vmfs/volumes/Storage1/test/test123.vmdk --verbose 1

DISKLIB-LIB : CREATE: "/vmfs/volumes/Storage1/test/test123.vmdk" -- vmfsPassthroughRawDeviceMap capacity=0 (0 bytes) adapter=buslogic devicePath='/vmfs/devices/disks/mpx.vmhba1:C0:T0:L0:0'

DISKLIB-LIB : Only disks up to 2TB-512 are supported.

Failed to create virtual disk: The destination file system does not support large files (12).

root@localhost test# vmkfstools -z /vmfs/devices/disks/mpx.vmhba1\:C0\:T0\:L0\:1 /vmfs/volumes/Storage1/test/test123.vmdk --verbose 1

DISKLIB-LIB : CREATE: "/vmfs/volumes/Storage1/test/test123.vmdk" -- vmfsPassthroughRawDeviceMap capacity=0 (0 bytes) adapter=buslogic devicePath='/vmfs/devices/disks/mpx.vmhba1:C0:T0:L0:1'

Failed to create virtual disk: Invalid argument (1441801).

root@localhost test# vmkfstools -z /vmfs/devices/disks/vmhba1\:C0\:T0\:L0 /vmfs/volumes/Storage1/test/test123.vmdk --verbose 1

DISKLIB-LIB : CREATE: "/vmfs/volumes/Storage1/test/test123.vmdk" -- vmfsPassthroughRawDeviceMap capacity=0 (0 bytes) adapter=buslogic devicePath='/vmfs/devices/disks/vmhba1:C0:T0:L0'

DISKLIB-LIB : Only disks up to 2TB-512 are supported.

Failed to create virtual disk: The destination file system does not support large files (12).

root@localhost test# vmkfstools -z /vmfs/devices/disks/vmhba1\:C0\:T0\:L0\:0 /vmfs/volumes/Storage1/test/test123.vmdk --verbose 1

DISKLIB-LIB : CREATE: "/vmfs/volumes/Storage1/test/test123.vmdk" -- vmfsPassthroughRawDeviceMap capacity=0 (0 bytes) adapter=buslogic devicePath='/vmfs/devices/disks/vmhba1:C0:T0:L0:0'

DISKLIB-LIB : Only disks up to 2TB-512 are supported.

Failed to create virtual disk: The destination file system does not support large files (12).

root@localhost test# vmkfstools -z /vmfs/devices/disks/vmhba1\:C0\:T0\:L0\:1 /vmfs/volumes/Storage1/test/test123.vmdk --verbose 1

DISKLIB-LIB : CREATE: "/vmfs/volumes/Storage1/test/test123.vmdk" -- vmfsPassthroughRawDeviceMap capacity=0 (0 bytes) adapter=buslogic devicePath='/vmfs/devices/disks/vmhba1:C0:T0:L0:1'

DISKLIB-LIB : Only disks up to 2TB-512 are supported.

Failed to create virtual disk: The destination file system does not support large files (12).

Any ideas? I remember reading a post once where someone stated that you cannot create an RDM to a local vmhba device. Is that true?

Thanks in advance,

Jesper

9 Replies
RParker
Immortal

If I understood correctly, you can't make a LOCAL disk an RDM. RDMs are LUNs or separate networked storage devices; the local SCSI path won't let you map it as an RDM. Or maybe it comes down to how you configure it: say the disk is partitioned as 1 TB, you install ESX and leave the remainder as ext3 with no configured datastore, and there is 800 GB left over - you can't use that free space as RAW space.

You would have to reconfigure the RAID: make a small 10-20 GB local volume to install ESX on, and the other virtual disk drive (configured by the RAID controller) would then be visible as RAW space. That may work...

dfir
Contributor

Hi Parker,

Sorry that I have not made myself clear. I've edited the post to hopefully make more sense.

ESX is installed on a separate hard drive and is not related to the RAID array in any way. The RAID array consists of an NTFS partition only. So the ESX host has 3 hard drives: 1 for ESX and 2 configured in RAID0 for the RAID array.

Does it matter that the Adaptec 5805 is SAS/SATA? So it's not natively a SCSI controller (even though ESX sees it as a parallel SCSI controller for some reason).

RParker
Immortal

Does it matter that the Adaptec 5805 is SAS/SATA? So it's not natively a SCSI controller (even though ESX sees it as a parallel SCSI controller for some reason).

It has more to do with HOW you set up the RAID.

If you have 3 drives and they are ALL one big array (assuming they are collectively smaller than 2 TB), you can still use the RAID controller to divide the space into virtual hard drives (VHD in this case has nothing to do with ESX or virtualization software; it's what RAID controllers call these 'virtual hard drives'). Then you can allocate 10 GB of space, install ESX, and the leftover space is RAW.

ESX sees this as 10 GB of FREE (unformatted) space and, for instance, 990 GB of unallocated / fdisk / RAW space.

If, however, you leave it all as FREE space and have ESX allocate ALL the space on the drives (RAID set up as 1 physical drive, no virtual hard drives), then ESX will attempt to allocate the space as FREE space and assume whatever is left over will be a datastore. FREE space is not the same as RAW space.

So 1 TB of FREE space cannot be used as RAW space to mount your VMs. With your configuration (not knowing how big the drive you have reserved for ESX is), you are wasting ALL the space on the first drive for ESX. That's why it's better to just join ALL 3 drives (via the RAID configuration) into 1 large drive (you gain performance with each drive spindle), and you don't have to segregate ESX; in fact it's worse if you do, because of the space you lose. You can still divide it up for ESX at the RAID level.

That's how I do my drives: 1 big RAID-configured drive, as large as possible, up to 2 TB (which is the limit).

When I refer to SCSI I mean SCSI as a local RAID controller, and SAS/SATA as the drives. SCSI is the MegaRAID/LSI local controller; LUN/NFS/iSCSI is external storage. Normally when you install ESX you let ESX configure the RAID and don't partition the drive at the RAID level. If you let ESX do it all, you will end up with FREE space and no RAW space, and thus you can't use RDM on a local array. That's my point.

This is the NTFS partition that I would like to have raw access to.

Also, RAW implies NO previous data exists: when you mount this as RAW, the VM will ASSUME it's NOT in use, and your OS will need to format it to make it usable. You can't mount an existing drive as a VM drive.

wademn
Contributor

I had a similar issue with a 3ware 9650. Below are the steps I used to create an RDM using vmkfstools.

1. ls -lh /vmfs/devices/disks/

total 4.1T
-rw------- 1 root root 932G Sep 28 13:50 t10.AMCC____9QJ0RGK7C0E6780029E0
-rw------- 1 root root 932G Sep 28 13:50 t10.AMCC____9QJ0Y2RKC0E67800097E
-rw------- 1 root root 1.1G Sep 28 13:50 t10.AMCC____9QJ0Y2RKC0E67800097E:1
-rw------- 1 root root 110M Sep 28 13:50 t10.AMCC____9QJ0Y2RKC0E67800097E:2
-rw------- 1 root root 931G Sep 28 13:50 t10.AMCC____9QJ0Y2RKC0E67800097E:3
-rw------- 1 root root 931G Sep 28 13:50 t10.AMCC____9QJ0Y2RKC0E67800097E:5
-rw------- 1 root root 233G Sep 28 13:50 t10.AMCC____CQ500093C0F6DC00FB22
-rw------- 1 root root 233G Sep 28 13:50 t10.AMCC____CQ500094C0F6DC004518

2. # vmkfstools -z\

wademn
Contributor

2. root@super ~# vmkfstools -z \
/vmfs/devices/disks/t10.AMCC____CQ500093C0F6DC00FB22 \
/vmfs/volumes/Storage1\ \(2\)/StorMagic\ SvSAN/rdm.1.vmdk

The above example creates an RDM called rdm.1.vmdk in the StorMagic SvSAN directory on the Storage1 (2) datastore. You now need to edit the settings of the SVA or VM to add the RDM.

To do this, run Edit Settings for the SVA or VM again, select Add..., then choose Hard Disk and Use an existing virtual disk, and browse for the RDM you created.
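
If you'd rather skip the GUI, the same attach can be done by hand in the VM's .vmx file while the VM is powered off. This is only a rough sketch - the scsi0:1 slot and the datastore path below are placeholders, adjust them for your own VM and then re-open Edit Settings (or re-register the VM) so the client picks up the change:

scsi0:1.present = "TRUE"
scsi0:1.fileName = "/vmfs/volumes/Storage1 (2)/StorMagic SvSAN/rdm.1.vmdk" <---- path to the mapping file created above (placeholder)
scsi0:1.deviceType = "scsi-hardDisk"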

CraigKC
Contributor

I know this post is almost 2 months old, but this exact same issue is killing me too in ESXi 4.0, and I think I have some helpful (though disappointing) info. I've read several posts using the same syntax you're using here, with several people doing it successfully, and like you I've nailed the syntax and tried every combination possible. Yet I get the same results and the same error messages.

Here's the real kick in the pants... I've noticed that the few reports of struggling and failing that are never resolved always involve Adaptec RAID controllers. I have an Adaptec 3805 with a 6x1TB RAID5 array and a 2x750GB RAID1 array attached to it, and neither will work with RDM. Better yet, I think I've really proven this, because I can successfully create RDMs to NTFS volumes on SATA disks on the on-board JMicron SATA controller, as well as drives hanging off the Intel ICH10R controller (Gigabyte EP45-UD3P board), on the same physical ESXi server. They work great; I've attached them to VMs and everything's fine. It's just the volumes on the Adaptec that fail with the error. Again: RDM works on the same physical ESXi server as the Adaptec adapter, but only with volumes connected to different adapters.

I get the same error as you do when I nail the syntax:

"Failed to create virtual disk: The destination file system does not support large files (12)."

I just checked the Adaptec site for updated firmware; the last update is October 2008, the same firmware I'm running. I also double-checked the ESX/ESXi 4.0 HCL, and my Adaptec 3805 as well as your 5805 are both supported. This is feeling like the end of the road with no explanation. Has anyone actually tried to speak with VMware about this?

n0va
Contributor

bump

Does anyone have any updates on this issue?

We're having the same problem on my test (lab) server:

root@lab2 disks# vmkfstools -z /vmfs/devices/disks/mpx.vmhba0\:C0\:T1\:L0 /vmfs/volumes/Storage1/srv/rdm1.vmdk --verbose 1

DISKLIB-LIB : CREATE: "/vmfs/volumes/Storage1/srv/rdm1.vmdk" -- vmfsPassthroughRawDeviceMap capacity=0 (0 bytes) adapter=buslogic devicePath='/vmfs/devices/disks/mpx.vmhba0:C0:T1:L0'

Failed to create virtual disk: Invalid argument (1441801).

We're not getting the "The destination file system does not support large files (12)" error though.

(Adaptec 2610SA in a HP ML150 G2)

If there's nothing new, I am going to open a Support Request tomorrow.

PS: Is this a vSphere 4-only issue? I haven't had time to test on ESX 3.5 yet.

awc1
Contributor

Another option is to set up VMDirectPath so you can give a VM direct control of your RAID controller card. Since the volume is NTFS, I suspect you want some Windows VM to access the data anyhow.

If you are using a hardware RAID controller card, you can assign the card directly to a VM on hardware that supports it. It is a two-step process:

(1) Allow PCI/PCIe cards to be directly accessed by a VM.

In vSphere, go to the Configuration tab for the server (not a VM).

Click on "Advanced Settings".

Click on Edit. From here you select which PCI/PCIe cards can be directly accessed by a virtual machine.

Put check boxes by the hardware you want to assign to VM(s).

You will need to reboot ESXi.

Note: If you don't get this option, look in your BIOS for a VT-d option and enable it.

WARNING: If you screw up and assign hardware that is required by ESXi in step 1, you will completely bonk your ESXi setup. As in kill it. So make sure you only assign the correct hardware to VMs.

(2) Add the PCI device to the VM.

Go to your powered-off VM. Edit Settings. Add Hardware. Add a PCI device. You should be able to select any of the cards assigned to VMs in step 1 above.

Your VM can now access this card as though it were physically attached to it. You get close to full bus speed as well, with no slow VMDK access to bog you down.
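
For reference, once you add the PCI device through Edit Settings, the vSphere Client writes entries along these lines into the VM's .vmx file. Treat this as a sketch only - the IDs below are placeholders (normally the client fills them in from the device you select), and the VM also needs its memory fully reserved for passthrough to work:

pciPassthru0.present = "TRUE"
pciPassthru0.id = "04:00.0" <---- bus:slot.function of the RAID card (placeholder)
pciPassthru0.deviceId = "0x285" <---- PCI device ID of the card (placeholder)
pciPassthru0.vendorId = "0x9005" <---- Adaptec's PCI vendor ID
pciPassthru0.systemId = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" <---- host system ID (placeholder)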

-


I have successfully set up RAID arrays on an Adaptec 5805 card that an Openfiler VM can access directly in an ESXi server. (Openfiler sees and uses the Adaptec card.) I have even installed Adaptec Storage Manager within Openfiler and can edit, create, and delete RAID arrays from ASM.

I have also installed PCI parallel port cards and given a VM access to the card successfully. (For parallel-port-dongle-bound software that I wanted to virtualize.)

The biggest downside is that live snapshots no longer work (shut down and take snapshots only) and vMotion is no longer possible, as the VM is tied to hardware in the box.

The above is extremely dependent on your motherboard. Better, relatively new boards will allow this to work, but it will typically mean digging in your BIOS to enable it.

n0va
Contributor

Unfortunately my rather old lab server doesn't support VT-d, so I have to settle for something different.

So here's my update on this:

Bad news:

  • The problem seems to exist since ESX 3.5. (tested on 3.5u4)

    • On ESX 2.5 and ESX 3.0 it seems that you could raw-map local SCSI devices. (unproven)

  • VMware officially doesn't support RDM to local LUNs. When an SR was opened, the response was:

    • Local LUNs are not allowed as RDMs. Please see page 145 of the following VMware documentation, thanks:

Good news:

There is a workaround! Digging through the forums and Google results I found this article:

it describes a (rather dirty and unsupported) way to access any local vmhba device via RDM.

It has worked for me on my lab server (HP ML150 G2, Adaptec 2610SA) so far: ESX on the SCSI controller, RAIDs on the SATA controller.

The next test is going to be ESX on one RAID on the SATA controller, mapping the second RAID on the same SATA controller.
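
Another thing on my list to try (no idea yet whether it behaves any differently, so treat this purely as an idea and not as the method from the article) is creating the mapping in virtual compatibility mode with -r instead of passthrough mode with -z, e.g.:

root@lab2 disks# vmkfstools -r /vmfs/devices/disks/mpx.vmhba0\:C0\:T1\:L0 /vmfs/volumes/Storage1/srv/rdm1.vmdk --verbose 1

-r creates a vmfsRawDeviceMap instead of a vmfsPassthroughRawDeviceMap; it may well fail the same way on these controllers.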
