markinjapan
Contributor

ESX 3.0.3 & Unable to create partition

Hi,

I recently blew away my SAN and am now busy creating new virtual disks for ESX to use. My SAN is a Dell MD3000i with 30 x 400GB SAS disks. I have assigned 8 of the physical disks to a RAID 5 group for use by an ESX cluster. I am able to mount and format LUN 0 successfully, but I cannot get LUN 2 to format no matter what I try. I have looked at all the posts on this board related to this issue, and they all talk about using fdisk, which doesn't work for me.

This is where I am at right now.

- ESX can see all the LUNs on the SAN, as per the output below:

esxcfg-mpath -l

Disk vmhba0:0:0 /dev/sda (69376MB) has 1 paths and policy of Fixed

Local 2:14.0 vmhba0:0:0 On active preferred

Enclosure vmhba0:264:0 (0MB) has 1 paths and policy of Fixed

Local 2:14.0 vmhba0:264:0 On active preferred

Disk vmhba40:0:0 /dev/sdb (512000MB) has 1 paths and policy of Fixed

iScsi sw iqn.1998-01.com.vmware:mbtkyesx003-5e8e60e7<->iqn.1984-05.com.dell:powervault.6001e4f0003e0b4c0000000047fdc353 vmhba40:0:0 On active preferred

Disk vmhba40:0:2 (512000MB) has 1 paths and policy of Fixed

iScsi sw iqn.1998-01.com.vmware:mbtkyesx003-5e8e60e7<->iqn.1984-05.com.dell:powervault.6001e4f0003e0b4c0000000047fdc353 vmhba40:0:2 On active preferred

- Running "fdisk -l" does not show anything related to the new, inaccessible LUN

- I can mount vmhba40:0:2:0 on a Windows machine, format it, and use it without any issues.

- This is the output of "ls /vmfs/devices/disks":

ls: vml.02000200006001e4f0003e139200001fac4910c89f4d4433303030: No such file or directory

vmhba0:0:0:0 vml.0200000000600188b03eb8f8000ce3fd966390184b504552432035

vmhba40:0:0:0 vml.02000000006001e4f0003e0b4c000034874910c4004d4433303030

vmhba40:0:2:0
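For reference, this is roughly how I have been matching LUNs to console devices (the device names below are examples only, not my actual output):

esxcfg-vmhbadevs          # lists vmhba paths and their console /dev/sd devices
# vmhba0:0:0   /dev/sda   <- local disk (example)
# vmhba40:0:0  /dev/sdb   <- working iSCSI LUN (example)
# vmhba40:0:2 never appears, so there is no /dev node for fdisk to open
fdisk -l /dev/sdb         # works for the good LUN; nothing to point at for the bad one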

How can I get fdisk to create a partition on the new LUN when it won't even see it?

I did see a post elsewhere about a reprovisioned HP SAN having similar issues due to NTFS data being left behind, and I tried everything in that post to no avail.

Please help if you can!

Cheers

Erik_Zandboer
Expert

Hi,

I see some strange things in your config. First of all, the second LUN is vmhba0:264:0. The number 264 is your SCSI ID. I do not know how you got that configured, but I am almost certain that VMware will have a problem with that.

Second, I would vote against creating partitions for VMFS using fdisk. Just log in to your ESX node using the VI client, go to Configuration > Storage and select "Add Storage" there. But I think you won't be able to see the second LUN there either, because of the SCSI ID of 264...

Visit my blog at http://www.vmdamentals.com
markinjapan
Contributor

I'm using Dell PE1950 servers and vmhba0:264:0 is the SAS backplane for the onboard disks.

FYI, these are my volumes on the SAN as seen by ESX:

vmhba40:0:0 - 500 GB DISK - Usable and formatted as VMFS

vmhba40:0:1 - 500 GB DISK - Unusable

vmhba40:0:2 - 500 GB DISK - Unusable

vmhba40:0:3 - 500 GB DISK - Usable and formatted as VMFS

When I try to add storage through VI, the two unusable LUNs show their capacity but do not show the available space.

Erik_Zandboer
Expert

Hi,

These are all iSCSI LUNs. The unusable LUNs appear to have some unrecognisable formatting. Have you tried to list the partitions on the unusable LUNs using fdisk? The simplest solution is probably to delete the LUNs from your SAN, then recreate them. That should empty out any previous formatting and allow the VI client to add storage.
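After recreating them, something along these lines should make ESX pick up the change (from memory, so double-check the adapter name on your host):

esxcfg-rescan vmhba40     # rescan the software iSCSI adapter for new/removed LUNs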

Visit my blog at http://www.vmdamentals.com
markinjapan
Contributor

Hi,

Unfortunately, fdisk won't recognise those LUNs at all. I've recreated the LUNs several times over with no success. I've even tried changing the size of the bad LUNs, but still no go.

This discussion is along the lines of my problem but fdisk doesn't work in my case: http://communities.vmware.com/message/539212

Erik_Zandboer
Expert

Hi,

You say fdisk does not recognise the LUNs... What do you see exactly? If you perform an "esxcfg-mpath -l" you get a list of devices connected to the LUNs. They do show up there. What happens next if you start fdisk using "fdisk /dev/sdN" and select the "p" (print) option? What output do you get?
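Something like this, substituting the console device that maps to the problem LUN (/dev/sdc here is just a placeholder):

fdisk /dev/sdc
Command (m for help): p     # print the partition table
Command (m for help): q     # quit without writing anything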

Visit my blog at http://www.vmdamentals.com
markinjapan
Contributor

The bad LUNs never even show up as /dev/sdN at all, which is why I'm scratching my head.

"fdisk -l" shows everything on vmhba0 and the two working LUNs on vmhba40.

JohnZooi
Contributor

I'm having the exact same problem...

Did you ever find a fix for this?

esxcfg-mpath -l :

Disk vmhba0:0:0 /dev/sda (346880MB) has 1 paths and policy of Fixed

Local 2:14.0 vmhba0:0:0 On active preferred

Enclosure vmhba0:264:0 (0MB) has 1 paths and policy of Fixed

Local 2:14.0 vmhba0:264:0 On active preferred

Disk vmhba40:0:0 (2084862MB) has 1 paths and policy of Fixed

iScsi sw iqn.1998-01.com.vmware:esx01-3ef833f6<->iqn.1984-05.com.dell:powervault.60022190007baba10000000048babd00 vmhba40:0:0 On active preferred

ls /vmfs/devices/disks:

ls: /vmfs/devices/disks/vml.020000000060022190007baba1000006c24918faa64d4433303030: No such file or directory

vmhba0:0:0:0 vml.0200000000600188b04e195a000d3fee146c31ff50504552432035

vmhba40:0:0:0

Any ideas, anybody?

JohnZooi
Contributor

I found the fix.

There are 2 management interfaces on the SAN. I changed ownership of the virtual disk and its pool on the SAN to the other controller, which reset all the connections, then did a refresh on the ESX host. Problem fixed.
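For anyone else hitting this, the refresh on the ESX side was just the usual rescan (adapter name may differ on your host):

esxcfg-rescan vmhba40     # rescan the sw iSCSI adapter after the ownership change
vmkfstools -V             # refresh the VMFS volume list
ls /vmfs/devices/disks    # the previously missing vml entry should now resolve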
