Frans_P
Contributor

No datastores visible on single ESX server

Hi everyone,

One of our customers is running two ESX 3.01 servers with VirtualCenter 2.01, all connected with fibre cables to an HP MSA1000 controller that holds two 800 GB volumes. (Both HP ProLiant DL385 G1 servers are fitted with a QLogic QLA2310/2340 fibre card.) Today I added a third ESX server: an HP ProLiant DL385 G2 with a QLogic 2340 card, on which I installed ESX 3.01 with exactly the same updates. Everything works fine except the storage.

When I open Storage Adapters on the Configuration tab I see the QLogic card (QLA2432), and when I select one of the HBAs I can see both 800 GB volumes nicely in the details pane. But when I switch to Storage (SCSI, SAN, and NFS), neither 800 GB datastore is listed!

What is going wrong?

What I have already tried:

- Rebooting ESX03

- Rebooting the VirtualCenter server

- Checking Selective Storage Presentation on the MSA1000 controller (it is disabled)

- Rescanning for new devices a few times

- Refreshing the datastore list a few times

Anyone here that can help me out with this one?

18 Replies
oreeh
Immortal

You have to use the MSA1000 CLI and configure the presentation of the volumes to the ESX server.

Check HP's MSA CLI guide for details.

Frans_P
Contributor

Thanks for your quick reply.

Selective Storage Presentation is turned OFF, so all servers connected to the MSA should be able to see all the volumes.

kukacz
Enthusiast

Any chance that server has seen those datastores before? If their LUN ID changed in the meantime, ESX will not attach them by default. It would log a message like "vmhba1:0:1:1 may be snapshot" in /var/log/vmkwarning, and a similar "snapshot" warning in the VI Client event log.
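From the service console you can check for that warning directly. A small sketch (the log line below is a fabricated sample so the grep is demonstrable; on the host you would grep the real /var/log/vmkwarning):

```shell
# Write a hypothetical sample of the warning kukacz describes:
printf 'vmkernel: WARNING: LVM: vmhba1:0:1:1 may be snapshot\n' > /tmp/vmkwarning.sample

# On the ESX host the real command would be: grep -i snapshot /var/log/vmkwarning
grep -i snapshot /tmp/vmkwarning.sample
```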

--

Lukas Kubin

mikepodoherty
Expert

Have you checked QLogic's setup utility (Ctrl-Q at boot) to verify that the HBAs are seeing the SAN? Choose the scan option and see if any devices are visible.

I'm not familiar with the MSA1000, so I don't know if there is a utility similar to IBM's Storage Manager for small SANs. If there is, you should be able to verify that the SAN is seeing the World Wide Names of the HBAs.

Frans_P
Contributor

Thanks for the replies guys,

The QLogic card is actually seeing the SAN; take a look at the attachment. You can see the QLA2432 being loaded and seeing two volumes, and when I unplug the fibre cables they disappear.

Everything shown in the attachment is exactly the same as on the other ESX servers!

VMwareSME
Enthusiast

Just a curious try.

Have you tried connecting to the host directly, not through VC?

Frans_P
Contributor

I connected with SSH and looked in /vmfs/volumes; only the datastore on the local SCSI controller was listed there.

kukacz
Enthusiast

Have you checked your logs for the snapshot warning?

--

Lukas Kubin

Frans_P
Contributor

I'm not able to check the logs right now; I will tomorrow.

Just a note: when I installed ESX, one of my colleagues forgot to plug in the fibre card. After the installation was done I powered down the server, inserted the card, and booted ESX. During boot-up ESX detected the new device, and after installing it the server rebooted automatically.

Everything looked fine, but could installing the fibre card this way be a problem?

oreeh
Immortal

but should this way of installing a fibercard be a problem?

No, this isn't a problem.

kjb007
Immortal

Kukacz is most likely correct: your new host probably sees the volumes as snapshot LUNs. I would resignature them. In the ESX server's advanced configuration settings, set LVM.EnableResignature to 1 and set LVM.DisallowSnapshotLun to 0.
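If you'd rather do this from the service console than the VI Client, something like the following should work (a sketch from memory of the ESX 3.x esxcfg-advcfg syntax; verify the option paths with the -g form first):

```shell
# Hypothetical service-console session on the affected ESX 3.x host.
# Read the current values first:
esxcfg-advcfg -g /LVM/EnableResignature
esxcfg-advcfg -g /LVM/DisallowSnapshotLun

# Enable resignaturing and allow snapshot LUNs to be attached:
esxcfg-advcfg -s 1 /LVM/EnableResignature
esxcfg-advcfg -s 0 /LVM/DisallowSnapshotLun
```

Afterwards rescan the HBA (esxcfg-rescan vmhba1, or a rescan from the VI Client) and check the storage list again.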

-KjB

vExpert/VCP/VCAP vmwise.com / @vmwise
Frans_P
Contributor

Thanks for your help guys, much appreciated!

I changed both values as you said, rescanned and refreshed the storage list and still nothing.

kastlr
Expert

Hi,

sounds like the partition tables were cleared for some unknown reason.

Run esxcfg-vmhbadevs and note the /dev/sdX value for each of your two LUNs.

You then need to perform the following steps to rewrite the partition table. (This assumes you initially created the VMFS with the VI Client using the full size of the LUN.)

fdisk -l /dev/sdX simply confirms that there is no partition table on that disk.

fdisk /dev/sdX opens the fdisk utility for the actual work.

Choose

n to create a new partition,

p for a primary partition,

1 for the partition number,

then confirm the default first and last cylinder by pressing the return key.

Now you have a partition on the disk, but it isn't declared as a VMFS partition yet.

Choose

t to change the partition's system id.

Enter fb to change the partition type (fb is the VMware VMFS type; this fdisk lists it as "Unknown").

Enter w to write the partition table to the disk.

This procedure doesn't harm your data, because you're only changing a few bytes in the MBR region of the disk.

After the task is completed, perform a rescan on the affected host and it should see the VMFS again.

UNTIL THEN, DO NOT PERFORM A RESCAN FROM ANY OF YOUR RUNNING HOSTS!

DOING SO WOULD CAUSE THOSE HOSTS TO REREAD THE EMPTY PARTITION TABLE, WITH THE SAME RESULT.

Hope this helps a bit.      Greetings from Germany. (CET)
Frans_P
Contributor

The partitions are not cleared, since the other ESX servers can still read the volumes correctly.

kastlr
Expert

When a process clears the partition table, attached hosts won't notice until they perform a rescan.

So you should at least check with fdisk -l /dev/sdX whether you can still see a partition table or not.

Hope this helps a bit.      Greetings from Germany. (CET)
hicksj
Virtuoso

Weird, I've never seen the onboard RAID controller get picked up as the second storage adapter (hba1) before; I've only ever seen it as hba0. Now, I don't think the VMFS volume signatures have anything to do with the physical device names, but my guess is that the screenshot from this system is inconsistent with your other two hosts. The logs show nothing about the disks?

I'm normally the last person to suggest a nuke/pave, but in this case, there's little effort in starting over with all the hardware properly seated. It really only takes a few minutes to build a host. On the other hand, figuring out the root cause of this would be very nice.

Regards, J

Frans_P
Contributor

All right, here's what I did: I reinstalled the ESX server with all the hardware properly seated, and now everything works fine.

Thanks everyone for your help!

Regards,

Chanaka
Contributor

Hey, that's weird. If I'm not mistaken, VMware recommends that you disconnect all network storage during the ESX install.
