I'm going to lay out all the details here for full disclosure in case some of it is relevant.
I had two 300GB VMFS volumes on a three node cluster.
I removed both Datastores through Virtual Center
I deleted both vDisks on the SAN (HP EVA)
I created a new vDisk of 1TB on the SAN and presented it to all the cluster hosts
Rescanned storage adapters on host 1 which saw the new LUNs
Created a new Datastore from Node 1
Rescanned storage on nodes 2 and 3; both of them see the new LUNs
Neither of them see the new Datastore
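For what it's worth, the rescan steps above can also be driven from the ESX 3.x service console instead of the VI Client; this is just a sketch, and the adapter name vmhba1 is a placeholder, not taken from my setup:

```shell
# Sketch of a service-console rescan on ESX 3.x (vmhba1 is a placeholder)
esxcfg-rescan vmhba1      # rescan this storage adapter for new/changed LUNs
vmkfstools -V             # re-read VMFS volume metadata without a reboot
ls /vmfs/volumes          # a successfully mounted datastore should show up here
```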
Node 2 showed a changed-LUN error like the one some posts referred to. I rebooted that host and there are no more errors, but it still doesn't see the datastore.
Node 3 shows this error:
LVM: 5569: Device vml.0200010000600508b40010224c0000700002ac0000485356313130:1 detected to be a snapshot:
These hosts were all built and configured identically with all the same patch levels, so I'm curious why they have different error states. I think I saw a VMworld article that covered this (Advanced VMFS, maybe?); I'll go check that out. In the meantime, I'm happy to assign points!
If it was useful, give me credit
http://communities.vmware.com/blogs/polysulfide
VI From Concept to Implementation
Hi,
glad to hear that the problem is solved (and yes, I wanna get the FULL points)
Before adjusting the LUN IDs for the other LUNs you're using, you should move all running VMs to a node that won't be affected by your change.
And I would recommend performing that task as described below:
- remove LUN access for all hosts that see the wrong LUN ID
- perform a rescan on these hosts so that the datastore is removed from the repository
- check the advanced LVM settings and restore the defaults, if they were changed
- add LUN access back on a per-host basis
- perform a rescan to reread the datastore information
Hope this also helps a bit.
The LVM.EnableResignature setting works on any one host, but the other hosts then lose visibility.
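In case it helps someone else, here's roughly how I toggled that setting from the service console on ESX 3.x (a sketch; the /LVM/EnableResignature option path and the vmhba1 adapter name are assumptions, so check them on your own hosts):

```shell
esxcfg-advcfg -g /LVM/EnableResignature   # show the current value (0 = off)
esxcfg-advcfg -s 1 /LVM/EnableResignature # enable resignaturing on this host only
esxcfg-rescan vmhba1                      # placeholder adapter; rescan so the volume is picked up
esxcfg-advcfg -s 0 /LVM/EnableResignature # switch it back off when done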
Hi,
it's vital that all of your cluster nodes see the LUN with the same LUN ID.
ESX stores the LUN ID on the disk as part of the LVM metadata and compares it with the data reported by the storage array.
If there's a mismatch, it treats the datastore as a snapshot and doesn't mount it.
You should have tried LVM.DisallowSnapshotLun, as it doesn't change anything on the disk but would still allow the host to mount that datastore.
Hope this helps.
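For completeness, a sketch of what trying that would look like from the service console (the /LVM/DisallowSnapshotLUN option path and the adapter name are assumptions on my part):

```shell
esxcfg-advcfg -g /LVM/DisallowSnapshotLUN   # show the current value
esxcfg-advcfg -s 0 /LVM/DisallowSnapshotLUN # let the host mount the "snapshot" LUN as-is
esxcfg-rescan vmhba1                        # placeholder adapter; rescan to mount it
```

Unlike resignaturing, this leaves the on-disk LVM metadata untouched, which matches the point above that nothing on the disk changes.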
Strange. My other VMFS volumes don't have the same LUN mapping across hosts, yet they work fine.
After changing this one so it has the same LUN ID across all hosts, it is now working. I'm going to remap my other volumes so they have consistent LUN IDs as well.
Thanks Ghost. I only gave you a helpful score; if you respond again, I'll give you the answer points.
Thanks again. I didn't have any problem with the process; I just didn't know identical LUN IDs were required.
I think I knew that when I first set this up, but I wasn't able to manually specify the LUN IDs at the presentation layer, and everything worked, so I didn't worry about it.
The problem (in EVA Command View) was that you can't specify LUN IDs if you present to more than one host at a time.