This is a strange one. I added two NFS datastores with existing VMs on them to a two-node cluster. This is a DR site and we're doing testing.
I added VMs from the first datastore without an issue. When I add one from the second datastore, it registers the VM but it is grayed out with (invalid) following the name.
I checked the NFS permissions and they are good. I can create and delete a folder on the datastore while browsing it. I have the same datastore mounted on another cluster in the site and I can register the same VMs there without a problem. I removed and re-added the datastore to the hosts with the same result.
Any ideas?
Are the export files configured the same on both shares? Do you have
no_root_squash on the problem one?
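For comparison, here's a minimal sketch of what the two cases look like in a Linux-style /etc/exports (paths and host names are hypothetical, not taken from this thread):

```
# /etc/exports -- illustrative only
# ESX performs NFS I/O as root, so root_squash maps it to the
# anonymous uid and breaks access to existing files on the share.
/vol/nfs_ok       esxhost1(rw,no_root_squash,sync)
/vol/nfs_suspect  esxhost1(rw,root_squash,sync)   # suspect: root gets squashed
```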
On Jul 8, 2009, at 5:32 PM, pdrace <communities-emailer@vmware.com
What options do you have when you right-click the invalid VM? Is there another VM with the same name? How many NFS datastores are attached to the host? Do you get the same result if you register the VM from the console? Any errors in /var/log/messages?
Are all of the hosts the same build?
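In case it helps, registering from the service console goes roughly like this (the .vmx path is just a placeholder, substitute your own):

```
# On the ESX service console -- path is hypothetical
vim-cmd solo/registervm /vmfs/volumes/<datastore>/<vmdir>/<vm>.vmx
# Watch hostd for errors while it runs:
tail -f /var/log/vmware/hostd.log
```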
I can remove it, not much else. There is no VM with the same name; I even went to each host using the VIC and there were no phantom VMs. I tried to register it directly on a host and it creates an Unknown (invalid) VM. Only two datastores are attached to the host. I copied a folder from this datastore to the other NFS datastore and was able to register the VM, so the problem seems to be with this particular datastore and these hosts for some reason. It happens with any VM on it with these hosts. There are no error messages in any logs. All hosts are at the same build level.
Export options are the same; other hosts have no problem with this share.
This is strange. Have you looked at this post?
http://communities.vmware.com/message/1006733#1006733
How many VMs are on this datastore?
This one too.
There are a dozen VMs on it. I tried deleting all files except the .vmdk and .vmx on one; same result.
If I create a new VM and then try to add existing disks on these hosts, no valid VMDK files are found.
Take a look at this and see if anything here helps.
http://itsupportjournal.com/2008/12/09/fix-invalid-guest-on-virtual-center/
I got back to looking at this today. After registering a VM from the problematic datastore, this shows up in the hostd log:
Task Created : haTask-ha-folder-vm-vim.Folder.registerVm-30719
Register called: []/vmfs/volumes/7f5dcc1d-6a22a0b0/PSFSDO/PSFSDO.vmx
Foundry_CreateEx failed: Error: You do not have access rights to this file
Failed to load virtual machine.
State Transition (VM_STATE_INITIALIZING -> VM_STATE_INVALID_CONFIG)
Marking VirtualMachine invalid
Event 603 : Registered Unknown 4 on strsviweb1dr.strs.us in ha-datacenter
Task Completed : haTask-ha-folder-vm-vim.Folder.registerVm-30719
Task Created : haTask-ha-root-pool-vim.ResourcePool.updateConfig-30721
Task Completed : haTask-ha-root-pool-vim.ResourcePool.updateConfig-30721
The NFS export permissions are exactly the same on this datastore as the other datastore mounted on this host.
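A quick way to see whether root is effectively being squashed on the mount, regardless of what the export file says (the datastore and folder names here are placeholders):

```
# From the ESX service console, inside an existing VM folder on the mount
cd /vmfs/volumes/<problem_datastore>/<vmdir>
touch testfile   # "Permission denied" here suggests root is squashed on this export
ls -ln           # files owned by the anonymous uid (e.g. 65534) instead of 0 point the same way
```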
Mystery solved. When I added these hosts, the clones of these volumes already existed.
The particular volume I was having a problem with had a qtree (a NetApp term for the directory that the VMs sit in).
We are replicating at the volume level, not the qtree level, so you end up with two exports: one at the volume level and one at the qtree level. I had set permissions at the volume level but not at the qtree (directory), which is the mount point.
What was confusing is that you could create new folders and VMs even though no permissions had been granted; you just couldn't write to the existing replicated folder structure.
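For anyone hitting the same thing: the filer's exports ended up looking roughly like this (NetApp 7-mode style, volume and host names hypothetical):

```
# /etc/exports on the filer -- illustrative only
/vol/vm_dr       -sec=sys,rw=esxdr1:esxdr2,root=esxdr1:esxdr2  # volume-level export: root access granted
/vol/vm_dr/vms   -sec=sys,rw=esxdr1:esxdr2                     # qtree-level export (the actual mount point): no root= here
```

Granting the hosts root access on the qtree export, since that is the path the datastore is actually mounted on, is what was missing.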