I have 3 LUNs running on my farm. After doing a service console memory upgrade on one server and then rebooting it, all my LUNs have gone missing.
All my VMs still run perfectly fine on the other ESX servers, but I'm unable to find and mount the volumes on the server I did the memory upgrade on.
I have tried rescanning the storage adapters without any luck.
Does anyone have any ideas or something I can try to fix this? I don't want to keep the server in maintenance mode for too long.
We have FC switches connected to an IBM DS4700 storage array.
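The rescan was from the Storage Adapters screen; the service console equivalent would be something like this (vmhba1 is an assumption here, substitute your adapter name):

esxcfg-rescan vmhba1   # rescan this HBA for new LUNs and VMFS volumes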
Reading the article now, but fdisk -l and esxcfg-mpath -l did not show the LUNs and paths.
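For completeness, the checks were along these lines, plus one more that might be worth trying (vmhba1 again being just my adapter name):

fdisk -l              # disks/partitions the service console can see
esxcfg-mpath -l       # multipath state for each LUN
esxcfg-vmhbadevs -m   # vmhba device to VMFS volume mapping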
How about scanning for the LUN via the adapter BIOS? If your HBA is unable to detect the LUN during BIOS, this is not an ESX problem.
VMware newbie..
Zen Systems Sdn Bhd
Changing the LVM settings had no effect. I'll see if I can manage to scan via the adapter BIOS, but I just don't see why that would be the cause.
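By LVM settings I mean the usual snapshot-LUN advanced options (assuming those are the ones the article covers):

esxcfg-advcfg -g /LVM/DisallowSnapshotLun   # current value of the snapshot-LUN guard
esxcfg-advcfg -g /LVM/EnableResignature     # current value of the resignature option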
I started this morning by putting the ESX server into maintenance mode; it migrated the VMs to the other hosts and still had full access to the LUNs.
Then I did the simple service console memory upgrade, changed some numbers, and rebooted the server, and now they won't come back up again.
Nobody has been messing with any cables or anything; it was just a simple reboot.
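For context, that memory change is the standard one: the VI Client setting rewrites the mem= kernel parameter in /boot/grub/grub.conf, so the kernel line ends up looking roughly like this (the line below is illustrative, not copied from my host):

kernel /vmlinuz ro root=/dev/sda2 mem=800M   # illustrative; was mem=272M before the upgrade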
Can you still see the HBAs in the VI Client under Storage Adapters?
I suppose you have done a full power-off of the system and then a boot? (It should not make a difference, but it is easy to do.)
Check your /var/log/vmkernel log for any warnings or errors.
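For example, something like:

grep -i warning /var/log/vmkernel | tail -50   # most recent warnings
grep -i error /var/log/vmkernel | tail -50     # most recent errors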
I can see the LUNs on the other ESX servers in the VI Client, and I can see the LUNs when going to Datastores in VirtualCenter, just not on this specific host.
/var/log/vmkernel shows a lot of warnings about the LegacyMP plugin not being able to claim paths, but all the articles I found here about this were related to Dell servers, and we use IBM.
I am more of an HP guy than an IBM one, but I would check the IBM support site for your server model for indications of issues like the ones you are having. Contact IBM, as this sounds like a system-specific bug that hopefully they are aware of. Obviously, perform no actions on the other hosts until you have an answer on this issue.
Sorry I cannot offer more help
No luck there yet. I have opened a Service Request with VMware to see if they can help me sort out the issue.
It could also be that the QLA2432 adapters have failed to initialize properly on the reboot, since their lights are just flashing after the server comes back up, but these servers were all delivered as complete, validated systems precisely to avoid compatibility issues.
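I'll check the driver side from the service console next, with something like this (assuming the standard QLogic module names on ESX 3):

esxcfg-module -l | grep -i qla   # is the QLogic driver module loaded?
dmesg | grep -i qla              # any initialization or link-up messages?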
Jun 5 08:52:25 esx02 vmkernel: 0:17:09:35.788 cpu2:1040)SCSI: 861: GetInfo for adapter vmhba1, , max_vports=64, vports_inuse=0, linktype=0, state=0, failreason=2, rv=0, sts=0
So vmkernel reports failreason=2. Looking that up in the book, it states:
2: Fabric does not support NPIV, please enable NPIV capability on the Fibre Channel Port Switch.
I can't find anywhere in the switch config where I can enable such a thing. And why now?
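For reference, if these happen to be Brocade switches (an assumption on my part), NPIV is a per-port toggle in FOS:

portcfgshow 5          # check the NPIV capability line for port 5
portcfgnpivport 5, 1   # enable NPIV on port 5 (port number is just an example)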