davparker
Contributor

Upgrade to ESX 4 loses iSCSI datastores

We upgraded to vCenter v4, then upgraded the first of 3 ESX 3.5 hosts to version 4. After rebooting, the new ESX 4.0 host cannot see the iSCSI datastores that were previously attached. When I try to add the datastores back from vCenter, I can see the datastore I need to mount, but when I try to mount it, I get the error "Cannot change the host configuration." If I connect directly to the new ESX 4 host, I can mount the iSCSI datastore, but after rebooting, the datastore goes away again. We are using an EMC Celerra for our iSCSI storage. I can see the host is connected from the Celerra console, and I can ping and vmkping the Celerra from the host successfully. I currently have a ticket open with VMware support, but they seem to be stumped at the moment. Is anybody else experiencing this, or does anyone have suggestions on how to troubleshoot?
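
In case it helps, the sort of service console checks I've been going through on the ESX 4 host look roughly like this (the portal IP is a placeholder and the firewall service name is from memory, so treat it as a sketch rather than gospel):

    # confirm the software iSCSI initiator is enabled and its firewall port is open
    esxcfg-swiscsi -q
    esxcfg-firewall -q swISCSIClient

    # confirm VMkernel connectivity to the Celerra iSCSI portal
    vmkping 192.168.1.50

    # force a rescan of the software iSCSI adapter and list the VMFS volumes the host sees
    esxcfg-swiscsi -s
    esxcfg-scsidevs -m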

Thanks,

David

3 Replies
Datto
Expert

You might want to make sure the firewall port for iSCSI is open. You can do this via the VIC on the destination ESX host: go to the host's Configuration tab, then Security Profile, then Properties. After you check the box to let iSCSI traffic through the firewall, you should see activity to open that port in the task status pane at the bottom of the VIC. Give it some time to complete.
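
If you'd rather do it from the service console instead of the VIC, something along these lines should accomplish the same thing (swISCSIClient is the service name as I remember it on ESX classic -- double-check it on your build):

    # check whether the software iSCSI client port (3260 outbound) is already open
    esxcfg-firewall -q swISCSIClient

    # open it if it isn't, then rescan the software iSCSI initiator
    esxcfg-firewall -e swISCSIClient
    esxcfg-swiscsi -s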

Datto

tysonmartin
Contributor

Davparker,

I had this same problem after upgrading my first ESX 3.5 host last week. What I found to be the problem is that one of the physical NICs attached to my host (under the host's Configuration > Networking tab) had found its way into the wrong vSwitch. I removed the NIC from that vSwitch and attempted to add it to the proper vSwitch, with no luck. The simple fix was to restart the host. After the restart, the NIC aligned itself with the vSwitch it was originally configured on, and my iSCSI datastore was reattached to the host.

Possibly a member of the VMware team can shed some light on why this happens during an upgrade, but the workaround got me out of a pinch. My scenario was exactly the same in the following ways: I could ping the hosts, I could connect through the console, and I could even view the volume information from the host that would not connect.
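
For anyone who wants to check this without clicking through the Networking tab, roughly the same thing can be done from the service console; the vmnic and vSwitch names below are made-up examples, not my actual config:

    # list the physical NICs and the vSwitches they are currently uplinked to
    esxcfg-nics -l
    esxcfg-vswitch -l

    # if a NIC landed on the wrong vSwitch, unlink it and relink it to the correct one
    esxcfg-vswitch -U vmnic2 vSwitch0
    esxcfg-vswitch -L vmnic2 vSwitch1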

davparker
Contributor

Hey all,

The problem stemmed from the fact that the iSCSI storage system, in this case a Celerra NS 40, had a bug that caused the iSCSI LUNs to be presented with an incorrect device ID. It is documented in the following EMC publication:

Configuring iSCSI Targets on EMC Celerra P/N 300-004-153 Rev A04 Version 5.6.42

"A bug in Celerra Network Server version 5.5 causes the VMWare ESX Server to use the wrong method for constructing device IDs for Celerra iSCSI LUNs.

Version 5.6 corrects this problem. If you upgrade from version 5.5 to 5.6, the ESX Server misinterprets the older, mismatched device IDs and might see those disks as snapshot LUNs. To correct this problem, you must perform a resignature to generate proper device IDs for the Celerra iSCSI LUNs. Primus article emc189395 provides detailed information."

We originally configured our iSCSI LUNs using Celerra Network Server version 5.5, then upgraded to version 5.6.

The fix was either to resignature the datastores, which would have caused too much downtime for us, or to Storage vMotion the VMDKs to a newly created datastore and then delete and recreate the old one. We went with the second option. It was time consuming because we had 8 datastores, but we didn't have to take any VMs offline. Deleting the datastores was necessary because our 2 remaining hosts were still at ESX 3.5, and you cannot unmount an iSCSI datastore in that version; apparently an oversight by VMware that has been corrected in version 4.0.
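
For anyone who runs into this after us: on an ESX 4 host you can also see and handle the snapshot-LUN situation from the service console. A rough sketch (the datastore label is a placeholder, not one of our real names):

    # list VMFS volumes the host has detected as snapshots/replicas
    esxcfg-volume -l

    # either mount one persistently with its existing signature (so it survives reboots)...
    esxcfg-volume -M iscsi_datastore_01

    # ...or resignature it so it gets a proper new device ID
    esxcfg-volume -r iscsi_datastore_01

Keep in mind that a resignature changes the VMFS UUID, so any VMs registered on that volume have to be re-registered afterwards.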
