I'm upgrading a vSphere cluster (2 hosts) from 4.0 to 5.5U1.
The current situation is the following:
created a fresh new 5.5 vCenter Server on Windows 2008
upgraded one host to 5.5U1
the second host is still on 4.0
We are experiencing strange VM behavior.
There is only one shared datastore (VMFS 3.33) with 3 extents forming a 3.5 TB datastore.
The 4.0 host correctly sees the datastore and all 3 extents.
The 5.5U1 host sees the datastore with the right size, but doesn't see 2 of the extents.
If I try to add new storage from the 4.0 host, I don't see any new LUN available.
If I try to add new storage from the 5.5 host, I see 2 LUNs from the storage.
Some VMs start and run without problems (whether registered on the 4.0 or the 5.5 host), but others don't start on the 5.5 host while working fine on the 4.0 host; I cannot change their .vmx configuration (device busy), and there is various other strange behavior.
I think the problem is on the storage side (extents, VMFS version, etc.).
How can I troubleshoot and solve this situation?
Thanks in advance
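As a first diagnostic step, the extent layout each host actually reports can be compared from the service console / ESXi shell. This is only a sketch; "SharedDS" is a placeholder datastore name, and the esxcli namespace below exists on 5.x hosts only:

```shell
# Show the extents backing the datastore as THIS host sees them
# (works on both ESX 4.0 and ESXi 5.5; run on each host and compare):
vmkfstools -Ph /vmfs/volumes/SharedDS

# On the 5.5U1 host only: list every VMFS extent the host has mounted,
# one line per device backing each datastore:
esxcli storage vmfs extent list

# On the 4.0 host: show the VMFS-volume-to-device mappings:
esxcfg-scsidevs -m
```

If the 5.5U1 host lists fewer extent devices for the datastore than the 4.0 host does, the mismatch is confirmed at the host level rather than in vCenter.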
I never recommend working with datastore extents; most customers have more problems than successes with them.
I would now recommend creating a new VMFS3 datastore (or multiple if needed), Storage vMotioning all the virtual machines to the new datastore, and then upgrading the second ESXi server.
Then create a VMFS5 datastore (or multiple if needed) and Storage vMotion the virtual machines to the VMFS5 datastore...
BTW, with VMFS5 you can create datastores bigger than 2 TB, but I would not recommend it because of performance.
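Assuming a spare LUN is available, the steps above can be sketched from the CLI. Everything here is a placeholder (the NAA device ID, partition number, datastore names, vCenter address and credentials), and svmotion option spelling varies slightly between vSphere CLI versions:

```shell
# 1. Create a new VMFS3 datastore on a fresh LUN, on its first partition
#    (NAA id and partition number are placeholders):
vmkfstools -C vmfs3 -S NewDS /vmfs/devices/disks/naa.600000000000000:1

# 2. Relocate a VM's files to the new datastore with svmotion from the
#    remote vSphere CLI (server, user, datacenter, and paths are examples):
svmotion --server vcenter.local --username administrator \
         --datacenter DC1 \
         --vm "[OldDS] vm1/vm1.vmx:NewDS"
```

Repeat step 2 per VM until the old extent-based datastore is empty, then it can be detached and recreated.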
Have a look at this calculator:
If you understand this one, you know how big your datastores can be...
I know the capabilities of VMFS5, but I'm working in an environment with almost zero spare resources: the customer has an Essentials Kit (without any of the useful features), and I have no space to move the VMs, delete the current storage, recreate it, and move everything back.
I must work on a production environment and be very careful not to damage any VM.
The only options are to find a way to fix the issue, or to roll back to 4.0.
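Before rolling back, one non-destructive check worth trying on the 5.5U1 host is a full rescan plus a VMFS metadata refresh, then inspecting the partition table of one of the "missing" extent LUNs. A sketch (the NAA id is a placeholder, not a value from this thread):

```shell
# Rescan all storage adapters on the 5.5U1 host for new/changed LUNs:
esxcli storage core adapter rescan --all

# Re-read VMFS volume metadata without unmounting anything:
vmkfstools -V

# Inspect the partition table of one of the missing extent LUNs;
# a valid extent should show a partition of type "vmfs":
partedUtil getptbl /vmfs/devices/disks/naa.600000000000000
```

If the rescan still doesn't surface the extents, comparing LUN presentation/masking on the storage array for the two hosts would be the next step.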