I finally dove in and decided to upgrade to 4.1. First I upgraded vCenter from 4.0 to 4.1, and it's running fine. Next I took an HP DL series server (which was running 4.0 before) and installed ESX 4.1 on it from scratch. The host installed just fine, and I then joined it to my vCenter server.
My trouble began when I added some guest VMs to this new 4.1 host; every guest VM became "inaccessible" the moment it was added to the new host's inventory. No reason was given as to why. I checked the NetApp's storage and all seemed to be just ducky. After rebooting the host, restarting the agents, and heck, even restarting the vCenter server, I still couldn't get a single VM to stay "accessible" after being added to this 4.1 host.
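In case it helps anyone hitting something similar, these are the sort of checks I'd run from the ESX service console to rule out NFS connectivity to the NetApp (a rough sketch from memory; the IP below is just an example, not my real filer address):

    esxcfg-nas -l            # list NFS datastores and whether they show as mounted
    esxcfg-vmknic -l         # list VMkernel interfaces and the IPs assigned to them
    vmkping 192.168.10.50    # ping the filer's NFS interface over the VMkernel stack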
The last thing I tried was running vConverter on a random VM. Lo and behold, the newly converted VM could be added to the new 4.1 host without a hitch. I tried adding the original again and no dice; I got the same problem.
So before I proceed with upgrading the rest of my hosts from 4.0 to 4.1, I'm just wondering: is this normal? The VMs are all hardware version 7, so I don't get why running Converter without changing any settings would suddenly let the ESX 4.1 host accept the VM properly. Has anyone else encountered this, or does anyone have any suggestions?
Thanks very much.
OK, the problem was caused by my own brilliance:
When I set up that new 4.1 host I made a typo in the VMkernel IP and didn't catch it. Of course the bogus IP didn't match what I had set up in the NetApp's export permissions. After I corrected the IP, everything seems to be ducky again.
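For anyone who makes the same mistake, the mismatch is easy to spot once you compare the host's VMkernel IP against the filer's exports. A rough sketch of what to look at (the filer commands assume Data ONTAP 7-mode; adjust for your environment):

    # on the ESX host: confirm the VMkernel IP that NFS traffic will use
    esxcfg-vmknic -l

    # on the NetApp filer: see which client IPs the export actually allows
    exportfs
    rdfile /etc/exports

If the VMkernel IP isn't in the export's rw/root list, fix one side or the other and the VMs should register cleanly.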
I will proceed with the upgrade to 4.1!