Dell has recently released version 1.6 of its vCenter plugin, and I have come across a few bugs / annoyances so far. I tried to post feedback on the official Dell blog, but although it said the comment was successfully posted, nothing ever appears online ... so I thought I would share it here for everyone else's benefit.
Official blog for the Dell VC plugin, version 1.6;
I successfully upgraded to the current version, 1.6.0.34 (shown correctly in the administration portal), but the GUI within vCenter still reports version 1.5.1 build 477 - technically not a problem, just not cool.
The next thing that stands out is that if you are not running the latest version of OpenManage, all your hosts are shown as "non-compliant". You can check this from the host console as per below;
Non-compliant;
~ # esxcli software vib list | grep Dell
OpenManage 7.0-0000 Dell VMwareAccepted 2012-10-03
Compliant;
~ # esxcli software vib list | grep Dell
OpenManage 7.1-0000 Dell VMwareAccepted 2012-10-18
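If you have more than a couple of hosts, a quick way to run the same check across all of them is a loop over SSH - a rough sketch, assuming SSH is enabled on the hosts, and the host names below are placeholders for your own:

```shell
# Check the installed OpenManage VIB version on each host over SSH.
# Host names are placeholders - substitute your own.
for h in esx01 esx02; do
  echo "=== $h ==="
  ssh root@"$h" 'esxcli software vib list | grep -i openmanage'
done
```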
What this in effect pushes you to do is install version 7.1, which as I understood it adds support for 12G systems, regardless of the fact that these hosts are only 11G systems. Whilst I can see the benefits of being up to date, I wouldn't class this as a "non-compliant" status.
I then used the fix compliance wizard to update OpenManage to version 7.1, and once again, like previous versions of this plugin, the deployment of OpenManage overwrites all your existing SNMP configuration and replaces it with a single entry pointing back to the VC plugin (using the IP address, not DNS) ... effectively stopping all traps going to your existing OME infrastructure, or any other destinations you have configured. So you need to reconfigure SNMP on every host that you use this method on. I know this is documented behaviour, but in my opinion it would be far better to simply append the additional entry to the existing configuration rather than overwriting the entire configuration.
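Until Dell changes this behaviour, the workaround is to re-add your trap destinations after remediation. On ESXi 5.0 that is done with vicfg-snmp from the vCLI (or the vMA); from 5.1 there is an esxcli equivalent. A sketch - the community name "public" and the OME server address are placeholders for your own values:

```shell
# ESXi 5.0: re-add trap targets with vicfg-snmp from the vCLI / vMA
# (community "public" and ome.example.com are placeholders)
vicfg-snmp --server esx01 -c public -t ome.example.com@162/public
vicfg-snmp --server esx01 --show

# ESXi 5.1+: the same via esxcli, run on the host itself
esxcli system snmp set --communities public \
  --targets ome.example.com@162/public --enable true
esxcli system snmp get
```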
As part of the remediation process the plugin puts the host into maintenance mode and reboots it etc., as expected, BUT it also disables HA by stopping the vSphere High Availability Agent and setting its startup policy to start and stop manually ... effectively disabling HA, as this daemon is no longer restarted with the host as before.
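If you have already been bitten by this, the cleanest fix is to right-click the host in vCenter and choose "Reconfigure for vSphere HA", which reinstalls the FDM agent and resets its startup policy. You can also check and restart the agent from the host shell - a sketch, assuming the agent is installed at its usual /etc/init.d path on ESXi 5.x:

```shell
# Check whether the vSphere HA (FDM) agent is running on the host
/etc/init.d/vmware-fdm status

# Start it again if the plugin has stopped it
/etc/init.d/vmware-fdm start
```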
Hopefully there aren't any other major issues, but these are the few that I have noted so far ... let's also hope that Dell reads this, feeds it back into their product development team, and addresses these issues.
Cheers,
Jon
Small update to get around the plugin issues ... simply download and import the Dell OpenManage version 7.1 Offline Bundle and VIB for ESXi 5.0 into your repository and remediate using VUM. This ensures that your SNMP configuration is not touched and that your hosts are "compliant". It also ensures that HA is not disabled.
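If you'd rather not use VUM, the same offline bundle can also be installed by hand with esxcli - a sketch, where the datastore path and bundle file name are examples only, so use whatever Dell ships for your release:

```shell
# Put the host in maintenance mode first, then install the offline bundle
# (the path and file name below are placeholders)
esxcli software vib install \
  -d /vmfs/volumes/datastore1/OM-SrvAdmin-Dell-Web-7.1.0-ESX50i.zip

# Reboot the host, then confirm the new version is installed
esxcli software vib list | grep Dell
```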
Where can I download OMSA 7.1, please? I can see OMSA 7.1 for Windows and Red Hat only on the Dell web http://en.community.dell.com/techcenter/systems-management/w/wiki/1968.dell-openmanage-downloads-exp.... And the OMSA 7.0 VIB for VMware ESXi 5.0 looks to be unavailable at the moment :smileyconfused:
Great! Thanks a lot
It looks like it's not applicable to ESXi 5.1 😞
I must wait. Anyway, thanks for the help.
Excellent! First PowerEdge is remediating now. Many thanks. You're the Man! Or Santa Claus?
Can I ask some more questions, please? I'm quite confused after a few days playing with OMSA and OpenManage Essentials :smileyconfused:
We have a two-node PowerEdge R610 cluster connected to PowerVault MD3200 DAS. I've remediated OMSA on both R610s and set up SNMP. But I still can't see Health Status, Model, Service Tag, ... for them in OpenManage Essentials (contrary to the MD3200). I've run the Dell Troubleshooting Tool, and OMSA Remote Enablement, SNMP, WSMAN, ... all look well.
And I have no idea how to access the GUI (Server Administrator Home Page). Nothing runs on https://x.x.x.x:1311/. According to the Dell OpenManage Server Administrator Installation Guide, the Dell OpenManage CIM OEM provider should be enabled by default in ESXi 5.x (on the other hand, I have no idea of the esxcli equivalent of vicfg-advcfg). I'm running through the manuals there and back, but maybe I'm blind.
Hi Thomas,
We have a two-node PowerEdge R610 cluster connected to PowerVault MD3200 DAS. I've remediated OMSA on both R610s and set up SNMP. But I still can't see Health Status, Model, Service Tag, ... for them in OpenManage Essentials (contrary to the MD3200). I've run the Dell Troubleshooting Tool, and OMSA Remote Enablement, SNMP, WSMAN, ... all look well.
I have found OMSA to be very unreliable with regards to discovery and inventory, and have needed to delete the current entries and rediscover / re-inventory both the iDRAC and the servers. I would recommend doing the following;
** let me know if you need any help with any of the above steps.
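As a quick sanity check before re-running discovery, you can confirm from the OME server (or any box with net-snmp installed) that the host actually answers SNMP - a sketch, assuming the community is "public"; 1.3.6.1.4.1.674 is Dell's registered enterprise OID:

```shell
# Walk the Dell enterprise subtree on the ESXi host; any output at all
# means the OMSA SNMP agent is reachable ("public" and the host name
# are placeholders for your own values)
snmpwalk -v 2c -c public esx01 1.3.6.1.4.1.674
```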
And I have no idea how to access the GUI (Server Administrator Home Page). Nothing runs on https://x.x.x.x:1311/. According to the Dell OpenManage Server Administrator Installation Guide, the Dell OpenManage CIM OEM provider should be enabled by default in ESXi 5.x (on the other hand, I have no idea of the esxcli equivalent of vicfg-advcfg). I'm running through the manuals there and back, but maybe I'm blind.
This is only relevant on Windows servers with the agents installed, so you won't see this on the hypervisor.
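On the vicfg-advcfg question: the esxcli equivalent lives under `esxcli system settings advanced`. A sketch - the exact option name for the OEM provider may differ between OMSA releases; `/UserVars/CIMoemProviderEnabled` is the one usually cited for 5.x:

```shell
# Show the advanced option controlling the Dell OEM CIM provider
# (the option name is the commonly cited one - verify it exists first)
esxcli system settings advanced list -o /UserVars/CIMoemProviderEnabled

# Enable it (1 = enabled), then restart the CIM broker
esxcli system settings advanced set -o /UserVars/CIMoemProviderEnabled -i 1
/etc/init.d/sfcbd-watchdog restart
```

As for the 1311 home page: if memory serves, the ESXi VIB does not serve a web GUI itself - you browse from a separate management station running the Server Administrator Web Server and connect to the ESXi host as a remote node.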
Cheers,
Jon
Hi Jon, thanks for your answer.
I can't re-install ESXi at the moment 'cause the cluster is in production use and I can only maintain it sometimes through the night. So I did what I can:
I'll have to check and probably upgrade the iDRAC. It doesn't look to be working well, and the Agent Global Status is <?>.
Thanks again for your help.
Tomas