VMware Cloud Community
qc4vmware
Virtuoso

VcPlugin.getAllVirtualMachines crapping out and/or large array handling blowing up post 5.5 upgrade

I just updated our 5.1 environment to 5.5. We have 9 vCenters and 19,000 virtual machines reporting into this vCO. I bumped the Java heap to an initial size of 4096 MB and a maximum of 6144 MB. I am trying to use the VcPlugin.getAllVirtualMachines() method, which used to work with the 5.1 plugin; now it just goes off into the weeds. To work around it I tried calling getAllVirtualMachines() on each sdkConnection instead and concatenating the results into one array that I can search to locate a VM by name (roughly the approach sketched at the end of this post). Now when I start to traverse the array it craps out after a couple of hundred items. I see these errors in the wrapper.log file:

[VimSession] Destroying filter 'session[5222d81f-7b94-08c5-3eb1-7a8b6763f4b4]525fc1be-62ec-dc63-e7ce-575610553e7b' from domain\user@https://someserver.domain.com:443/sdk/Main#1e2a5111 useIS: true

[FiltersManager] destroyPropertyFilter() [user\domain@https://server.domain.com:443/sdk/Main#1e2a5111 useIS: true, Nb filters used 1] --> Filter 'PropertyFilter<session[5222d81f-7b94-08c5-3eb1-7a8b6763f4b4]525fc1be-62ec-dc63-e7ce-575610553e7b>' destroyed

[VimSession [domain\user@https://server.domain.com:443/sdk/Main#1e2a5111 useIS: true] - UpdateThreadIs] INFO  {user@domain.com:QCcpuCoreThreadCountCsv:11d8be17-f693-457e-92e8-16ff832afd4c:8ab5cc694451e707014451ee6a230005} [VimSession] createFilter() --> Creating filter for '3557' VimManagedObjects

2014-02-20 16:56:55.082-0800 [VimSession [domain\user@https://server.domain.com:443/sdk/Main#1e2a5111 useIS: true] - UpdateThreadIs] INFO  {user@domain.com:QCcpuCoreThreadCountCsv:11d8be17-f693-457e-92e8-16ff832afd4c:8ab5cc694451e707014451ee6a230005} [VimSession] created filter 'session[5222d81f-7b94-08c5-3eb1-7a8b6763f4b4]522e386e-7793-9533-1002-dbc23f12d07e' on domain\user@https://server.domain.com:443/sdk/Main#1e2a5111 useIS: true

2014-02-20 16:56:55.082-0800 [VimSession [domain\user@https://server.domain.com:443/sdk/Main#1e2a5111 useIS: true] - UpdateThreadIs] INFO  {user@domain.com:QCcpuCoreThreadCountCsv:11d8be17-f693-457e-92e8-16ff832afd4c:8ab5cc694451e707014451ee6a230005} [PluginCacheManager] Removing cache element 'vm-1148' (1658147079) from 'domain_user.service_https___server.domain.com_443_sdk_Main_1e2a5111 useIS_ true'/1814380472

2014-02-20 16:56:55.082-0800 [VimSession [domain\user@https://server.domain.com:443/sdk/Main#1e2a5111 useIS: true] - UpdateThreadIs] INFO  {user@domain.com:QCcpuCoreThreadCountCsv:11d8be17-f693-457e-92e8-16ff832afd4c:8ab5cc694451e707014451ee6a230005} [PluginCacheManager] Removed cache element 'vm-1148' (1658147079) from 'domain_user.service_https___server.domain.com_443_sdk_Main_1e2a5111 useIS_ true'

2014-02-20 16:56:55.082-0800 [VimSession [domain\user@https://server.domain.com:443/sdk/Main#1e2a5111 useIS: true] - UpdateThreadIs] INFO  {user@domain.com:QCcpuCoreThreadCountCsv:11d8be17-f693-457e-92e8-16ff832afd4c:8ab5cc694451e707014451ee6a230005} [PluginCacheManager] Ghost cache element 'vm-1148' (1658147079) in 'domain_user.service_https___server.domain.com_443_sdk_Main_1e2a5111 useIS_ true'/1814380472

2014-02-20 16:56:55.300-0800 [VimSession [domain\user@https://server.domain.com:443/sdk/Main#1e2a5111 useIS: true] - UpdateThreadPc] WARN  {user@domain.com:QCcpuCoreThreadCountCsv:11d8be17-f693-457e-92e8-16ff832afd4c:8ab5cc694451e707014451ee6a230005} [UpdateThreadPc] processUpdateSet() --> Unable to find 'vm-1148' in cache

Has anyone seen anything similar to this? Is there something I need to tweak in the cache? Something in the Java configuration? Have I gone beyond some maximum and need to split this out into multiple vCOs (which would suck)? This was working fine in 5.1.
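For reference, here is roughly what the workaround script looks like. This is only a minimal sketch: the input variable vmName and the simple name comparison are my own illustration, and I'm enumerating the vCenters via VcPlugin.allSdkConnections; the getAllVirtualMachines() per-connection call is the one I described above.

// Build one combined array of VMs by querying each vCenter connection separately
var allVms = [];
var connections = VcPlugin.allSdkConnections;
for (var c = 0; c < connections.length; c++) {
    var vms = connections[c].getAllVirtualMachines();
    if (vms != null) {
        allVms = allVms.concat(vms);
    }
}

// Traversal of the combined array; this is the part that starts failing
// after a couple of hundred items
var match = null;
for (var i = 0; i < allVms.length; i++) {
    if (allVms[i].name == vmName) {
        match = allVms[i];
        break;
    }
}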

Paul

1 Solution

Accepted Solutions
qc4vmware
Virtuoso

So it looks like my issue was related to the JVM configuration. I am not sure whether the same changes would be appropriate on the appliance, as I only have the Windows version deployed. In wrapper.conf I already had wrapper.java.maxmemory=6144 set, since I was pretty sure I was hitting the previous 4096 MB maximum. Even with that set, the process did not seem to use more than about 4096 MB, and that is also about the point where vCO started to have issues. We added another line to wrapper.conf like so:

wrapper.java.additional.23="-Xmx6144m"

where the ".23" was the next available number in the line of switches being passed to the initialization of the jvm .  I'm not sure if this is a java thing or something that will be fixed in an update to better handle the wrapper.java.maxmemory option in that file but it did seem to fix things for me.  Now I see the memory balloon up to about 5.5GB and the workflows are completing as they did previously.


2 Replies
cdecanini_
VMware Employee

I would suggest opening an SR to make sure this issue is addressed.

If my answer resolved or helped you, please mark it as Correct or Helpful to award points. Thank you! Visit http://www.vcoteam.info & http://blogs.vmware.com/orchestrator for vCenter Orchestrator tips and tutorials - @vCOTeam on Twitter
qc4vmware
Virtuoso

So it looks like my issue was related to the JVM configuration. I am not sure whether the same changes would be appropriate on the appliance, as I only have the Windows version deployed. In wrapper.conf I already had wrapper.java.maxmemory=6144 set, since I was pretty sure I was hitting the previous 4096 MB maximum. Even with that set, the process did not seem to use more than about 4096 MB, and that is also about the point where vCO started to have issues. We added another line to wrapper.conf like so:

wrapper.java.additional.23="-Xmx6144m"

where the ".23" was the next available number in the line of switches being passed to the initialization of the jvm .  I'm not sure if this is a java thing or something that will be fixed in an update to better handle the wrapper.java.maxmemory option in that file but it did seem to fix things for me.  Now I see the memory balloon up to about 5.5GB and the workflows are completing as they did previously.
