I'm seeing this as well. Fresh install of vROps on a 4-node cluster, and I'm getting some very strange network stats (dropped packets / network congestion reported even though the physical interfaces are showing virtually no load), plus the following alert under 'Recommendations':

"5 objects impacted | 1 Recommendation(s): Check if the packet drops are due to high CPU resource utilization or uplink BW utilization. Use vMotion to migrate the virtual machine that the port is attached to to a different host."

Going into 'Details', it shows each of the 5 objects (they are all vDS port groups) experiencing packet drops of 100%.

I suspect an issue with the collector. This environment is our first configured with NSX-v networking (which may or may not be relevant), but the problem was occurring before adding the v2.0 NSX management pack (and still occurs after it).

I have another issue caused by this: the network monitor in vROps reports duplicate IP addresses between hosts in the cluster, because they all have an interface with the 169.254.1.1 address assigned to communicate with the vmservice-vmknic-pg port group for access to the guest introspection virtual appliances used by NSX. I was able to rectify this by manually re-assigning the host interfaces to non-conflicting 169.254.1.x addresses, but this isn't a long-term fix and needs to be resolved within vROps and/or the NSX pack.
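For anyone doing the same manual workaround, the address bookkeeping is the fiddly part: each host needs a distinct 169.254.1.x address, and 169.254.1.1 must be avoided entirely. A minimal sketch of that planning step (the hostnames and starting offset are made up for illustration; actually applying the addresses still has to be done per host via the vSphere client or esxcli):

```python
def assign_link_local_addresses(hostnames, subnet="169.254.1.", start=10):
    """Plan a unique 169.254.1.x address per host so the vmservice-vmknic-pg
    interfaces no longer collide on 169.254.1.1.

    Returns a dict mapping hostname -> address. Raises if the /24 cannot
    accommodate all hosts from the chosen starting offset.
    """
    if start + len(hostnames) - 1 > 254:
        raise ValueError("not enough addresses in the /24 for all hosts")
    # Sort so the assignment is deterministic across runs.
    return {host: f"{subnet}{start + i}"
            for i, host in enumerate(sorted(hostnames))}

# Hypothetical 4-node cluster from the post above.
plan = assign_link_local_addresses(["esx01", "esx02", "esx03", "esx04"])
for host, addr in plan.items():
    print(f"{host} -> {addr}")
```

This only removes the duplicate-IP symptom, not the underlying conflict between the NSX-deployed default and what vROps expects; as noted above, the real fix has to land in vROps and/or the NSX pack.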
I am seeing the exact same symptoms. My environment is only a lab (I'm POCing vROps 6) with a very simple vSphere-with-vDS design. I ping across any of the port groups and don't see a single lost packet in any scenario I could think to simulate: ESXi hosts to VMs, storage, network devices, etc.
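When running a batch of ping tests like this, it helps to pull the loss figure out programmatically rather than eyeballing dozens of summaries. A small sketch that parses the percentage from a ping summary line (the regex assumes the common Linux/ESXi-style "X% packet loss" output format):

```python
import re

def packet_loss_percent(ping_output):
    """Extract the packet-loss percentage from ping output containing a
    summary line such as:
    '10 packets transmitted, 10 received, 0% packet loss, time 9012ms'
    """
    m = re.search(r"([\d.]+)% packet loss", ping_output)
    if m is None:
        raise ValueError("no packet-loss summary found in ping output")
    return float(m.group(1))

# Example summary line from a clean run across a port group.
summary = "10 packets transmitted, 10 received, 0% packet loss, time 9012ms"
print(packet_loss_percent(summary))  # 0.0
```

If every portgroup-to-portgroup path reports 0% here while vROps claims 100% drops, that is a strong hint the problem is in the collector rather than the network.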
When I have some time I will port mirror and use Wireshark to see what's traversing the network. I do have a vShield Edge device in place, connected to a few of the port groups, but even the PGs not connected to the Edge, like vMotion and vSAN, are flagged as losing packets.
Odd for sure.
I thought it was me, to the point that I actually rebuilt my entire lab and changed my networking config, and still had the same issues (although nothing in the switch was indicating contention, dropped packets, etc.).
My lab environment is a simple 3-host cluster with five running VMs, nothing network-heavy at all, so I was starting to wonder about a bug.
I will chime in here as well. I am seeing this issue too: 59 ports report dropped packets, but checking the VMs connected to those ports shows no receive/transmit packets dropped. Very strange. Seems like a bug to me too.
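For what it's worth, a drop percentage is normally derived from the dropped vs. delivered packet counters on the port, and an idle port is the degenerate case: with zero traffic in the sample interval, a naive dropped/(dropped + delivered) division is undefined, and a collector that maps that case to 100% would produce exactly the false positives described in this thread. That is purely a guess about the cause, not confirmed collector behaviour; a sketch of the arithmetic with the safe handling:

```python
def drop_rate(dropped, delivered):
    """Drops as a percentage of all packets seen on a port in an interval.

    With no traffic (dropped == delivered == 0), report 0% rather than
    letting 0/0 turn into a bogus 100% figure. The counter names here are
    illustrative, not the actual vROps collector internals.
    """
    total = dropped + delivered
    if total == 0:
        return 0.0  # idle port: nothing was dropped because nothing was sent
    return 100.0 * dropped / total

print(drop_rate(0, 5000))  # 0.0 -> healthy, busy port
print(drop_rate(0, 0))     # 0.0 -> idle port, not 100%
```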
I am also only seeing this if I log in using the "admin" user. Logging in with a vCenter Administrator does not show this alarm. Strange.
I forgot to mention, my network switch isn't showing any dropped packets or errors either. I am just doing a deployment into another environment now to see if I experience the same thing there (the test deployments so far have all been extra-small).
I am seeing similar problems and recommendations being reported via vROps 6 - dropped packets, ports going down etc. I have checked all the networking components, regular vSphere reporting and even the VMs which have non-redundant connections to the ports that are supposedly down - no problems to be found anywhere. This is annoying.
For my setup, vROps only complains about uplinks and distributed ports from a dvSwitch hosting a non-routable subnet with VMs that vROps cannot reach but I don't expect it needs to.
Seeing the same thing, with HP Virtual Connect networking/HP blades and a dvSwitch on vSphere 5.5/vROps 6.0. No evidence from the VMware tools (vSphere Client, resxtop, etc.) or from the upstream Cisco switches that anything is amiss. Has anyone tried logging a case with VMware? If not, I might do so, to see if they can either help investigate the problem or eliminate it as a known issue...
Interesting. That is the exact same setup we have. The only cluster receiving these errors is the one running on HP BL460c Gen8 servers with Virtual Connect and vSphere 5.5/vROps 6.0.
Have not opened a case with VMware about this yet.
Well, I've logged a case and we'll see what they come back with! I'll keep you posted.
I have just raised a ticket as well. I mentioned it to my BCS contact last week and it wasn't anything he was aware of at the time.
Again let's see what happens.
So that was my work account.
I got the following back from VMware today.
"The project management team confirmed that this metric is incorrect and as long as your environment is not showing issues, can be disregarded.
This alert is planned to be removed in the next release of Operations Manager 6.0.1"
So we are just waiting on the next update and this will be resolved.
Update from VMware: Initially they pointed me in the direction of this, which may affect some people: http://kb.vmware.com/kb/2052917
However, we had already installed the patches in question. I mentioned the comments you'd received, and they replied:
"After analysing the logs provided and performing deep research into the symptoms reported, I have found that this is a known bug in vROps 6.0: there are times when packet drops can be reported by a vDS due to normal network behaviour, which is leading to false positives in some cases."
So there you have it. I've kept a 5.8 instance of vCOps running alongside 6.0 as I'm still not entirely sure I know how to use it... Hopefully 6.0.1 will iron out a few creases.
Same here with vROps 6.0. VMware support's resolution: this alert will be disabled in the next version of Operations Manager.
Just rolled out vROps 6.0.1 and I still see this error.
The release notes don't mention this being fixed either, so I didn't expect it to be, but I had my hopes.