Do you have MLAG enabled on your switches? If not, you shouldn't use a LAG in this configuration, as the LACP protocol is not suited for situations where adapters are connected to different switches.
Put all the active NICs in use .. no LAG required..
Use Route Based on Physical NIC Load as the load balancing policy..
This is very simple and equally good..
Check the link below for more explanation.
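To illustrate why Route Based on Physical NIC Load (LBT) needs no switch-side configuration: the vDS periodically checks each uplink's utilization and simply moves a port's uplink assignment when one uplink gets too busy. The sketch below is illustrative only, not ESXi code — the 75% threshold matches VMware's documented behavior, but the data structures, names, and rebalancing details are invented for this example:

```python
# Illustrative sketch of Load-Based Teaming (LBT) rebalancing.
# NOT the ESXi implementation: the real logic runs inside the vDS.
# The 75% utilization threshold reflects VMware's documented behavior;
# everything else here is a hypothetical simplification.

LBT_THRESHOLD = 0.75  # rebalance when an uplink's utilization exceeds 75%

def uplink_utilization(assignments, port_load, capacity):
    """Return utilization (0.0-1.0) per uplink for a port->uplink mapping."""
    util = {u: 0.0 for u in capacity}
    for port, uplink in assignments.items():
        util[uplink] += port_load[port] / capacity[uplink]
    return util

def rebalance(assignments, port_load, capacity):
    """Move one port off any uplink that exceeds the threshold."""
    util = uplink_utilization(assignments, port_load, capacity)
    for uplink, load in util.items():
        if load > LBT_THRESHOLD:
            # Pick the busiest port on the congested uplink...
            port = max((p for p, u in assignments.items() if u == uplink),
                       key=lambda p: port_load[p])
            # ...and move it to the least-loaded uplink.
            target = min(util, key=util.get)
            if target != uplink:
                assignments[port] = target
    return assignments

# Two 10 Gbit/s uplinks; three VM ports with offered load in Gbit/s.
capacity = {"vmnic0": 10.0, "vmnic1": 10.0}
port_load = {"vm-a": 6.0, "vm-b": 3.0, "vm-c": 1.0}
assignments = {"vm-a": "vmnic0", "vm-b": "vmnic0", "vm-c": "vmnic1"}

rebalance(assignments, port_load, capacity)
print(assignments)
```

The key point: because only the ESXi side decides which uplink a port uses at any moment, each physical switch port stays an ordinary access/trunk port — no port channel, no LACP, no MLAG required upstream.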
As hussainbte recommended, I would also strongly vote against using any form of LAG if you are entitled to use a vDS. LBT is the far better option: it requires no upstream physical configuration beyond regularly provisioned switch ports, and it offers even better load-distribution benefits than LACP or other LAG-based policies.
Ok, so my assumption that I have to create an additional vDS with normal uplinks instead of LAG uplinks for those servers was correct.
I'll create a copy of the existing vDS, change the uplink type, and migrate a few hosts some time next week.
Hopefully everything will then work as intended.
Thanks a lot for your answers!
I configured a LAG on the vDS because all uplinks on the newer servers are connected to the same switch.
That's fine, but it's not required; even if they are all connected to the same switch, that doesn't require a LAG.
Ok, I thought so because without LACP uplinks I had trouble with packet loss and high latency, which disappeared after I set it to LACP and changed the uplink type.
As hussainbte mentioned, you have to change your Teaming and failover policy. When LACP or EtherChannel is configured on your switches but the wrong failover policy is set on the vDS, packets can leave on the wrong uplink, so you get connectivity loss.
Configure the Teaming and failover policy to Route Based on IP Hash:
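To see why Route Based on IP Hash must be paired with a static EtherChannel/LACP on the physical switch: the uplink is chosen deterministically from a hash of the source and destination IP addresses, so the switch has to treat all the uplinks as one logical link or it will see the same MAC flapping between ports. The sketch below shows the principle only — ESXi's actual hash function is internal, and this XOR-based scheme is a hypothetical stand-in:

```python
# Illustrative sketch of Route Based on IP Hash uplink selection.
# NOT ESXi's actual hash: this XOR-of-addresses scheme just shows the
# principle that a given source/destination IP pair always maps to the
# same uplink, which is why the physical switch must bundle the uplinks
# into one port channel (static EtherChannel or LACP).

import ipaddress

def select_uplink(src_ip, dst_ip, uplinks):
    """Pick an uplink deterministically from the src/dst IP pair."""
    src = int(ipaddress.ip_address(src_ip))
    dst = int(ipaddress.ip_address(dst_ip))
    return uplinks[(src ^ dst) % len(uplinks)]

uplinks = ["vmnic0", "vmnic1"]

# The same conversation always hashes to the same uplink:
a = select_uplink("10.0.0.5", "10.0.1.20", uplinks)
b = select_uplink("10.0.0.5", "10.0.1.20", uplinks)
assert a == b

# Different destinations may hash to different uplinks, which is
# what spreads load across the bundle:
print(select_uplink("10.0.0.5", "10.0.1.20", uplinks))
print(select_uplink("10.0.0.5", "10.0.1.21", uplinks))
```

This also explains the packet loss described earlier: with a port channel on the switch but a non-IP-hash policy on the vDS (or vice versa), the two sides disagree about which link a flow belongs to, and traffic is dropped or misdirected.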