We have three PS5000 series arrays right now. The physical switches are independent; they tie back to a pair of 4900Ms for L3. On the Nexus 5020s, all storage ports are configured as access ports on a separate VLAN and are not trunked. Those cables/ports carry only storage traffic, and since everything is on the same VLAN, nothing should require L3 intervention. We also have jumbo frames enabled and spanning tree disabled on those ports (portfast).
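For reference, the per-port setup described above looks roughly like this in NX-OS (the interface name and VLAN number here are examples, not taken from our switches; on the 5020 the jumbo MTU is set system-wide through the network-qos policy rather than per interface):

```
! Storage-facing access port (example names/VLAN)
interface Ethernet1/10
  switchport mode access
  switchport access vlan 17
  spanning-tree port type edge

! Jumbo frames on the Nexus 5000 series are enabled via system QoS
policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216
system qos
  service-policy type network-qos jumbo
```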
As for the vSphere configuration, esxcfg-vswitch -l shows:
Switch Name    Num Ports  Used Ports  Configured Ports  MTU   Uplinks
vSwitch2       64         9           64                9000  vmnic3,vmnic5

  PortGroup Name  VLAN ID  Used Ports  Uplinks
  iSCSI6          0        1           vmnic5
  iSCSI5          0        1           vmnic3
  iSCSI4          0        1           vmnic5
  iSCSI3          0        1           vmnic3
  iSCSI2          0        1           vmnic5
  iSCSI1          0        1           vmnic3
Here you see the 6 port groups (each with its own vmkernel port and IP), and each vmnic is assigned to exactly 3 port groups, as you described. Here is a trimmed esxcfg-vmknic -l output:
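A sketch of the commands that produce this layout, shown for one path (the names and IPs match the output above; the netmask is an assumption, and the one-active-uplink-per-port-group failover override still has to be set in the vSphere Client NIC-teaming tab, since esxcfg-vswitch can't express it):

```shell
# Create a port group for one iSCSI path on the jumbo-frame vSwitch
esxcfg-vswitch -A iSCSI1 vSwitch2

# Add a vmkernel port to it with MTU 9000 (netmask is an example)
esxcfg-vmknic -a -i 172.24.17.40 -n 255.255.255.0 -m 9000 -p iSCSI1

# Bind the vmkernel port to the software iSCSI HBA
esxcli swiscsi nic add -n vmk1 -d vmhba33
```

Repeat for iSCSI2 through iSCSI6 / vmk2 through vmk6, alternating which vmnic is active in each port group.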
Interface  Port Group/DVPort  IP Family  IP Address    MTU
vmk0       VMotion            IPv4       172.24.18.21  1500
vmk1       iSCSI1             IPv4       172.24.17.40  9000
vmk2       iSCSI2             IPv4       172.24.17.41  9000
vmk3       iSCSI3             IPv4       172.24.17.42  9000
vmk4       iSCSI4             IPv4       172.24.17.43  9000
vmk5       iSCSI5             IPv4       172.24.17.44  9000
vmk6       iSCSI6             IPv4       172.24.17.45  9000
I won't paste it all in, but when I run 'esxcli swiscsi nic list -d vmhba33' we do see all the NICs bound appropriately, matching the vSwitch output above. The Dell doc covers most of those pieces, and re-running these commands shows each NIC with packets sent and received (from that esxcli output). It's bizarre to me why we have this problem. If each NIC is passing traffic, then both switches must be working, not just one; if we were sending half our storage packets into a black hole, I'd expect serious issues all the time, which we don't see. It's only when we induce a failure during testing that we see issues. Of course, the point of the testing is to make sure that if we do have a real switch failure we'll be OK.
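Since the problem only shows up under induced failure, one thing worth checking is that every vmkernel path can actually pass full-size jumbo frames to the group address, both normally and with one switch's links pulled. A sketch, assuming your vmkping build supports the -I source-interface flag and using an example group IP (substitute your real one):

```shell
# Ping the EqualLogic group address from each bound vmk with a
# don't-fragment, max-payload jumbo frame (8972 + 28 header bytes = 9000)
for vmk in vmk1 vmk2 vmk3 vmk4 vmk5 vmk6; do
  echo "Testing $vmk"
  vmkping -I $vmk -d -s 8972 172.24.17.10   # 172.24.17.10 = group IP (example)
done
```

If any vmk fails only at the 8972-byte size, that path has an MTU mismatch somewhere; if it fails at any size only during the failover test, that points at the switch-side failover behavior rather than the ESX binding.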