I am installing new ESX 4.1 servers. I have 12 network ports across 3 cards (the built-in ports plus 2 Intel cards). I am thinking of having 4 vSwitches with 3 NICs per switch: one for management, one for VM network access, one for NFS/iSCSI, and one for vMotion. I am planning on creating VLANs for the NFS/iSCSI and vMotion networks. The VM network access and management will be on the regular local network.
1. Does this look correct?
2. Do I need to set up any link aggregation on the physical HP switch for each ESX vSwitch, or is putting them in a separate VLAN with NIC teaming enabled on ESX enough?
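For reference, here is a rough sketch of the layout described above using the classic ESX service-console CLI. The vmnic numbering, port group names, and VLAN IDs are all assumptions for illustration; adjust them to match your hosts and switch config.

```shell
# Assumed NIC numbering (vmnic0-vmnic11) and example VLAN IDs -- adjust as needed.

# vSwitch0: management -- typically created at install time; add extra uplinks
esxcfg-vswitch -L vmnic1 vSwitch0
esxcfg-vswitch -L vmnic2 vSwitch0

# vSwitch1: VM network access
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1
esxcfg-vswitch -L vmnic4 vSwitch1
esxcfg-vswitch -L vmnic5 vSwitch1
esxcfg-vswitch -A "VM Network" vSwitch1

# vSwitch2: NFS/iSCSI storage, tagged with example VLAN 20
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic6 vSwitch2
esxcfg-vswitch -L vmnic7 vSwitch2
esxcfg-vswitch -L vmnic8 vSwitch2
esxcfg-vswitch -A "Storage" vSwitch2
esxcfg-vswitch -v 20 -p "Storage" vSwitch2

# vSwitch3: vMotion, tagged with example VLAN 30
esxcfg-vswitch -a vSwitch3
esxcfg-vswitch -L vmnic9 vSwitch3
esxcfg-vswitch -L vmnic10 vSwitch3
esxcfg-vswitch -L vmnic11 vSwitch3
esxcfg-vswitch -A "vMotion" vSwitch3
esxcfg-vswitch -v 30 -p "vMotion" vSwitch3
```

With VLAN tagging done on the port groups like this (VST mode), the physical switch ports just need to be trunked with the right VLANs allowed; no link aggregation is required for basic ESX NIC teaming.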
Thanks
Nick