Hey everyone, I'm trying to get some information on design best practices for distributed switches using ESXi 4.1. Several things have changed with 4.1 (Network I/O Control, etc.), so most of the material I've found out there relates to 4.0. Here are some specific areas where I'm looking for more info:
-do you now use 1 big vDS with I/O control, or separate vDSs for different traffic types?
-do you leave your vMotion and management traffic on a standard vSwitch in case vCenter goes down? (I know existing traffic will still flow on a vDS, but you lose the ability to make changes)
I have a fairly simple environment at the moment with 4 ESXi 4.1 hosts each with 4 x 1Gb ports and 4 x 10Gb ports. My SAN storage is 10Gb as well and all tied together with Cisco Nexus switching and some 3750-X's.
Anyways here are my initial thoughts for design:
-2 x 1Gb ports for management traffic on a standard vSwitch
-2 x 1Gb ports for vMotion traffic on a standard vSwitch (could be the same vSwitch as management or a separate one)
-2 x 10Gb ports for iSCSI traffic to the SAN
-2 x 10Gb ports for network traffic
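In case it's useful for comparison, here's roughly how I'd script the standard-vSwitch pieces of that layout from the vSphere CLI. This is just a sketch: the hostname, vmnic/vmk numbers, and the software iSCSI adapter name (vmhba33) are placeholders, so check your own with vicfg-nics -l and the storage adapters view before running anything.

```shell
# Management vSwitch with two 1Gb uplinks
# (esxi01, vmnic0/vmnic1 are placeholders for your host and NICs)
vicfg-vswitch --server esxi01 -a vSwitch1
vicfg-vswitch --server esxi01 -L vmnic0 vSwitch1
vicfg-vswitch --server esxi01 -L vmnic1 vSwitch1
vicfg-vswitch --server esxi01 -A Management vSwitch1

# Same pattern for vMotion on its own vSwitch
vicfg-vswitch --server esxi01 -a vSwitch2
vicfg-vswitch --server esxi01 -L vmnic2 vSwitch2
vicfg-vswitch --server esxi01 -L vmnic3 vSwitch2
vicfg-vswitch --server esxi01 -A vMotion vSwitch2

# Bind the iSCSI vmkernel ports to the software iSCSI adapter so
# both 10Gb paths get used (adapter name vmhba33 is an assumption)
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33
```

The 10Gb VM networking ports would go on the vDS through vCenter rather than the CLI, which is part of why I'm leaning toward keeping management/vMotion on standard vSwitches.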
With ESXi 4.1 and its new features, I'm wondering everyone's thoughts on perhaps a slightly different design. Thanks