Hello All,
I am in the process of building a new ESX 4.1 U1 cluster with 3 servers; each server has 2 x 1 Gbps NICs and 2 x 10 Gbps NICs.
So far I have been using standard switches on each host, with the 1 Gbps NICs used for the service console in an Active/Standby failover team and the 10 Gbps NICs teamed using Route based on IP hash.
I am using 2 x Cisco Nexus 5010 switches and the networking guy has configured these as a vPC pair, i.e. for all intents and purposes they behave as a single switch. Each host has one 10 Gb NIC connected to switch 1 and the other connected to switch 2; these ports are then configured in a Virtual Port Channel, and I have configured the standard switch on the host to use IP hash load balancing, giving me up to 20 Gbps of aggregate throughput (any single flow is still limited to 10 Gbps).
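For what it's worth, here is a toy Python sketch of how I understand the IP-hash selection to work: the source and destination IPv4 addresses are XORed and the result is taken modulo the number of active uplinks. This is only a simplified model of "Route based on IP hash" (not the exact vmkernel computation), and the addresses below are made up.

```python
import ipaddress

def ip_hash_uplink(src_ip: str, dst_ip: str, num_uplinks: int = 2) -> int:
    """Illustrative only: pick an uplink index by XOR-ing the source and
    destination IPv4 addresses and taking the result modulo the number of
    active uplinks. A simplified model of IP-hash teaming, not the exact
    ESX implementation."""
    src = int(ipaddress.IPv4Address(src_ip))
    dst = int(ipaddress.IPv4Address(dst_ip))
    return (src ^ dst) % num_uplinks

# Different source/destination pairs can hash to different uplinks, which is
# why the aggregate can exceed a single 10 Gbps link even though any one
# conversation still rides one physical NIC.
print(ip_hash_uplink("10.0.0.5", "10.0.0.20"))
print(ip_hash_uplink("10.0.0.5", "10.0.0.21"))
```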
I now want to move from the standard switches to a vNetwork Distributed Switch (dvSwitch), and I am a little confused about what I need to get my network guy to do on the Nexus switches.
I think he needs to take all 6 ports that the 10 Gb NICs are connected to and put them in the same vPC? I would then create 2 dvUplinks (dvUplink1 and dvUplink2) and assign one 10 Gb NIC from each server to dvUplink1 and the other to dvUplink2, e.g.
Server 1 10 Gb NIC 1 = dvUplink1
Server 1 10 Gb NIC 2 = dvUplink2
Server 2 10 Gb NIC 1 = dvUplink1
Server 2 10 Gb NIC 2 = dvUplink2
Server 3 10 Gb NIC 1 = dvUplink1
Server 3 10 Gb NIC 2 = dvUplink2
Then I would configure IP hash load balancing on the dvPortGroup using dvUplink1 and dvUplink2 (see the sketch below).
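In case it helps show what I am aiming for, here is a rough pyVmomi (Python vSphere SDK) sketch of that layout: a dvSwitch with the two uplinks and a dvPortGroup whose teaming policy is "loadbalance_ip" (Route based on IP hash). The vCenter hostname, credentials and object names are just placeholders; in practice I would do this through the vSphere Client rather than the API.

```python
from pyVim.connect import SmartConnect
from pyVmomi import vim

# Placeholder connection details
si = SmartConnect(host="vcenter.example.local", user="administrator", pwd="password")
content = si.RetrieveContent()
datacenter = content.rootFolder.childEntity[0]      # first datacenter
network_folder = datacenter.networkFolder

# dvSwitch with the two uplinks described above
dvs_cfg = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
dvs_cfg.name = "dvSwitch0"
dvs_cfg.uplinkPortPolicy = vim.DistributedVirtualSwitch.NameArrayUplinkPortPolicy(
    uplinkPortName=["dvUplink1", "dvUplink2"])
create_spec = vim.DistributedVirtualSwitch.CreateSpec(configSpec=dvs_cfg)
task = network_folder.CreateDVS_Task(create_spec)
# ... wait for the task to finish, then locate the new dvSwitch object ...

# dvPortGroup using IP-hash teaming across both uplinks
pg_cfg = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
pg_cfg.name = "dvPortGroup-VM"
pg_cfg.type = "earlyBinding"
pg_cfg.numPorts = 128
port_cfg = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy()
teaming.policy = vim.StringPolicy(value="loadbalance_ip")  # Route based on IP hash
port_cfg.uplinkTeamingPolicy = teaming
pg_cfg.defaultPortConfig = port_cfg
# dvs.AddDVPortgroup_Task([pg_cfg])   # run against the created dvSwitch
```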
Is this correct?
Many thanks in advance for any help.