Hello there,
I started deploying the Nexus 1000v to a 6-host cluster, all running vSphere 4.1 (vCenter and ESXi). The basic configuration, licensing, etc. is already completed, and so far no problems.
My questions are about the actual creation of system uplinks, port-profiles, etc. Basically, I want to make sure I'm not making any mistakes in how I plan to set this up.
My current setup per host is like this with standard vSwitches:
vSwitch0: 2 pNICs active/active, with Management and vMotion vmkernel ports.
vSwitch1: 2 pNICs active/active, dedicated to a Storage vmkernel port.
vSwitch2: 2 pNICs active/active, for virtual machine traffic.
I was thinking of translating that to the Nexus 1000v like this:
system-uplink1 with 2 pNICs, where I'll put the Management and vMotion vmk ports.
system-uplink2 with 2 pNICs for the Storage vmk.
system-uplink3 with 2 pNICs for VM traffic.
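For the first uplink, this is roughly what I had in mind on the VSM. The VLAN IDs (10 for Management, 20 for vMotion) and the profile name are just placeholders for my environment:

```
! Sketch of the first uplink port-profile (VLAN 10 = Management,
! VLAN 20 = vMotion; adjust IDs/names to taste)
port-profile type ethernet system-uplink1
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 10,20
  system vlan 10
  no shutdown
  state enabled
```

The other two uplinks would look the same, just with the Storage and VM-traffic VLANs allowed on the trunk instead.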
These three system uplinks are global, right? Or do I need to set up three unique system uplinks per host? I figured that using 3 global uplinks would make things a lot easier, since any change I make to an uplink gets pushed to all 6 hosts.
Also, I read somewhere that if I use 2 pNICs per system uplink, I need to set up a port-channel on our physical switches. Is that correct?
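From what I've read (please correct me if I'm wrong), adding mac-pinning to the channel-group command avoids having to configure a port-channel upstream, since the N1Kv pins each vEth to one pNIC instead of forming a real channel. Something like this, with the VLAN ID again being a placeholder:

```
! Sketch: uplink with mac-pinning so no upstream port-channel is
! needed (VLAN 30 = Storage is an assumption for my setup)
port-profile type ethernet system-uplink2
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 30
  channel-group auto mode on mac-pinning
  no shutdown
  state enabled
```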
Right now the VSM has 3 different VLANs for mgmt, control, and packet; I'd like to migrate those 3 port groups from the standard switch to the N1Kv itself.
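For those VSM-facing port groups, my understanding is that they need to be vethernet profiles with the VLAN marked as a system VLAN so they come up before the VSM/VEM communication is established. A sketch of the control one (VLAN 40 is just an assumed ID; packet and mgmt would be analogous):

```
! Sketch of a veth port-profile for the VSM control traffic
! (VLAN 40 is an assumption; repeat for packet and mgmt VLANs)
port-profile type vethernet n1kv-control
  vmware port-group
  switchport mode access
  switchport access vlan 40
  system vlan 40
  no shutdown
  state enabled
```

Does that look right, or am I missing something about the system vlan requirement?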
Also, when I migrated the Management port from the SVS to the N1Kv, the host complained that it has no management network redundancy, even though uplink1, which the mgmt port-profile is attached to, has 2 pNICs assigned to it.
So what do you guys think? Also, any other recommended best practices are much appreciated.
Thanks in advance,