Right now my v5 hosts have 4 NICs in them, configured as follows: nic1 dedicated to management, nic2 dedicated to vMotion, and nic3 and nic4 dedicated to VM networks, so I have 3 vSwitches, one for each function.
I am looking at moving to the distributed vSwitch, but I am a little confused about how it should be set up. Do I create 1 dvSwitch with all 4 NIC ports and create separate port groups for each function, or do I create 3 separate dvSwitches?
My 2 Cents
Your vCenter is probably a virtual machine, which gives you more flexibility, but there are known issues if vCenter goes down (failure) while it is connected to a vSphere Distributed Switch. There are workarounds, but since you already have dedicated NICs for different roles,
I would recommend:
Create one standard vSwitch (vSS) on each host with 2 pNICs. (For redundancy, these two NICs should not be on the same physical network card.) This also minimizes issues in case you need to restore your vCenter.
Then, as you said, set pNIC1 as active for Management and pNIC2 as standby.
Likewise, set pNIC2 as active for vMotion and pNIC1 as standby.
After that, create one Distributed Switch (vDS) and add each host's remaining 2 pNICs to this vDS.
Create your port groups on this single vDS as needed.
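For reference, the vSS side of this layout can be scripted from the ESXi shell. This is a minimal sketch, assuming the two uplinks are vmnic0/vmnic1, the default vSwitch0, and port groups named "Management Network" and "vMotion" (adjust names to your hosts); the vDS itself still has to be created in vCenter.

```shell
# Attach both pNICs to the standard vSwitch (vSwitch0 assumed)
esxcli network vswitch standard uplink add --uplink-name=vmnic0 --vswitch-name=vSwitch0
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch0

# Make both uplinks active at the switch level
esxcli network vswitch standard policy failover set \
    --vswitch-name=vSwitch0 --active-uplinks=vmnic0,vmnic1

# Per-port-group override: Management active on vmnic0, standby on vmnic1
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name="Management Network" \
    --active-uplinks=vmnic0 --standby-uplinks=vmnic1

# vMotion the other way around: active on vmnic1, standby on vmnic0
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name="vMotion" \
    --active-uplinks=vmnic1 --standby-uplinks=vmnic0
```

You can check the result afterwards with `esxcli network vswitch standard portgroup policy failover get --portgroup-name="Management Network"`.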
Please award points for helpful/correct responses
Also let me know if you have any further questions, because this is quite a big discussion and people have different opinions :smileycool:
You can create 1 distributed vSwitch with all 4 NIC ports and create separate port groups. This should be good.
Actually, that is exactly what I ended up doing. I followed the design here, just minus the storage stuff since I use Fibre Channel. I then created a virtual machine network on the standard vSwitch with pNIC0 as active and pNIC2 as standby; on this virtual machine network I am running my vCenter and vMA appliances.
http://vrif.blogspot.com/2011/10/vmware-vsphere-5-host-network-design-6.html
The only issue I have had since doing all this is that I cannot access the console of any of my virtual machines on either switch. Not sure if it is related, but it did work before I moved everything around.
Good choice. I would only put everything on a vDS if my hosts were limited to 2 pNICs, and I would create an ephemeral port group on the same VLAN as my vCenter, just as a back door in case anything happens to my vCenter :smileycool:
The VM consoles are accessed through the Management NICs
As stated, this worked before on the same NIC, so I don't believe it is a firewall issue.
Try restarting your management agents and the management network on your hosts from the ESXi console.
Connect straight to the ESXi host with the VI Client and see if you can access the consoles.
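If the agents turn out to be the culprit, they can also be restarted from an SSH or console session on the host. These are the standard ESXi init scripts (restarting hostd briefly disconnects clients, so do this from the console or during a quiet window):

```shell
# Restart the host agent (serves VI Client connections and VM consoles)
/etc/init.d/hostd restart

# Restart the vCenter agent on the host
/etc/init.d/vpxa restart

# Or restart all management agents at once:
# services.sh restart

# Quick check that the VM console port (902) shows up on the host
esxcli network ip connection list | grep 902
```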
For further information on troubleshooting this, you can read this KB.
Please remember to award points for helpful/correct answers
I got the consoles fixed. I actually just tore down the environment and rebuilt it; I figured something got screwed up with how many different times and ways I had the networking configured. After I rebuilt it with the correct networking, it worked perfectly.