Hi all,
Below is a diagram of my vSphere 5 design so far. The two connections to the NetApp controllers are EtherChanneled. How can I improve this design? The two ESXi 5 hosts have 12 NICs each. I have also included a screenshot of the switch. As you can see, one of the physical adapters shows as Standby and one shows as Full. How can I change both of these to Full? How can I improve the vSwitch config? I heard somewhere that the Management network should be on a separate vSwitch from the VM network; is this true? I have included my Cisco EtherChannel config as well; is there any room for improvement here?
interface Port-channel1
description NetApp EtherChannel 1
switchport access vlan 50
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 1,50
switchport mode trunk
flowcontrol receive on
!
interface GigabitEthernet2/0/3
description NetApp Array 2
switchport access vlan 50
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 1,50
switchport mode trunk
speed 1000
duplex full
flowcontrol receive on
channel-protocol lacp
channel-group 1 mode active
!
interface GigabitEthernet2/0/4
description NetApp Array 1
switchport access vlan 50
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 1,50
switchport mode trunk
speed 1000
duplex full
flowcontrol receive on
channel-protocol lacp
channel-group 1 mode active
!
interface GigabitEthernet2/0/5
description NetApp Array 1
switchport access vlan 50
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 1,50
switchport mode trunk
speed 1000
duplex full
flowcontrol receive on
channel-protocol lacp
channel-group 1 mode active
!
interface GigabitEthernet2/0/6
description NetApp Array 2
switchport access vlan 50
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 1,50
switchport mode trunk
speed 1000
duplex full
flowcontrol receive on
channel-protocol lacp
channel-group 1 mode active
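Since `channel-group 1 mode active` runs LACP, the NetApp end of this EtherChannel needs a matching dynamic multimode (LACP) interface group. A minimal sketch of that side, assuming Data ONTAP 7-mode with ports e0a and e0b (the interface group name, ports, and IP below are placeholders, not values from this environment):

```
ifgrp create lacp vif1 -b ip e0a e0b
ifconfig vif1 192.168.50.10 netmask 255.255.255.0
```

The `-b ip` option selects IP-based load balancing; older ONTAP releases use `vif create lacp` instead of `ifgrp create lacp`.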
Thanks
The screenshot of the virtual networking setup does not seem complete; do you have any other vSwitches set up?
Could you also print a screenshot from the properties on the NIC Teaming tab of the vSwitch0?
Why have you (or someone) set one of the vmnics to standby? That is a somewhat strange setup on the vSwitch. Set it to active and, regarding your first question, you will see both interfaces as "Full".
I just made that change. That was how it installed out of the box, one active and one standby.
valicon wrote:
That was how it installed out of the box, one active and one standby.
That is very strange. The out-of-the-box configuration just adds vmnic0 to vSwitch0. The other vmnics are (normally) added manually after installation.
I thought it did not look right. Based on the diagram, what do you recommend as far as NIC assignment and networking go? I have 12 NICs in each ESXi host and I plan on using vMotion.
From the diagram you have shown, it appears that you want to utilise two NICs for the storage network, is this correct? I am going to assume you're using iSCSI and want to load balance?
If it is, then:
Create a vSwitch on each host with the two NICs you want, with two VMkernel port groups set up. Assign a separate IP to each of the port groups. Then edit each port group on the vSwitch: on the NIC Teaming tab, override the failover policy so that port group 1 has NIC1 as active and NIC2 as unused, and vice versa.
Example config:
Host 1
  DatastoreSwitch
    PortGroup1: 10.0.1.1 (NIC1 active, NIC2 unused)
    PortGroup2: 10.0.1.2 (NIC2 active, NIC1 unused)
Host 2
  DatastoreSwitch
    PortGroup1: 10.0.1.3 (NIC1 active, NIC2 unused)
    PortGroup2: 10.0.1.4 (NIC2 active, NIC1 unused)
Then, on the Storage Adapters page under Configuration, bind the VMkernel NICs to the iSCSI adapter (if you're using iSCSI) on the bindings tab (I think; sorry, I'm running ESX 4.1 here at work and it's hard to remember).
Then rescan the HBA if you already have any attached LUNs, and you will be able to change the pathing method to Round Robin.
Of course, all of the above depends on what your requirements are.
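The binding and pathing steps above can be sketched with the ESXi 5 esxcli commands; the adapter name vmhba33, the vmk numbers, and the device identifier are placeholders you would replace with your own values:

```
# Bind both VMkernel ports to the software iSCSI adapter
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2

# Rescan, then set Round Robin on a LUN (substitute the real naa identifier)
esxcli storage core adapter rescan --adapter vmhba33
esxcli storage nmp device set --device <naa.id> --psp VMW_PSP_RR
```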
I am using only NFS. I am using vCenter and it is a VM. What would be the best use of my NICs and a good design?
How many VMs do you plan to use?
Will you have vMotion?
With 10 physical NIC ports I think you have plenty of options here.
I plan on 6-8 VMs and I would like to use vMotion.
With 10 NICs you could have two vmnics for Management redundancy, two for NFS, and four for VMs. You could also afford two vmnics dedicated to vMotion, configured for Multi-NIC vMotion, which would give you great vMotion performance: http://rickardnobel.se/archives/947
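A minimal sketch of the Multi-NIC vMotion piece in esxcli, assuming a vSwitch named vSwitch1 with uplinks vmnic4/vmnic5 and two new VMkernel ports (all names, vmk numbers, and IPs are placeholders):

```
# Two vMotion port groups with mirrored active/standby uplinks
esxcli network vswitch standard portgroup add --portgroup-name vMotion-1 --vswitch-name vSwitch1
esxcli network vswitch standard portgroup add --portgroup-name vMotion-2 --vswitch-name vSwitch1
esxcli network vswitch standard portgroup policy failover set --portgroup-name vMotion-1 --active-uplinks vmnic4 --standby-uplinks vmnic5
esxcli network vswitch standard portgroup policy failover set --portgroup-name vMotion-2 --active-uplinks vmnic5 --standby-uplinks vmnic4

# One VMkernel port per port group (then enable vMotion on both in the vSphere Client)
esxcli network ip interface add --interface-name vmk2 --portgroup-name vMotion-1
esxcli network ip interface ipv4 set --interface-name vmk2 --ipv4 10.0.2.11 --netmask 255.255.255.0 --type static
esxcli network ip interface add --interface-name vmk3 --portgroup-name vMotion-2
esxcli network ip interface ipv4 set --interface-name vmk3 --ipv4 10.0.2.12 --netmask 255.255.255.0 --type static
```

With this layout each vMotion vmknic has its own dedicated uplink, with the other uplink as failover, so both NICs carry vMotion traffic simultaneously.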
So how would this look based on my diagram?
valicon wrote:
So how would this look based on my diagram?
Do you mean for the storage network (NFS)?
Yes I have a NetApp
valicon wrote:
Yes I have a NetApp
If you want to optimize the network traffic, the best way in my opinion is to set up two NFS datastores on two different IP subnets, together with two VMK adapters, one on each subnet. Unfortunately, using EtherChannels on the ESXi host will help you very little.
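A sketch of that two-subnet NFS layout on one host, assuming subnets 192.168.50.0/24 and 192.168.51.0/24, port groups NFS-1/NFS-2 already created on the storage vSwitch, and NetApp exports /vol/nfs_ds1 and /vol/nfs_ds2 (all names and addresses are placeholders):

```
# One VMkernel port per storage subnet
esxcli network ip interface add --interface-name vmk4 --portgroup-name NFS-1
esxcli network ip interface ipv4 set --interface-name vmk4 --ipv4 192.168.50.21 --netmask 255.255.255.0 --type static
esxcli network ip interface add --interface-name vmk5 --portgroup-name NFS-2
esxcli network ip interface ipv4 set --interface-name vmk5 --ipv4 192.168.51.21 --netmask 255.255.255.0 --type static

# Mount each datastore through the NetApp address on the matching subnet
esxcli storage nfs add --host 192.168.50.10 --share /vol/nfs_ds1 --volume-name NFS_DS1
esxcli storage nfs add --host 192.168.51.10 --share /vol/nfs_ds2 --volume-name NFS_DS2
```

Because each datastore is reached via a different subnet, each mount uses its own vmknic, so both NICs carry NFS traffic without any EtherChannel on the host side.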
If you are looking for design ideas regarding the vSphere 5 host network configuration then please check out my blog;
http://vrif.blogspot.com/2011/10/vmware-vsphere-5-host-network-designs.html
Regards,
Paul
Thanks