VMware Cloud Community
Thomas_
Contributor

Network question for new vSphere 5.1 datacenter

Hi guys,

I hope you can give me some advice on an active/active network concept.

Each physical datacenter will get 2 ESXi hosts, 1 NetApp 3220 filer, and 2 stacks of 2 Cisco 3750 switches each

(one stack for the storage network, one stack for the clients).

The hosts are diskless; they have 2 SD card slots for the ESXi hypervisor. The scratch disk is on the NetApp.

Here is what I have in mind:

- 4 physical nics for the connection of the client network (VLAN 50)

- 4 physical nics for vmotion, NFS and ISCSI (VLAN 100)

- 2 physical nics for the management network and service console (VLAN 101)

What are your thoughts?

Thanks!

7 Replies
MKguy
Virtuoso

First of all, are we talking standard vSwitches here or do you have Enterprise+ licenses allowing for distributed vSwitches?

- 4 physical nics for the connection of the client network (VLAN 50)

I don't know about the workloads, but are you sure you need the bandwidth of that many links for VM traffic?

- 4 physical nics for vmotion, NFS and ISCSI (VLAN 100)
- 2 physical nics for the management network and service console (VLAN 101)

Do not put vMotion and anything else, especially not IP storage, on the same VLAN or on the same physical NICs.

I'd just put vMotion on the same 2 NIC team as management, on a separate isolated VLAN.
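
For example, a rough sketch from the ESXi shell, assuming vSwitch0 already carries both management uplinks (port group name, VLAN ID and IP are only placeholders):

# vMotion port group on the management vSwitch, on its own isolated VLAN
esxcli network vswitch standard portgroup add --portgroup-name=vMotion --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup set --portgroup-name=vMotion --vlan-id=102
# VMkernel port with its own IP in that port group
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=vMotion
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.102.11 --netmask=255.255.255.0 --type=static
# enable vMotion on the new VMkernel port (can also be done in the vSphere Client)
vim-cmd hostsvc/vmotion/vnic_set vmk1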

Will you really be using both NFS and iSCSI, or are you unsure there yet? Again, I don't know about the workloads, but you can consider going with 2 NICs only for storage, especially in the case of NFS, as it's not capable of proper load balancing/multipathing.

-- http://alpacapowered.wordpress.com
memaad
Virtuoso

Hi,

In addition to what MKguy said, one more important question is which NIC teaming policy you will use. Here is a comparison (a short esxcli sketch for setting either policy follows below).

NIC team with “Route based on IP hash”:

  • Traffic to or from a VM could be placed onto any uplink on the vSwitch, depending upon the source and destination IP addresses. Each pair of source and destination IP addresses could be placed on different uplinks, but any given pair of IP addresses can use only a single uplink. In other words, multiple connections to or from the VM will benefit, but each individual connection can only utilize a single link.
  • Each VMkernel NIC will utilize multiple uplinks only if multiple destination IP addresses are involved. Conceivably, you could also use multiple VMkernel NICs with multiple source IP addresses, but I haven’t tested that configuration.
  • Traffic that is primarily point-to-point won’t see any major benefit from this configuration. A single VM being accessed by another single client won’t see a traffic boost other than that possibly gained by the placement of other traffic onto other uplinks.

Any other NIC teaming policy will use only one NIC at a time:

  • Each VM will only use a single network uplink, regardless of how many different connections that particular VM may be handling. All traffic to and from that VM will be placed on that single uplink, regardless of how many uplinks are configured on the vSwitch.
  • Each VMkernel NIC will only use a single network uplink. This is true both for VMotion as well as IP-based storage traffic, and is true regardless of how many uplinks are configured on the vSwitch.
  • Even when the traffic patterns are such that using multiple uplinks would be helpful—for example, when a VM is copying data to or from two different network locations at the same time, or when a VMkernel NIC is accessing two different iSCSI targets—only a single uplink will be utilized.

Note: the above content is from this link:

http://blog.scottlowe.org/2008/07/16/understanding-nic-utilization-in-vmware-esx/
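
If it helps, here is a minimal sketch of how either policy is selected on a standard vSwitch from the ESXi shell (vSwitch0 is only an example name; IP hash additionally requires a matching static EtherChannel on the physical switch):

# show the current teaming/failover policy of the vSwitch
esxcli network vswitch standard policy failover get --vswitch-name=vSwitch0
# route based on IP hash (only together with a static EtherChannel on the switch side)
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --load-balancing=iphash
# back to the default, route based on the originating virtual port ID
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --load-balancing=portid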

Regards

Mohammed

Mohammed | Mark it as helpful or correct if my suggestion is useful.
Gkeerthy
Expert

Depending on the license, we can take different approaches; basically we can do the following.

With standard vSwitches - no Enterprise Plus license - no EtherChannel:

vSwitch0 - 2 NICs - default teaming policy - put management and vMotion traffic here - use separate VLANs - use multi-NIC vMotion - you can also put some VM traffic here, maybe some management VMs

More info on multi-NIC vMotion below:

http://pibytes.wordpress.com/2013/01/12/multi-nic-vmotion-in-vsphere-5-x/

http://pibytes.wordpress.com/2013/01/12/multi-nic-vmotion-speed-and-performance-in-vsphere-5-x/
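
As a rough esxcli sketch of the multi-NIC vMotion part (uplink names, vmk numbers and IPs are only placeholders; the links above show the same idea in the vSphere Client):

# two vMotion port groups on the management vSwitch
esxcli network vswitch standard portgroup add --portgroup-name=vMotion-1 --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup add --portgroup-name=vMotion-2 --vswitch-name=vSwitch0
# reverse the active/standby uplink order per port group
esxcli network vswitch standard portgroup policy failover set --portgroup-name=vMotion-1 --active-uplinks=vmnic0 --standby-uplinks=vmnic1
esxcli network vswitch standard portgroup policy failover set --portgroup-name=vMotion-2 --active-uplinks=vmnic1 --standby-uplinks=vmnic0
# one VMkernel port with its own IP per port group, then enable vMotion on both
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=vMotion-1
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=vMotion-2
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.100.11 --netmask=255.255.255.0 --type=static
esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=192.168.100.12 --netmask=255.255.255.0 --type=static
vim-cmd hostsvc/vmotion/vnic_set vmk1
vim-cmd hostsvc/vmotion/vnic_set vmk2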

vSwitch1 - 2 NICs - default teaming policy - for VM traffic - create port groups etc.

vSwitch2 - 2 NICs - default teaming policy - for iSCSI traffic - make 2 port groups, and in each port group override the switch failover order: one NIC active for one iSCSI port group and the other NIC unused. ESXi iSCSI uses its own multipathing for failover, so there is no need to handle failover via NIC teaming.
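
A minimal sketch of that iSCSI layout from the ESXi shell, assuming vmnic4/vmnic5 as uplinks and vmhba33 as the software iSCSI adapter (all names and IPs are placeholders); listing only one active uplink per port group leaves the other uplink unused:

# iSCSI vSwitch with two uplinks
esxcli network vswitch standard add --vswitch-name=vSwitch2
esxcli network vswitch standard uplink add --uplink-name=vmnic4 --vswitch-name=vSwitch2
esxcli network vswitch standard uplink add --uplink-name=vmnic5 --vswitch-name=vSwitch2
# one port group per path, each with a single active uplink (the other uplink becomes unused)
esxcli network vswitch standard portgroup add --portgroup-name=iSCSI-A --vswitch-name=vSwitch2
esxcli network vswitch standard portgroup add --portgroup-name=iSCSI-B --vswitch-name=vSwitch2
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-A --active-uplinks=vmnic4
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-B --active-uplinks=vmnic5
# one VMkernel port per port group, then bind both to the software iSCSI adapter
esxcli network ip interface add --interface-name=vmk3 --portgroup-name=iSCSI-A
esxcli network ip interface add --interface-name=vmk4 --portgroup-name=iSCSI-B
esxcli network ip interface ipv4 set --interface-name=vmk3 --ipv4=192.168.101.11 --netmask=255.255.255.0 --type=static
esxcli network ip interface ipv4 set --interface-name=vmk4 --ipv4=192.168.101.12 --netmask=255.255.255.0 --type=static
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk3
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk4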

vSwitch3 - 2 NICs - default teaming policy - for NFS traffic - make 2 port groups, and in each port group override the switch failover order: one NIC active for one NFS port group and the other NIC standby. NFS needs this because failover is handled by the vSwitch.
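
For the NFS port groups, the only difference in such a sketch is that the second uplink stays standby instead of unused, for example (again placeholder names):

# one NIC active and the other standby per NFS port group, so the vSwitch handles failover
esxcli network vswitch standard portgroup policy failover set --portgroup-name=NFS-1 --active-uplinks=vmnic6 --standby-uplinks=vmnic7
esxcli network vswitch standard portgroup policy failover set --portgroup-name=NFS-2 --active-uplinks=vmnic7 --standby-uplinks=vmnic6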

vSwitch4 - 2 NICs - you can use these for FT

Now another case - with the NICs in an EtherChannel:

The Cisco 3750 stack supports multi-chassis EtherChannel (cross-stack EtherChannel). For that you need to create the EtherChannel on the switch side and use the IP hash teaming policy on the vSwitch. Do not use beacon probing for failover detection, as it will cause flapping issues; use the default link-status detection.

In this case you can follow the same layout as above. Depending on your environment, if NFS traffic is heavier you can use 4 NICs for NFS, or 4 NICs for iSCSI if that is heavier; it is up to you.

You can refer to the NetApp best practices guide; it has all the details. Yes, it is long.

If you have more NICs, you can give 4 NICs each to NFS and iSCSI, that is 2 NICs to each switch, and make an EtherChannel with 4 NICs across the 2 physical switches.
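
On the ESXi side, matching a static cross-stack EtherChannel mostly comes down to the IP hash policy on the vSwitch, for example (vSwitch1 standing in for the VM traffic switch):

# route based on IP hash for the whole vSwitch (port groups inherit it unless overridden)
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch1 --load-balancing=iphash
# keep the default link-status failure detection instead of beacon probing
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch1 --failure-detection=link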

If you have an Enterprise Plus license, you can use NetIOC: club all the NICs together, spread them across the 2 switches, make one big EtherChannel and use one distributed vSwitch. Alternatively, skip the EtherChannel and use LBT (load-based teaming), which is good, and then use NetIOC to set shares and bandwidth allocation.

similar thread in the community  - http://communities.vmware.com/message/2081040#2081040

So there are a lot of ways to do this.

Please don't forget to award point for 'Correct' or 'Helpful', if you found the comment useful. (vExpert, VCP-Cloud. VCAP5-DCD, VCP4, VCP5, MCSE, MCITP)
Gkeerthy
Expert

One more point: if we use IP hash for storage, we will not necessarily get aggregated throughput, because the hash is calculated from the source and destination IP addresses. You can see more details in these threads:

http://communities.vmware.com/message/2173830#2173830/

http://frankdenneman.nl/networking/ip-hash-versus-lbt/

http://frankdenneman.nl/2009/11/13/nfs-and-ip-hash-loadbalancing/

Please don't forget to award point for 'Correct' or 'Helpful', if you found the comment useful. (vExpert, VCP-Cloud. VCAP5-DCD, VCP4, VCP5, MCSE, MCITP)
Thomas_
Contributor

Hi,

We have the Enterprise license (not Enterprise Plus), so no distributed switch.

I would use teaming with IP hash and EtherChannels on the Cisco side for the client network.
For vMotion, iSCSI and NFS, the active/unused configuration allows me to leave the switches untouched, correct?

Well, I'm not sure yet whether to use NFS or iSCSI. If we use NFS, we may also want "virtual iSCSI" so that a VM can connect to a LUN via a guest iSCSI initiator. What would you recommend? (Simple and redundant; speed is not the top priority.)

After reading the postings, I guess this would be ok:

- 4 physical nics for the connection of the client network (VLAN 50) -> vswitch0, 4 nics in active/active mode

- 4 physical nics for vmotion and management  (VLAN 100) -> vswitch1

each NIC configured with its own VMkernel port and a separate IP, as shown in http://pibytes.wordpress.com/2013/01/12/multi-nic-vmotion-in-vsphere-5-x/

- 2 physical nics for the ISCSI (VLAN 101) -> vswitch2

each NIC configured with its own VMkernel port and a separate IP,

as shown in http://sostech.wordpress.com/2011/08/22/vmware-iscsi-multipath-vsphere5/

- 2 physical nics for the NFS (VLAN 102) -> vswitch3

each NIC configured with its own VMkernel port and a separate IP,

as shown in http://sostech.wordpress.com/2011/08/22/vmware-iscsi-multipath-vsphere5/

What about the service console? Can it be on the same vswitch as vmotion+management?

Thanks!

Gkeerthy
Expert

"What about the service console? Can it be on the same vswitch as vmotion+management?"

With vSphere 5 / ESXi there is no service console anymore, and you can put management + vMotion together, on separate VLANs. If you dedicate 4 NICs to these it will be a waste, so use 2 NICs and use the remaining 2 NICs for something else, like FT.

For NFS the NICs will be in active/standby, and the same applies to management/vMotion.

For iSCSI the NICs will be in active/unused.

To your query: "we may also want 'virtual iSCSI' so that a VM can connect to a LUN via a guest iSCSI initiator. What would you recommend?"

iSCSI inside the VM is not a good idea. Even if you want a disk larger than 2 TB for a VM, you can create multiple 2 TB VMDKs and build a dynamic volume instead. For Linux you don't need in-guest iSCSI anyway; Linux LVM simply expands as you add more disks.

ESXi itself has a very good and strong multipathing framework, but inside the guest there is always more overhead. So I recommend just using the ESXi iSCSI initiator.

"(Simple and redundant, speed is not top priority)" - in a long RUN you will realize the NFS/ISCSI needs more speed. So use jumbo FRAMES for ISCSI and NFS, and add more NICS for the NFS and ISCSI.

Apart from that, your setup is fine; you can proceed.

Please don't forget to award point for 'Correct' or 'Helpful', if you found the comment useful. (vExpert, VCP-Cloud. VCAP5-DCD, VCP4, VCP5, MCSE, MCITP)
Thomas_
Contributor

Thanks @all for your help.

I think I have enough information now. I will do the setup with VMware consultants, but I wanted some background information first.

Thanks again!
