VMware Cloud Community
alex_wu
Enthusiast

Best practice for vSphere 5 Networking

Hi all,

Consider the following environment:

     1) 4 physical servers, each with 16 Gigabit NICs, which will run vSphere 5 Standard

     2) 2 switches with stacking capability, for SAN storage

     3) 2 EqualLogic PS4000 SANs (dual controller)

     4) 2 switches for virtual machine traffic

Regarding the networking, I plan to create the following vSwitches on each physical server (a rough esxcli sketch follows the list):

     1. vSwitch0 - used for iSCSI storage

          6 NICs teamed, with the IP-hash teaming policy, multipathing to the iSCSI storage; the storage load-balancing policy is Round Robin (VMware)

          (VMware suggests using 2 NICs per IP storage target, but I am not sure)

     2. vSwitch1 - used for virtual machines

          6 NICs teamed for virtual machine traffic, with the IP-hash policy

     3. vSwitch2 - for management

          2 NICs teamed

     4. vSwitch3 - vMotion

          2 NICs teamed
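
     For reference, here is roughly how I picture creating this layout from the ESXi 5 shell (only a sketch; the vmnic numbering is just an assumption on my part):

          # vSwitch0 - iSCSI, 6 uplinks (assumed to be vmnic0 through vmnic5)
          esxcli network vswitch standard add --vswitch-name=vSwitch0
          esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=vmnic0
          # ...repeat the uplink add for vmnic1 through vmnic5

          # vSwitch2 - management, 2 uplinks (assumed to be vmnic12 and vmnic13)
          esxcli network vswitch standard add --vswitch-name=vSwitch2
          esxcli network vswitch standard uplink add --vswitch-name=vSwitch2 --uplink-name=vmnic12
          esxcli network vswitch standard uplink add --vswitch-name=vSwitch2 --uplink-name=vmnic13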

Would you kindly give me some suggestions?

russ79
Enthusiast

That is an insane number of NICs per server... but with that many, your setup sounds fine.

One question, though: on the network switches, does 'stacked' mean they are connected and configured in such a way that the two physical switches act as one logical switch? I only ask because if they are instead simply daisy-chained to one another, you'll run into errors on the switch with IP hash when a link fails on one switch and a standby NIC on the other switch comes online and takes over (if all NICs are active, then no worries). My only suggestion would be to use dvSwitches and 'Route based on physical NIC load': no special switch setup is needed to get that working, and it is more accurate for load balancing than IP hash.
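
To illustrate the teaming-policy piece (a sketch only; the vSwitch name is assumed), on a standard vSwitch you can check or change the load-balancing policy from the ESXi shell:

     esxcli network vswitch standard policy failover get --vswitch-name=vSwitch1
     esxcli network vswitch standard policy failover set --vswitch-name=vSwitch1 --load-balancing=iphash

"Route based on physical NIC load" itself is only available on a distributed switch, so that one is configured through vCenter on the dvPortgroup's teaming policy rather than with esxcli.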

Virtualinfra
Commander

Regarding the networking, I plan to create the following vSwitches on each physical server:

     1. vSwitch0 - used for iSCSI storage

          6 NICs teamed, with the IP-hash teaming policy, multipathing to the iSCSI storage; the storage load-balancing policy is Round Robin (VMware)

          (VMware suggests using 2 NICs per IP storage target, but I am not sure)

     2. vSwitch1 - used for virtual machines

          6 NICs teamed for virtual machine traffic, with the IP-hash policy

     3. vSwitch2 - for management

          2 NICs teamed

     4. vSwitch3 - vMotion

           2 NICs teamed

Would you kindly give me some suggestions?

1. It's good to have 6 NICs teamed. As for how you configure these 6 NICs, you can set them all up active/passive like this:

vSwitch0

1st IP - VMNIC0 and VMNIC1 active, VMNIC1 and VMNIC2 passive

2nd IP - VMNIC1 and VMNIC2 active, VMNIC3 and VMNIC4 Passive

3rd IP - VMNIC5 and VMNIC6 active, VMNIC0 and VMNIC1 passive

This way you can have all 6 NICs in use, sharing the load and also providing failover.
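
As an example of the mechanism being described (a sketch only; the port group and vmnic names are assumed, and the active/standby pairs would follow the mapping above), each iSCSI VMkernel port group gets its own failover-order override:

     esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-1 --active-uplinks=vmnic0,vmnic1 --standby-uplinks=vmnic2,vmnic3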

2. vSwitch1 - looks good. Again, you can split this up 3 and 3 and create 2 port groups, or 2 and 2 and create 4 port groups.

3. vSwitch2 - OK, fine.

4. vSwitch3 - OK, fine.

Thanks & Regards Dharshan S VCP 4.0,VTSP 5.0, VCP 5.0
scottyyyc
Enthusiast

Have a look at this thread (http://communities.vmware.com/message/1814909#1814909) and my comment there regarding EqualLogics. Dell also has some excellent reference documentation on the topic.

I can't necessarily speak to VMware recommendations (seeing as they have to remain multi-vendor neutral), but I can speak to Dell's, as I've set up EQLs and vSphere several times (I just set up a PS4000XV, in fact).

Dell recommends a 1:1 ratio of physical NICs on your hosts to the number of controller NICs on your SAN(s). So with PS4000s, that would mean 2 physical NICs per server dedicated to iSCSI. From everything I've read, going above that doesn't really end up mattering, as one server can only communicate with a given SAN over two NICs at any one point in time. From there, Dell recommends a 1:1 ratio of VMkernel ports to physical NICs for multipathing. Then you dedicate 1 NIC to each VMkernel port. From there, you'll have properly multipathed storage. VMware Round Robin is fine, although if your version of vSphere supports third-party plugins, use EqualLogic's.
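
As a rough sketch of what that looks like on ESXi 5 (the port group, vmk, vmnic and adapter names are assumptions, and the device ID is a placeholder):

     # one active uplink per iSCSI VMkernel port group
     esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-1 --active-uplinks=vmnic2
     esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-2 --active-uplinks=vmnic3
     # bind each iSCSI VMkernel port to the software iSCSI adapter
     esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
     esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
     # set Round Robin on the EqualLogic volume
     esxcli storage nmp device set --device=<naa.ID-of-the-EQL-volume> --psp=VMW_PSP_RR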

Also, management traffic is pretty minimal, so in most scenarios I don't see it dedicated to its own vSwitch. With 16 NICs, I would do the following:


Per Host:

- 6 NICs for vMotion (you only really need 2 for physical redundancy, but vSphere 5 actually takes advantage of multiple NICs for vMotion, whereas previously it only ever used one at a time). Therefore, the more NICs you use, the faster your vMotions will be. I've tested this, and it is indeed much faster as you throw more NICs at it. You have NICs to spare, so why not (see the sketch after this list).

- 2 NICs for iSCSI - Again, Dell's recommendation is 1:1. I don't think you'd be hurting anything with more, but I don't think you're gaining anything either. Perhaps Dell has some recommendations when using multiple EQL units.

- 8 (all the rest) for VM traffic.

- Remember to spread each vSwitch's uplinks across several physical NIC cards. You don't want one PCI NIC card to die and completely kill your environment (e.g. make sure your iSCSI uplinks are on multiple pNICs).
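
As a sketch of the multi-NIC vMotion point (the names and addresses here are assumptions; repeat per additional NIC), each extra vMotion VMkernel port gets its own port group pinned to a different uplink:

     esxcli network vswitch standard portgroup add --vswitch-name=vSwitch3 --portgroup-name=vMotion-2
     esxcli network vswitch standard portgroup policy failover set --portgroup-name=vMotion-2 --active-uplinks=vmnic9 --standby-uplinks=vmnic8
     esxcli network ip interface add --interface-name=vmk3 --portgroup-name=vMotion-2
     esxcli network ip interface ipv4 set --interface-name=vmk3 --ipv4=10.10.20.12 --netmask=255.255.255.0 --type=static
     vim-cmd hostsvc/vmotion/vnic_set vmk3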

Have a look at some of the Dell documentation, because it's not just a matter of assigning NICs to vSwitches and calling it a day. Also keep in mind that your vSwitches need to be configured exactly the same on all hosts (even named the same) in order for vMotion to work properly. dvSwitches help to eliminate that issue, and make config easier.

P.S. I have to disagree with some of Virtualinfra's comments regarding his teaming on the vSwitches. Each vSwitch should have 1 active NIC only (when used with iSCSI).

AndreTheGiant
Immortal

For Equallogic with vSphere 5 see this new document:

http://www.equallogic.com/WorkArea/DownloadAsset.aspx?id=10799

Andrew | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro
alex_wu
Enthusiast

I know this document, but I am not sure whether it is the best practice.

Virtualinfra
Commander

It is designed, implemented and tested, then documented as a best practice.

So you can go ahead with the document provided by Andre, or you can stick to the standard approach and create a network setup according to your needs.

Thanks & Regards Dharshan S VCP 4.0,VTSP 5.0, VCP 5.0
scottyyyc
Enthusiast

Virtualinfra, what is this 'standard' you speak of - and how does the Dell document differ from it? To me, the standard is whatever the storage manufacturer and VMware recommend, because they usually don't just slap together a configuration and call it a day. I usually stick to those recommendations, and only deviate if there is a good reason for it.

What specifically would you do differently from that document, and why?

durakovicduro83
Enthusiast

If you have 16 NICs, I think what you are describing is best practice!

I would do the same network configuration if I had 16 NICs.

Cheers,

Denis

To err is human - and to blame it on a computer is even more so
Virtualinfra
Commander

Alex, the standard defined by the storage vendor and VMware is what Dell used for their servers; they tested it on their hardware and released the document as a best practice.

So it is the best practice for Dell servers of the model mentioned in the document.

Hope this clarifies things.

Thanks & Regards Dharshan S VCP 4.0,VTSP 5.0, VCP 5.0
Josh26
Virtuoso

If I had 16 NICs, I would probably disable eight of them.

Your SAN has no chance of pushing more than 4 NICs' worth of traffic through itself; you're only moving into a less standard, less tested environment and increasing your likelihood of issues.
