pdrace
Hot Shot

vSphere 5 network configuration with 10 GbE

We'll be rolling out our first vSphere 5 servers with 10 GbE in production in the near future, and I have questions about how best to configure the networking.

The hosts will have four 10 GbE connections using two dual-port 10 GbE cards.

Our setup on 4.1 with 1 GbE is three vSwitches with two uplinks each: one for management and vMotion, one for VM network traffic, and one for storage. It has served us well. We have used standard virtual switches but will be moving to Cisco distributed switches at some point.

The VSM will be physical, a Cisco 1010 cluster.

Our vCenter Server is a VM and will stay that way for the foreseeable future.

Given these factors, should I consider putting the management network on 1 GbE connections on a standard vSwitch?

As for the 10 GbE ports, should I divvy them up into two vSwitches, putting VM networking and vMotion (and possibly management) on one switch and using the other for storage?

Or should I just put all the NIC ports into one vSwitch?

NetApp best practice is to use IP hash for load balancing. If we share switch ports with other types of traffic, will this still be the best configuration?

logiboy123
Expert

Please check out my blog for vSphere 5 host network configurations.

http://vrif.blogspot.com/2011/10/vmware-vsphere-5-host-network-designs.html

If you are not using Enterprise Plus, or you cannot use the physical switches to limit bandwidth on a per-VLAN basis, then you will probably hit issues when performing vMotions.

For a static configuration where you pin traffic to a particular uplink using only 2 x 10GbE ports and do not have Ent+, I would review the following article:

http://blogs.vmware.com/networking/2011/12/vds-best-practices-rack-server-deployment-with-two-10-gig...

Specifically I would look at Design Option 1, but note that even in this scenario they are still using LBT for the VM Networking traffic. The reason I have a 4 x 10GbE port design for vSS setups is that it is very hard to guarantee throughput for each traffic type without some sort of load balancing on the virtual or physical switches, so instead I pin all traffic types to a particular uplink on a single vSS. It seems like overkill, I know, but without smart bandwidth control for particular traffic you will most likely experience issues.
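
To make the pinning idea concrete, here is a rough pyVmomi sketch of setting an explicit active/standby order per port group on a single vSS. It is only an illustration, not something out of my design docs; the vCenter and host names, vSwitch0, the vmnic numbering and the port group names are all placeholders you would swap for your own:

```python
# Rough sketch only: pin each traffic type to one uplink on a single vSS
# using an explicit failover order per port group. Assumes the port groups
# already exist on vSwitch0; all names below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator',
                  pwd='password', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
host = next(h for h in view.view if h.name == 'esx01.example.com')
netsys = host.configManager.networkSystem

# Traffic type -> (active uplink, standby uplinks) across four 10GbE uplinks.
pinning = {
    'Management Network': (['vmnic0'], ['vmnic1', 'vmnic2', 'vmnic3']),
    'vMotion':            (['vmnic1'], ['vmnic0', 'vmnic2', 'vmnic3']),
    'VM Network':         (['vmnic2'], ['vmnic3', 'vmnic0', 'vmnic1']),
    'Storage':            (['vmnic3'], ['vmnic2', 'vmnic0', 'vmnic1']),
}

for pg_name, (active, standby) in pinning.items():
    teaming = vim.host.NetworkPolicy.NicTeamingPolicy(
        policy='failover_explicit',   # use the explicit failover order below
        nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(activeNic=active,
                                                       standbyNic=standby))
    spec = vim.host.PortGroup.Specification(
        name=pg_name, vlanId=0,       # VLAN 0 = none; set your real VLAN IDs
        vswitchName='vSwitch0',
        policy=vim.host.NetworkPolicy(nicTeaming=teaming))
    netsys.UpdatePortGroup(pgName=pg_name, portgrp=spec)

Disconnect(si)
```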

I would steer clear of using LACP unless using a Nexus 1000V.

So IMO the options are:

1) Upgrade to Ent+ and implement LBT, NIOC and SIOC.

2) Use 4 x 10GbE uplinks per host.

3) Use a mix of 1GbE uplinks (at least 2), pin Management and vMotion to them on vSwitch0, then use the 10GbE uplinks for VM Networking and storage on vSwitch1.

I like option 1 the most because it gives you all the features (a rough sketch of enabling LBT and NIOC follows below). Option 2 seems like overkill but will guarantee performance. Option 3 is annoying because you need to maintain legacy 1GbE infrastructure, but it will guarantee performance, so in that regard it is quite good.
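
As a rough illustration of what option 1 involves: NIOC is a single setting on the distributed switch, and LBT is just the "Route based on physical NIC load" teaming policy on a dvPortgroup. The pyVmomi sketch below is untested and uses placeholder names (vcenter.example.com, dvSwitch0, the 'VM Network' portgroup), so treat it as a pointer rather than a drop-in script:

```python
# Rough sketch only: option 1 on a vDS - switch on NIOC, then set the LBT
# ("route based on physical NIC load") teaming policy on a dvPortgroup.
# vCenter, dvSwitch and portgroup names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator',
                  pwd='password', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
dvs = next(d for d in view.view if d.name == 'dvSwitch0')

# NIOC is a single flag on the distributed switch.
dvs.EnableNetworkResourceManagement(enable=True)

# LBT is the 'loadbalance_loadbased' teaming policy on the dvPortgroup.
pg = next(p for p in dvs.portgroup if p.name == 'VM Network')
teaming = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy(
    policy=vim.StringPolicy(value='loadbalance_loadbased'))
pg_spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
    configVersion=pg.config.configVersion,
    defaultPortConfig=vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
        uplinkTeamingPolicy=teaming))
pg.ReconfigureDVPortgroup_Task(spec=pg_spec)

Disconnect(si)
```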

Regards,

Paul

pdrace
Hot Shot

Our licensing is Enterprise Plus, but to implement LBT I would need to migrate to a vDS; we are currently using vSS.

We have Cisco 1010 appliances, the equivalent of the 1000V, in house and will be implementing them in the next couple of months.

What I'm thinking of doing in the meantime is to set up two vSwitches, one for management, vMotion and VM networking and one for storage, with two 10 GbE uplinks assigned to each:

vSwitch 1

- vMotion port group: pnic1 active, pnic2 standby; NIC teaming: route based on originating virtual port ID
- Management port group: pnic1 active, pnic2 standby; NIC teaming: route based on originating virtual port ID
- VM Network port group: pnic2 active, pnic1 standby; NIC teaming: route based on originating virtual port ID

vSwitch 2

- Storage port group: both NICs active; NIC teaming: route based on IP hash
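
For reference, this is roughly what that layout looks like scripted with pyVmomi. It is only a sketch to show the intent; the host name, vmnic numbering and VLAN IDs below are made up rather than our actual values, and the VMkernel interfaces would still be created separately:

```python
# Rough sketch only: the two-vSwitch layout above. Host name, vmnic
# numbering and VLAN IDs are made up; the VMkernel interfaces themselves
# (vMotion, management, NFS) would still be created separately.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator',
                  pwd='password', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
host = next(h for h in view.view if h.name == 'esx01.example.com')
netsys = host.configManager.networkSystem

# vSwitch1: management / vMotion / VM networking; vSwitch2: storage.
for name, nics in (('vSwitch1', ['vmnic0', 'vmnic1']),
                   ('vSwitch2', ['vmnic2', 'vmnic3'])):
    netsys.AddVirtualSwitch(
        vswitchName=name,
        spec=vim.host.VirtualSwitch.Specification(
            numPorts=128,
            bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=nics)))

def add_pg(vswitch, pg, vlan, policy, active, standby):
    """Create a port group with its own teaming policy and failover order."""
    teaming = vim.host.NetworkPolicy.NicTeamingPolicy(
        policy=policy,
        nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(activeNic=active,
                                                       standbyNic=standby))
    netsys.AddPortGroup(portgrp=vim.host.PortGroup.Specification(
        name=pg, vlanId=vlan, vswitchName=vswitch,
        policy=vim.host.NetworkPolicy(nicTeaming=teaming)))

# vSwitch1: virtual port ID teaming, vMotion/management pinned to one uplink,
# VM networking pinned to the other, each with the remaining uplink as standby.
add_pg('vSwitch1', 'vMotion',    20, 'loadbalance_srcid', ['vmnic0'], ['vmnic1'])
add_pg('vSwitch1', 'Management', 10, 'loadbalance_srcid', ['vmnic0'], ['vmnic1'])
add_pg('vSwitch1', 'VM Network', 30, 'loadbalance_srcid', ['vmnic1'], ['vmnic0'])

# vSwitch2: storage with both uplinks active and IP hash teaming.
add_pg('vSwitch2', 'Storage', 40, 'loadbalance_ip', ['vmnic2', 'vmnic3'], [])

Disconnect(si)
```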

I assume this design will change when the 1010 is implemented, as LACP will then be an option on the vDS.

I am still considering leaving management on 1 GbE adapters. I am concerned about issues that may arise from having the management ports controlled by the Cisco VSM, though this may not be a concern when the VSM isn't a VM. I want vMotion on 10 GbE; we keep a high VM-to-host density, and entering maintenance mode takes a long time with 1 GbE!

pdrace
Hot Shot

After looking over your 10 GbE vSS design I've decided to go with that configuration.

logiboy123
Expert

Thanks for the update. Best of luck with the implementation. If you need any further assistance please don't hesitate to ask.

Regards,

Paul
