VMware Cloud Community
ccpspace
Contributor

10 GbE network

Hello everybody, we are about to get some new servers, a SAN and network switches. We have a good opportunity to get 10 GbE switches (probably Arista Networks) and I was wondering what the best practice for vSphere would be.

The SAN will be an iSCSI SAN (we are deciding between HP LeftHand and Scale Computing) and I was wondering whether 2 ports per server (each port connected to a different switch) will be enough for VM traffic, storage, service console and maybe HA, and whether such a configuration is supported. The original idea was to have 8 GbE ports (2 for VM traffic, 2 for service console, 2 for vMotion, 2 for HA), but just one 10 GbE port will provide more bandwidth than the 8 GbE ports combined.
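
For a rough sense of scale, here is a back-of-the-envelope sketch in Python using the port counts above (nominal link speeds only, ignoring protocol overhead):

    # Aggregate nominal bandwidth of the original 8 x 1 GbE design vs. 10 GbE ports.
    gbe_ports = 8                    # 2 VM traffic + 2 service console + 2 vMotion + 2 HA
    total_1gbe = gbe_ports * 1       # Gbit/s
    single_10gbe = 10                # Gbit/s

    print(f"8 x 1 GbE aggregate:         {total_1gbe} Gbit/s")
    print(f"single 10 GbE port:          {single_10gbe} Gbit/s")
    print(f"2 x 10 GbE (for redundancy): {2 * single_10gbe} Gbit/s")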

Will 2 ports per server be enough, or should I have more? Is anybody else running a similar environment who can share any details?

Thanks!

DCjay
Enthusiast

Depending on VM traffic and networking (VLANs), I would go for:

4 NICs for VMs (create a trunk port if more than one VLAN is needed)

2 NICs for Service Console

2 NICs for VMkernel (vMotion, etc.)

ccpspace
Contributor

That is what I'm planning for 1 GbE, but for 10 GbE I think it would be way too much and a lot more expensive.

DCjay
Enthusiast

There will not be any redundancy if you have only one 10 GbE NIC.

You can have 2 x 10 GbE NICs per ESX server.

If this is the case, you can create a team, use trunking on the switch ports, and use VLAN tagging on the port groups to separate the Service Console, VM Network and VMkernel networks.
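
To make that concrete, here is a minimal pyVmomi sketch of the layout described above: one vSwitch teamed across both 10 GbE uplinks, with VLAN-tagged port groups separating the Service Console, VM Network and VMkernel traffic. The host address, credentials, vmnic names, vSwitch name and VLAN IDs are placeholders, not settings from this thread:

    import ssl
    from pyVim.connect import SmartConnect
    from pyVmomi import vim

    # Connect straight to an ESX host (hypothetical address and credentials).
    si = SmartConnect(host="esx01.example.local", user="root", pwd="password",
                      sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    host = view.view[0]
    netsys = host.configManager.networkSystem

    # One vSwitch teamed across both 10 GbE uplinks (vmnic names are assumptions).
    spec = vim.host.VirtualSwitch.Specification()
    spec.numPorts = 256
    spec.bridge = vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic0", "vmnic1"])
    netsys.AddVirtualSwitch(vswitchName="vSwitch1", spec=spec)

    # VLAN-tagged port groups keep the traffic types separate on the shared uplinks;
    # the VLAN IDs must match what is trunked on the physical switch ports.
    for name, vlan in [("Service Console", 10), ("VM Network", 20), ("VMkernel", 30)]:
        pg = vim.host.PortGroup.Specification()
        pg.name = name
        pg.vlanId = vlan
        pg.vswitchName = "vSwitch1"
        pg.policy = vim.host.NetworkPolicy()
        netsys.AddPortGroup(portgrp=pg)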

ccpspace
Contributor

Yes, that's what I was wondering: whether 2 would be enough, whether vSphere supports something like this, and whether this configuration is in use by other users and how it is working for them.

It's going to be a big change for us, going from internal storage to a SAN and to 10 GbE, but I think it is the right time to do it. We'll see what happens.

mcowger
Immortal

2 x 10 GbE is plenty, especially if you can carve it up (HP VirtualConnect, Cisco UCS, Xsigo, etc.).

--Matt VCDX #52 blog.cowger.us
Josh26
Virtuoso

Matt wrote:

2 x 10 GbE is plenty, especially if you can carve it up (HP VirtualConnect, Cisco UCS, Xsigo, etc.).

I'm glad someone else said it.

People throw around numbers like "8 physical NICs required", but when management traffic barely accounts for anything, vMotion runs over gigabit trivially, and iSCSI rarely uses more than 2 x 4 Gb connections, what would you realistically need more than 2 x 10 GbE for?
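
Putting rough numbers on that argument, using the ballpark figures from the post itself (these are estimates, not measurements):

    # Ballpark per-traffic-type demand vs. 2 x 10 GbE capacity, all in Gbit/s.
    demand = {
        "management": 0.1,   # "barely accounts for anything"
        "vMotion":    1.0,   # "runs over gigabit trivially"
        "iSCSI":      8.0,   # "rarely uses more than 2 x 4 Gb connections"
    }
    capacity = 2 * 10

    total = sum(demand.values())
    print(f"estimated peak demand:        {total:.1f} Gbit/s")
    print(f"2 x 10 GbE capacity:          {capacity} Gbit/s")
    print(f"headroom left for VM traffic: {capacity - total:.1f} Gbit/s")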

RobBerginNH
Enthusiast

So, bandwidth aside (and I agree that 2 x 10 GbE are good-sized pipes):

Is there any value in using physical network cards for certain functions, say separating the Management Network from the other traffic types (VM traffic, storage, vMotion, FT, etc.)?

mcowger
Immortal

The advantage is being very certain about separating your traffic. If you have the ability to do virtual NICs (Cisco, HP, Xsigo can all do this), it makes it very easy to separate traffic by virtual NIC for clarity, while still setting limits/guarantees.

--Matt VCDX #52 blog.cowger.us
RobBerginNH
Enthusiast

@Matt - agreed on the partitioning of a 10 GbE physical NIC into multiple virtual NICs (not to be confused with the VM's vNIC). But what I am asking is: are people "all in" on the 10 GbE NICs, or are they keeping their management traffic on some 1 GbE NICs because they have that separated out (separate physical management switches) versus a converged network?

I was wondering if there are pros/cons to that versus what's already in place for networking.

I run 10 GbE NICs with everything on them except for management - the management network is broken off onto a separate pair of cards - and I would be trying to build a case to eliminate the management NICs and just have everything go across the 10 GbE paths.

Troy_Clavell
Immortal

For what it's worth, we run 2 x 10 GbE with one vSwitch per host (25:1 consolidation ratio). The management network, vMotion and the VM port groups all share the 2 x 10 GbE NICs. We have never seen a performance bottleneck on the network.
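
As an aside, a quick pyVmomi sketch like the one below (host address and credentials are placeholders) prints how the port groups share the uplinks, which is an easy way to sanity-check this kind of converged layout:

    import ssl
    from pyVim.connect import SmartConnect
    from pyVmomi import vim

    # Hypothetical host and credentials; adjust for your environment.
    si = SmartConnect(host="esx01.example.local", user="root", pwd="password",
                      sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    host = view.view[0]

    # Walk each vSwitch, its physical uplinks, and the port groups that share them.
    net = host.config.network
    for vss in net.vswitch:
        uplinks = [key.split("-")[-1] for key in vss.pnic]   # e.g. ['vmnic0', 'vmnic1']
        print(f"{vss.name}: uplinks {uplinks}")
        for pg in net.portgroup:
            if pg.spec.vswitchName == vss.name:
                print(f"  {pg.spec.name} (VLAN {pg.spec.vlanId})")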

mcowger
Immortal

Oh, sorry, I misunderstood the question.

I see no reason for separate physical NICs if you've got virtualizable 10 GbE gear.

--Matt VCDX #52 blog.cowger.us
HendersonD
Hot Shot

A bit off the subject, but what if someone has a virtual server infrastructure and is heading towards a significant number of virtualized desktops? Is 2 x 10 GbE still enough to handle this load? We have about 35 VMs right now but plan on moving towards using View as well. If all goes well with our pilot, I could conceivably see running 800-1,000 View sessions.

mcowger
Immortal

With 2 x 10 GbE links per host and assuming a (VERY) aggressive 200 VMs per host, you are still giving about 11 MB/s per user, which is WELL above what most users need. That's still a 100 Mbit link per user.

You will run into CPU and memory limits before you run into network limits.
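
The arithmetic behind that, as a quick sketch (the 200 VMs per host figure is the deliberately aggressive assumption above):

    # Per-VM share of 2 x 10 GbE at an aggressive consolidation ratio.
    uplink_gbit = 2 * 10          # Gbit/s of uplink per host
    vms_per_host = 200            # deliberately aggressive density

    per_vm_mbit = uplink_gbit * 1000 / vms_per_host
    per_vm_mbyte = per_vm_mbit / 8
    print(f"{per_vm_mbit:.0f} Mbit/s per VM, about {per_vm_mbyte:.1f} MB/s raw "
          f"(roughly 11 MB/s after protocol overhead)")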

--Matt VCDX #52 blog.cowger.us
HendersonD
Hot Shot

With blade servers, though, I am using the 2 x 10 GbE for server access (VM Network) as well as for desktops through VMware View. A large number of the desktops I want to virtualize and replace with zero clients could potentially be booting up at about the same time, creating quite a network load.

Does this change things? My blade servers are 4 years old and I will be purchasing new servers next year (summer 2012). My blade chassis cannot support more than 2 x 10 GbE, so I am contemplating whether I should:

  • Purchase new blades (only if 2 x 10 GbE is enough bandwidth to handle all of my server VMs and my desktop virtualization push)
  • Move towards rack-mount servers, since each one can have 2 x 10 GbE connections
  • Look at Cisco's UCS solution, which can support many 10 GbE connections

At the same time we get the new servers, we are doing a switch refresh, so I will have gigabit to the desktop everywhere.

RobBerginNH
Enthusiast

When you say you are looking at LeftHand - are you using the HP Reference Architecture with the C7000 (where they take 4 blades and drive the LeftHand stuff)?

What sort of disks are you looking to use in it?

I am looking at it as well - so maybe a conversation offline.

Thanks,

Rob
