VMware Cloud Community
esnmb
Enthusiast

Dual 10G NICs

I have a BladeCenter chassis that will allow each blade to have two standard 1 Gb NICs as well as two 10G virtual adapters that can be split up into multiple NICs with varying speeds. I am thinking about using the 1 Gb NICs as management ports.

For the 10G NICs, should I just leave them as two 10G NICs and create multiple vNetworks/vSwitches off of them, with separate VLANs of course? Or is there any benefit to splitting vMotion onto a lower-bandwidth network and having multiple networks for server traffic? I'm not sure I see a good reason to do this since it's still the same physical ports...

Thoughts, ideas?

Thanks,

Matt

10 Replies
vogie563
Enthusiast

A pNIC can only be connected to one vSwitch at a time, so if you need multiple vSwitches for any reason you may need to carve up the adapter.

Both of these are good reads about setting up networking:

http://blogs.vmware.com/networking/2011/12/vds-best-practices-rack-server-deployment-with-two-10-gig...

http://blogs.vmware.com/networking/2011/11/vds-best-practices-rack-server-deployment-with-eight-1-gi...

esnmb
Enthusiast

Right, but I wasn't sure if there were any reasons to carve it up, or whether to just keep it as two 10G NICs along with my two 1 Gb NICs.

I'll check out the posts you sent as well.

Thanks!

vogie563
Enthusiast

Unless you need multiple vSwitches, I can't think of any. A lot of servers have CNAs now, and the usual recommendation is just to carve up your traffic with port groups/VLANs. The articles go into this, and also traffic shaping if you have I/O requirements.
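For anyone finding this thread later, the carve-up described above can be scripted from the ESXi shell with esxcli. This is only a sketch, not a definitive config: the vSwitch name, vmnic numbers, VLAN IDs, and bandwidth values below are made-up placeholders for a typical two-uplink 10G setup, and the shaping option names are from the 5.x-era esxcli, so double-check them against your build.

```shell
# Assumed names: vSwitch1 carries the two 10G uplinks (vmnic2/vmnic3);
# VLAN IDs 10 and 20 are placeholders for your server/vMotion VLANs.

# Create the vSwitch and attach both 10G uplinks
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic3

# Carve the traffic with port groups, one VLAN each
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=Servers
esxcli network vswitch standard portgroup set --portgroup-name=Servers --vlan-id=10
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=vMotion
esxcli network vswitch standard portgroup set --portgroup-name=vMotion --vlan-id=20

# Optional egress traffic shaping if you have I/O requirements
# (avg/peak bandwidth in Kbps, burst size in KB -- values here are arbitrary)
esxcli network vswitch standard policy shaping set --vswitch-name=vSwitch1 \
    --enabled=true --avg-bandwidth=4000000 --peak-bandwidth=8000000 --burst-size=102400
```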

esnmb
Enthusiast

ok cool.

I'm reading them now...

Now the other issue. These articles are talking about vDSes, but I had several bad experiences with the Nexus 1000V where the VSMs lost their connection to vCenter due to an intermittent power issue. There was a delay in ESX failing over to the powered-up external physical switch, which caused the VSMs to drop off the network. I had to pull the ESX hosts and VSMs off of the Nexus so they could talk again, then put them back on. This of course was not good, since multiple VMs lost connectivity to the network.

So I am going back to standard virtual switches to avoid this. Stability is more important than the extra features the Nexus and the vDS can provide... at least in the eyes of the powers that be here.

vogie563
Enthusiast

The ideas should still apply; you will just have to set up each host manually, or configure them via host profiles, to get the vSwitches, uplinks, and port groups onto each host.
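As a sketch of that per-host manual setup, assuming the two 1 Gb NICs from the original post are vmnic0/vmnic1 and carry management on the default vSwitch0 (the names and the active/standby choice are assumptions, not a recommendation from the thread):

```shell
# Assumed: vmnic0/vmnic1 are the two 1 Gb NICs used for management.
# vSwitch0 already exists with vmnic0; add the second 1 Gb uplink.
esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=vmnic1

# Explicit failover order: vmnic0 active, vmnic1 standby
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 \
    --active-uplinks=vmnic0 --standby-uplinks=vmnic1

# Repeat (or script) this on every host -- standard vSwitches are
# configured per host, unlike a vDS, which is the manual-setup cost
# mentioned above.
```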

Sorry to hear about your bad experience with the 1000v and vDS. I am testing one out now and going through a re-connection, just due to our vCenter upgrade and forgetting about the 1000v since it was only a test. Even while disconnected, the 1000v has still been passing traffic just fine; I just can't change any configs. I didn't try any failover testing, so maybe I would see the same problems; vMotion was OK, but I didn't unplug any NICs or drop a physical switch.

esnmb
Enthusiast

If vCenter goes down, the VMs that are up should still pass traffic, but if you reboot them they will be off the network...

vogie563
Enthusiast

Our vCenter is a VM, so we keep vCenter, management, vMotion, etc. on standard vSwitches. That way, if the vDS goes down, vCenter is still online.

A vDS or 1000v VEM should always pass traffic if the VSM or vCenter is down, just with no config changes.

I have heard horror stories of people cutting everything over to a vDS and getting locked out of the environment because vCenter was also on the vDS. If all networking needs to use the vDS, then vCenter should be physical, IMO; if vCenter is a VM, keep it on a standard switch. It's a nice CYA.

esnmb
Enthusiast

Yeah, we used to have a virtual vCenter but hit the issue where the VSMs went off the network. Trying to locate vCenter on 14 different hosts one at a time was a pain in the @$$... We made it physical after that.

vogie563
Enthusiast

Yup, the vDS and the 1000v add a bunch of design complexity. I have a DRS rule in place to keep vCenter on a certain host as much as possible. We have half the hosts you do, and I don't want to play the shell game either if something happens. :-)

esnmb
Enthusiast

Shell game indeed...

Thanks again for your help on this btw.
