VMware Cloud Community
cshells
Enthusiast

Cisco UCS B Series Networking?

We will be upgrading to vSphere 5.5 in the near future. While we are doing this, we are looking to redesign the networking. I had a few questions about best practice. Our UCS is a B Series with 10 Gb NICs.

Our traffic will be:

vMotion

Management

VM data

vCloud

FCoE for storage

-So with our environment having 10 Gb NICs and the way UCS utilizes NICs, I assume using NIOC would be the best way to manage traffic?

-Also, are there any pros or cons to using separate vDS switches? For example:

vDS 1

-Management - vmnic01, vmnic02

-vMotion - vmnic01, vmnic02

-VM data - vmnic03, vmnic04

vDS 2

-Management - vmnic01, vmnic02

-vCloud - vmnic03, vmnic04

I would like to utilize multiple NICs for vMotion. We also don't come close to utilizing the full 10 Gb NICs, so would adding a third NIC to VM data increase throughput by dispersing the data across more uplinks? I understand that without using pinning on the UCS the traffic flows through any physical NIC, or am I off on that? I guess I don't totally understand how traffic flows out the physical NICs on a B Series. For instance, if I have three vmnics for my VM data traffic, how do I know it is flowing through three physical NICs and not out one if there is sufficient bandwidth?

I am looking for a little better insight into the networking on a UCS, and some suggestions or recommendations from others who are using a B Series.

14 Replies
lwatta
Hot Shot

Great question.

You have a couple of options when it comes to traffic management. You can use the QoS settings within UCS (the Platinum, Gold, Silver, etc. classes) to define bandwidth, or you can use NetIOC. I've seen customers use both. With UCS it's hardware based, and with NetIOC it's software based.
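Whichever you pick, the underlying math is the same share/weight idea. Here is a minimal Python sketch, using invented share values (not settings from this thread), of how a 10 Gb uplink gets divided when every traffic class is contending:

```python
# Illustrative only: how share/weight-based allocation (as in NetIOC or UCS
# QoS classes) divides a 10 Gb uplink under full contention.
# The share values below are examples, not a recommendation.

def allocate_bandwidth(shares, link_gbps=10.0):
    """Return each traffic type's bandwidth (Gbps) when all are contending."""
    total = sum(shares.values())
    return {name: link_gbps * s / total for name, s in shares.items()}

shares = {"vm_data": 100, "vmotion": 50, "management": 20, "vcloud": 50}

for traffic, gbps in allocate_bandwidth(shares).items():
    print(f"{traffic:<11} {gbps:.2f} Gbps")

# vm_data gets 100/220 of the link (~4.5 Gbps) only when everything else is
# also pushing traffic; otherwise it can use the idle bandwidth.
```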

I see a lot of vmnics, which tells me you are probably using the Cisco VIC (Palo) card. And since you say 10G, I'm assuming it's the older gen card, not the 1240 or 1280. Keep in mind each vmnic you create is getting a share of the 10 Gb of bandwidth.

Multiple VDS is totally up to you. It's providing a logical separation for you in vCenter but you are still using the same physical uplinks in the UCS system.

For vMotion traffic, adding more vmnics will not really help if you only have one (10G) VIC card. Assuming your vMotion traffic is all in the same UCS system, you want to make sure to create and use vmnics that are all on the same fabric. This will prevent vMotion traffic from going northbound outside the UCS and greatly improve vMotion performance. Definitely create another vmnic for vMotion traffic on fabric B, but have it allocated for failover only. You don't want to load balance the vMotion traffic across fabrics A and B unless it's going outside the UCS.
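As a rough sketch of that layout (the vmk and vmnic names below are placeholders, not from a real config), the vMotion port group teaming would look something like this:

```python
# Illustrative teaming layout for a single-fabric vMotion design on the
# older (single-link) VIC. vmnic names and fabric mapping are assumptions.

vmotion_portgroup = {
    "vmk_port": "vmk1",
    "teaming": {
        "active":  ["vmnic2"],   # vNIC pinned to fabric A in UCS
        "standby": ["vmnic3"],   # vNIC pinned to fabric B, failover only
    },
}

# With only the fabric-A vmnic active on every host, vMotion traffic stays on
# the A-side fabric interconnect instead of hairpinning through the upstream
# switch to reach a host whose vmk landed on fabric B.
print(vmotion_portgroup)
```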

If you are using the 1240 or 1280 VIC, it makes sense to create another vmnic on fabric A and allocate it for vMotion. In this case ESX will load balance and you will see better performance. On the 1240 and 1280 we create a port-channel upstream from the host to the FI. While it looks like 20G, generally you will only get 10G with one vmnic because of the hashing algorithm.
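To illustrate why the hashing caps a single vmnic at 10G, here is a toy Python sketch; it is not Cisco's actual hash, just the general idea of hashing a source onto one port-channel member:

```python
# Toy illustration of why the 20G port-channel doesn't speed up one vmnic.
# This is NOT the actual UCS/Nexus hash; it just shows the general idea:
# the hash maps a given source (flow or vNIC) to ONE member link.
import hashlib

MEMBER_LINKS = ["member-0 (10G)", "member-1 (10G)"]

def pick_member(flow_id: str) -> str:
    digest = hashlib.md5(flow_id.encode()).hexdigest()
    return MEMBER_LINKS[int(digest, 16) % len(MEMBER_LINKS)]

# The same vmnic/flow always lands on the same 10G member ...
print(pick_member("vmnic2:vmotion-stream"))
print(pick_member("vmnic2:vmotion-stream"))   # identical result every time
# ... while a second vmnic may hash to the other member, which is why adding
# another vmnic on the same fabric can raise aggregate throughput.
print(pick_member("vmnic4:vmotion-stream"))
```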

Let me know if that helps.

louis

cshells
Enthusiast

Louis, thanks for the response. My knowledge is much better in VMware than UCS.

So are there any pros or cons to using NIOC over UCS's QoS?

So our VIC is a 2104XP, with 4 NICs on each fabric. I understand that with the UCS the vmnics will be sharing the bandwidth; that's why I figured using NIOC is the best way to manage the traffic between all those vmnics. So having multiple vmnics for vMotion won't help performance? I read an article that described the process for setting up multi-NIC vMotion, so I assumed that if we configured that, it would split traffic over two physical NICs. I guess that is one of my big questions. So in theory, let's say I have two vmnics for VM data and two vmnics for vMotion. How do I know it is actually splitting the traffic between two physical NICs on the UCS? Since those vmnics are actually vNICs on the UCS, they don't actually correlate to a physical NIC. So if one of the physical NICs on the UCS had enough bandwidth available, how do I know that the traffic for vmnic01 and vmnic02 isn't just flowing through one physical NIC? Hopefully I explained that correctly.

Thanks for the tip on creating the vMotion NICs on the same fabric. That makes sense to keep traffic within the chassis.

We will be upgrading to a new chassis in the future, so I assume we will be getting either the 1240 or 1280 VIC. So I guess on your last point I am a little confused. So it would be set up like:

vMotion - vmnic01 (fabric A), vmnic02 (fabric B) ----> Port 1/1 (FI A), Port 1/1 (FI B) configured in a port-channel

Is that what you're saying? Then it will load balance between them?

Thanks for all your help.

cshells
Enthusiast

Louis,

So like I said, my UCS knowledge is a little weak. In doing some more research tonight I realized I didn't understand the UCS parts correctly. I was confused about what exactly the VIC was; I thought the VIC was the I/O module. So our VIC isn't a 2104XP, that is our I/O module, and we do have the older gen VIC. We are still upgrading in the future, so I assume we will get the 1240 or 1280 VIC and an 8-port I/O module.

chriswahl
Virtuoso

I've written a few posts on networking with vMotion and UCS to get you started:

I find NIOC + Traffic Shaping to be quite effective over UCS QoS policies.

VCDX #104 (DCV, NV) ஃ WahlNetwork.com ஃ @ChrisWahl ஃ Author, Networking for VMware Administrators
cshells
Enthusiast

Thanks for the reply Chris.

I read through those articles and everything is coming together a little better now. The only part I am a little muddy on is the pinning. I assume the pinning configuration is done on the UCS? So let's say I have dvUplink1; in the UCS is where I determine if this uplink is connected to fabric A or B?

Another question regarding vMotion traffic shaping. So with a 10 Gb network I can have a max of 8 concurrent vMotions (I think). If I have the available bandwidth, I want my vMotion traffic to use any available bandwidth to increase the speed. I am assuming I would need to determine how much bandwidth 8 concurrent vMotions use, and from there I can determine what numbers to configure for the bandwidth traffic shaping. Would this be the correct way of doing this? Or am I off in my thinking?

lwatta
Hot Shot

On the pinning, he's saying to pin your vMotion traffic to one fabric in vCenter, which is similar to when I said to keep all your vMotion traffic on one of the UCS fabrics to keep it from going northbound out of the UCS.

I don't know enough about your network layout, but generally I would not tune your network for the maximum amount of vMotion traffic, especially on a shared link. You need to think about it the other way around; otherwise you will starve your VM and management traffic.

louis

chriswahl
Virtuoso

I read through those articles and everything is coming together a little better now. The only part I am a little muddy on is the pinning. I assume the pinning configuration is done on the UCS? So let's say I have dvUplink1; in the UCS is where I determine if this uplink is connected to fabric A or B?

Every vNIC in UCS is pinned to a fabric interconnect (A or B). Make sure to be consistent with which fabric interconnect you put your vMotion vmk port on to keep traffic within UCS. Otherwise it may need to travel to your upstream switch and come back down to the other fabric interconnect.

Another question regarding vMotion traffic shaping. So with a 10 Gb network I can have a max of 8 concurrent vMotions (I think). If I have the available bandwidth, I want my vMotion traffic to use any available bandwidth to increase the speed. I am assuming I would need to determine how much bandwidth 8 concurrent vMotions use, and from there I can determine what numbers to configure for the bandwidth traffic shaping. Would this be the correct way of doing this? Or am I off in my thinking?

It doesn't matter how many vMotions are occurring; a single vMotion can consume the entire 10 Gb link for a brief period of time. Set the traffic shaping value to the maximum amount of bandwidth you will allow vMotion to consume, such as 8 Gb (as an example). This really only comes into play when two unique hosts are sending a VM to one specific destination host, since NIOC (which controls source traffic) will handle single-host-source to single-host-destination traffic.
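If it helps to see the numbers, the vDS shaping policy is entered in kilobits per second, so the 8 Gb example translates like this (the cap itself is just the example value, not a recommendation):

```python
# Worked conversion for the 8 Gb example above. vDS traffic shaping takes
# average/peak bandwidth in kbit/s (and burst size in KB).

cap_gbps = 8                      # maximum bandwidth allowed for vMotion
cap_kbps = cap_gbps * 1_000_000   # 8 Gbit/s = 8,000,000 kbit/s

print(f"Average bandwidth: {cap_kbps:,} kbit/s")
print(f"Peak bandwidth:    {cap_kbps:,} kbit/s")
# Burst size only matters when peak is set higher than average.
```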

VCDX #104 (DCV, NV) ஃ WahlNetwork.com ஃ @ChrisWahl ஃ Author, Networking for VMware Administrators
cshells
Enthusiast

Louis and Chris I appreciate the feedback.

If these were rack mounted servers I wouldn't have as much of an issue. The UCS adds an area of unknown for me.

Ok, I wasn't aware I could tell which port in vCenter was connected to which fabric.

Oh ok. I didn't realize even one vMotion could consume so much bandwidth. So we will just have to determine how much bandwidth we want vMotion to utilize, but like Louis pointed out, we just need to be cautious not to overrun other traffic.

I read Duncan's article about creating a multi-NIC vMotion, which makes sense on rack-mounted servers. Chris, to your point, I understand we would use a multi-NIC setup but in an active/standby configuration (active/fabric A, standby/fabric B) for the purpose of keeping all traffic within the UCS, and then fail over to fabric B for redundancy. Is there any benefit to creating a multi-NIC vMotion for performance gains on a UCS, or is it even possible? Like Duncan said, you would have vMotion01 - vmnic01/active, vmnic02/standby and vMotion02 - vmnic02/active, vmnic01/standby. I apologize if these are obvious answers I am just not seeing.

One more thought I need to clear up in my head for better understanding. Chris, like you said, every vNIC is pinned to a certain fabric, but from what I understand, every vNIC isn't pinned to a particular port on that fabric, correct? So if you have two ports set to active/active for your VM traffic, how do you know it is actually utilizing two physical ports on your UCS? I guess that is my biggest confusion with the UCS. How does the UCS determine how many or which physical ports it uses? Say I have four vNICs on my UCS; are they actually using four physical ports on the fabric in UCS? To answer my own question, if I have eight vNICs on one fabric in my UCS and only four physical ports on that fabric, it can't be using eight physical ports.

I guess all this boils down to one point of multi-NIC design on a UCS. Does a multi-NIC configuration on a single fabric have any benefit?

Option A: VM traffic - vmnic01 (active, fabric A), vmnic02 (active, fabric A)

Option B: VM traffic - vmnic01 (active, fabric A)

Is option A going to give me more performance, or are they both the same since UCS will utilize available bandwidth regardless of how many active vNICs you have on a particular fabric? Maybe my questions have obvious answers I am just not understanding, in which case I apologize for the dumb questions.

chriswahl
Virtuoso

Is there any benefit to creating a multi-NIC vMotion for performance gains on a UCS, or is it even possible? Like Duncan said, you would have vMotion01 - vmnic01/active, vmnic02/standby and vMotion02 - vmnic02/active, vmnic01/standby. I apologize if these are obvious answers I am just not seeing.

Multi-NIC vMotion allows all vMotions (even if it's just one) to use multiple uplinks at the same time. Every vmkernel port marked as vMotion will send traffic. It doesn't matter that it's UCS, just that there are multiple uplinks available.
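As a sketch of that layout (the vmk and vmnic names mirror the post and are placeholders, not a validated configuration), the two vMotion port groups simply reverse the active/standby order:

```python
# Sketch of the two-port-group multi-NIC vMotion pattern described above.
# vmk and vmnic names are placeholders.

multi_nic_vmotion = {
    "vMotion-01": {"vmk": "vmk1", "active": ["vmnic01"], "standby": ["vmnic02"]},
    "vMotion-02": {"vmk": "vmk2", "active": ["vmnic02"], "standby": ["vmnic01"]},
}

# Every vmk enabled for vMotion sends traffic, so even a single vMotion is
# spread across both uplinks. On UCS, keeping both active vmnics on the same
# fabric keeps that traffic inside one fabric interconnect.
for portgroup, cfg in multi_nic_vmotion.items():
    print(portgroup, cfg)
```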

One more thought I need to clear up in my head for better understanding. Chris, like you said, every vNIC is pinned to a certain fabric, but from what I understand, every vNIC isn't pinned to a particular port on that fabric, correct? So if you have two ports set to active/active for your VM traffic, how do you know it is actually utilizing two physical ports on your UCS? I guess that is my biggest confusion with the UCS. How does the UCS determine how many or which physical ports it uses? Say I have four vNICs on my UCS; are they actually using four physical ports on the fabric in UCS? To answer my own question, if I have eight vNICs on one fabric in my UCS and only four physical ports on that fabric, it can't be using eight physical ports.

Each vNIC is pinned to a fabric via the VIC (the blade's network adapter). The amount of bandwidth and number of lanes provided by your VIC vary with each model (M81KR, VIC 1240, VIC 1280, etc.), along with how many ports you have connected to the I/O module (21xx, 2204, 2208) and how many blades are sharing the I/O module. UCS uses CoS (class of service) to share the physical infrastructure, which typically comes into play when you oversubscribe your physical uplinks with virtual NICs.

However, this is largely irrelevant for a high level discussion. UCS will figure out the path between the blade vNIC and the fabric interconnect's vEth port. As long as you configure two vmk ports with vMotion, and ensure that the Active adapter for each vmk is on a different side of the UCS fabric, you are guaranteeing at least two unique paths will be used for traffic.

Brad Hedlund also has a great post here (it's a tad long):

VMware 10GE QoS Design Deep Dive with Cisco UCS, Nexus

VCDX #104 (DCV, NV) ஃ WahlNetwork.com ஃ @ChrisWahl ஃ Author, Networking for VMware Administrators
cshells
Enthusiast

Thanks Chris!  This makes more sense now.

However, this is largely irrelevant for a high level discussion. UCS will figure out the path between the blade vNIC and the fabric interconnect's vEth port. As long as you configure two vmk ports with vMotion, and ensure that the Active adapter for each vmk is on a different side of the UCS fabric, you are guaranteeing at least two unique paths will be used for traffic.

Brad Hedlund also has a great post here (it's a tad long):

VMware 10GE QoS Design Deep Dive with Cisco UCS, Nexus

So with our limited maintenance windows, I would like to be able to evacuate VMs as quickly as possible. Given everything you have answered for me, is it even possible to configure a multi-NIC vMotion that is pinned to one fabric and has the ability to fail over to the other fabric? I like the possibility of keeping all vMotion traffic on the chassis, and with your previous statement about providing two unique paths, that would cover redundancy, but can you get the best of both worlds?

chriswahl
Virtuoso

The older VIC (such as the M81KR) only has one physical connection to the fabric, so ... no. :)

The newer VICs (1240 mLOM and/or VIC 1280) have multiple connections and connect at 10, 20, or 40 Gbps depending on the configuration. vMotion can take advantage of the increased speed without need for multi-NIC configuration.

I'm not aware of a truly supported method for pinning vmks to both FIs without the potential for traffic to flow into your upstream switch. vMotion isn't fabric aware, and an A-side fabric vmk port may decide to connect to a vmk port on the B-side fabric.

VCDX #104 (DCV, NV) ஃ WahlNetwork.com ஃ @ChrisWahl ஃ Author, Networking for VMware Administrators
cshells
Enthusiast

OK. We have the older VIC, but we are upgrading in December, so the increased bandwidth should be great.

I appreciate all the help! This helped clear up a bunch of questions I had about VMware and UCS.

cshells
Enthusiast

Chris, sorry one more thing I thought of while we are on this discussion.

I just want to make sure I got this right. In your example of vMotion pinning, you pin the uplink on the dvSwitch. So in that situation, if I have my VM data traffic in active/active, I would want to create a separate dvSwitch for that traffic, correct? Since it is set globally as opposed to at the port group level.

cshells
Enthusiast

Never mind, I misread the article. You said globally on the port group; my bad.
