VMware Cloud Community
JonhL
Contributor

Creating vMotion, Management and iSCSI

Hi all,

I have some doubts that I would like clarified.

First, vMotion and Management traffic.

I have two NICs for this. My question is: should I create one VMkernel port for each (one with the vMotion option enabled and one with Management traffic enabled), give each a different IP, and put the two on one vSwitch, with, for example, nic1 active and nic2 standby for vMotion, and the reverse for Management traffic (nic2 active and nic1 standby)? With this I would have two different networks but still use both NICs in case of failure.

Or should I just create one VMkernel port, enable both services on it, and set both NICs active?

Which is the better choice, or gives the better performance?

I have 6 hosts in this cluster, so I need to do this on every host.

First of all, I am using QLogic adapters.

On the iSCSI side, should I use VMkernel port binding? Or just use a normal VMkernel port on my iSCSI VLAN without setting any port binding on the iSCSI software adapter?

I have both configurations running, but I do not know which gives the best performance.

Thank You

JL

11 Replies
vCloud9
Enthusiast

One crucial piece of information you didn't provide is which version of ESXi you are running or plan to run. I am assuming here that you plan to deploy ESXi 5.

If this were ESXi 4.1, I would have suggested one VMkernel port for each traffic type, with one NIC active and the other standby on the first, and vice versa on the second.

But starting with ESXi 5, VMware lets you use multiple NICs for vMotion, so I would suggest keeping both NICs active so that your vMotion traffic can make use of both.

Hopefully this thread, http://communities.vmware.com/thread/308281, answers your question on whether or not to use port binding.

-vCloud9

Please don't forget to award point for 'Correct' or 'Helpful', if you found the comment useful.
JonhL
Contributor

Hi

Thank you for your reply.

Regarding vMotion and Management traffic on one VMkernel port or two, I will take a look. But I would like to know if there is any advantage to one approach over the other.

Regarding iSCSI, I have looked at the threads, but both they and the article are for ESXi 4.x, and yes, I am using ESXi 5.

But basically, if I use 2 NICs for iSCSI it will work with multipathing; in my case, however, most of the iSCSI traffic uses only one NIC. Should I use port binding anyway, or is it pointless since I have only one NIC?

Thanks again for the reply.

JL

vCloud9
Enthusiast

You will use 2 VMkernel port groups; here is a VMware KB with clear setup steps: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=200746... . As far as the management network goes, it doesn't really matter, but vMotion can take advantage of both NICs, giving you faster VM migrations.
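
For reference, here is a rough sketch of that two-port-group, multi-NIC vMotion setup from the ESXi 5 shell. The port group names, vmk numbers, IP addresses and vmnic numbers are examples only, so adjust them to your environment:

    # Port group 1, pinned to vmnic0 (vmnic1 standby)
    esxcli network vswitch standard portgroup add --portgroup-name=vMotion-01 --vswitch-name=vSwitch0
    esxcli network vswitch standard portgroup policy failover set --portgroup-name=vMotion-01 --active-uplinks=vmnic0 --standby-uplinks=vmnic1
    esxcli network ip interface add --interface-name=vmk1 --portgroup-name=vMotion-01
    esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.50.11 --netmask=255.255.255.0 --type=static

    # Port group 2, pinned to vmnic1 (vmnic0 standby)
    esxcli network vswitch standard portgroup add --portgroup-name=vMotion-02 --vswitch-name=vSwitch0
    esxcli network vswitch standard portgroup policy failover set --portgroup-name=vMotion-02 --active-uplinks=vmnic1 --standby-uplinks=vmnic0
    esxcli network ip interface add --interface-name=vmk2 --portgroup-name=vMotion-02
    esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=192.168.50.12 --netmask=255.255.255.0 --type=static

    # Enable vMotion on both VMkernel interfaces (ESXi 5.0 syntax)
    vim-cmd hostsvc/vmotion/vnic_set vmk1
    vim-cmd hostsvc/vmotion/vnic_set vmk2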

-vCloud9

Please don't forget to award point for 'Correct' or 'Helpful', if you found the comment useful.
VirtuallyMikeB
(Accepted solution)

Good day!

In summary, you should do something like this.

vSwitch0 - Mgmt & vMotion
====================
vmk0 - Management
  Uplink 1: Active
  Uplink 2: Standby
vmk1 - vMotion
  Uplink 1: Standby
  Uplink 2: Active

vSwitch1 - iSCSI
====================
vmk2 - iSCSI Traffic
  Uplink 3: Active

If you only have one uplink for iSCSI traffic, you don't have to configure port binding.  iSCSI port binding is for multipathing.  If you only have one NIC for iSCSI, you can't multipath.  You need two or more NICs or uplinks for multipathing.
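
If you do pick up a second NIC for iSCSI later and want port binding, a rough sketch from the ESXi 5 command line would look like this (the adapter name vmhba33, the vmk numbers and the uplink layout are examples only; each bound VMkernel port must sit on a port group with exactly one active uplink and no standby uplinks):

    # Find the software iSCSI adapter name (vmhba33 is just an example)
    esxcli iscsi adapter list

    # Bind each iSCSI VMkernel interface to the software iSCSI adapter
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk3

    # Verify the bindings and rescan the adapter
    esxcli iscsi networkportal list --adapter=vmhba33
    esxcli storage core adapter rescan --adapter=vmhba33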

Now, for the management and vMotion links, I suggest using only one active and one standby uplink for each type of traffic.  Even though vSphere 5 has the multi-NIC vMotion feature, the idea of separating vMotion and management traffic is to segregate vMotion, with its bursty traffic, from your important management traffic, which happens to include your HA traffic (also important).  You certainly don't *have* to configure it like this, but keeping vMotion on its own physical link will keep it from walking on your management traffic.  In case of a failed cable, I'm sure you'd rather have a congested link than no management or vMotion traffic.
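
If you would rather set that failover order from the command line than from the vSphere Client, a quick sketch (assuming both port groups live on vSwitch0 and the uplinks are vmnic0 and vmnic1; substitute your own names):

    # Management: vmnic0 active, vmnic1 standby
    esxcli network vswitch standard portgroup policy failover set --portgroup-name="Management Network" --active-uplinks=vmnic0 --standby-uplinks=vmnic1

    # vMotion: the mirror image, vmnic1 active, vmnic0 standby
    esxcli network vswitch standard portgroup policy failover set --portgroup-name="vMotion" --active-uplinks=vmnic1 --standby-uplinks=vmnic0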

All the best,

Mike

http://VirtuallyMikeBrown.com

https://twitter.com/#!/VirtuallyMikeB

http://LinkedIn.com/in/michaelbbrown

----------------------------------------- Please consider marking this answer "correct" or "helpful" if you found it useful (you'll get points too). Mike Brown VMware, Cisco Data Center, and NetApp dude Sr. Systems Engineer michael.b.brown3@gmail.com Twitter: @VirtuallyMikeB Blog: http://VirtuallyMikeBrown.com LinkedIn: http://LinkedIn.com/in/michaelbbrown
JonhL
Contributor

Hi Mike,

Thank you for your reply.

In my current configuration I have Mgmt & vMotion set up like that.

On the iSCSI side, I see the point, so I do not need to set that up.

Thank You

JL

vCloud9
Enthusiast

If you don't have much vMotion activity, then that's the way to go. The only downside I see with that configuration, vMotion (Active/Standby), is that you are limiting the vMotion traffic to only one NIC.

-vCloud9

Please don't forget to award point for 'Correct' or 'Helpful', if you found the comment useful.
JonhL
Contributor

Hi

My big problem is that these hosts only have 4 NICs, so I need to distribute them well, and the VMs on them produce a lot of traffic.

For sharing iSCSI, vMotion, Management traffic and VM networking: what is the best way to share the NICs across these functions, or which functions should not share NICs?

JL

trink408
Enthusiast

Only one NIC for iSCSI would make me very nervous, that is, if you're running your VMs off an iSCSI storage array... not sure what you're utilizing the iSCSI storage for...

I take it you have no option to add additional NICs to those host servers?

VirtuallyMikeB

You'll often see mgmt and vMotion share links in an active/standby configuration.  This also works well for iSCSI or NFS traffic and VM traffic, also in an active/standby configuration.  It sounds like you're limited by your hardware.  If you only have four NICs, then this configuration will work well for you.  You separate the different types of traffic onto separate physical links, and only in the case of a link or NIC failure will your traffic share a physical path.  Of course, use VLANs whenever you can, as well.
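
As one illustration only (your VLAN IDs, port group and vmnic names will differ), a four-NIC layout along those lines could be tagged and pinned like this:

    # vSwitch0 (vmnic0 + vmnic1): Management active on vmnic0, vMotion active on vmnic1
    # vSwitch1 (vmnic2 + vmnic3): iSCSI active on vmnic2, VM traffic active on vmnic3

    esxcli network vswitch standard portgroup set --portgroup-name="Management Network" --vlan-id=10
    esxcli network vswitch standard portgroup set --portgroup-name="vMotion" --vlan-id=20
    esxcli network vswitch standard portgroup set --portgroup-name="iSCSI" --vlan-id=30
    esxcli network vswitch standard portgroup set --portgroup-name="VM Network" --vlan-id=40

    esxcli network vswitch standard portgroup policy failover set --portgroup-name="iSCSI" --active-uplinks=vmnic2 --standby-uplinks=vmnic3
    esxcli network vswitch standard portgroup policy failover set --portgroup-name="VM Network" --active-uplinks=vmnic3 --standby-uplinks=vmnic2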

Cheers,

Mike

----------------------------------------- Please consider marking this answer "correct" or "helpful" if you found it useful (you'll get points too). Mike Brown VMware, Cisco Data Center, and NetApp dude Sr. Systems Engineer michael.b.brown3@gmail.com Twitter: @VirtuallyMikeB Blog: http://VirtuallyMikeBrown.com LinkedIn: http://LinkedIn.com/in/michaelbbrown
joshodgers
Enthusiast

I agree with Mike's comments, but you also need to consider the HA implications when using IP storage (iSCSI); I discuss the topic in detail here.

http://joshodgers.com/2012/05/30/vmware-ha-and-ip-storage/

Hope it helps.

Josh Odgers | VCDX #90 | Blog: www.joshodgers.com | Twitter @josh_odgers
JonhL
Contributor

Hi All,

We use VLANs for all traffic, so even when vMotion, Management traffic, or iSCSI use the same NIC, VLANs keep the traffic separated.

I think with this it is OK to share a NIC across these functions. As I said, I have a shortage of network adapters on these hosts and it is not possible to add more.

Thank You

JL
