td3201
Contributor

vSwitch best practice with physical NICs

Hello,

First, let me explain the physical architecture of my hosts: Dell PowerEdge 1950s with the onboard Broadcom NICs enabled, plus a single quad-port NIC card in each host. I'm using iSCSI as well.

So that gives me 6 NICs. Overkill? Yeah, but it's what I've got. That being said:

1) How should I be cabling this to maximize my capabilities? Open question, but assume I really don't know what I am doing (which isn't too far from the truth).

2) Any comments on the failover and load balancing policies?

5 Replies
kjb007
Immortal

This really depends on what you will be doing and how much VM traffic and I/O you expect to pump through your servers.

With 6 NICs, I would team two together for your iSCSI, two for management (Service Console/VMkernel/VMotion), and two for the VM traffic.

This would give you redundant links for all of the main network components and let you load balance all of your traffic to a degree.

I would also add that with 6 NICs, you should split them between physical switches as well, to have redundancy on that end; otherwise, a switch failure would render your ESX redundancy moot.
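
For example, from the ESX service console it might look something like this (a rough sketch; the vmnic numbering is just an assumption, so check yours with esxcfg-nics -l first):

    # List physical NICs to confirm which vmnic is which
    esxcfg-nics -l

    # iSCSI vSwitch with two uplinks
    esxcfg-vswitch -a vSwitch1
    esxcfg-vswitch -A "iSCSI" vSwitch1
    esxcfg-vswitch -L vmnic2 vSwitch1
    esxcfg-vswitch -L vmnic3 vSwitch1

    # VM traffic vSwitch with two uplinks
    esxcfg-vswitch -a vSwitch2
    esxcfg-vswitch -A "VM Network" vSwitch2
    esxcfg-vswitch -L vmnic4 vSwitch2
    esxcfg-vswitch -L vmnic5 vSwitch2

    # Management (vSwitch0 usually exists from the install with one uplink);
    # add a second uplink for redundancy
    esxcfg-vswitch -L vmnic1 vSwitch0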

-KjB

td3201
Contributor

I'm glad you said something about splitting out the management. I was going to allocate that function to the iSCSI NICs, but what you are saying makes sense.

I assume there is no harm in teaming across the two physical cards (onboard and quad)?

kjb007
Immortal
(Accepted solution)

I would definitely recommend not teaming anything else with your iSCSI NICs. Leave those for iSCSI and nothing else.

One more thing: in your setup, I would go one step further and team one onboard port and one port on your quad together for iSCSI, team the other onboard port and a second quad port for VM traffic, and team the remaining two quad ports for management. This way you can also keep your VMs up and running if you had a failure on your quad card or on your onboard NICs. It may be overkill, but if your VMs can't go down, then you're creating the highest level of availability.
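
To sketch that pairing out (assuming vmnic0/vmnic1 are the onboard ports and vmnic2 through vmnic5 are the quad card; verify with esxcfg-nics -l):

    # iSCSI: one onboard port + one quad port
    esxcfg-vswitch -L vmnic0 vSwitch1
    esxcfg-vswitch -L vmnic2 vSwitch1

    # VM traffic: the other onboard port + a second quad port
    esxcfg-vswitch -L vmnic1 vSwitch2
    esxcfg-vswitch -L vmnic3 vSwitch2

    # Management: the remaining two quad ports
    esxcfg-vswitch -L vmnic4 vSwitch0
    esxcfg-vswitch -L vmnic5 vSwitch0

That way each team survives the loss of either physical card.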

-KjB

td3201
Contributor

OK, I am on the right track now. By splitting off the management functions, that becomes a separate vSwitch with a Service Console port and a VMkernel port, correct? Any harm in that living on the same IP subnet as my iSCSI network?

kjb007
Immortal

Being in the same subnet means being part of the same broadcast traffic. It's a recommended best practice to keep your iSCSI network completely isolated; this offers security as well as other benefits. Keeping that traffic, which can be very chatty, away from other functions will help your performance/throughput as well.

The choice is yours, but isolating those types of traffic with VLANs/subnets would be better than sharing, if you have the option.
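
As a sketch, the VMkernel ports would end up on separate subnets, something like this (the addresses and VLAN ID are made up for illustration):

    # Management VMkernel (VMotion etc.) on its own subnet
    esxcfg-vmknic -a -i 192.168.10.11 -n 255.255.255.0 "VMkernel"

    # iSCSI VMkernel on an isolated subnet, optionally tagged with a VLAN
    esxcfg-vmknic -a -i 192.168.20.11 -n 255.255.255.0 "iSCSI"
    esxcfg-vswitch -v 20 -p "iSCSI" vSwitch1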

-KjB
