jftwp
Enthusiast

6 NICs versus 4 NICs per clustered host

Hi all -

For the past 3 years I've used only rackmount servers (ProLiants), and they've served us well. They have 2 onboard gig NICs, and I would install a 4-port gig PCI NIC before doing the build. Onboard NIC A and one of the 4 PCI NICs would be teamed into one virtual switch for the service console and VMotion, while the remaining 3 PCI ports + 1 onboard would be teamed into a single vSwitch for the VMs themselves. This config has allowed for physical path redundancy not only within the given ESX host (if the onboard OR the PCI card were to fail) but also to/from the pair of core Ethernet switches across which all NICs were distributed.

We've also had HP blades (BL460c) here for the past couple of years, but to date they've been used for other servers/apps. We're now low on physical ports (both Ethernet and SAN), and I've been asked to look into using blades instead of buying/cabling more rackmount servers. This makes sense on many fronts, including heat and power use in the data center, and we have available blade chassis slots to fill. Problem is, this first gen of blades/chassis will only support up to 4 NICs per blade (2 onboard, 2 via a mezzanine card).

So my question for everyone listening is this: what would be the best approach to vSwitch setup/config when given 4 ports to work with on any clustered (DRS and HA) host? What I'm thinking is 2 NICs in one vSwitch (VMotion + SC) and 2 NICs for VMs. Okay... fine... but when we upgrade to vSphere and want to leverage the new Fault Tolerance feature, I believe I've read that it requires a dedicated NIC, which would mean I couldn't implement it without giving up redundancy for service console/HA/VMotion or for VM traffic. That's what I'm worried about, but perhaps there are other caveats I'm not aware of when using 4 NICs per host?
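To make that concrete, here's a rough PowerCLI sketch of the 2+2 layout I'm considering. The host name, IP addresses, and vmnic numbering are placeholders, not our real values:

# Connect to the host (or vCenter) first: Connect-VIServer -Server esx01.example.com
$vmhost = Get-VMHost -Name "esx01.example.com"
# vSwitch0: Service Console + VMotion on one onboard NIC and one mezzanine NIC
$vs0 = New-VirtualSwitch -VMHost $vmhost -Name "vSwitch0" -Nic vmnic0,vmnic3
New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vs0 -PortGroup "Service Console" -IP 10.0.0.11 -SubnetMask 255.255.255.0 -ConsoleNic
New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vs0 -PortGroup "VMotion" -IP 10.0.1.11 -SubnetMask 255.255.255.0 -VMotionEnabled $true
# vSwitch1: all VM traffic on the remaining onboard + mezzanine pair
$vs1 = New-VirtualSwitch -VMHost $vmhost -Name "vSwitch1" -Nic vmnic1,vmnic2
New-VirtualPortGroup -VirtualSwitch $vs1 -Name "VM Network"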

(FYI, newer blades / blade chassis + modules that support far more flexible networking configurations, such as HP's 'VirtualConnect' for SAN and Ethernet connectivity, are NOT an option at this time, budget-wise... we must leverage our 2+ year old blade infrastructure as-is.)

Thanks for any feedback/information/suggestions regarding 'downgrading' from 6 Ethernet ports (gig) to 4 ports with clustered ESX 3.5 hosts.

3 Replies
Texiwill
Leadership

Hello,

I would read my Topology blogs as well as http://kensvirtualreality.wordpress.com/vswitch

Each of these covers the cases you have mentioned.


Best regards, Edward L. Haletky, VMware Communities User Moderator, VMware vExpert 2009
Now available on Rough Cuts: 'VMware vSphere(TM) and Virtual Infrastructure Security: Securing ESX and the Virtual Environment'
Also available: 'VMware ESX Server in the Enterprise'
SearchVMware Pro | Blue Gears | Top Virtualization Security Links | Virtualization Security Round Table Podcast (blog roll: http://www.astroarch.com/wiki/index.php/Blog_Roll)

AntonVZhbankov
Immortal

Here is a 'best practice' for the BL460c:

Combine vmnic0+vmnic3 and vmnic1+vmnic2 into two different vSwitches.

Service Console - vmnic0 active, vmnic3 standby

VMkernel (VMotion) - vmnic3 active, vmnic0 standby

Virtual Machine Network - vmnic1+vmnic2 in load balance
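A hedged PowerCLI sketch of those per-port-group overrides (the host name and port group names are assumptions, not confirmed values from this thread):

$vmhost = Get-VMHost -Name "esx01.example.com"
# Service Console: vmnic0 active, vmnic3 standby
Get-VirtualPortGroup -VMHost $vmhost -Name "Service Console" | Get-NicTeamingPolicy | Set-NicTeamingPolicy -MakeNicActive vmnic0 -MakeNicStandby vmnic3
# VMkernel (VMotion): the mirror image, vmnic3 active, vmnic0 standby
Get-VirtualPortGroup -VMHost $vmhost -Name "VMotion" | Get-NicTeamingPolicy | Set-NicTeamingPolicy -MakeNicActive vmnic3 -MakeNicStandby vmnic0
# VM Network: both NICs active with the default port-ID load balancing
Get-VirtualPortGroup -VMHost $vmhost -Name "VM Network" | Get-NicTeamingPolicy | Set-NicTeamingPolicy -MakeNicActive vmnic1,vmnic2 -LoadBalancingPolicy LoadBalanceSrcId

This way each vSwitch spans one onboard NIC and one mezzanine NIC, so the loss of either device leaves every traffic type with a surviving uplink.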

If you want Fault Tolerance, check the CPUs on your blades first; only the latest generations of Xeons are compatible. And yes, you would probably have to install a 4-port mezzanine network card to have a dedicated FT interface.


---

VMware vExpert 2009

http://blog.vadmin.ru

AndreTheGiant
Immortal

See Texiwill's blog, which is very well done.

Anyway, with a few NICs you can create a single vSwitch (or just 2) and play with VLANs or simple logical networks on the same physical infrastructure.

Just be sure to separate the different types of traffic (storage, VMotion/FT/management, VM traffic) by using a different NIC teaming policy for each port group, as in the sketch below.
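For example, a minimal PowerCLI sketch of that approach: one vSwitch carrying all four NICs, with VLAN-tagged port groups and a different teaming order per traffic type (the VLAN IDs, port group names, and vmnic assignments are made-up examples):

$vmhost = Get-VMHost -Name "esx01.example.com"
# One vSwitch with all four uplinks; traffic is separated by VLAN port groups
$vs = New-VirtualSwitch -VMHost $vmhost -Name "vSwitch0" -Nic vmnic0,vmnic1,vmnic2,vmnic3
New-VirtualPortGroup -VirtualSwitch $vs -Name "Management" -VLanId 10
New-VirtualPortGroup -VirtualSwitch $vs -Name "VMotion" -VLanId 20
New-VirtualPortGroup -VirtualSwitch $vs -Name "VM-Prod" -VLanId 30
# Steer each traffic type to different uplinks with per-port-group overrides
Get-VirtualPortGroup -VMHost $vmhost -Name "Management" | Get-NicTeamingPolicy | Set-NicTeamingPolicy -MakeNicActive vmnic0 -MakeNicStandby vmnic3
Get-VirtualPortGroup -VMHost $vmhost -Name "VMotion" | Get-NicTeamingPolicy | Set-NicTeamingPolicy -MakeNicActive vmnic3 -MakeNicStandby vmnic0
Get-VirtualPortGroup -VMHost $vmhost -Name "VM-Prod" | Get-NicTeamingPolicy | Set-NicTeamingPolicy -MakeNicActive vmnic1,vmnic2 -MakeNicStandby vmnic0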

Andre

If you found this or any other answer useful, please consider allocating points for helpful or correct answers.

Andrew | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro