Apparently the other post was too long and I lost people before I got to my biggest question. Right now we have 1 vSwitch with 4 pNICs handling our SC/vMotion/Production network. For security reasons and best practices I'm thinking about moving to 1 vSwitch w/ 2 pNICs to handle SC/vMotion and 1 vSwitch with 2 pNICs to handle the production network.
I know I'm going to get slammed with questions about the performance of our Production network, but I think that 4 gig NICs is a bit extreme for about 5-6 VMs per host. Looking at our networking performance charts over the longest period of time, the most I've seen it spike to was 50,000KBps, with an average across the hosts of about 900KBps throughput. Shouldn't 2 gig NICs be able to handle even the 30,000-50,000KBps spikes, which don't happen that often?
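The arithmetic above is easy to sanity-check. A minimal sketch, assuming the charts report KBps as kilobytes per second (as the VI Client performance charts do) and using decimal units for the gigabit links:

```python
# Can two teamed 1 Gbps pNICs absorb the observed spikes?
# 1 Gbps = 1,000,000,000 bits/s = 125,000 kilobytes/s
GBPS_TO_KBPS = 1_000_000_000 / 8 / 1000

def headroom(nics, peak_kbps):
    """Return the fraction of teamed capacity used by a traffic peak."""
    capacity = nics * GBPS_TO_KBPS
    return peak_kbps / capacity

print(headroom(2, 50_000))  # worst observed spike -> 0.2 (20% of capacity)
print(headroom(2, 900))     # typical average load -> 0.0036 (under 1%)
```

So even the worst spike uses about a fifth of what two gigabit pNICs can carry, which supports the point that four NICs is overkill for this load.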
Kyle
Hello Kyle,
I'm starting to think that splitting the SC/vMotion traffic up with VLAN tagging/802.1q might be a leap that I'm not ready for quite yet, but would something like this be what you were talking about: pNIC0 -> SC (backup vMotion), pNIC1 -> vMotion (backup SC)?
Granted, I would have more NICs in the production and the DMZ areas, but is that the right idea?
Yes. However, unless you use VLANs, failover would put vMotion data on the SC network, which is not really desirable. But that is more a trust issue than anything else.
Best regards,
Edward L. Haletky
VMware Communities User Moderator
====
Author of the book 'VMware ESX Server in the Enterprise: Planning and Securing Virtualization Servers', Copyright 2008 Pearson Education.
CIO Virtualization Blog: http://www.cio.com/blog/index/topic/168354
As well as the Virtualization Wiki at http://www.astroarch.com/wiki/index.php/Virtualization
Most of the projects I've deployed have 3-4 separate vSwitches for SC/VMotion/Production VM/DMZ VM, depending on how you architect them. Most people do fine with 2 pNICs per port group, such as Production VM. Not all VMs' network traffic utilizes that much on average. If you read the new book from Mike Laverick, he mentions that real-world network traffic utilization is basically low on average, so it would be safe to say 2 pNICs should be sufficient. Unless your situation is different in terms of high network bandwidth throughput, in which case you might have to add more NICs for teaming/load balancing, or use 10GbE cards, but that's a totally different discussion.
pNIC01/02 -> vSwitch1 -> SC/VMotion
pNIC03/04 -> vSwitch2 -> VMotion/SC
pNIC05/06 -> vSwitch3 -> Production network
pNIC07/08 -> vSwitch4 -> DMZ network
pNIC09/10 -> vSwitch5 -> SCSI/Backup/Spare (your choice)
Regards,
Stefan Nguyen
iGeek Systems Inc.
VMware, Citrix, Microsoft Consultant
Well, in a perfect world we would all have 10 NICs, but we only have 6. What is the difference (besides having 2 NICs assigned for throughput) between your vSwitch1/vSwitch2 configured that way and putting them on the same vSwitch using 802.1q VLANs or some other form of separation? You would still have 2 pNICs for redundancy and separation. I'm not very familiar with 802.1q, and people have been on my R&D boxes for the past month, so I can't get on them to play and test my config until they finish. Obviously I would love to have 2 separate pNICs for each of SC and vMotion, but I can't justify the added cost of ordering new NICs for that when we have the ability to use 802.1q/VLAN configuration on our switches.
My big picture goal below, would this work? Or are there flaws in my mental picture?
Kyle
Hello,
Your diagram is fine. The main thing to remember is to keep the DMZ network on its own pNICs; other than that, your setup is fine. When doing 802.1q you want to use Virtual Switch Tagging (VST) instead of External Switch Tagging. VST brings the trunk into the virtual switch.
On the virtual switch side, you create a portgroup for each VLAN. On the physical side, the VLANs are trunked to the port to which the pNIC for the vSwitch is connected.
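A minimal sketch of what this looks like from the ESX service console, assuming example portgroup names and VLAN IDs 100/200 (the physical switch ports for the vSwitch's pNICs must be configured as an 802.1q trunk carrying both VLANs):

```shell
# Create a portgroup per VLAN on the existing vSwitch0 (VST mode).
esxcfg-vswitch -A "Service Console" vSwitch0        # add SC portgroup
esxcfg-vswitch -v 100 -p "Service Console" vSwitch0 # tag it with VLAN 100

esxcfg-vswitch -A "VMotion" vSwitch0                # add vMotion portgroup
esxcfg-vswitch -v 200 -p "VMotion" vSwitch0         # tag it with VLAN 200

esxcfg-vswitch -l                                   # verify portgroups and VLAN IDs
```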
Best regards,
Edward L. Haletky
VMware Communities User Moderator
So for vSwitch0, what I'm supposed to do is create 2 portgroups: one for SC (192.168.1.) with a VLAN of 100, and another for vMotion (192.168.2.) with a VLAN of 200, with 2 pNICs attached to the vSwitch. When the traffic reaches the physical switch, it will separate the traffic?
Kyle
Hello,
Correct, but one refinement...
pNIC0 -> SC Portgroup (backup for vMotion)
pNIC1 -> vMotion Portgroup (backup for SC)
Set the opposite pNIC as the backup for the other and assign a pNIC to each portgroup. This way the only time you absolutely need VLANs is when there is a failure.
Best regards,
Edward L. Haletky
VMware Communities User Moderator
====
Since I use BL460c blades too, I see one big potential problem in your scheme: both NICs you use for each vSwitch belong to the same chip. If that chip goes down, the whole network (management or production) is down. The better way is to pair nic0 + nic2, and nic1 + nic3.
That way, if one chip goes down, you still have a gigabit link remaining on each vSwitch and can seamlessly migrate VMs.
---
If you can, do NOT combine the DMZ and production networks on the same ESX hosts, even with dedicated NICs.
---
a) We don't have the money to put in 2 more hosts to run our DMZ servers on a completely isolated network, and b) I think it's a bit rash and over the top.
Granted, it is a safer setup in the big picture of things, but until we (or someone) gets hacked by an attacker jumping vSwitches and all that jazz, my tune won't change. For now I think we're better off than most people by pushing all DMZ traffic through dedicated NICs.
Thanks for the pointers though
Kyle
And one more thing: if you're using dual- or quad-port NICs, always try to build each failover group (like "SC + VMkernel") from ports on different physical NICs.
---