VMware Cloud Community
heybuzzz
Enthusiast

Are you moving to 10 gigabit ethernet?

I'm curious to know if anyone out there is running 10 Gigabit Ethernet to their ESX hosts (or is planning to move in that direction). I had a talk with one of our local Cisco reps yesterday, and he was telling me that I could replace my six pNICs (2 for SC/VMotion and 4 for Production) and my dual 4 Gbps FC card with just two copper cables running 10 Gig E. Having six physical cables from three different NICs going to two different switches in my data center gives me good peace of mind; to take that all away and rely on just two cables... hmmmm. I was called into this meeting when they started to talk about how great it was and what a cost saver it would be. Anyway, I have done some reading on my own and would like to hear some personal experiences. Thanks

Pretty good read...

12 Replies
jguidroz
Hot Shot

Yes, we are looking to move to 10Gig next year. With the number of VMs we are looking to virtualize, it just made sense to move to 10Gig. We will most likely purchase two dual-port cards per server and start with 2 connections from each server, moving to 4 if we need to. This is strictly for VM and storage traffic. VMotion and Service Console traffic will remain on the built-in NICs on the server, going to separate network switches.

The basis of this is combining your storage fabric and network fabric into one unified fabric. Like any good fabric design, redundancy should be built in.
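
For what it's worth, here is roughly what that layout looks like from the ESX service console. This is only a sketch; the vmnic numbers, port group names, VLAN IDs and IP addresses are placeholders for whatever your hardware and network actually use, and vSwitch0 with its Service Console port group usually already exists after install.

# vSwitch0: on-board 1Gb NICs for Service Console and VMotion, uplinked to separate physical switches
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic1 vSwitch0
esxcfg-vswitch -A VMotion vSwitch0
esxcfg-vmknic -a -i 192.168.50.11 -n 255.255.255.0 VMotion
# (VMotion is then enabled on that VMkernel NIC in the VI Client)

# vSwitch1: the two 10Gb ports, strictly for VM and IP storage traffic
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1
esxcfg-vswitch -A "VM Network" vSwitch1
esxcfg-vswitch -v 100 -p "VM Network" vSwitch1
esxcfg-vswitch -A IPStorage vSwitch1
esxcfg-vswitch -v 200 -p IPStorage vSwitch1
esxcfg-vmknic -a -i 192.168.60.11 -n 255.255.255.0 IPStorage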

heybuzzz
Enthusiast

"VMotion and Service Console traffic will remain on the built-in nics on the server to separate network switches."

See, that's my issue... I was told to just combine those functions on the two 10 Gig cables. I do not like the idea of NOT keeping them separate, but keeping them separate means burning a 10 Gig port as a 1 Gig port on the Nexus 5000 series, which increases the cost.

Texiwill
Leadership

Hello,

10Gb and CNAs will change how you do security within the virtual environment, but not as much as you would expect. You still want segregation between your Service Console/Management Appliance, VMotion, IP Storage, and your VM networks. Using just two 10Gb links will not give you this segregation and will not necessarily grant you security. Consider that NO ONE recommends just two pNICs for ESX; it is best to use your 10Gb links for what is important, most likely IP Storage OR the VM Network, not both at the same time. But, you say, no one can saturate a 10Gb link... Some disk IO will be able to in the future. You need to plan not only for performance but for security as well. heybuzzz's approach is the best I have seen so far, but I do not know what he wants to use the 10Gb links for: just for VMs, or for IP Storage + VMs? Are they FC CNAs?


Best regards,

--
Edward L. Haletky
vExpert XIV: 2009-2023,
VMTN Community Moderator
vSphere Upgrade Saga: https://www.astroarch.com/blogs
GitHub Repo: https://github.com/Texiwill
heybuzzz
Enthusiast

Yes, I do not want to combine all the functions on two 10 Gig links, but the "experts" said it would be fine to combine everything. So my engineers ask me why I need two 1 Gig cables for my SC/VMotion traffic when they have read that it can all ride on the two 10 Gig links. Yes, they are FC CNAs.

"You still want segregation between your Service Console/Management Appliance, V-Motion, IP Storage, and your VM networks. Using just 2 10Gb links will not give you this segregation and will not necessarily grant you security."

Can you expand on this? Thanks

Shows all on one 10 gig cable

Texiwill
Leadership

Hello,

"You still want segregation between your Service Console/Management Appliance, V-Motion, IP Storage, and your VM networks. Using just 2 10Gb links will not give you this segregation and will not necessarily grant you security."

Most FCoE CNAs will use one or two cables; they are a tad different from straight 10Gb links. With straight 10Gb links you can have a 'bump' in the wire and someone could in essence sniff all the traffic; with a CNA it does not work like that. So let's look at the non-CNA case first.

Using 2 pNICs, regardless of speed, implies you must TRUST VLANs to make everything work, specifically the trunking of those VLANs through your pSwitch to your vSwitch. vSwitch security is such that currently known Layer-2 attacks cannot occur, but Layer-2 attacks can occur within your pSwitch, so your pSwitch is the weakest link. Most physical switches have some defenses against Layer-2 attacks, but they are limited. So in essence, using VLANs as a security measure depends on your TRUST level in your physical switch: that everything is set up properly, that your network has not been hacked, etc. This is a precarious situation from a security perspective, because 'TRUST' has to be earned, not just given.

CNAs with Nexus switches are a bit different, however. These allow 7 Gbps to be assigned to the FC SAN and 3 Gbps to be assigned to standard networking (a simplified description). Because of this, your CNA needs to go to your Nexus switch. They work differently, but the security issues are really the same: do you TRUST VLANs? Because you will need to use them on the pSwitch if you only have 2 CNAs. The wiring in the Nexus handles the FC and the networking differently, but the networking side still has Layer-2 attack issues. So using a CNA is really not much better than using a standard pNIC. Specifically, you may not be able to use VST (Virtual Switch Tagging) and will mostly use EST (External Switch Tagging).
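
To make the VST/EST distinction concrete, here is a rough sketch from the ESX side (the port group names and VLAN IDs are made up): with VST the vSwitch applies the tag, so the pSwitch or Nexus port facing the pNIC/CNA must be a trunk carrying those VLANs; with EST the port group carries no VLAN ID and the physical switch access port decides which VLAN you are on.

# VST (Virtual Switch Tagging): the port group carries the VLAN ID,
# so the physical switch port must be a trunk for VLANs 110, 120, ...
esxcfg-vswitch -A VM-Finance vSwitch1
esxcfg-vswitch -v 110 -p VM-Finance vSwitch1
esxcfg-vswitch -A VM-DMZ vSwitch1
esxcfg-vswitch -v 120 -p VM-DMZ vSwitch1

# EST (External Switch Tagging): VLAN ID 0 means the vSwitch does no tagging;
# the physical switch access port decides which VLAN the traffic lands in.
esxcfg-vswitch -A VM-EST vSwitch1
esxcfg-vswitch -v 0 -p VM-EST vSwitch1

# (VLAN 4095 on a port group is VGT: tags are passed through to the guest.)

Either way, the VLAN trust question lands on the physical switch where those trunks terminate.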

Using the 10Gb pNIC for Storage is a great idea; segregate all other networks.

Using the CNA for Storage + Virtualization Host Networks (VMotion/SC) may also be a worthwhile idea, but I would not use it for all of that plus the VM network.

With either tech you have a level of TRUST that needs to be earned.

You still need to keep your data separate as much as possible. I will post a blog on this once I put my thoughts down on paper; there are many other considerations for the use of 10Gb links.


Best regards,

--
Edward L. Haletky
vExpert XIV: 2009-2023,
VMTN Community Moderator
vSphere Upgrade Saga: https://www.astroarch.com/blogs
GitHub Repo: https://github.com/Texiwill
jguidroz
Hot Shot

I agree with your points on trusting VLANs. A lot of people believe VLANs=security, and this is just false.

However, on your description of the CNAs, I'm a bit confused. Everything I've read regarding CNAs is that the board is made up of three chips: a 10Gb Ethernet chip, a 4Gb FC chip, and a third chip designed by Cisco that provides the lossless Ethernet. You will get 10Gb Ethernet from a CNA for your networking.

This link gives a pretty good picture of how a CNA operates.

http://www.internetworkexpert.org/2009/01/01/nexus-1000v-with-fcoe-cna-and-vmware-esx-40-deployment-...

Texiwill
Leadership

Hello,

I was going by what Cisco said on the Virtualization Security Podcast... However, while the FC is a separate chip, everything still travels over the wire. Effectively there are 'channels' on the wire, and FC takes up some, Ethernet the others, etc. Either way you are commingling data between the CNA and the Nexus, but since both are on the wire there is no possibility of a bump in the wire... so it's the pSwitch that needs to be protected, etc., as well as the vSwitch.


Best regards,

--
Edward L. Haletky
vExpert XIV: 2009-2023,
VMTN Community Moderator
vSphere Upgrade Saga: https://www.astroarch.com/blogs
GitHub Repo: https://github.com/Texiwill
jguidroz
Hot Shot

I completely agree. It all goes back to Trust.

rDale
Enthusiast

I can say that I've been running 16-core, 128GB boxes with over 100 VMs on them for over a year, including heavy web and SQL servers, and all boxes run everything through 2 single-port 10Gb interfaces.

We have not had a single performance problem; in fact, we discovered that VMotion won't go above 900mb/sec. With a single vSwitch you can still push console and VMotion onto one interface and data onto the other.
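
In case it helps anyone picture it: with both 10Gb uplinks on one vSwitch you keep full redundancy, and you steer console/VMotion onto one link and VM data onto the other by overriding the NIC teaming failover order per port group in the VI Client. A rough sketch with placeholder names:

# One vSwitch, both 10Gb uplinks attached
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1
esxcfg-vswitch -A "Service Console" vSwitch1
esxcfg-vswitch -A VMotion vSwitch1
esxcfg-vswitch -A "VM Network" vSwitch1

# Then, per port group, override the failover order in the VI Client
# (port group > Edit > NIC Teaming > Override vSwitch failover order):
#   Service Console / VMotion : Active = vmnic2, Standby = vmnic3
#   VM Network                : Active = vmnic3, Standby = vmnic2
# Normally each traffic type stays on its own 10Gb link, but either can
# fail over to the other link if one goes down.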

Yes, you need VLANing, and for us that required heavy external security monitoring, but it was worth it.

I think you would be surprised how little traffic actually occurs on the NICs, even with 100+ VMs. What we looked for was latency, and the move from six 1Gb interfaces to two 10Gb has been excellent.

Given that we have over 200 VLANs in use, separate interfaces weren't even an option.

As for switching, we are using Nexus 5020s and they are fast.

meistermn
Expert

Can you tell us more about the latency difference between 1 gigabit and 10 gigabit? :)

CWedge
Enthusiast

1GbE latency is around 120 microseconds; 10GbE uses a sliding window, starting from a base of about 10 microseconds and going up to around 70 depending on load.

heybuzzz
Enthusiast

I appreciate all of the info. Our 2nd DC is being built now, and they plan on fitting it with some Cisco Nexus 5010s or 2148s. I'll let the network engineers worry about all the VLANs.
