VMware Cloud Community
RBurns-WIS
Enthusiast

FCoE - Friend or Foe?

I'm trying to get the opinions of various professionals in the industry on their thoughts about FCoE (Fibre Channel over Ethernet). I myself am a big supporter. For those who are new to the concept of FCoE, I'll briefly explain: FCoE allows the consolidation of multiple traffic flows including LAN, Management, Storage, IPC, VMotion etc. over a shared medium. That medium is 10G Ethernet. FCoE uses Priority Flow Control (PFC) and congestion control based on a buffer credit mechanism to provide the "lossless" medium essential for carrying Fibre Channel storage traffic.

Take an ESX server in a corporate network. You probably have two or three 1Gb LAN connections, two FC connections for storage, one dedicated connection for management, one connection for VMotion and potentially more depending on your configuration. At minimum, most ESX servers have no fewer than six network connections at any time. These connections can be replaced with two redundant FCoE connections. Understandably, FCoE requires special switches such as the Cisco Nexus 5000 series, which can aggregate native Fibre Channel, Ethernet and FCoE traffic. In turn these switches link up to the backbone core switches and Fibre Channel director switches.
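If it helps to picture how PFC lets all of that share one wire, here is a toy sketch in Python (purely illustrative; the class names, 802.1p priority values and the "no-drop" set are my own assumptions, not a recommended configuration): under buffer pressure only the lossless priorities get a per-priority pause, while ordinary traffic classes stay drop-eligible.

```python
# Toy model of Priority Flow Control (PFC) on a converged 10G link.
# The class names, 802.1p priority values and the "no-drop" set below are
# illustrative assumptions, not a vendor default or recommended config.

TRAFFIC_CLASSES = {
    "LAN":        {"priority": 0, "no_drop": False},
    "Management": {"priority": 1, "no_drop": False},
    "VMotion":    {"priority": 2, "no_drop": False},
    "FCoE":       {"priority": 3, "no_drop": True},   # storage must stay lossless
    "IPC":        {"priority": 4, "no_drop": True},
}

def on_buffer_pressure(congested_class: str) -> str:
    """What a PFC-capable port does when one traffic class's buffer fills."""
    cls = TRAFFIC_CLASSES[congested_class]
    if cls["no_drop"]:
        # PFC sends a per-priority pause frame: only this priority stops,
        # everything else keeps flowing on the same wire.
        return f"send PFC pause for priority {cls['priority']} ({congested_class} only)"
    # Drop-eligible classes are simply tail-dropped under congestion.
    return f"drop excess frames on priority {cls['priority']} ({congested_class})"

if __name__ == "__main__":
    for name in TRAFFIC_CLASSES:
        print(on_buffer_pressure(name))
```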

There is a slight cost associated with the new type of network card, called a Converged Network Adapter (CNA), as well as with the switches. These costs can normally be recouped through the reduction in cables and switch ports, power savings, increased performance and centralized management.

I'd like to hear anyone's opinions, concerns or comments. If you have any questions I'll be glad to answer them.

Cheers.

Rob

22 Replies
K-MaC
Expert

Do you work for Cisco?

Cheers

Kevin

RBurns-WIS
Enthusiast

Yes I do.

williambishop
Expert

I know, I'm shocked... ;-)

Seriously though, FCoE and the new datacenter network (both from Cisco and Brocade) basically call for a forklift replacement of the plant. For those of us with substantial installations, this is not a "slight" cost upgrade...

Starting new, yeah, it's feasible. But it's also bleeding edge, so it will take a while before it gets a grip and starts gaining momentum. Personally, in 6-10 years, I imagine it will be mainstream. But it's VERY expensive tech right now. Sure, I'm going to throw out the million dollars in FC infrastructure I own... Or am I? Would I be willing to try it? Yep. Just bring the chassis down into a smaller form factor so I can afford to test it. The last 9000-series switch I bought cost me about 12k vs. the 50k of the competing switch just two years earlier, and offers a lot more functionality. FC is cheap to implement these days. I don't see FCoE biting into that anytime soon. And that's not even taking into consideration the worry that a lot of us have about lowering our security by moving from glass to copper-based connectivity... or the risk of putting all of our eggs in one basket. Pass for now.

--"Non Temetis Messor."
Rodos
Expert

Rob, you just got branded as failing the "don't be evil" test. Shame on you, Cisco. As soon as I read your post I thought, "he works for Cisco." I jumped in to ask and was beaten to it. I do give you credit for being brave enough to state it.

As someone who is delving into this deeply (I have the kit in my lab), I am not going to answer your question, even if you had the best of intentions. There are plenty of us here who will be discussing this in depth, but this is a community forum and we all try to leave our organisations and agendas at the door if we can (or be better at hiding them :-) ).

If you had said, "Hey guys, I work for Cisco and I am trying to find out these specific things, and here is why", or if you had a post history above 13, you might have got some traction.

Sorry if I sound harsh; it's been a 19-hour day.

In summary: great question. Answer: hang around here for a few months and read the forums, and I think you will get lots of insight into what people think.

Consider awarding points if this is of use.

Rodos | Consider the use of the helpful or correct buttons to award points. Blog: http://rodos.haywood.org/
mreferre
Champion

Rodos, don't be so bad... Xmas is coming... ;-)

I don't think there is anything wrong with a vendor asking that. It might turn into an interesting discussion for the whole community, which may have been looking into this and now has an opportunity to express its concerns (and possibly to have the vendor provide a rationale for why something shouldn't be a concern). Do you agree?

I tend to see these things as an opportunity rather than an offense. It might also give you a chance to challenge the vendor. Let me start first ;-)

As far as the converged network goes (Nexus 5000 hardware switches, Nexus 1000V software switches...), is the Cisco x86 blade the new frontier for providing an integrated end-to-end solution for virtual environments?

http://www.virtualization.info/2008/12/cisco-to-enter-x86-server-market-with.html

Massimo.

Massimo Re Ferre' VMware vCloud Architect twitter.com/mreferre www.it20.info
Texiwill
Leadership

Hello,

Lots of thought needs to go into using converged networks. The question arises as to how this would actually be used and how security would be implemented. But before those questions can be answered, we need to know how FCoE itself and its switches protect against MiTM attacks as well as the other current crop of Layer 2 and Layer 3 switch attacks.


Best regards,

Edward L. Haletky

VMware Communities User Moderator

====

Author of the book 'VMware ESX Server in the Enterprise: Planning and Securing Virtualization Servers', Copyright 2008 Pearson Education.

SearchVMware Blog: http://itknowledgeexchange.techtarget.com/virtualization-pro/

Blue Gears Blogs - http://www.itworld.com/ and http://www.networkworld.com/community/haletky

As well as the Virtualization Wiki at http://www.astroarch.com/wiki/index.php/Virtualization

RBurns-WIS
Enthusiast

If you don't want to answer, then just bypass the post. I'm not hiding where I work, nor is my intention anything work-related. I'm an engineer and really don't care whether you like or hate Cisco or our products. It was a general question, and I'm looking for the opinions of fellow professionals. I asked for comments on a technology, not a specific product...

All other responses have been appreciated.

Rodos
Expert

Rob, I will take you at your word. For an engineer you write like a salesman, but I have probably been accused of that myself at times :-) Obviously I am turning into a grumpy old man, for which I apologize. For what it's worth, I am a big Cisco fan and have been flogging it for years. Some of my best friends run IOS. :-)

So you are keen to discuss and answer questions. Excellent, that is what we like around here. Let me clear the slate and let the discussion begin.

I would be keen to see some thoughts around Texiwill's (Ed's) question on security. He gets this security stuff, and I only know enough about it to get myself into deep water but not out of it. I would not even know what questions to ask.

For the CNAs, are they all the same or are there differences across manufacturers? What considerations should people look at when choosing a product? Do they all present a single HBA and a single 10G network interface, or can you get one that presents multiple? What do you think is coming down the track: do we need to future-proof, and how would we do it?

Typically one ends up with lots of ports in a server where physical separation of networks is required (multiple vSwitches) rather than using VLANs (one vSwitch and multiple port groups). Is this something that CNAs can help with?

How do we compare and contrast a bunch of rack servers with CNAs connecting into a Nexus, which then uplinks to the DC fabric for Ethernet and fibre, with a blade chassis with internal interconnects, which then has a few uplinks to the DC fabric for Ethernet and fibre? What are the pros and cons of the two? If someone has committed to the blade path, what does this mean for the converged network space? If I am interested in moving into converged networks, how does this affect my decision on server platforms? What do the blade vendors have on the horizon here?

Would be keen to hear your thoughts and insights on these things, as well as the thoughts of others. These are just some of the topics I am pondering in this space. Am I on the right track?

Rodos

Rodos | Consider the use of the helpful or correct buttons to award points. Blog: http://rodos.haywood.org/
RBurns-WIS
Enthusiast

Thanks Rodos, and great questions. Everyone is entitled to their opinion, and I'm just excited to discuss a topic I have a great deal of interest in. There probably is a sales-sounding tone in my comments, but I'll do my best to keep it to a minimum. I wanted to give people a quick background since everyone comes from a different background and level of experience. Let's see if we can fill in some of the blanks for you here, and hopefully get some of the security concerns answered. I do have a direct line to the teams that are developing the equipment and software, so I see it being a huge benefit for you guys to voice your concerns. If I don't have the answer, I'll do my best to pass your questions along and find out.

Let's start with your question about CNAs. There are two main vendors at this point in the early game: QLogic and Emulex. Combining Ethernet and Fibre Channel technologies has raised an interesting opportunity for vendors: will those who produce NICs also start producing CNAs, or will the HBA vendors own the market? So far, these two HBA vendors are ahead of the game. For the Ethernet functionality on the CNA cards they are simply buying the chips from Intel. Interestingly enough, Intel also makes a really good 10G Ethernet card on which you can run a software FCoE driver. We're currently putting the software FCoE drivers through their paces. Of course, anything done in software will cost processor cycles and performance. Currently both QLogic and Emulex present two separate adapters to the OS. For example, in Windows Device Manager you will see a 10G Ethernet adapter and a separate Fibre Channel adapter. The great thing about this is that the OS treats the CNA exactly as if it were two physically separate adapters. The downside of CNAs at this point is cost: a dual-port CNA can run over 2k USD. Hopefully the spirit of competition will bring this below the 1k mark in the next year.
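If it helps to picture the "two adapters from one card" behaviour, here is a tiny Python sketch (purely illustrative; the vmnic/vmhba style device names are hypothetical placeholders, not what any particular driver actually registers):

```python
# Minimal sketch of the "one card, two adapters" behaviour described above:
# each physical port of a dual-port CNA is presented to the OS as two
# independent logical devices, a 10G Ethernet NIC and an FC HBA.
# The device names below (vmnic*/vmhba*) are hypothetical placeholders.

from dataclasses import dataclass
from typing import List

@dataclass
class LogicalDevice:
    kind: str   # "ethernet" or "fibre_channel"
    port: int
    name: str

def enumerate_cna(ports: int = 2) -> List[LogicalDevice]:
    devices = []
    for port in range(ports):
        # The OS treats these exactly as it would two physically
        # separate adapters, even though they share one port and cable.
        devices.append(LogicalDevice("ethernet", port, f"vmnic{port}"))
        devices.append(LogicalDevice("fibre_channel", port, f"vmhba{port}"))
    return devices

if __name__ == "__main__":
    for dev in enumerate_cna():
        print(f"port {dev.port}: {dev.kind:13s} -> {dev.name}")
```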

On to your second comment about rack servers vs. blades. Both options are feasible. In the next year you will see every blade vendor (IBM, HP, Dell) coming out with advanced-function blade switches for their server chassis. Brocade and Cisco already produce 10G and Fibre Channel blades for the Dell M1000e chassis. This will allow those enterprises who have already invested in blades to extend their efficiency. Due to fabric and backplane requirements, I wouldn't be surprised if this required some vendors to introduce a new chassis to support the added functionality. FCoE may introduce some great improvements, but at what cost? You don't want to have to spend the savings from hardware consolidation on managing the technology. FCoE shouldn't have any impact on your server platform other than the usual driver requirements; both Emulex and QLogic have drivers for every major platform: Windows, Linux, VMware and Solaris.

Design will always depend on your network's requirements. For those with a blade environment, you may not need to uplink into a top-of-rack switch like a Nexus 5000, but can instead uplink directly to your core switch, such as a Catalyst 6500 or Nexus 7000. In my personal opinion I'm not yet a fan of the Nexus 7000. It's a very expensive core switch with limited functionality (at this point). Unlike the Catalyst 6500, there are no service modules (firewall, content switching, etc.) for the 7000. It's built to be a very fast core switch. It doesn't even have Fibre Channel functionality yet. This is going to keep the 6500 as most enterprises' core switch of choice for a while longer.

As for security, Texiwill, what I can offer to address your concerns is something Cisco has come out with called TrustSec. TrustSec will allow for encryption of data between switches, or between hosts and switches, preventing MiTM attacks. I'm not an expert on TrustSec, but I do know one of its goals is to address the security concerns being identified as virtualization matures. I do hope this will grow into a standard protocol that can be used in environments with hardware from multiple vendors, similar to IPsec.

Texiwill - any additional insight on this?

Cheers.

Texiwill
Leadership

Hello,

> As for security, Texiwill, what I can offer to address your concerns is something Cisco has come out with called TrustSec. TrustSec will allow for encryption of data between switches, or between hosts and switches, preventing MiTM attacks. I'm not an expert on TrustSec, but I do know one of its goals is to address the security concerns being identified as virtualization matures. I do hope this will grow into a standard protocol that can be used in environments with hardware from multiple vendors, similar to IPsec.

The safety of this depends on the algorithms used for TrustSec, on whether each VLAN and FCoE link uses its own keys and is therefore kept separate, and on whether it uses pre-shared keys or certificates that the administrator can control. If it is not keyed per VLAN/FCoE link, then break the encryption once and all is lost. If it does not use pre-shared keys/certificates that the administrator can set up, then MiTM is still possible. How does TrustSec work in the virtual environment? Is it just from the Cisco vSwitch to the Nexus, or is it from the VM to the Nexus and beyond the Nexus?
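To be concrete about the property I am asking for (this is NOT a description of how TrustSec works, just a Python sketch of per-segment keying): derive an independent key per VLAN/FCoE segment from an administrator-controlled pre-shared key, so that recovering one segment's key tells you nothing about the others.

```python
# Illustrative only: this is NOT how TrustSec derives keys. It just shows
# the property being asked about -- an independent key per VLAN / FCoE
# segment, derived from an administrator-controlled pre-shared key, so
# that recovering one segment's key reveals nothing about the others.

import hashlib
import hmac

def per_segment_key(master_psk: bytes, segment_label: str) -> bytes:
    # One HMAC-SHA256 expansion per segment; distinct labels yield
    # independent keys without storing a separate secret per segment.
    return hmac.new(master_psk, segment_label.encode(), hashlib.sha256).digest()

if __name__ == "__main__":
    master = b"administrator-controlled pre-shared key"
    for segment in ("VLAN 10 (management)", "VLAN 20 (VMotion)", "FCoE VSAN 100"):
        print(f"{segment:22s} -> {per_segment_key(master, segment).hex()[:16]}...")
```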

Ideally I would want coverage from the vNIC through the vSwitch to the core switch to the firewall. I would very much like pre-shared keys/certificates, using IPsec as the basis.

How much compute power is required for this?


Best regards,

Edward L. Haletky

VMware Communities User Moderator

====

Author of the book 'VMware ESX Server in the Enterprise: Planning and Securing Virtualization Servers', Copyright 2008 Pearson Education.

Blue Gears and SearchVMware Pro Blogs: http://www.astroarch.com/wiki/index.php/Blog_Roll

Top Virtualization Security Links: http://www.astroarch.com/wiki/index.php/Top_Virtualization_Security_Links

Rodos
Expert

I have had this thread open in my browser all day, meaning to reply. However, I keep getting distracted and have not had the ten minutes needed to digest it before replying.

But quickly, Scott Lowe just did a post on this subject called "Continuing the FCoE Discussion" @ http://blog.scottlowe.org/2008/12/09/continuing-the-fcoe-discussion/

Go read the post but here is a snippet.

> How is FCoE any better than iSCSI?
>
> 1. FCoE is always mentioned hand-in-hand with 10 Gigabit Ethernet. Can’t iSCSI take advantage of 10 Gigabit Ethernet too?
>
> 2. FCoE is almost always mentioned in the same breath as “low latency” and “lossless operation”. Truth be told, it’s not FCoE that’s providing that functionality, it’s CEE (Converged Enhanced Ethernet). Does that mean that FCoE without CEE would suffer from the same “problems” as iSCSI?
>
> 3. If iSCSI was running on a CEE network, wouldn’t it exhibit predictable latencies and lossless operation like FCoE?

I have posted a comment on Scott's blog to direct people here for some comments.

Rodos

Consider awarding points if this is of use.

Rodos | Consider the use of the helpful or correct buttons to award points. Blog: http://rodos.haywood.org/
Rodos
Expert

Just a thought on this great question from Scott.

One difference is whether you want to integrate into an existing FC fabric: you can use FCoE at the access layer and then integrate that into your existing FC switches. Many SANs don't support iSCSI and FC at the same time, or alternatively don't allow access to the same LUN via both FC and iSCSI at the same time.

Another is the breadth of tools for monitoring and troubleshooting FC.

Just a thought as to some of the differences.

Consider awarding points if this is of use.

Rodos | Consider the use of the helpful or correct buttons to award points. Blog: http://rodos.haywood.org/
RBurns-WIS
Enthusiast

Great comments. I was wondering who would pull the "iSCSI" card. I was a fan of iSCSI far before I was a fan of FC. I'll give you my take on the iSCSI vs. FC battle.

Truth be told, they're both great options. iSCSI is a great lower-cost option, but it does not match the performance of FC. When I first took a look at both protocols my first impression was "wow, they're encapsulating ANOTHER protocol... big surprise". Sometimes I just prefer pumpkin pie over a seven-layer cake. Most of the performance hit comes from the IP vs. FC protocol stack. Another very important factor is install base: show me any enterprise network running critical HPC or databases solely on an iSCSI SAN and I'll be shocked. Due solely to FC's presence in the marketplace, the networking giants will keep pushing this technology to be the higher performer of the two, of course with a hefty price tag attached. It will also keep the more advanced management in the hands of those who have become accustomed to it, since FCoE is simply encapsulated FC, which has been around far longer than iSCSI. It's a fact that it's tough to teach old dogs new tricks.

Since iSCSI utilizes TCP for reliable delivery, we know that it comes at the price of collisions, dropped packets and re-transmits. On the other hand, FC requires a lossless transport as you mentioned, so rather than dropping packets, EVERY packet must arrive, in order, without being dropped. FC accomplishes this through buffer-to-buffer (B2B) credits. At the end of the day, having to re-transmit packets over a lossy network will not be as fast as using pause frames to control congestion. Re-transmitting packets also requires the sending initiator to reprocess and send the information again, whereas B2B credits use the buffer of the switch closest to the storage target and work backwards. This means that congestion would only ever reach the host in a worst-case scenario. Personally, I'd rather let my switches share the congestion load than force my hosts to retransmit from the point furthest away from the storage target.

Another benefit of FCoE is Priority Flow Control (PFC), which is basically QoS for FCoE. This allows you to keep your storage and IPC traffic at high priority and low latency while your LAN or other non-critical traffic uses up any leftover bandwidth. QoS can be used in addition to PFC in an FCoE infrastructure to give an unprecedented amount of traffic control to the network admins.
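To illustrate the B2B credit mechanism described above, here is a toy Python sketch (the frame, credit and drain numbers are made up, and real B2B crediting is negotiated at fabric login rather than modelled here): the transmitter may only send while it holds a credit, and the receiver hands a credit back each time it frees a buffer, so frames are paced rather than dropped and retransmitted.

```python
# Toy model of Fibre Channel buffer-to-buffer (B2B) credit pacing:
# the transmitter may only send a frame while it holds a credit, and the
# receiver returns a credit each time it frees a buffer, so nothing is
# ever dropped or retransmitted -- the sender just waits.
# The frame count, credit count and drain rate are made-up numbers.

from collections import deque

def send_with_b2b_credits(frames: int, credits: int,
                          send_rate: int = 2, drain_rate: int = 1):
    available = credits      # credits granted by the receiver at login
    rx_buffers = deque()     # frames sitting in the receiver's buffers
    sent = waits = ticks = 0
    while sent < frames or rx_buffers:
        # Receiver drains buffers and returns one credit per freed buffer.
        for _ in range(min(drain_rate, len(rx_buffers))):
            rx_buffers.popleft()
            available += 1
        # Transmitter only sends while it holds a credit -- no drops, ever.
        for _ in range(send_rate):
            if sent >= frames:
                break
            if available > 0:
                available -= 1
                rx_buffers.append(sent)
                sent += 1
            else:
                waits += 1   # paced by credits instead of retransmitting
        ticks += 1
    return ticks, waits

if __name__ == "__main__":
    ticks, waits = send_with_b2b_credits(frames=20, credits=4)
    print(f"delivered 20 frames in {ticks} ticks, waited {waits} times, dropped 0")
```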

For me the selling point of FCoE over iSCSI is consolidation. There's a huge cost reduction in cable management, power usage and administration (both physical and logical) compared with multiple NICs and networks. iSCSI will not be able to unify storage, LAN, IPC, management and HPC networks as FCoE will (at least not anytime soon), all without changing your existing storage infrastructure. The important thing to realize is that the development of FCoE will in no way force iSCSI to become obsolete. I believe every technology has its place.

williambishop
Expert

So its biggest saving is in the cabling plant? That's why I went to blades, but they save more than enough that I don't have to worry about further consolidation. FC and blades have so far been my biggest allies, streamlining the process and the installation tenfold.

I'm going to go with waiting until it's mainstream.

W

--"Non Temetis Messor."
RBurns-WIS
Enthusiast

I remember when blades were introduced: many people were very hesitant about "putting all their eggs in one basket", but as people embraced the technology it proved to be one of the most cost-effective moves for companies looking to save on power and DC real estate. Success in business is driven by a delicate balance between "cutting edge" and "tried & tested".

FCoE can be integrated into your network without having to change any existing infrastructure other than uplinking the new switches. This makes for an easier transition for companies that want to move their high-end servers to FCoE while running all their legacy FC devices as normal. As for FCoE being mainstream in six years, I predict it will be mainstream within the next three.

williambishop
Expert

Do you have specs showing that FCoE is faster than FC? Why would I move my high-end servers to FCoE and leave my standard servers on FC, when I would normally do the opposite? (I'm assuming that FCoE is not in fact faster, but equal or slower.)

As to the bladecenters... I have worked on first-generation bladecenters... those people were right. It was three generations before I found them reliable enough (and not changing specs every three months like they did early on) to trust. The first efforts weren't perfect... and they were expensive. So maybe it goes mainstream in three years; I still predict that it will be six years before it's trusted enough for the five-nines crowd. Meanwhile, other than cabling, it doesn't save me much at all... It's cutting edge, it's first generation, it's expensive, and the abundant data that exists for tried-and-true technologies doesn't exist for it yet. I'm part of the paranoid crowd, and I don't change direction quickly. I expect there are a lot of us out there.

I'm not minimizing the technology; like I said, I like it, but it will be 5-6 years before it amounts to more than a lab for me.

--"Non Temetis Messor."
RBurns-WIS
Enthusiast

williambishop
Expert

So, out of a rack of servers, I save enough power to run one more 2U server, and I save half the cabling. And if I am saturating my 4G SAN links (I'm not, and I don't know anyone who is), then I can look forward to 10G and one day 100G FCoE... Temporarily, just jot me down in the "Foe" category... It's not enough, and it's still too early to adopt a cutting-edge technology.

W

--"Non Temetis Messor."
mreferre
Champion

I personally think it boils down to how much further we can push the technology in terms of security and segregation.

I have a customer running 3850 M2 ESX hosts with as many as 22 NICs (5 x quad-port + 2 on-board) and a couple of FC HBAs. Obviously this is not done for performance reasons but rather for security purposes. If somehow this new technology is going to be secure enough to collapse all those network segments into a single cable (i.e. 2 for redundancy) and explode the complexity of those many Ethernet segments + SAN somewhere else on the customer's backbone... they would L-O-V-E this. Obviously this needs to be more secure than VLANs, as the reason they have 22 NICs is that they don't trust VLANs (I know, Ed, I know... VLANs are not meant to provide security boundaries... ;-) ).
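Just to put rough numbers on what collapsing that would mean per host and per rack (a back-of-the-envelope Python sketch; the hosts-per-rack figure is purely my assumption):

```python
# Back-of-the-envelope cable/port arithmetic for the host described above:
# 22 x 1GbE NICs plus 2 x FC HBAs today, versus 2 redundant converged links.
# The hosts-per-rack figure is purely an assumption for illustration.

NICS_PER_HOST = 22
HBAS_PER_HOST = 2
CONVERGED_LINKS_PER_HOST = 2
HOSTS_PER_RACK = 8              # assumption

cables_before = NICS_PER_HOST + HBAS_PER_HOST
cables_after = CONVERGED_LINKS_PER_HOST

print(f"cables (and switch ports) per host: {cables_before} -> {cables_after}")
print(f"per rack of {HOSTS_PER_RACK} hosts: {cables_before * HOSTS_PER_RACK} -> {cables_after * HOSTS_PER_RACK}")
```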

In addition to that, cable consolidation is one way to look at this, and I agree that when it comes to blades one can pretty much get the same result (although 22 NICs would be challenging :-) ). However, I tend to see this also as "cable flexibility", that is: how long does it take to add or remove an Ethernet zone in a physical setup (be it blade or rack form factor)? You have to go through each of your hosts and switches and add/remove ports. With this you can do everything "virtually". It's similar to an ESX host running 10 virtual servers vs. running 10 physical low-end commodity boxes: there is value in consolidating those 10 servers... but there is also value in creating the 11th in three mouse clicks and a few seconds.

I think that in order for this to be a compelling technology, CEE/FCoE needs to be able to address the security concerns associated with different network segments. If it's not able to achieve this, then my customer would probably have to run with 20 x 1Gbit Ethernet cards + 2 x 10Gbit CEE cards (for FC and for one of the many Ethernet segments they can't afford to mix with the others). Not compelling at all... actually ridiculous.

My 2 cents.

Massimo.

Massimo Re Ferre' VMware vCloud Architect twitter.com/mreferre www.it20.info