VMware Cloud Community
mreferre
Champion

InfiniBand vs. 10Gbit Ethernet

If you are using (or, better, plan to use) more than 10 ESX hosts with a certain number of NICs, I am wondering if you could read this article and share your thoughts:

Notice that this doesn't imply that it can be done as described: certification, support, and technical limitations need to be taken into account.

Thanks.

Massimo.

Massimo Re Ferre' VMware vCloud Architect twitter.com/mreferre www.it20.info
24 Replies
epping
Expert

Good paper on an interesting topic.

We discussed this at the European customer advisory council a few weeks ago. The main reason people had a load of NICs was not bandwidth but keeping the networking team happy... the networking team don't understand VMware, so they want it laid out the way they would for physical servers. The problem is made worse by the "virtual switch" in the VM world; as one member put it, "if it doesn't come in a blue box and run Java slowly, the networking team will not look at it."

So the question is where VM networking should live: with the VM guys or the networking guys... and if it should be with the networking guys, what can be done to get them more on board with virtualisation... what if you could run that blue box virtually!

Regarding 10G or InfiniBand, I currently see no business justification to move to either. FC will be around for a while in big DCs (too expensive and established to replace); maybe Continuous Availability will make people change their minds.

mreferre
Champion

Andy,

Thanks for looking into this. I agree 100%, especially with "the main reason people had a load of NICs was not bandwidth but keeping the networking team happy". BTW, NPIV exists for the same reason: to keep the SAN people happy (which is not a good reason to implement a technology that is a step back on the way to the virtual datacenter... but that's another story).

For the sake of the discussion, this scenario is not meant to replace Fibre Channel or Ethernet. Quite the opposite.

This is meant to allow the networking people to keep using their dozens of physically disconnected networks, but instead of plugging dozens of different cables into dozens of physical hosts, they plug those cables once into this "bridge". They don't (technically) even need to create VLANs or anything like that if they don't want to. What happens on the other side of this switch is something they shouldn't have to care about (so to speak, obviously).
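To make the cabling argument concrete, here is a rough back-of-the-envelope sketch (Python, with purely hypothetical host and network counts): it compares the cable count of a traditional design, where every host is patched into every physically separate network, against a bridged design where each host carries a redundant pair of IB links and the external networks are patched into the bridge exactly once.

```python
# Back-of-the-envelope cable count: traditional vs. IB-bridged design.
# All numbers below are hypothetical and only illustrate the scaling argument.

def traditional_cables(hosts: int, networks_per_host: int) -> int:
    """Every host gets one cable per physically separate network."""
    return hosts * networks_per_host

def bridged_cables(hosts: int, external_networks: int, ib_links_per_host: int = 2) -> int:
    """Each host gets a redundant pair of IB links; the external
    networks are patched into the bridge exactly once."""
    return hosts * ib_links_per_host + external_networks

if __name__ == "__main__":
    hosts, networks = 10, 12  # e.g. 10 ESX hosts, 12 physically separate networks
    print("traditional:", traditional_cables(hosts, networks))  # 120 cables
    print("bridged:    ", bridged_cables(hosts, networks))      # 32 cables
```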

The idea here is to keep both the networking and the server teams happy. I think there is lots of room for improvement here, but I understand that "legacies" are hard to overcome...

Massimo.

Massimo Re Ferre' VMware vCloud Architect twitter.com/mreferre www.it20.info
oreeh
Immortal

I can already hear the security guys scream: "We need physical separation; if we were going to use InfiniBand, we could just as well use VLANs."

This is one of the most common reasons to physically separate different networks.

Now, how do you think one could convince these guys to "trust" a black box (the IB switch) if they don't trust VLANs (and these guys don't trust ESX vSwitch separation either)?

mreferre
Champion

Mh... I see what you mean and I don't disagree in principle. However, my point is that since "you" are trusting a black box called ESX into which you are plugging 22 NICs (and if the box were called Windows you probably wouldn't have done so), why shouldn't you trust a black box that is the IB bridge? I am not sure there are data or studies backing the claim that ESX is more secure than an IB switch, or the other way around.

BTW, we did have a number of customers whose networking guys voted against having the ESX box be the concentrator for so many different networks, but in the end management went through a risk analysis and decided it was well worth considering the ESX server "secure enough" to do that. They might want to do the same and "convince" the networking people that IB is at least as secure. After all, let's face it... if it were up to the networking people we wouldn't be using VMware either, as they would like to keep EVERYTHING separate and on small physical servers...

Thanks.

Massimo.

Massimo Re Ferre' VMware vCloud Architect twitter.com/mreferre www.it20.info
oreeh
Immortal

I only wanted to bring up an often heard argument.

Yes, these guys use the same argument against ESX - fortunately they don't get to decide.

GBromage
Expert

"I already hear the security guys scream"

Really? It's the accountants that I hear screaming, regardless of which solution we pick.......

I hope this information helps you. If it does, please consider awarding points with the 'Helpful' or 'Correct' buttons. If it doesn't help you, please ask for clarification!
oreeh
Immortal

Who cares about accountants when we are on a journey...

Signature from Steve Beaver:

*Virtualization is a journey, not a project.*

Texiwill
Leadership

Hello,

I would like to see both supported, but I am not sold on the security of VLANs. Some Intrusion Detection Systems work by ignoring the tags within tagged packets. If you ignore the 802.1Q VLAN tag, you can see all packets across every VLAN on a vSwitch; I am not sure that is true on a real switch... but it could be possible with the less expensive tagging pSwitches. Given this, using VLANs for high-security traffic is not a great idea. I always suggest separate physical networks in many cases (vMotion, storage, admin), and maybe VLANs for internal VMs, or something like that.
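As an illustration of the point about ignoring tags (a minimal sketch, not from the book: it assumes Linux, root privileges, the third-party Scapy library, and a monitoring NIC named eth0 that sees the trunked or mirrored traffic), a sniffer that simply does not filter on the 802.1Q header will happily report frames from every VLAN on the wire:

```python
# Minimal sketch: a sniffer that ignores VLAN boundaries.
# Assumes Linux, root privileges, the Scapy library, and a hypothetical
# monitoring interface "eth0" attached to a trunk / mirror port.
from scapy.all import sniff, Dot1Q

def show(pkt):
    # Report the VLAN tag if present, but do not filter on it --
    # frames from *every* VLAN carried on the trunk are captured.
    vlan = pkt[Dot1Q].vlan if pkt.haslayer(Dot1Q) else "untagged"
    print(f"VLAN {vlan}: {pkt.summary()}")

if __name__ == "__main__":
    sniff(iface="eth0", prn=show, store=False, count=50)
```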

I like InfiniBand/10G not for splitting up the bandwidth but for improving storage network and vMotion performance, as the security issues with badly configured virtual networks are frightening.

Best regards,

Edward L. Haletky, author of the forthcoming 'VMWare ESX Server in the Enterprise: Planning and Securing Virtualization Servers', publishing January 2008, (c) 2008 Pearson Education. Available on Rough Cuts at http://safari.informit.com/9780132302074

--
Edward L. Haletky
vExpert XIV: 2009-2023,
VMTN Community Moderator
vSphere Upgrade Saga: https://www.astroarch.com/blogs
GitHub Repo: https://github.com/Texiwill
mreferre
Champion

Thanks for all your comments.

Keep them coming ....

Massimo.

Massimo Re Ferre' VMware vCloud Architect twitter.com/mreferre www.it20.info
myxiplx
Contributor

Well, from our point of view this is great news. We're a smaller outfit looking to roll out VMware for the first time, and we're debating whether the extra cost of InfiniBand is worth it. Nice to have confirmation that InfiniBand is as good as we thought :)

mreferre
Champion

For the record, I didn't really want to say it's "good" or suggest using it... I literally just wanted to throw it on the table for more feedback/thoughts.

You know that in this market it's not always "the best" that wins.....

Massimo.

Massimo Re Ferre' VMware vCloud Architect twitter.com/mreferre www.it20.info
S_Crockett
Contributor

This sounds very similar to the Xsigo I/O virtualization business case.

mreferre
Champion

It is (although I didn't mean to push Xsigo specifically as a vendor... in fact, I published my article before they came out with their "business case" - at least I think so). :)

Massimo.

Massimo Re Ferre' VMware vCloud Architect twitter.com/mreferre www.it20.info
Anders_Gregerse
Hot Shot

I've also read the article and I can see the benefits (speed, simplicity, etc.), but I don't like being the first to do it (we are a small shop); it adds yet another technology to keep redundant, and as far as I know there isn't anyone using it yet. We don't have security guys or accountants screaming, though. Where would the IB team sit? Networking? Storage? A new technology's biggest hurdle is overcoming the "fear" of change (letting go of old ways of doing things, screaming "security" at every change even where the business would gain a big advantage, etc.).

mreferre
Champion

Anders, completely agreed.....

Massimo.

Massimo Re Ferre' VMware vCloud Architect twitter.com/mreferre www.it20.info
Anders_Gregerse
Hot Shot

(Setting aside the new technology, the need for know-how, etc.) Have you done any cost analysis on the subject? Where is the break-even? I've been on one of the VMware-supported vendors' sites and it looks cheaper than I thought, but it also lacks support for iSCSI HBA virtualization to avoid software initiators. However, we are a small shop with 6 hosts, each with about 4 NICs and 2 HBA ports, and even there InfiniBand already seems cheaper. It could also increase bandwidth for major database, mail, and backup installations. The question is whether InfiniBand will beat 10Gbit Ethernet on the hosts.

mreferre
Champion

Anders,

Good points. My very personal opinion is that with 6 hosts / 4 NICs / 2 HBAs you should NOT look into this. Even assuming you get to the break-even (debatable: IB HBAs, IB switches, IB bridges, etc. for 6 hosts are going to be a bit expensive), the effort you need to put in to change your status quo is going to be big. I see IB as better suited (potentially) to large organizations with lots of servers and a large number of Ethernet connections. That is where the benefits of IB could be realized at scale.
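A minimal break-even sketch (Python; every price below is a made-up placeholder rather than a quote, so treat it purely as a way to structure the comparison) shows why a fixed-cost bridge tends to pay off only beyond a certain host count:

```python
# Rough break-even sketch: I/O cost, legacy vs. IB-consolidated design.
# Every price is a hypothetical placeholder -- plug in real quotes to use it.

def legacy_cost(hosts, nics=4, hbas=2,
                nic_price=100, gbe_port=150, hba_price=800, fc_port=1000):
    """GbE NICs + switch ports plus FC HBAs + fabric ports, per host."""
    per_host = nics * (nic_price + gbe_port) + hbas * (hba_price + fc_port)
    return hosts * per_host

def ib_cost(hosts, hca_price=600, ib_port=300, bridge=20000):
    """Two IB links per host plus a shared Ethernet/FC bridge (fixed cost)."""
    return hosts * 2 * (hca_price + ib_port) + bridge

if __name__ == "__main__":
    for hosts in (6, 10, 20, 40):
        print(hosts, "hosts:", "legacy", legacy_cost(hosts), "vs IB", ib_cost(hosts))
```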

> The question is whether InfiniBand will beat 10Gbit Ethernet on the hosts

This is another good point. We will definitely (mh... likely) see convergence on a single medium for storage and networking in the long run. IB <might> be that medium, even though there are people who think 10Gbit Ethernet could be it as well (there is a lot of discussion going on about CEE, Converged Enhanced Ethernet, and FC over CEE, etc.). We'll see, but obviously 10Gbit Ethernet has an advantage for obvious reasons... the point is that with IB we have a solution today... with Ethernet these are still plans.

Massimo.

Massimo Re Ferre' VMware vCloud Architect twitter.com/mreferre www.it20.info
pattho
Contributor

I have seen a lot of theorizing on this subject but no proofs of concept yet; it is very attractive.

Is anyone doing this currently, or has anyone evaluated it in a proof of concept?

It would be a huge boon in blade chassis environments: a 16-port InfiniBand virtual config for both SAN and network to service all the blades...

eXtreme
Contributor

Proofs of concept are being evaluated. The performance numbers, in combination with the number of cables, devices, ports, and the energy consumed, are very attractive. I also believe that by reducing the number of devices and device drivers, reliability and availability will significantly improve.

Since VMware decouples and "truly" virtualizes the I/O devices, no rewrite of I/O protocols is required from the VM'ed OS's perspective. One of the side benefits of consolidation, and of the ability to efficiently utilize hardware when virtualized, is that we can adopt faster technologies sooner without a high cost of change on the OS and application side.

InfiniBand's current roadmap provides for 1Tbps in 3 years. Ethernet and DCE/CEE (which is more like InfiniBand) only have 40 and 100 Gbps on the roadmap. In the end, we as customers are best served by having at least two I/O technologies compete.

PS: InfiniBand's mature low-latency RDMA stacks will enable us to take advantage of shared DRAM much more efficiently than anything else available today.
