970170
Contributor

Get Rid of Physical Switch between 2 ESXi hosts - Virtual Layer2 between boxes


Hi,

This may be a stupid question, but I can't quite figure out the answer, so I might as well ask.  I have two physical ESXi hosts in my lab and for various reasons would like to eliminate the physical switch between them, but still retain full vSphere functionality (vMotion, HA, DRS, etc.).  Is there a virtual switch/repeater product that I can buy/use (deployed on both ESXi hosts in an HA configuration) that can give me this functionality?  I would then use pNIC ports on each ESXi host to talk to each other, and additional pNICs as switchports to the outside world.  E.g. could I do this with some sort of multi-Vyatta configuration?  NSX?  I need multicast support as well.

Thanks for your help!


Accepted Solutions
JarryG
Expert

If you want to get rid of the physical switch, simply don't use it. Use an Ethernet cable to connect the pNICs of the two ESXi hosts directly. If your NICs support auto-MDI(X), you don't need anything special. Some older NICs may still require a crossover Ethernet cable, but even that is unnecessary with newer hardware...
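For vSphere to actually use that back-to-back link, each host still needs a vSwitch and a VMkernel port on the directly cabled pNIC. A minimal sketch of one host's side, assuming vmnic2 is the direct-connect port; the vSwitch/portgroup names and the 192.168.50.0/30 addressing are hypothetical (the peer host would get .2):

```shell
# Sketch for one ESXi host; run the mirror of this (with 192.168.50.2) on the peer.
# vmnic2 is assumed to be the pNIC cabled directly to the other host.

# Dedicated vSwitch backed only by the direct-connect uplink
esxcli network vswitch standard add --vswitch-name=vSwitchDirect
esxcli network vswitch standard uplink add --vswitch-name=vSwitchDirect --uplink-name=vmnic2

# Portgroup and VMkernel interface for vMotion traffic over the link
esxcli network vswitch standard portgroup add --vswitch-name=vSwitchDirect --portgroup-name=vMotion-Direct
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=vMotion-Direct
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.50.1 --netmask=255.255.255.252 --type=static

# Tag the interface for vMotion so migrations flow over the direct cable
esxcli network ip interface tag add --interface-name=vmk1 --tagname=VMotion
```

With only two endpoints on the link, a /30 netmask keeps the subnet exactly as big as it needs to be.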

_____________________________________________ If you found my answer useful please do *not* mark it as "correct" or "helpful". It is hard to pretend being noob with all those points! 😉

9 Replies
970170
Contributor

OK, but what if I need 3 hosts?  How would I wire/configure this, considering that if one host goes down the other two still need to talk to each other?

JarryG
Expert

Then you need two NICs in every host and have to connect all three hosts in a triangle topology (each host directly to both of the other two). Proper network configuration is necessary (IP/netmask, routing, etc.).
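To keep the configuration sane, each point-to-point cable in the triangle should sit in its own small subnet. A quick sketch of one possible addressing plan; the host names and the 10.0.x.0/30 ranges are made up, use whatever is free in your lab:

```python
import ipaddress

# Triangle topology: three point-to-point links, one /30 subnet per link.
# Host names and the 10.0.x.0/30 ranges are hypothetical examples.
links = {
    ("esxi1", "esxi2"): ipaddress.ip_network("10.0.12.0/30"),
    ("esxi2", "esxi3"): ipaddress.ip_network("10.0.23.0/30"),
    ("esxi1", "esxi3"): ipaddress.ip_network("10.0.13.0/30"),
}

for (a, b), net in links.items():
    ip_a, ip_b = list(net.hosts())  # a /30 has exactly two usable host addresses
    print(f"{a} <-> {b}: {a}={ip_a}/{net.prefixlen}, {b}={ip_b}/{net.prefixlen}")
```

Each /30 gives exactly the two host addresses a direct link needs, so a typo can't put two links in the same broadcast domain.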

970170
Contributor

...and then to connect the cluster to the outside world, would I need another port on EACH ESXi host?  Have you actually tried something like this before, or is it just theory?

Thanks

schepp
Leadership

Your thread is about 2 ESXi hosts. Now we are at 3, and suddenly you also want to connect to the "outside world".

That's what switches are made for! 😉

970170
Contributor

Yes, I agree it's scope creep for sure!  Sorry about that...

Various reasons for my questions:

  1. 10 Gig switches are expensive, and I am funding this personally
  2. 10 Gig switches are loud and rackmount, which causes me logistical issues at home (condo)
  3. 10 Gig switches are hot, which causes me more issues at home
  4. Any additional hardware that takes up space directly increases the probability of a divorce in my near future
  5. I need 10G for an all-flash VSAN
  6. I need 3 hosts for VSAN
  7. I would also like 10G to the rest of the network


I should probably have outlined this to begin with...

Any help or insight would be appreciated.  Thanks!

JarryG
Expert

If you want to connect 3 ESXi hosts, plus the outside world (maybe some NAS or whatever), then I definitely think it is worth getting a switch. That's what a switch is made for. I don't think the price argument is that serious; you can get an entry-level 8-port 10 Gbit switch for less than 1k €/$. FYI, I'm using a D-Link DXS-1210-10TS (new for ~700 €, 8x 10GBase-T, 2x SFP+) and I think it is money well spent.

A solution without a switch is really only acceptable for a direct connection between 2 hosts (as in your original question). With every new connection it gets more and more complicated, and in the end you might pay more for all those NICs than for a 10 Gbit switch...
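That scaling argument can be made concrete with a bit of arithmetic: a switchless full mesh of n hosts needs n-1 dedicated ports on every host and n(n-1)/2 cables, so the port count grows quickly:

```python
# Full-mesh direct cabling: each of the n hosts needs (n - 1) dedicated ports,
# and the mesh needs n * (n - 1) / 2 point-to-point links (cables).
def mesh_requirements(n_hosts: int) -> tuple[int, int]:
    ports_per_host = n_hosts - 1
    links = n_hosts * (n_hosts - 1) // 2
    return ports_per_host, links

for n in (2, 3, 4, 5):
    ports, links = mesh_requirements(n)
    print(f"{n} hosts: {ports} ports per host, {links} cables")
```

At 2 hosts it is a single cable; by 5 hosts you would need 4 spare 10G ports per host and 10 cables, which is where a switch wins.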

970170
Contributor

How loud is your D-Link switch?  Does it generate a lot of heat?  Do you have it in a room in your house?  General thoughts about the switch?

That one is slightly cheaper than the Netgear one I was looking at...

Thanks

JarryG
Expert

It is certainly not as loud, and generates much less heat, than my ESXi server and NAS. But I can't say a lot about it, because I have it in a 19" rack together with all my other equipment, in a separate room. It draws about 30-40 W, but at maximum switching capacity (~200 Gbps) it would draw more power and generate more noise, that's for sure.

It is not the best switch you can find, but I'd say the price/performance ratio is well balanced. It is comparable with Netgear models (e.g. the Netgear ProSAFE Plus XS708E). Of course, neither of them is in the same class as Cisco's 500 series, but that's a different league, and to get something like an SG500XG-8F8T you'd have to pay much more.
