Not sure if I should post this here or in the general vSphere forum, but my desired usage would be for VSAN...
In normal Linux, an admin can build a br0 logical container (bridge interface) and then associate two pNICs (say eth0 and eth1) with it so that they reside in the same L2 broadcast domain and act as switch ports. You then assign the host's local IP to the br0 construct, and the physical eth0 and eth1 ports on the box behave like switch ports, actually passing traffic across each other inside the br0.
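For reference, here is roughly what that looks like with iproute2 (interface names and addressing are just placeholder examples):

# create the bridge and enslave both physical NICs
ip link add name br0 type bridge
ip link set dev eth0 master br0
ip link set dev eth1 master br0
ip link set dev eth0 up
ip link set dev eth1 up
ip link set dev br0 up
# the host's IP lives on br0, not on the member ports
ip addr add 192.168.100.1/24 dev br0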
Can this same thing be done on ESXi?
For example, if you have a dual-port physical NIC, is it possible to build a logical bridge construct in ESXi and assign both pNICs of that card to it so they can pass broadcast traffic across? Or could one just assign 2 uplinks to a vDS, make them promiscuous, and pass traffic between them (a rough sketch of what I mean is below)? Or could a bridge somehow be built below the hypervisor layer so only one vmnic is presented to the hypervisor? Is there a standard way to accomplish this?
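To illustrate the second idea on a standard vSwitch (vSwitch and vmnic names are hypothetical, and whether the vSwitch would ever forward frames between its own uplinks is exactly the open question):

# create a vSwitch with both ports of the dual-port card as uplinks
esxcli network vswitch standard add --vswitch-name=vSwitch-bridge
esxcli network vswitch standard uplink add --vswitch-name=vSwitch-bridge --uplink-name=vmnic2
esxcli network vswitch standard uplink add --vswitch-name=vSwitch-bridge --uplink-name=vmnic3
# relax the security policy (promiscuous mode)
esxcli network vswitch standard policy security set --vswitch-name=vSwitch-bridge --allow-promiscuous=true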
I want to configure a 3-node ESXi cluster for VSAN testing in a switchless daisy-chain configuration like this: ESXi-01 <--> ESXi-02 <--> ESXi-03. Requirements: no physical switch can exist, and the network will be isolated (no gateways) and used only for VSAN traffic in my example.
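The per-host VSAN vmkernel setup itself would be the easy part; something like the following on each host (portgroup name and addressing are placeholders, and the exact esxcli syntax varies a bit by version):

# portgroup + vmkernel interface on the isolated vSwitch
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch-bridge --portgroup-name=vsan-pg
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=vsan-pg
esxcli network ip interface ipv4 set --interface-name=vmk1 --type=static --ipv4=192.168.100.1 --netmask=255.255.255.0
# tag the vmkernel interface for VSAN traffic
esxcli vsan network ip add --interface-name=vmk1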
In my particular example above, only the ESXi-02 host would actually need the dual-port NIC installed to start with, but ideally this setup could be expanded to include additional nodes as well. Think of it as an isolated backplane for VSAN. Once the POC was established, a second set of PCIe dual-port network adapters could be installed to eliminate the single point of failure.
From what I understand, though, VSAN requires all nodes to be in the same broadcast domain (?), so I do not believe you could simply make this work with L3 networking; each host's VSAN vmkernel interface seems to need to sit on the same L2 broadcast domain. Then again, perhaps it is possible to accomplish this with L3 networking after all? Perhaps it could even be possible to daisy-chain all hosts into a complete ring using vDS and LBT?
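If L3 did turn out to be workable, the plumbing would just be static routes pointing at ESXi-02 as the next hop; a hypothetical sketch, assuming per-link subnets 192.168.12.0/24 (01<->02) and 192.168.23.0/24 (02<->03), with ESXi-02 holding .2 on each link:

# on ESXi-01: reach the 02<->03 link subnet via ESXi-02
esxcli network ip route ipv4 add --network=192.168.23.0/24 --gateway=192.168.12.2
# on ESXi-03: reach the 01<->02 link subnet via ESXi-02
esxcli network ip route ipv4 add --network=192.168.12.0/24 --gateway=192.168.23.2

The catch, as far as I know, is that ESXi-02 would also have to forward transit traffic between its two vmkernel interfaces, and ESXi is not a router.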
The last thought that came to mind is whether there is such a thing as a dual-port PCIe network adapter with hardware-based bridging built right into the controller. This would be the most desirable option. Part of this post is to get ideas on whether this bridging can be done easily at the software level, but also to provide food for thought on the development side of things in case it is not currently possible. Making this possible at the hardware level, directly on the card, would most likely only require a firmware flash on the card itself to enable the functionality.
The motivating factor for all of this is that 10 GbE switches are a very expensive investment for an edge location (especially if you need 2 for redundancy), yet 10 GbE dual-port PCIe cards are very cheap. Some way to eliminate the expensive switching would save a lot of money at remote sites where it cannot be justified.
Any thoughts on this?