So I am learning more and more about networking within ESXi/vCenter.
Here is what I have:
(2) ESXi hosts
(7) NICs physically connected to a 24-port layer 2 managed switch.
Here is what I am attempting to do:
Create three networks:
Dev - 172.16.3.1/24
Test - 172.16.4.1/24
Prod - 172.16.15.1/24
I'd like these set up across the two ESXi hosts via a VDS. Each network would be 'isolated' to a degree; I just do not want one big flat network on my home network.
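For reference, the three networks above can be sanity-checked with Python's stdlib `ipaddress` module. This is just an illustrative sketch: it takes each gateway-style address from the plan, derives the enclosing /24, and confirms the subnets don't overlap (overlapping subnets would make routing between them ambiguous).

```python
import ipaddress

# The three VM networks, written as gateway-address/prefix, as in the plan above.
gateways = {
    "Dev": "172.16.3.1/24",
    "Test": "172.16.4.1/24",
    "Prod": "172.16.15.1/24",
}

networks = {}
for name, cidr in gateways.items():
    iface = ipaddress.ip_interface(cidr)  # gateway IP plus prefix length
    networks[name] = iface.network        # the enclosing /24 network
    print(f"{name}: network {iface.network}, gateway {iface.ip}")

# Sanity check: the three subnets must not overlap.
nets = list(networks.values())
for i, a in enumerate(nets):
    for b in nets[i + 1:]:
        assert not a.overlaps(b), f"{a} overlaps {b}"
print("No overlaps; each /24 has", nets[0].num_addresses - 2, "usable host addresses")
```

So "Dev - 172.16.3.1/24" means the network itself is 172.16.3.0/24, with .1 reserved as that network's gateway and 254 usable host addresses.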
I have been reading the documentation about setting up a VDS and VDGs. It seems straightforward for the most part. I have done some testing, but I am stuck on a few things (again, I am pretty new to this).
How many distributed switches should I create? What are the best practices? At some point, I'd like to expand this to vMotion, storage, and maybe more.
What about distributed port groups? My initial thought was to create one distributed port group for each network (Dev, Test, Prod).
How/where do I define the subnets for the networks?
I am thinking that since my switch is layer 2 and not layer 3, I need something to act as a router for me, to route traffic between the networks?
I hope that makes sense.
I know what I want to do, just trying to figure out how to do it.
You can create one vDS, and you will need one vDG for management traffic and one for vMotion, since I suppose you will want to migrate VMs between the two ESXi hosts.
When you create a vDG for virtual machine traffic you cannot set an IP; you can only assign a VLAN ID.
Yes, you will need a router; you can install and configure a Linux VM to act as one.
Ok. This is helpful.
Yeah, and I was thinking of using something like pfSense to route traffic between my VM networks.
Would something like this work:
(1) VDS for management
(1) VDS for vMotion
(1) VDS for networks (Dev, Test, etc.)
(1) DPG for management
(1) DPG for vMotion
(3) DPGs for networks (Dev, Test, etc.)
(1) VM router to route traffic between all the networks
For the router, I would attach the DPGs to the router as networks (so it would have multiple interfaces)?
Lastly, where the uplinks physically plug into my switch, I am thinking I need to put VLANs on those ports?
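To make the layout concrete, here is a small Python sketch of that plan. The VLAN IDs (10, 20, 30, 40, 50) are purely hypothetical examples, not anything the thread has settled on; pick whatever unused IDs fit your switch. The sketch derives the router's per-network interface addresses (the .1 of each subnet) and the VLAN list the physical uplink ports would need to trunk.

```python
import ipaddress

# Hypothetical VLAN IDs for each distributed port group; these numbers
# are only an example, not a recommendation.
dpg_plan = {
    "Management": {"vlan": 10, "subnet": None},  # host management, not routed here
    "vMotion":    {"vlan": 20, "subnet": None},  # host-to-host only, non-routed
    "Dev":        {"vlan": 30, "subnet": "172.16.3.0/24"},
    "Test":       {"vlan": 40, "subnet": "172.16.4.0/24"},
    "Prod":       {"vlan": 50, "subnet": "172.16.15.0/24"},
}

# The router VM gets one vNIC per VM-traffic DPG, using the first host
# address (.1) of each subnet as the gateway for that network.
router_interfaces = {
    name: str(next(ipaddress.ip_network(cfg["subnet"]).hosts()))
    for name, cfg in dpg_plan.items() if cfg["subnet"]
}
print(router_interfaces)  # {'Dev': '172.16.3.1', 'Test': '172.16.4.1', 'Prod': '172.16.15.1'}

# The physical switch ports feeding the vDS uplinks need to carry tagged
# frames for every DPG's VLAN so traffic can cross between the two hosts.
trunk_vlans = sorted(cfg["vlan"] for cfg in dpg_plan.values())
print("Trunk VLANs on uplink ports:", trunk_vlans)  # [10, 20, 30, 40, 50]
```

That matches the question above: yes, the router ends up with one interface per DPG, and yes, the switch ports where the uplinks land are configured as trunks carrying all of those VLANs.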
Thanks for the help.
Just to add some more, based on some testing I did.
I set up a VDS and one DPG.
I put in one physical NIC from each ESXi host and one VM from each host, just to test connectivity.
It worked. I could communicate between the boxes back and forth with simple ping requests. The idea here was to get two VMs on different ESXi hosts talking to each other.
Now, I turned on DHCP to see what IP address a VM was given. It turns out it is getting an IP from the DHCP server on my router (Ubiquiti EdgeRouter X).
Walking through this, my thinking is that I need to put the switch ports where the two NICs for this VDS plug in into a VLAN, then use something like pfSense to act as a DHCP server for the VMs on each internal network?
My thought process is that I need to stop the VMs from getting IP addresses from the router.
Does that sound right? Am I on the right path?
You can configure one vDS, with one vDG for management, one for vMotion, and one for each VM network.
You need your pfSense to have all the subnets and VLANs configured, and to act as a DHCP server as well.
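As an illustration of what "act as a DHCP server" means per network, the sketch below lays out one possible DHCP scope for each subnet. The .100-.199 pool range is an arbitrary choice (it leaves the low addresses free for the gateway and any static assignments); adjust to taste when configuring pfSense.

```python
import ipaddress

# Arbitrary example pool boundaries within each /24; not a pfSense default.
POOL_START, POOL_END = 100, 199

for name, cidr in [("Dev", "172.16.3.0/24"),
                   ("Test", "172.16.4.0/24"),
                   ("Prod", "172.16.15.0/24")]:
    net = ipaddress.ip_network(cidr)
    gateway = net.network_address + 1  # pfSense interface address, the .1
    pool_lo = net.network_address + POOL_START
    pool_hi = net.network_address + POOL_END
    print(f"{name}: gateway {gateway}, DHCP pool {pool_lo}-{pool_hi}")
```

With a scope like this enabled on each pfSense interface, the VMs get leases from pfSense instead of the EdgeRouter, which also addresses the DHCP problem described above.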
So, would I need one VDS for all my networks?
Then create a DPG for each of the above?
Or one VDS with all the DPGs attached to that one VDS?