I may have misunderstood or overlooked something, but I have a bunch of VMs on my Mac with Fusion Pro 11.5.7, connected to various virtual network segments through vmnet adapters.
Now I'm exploring running containerized applications with vctl, and I want an application to be reachable on a particular subnet (virtual network segment). But of all the parameters in config.yaml, the vmnet doesn't seem to be one you can set when running "vctl system config".
And if I edit the config.yaml file and change the NIC name to, for example, vmnet4, and then run vctl system start, it automatically creates vmnet12 (the lowest unused vmnet number it finds) and rewrites the config file accordingly.
Why am I not able to control this seemingly essential parameter? Or am I misunderstanding it (too much VM-centric thinking, maybe?) and editing the wrong things to achieve my goal? (Not sure if my question is clear, but hopefully someone understands what I'm asking and what I'm trying to achieve.)
Probably not that many active VMTN users who use vctl.
You might want to raise it on GitHub:
Being able to specify a network for the container host to connect to was out of scope and isn't supported unfortunately.
I want this too... (i.e. set a Container to run on Bridged so it can be accessed directly)... but I don't think we'll be getting that in the short term.
Thanks @Mikero .
I now understand the networking with vctl a bit better. Basically, the container host is mapped to all interfaces on the Mac and port-forwards to the container's IP and port regardless of which interface you address. The traffic is also source-NATed, with the external IP as the source.
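To illustrate what I mean (flag names assumed from vctl's Docker-style CLI; verify with "vctl run --help" on your version), a published port answers on every one of the Mac's interfaces, not just one:

```shell
# Assumed syntax -- verify with `vctl run --help`:
# publish container port 80 on host port 8080
vctl run -d --name web -p 8080:80 nginx

# The mapping then answers on any host interface:
curl http://localhost:8080/
curl http://192.0.2.10:8080/   # placeholder for the Mac's LAN IP
```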
I managed to more or less work around this by looking at the properties of the vmnet adapter assigned to the container host. DHCP was enabled there, as were NAT and the connection to the local host. Disabling NAT was not an option, since the container start aborts once that's detected. But I could disable DHCP, and then enable DHCP server mode on one of my VMs (a firewall acting as a router), giving it an IP next to the local Mac's IP on that segment. After this, the containers get an IP from my firewall, which gives me more control over the assigned IP and lets me set the default gateway to the firewall. And now I can route traffic straight to the container, without NAT.
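For reference, the same DHCP/NAT toggles also show up in Fusion's networking file at /Library/Preferences/VMware Fusion/networking (key names as on my Fusion install; your vmnet number and subnet will differ). After disabling DHCP for the adapter, the relevant lines look roughly like this:

```
answer VNET_4_DHCP no
answer VNET_4_NAT yes
answer VNET_4_HOSTONLY_NETMASK 255.255.255.0
answer VNET_4_HOSTONLY_SUBNET 192.168.40.0
```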
And by omitting the usual port mapping when starting the container, I can basically also prevent any traffic from coming in that way.
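In other words (flag syntax again assumed; check "vctl run --help"), starting the container without any publish option leaves nothing listening on the Mac side, so the only way in is the routed path via the firewall:

```shell
# Assumed syntax -- verify with `vctl run --help`.
# No -p option: no host-side forwarding; the container is reachable
# only through the routed path on the vmnet (via the firewall VM).
vctl run -d --name web nginx
```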
The only really odd thing I've observed so far: _if_ I use the port mapping on the Mac host, then just before the first TCP segment is sent there is an ARP request for the container's IP address, with the sender IP of my firewall but the sender/source MAC of the local Mac host's interface. (Wireshark flags this as an IP conflict, since another MAC address is claiming the IP of the firewall/gateway.)
vctl ps is, by the way, still able to show which IP was assigned to the container.
What remains to be seen is how persistent the MAC address assignment to the container is, and whether IP address reservation in my DHCP server will work over time, so that my services end up on predictable IP addresses.
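If the MAC does turn out to be stable, a static reservation on the firewall's DHCP server should pin the IP. A hypothetical ISC dhcpd-style reservation (adapt to whatever DHCP server the firewall actually runs; the MAC and IP below are placeholders):

```
host vctl-web {
  hardware ethernet 00:50:56:aa:bb:cc;   # placeholder MAC
  fixed-address 192.168.40.50;           # placeholder IP
}
```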
Edit: One showstopper I realized in the end is that when I do a vctl system stop and vctl system start, a new vmnet adapter is created... 😞 Not sure if that's because the original vmnet adapter already existed and was in use... Bummer!
Any ideas on how to tweak or hack things to get around this?