Do you know if there is any network speed limitation between VMs running on the same VMware Workstation host? I seem to remember something saying 1 Gbit/s as the maximum, but if the VMs can deliver more throughput - will it go through?
do you use bridged or one of the host-only networks, vmnet1 or vmnet8?
the host-only networks are only limited by the host's CPU resources
but anyway - a Gbit network is way faster than anything the virtual disks can handle
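To get a feel for what "limited by the host's CPU" means, a rough analogy is a loopback socket on the host: host-only traffic never touches a physical NIC, so like loopback it is bounded by CPU and memory speed. A minimal sketch (illustrative only, not a Workstation tool, and the analogy is approximate):

```python
import socket
import threading
import time

# Illustrative sketch: host-only (vmnet1) traffic stays in host RAM, so
# its ceiling behaves like a local socket - CPU/memory bound, not NIC
# bound. This toy loopback benchmark gives a rough feel for that ceiling.

PAYLOAD = b"x" * 65536
SECONDS = 2.0

def drain(listener):
    # Accept one connection and read until the sender closes it.
    conn, _ = listener.accept()
    while conn.recv(65536):
        pass
    conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # any free port
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=drain, args=(listener,), daemon=True).start()

sender = socket.socket()
sender.connect(("127.0.0.1", port))
sent = 0
deadline = time.monotonic() + SECONDS
while time.monotonic() < deadline:
    sender.sendall(PAYLOAD)
    sent += len(PAYLOAD)
sender.close()

mb_per_s = sent / SECONDS / 1_000_000
print(f"loopback: ~{mb_per_s:.0f} MB/s")
```

On most hosts this prints well above 1 GbE's 125 MB/s, which matches the point above: the limit is the CPU, not a virtual link speed.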
Ulli Hankeln wrote:
the host-only networks are only limited by the host's CPU resources
Thanks for your reply! So host-only networking gets as much bandwidth as the CPU can process packets for - that is good. When using bridged, does the traffic have to touch the physical adapter even when going to another VM on the same host?
Ulli Hankeln wrote:
but anyway - a Gbit network is way faster than anything the virtual disks can handle
Is it really? Today I use mechanical SATA-2 HDDs and they work very well - I can get almost 100 MB/s throughput from the disks. But I am in the process of buying a 3 Gbit/s SSD, perhaps around 200 GB, to store VMs on, and it should (in theory) be able to deliver up to 270 MB/s. There a 1 Gbit network would be the limit.
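A quick sanity check of those numbers (using decimal units; SATA-2's 3 Gbit/s line rate carries 8 data bits per 10 line bits, and real-world TCP and protocol overhead shave off a bit more):

```python
# Back-of-envelope throughput comparison (illustrative, decimal units).
gbe_raw_mb_s = 1_000_000_000 / 8 / 1_000_000         # 1 GbE line rate
sata2_mb_s = 3_000_000_000 * 8 / 10 / 8 / 1_000_000  # SATA-2 after 8b/10b coding
print(f"1 GbE:  {gbe_raw_mb_s:.0f} MB/s")   # 125 MB/s
print(f"SATA-2: {sata2_mb_s:.0f} MB/s")     # 300 MB/s
```

So on paper a ~270 MB/s SSD would outrun a single 1 Gbit link more than twice over.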
Bridged network connections have to travel through the physical NIC
> Here a 1 Gbit network would be the limit....
theory is NOT practice
Ulli Hankeln wrote:
Bridged network connections have to travel through the physical NIC
Even when doing VM to VM traffic?
> Here a 1 Gbit network would be the limit....
theory is NOT practice
Yes, naturally, but please elaborate if you like?
> Even when doing VM to VM traffic?
yep
I have yet to see a virtual disk perform so well that a Gbit network would be the bottleneck
maybe if you use SSDs as physical disks ...
Ulli Hankeln wrote:
> Even when doing VM to VM traffic?
yep
I have yet to see a virtual disk perform so well that a Gbit network would be the bottleneck
maybe if you use SSDs as physical disks ...
Strange with the bridged adapter. I have most often used that for my VMs to give them internet access, but I guess the NAT device could be better instead, to keep the VM-to-VM traffic in host RAM only.
As for the disk, I am about to buy an SSD to use as a data disk for the VMs, and I plan to use one VM as an iSCSI target for other VMs (ESXi) - I just do not want the internal networking to be a limit, even if that is unlikely.
you will surely hit other problems with NAT than just speed limits
Ulli Hankeln wrote:
you will surely hit other problems with NAT than just speed limits
Please elaborate if you like. What will be problems with using the NAT device?
the NAT service tends to be unstable
but that only happens if you have very busy VMs - for example when the VMs run p2p software like eMule or stuff like that
> ... the NAT device
that's very vague - this is not a device - vmnet1 and vmnet8 use the same kind of virtual adapter + for vmnet8 your host runs an additional NAT service
which is not too rock solid
for high demands I therefore recommend replacing the NAT service offered by the host with a NAT service offered by a Linux or BSD VM.
I use the m0n0wall firewall VM based on FreeBSD whenever I have to set up a scenario with very high demands on NAT throughput
Ulli Hankeln wrote:
the NAT service tends to be unstable
but that only happens if you have very busy VMs - for example when the VMs run p2p software like eMule or stuff like that
Thanks for your reply. So the NAT service is not suitable for high loads, but if the high network loads stay internal to the Workstation host and NAT is only used to download updates or similar, it should work?
> ... the NAT device
that's very vague - this is not a device - vmnet1 and vmnet8 use the same kind of virtual adapter + for vmnet8 your host runs an additional NAT service which is not too rock solid
Yes, I know it is vague - I do not understand the Workstation networking! I have worked a lot with physical switches, all kinds of Cisco and HP devices with VLANs, spanning tree and the like, and I also know the vSphere vSwitches (standard and distributed) well, but I do not understand the networking in Workstation. To me it is very vague what the VMNETx represent and what the differences between them are. I am also quite a new Workstation user, so I have not looked much at it, but it is kind of confusing.
each vmnet can be regarded as a hub
if you have a VM with an Ethernet card configured for vmnet1 the connection goes like this:
host virtual adapter vmnet1 --------------------- hub vmnet1 ------------------------- ethernet0 card of the VM
if you have bridged network the connection looks like this
external router ------------------ hub vmnet0 -------------------------- ethernet0 card of the VM
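For reference - hedged, since the exact keys can vary by Workstation version - the ethernet0 card in the diagrams above is selected by the connectionType setting in the VM's .vmx file, for example:

```
ethernet0.present = "TRUE"
ethernet0.connectionType = "hostonly"
```

As far as I know, "bridged" attaches the card to vmnet0 (and thus the physical NIC), "hostonly" to vmnet1, and "nat" to vmnet8; a "custom" value together with an ethernet0.vnet entry can attach it to other vmnets.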
Thanks! When speaking of a hub, does that mean that any network traffic is visible to all VMs (if they put their virtual NICs into promiscuous mode)?
There is no VLAN support in the Workstation switches/hubs?
nope - VMware documentation says that vmnets are switches - but that is not correct
Nope to VLAN, but yes to distributing all traffic to all attached VMs?
yes - you can sniff all traffic that goes through a vmnet by simply connecting a VM running a sniffing tool like Wireshark to the same vmnet
Ulli Hankeln wrote:
yes - you can sniff all traffic that goes through a vmnet by simply connecting a VM running a sniffing tool like Wireshark to the same vmnet
Like a promiscuous portgroup on vSphere vSwitches? Strange, but I guess the security risk is lower in a Workstation environment.