
Nested ESXi 5.5 on Workstation 10 Networking issue


Hello All,

I have an instance of VMware Workstation 10 running on an HP z600 with a host OS of RHEL 6u3, where I'm trying to host a nested instance of ESXi 5.5 for a lab environment. I don't have any network problems with guest OSs (Windows or Linux) using bridged vmnics on the Workstation instance, and I've configured the ESXi server to use a bridged vmnic as well.

I can connect to the ESXi server fine with vSphere from any workstation on my network, but I can't connect to any VM hosted on the nested ESXi server. I've tried Linux and Windows VMs with the same result. Interestingly, the VMs don't seem to have a problem obtaining an IPv4 config from my DHCP server. From a VM I can't ping the default router; ping reports destination unreachable. I can ping the management IP on the ESXi server from any workstation on my network, but not from any VM. Both the ESXi management network and the VM network are on the same vmnic. Very simple config. Any thoughts on where to start?

Bridged network in VMware Workstation:

WorkStationEditor.png

ESX settings in Workstation:

ESXSettings.png

ESX vSwitch Settings:

ESXSwitch.png

Thanks for any help,

-dave

4 Replies

Accepted solution:

Do you have the vmnet devices configured to allow promiscuous mode?  See http://kb.vmware.com/kb/287.
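For reference, the KB article's fix boils down to giving the Workstation process read-write access to the vmnet device nodes, since bridging a nested hypervisor requires promiscuous mode on the vmnet. A minimal sketch of that permission change, assuming the default /dev/vmnet* nodes (the helper name `grant_rw` is ours, not VMware's):

```shell
# grant_rw: make the given device nodes world read/writable.
# Promiscuous mode on a vmnet requires that the user running the
# VM has write access to the corresponding /dev/vmnetN node.
grant_rw() {
  chmod a+rw "$@"
}

# Typical usage (as root), followed by a quick check of the mode:
# grant_rw /dev/vmnet*
# ls -l /dev/vmnet*
```

A tighter alternative is a dedicated group with mode 660 on the nodes, so only members of that group can put the vmnet into promiscuous mode.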


Thanks for the reply.  I saw that article and did the lazy thing: I added my personal ID to the root group and set the rights on vmnet* accordingly.  My problem was that I didn't log out and log back in after I made the group modification for my personal ID.

I have a follow-up question about an observation, if you don't mind.  I rebooted my z600 after re-reading that article.  I was still having my previous issue, but this time I noticed my vmnet* devices were back to just root:root 600 ownership.  Do you know which script creates these devices on boot?

Thanks again,

-dave


Not exactly, but I can tell you how to fix the problem.

# mknod /lib/udev/devices/vmnet0 c 119 0

# mknod /lib/udev/devices/vmnet1 c 119 1

# mknod /lib/udev/devices/vmnet8 c 119 8

# chown root:root /lib/udev/devices/vmnet?

# chmod 660 /lib/udev/devices/vmnet?

The devices will be copied from /lib/udev/devices to /dev on boot, with the permissions preserved.
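On RHEL 6 another way to keep the permissions from reverting is a udev rule that reapplies ownership and mode whenever the vmnet nodes are created. This is only a sketch; the rule file name and match pattern below are assumptions on our part, not something VMware ships:

```
# /etc/udev/rules.d/99-vmnet.rules  (hypothetical file name)
# Match the vmnet character devices and force group/mode on creation.
KERNEL=="vmnet[0-9]*", GROUP="root", MODE="0660"
```

Either approach works; the udev rule just avoids maintaining copies of the nodes under /lib/udev/devices.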


Perfect.  Thanks a lot for your help.

-dave
