VMware Communities
ctrotter
Contributor

Networking/VLAN issue with dedicated iSCSI NIC

I did a chunk of searching, but couldn't quite find what I was looking for.

Basic premise of what I want to do: I am running Workstation 8 on Win7x64, and I have three ESXi VMs that I want to provide iSCSI storage to.  The iSCSI storage is on a separate physical network/subnet/VLAN.

Here is what I have done so far:

  • Configured a separate NIC on my Win7 box (actual different physical NIC) with the VLAN driver (Realtek teaming thingy)
  • Configured the port on my switch to match the other working ports
  • Set the appropriate VLAN on the NIC via the manufacturer's tool
  • Set a VLAN-specific IP on that NIC
  • Tested (can ping the storage and a few other things on that subnet), so it has connectivity
  • Can ping from the iSCSI storage to my Win7 IP

In Workstation:

  • Set my primary NIC as bridged for the LAN subnet (i.e. turned off automatic bridging)
  • Set my iSCSI NIC as bridged for vmnet9
  • Added a 2nd vNIC to one of the ESXi virtual machines running in Workstation
  • Configured a VMkernel port on that vNIC with an appropriate IP and the correct VLAN ID
  • SSH'd into the ESXi console; I can ping the IP I configured, but cannot ping out to the rest of the subnet (the checks I ran are sketched just after this list)
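
For reference, here is roughly how I have been checking things from the ESXi console (assuming an ESXi 5.x shell; the 10.10.10.x address is just an example from later in this thread, so substitute whatever lives on your iSCSI subnet):

  # List standard vSwitches, their uplinks, and the VLAN ID on each port group
  esxcfg-vswitch -l

  # List VMkernel interfaces with their IP, netmask, and MAC address
  esxcfg-vmknic -l

  # Ping something on the iSCSI subnet from the VMkernel interface
  vmkping 10.10.10.25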

So somewhere between vmnet9 and the VLAN driver, traffic is being blocked, even though all the VLAN settings appear correct.

Any ideas?

4 Replies
ctrotter
Contributor

Ooookay, figured it out.

I missed one small step from here:  http://www.vladan.fr/how-to-create-and-use-vlans-in-vmware-workstation/

That is, set vmnet0 back to 'automatic bridging' after pointing vmnet9 at the iSCSI NIC.

Of note: while fiddling around I removed the VLAN driver, so the Windows 7 network config on the iSCSI NIC now has only an IP and netmask configured.  Even so, I was only able to get access (ping from the ESXi console) once I made the above change.

Hope this helps someone else out!

ctrotter
Contributor

One more update.  It's working, but I'm getting a ton of these:

Jan 11 22:41:11 vmkernel: 0:02:26:27.246 cpu0:4098)NMP: nmp_CompleteCommandForPath: Command 0x28 (0x41027f396940) to NMP device "t10.F405E46494C45425E646769453F6D2C6A72583D2141573D4" failed on physical path "vmhba33:C0:T1:L5" H:0x2 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0.

Further, I cannot add iSCSI datastores, and rescanning or adding a LUN takes an order of magnitude longer than it should.  I'm going to assume the VLAN driver has serious performance issues and abandon this scheme.
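
If anyone wants to dig into the same symptoms, these are the sorts of commands I was using from the ESXi shell to look at device and path state (assuming ESXi 5.x syntax; vmhba33 is the iSCSI adapter named in the log line above):

  # Show NMP devices and their path selection policy/state
  esxcli storage nmp device list

  # Show every path (e.g. vmhba33:C0:T1:L5) and whether it is active or dead
  esxcli storage core path list

  # Rescan the iSCSI adapter after making changes
  esxcfg-rescan vmhba33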

If you are curious, I am using the same storage, same VLAN, same switch, for some physical ESXi lab hosts and they are working perfectly.  The NIC on my PC that is dedicated for iSCSI is directly plugged into the same switch, so the only difference is the Realtek VLAN driver and VMware Workstation bridging.

The other card I'm using is an Intel PRO1000 CT, so I may try swapping things around, or installing an older PCI-X Intel NIC into a PCI slot.

ctrotter
Contributor

I've installed a PRO1000MT dual-port adapter into a PCI slot and installed the Intel PROSet drivers.  I created a new VLAN on one of the ports for my iSCSI VLAN and set an appropriate IP.

No luck!  No connectivity between the ESXi VM and the adapter at all.

  • I can ping fine from the storage to the adapter, but between the adapter and the ESXi VM - nothing. 
  • I tried swapping the adapter's 'places' in the Workstation network config (i.e. moving the iSCSI adapter to the 'automatic' vmnet0 network, and the LAN adapter to vmnet9), and no change.
  • Also tried removing the VLAN, assigning a VLAN-appropriate IP directly to the adapter, and pinging through (the same setup that worked with the Realtek driver).  No luck.

I noted in the network adapter settings in the viClient that the 'observed IP ranges' for the VMkernel adapter were 10.10.10.25-10.10.10.25, the same IP as the physical iSCSI adapter.  I am guessing this is part of the problem.

I ran a continuous packet capture on the iSCSI adapter:

  • From the storage I pinged the VMkernel IP and saw broadcast ARPs of 'Who has 10.10.10.35?' (the ESXi iSCSI VMkernel IP) coming from the storage IP
  • I then connected to the ESXi console and pinged the storage: I saw more ARPs, this time from ESXi broadcasting for the storage IP.  I noted that the source MAC did not match the ESXi MAC - probably because that's the Workstation network MAC...?  The format was Vmware_71:72:39, whereas the ESXi MAC ended in ef:df:6f (the commands I used to compare MACs are sketched just after this list)
  • I pinged the storage from my Win7 host and saw things proceed as normal.
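
I did the capture on the Win7 side, but if you want to look from the ESXi side instead, something like this should work (the vmk1 name is my assumption for the iSCSI VMkernel port; tcpdump-uw ships with ESXi 5.x):

  # List VMkernel ports with their MAC addresses, to compare against the capture
  esxcfg-vmknic -l

  # Capture ARP and ICMP on the iSCSI VMkernel interface
  tcpdump-uw -i vmk1 arp or icmp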

So I'd guess that the Workstation bridging is creating a new broadcast domain (like a switch), and because there is no routing between the Workstation network and the iSCSI VLAN subnet, things fail.  If that's not the case, then bridging is just not working properly (note that the bridge for the iSCSI vmnet is set to the VLAN adapter).

Any input on how bridging works/what I'm doing wrong?

One other side note - the switchport the iSCSI adapter is connected to is set up as General, VLAN 10 tagged - is that correct?  Should it be something else if I'm using bridged networking?


ctrotter
Contributor

Haha, answering a lot of my own questions.

I had a brainwave while at bible study tonight based on the following:

  1. Bridging is taking vmnet traffic and transparently attaching it to a physical adapter.
  2. The physical adapter has the VLAN already set and is known functional.

Therefore, setting the VLAN again on the ESXi host serves no purpose, and could in fact be blocking things.

What has different broadcast domains?  Different VLANs, of course!

I removed the VLAN ID from the ESXi VM's VMkernel port group and everything instantly started working.  Not only does it work, it works without any of the aforementioned errors!!  Logs are nice and clear, VMFS datastores add within seconds, a vmhba rescan takes an order of magnitude less time, etc.
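
For the record, the equivalent change from the ESXi shell looks something like this (the 'iSCSI' port group and vSwitch1 names are just placeholders for my setup; a VLAN ID of 0 means untagged):

  # Clear the VLAN ID (0 = none/untagged) on the iSCSI VMkernel port group
  esxcfg-vswitch -p "iSCSI" -v 0 vSwitch1

  # Or the esxcli equivalent on ESXi 5.x
  esxcli network vswitch standard portgroup set --portgroup-name "iSCSI" --vlan-id 0

  # Confirm the port group now shows VLAN ID 0
  esxcfg-vswitch -l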

So...if you want to pass VLAN traffic on a physical adapter into your Workstation virtual machines via bridging, DO NOT set a VLAN inside the virtual machine!!
