VMware Cloud Community
ashleyw
Enthusiast

iSCSI binding to localhost for converged compute/storage devices

Hi,

I was playing around with one of our converged storage and compute Supermicro nodes.

Why does a vmkernel interface used for iSCSI port binding need to be connected to a virtual switch with a physical uplink, when the connection between the host itself and the virtual machine never leaves the host?

As you can see in the configuration below, this works even though the NIC itself is not connected.

When the NIC is patched into the network and I look at the traffic stats on the physical switch, there appears to be traffic hitting the switch, which presumably means the loopback adapter isn't being used.

I'm just wondering if there is a way of configuring localhost iSCSI traffic so that a virtual SAN (such as an OmniOS VM) can re-present the storage back to the host itself without the NIC becoming the bottleneck.

Obviously, if we needed to present this storage to other nodes in the cluster, we'd need to create an iSCSI target on a vSwitch with a physical uplink connected, but what about local-only traffic?

It doesn't make sense to me why the vmkernel interface needs to be bound to a physical NIC. Is there a way of creating the iSCSI config for the local host only, outside of the GUI, to do what we want and improve performance?
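
Something along these lines, done purely from the ESXi shell, is what I had in mind. This is just a sketch; the adapter name (vmhba33), vmkernel port (vmk1) and target address are examples from my setup, not fixed values:

# enable the software iSCSI initiator
esxcli iscsi software set --enabled=true

# bind the vmkernel port to the software adapter
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1

# point discovery at the OmniOS VM's target and rescan
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.100.10
esxcli storage core adapter rescan --adapter=vmhba33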

any thoughts would be appreciated.

cheers

Ashley

nic1.png

nic2.png

2 Replies
unsichtbare
Expert

Good question!

Are you certain that this configuration will not work without a physical NIC? Obviously, a NIC is required to implement any form of Port Binding, but on a standalone host, why is Port Binding required at all? Isolated vSwitches work just fine.
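
For example, a standalone host can have an internal-only vSwitch with no uplink at all, built straight from the CLI. A rough sketch, where every name and address is a placeholder:

# create a standard vSwitch with no physical uplinks
esxcli network vswitch standard add --vswitch-name=vSwitch-internal

# add a port group for the storage VM and a vmkernel port for the host
esxcli network vswitch standard portgroup add --portgroup-name=iSCSI-internal --vswitch-name=vSwitch-internal
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI-internal
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.100.1 --netmask=255.255.255.0 --type=static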

I am wondering, however, why not simply use the directly attached storage instead of creating a virtual SAN and presenting iSCSI?

The Invisible Admin
If you find me useful, follow my blog: http://johnborhek.com/
ashleyw
Enthusiast

Hi, I'm 100% certain that you can only configure the iSCSI initiator on the VMware host if there is a physical uplink. Interestingly, though, once it has been configured the uplink can be removed and the iSCSI LUNs are still visible to the host (even after a reboot).

However, once it's removed, the iSCSI configuration screen ends up looking like this (see below) and shows a warning on the initiator configuration. Please, VMware, can this be fixed?

I believe the same limitation does not apply to NFS shares presented to the localhost.
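
For example, something like this works with no port binding involved at all (the IP and share path are just examples from my lab):

# mount an NFS export from the local storage VM over the internal vSwitch
esxcli storage nfs add --host=192.168.100.10 --share=/tank/vmstore --volume-name=zfs-nfs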

I didn't want to use the disks directly because they'd be presented as individual disks, and I wouldn't be able to utilise ZFS software RAID, compression, and all the other flexibility that ZFS offers.
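
On the OmniOS side the idea would be something like this (disk and dataset names are purely illustrative):

# a raidz pool with lz4 compression across the node's local disks
zpool create tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0
zfs set compression=lz4 tank

# carve out a zvol to present back to ESXi as an iSCSI LUN via COMSTAR
zfs create -V 200G tank/esxi-lun0
sbdadm create-lu /dev/zvol/rdsk/tank/esxi-lun0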

Having said all of this, now that EMC ScaleIO is available free of charge, we may well end up shifting to it instead for our development workloads.

nic3.png
