VMware Cloud Community
john123456
Contributor

Different ways to "RDM" / Circumvent the ESX iSCSI driver

Hello all,

We have an entry-level iSCSI SAN with four 1 Gb iSCSI ports. The "traditional" way to connect the SAN to ESX is to connect the four iSCSI ports to iSCSI VMkernel ports. All four ports can be connected to a different VMkernel port, and the vSwitch can then be connected to four physical NICs. But if I understood it correctly, all disk traffic still flows through the (Cisco-based) iSCSI driver at a maximum speed of 1 Gb.
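
For clarity, the "traditional" setup from the ESX service console looks roughly like this (vSwitch1, vmnic2/vmnic3 and the addresses are only examples, and the exact flags can differ between ESX versions):

    # Create a vSwitch for iSCSI and link two of the physical NICs to it
    esxcfg-vswitch -a vSwitch1
    esxcfg-vswitch -L vmnic2 vSwitch1
    esxcfg-vswitch -L vmnic3 vSwitch1

    # Add a VMkernel port group and give it an address on the iSCSI network
    esxcfg-vswitch -A iSCSI-VMkernel vSwitch1
    esxcfg-vmknic -a -i 192.168.10.11 -n 255.255.255.0 iSCSI-VMkernel

    # Enable the ESX software iSCSI initiator (targets are then added via the VI client)
    esxcfg-swiscsi -e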

If you have a dedicated LUN (with separate disks) for a high-I/O application and connect that LUN as an RDM to a VM, the traffic to that LUN is also routed through the VMkernel iSCSI port. We have the impression that a high I/O load on this LUN causes a quick I/O performance drop on all other LUNs.

After discussing this with several people, we got the idea to do RDM a different way and bypass the VMkernel iSCSI driver. What if we give a VM one or two dedicated vNICs, connect those to a dedicated vSwitch with two gigabit physical NICs that are connected to the SAN, install iSCSI initiator software in the VM, and bind the dedicated vNICs to that VM-based iSCSI software?
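
In practice, the setup looks roughly like this (vSwitch2, vmnic4/vmnic5, the port group name and the target address are just placeholders; a Windows guest would use the Microsoft iSCSI initiator instead of open-iscsi):

    # On the ESX host: a separate vSwitch with two physical NICs and a VM port group
    esxcfg-vswitch -a vSwitch2
    esxcfg-vswitch -L vmnic4 vSwitch2
    esxcfg-vswitch -L vmnic5 vSwitch2
    esxcfg-vswitch -A Guest-iSCSI vSwitch2

    # The VM then gets its extra vNICs on the Guest-iSCSI port group (via the VI client).
    # Inside a Linux guest with open-iscsi, discover and log in to the SAN target:
    iscsiadm -m discovery -t sendtargets -p 192.168.10.1
    iscsiadm -m node -p 192.168.10.1 --login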

We tested it, and it WORKS, but we could not find any references on the internet from other people doing this. Has anybody here done such a setup? Is this a smart way to gain extra I/O performance, or does it only add an extra layer of complexity with no real performance gain?

Regards,

John

4 Replies
happyhammer
Hot Shot

John

Yes, the way you have explained it is another way of doing things and will provide more I/O than both the VMware hardware and software initiators. It also lends itself to using any SAN-based software such as snapshot managers or VSS-aware tools.

I would suggest you configure your physical switch ports for flow control and disable unicast storm control, both on the SAN attachment ports and on the ESX pNICs that will provide the iSCSI connectivity.

kjb007
Immortal

This is another common means of using iSCSI-based RDMs. The difference here is that you need additional NICs on the server to allow for this traffic. If you have high I/O requirements, and have the interfaces, then this will work just fine. This will definitely bypass the ESX software iSCSI config. You will also have to use your own backup means, as ESX snapshots will no longer apply. You will have to use SAN-based snapshots, as ESX will have no knowledge of this LUN/volume. Other than that, it works very well, and is also well documented in the forums.

-KjB

vExpert/VCP/VCAP vmwise.com / @vmwise
Texiwill
Leadership

Hello,

In addition, you need to be careful how you present the LUNs: if you present them to ESX as well, your storage server could become an attack vector. If you are using just an iSCSI initiator from within a VM, then you would have no issues.
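
As a quick sanity check from the ESX service console, you can rescan and list the paths to confirm the guest-only LUN does not show up on the ESX side (vmhba32 here is just an example name for the software iSCSI adapter):

    esxcfg-rescan vmhba32
    esxcfg-mpath -l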


Best regards,
Edward L. Haletky
VMware Communities User Moderator
====
Author of the book 'VMware ESX Server in the Enterprise: Planning and Securing Virtualization Servers', Copyright 2008 Pearson Education.
Blue Gears and SearchVMware Pro Blogs
Top Virtualization Security Links

Exwork
Enthusiast

I do this all the time: iSCSI initiator inside the guest, bound to a second virtual NIC on the guest.

Make sure you aren't sending your iSCSI traffic over the first NIC, or you might end up creating a bottleneck for the guest's normal connectivity.
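
With the Linux open-iscsi initiator, for example, you can pin the sessions to the second NIC roughly like this (eth1 and the target address are placeholders for your environment):

    # Create an iface record tied to eth1 and use it for discovery and login
    iscsiadm -m iface -I iscsi-eth1 --op=new
    iscsiadm -m iface -I iscsi-eth1 --op=update -n iface.net_ifacename -v eth1
    iscsiadm -m discovery -t sendtargets -p 192.168.10.1 -I iscsi-eth1
    iscsiadm -m node -p 192.168.10.1 -I iscsi-eth1 --login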
