VMware Cloud Community
AllBlack
Expert

RDM vs. iSCSI initiator in guest

Hi all,

We are looking at a SharePoint deployment and the SP index server requires its application data to be on a LUN.
That to me says two options:

-Use RDM

-Use iSCSI initiator in VM guest

Is there any reason why I should not use the iSCSI initiator in the guest?
My thinking is that it is a much simpler configuration.
We have to go through a lot of bureaucratic red tape before we can make changes to a production environment,
so I'd rather not configure ESX with iSCSI. We are using NFS as back-end storage.

Have you seen performance issues when using the iSCSI initiator in a VM guest?
I did not notice anything during a vMotion, for example. What should I look for?

All thoughts appreciated!

Cheers

Please consider marking my answer as "helpful" or "correct"
11 Replies
AndreTheGiant
Immortal

Guest iSCSI is simpler? Not necessarily... maybe on the VMware side, but the guest VM side is a little more complicated.

A virtual RDM works with vMotion, backup and snapshots without problems.

So it's up to you... performance is a little better in the virtual RDM case (guest iSCSI has a little overhead), but that may not be the deciding factor...

One possible deciding factor is the backup point of view (depending on how you make backups): a guest iSCSI LUN is not visible from a VM image backup... this could be a pro or a big con.

Andre

Andrew | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro
AllBlack
Expert

Backups are done through SnapManager products at the storage layer, so we don't require VM image backups or VM snapshots.

vMotion appeared to work just fine with a guest-attached LUN.

One thing that could be decisive is how RDM vs. iSCSI reacts to a filer failover. Haven't tested it yet, but the cleanest method would be preferred.


Please consider marking my answer as "helpful" or "correct"
AndreTheGiant
Immortal

HA and vMotion also work well with guest iSCSI.

For network failover you may need two vNICs to handle multipathing and failover better (and faster) in the VM.

But in most cases the vSwitch NIC failover may work as well.
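
For what it's worth, a minimal sketch of the two-vNIC idea on a Linux guest with open-iscsi (a Windows guest would do the equivalent with the Microsoft iSCSI initiator and MPIO); the interface and NIC names here are just placeholders:

import subprocess

def iscsiadm(*args):
    # Thin wrapper that runs iscsiadm and raises if the command fails.
    subprocess.run(["iscsiadm", *args], check=True)

# Create one iSCSI interface per guest vNIC and bind it to that NIC,
# so the initiator sees two independent paths it can multipath across.
for iface, nic in [("iscsi0", "eth1"), ("iscsi1", "eth2")]:   # placeholder names
    iscsiadm("-m", "iface", "-I", iface, "-o", "new")
    iscsiadm("-m", "iface", "-I", iface, "-o", "update",
             "-n", "iface.net_ifacename", "-v", nic)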

Andre

Andrew | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro
idle-jam
Immortal

If it's not an application requirement to have in-guest iSCSI, I would go for RDM for unified management, where all I need to worry about is my VMkernel connectivity, versus having to worry about each OS's NIC, etc., if the storage is not available.

Texiwill
Leadership

Hello,

Please remember, if you use iSCSI, to ensure you are talking to an iSCSI NAS/server that segregates your vSphere datastores from the VM. Otherwise this would be a security issue.

I prefer RDMs for the reasons given but have used iSCSI as well. It depends on where I am sending the data. When the virtual disk is big enough, or when using RDMs, I tend to make backups using rsync/tar/in-VM backup mechanisms. At least for one of my VMs this is due to the restoration requirements more than any issues with making the backups.

So you need to know how you will back up, and more importantly restore, your data.
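
As a rough illustration of the in-VM tar approach (the paths here are made-up placeholders, not from this thread):

import tarfile
from datetime import date

SOURCE = "/mnt/sp_index"                             # assumed mount point of the LUN inside the guest
TARGET = f"/backup/sp_index-{date.today()}.tar.gz"   # assumed backup destination

# Archive the application data that lives on the RDM/iSCSI LUN from inside the guest.
with tarfile.open(TARGET, "w:gz") as archive:
    archive.add(SOURCE, arcname="sp_index")

# The restore matters more than the backup: extract the archive onto a scratch
# LUN and verify the application can use it before relying on this.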

Best regards,

Edward L. Haletky

Communities Moderator, VMware vExpert,

Author: VMware vSphere and Virtual Infrastructure Security, VMware ESX and ESXi in the Enterprise 2nd Edition

Podcast: The Virtualization Security Podcast | Resources: The Virtualization Bookshelf

--
Edward L. Haletky
vExpert XIV: 2009-2023,
VMTN Community Moderator
vSphere Upgrade Saga: https://www.astroarch.com/blogs
GitHub Repo: https://github.com/Texiwill
AllBlack
Expert

Hey guys, this is definitely food for thought.

Edward, you are talking about a security issue. Can you expand on this?
If I understand you correctly, you are saying that LUNs connected via iSCSI
should be on a different storage unit from the VM datastores? Where does the security risk sit?
We have NetApp filers that actually serve a few purposes. There are LUNs attached via iSCSI from physical boxes,
and the VM datastores are connected via NFS.

For this particular VM, backups will be made via SnapManager for SharePoint (which relies on SnapDrive and SnapManager for SQL).

Please consider marking my answer as "helpful" or "correct"
Texiwill
Leadership

Hello,

The security risk is basically within the iSCSI network and the fact that no encryption is used to transfer blocks from one device to another. It is potentially possible for an attacker to sit on the iSCSI network and read blocks being written by one VM as they are transferred to the filer, unless that VM is using IPsec for its iSCSI initiator (which it is not). That same attacking VM could also read data as it is being written by the ESX host, which definitely does not use IPsec.

Some may ask, what about CHAP? That is just the authentication mechanism allowing data to be written to and read from the wire; once on the wire the data is in clear text.

A friend of mine brought this up in a virtualization security class as a potential weakness; within two hours a student had written code that read whatever was written over the wire, reassembled the packets, and could view the contents of the file from a completely different computer that talked to the same iSCSI server. So yes, this is a valid attack.
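
To make the clear-text point concrete, here is a small sketch of how little work it takes to pull the payload out of one captured iSCSI PDU (header layout per RFC 3720; this assumes the capture yields a single PDU with no additional header segments and no header/data digests):

def data_segment(pdu: bytes) -> bytes:
    # The iSCSI Basic Header Segment is 48 bytes: byte 4 is TotalAHSLength
    # (in 4-byte words), bytes 5-7 are the DataSegmentLength in bytes.
    ahs_len = pdu[4] * 4
    seg_len = int.from_bytes(pdu[5:8], "big")
    start = 48 + ahs_len
    return pdu[start:start + seg_len]

# For a SCSI Data-Out/Data-In PDU sniffed off the iSCSI VLAN, this returns the
# written or read blocks exactly as the initiator sent them, in plain text,
# unless something like IPsec protects the traffic.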

Ideally, split your networking such that vSphere datastores are on one network talking to one controller, while VMs that need iSCSI use a completely different network and controller. I would even suggest using different switches if possible (with blades this is sometimes not possible); at a minimum, segregate using VLANs.

Best regards,

Edward L. Haletky

Communities Moderator, VMware vExpert,

Author: VMware vSphere and Virtual Infrastructure Security, VMware ESX and ESXi in the Enterprise 2nd Edition

Podcast: The Virtualization Security Podcast | Resources: The Virtualization Bookshelf

--
Edward L. Haletky
vExpert XIV: 2009-2023,
VMTN Community Moderator
vSphere Upgrade Saga: https://www.astroarch.com/blogs
GitHub Repo: https://github.com/Texiwill
AllBlack
Expert

Hi Edward,

That is very useful, and I agree that RDM offers more benefits even though we may not require these at this point in time.
The main concern for us was robustness upon filer failover. In your opinion, is there an advantage to using one over the other?
At the end of the day they would both be using a connection over a dedicated iSCSI VLAN to the same (iSCSI) filers.
If there were a disruption because of a failover, would it not affect both just as much? Apparently some peers have seen issues
with SQL stored on iSCSI guest disks during failover of the filer heads. This has not been tested against RDM. I am trying to understand how it differs, since they both use the same technology.

Hope this is not too stupid a question, but I am not much of a storage expert.

Please consider marking my answer as "helpful" or "correct"
Texiwill
Leadership

Hello,

If your back-end filer is the same, then whether to use RDM or iSCSI depends on how much you want to manage the storage from within the VM. Using software iSCSI initiators within a VM means you need to deal with that when you do upgrades, etc. If you use RDMs, you leave that management up to vSphere.

This is also one reason I prefer RDMs: less for me to do when I manage VMs, and if my vSphere host uses hardware iSCSI devices or can do iSCSI offload (like HP Flex-10), then I win all the way around.

Best regards,

Edward L. Haletky

Communities Moderator, VMware vExpert,

Author: VMware vSphere and Virtual Infrastructure Security, VMware ESX and ESXi in the Enterprise 2nd Edition

Podcast: The Virtualization Security Podcast | Resources: The Virtualization Bookshelf

--
Edward L. Haletky
vExpert XIV: 2009-2023,
VMTN Community Moderator
vSphere Upgrade Saga: https://www.astroarch.com/blogs
GitHub Repo: https://github.com/Texiwill
GregL75
Enthusiast

I really like this thread....

We are looking at something similar. We currently have a physical pair of Oracle RAC/ASM servers and have been asked to replicate this configuration in the virtual world.

The physical configuration is purely FC-based between the RAC nodes and the NetApp filer. As we have been very successful with this config, I am currently trying to replicate it as closely as possible.

Here are the challenges that I see right off the bat:

Shared storage: I don't want to go with shared SCSI in the VMs.

No FC on the ESX hosts.

So what I came up with is an iSCSI implementation based on the great multi-vendor post on iSCSI.

The dvSwitch would have two VM port groups, each leveraging a single physical uplink (acting as a pass-through, handing failover over to the VM HBA). The VMs would have an added NIC interface in each port group. Inside the VM, each interface gets a unique IP in the iSCSI VLAN. Both network interfaces become paths for the VM HBA.
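
A rough sketch of the in-guest half of that design, assuming a Linux guest with open-iscsi and the two iSCSI interfaces already bound to the vNICs (the interface names and portal address are made up):

import subprocess

PORTAL = "192.168.50.10:3260"   # assumed portal on the iSCSI VLAN

# Discover the target through each bound interface, giving one path per vNIC.
for iface in ("iscsi0", "iscsi1"):
    subprocess.run(["iscsiadm", "-m", "discovery", "-t", "sendtargets",
                    "-p", PORTAL, "-I", iface], check=True)

# Log in on all discovered paths, then list the sessions; with dm-multipath
# on top, 'multipath -ll' should show one path per interface.
subprocess.run(["iscsiadm", "-m", "node", "-p", PORTAL, "--login"], check=True)
subprocess.run(["iscsiadm", "-m", "session"], check=True)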

Has anybody done something similar... Requesting the green light to go to the lab.

Thanks all

Greg

Texiwill
Leadership

Hello,

I have seen this done with other databases and tools. The key is just to make sure that what the VM sees is not what the ESX(i) host sees. They should be zoned differently.

Best regards,

Edward L. Haletky

Communities Moderator, VMware vExpert,

Author: VMware vSphere and Virtual Infrastructure Security, VMware ESX and ESXi in the Enterprise 2nd Edition

Podcast: The Virtualization Security Podcast | Resources: The Virtualization Bookshelf

--
Edward L. Haletky
vExpert XIV: 2009-2023,
VMTN Community Moderator
vSphere Upgrade Saga: https://www.astroarch.com/blogs
GitHub Repo: https://github.com/Texiwill