scale21
Enthusiast

In-guest iSCSI vs. RDM advice

I have a few servers I'm trying to configure to host some Exchange and SQL servers.

Right now I have everything working from the host standpoint (vMotion, HD, etc.).

I am now working on getting my guests connected to our storage, and I'm wondering what the recommended way to do this is.

My vSwitches are broken out as follows:

vSwitch0-->management network-->vmnic0

vSwitch1-->vMotion-->vmnic4

vSwitch2-->virtual machine network-->vmnic6/vmnic2

vSwitch3-->Storage Network-->vmnic5/vmnic1

Now here is where the fun comes in.

Right now I have:

vSwitch4-->vmsannetwork-->vmnic3/vmnic7, which has a VM port group on it.
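(For what it's worth, here is a rough pyVmomi sketch in Python that I could run to dump that vSwitch / port group / uplink layout straight from the hosts and double-check the wiring above. The hostname and credentials are placeholders, and it's just a sanity-check script, nothing production-grade.)

# Rough pyVmomi sketch: print each vSwitch with its port groups (and VLAN IDs)
# and physical uplinks. Hostname and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab only: skips certificate checks
si = SmartConnect(host="esxi01.example.local", user="root",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        print(host.name)
        net = host.config.network
        for vsw in net.vswitch:
            # vsw.pnic entries look like 'key-vim.host.PhysicalNic-vmnic3'
            uplinks = [p.split("-")[-1] for p in vsw.pnic]
            pgs = ["%s (vlan %s)" % (pg.spec.name, pg.spec.vlanId)
                   for pg in net.portgroup if pg.spec.vswitchName == vsw.name]
            print("  %s: portgroups=%s uplinks=%s" % (vsw.name, pgs, uplinks))
    view.Destroy()
finally:
    Disconnect(si)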

I've added a second vNIC to my guest, assigned that card to the vmsannetwork port group, and removed all the bindings except TCP/IP. I then gave it an IP address on my storage network, and I'm off and running.
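(The hypervisor side of that step could be scripted too. Below is a rough pyVmomi sketch that adds a second vmxnet3 NIC to a guest and points it at the vmsannetwork port group. The VM name and the add_vnic helper are just examples, and the in-guest part, removing every binding but TCP/IP and assigning the storage IP, still has to happen inside Windows.)

# Rough pyVmomi sketch: add a second vmxnet3 vNIC to a VM and attach it to the
# vmsannetwork port group. Assumes a service instance 'si' connected as above;
# the VM name and helper function are examples only.
from pyVmomi import vim

def add_vnic(si, vm_name="SQL01", portgroup="vmsannetwork"):
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == vm_name)
    view.Destroy()

    nic = vim.vm.device.VirtualDeviceSpec()
    nic.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    nic.device = vim.vm.device.VirtualVmxnet3()
    nic.device.backing = vim.vm.device.VirtualEthernetCard.NetworkBackingInfo()
    nic.device.backing.deviceName = portgroup   # standard vSwitch port group name
    nic.device.connectable = vim.vm.device.VirtualDevice.ConnectInfo(
        startConnected=True, connected=True, allowGuestControl=True)

    spec = vim.vm.ConfigSpec(deviceChange=[nic])
    return vm.ReconfigVM_Task(spec=spec)        # returns a task object to wait on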

This is working, however I'm wondering if I should be using RDM instead.

Is the way I have it ("in-guest mapping") the preferred way, or is RDM cleaner/easier to manage?

I have not tried RDM, but I can't imagine the performance being so much greater that it would justify the change.

The goal in all of this is to put our Exchange 2010 and SQL data on these "in-guest" mappings so the data can live directly on our SAN and, in turn, leverage some of its features.

I understand this is better than storing the data inside VMDK files.

Does this setup sound correct for SQL/Exchange data and direct access to the SAN storage?

I am having a hard time finding any solid info on RDM vs. in-guest (vs. VMDK, for that matter).

CFormage
Enthusiast

In-guest access is messy, and you probably don't want to do vMotion like that either.

Is there a specific reason for setting it up the way you have?

The reason you would want to use RDM or in-guest iSCSI is for some backup/SAN-level application, or for shared storage/MSCS clustering.

If you must, I would go with RDM rather than in-guest.

If you are not using SAN-level snaps/clones or SRDF, and you are not using an MSCS cluster that requires shared storage, then you can probably do just fine with a VMDK.

Also, the performance difference is not noticeable; if you don't trust me, just download Iometer and test it yourself.
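(If you want a really quick number before you bother setting up Iometer, even a crude Python timer like the one below, run inside the guest against a file on the disk under test (VMDK, RDM or in-guest iSCSI LUN), will show you how close the options come. The path and sizes are placeholders, and it is nowhere near as thorough as Iometer.)

# Crude sequential write/read timer - a rough stand-in, not a replacement for
# Iometer. Point PATH at the disk you want to test; sizes are placeholders.
import os, time

PATH = r"E:\iotest.bin"     # put this file on the disk under test
BLOCK = 1024 * 1024         # 1 MiB per write
COUNT = 1024                # ~1 GiB total

buf = os.urandom(BLOCK)
start = time.time()
with open(PATH, "wb") as f:
    for _ in range(COUNT):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())    # make sure the data actually hit the disk
print("write: %.1f MB/s" % (BLOCK * COUNT / (time.time() - start) / 1e6))

start = time.time()
with open(PATH, "rb") as f:
    while f.read(BLOCK):
        pass
# the read pass may be partly served from the OS cache, so treat it as a ceiling
print("read:  %.1f MB/s" % (BLOCK * COUNT / (time.time() - start) / 1e6))
os.remove(PATH)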

Blog: www.dcinfrastructure.blogspot.com
scale21
Enthusiast

Thanks for the reply.

There is no specific reason other than leveraging the SAN snapshots and other features of the SAN, like replication, etc.

VMware themselves responded saying that there really isn't any glaring difference from a performance standpoint. They said that if we plan on using MS SQL clustering, then we would definitely want to use the in-guest iSCSI initiator, as it is the only supported way of doing it with iSCSI storage. We will likely want that in the future, so this is the way I'm leaning at this point.

Storing our Exchange/SQL DBs in VMDKs probably won't work out so well when it comes to replication of our SAN storage.

Our storage vendor got back to me yesterday and said that in-guest is the way to go despite its trade-offs.

I have tested in-guest, and it works fine with vMotion from what I've tested.

One thing that does have me nervous is how often I can expect random disconnects of my in-guest iSCSI mappings. I'm hoping I never have to deal with that mess. I have read that some people do run into that issue with in-guest.

Another thing I haven't figured out yet is how to separate my host iSCSI traffic from my guest traffic. My host traffic is on its own VLAN via our switches. This is also where the SAN resides.

I do have 2 separate vSwitches (1 vSwitch with separate vmnics for host VMkernel ports and ESXi traffic, and 1 vSwitch with separate vmnics for guest VM iSCSI traffic).

Because my SAN is on the same VLAN as my hosts, my guest iSCSI traffic also needs to be on this VLAN. The only way around this is if I can figure out how to have the SAN on 2 different VLANs at the same time. Not sure that is possible. If it is, I can put the traffic on a separate VLAN for my guests. I don't see any other way around this.

rickardnobel
Champion

scale21 wrote:

I do have 2 separate vSwitches (1 vSwitch with separate vmnics for host VMkernel ports and ESXi traffic, and 1 vSwitch with separate vmnics for guest VM iSCSI traffic).

Because my SAN is on the same VLAN as my hosts...

Do you mean that the iSCSI SAN is on the same IP network as the management network for the hosts? If so, you are likely to be accessing the SAN through the management vmnic instead of the intended iSCSI vmkernel vmnics.

The only way around this is if I can figure out how to have the SAN on 2 different VLANs at the same time. Not sure that is possible.

Check whether the iSCSI SAN has support for 802.1Q VLANs. If so, it should be possible to have IP addresses on different VLANs and also put the guest iSCSI on a different network if needed.
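On the ESXi side, putting the guest iSCSI port group on its own VLAN is just a port group setting, provided the physical switch ports are trunked and the SAN has an interface on that VLAN as well. A rough pyVmomi sketch, where the port group name, the VLAN number and the helper function are only examples:

# Rough pyVmomi sketch: tag an existing VM port group with an 802.1Q VLAN ID.
# 'host' is a vim.HostSystem (e.g. from a container view as in the earlier sketch).
# Port group name and VLAN ID are examples; the physical switch ports and the
# SAN still need to be configured for that VLAN separately.
def set_portgroup_vlan(host, pg_name="vmsannetwork", vlan_id=22):
    net_sys = host.configManager.networkSystem
    for pg in host.config.network.portgroup:
        if pg.spec.name == pg_name:
            spec = pg.spec
            spec.vlanId = vlan_id    # 0 = untagged, 1-4094 = tagged, 4095 = trunk all
            net_sys.UpdatePortGroup(pgName=pg_name, portgrp=spec)
            return
    raise ValueError("port group %r not found" % pg_name)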

My VMware blog: www.rickardnobel.se
scale21
Enthusiast

No

My vSwitches are broken out as follows:

vSwitch0-->management network-->vmnic0

vSwitch1 <VMkernel ports>-->vMotion-->vmnic4

vSwitch2-->virtual machine network-->vmnic6/vmnic2

vSwitch3 <VMkernel ports>-->Storage Network-->vmnic5/vmnic1 (VLAN 21 on the switches)

vSwitch4-->guest storage network-->vmnic3/vmnic7 (VLAN 21 on the switches)

SAN (VLAN 21) with a single IP address as a discovery target.

I'll check to see if the SAN supports 802.1Q.

Thanks
