VMware Cloud Community
01Ryan10
Contributor

Possibility of specifying VMkernels for Storage Adapters?

Currently I have a 2-node ESX 4.0 cluster connecting to a Dell MD3000i SAN using multipathing.

vSwitch1 --> vmnic1

VMkernel Port

vmk0 : 192.168.130.10

vSwitch2 --> vmnic2

VMkernel Port

vmk1 : 192.168.131.10

The above configuration creates a datastore in which all of my VMs reside.

I would like to create datastores from the SAN that I can present to my VMs as vmdk files, but I'd like to know if there is a way to use separate physical NICs from the ones in the above configuration?

13 Replies

Andy_Banta
Hot Shot

Let me make sure I understand your question: You're using iSCSI, and I'm guessing you're using /24 subnets?

You have a VMFS that is the primary datastore for your VMs. You want to create a second VMFS that has vmdk files that are additional storage for the VMs. When you configure this, you want the datapaths between the ESX host and the two separate VMFS datastores to use different paths. Did I get that right?

If this is using iSCSI Multipathing, both vmkNICs will be used, provided they can both reach the storage. If you want additional vmkNICs to be used for iSCSI storage, you can create them as well. If you put them on the same subnets, all of the vmkNICs will be able to see the same storage.

In this case, you could create vmk2 at 192.168.130.11 and vmk3 at 192.168.131.11
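
Roughly, from the service console, that would be something like this (the port group names are just examples, and I'm assuming vSwitch1/vSwitch2 are the ones carrying your iSCSI uplinks):

# Add a second iSCSI port group and vmkernel port on each vSwitch
esxcfg-vswitch -A iSCSI-3 vSwitch1
esxcfg-vmknic -a -i 192.168.130.11 -n 255.255.255.0 iSCSI-3

esxcfg-vswitch -A iSCSI-4 vSwitch2
esxcfg-vmknic -a -i 192.168.131.11 -n 255.255.255.0 iSCSI-4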

You can probably segregate traffic by changing the path policy within ESX. When you look at the paths to storage, you can set the paths used for your first datastore to On/Fixed through vmk0 and vmk1, with the paths through vmk2 and vmk3 set to Off. Reverse this configuration for the second VMFS. That way, even though you'll see sessions (paths) from each vmkNIC to each datastore, the data will only use the paths that are active.
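
From the command line, that's just setting the path selection policy on the device (the naa.xxx below is a placeholder for whatever device ID your LUN has):

# List the device to see its current policy, then set it to Fixed
esxcli nmp device list -d naa.xxx
esxcli nmp device setpolicy -d naa.xxx --psp VMW_PSP_FIXED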

If this isn't what you're asking, can you provide a little more explanation?

Thanks,

Andy

01Ryan10
Contributor

Let me make sure I understand your question: You're using iSCSI, and I'm guessing you're using /24 subnets?

Yes

You have a VMFS that is the primary datastore for your VMs. You want to create a second VMFS that has vmdk files that are additional storage for the VMs. When you configure this, you want the datapaths between the ESX host and the two separate VMFS datastores to use different paths. Did I get that right?

Yes.

If this is using iSCSI Multipathing, both vmkNICs will be used, provided they can both reach the storage. If you want additional vmkNICs to be used for iSCSI storage, you can create them as well. If you put them on the same subnets, all of the vmkNICs will be able to see the same storage.

In this case, you could create vmk2 at 192.168.130.11 and vmk3 at 192.168.131.11

You can probably segregate traffic by changing the path policy within ESX. When you look at the paths to storage, you can set the paths used for your first datastore to On/Fixed through vmk0 and vmk1, with the paths through vmk2 and vmk3 set to Off. Reverse this configuration for the second VMFS. That way, even though you'll see sessions (paths) from each vmkNIC to each datastore, the data will only use the paths that are active.

If this isn't what you're asking, can you provide a little more explanation?

Thanks,

Andy

You kind of lost me there. I'm mainly asking because I want to keep all of the iSCSI traffic that controls access to the datastore that houses the VMs physically separated from datastores that may be used as vmdk or RDMs to the VMs. I have two Dell R710 servers that both have 12 physical NICs.

Server01

nic1 = vmware management

nic2 & nic3 = iSCSI multipathing to a single SAN volume of 450GBs used to store VMs

nic4 = VM traffic to production network

Server02 is in a cluster with Server01, so I have it set up identically.

I'd like to use nic5 as a dedicated NIC for my SQL VM (this is easy). I'd also like to use nic6 and nic7 as a multipath to my SAN for a dedicated datastore to be presented to my SQL VM as its storage drive. This way none of my SQL traffic impacts any of the other VMs.

Andy_Banta
Hot Shot

You kind of lost me there. I'm mainly asking because I want to keep all of the iSCSI traffic that controls access to the datastore that houses the VMs physically separated from datastores that may be used as vmdk or RDMs to the VMs. I have two Dell R710 servers that both have 12 physical NICs.

Ok, then you have plenty of NICs to play with.

I'd like to use nic5 as a dedicated NIC for my SQL VM (this is easy). I'd also like to use nic6 and nic7 as a multipath to my SAN for a dedicated datastore to be presented to my SQL VM as its storage drive. This way none of my SQL traffic impacts any of the other VMs.

So, just as you associated vmk0 with NIC 2 and vmk1 with NIC 3, add two more vmkernel interfaces, using NIC6 and NIC7.

Once done, you'll have 4 vmkernel ports that can be used for iSCSI. Add the new dedicated datastore and make sure the SW initiator in ESX can discover it. Rescan your SW iSCSI adapter. Then go to Configuration -> Storage, choose "Add Storage ..." and create a new VMFS on your new storage.
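
The rescan can also be done from the console if that's handier (vmhba33 being your SW iSCSI adapter):

# Rescan the SW iSCSI adapter, then list the VMFS volumes ESX can see
esxcfg-rescan vmhba33
esxcfg-scsidevs -m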

Now in your datastore menu, you should see two datastores (say, "vm-store" and "sql-store"). Select vm-store and the datastore details appear below. Choose "Properties ..." Then, in the lower right corner of the new window, select "Manage Paths ..." You'll see the four paths to vm-store.

They'll have Runtime Names like

vmhba33:C0:T1:L0

vmhba33:C1:T1:L0

vmhba33:C2:T1:L0

vmhba33:C3:T1:L0

You can right-click on the paths and disable two of them. Like, for vm-store, you can disable C2 and C3.

Repeat the exercise for sql-store and disable C0 and C1. (The LU number will be different for that datastore). At this point, you'll have 2 usable paths to each datastore and two disabled paths, effectively giving you the separation you want.
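
If you'd rather do this from the console, I think esxcfg-mpath can set the path state too, using the runtime names above, for example:

# Turn off the paths you don't want vm-store to use
esxcfg-mpath --state off --path vmhba33:C2:T1:L0
esxcfg-mpath --state off --path vmhba33:C3:T1:L0

# List the paths again to check which are still active
esxcfg-mpath -l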

Let me know if this is enough detail.

Thanks,

Andy

01Ryan10
Contributor

Now in your datastore menu, you should see two datastores (say, "vm-store" and "sql-store"). Select vm-store and the datastore details appear below. Choose "Properties ..." Then, in the lower right corner of the new window, select "Manage Paths ..." You'll see the four paths to vm-store.

They'll have Runtime Names like

vmhba33:C0:T1:L0

vmhba33:C1:T1:L0

vmhba33:C2:T1:L0

vmhba33:C3:T1:L0

You can right-click on the paths and disable two of them. Like, for vm-store, you can disable C2 and C3.

Repeat the exercise for sql-store and disable C0 and C1. (The LU number will be different for that datastore). At this point, you'll have 2 usable paths to each datastore and two disabled paths, effectively giving you the separation you want.

Let me know if this is enough detail.

Thanks,

Andy

I haven't tried it, but I'm fairly certain this will kill the multipathing functionality. There are four "runtime names" because I think each name is linked to one of the four SAN iSCSI ports.

Andy_Banta
Hot Shot

I haven't tried it, but I'm fairly certain this will kill the multipathing functionality. There are four "runtime names" because I think each name is linked to one of the four SAN iSCSI ports.

Have you set up iSCSI Multipathing ("port binding") on this system? I should have asked that initially. If not, take a look at pages 30-35 in the iSCSI SAN Config Guide. Without configuring this, you're using NIC teaming for any path failover or aggregation, which isn't the best choice.

Once iSCSI Multipathing is set up, you can double the number of NICs being used for iSCSI, which will double the number of paths to the storage. After that's done, you can disable the paths which are using the set of NICs you don't want to use for that storage.

We don't have an MD3000i in our lab, so I didn't know how many paths you would see by default. If you have 4 paths with 2 NICs set up for iSCSI Multipathing, you'll see 8 with 4 NICs configured. Then it's just a matter of trimming the paths from particular NICs.

Enjoy,

Andy

01Ryan10
Contributor

Ah...Thanks! That's probably it, and I'd expect to see 8 paths with 4 NICs. I'll read through, give it a shot, and post back here.

01Ryan10
Contributor

Have you set up iSCSI Multipathing ("port binding") on this system? I should have asked that initially. If not, take a look at pages 30-35 in the iSCSI SAN Config Guide. Without configuring this, you're using NIC teaming for any path failover or aggregation, which isn't the best choice.

After I thought about it more last night...I'm fairly certain I already have multipathing set up. That's the reason I already have 2 VMkernels using Round Robin. If I down one of the two switches, everything keeps working. If I down one of the two controllers on my SAN, everything keeps working.

Andy_Banta
Hot Shot

After I thought about it more last night...I'm fairly certain I already have multipathing set up. That's the reason I already have 2 VMkernels using Round Robin. If I down one of the two switches, everything keeps working. If I down one of the two controllers on my SAN, everything keeps working.

Ok. Then you should have no problem adding more vmkernel ports on different NICs and getting the config you're looking for. Just add one more connection from each of those NICs to the switches, and you'll still have the same redundancy with separate datapaths for VM storage and SQL storage.

vmk0 -> NIC2 --> switch 1 -> SP A

vmk2 -> NIC6 --> switch 1 -> SP A

vmk1 -> NIC3 --> switch 2 -> SP B

vmk3 -> NIC7 --> switch 2 -> SP B

Use vmk0 and vmk1 for vm-store and vmk2 and vmk3 for sql-store.

Andy

01Ryan10
Contributor

I can set up more VMkernels no problem. My problem is not knowing how to configure VMware to explicitly use certain VMkernels for specified datastores or RDMs. I think you told me how in previous posts, but I am unable to get more than 4 paths on any given datastore/RDM after adding more VMkernels and rescanning.

Andy_Banta
Hot Shot

I think you told me how in previous posts, but I am unable to get more than 4 paths on any given datastore/RDM after adding more VMkernels and rescanning.

Can you send the output of

esxcfg-vswitch -l

esxcli swiscsi nic list -d vmhba# (whatever vmhba is set up as your SW iSCSI initiator)

esxcfg-mpath -l

Thanks,

Andy

01Ryan10
Contributor

File attached.

Andy_Banta
Hot Shot

So this system does not have iSCSI Multipathing set up:

esxcli swiscsi nic list -d vmhba33

No iSCSI Nics Found

Right now, it looks like you have the port groups VMkernel and VMkernel2 set up for iSCSI. If you provide the output of esxcfg-vmknic -l, I'd know a little more about whether any of the other port groups are vmkernel port groups or not.

You need to perform the

esxcli swiscsi nic add

operations explained on page 34 of the iSCSI SAN Config Guide to get iSCSI Multipathing set up.
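
For your setup that would be something along the lines of the following (vmhba33 being the SW iSCSI adapter, and vmk0/vmk1 the two iSCSI vmkernel ports you already have):

# Bind the existing iSCSI vmkernel ports to the SW iSCSI initiator
esxcli swiscsi nic add -n vmk0 -d vmhba33
esxcli swiscsi nic add -n vmk1 -d vmhba33

# Confirm the binding
esxcli swiscsi nic list -d vmhba33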

Once you do that, you'll have basic Multipathing set up. After that, you can add more vmkernel ports to the iSCSI initiator to increase the number of paths. You're doing a little bit of limiting by having ports separated by subnets, which is fine. This (and how to set up iSCSI Multipathing) are described pretty well in this blog post:

The steps at this point are

  • Set up iSCSI Multipathing (referred to as "port binding")

  • Add two more vmknics, one on the 192.168.130.0/24 network and one on the 192.168.131.0/24 network, to iSCSI Multipathing

  • Rescan

  • You should now see 8 paths to your storage

  • Identify the paths using the NICs you don't want to use for that particular storage

  • Disable those paths (there's a quick command-line sketch of these last steps below).
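
Roughly, the console version of those last steps (assuming the two new ports come up as vmk2 and vmk3):

# Add the two new vmkernel ports to the initiator, then rescan
esxcli swiscsi nic add -n vmk2 -d vmhba33
esxcli swiscsi nic add -n vmk3 -d vmhba33
esxcfg-rescan vmhba33

# You should now see 8 paths per device
esxcfg-mpath -b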

Enjoy,

Andy

01Ryan10
Contributor

I'll have to have a detailed look through that document. I must not be understanding something about iSCSI multipathing, because I thought I already had it going.

I'll post back tomorrow, and I appreciate your guidance.
