VMware Cloud Community
warnox
Enthusiast

ESX iSCSI Configuration Recommendation

Hi,

I'm wondering what the best Storage Adapter configuration would be in the following scenario to provide host NIC and SAN controller redundancy. The SAN will be plugged directly into each host, as I don't have a gigabit switch available at this time, which means each point-to-point link will be in its own subnet, as below.

Host 1:

iSCSI_A1: 192.168.1.2/30

iSCSI_B1: 192.168.1.5/30

Host 2:

iSCSI_A2: 192.168.1.9/30

iSCSI_B2: 192.168.1.13/30

iSCSI SAN:

Host A1: 192.168.1.3/30

Host B1: 192.168.1.6/30

Host A2: 192.168.1.10/30

Host B2: 192.168.1.14/30

According to the article VMware KB: Considerations for using software iSCSI port binding in ESX/ESXi, I shouldn't use port binding, since each link is in a separate broadcast domain. Does that mean I should set up two software iSCSI adapters on each ESXi host and just use dynamic discovery?

Thanks for any help.

14 Replies
a_p_
Leadership

You can have only a single software iSCSI adapter on an ESXi host. What you have to do is either configure two vSwitches, each with a single VMkernel port group, or a single vSwitch with two VMkernel port groups. In the case of a single vSwitch, the vmnics for each port group need to be configured as active/unused, to ensure that each port group has its own dedicated vmnic.

I don't know which iSCSI target (storage system) you use, but the IP configuration you mentioned is somewhat unusual, so you may want to double-check it against the vendor's documentation!?

André
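One possible reason the addressing looks unusual: each /30 leaves exactly two usable host addresses, and in the plan above the first pair (.2/.3) straddles the broadcast address of 192.168.1.0/30, while the other three links are fine. A quick sketch with Python's standard ipaddress module (IPs copied from the original post) shows this:

```python
import ipaddress

# Point-to-point /30 links from the plan above: (host-side IP, SAN-side IP)
links = [
    ("192.168.1.2/30",  "192.168.1.3/30"),   # Host 1 iSCSI_A1 <-> SAN A1
    ("192.168.1.5/30",  "192.168.1.6/30"),   # Host 1 iSCSI_B1 <-> SAN B1
    ("192.168.1.9/30",  "192.168.1.10/30"),  # Host 2 iSCSI_A2 <-> SAN A2
    ("192.168.1.13/30", "192.168.1.14/30"),  # Host 2 iSCSI_B2 <-> SAN B2
]

for host_ip, san_ip in links:
    host = ipaddress.ip_interface(host_ip)
    san = ipaddress.ip_interface(san_ip)
    # A /30 has 4 addresses: network, two usable hosts, broadcast.
    usable = set(host.network.hosts())
    ok = host.network == san.network and host.ip in usable and san.ip in usable
    print(host.network, host.ip, "<->", san.ip,
          "OK" if ok else "CHECK: not both usable host addresses")
```

Running this flags only the first link: 192.168.1.3 is the broadcast address of 192.168.1.0/30, so that pair would need to be re-addressed (e.g. .1/.2).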

warnox
Enthusiast

Yes, I know how to configure the vSwitch, but I'm not sure about the adapters. If I can only add one storage adapter, maybe I just need to put both controllers' IPs under Dynamic Discovery?

a_p_
Leadership

It shouldn't matter which, or how many, IP addresses you add for Dynamic Discovery. As long as one of them is reachable, the storage system should return all the configured targets.

André
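For reference, Dynamic Discovery can also be set up from the command line with esxcli (ESXi 5.x syntax). The adapter name vmhba33 and the default iSCSI port 3260 are assumptions, so check yours first; a sketch:

```shell
# Software iSCSI adapter name (vmhba33) is an assumption -- verify with:
#   esxcli iscsi adapter list
# Add one SAN controller IP per link under Dynamic Discovery (SendTargets);
# any reachable one should return all configured targets.
esxcli iscsi adapter discovery sendtarget add -A vmhba33 -a 192.168.1.3:3260
esxcli iscsi adapter discovery sendtarget add -A vmhba33 -a 192.168.1.6:3260

# Rescan so the discovered targets and paths show up
esxcli storage core adapter rescan -A vmhba33
```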

tomtom901
Commander

As André said, why would you configure the SAN in different, smaller subnets? The only advantage you gain here is mitigating some broadcast traffic on the storage network, but other than that I don't see many advantages to doing it this way. What make and model of storage array are you using?

warnox
Enthusiast

The SAN is just an old HP MSA 2012.

The multiple subnets simply make it clear that each link is on a different network; there is no other reason for it.

If I just add two of the SAN's IPs into Dynamic Discovery on each host, what would happen if one NIC goes down? Will the storage traffic continue to flow over NIC #2 even if multipathing is not configured?

tomtom901
Commander

If you add the two IPs to Dynamic Discovery, you should have multiple paths to the destination device (LUN). If one of the vmnics or VMkernel interfaces goes down, and therefore the path to the device, it should automatically fail over to the other path.
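To check this on the host, two esxcli commands (ESXi 5.x syntax) list what the failover will actually use; a sketch:

```shell
# One line per path; with both NICs cabled you should see two paths per LUN
esxcli storage core path list

# Per-device summary: path count, current path selection policy, working paths
esxcli storage nmp device list
```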

warnox
Enthusiast

Cool, that's all I needed to know 🙂 I'll put the SAN in this week and see how it goes. Thanks for the help!

tomtom901
Commander

Sure, no problem. Make sure you see both paths, though, under Configuration -> Storage Adapters -> select your iSCSI adapter. Example below: in this case I have 12 devices with 24 paths, so two paths to each device (LUN).

[screenshot: Screen Shot 2014-03-16 at 20.39.29.png]

You can also mark answers as helpful or correct if you'd like.

warnox
Enthusiast

Alright, I managed to configure this as per my original plan and everything seems to be working fine. Each host has two NICs plugged directly into the SAN (one to each controller) and I can see two paths from each iSCSI storage controller. Thanks for the help, guys.

[screenshot: Capture.JPG]

tomtom901
Commander

Which storage array are you using? You currently have two paths, with one of the two actively used. If you look at my example, all of my paths are actively used to send I/Os to the storage device. That's because my path selection policy is Round Robin, whereas your currently configured policy will be either MRU (Most Recently Used) or Fixed. Check your storage array's manual for the best path selection policy for your case. If it supports active/active, I'd configure RR as the path selection policy to optimize both performance and redundancy in this setup.
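If the array does turn out to support active/active, the switch to Round Robin can be done per device with esxcli (ESXi 5.x syntax); the device identifier below is a placeholder, not a real NAA ID:

```shell
# <device> is the LUN's NAA identifier, taken from:
#   esxcli storage nmp device list
# Switch that device's path selection policy to Round Robin
esxcli storage nmp device set -d <device> -P VMW_PSP_RR

# Confirm the change took effect
esxcli storage nmp device list -d <device>
```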

warnox
Enthusiast

Is that even possible with this network adapter configuration (two vSwitches)? I assume that even the way it is at the moment, it provides redundancy if the active link goes down?

[screenshot: Capture.JPG]

tomtom901
Commander

Yes, that's correct, it will provide redundancy (hence the two paths found to the storage device). However, I would like to suggest a better configuration:

Single vSwitch (vSwitch 1) with 2 port groups

Port group iSCSI_A1: IP 192.168.20.2. NIC teaming set to override the switch failover order; vmnic2 active, vmnic3 unused.

Port group iSCSI_B1: IP 192.168.20.10. NIC teaming set to override the switch failover order; vmnic3 active, vmnic2 unused.

Then you'll be doing the default VMware MPIO setup, and you get some extra scalability and redundancy: when adding NICs to the system for iSCSI, you don't have to create separate vSwitches. But your setup does the same thing, so there is no very strong (technical) need to change it.
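The layout above could be scripted roughly like this with esxcli (ESXi 5.x syntax). The /24 netmask and the vmk1/vmk2/vmhba33 numbering are assumptions to make the sketch self-contained; adjust to your environment:

```shell
# Single vSwitch with two iSCSI port groups, one dedicated active vmnic each
esxcli network vswitch standard add -v vSwitch1
esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic2
esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic3

esxcli network vswitch standard portgroup add -v vSwitch1 -p iSCSI_A1
esxcli network vswitch standard portgroup add -v vSwitch1 -p iSCSI_B1
# Override the switch failover order: one active uplink per port group,
# the other uplink left unused
esxcli network vswitch standard portgroup policy failover set -p iSCSI_A1 -a vmnic2
esxcli network vswitch standard portgroup policy failover set -p iSCSI_B1 -a vmnic3

# One VMkernel interface per port group
esxcli network ip interface add -i vmk1 -p iSCSI_A1
esxcli network ip interface ipv4 set -i vmk1 -t static -I 192.168.20.2 -N 255.255.255.0
esxcli network ip interface add -i vmk2 -p iSCSI_B1
esxcli network ip interface ipv4 set -i vmk2 -t static -I 192.168.20.10 -N 255.255.255.0

# Bind both VMkernel ports to the software iSCSI adapter for MPIO
esxcli iscsi networkportal add -A vmhba33 -n vmk1
esxcli iscsi networkportal add -A vmhba33 -n vmk2
```

Note the port binding at the end only applies to this single-subnet layout; with the original per-link /30 subnets, the KB article quoted earlier advises against binding.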

Whether your array supports RR as the PSP will be mentioned in the appropriate documentation.

[screenshot: Screen Shot 2014-03-30 at 23.15.19.png]

warnox
Enthusiast

Oh, I see what you mean. The vSwitch configuration I have in place is supported for non-multipathing; the other way to configure it is as you described.

I thought the PSP change you suggested might cause problems in the future if I add an RDM disk for a cluster, but I now see that this is not a problem in vSphere 5.5. I also didn't realise you can make this change on a per-LUN basis.

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=103618...

The HP MSA I'm using is a few years old, so I'm not even sure it supports active/active. This environment is not too storage-intensive, so it should be fine as it is anyway.

tomtom901
Commander

warnox wrote:

The HP MSA I'm using is a few years old, so I'm not even sure it supports active/active. This environment is not too storage-intensive, so it should be fine as it is anyway.

And that's the important thing! Glad you got it sorted out.
