VMware Cloud Community
hennish
Hot Shot

iSCSI design with vSphere and Dell MD3000i

I have set up an ESX 4 server (the first of three for this cluster) and a Dell MD3000i iSCSI SAN, connected through two Dell PowerConnect 2816 switches.

Now I'm reading the VMware "iSCSI SAN Configuration Guide", and I just wanted to make sure I have made the design correctly.

I have connected one ESX pNIC (vmnic1) to one pSwitch, and another (vmnic5) to the second. The SAN iSCSI ports are connected to these pSwitches so that one switch carries the 10.10.10.x subnet and the other the 10.10.11.x subnet (see IP plan below).

The two pNICs have one vSwitch each (not sharing one), one port group each, and are configured on different IP subnets (see attached screenshot).

IP Plan:

Port group iSCSI-10 (VMkernel): 10.10.10.1

Port group iSCSI-11 (VMkernel): 10.10.11.1

SAN controller 0-0: 10.10.10.10

SAN controller 0-1: 10.10.11.10

SAN controller 1-0: 10.10.10.11

SAN controller 1-1: 10.10.11.11
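For reference, the design above can be built from the service console roughly like this (a sketch, not taken from my actual host; the vSwitch names are examples, and the vmnic numbers and IPs match the plan above):

```shell
# One vSwitch per iSCSI uplink, each with a single pNIC (names are examples)
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -a vSwitch3
esxcfg-vswitch -L vmnic1 vSwitch2
esxcfg-vswitch -L vmnic5 vSwitch3

# One iSCSI port group per vSwitch
esxcfg-vswitch -A iSCSI-10 vSwitch2
esxcfg-vswitch -A iSCSI-11 vSwitch3

# VMkernel ports on the two iSCSI subnets
esxcfg-vmknic -a -i 10.10.10.1 -n 255.255.255.0 iSCSI-10
esxcfg-vmknic -a -i 10.10.11.1 -n 255.255.255.0 iSCSI-11
```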

Using this configuration I could see four different paths to each LUN, and I was able to make the ESX server fail over to different switches/controllers by unplugging the network cables from vmnic1 and vmnic5 one at a time.

Then I found the configuration guide section about "esxcli" and configuring multipathing, which I carried out according to the instructions ("esxcli swiscsi nic add -n vmk0 -d vmhba34" and "esxcli swiscsi nic add -n vmk1 -d vmhba34")
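In full, the binding steps looked like this (a sketch; the vmk and vmhba numbers are from my setup and will differ on other hosts):

```shell
# Bind both VMkernel NICs to the software iSCSI HBA
esxcli swiscsi nic add -n vmk0 -d vmhba34
esxcli swiscsi nic add -n vmk1 -d vmhba34

# Verify which vmknics are now bound to the HBA
esxcli swiscsi nic list -d vmhba34

# Rescan the HBA to pick up the additional paths
esxcfg-rescan vmhba34
```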

Now I have eight different paths to each LUN, of which three are "Active" and one is "Active (I/O)" (see my attached screenshot).

So, my questions are:

1. Are eight paths to each LUN really normal in iSCSI designs?

2. Is my choice of vSwitch/pNIC/pSwitch design correct?

Thanks in advance!

/Anders

11 Replies
DSwarm
Enthusiast

@hennish,

As long as your new design continues to meet your workload and failover needs, your infrastructure design should be fine.

But you might want to take a look at this Dell design, in particular step 12, which configures the round-robin multipathing policy using the VI Client GUI for each LUN exposed to the ESX server. This enables load balancing over two active paths to the LUN. After completing this step, you should see 2 'Active (I/O)' paths.
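If you prefer the service console over the GUI, the same policy can be set per device with esxcli (a sketch; the naa ID below is a placeholder, substitute the identifier of your own LUN):

```shell
# List devices and their current path selection policy
esxcli nmp device list

# Set round robin (VMW_PSP_RR) on one LUN; replace the naa ID with yours
esxcli nmp device setpolicy -d naa.600a0b80001234560000000000000000 --psp VMW_PSP_RR
```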

Regards,

KongY@Dell Inc.

@KongYang on Twitter.
hennish
Hot Shot

Thanks for the link to the Dell article. It was exactly what I was looking for. (Luckily it was almost exactly the same design as the one I had done.)

I will try the Round Robin setting, and hopefully I'll have some time to compare the performance before and after.

manfriday
Enthusiast

How did this work out for you?

I am seeing the same thing with 8 paths instead of the expected 4... but after configuring Round-Robin, as per the linked document, I have 4 active paths.

Is that normal?

hennish
Hot Shot

Weirdly enough, I actually got lower throughput after switching to Round Robin: just 1825 IOPS and 7.13 MB/s, compared to 2008 IOPS and 7.84 MB/s before.

The test was performed on a RAID5 LUN using IOmeter with a 4 KB block size and 75% read.

Hopefully I'll get to test on a RAID10 LUN and with jumbo frames tomorrow.

s1xth
VMware Employee

Very interesting. I am building out/planning the exact same configuration (it seems very popular). Let me know what your results are.

http://www.virtualizationimpact.com http://www.handsonvirtualization.com Twitter: @jfranconi
manfriday
Enthusiast

Well, a reboot of the hosts actually returned the number of paths back to the expected 4 (2 active).

hennish
Hot Shot

When I enable the iSCSI Software Initiator, I get the following message (see attached file). Is this really necessary? ESX seems to run fine without a second Service Console on the iSCSI network.

s1xth
VMware Employee

How was your performance after you enabled Jumbo Frames on the switches?

http://www.virtualizationimpact.com http://www.handsonvirtualization.com Twitter: @jfranconi
hennish
Hot Shot

Hi. My tests with jumbo frames didn't show any increase in IOPS or MB/s, but maybe I need to go beyond 4 KB blocks to see an improvement there.

I'm on vacation until mid august, and I'll try to do some more testing then.

manfriday
Enthusiast

Hennish,

Version 4 does not require a Service Console on the iSCSI network at all.

Coinspinnr
Contributor

On the Dell switch, did you configure the jumbo frame size so that it is 22 bytes larger than the MTU on the NICs?

Example: if you set the MTU on the NICs to 9000, jumbo frames on the switch should be set to 9022.
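On the ESX side, jumbo frames also have to be enabled on both the vSwitch and the VMkernel port. A sketch, using the vSwitch and port group names assumed in the earlier posts (in ESX 4 the vmknic typically has to be deleted and recreated with the larger MTU):

```shell
# Raise the MTU on the iSCSI vSwitch
esxcfg-vswitch -m 9000 vSwitch2

# Recreate the VMkernel port with a 9000-byte MTU
esxcfg-vmknic -d iSCSI-10
esxcfg-vmknic -a -i 10.10.10.1 -n 255.255.255.0 -m 9000 iSCSI-10
```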

Also make sure the duplex matches up on both ends as well.
