VMware Cloud Community
eloysierra
Contributor

NetApp iSCSI best configuration?

Hello ...

I have 3 ESX 4 servers connected to a NetApp FAS2020 array with dual active/active controllers. Each controller has 2 interfaces (e0a and e0b).
Reading the NetApp documentation, there are two connection options: using LACP, or using standard interfaces with the VMware software iSCSI initiator and MPIO, creating 2 or more VMkernel ports.

I opted for the second option, dedicating 2 physical NICs in each ESX server to iSCSI. I created a vSwitch for iSCSI with 2 VMkernel ports,

vswitch.jpg

each VMkernel port overriding the NIC teaming so that it uses only one adapter, with the other set as an Unused Adapter.
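
If I understood the VMware guide correctly, the two VMkernel ports then get bound to the software iSCSI adapter from the service console, more or less like this (vmhba33, vmk1 and vmk2 are the names in my setup; yours may differ):

esxcli swiscsi nic add -n vmk1 -d vmhba33    # bind the first iSCSI VMkernel port to the software initiator
esxcli swiscsi nic add -n vmk2 -d vmhba33    # bind the second iSCSI VMkernel port
esxcli swiscsi nic list -d vmhba33           # verify which vmk ports are bound to the adapter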

On the array side, I configured the e0a interface of each controller with the e0a interface of the other controller as its partner, and also assigned a second IP address as an alias. So I have: e0a on controller "A" with IP 192.168.1.17 and alias 192.168.1.40; e0b on controller "A" with IP 192.168.2.17 and alias 192.168.2.40; e0a on controller "B" with IP 192.168.1.18 and no alias; and e0b on controller "B" with IP 192.168.2.18 and no alias.

netapp.jpg
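
In filer terms, the configuration on controller "A" comes down to something like this (a sketch of the /etc/rc lines; the netmask is just an example, I am assuming e0b is partnered with e0b in the same way, and controller "B" has the matching lines with the .18 addresses and no alias):

# controller "A" /etc/rc (sketch)
ifconfig e0a 192.168.1.17 netmask 255.255.255.0 partner e0a
ifconfig e0a alias 192.168.1.40 netmask 255.255.255.0
ifconfig e0b 192.168.2.17 netmask 255.255.255.0 partner e0b
ifconfig e0b alias 192.168.2.40 netmask 255.255.255.0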

Now, in the VMware iSCSI initiator configuration, I add only the IP 192.168.1.17 under "Dynamic Discovery", and the four IP addresses then appear on the "Static Discovery" tab.

static discovery.jpg

Once this is added, each datastore shows up with 4 available paths:

paths.jpg
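
If it is useful, this is roughly how I look at the paths from the service console, and what I understand would switch a LUN to round robin if I wanted to (the naa identifier is just a placeholder):

esxcfg-mpath -l                                                   # list all paths the host sees
esxcli nmp device list                                            # show the multipath policy per device
esxcli nmp device setpolicy --device naa.xxxx --psp VMW_PSP_RR    # optional: change one device to round robin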

Is this a correct configuration? Can it be improved?
If I add more alias IP addresses, the paths multiply; the same happens if I add more VMkernel ports to the vSwitch.

Doing this on the 3 ESX servers, won't it overload the array's connections?

How can I see if packets are being dropped? (See the counters sketched below.)
I also have the possibility of adding 2 more NICs to each ESX server for the iSCSI vSwitch, as well as adding 2 more VMkernel ports to each server. Good idea or not?
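
For the drops, the only thing I have found so far is to watch the counters, something like this (I am not sure it is the right place to look):

esxtop        # on the ESX host: press "n" for the network view and watch %DRPTX / %DRPRX on the iSCSI vmk ports
netstat -i    # on the filer: per-interface error counters
ifstat e0a    # on the filer: detailed statistics for one interface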

Thanks...

7 Replies
beckham007fifa

Add additional cards to the configuration; otherwise you might have problems doing vMotion, and there can be storage connectivity loss with reservation errors if you have all the traffic flowing through just 2 NICs.

Regards, ABFS
eloysierra
Contributor

Hello....

The servers have 10 network cards, 2 of which I dedicate to vMotion, 2 to the service console and 4 to virtual machines.

Is that correct?

beckham007fifa

Hi,

How many for iSCSI?

Regards, ABFS
eloysierra
Contributor

Hi...

2 NICs for iSCSI.

beckham007fifa

So far so good... your configuration should be fine.

Regards, ABFS
eloysierra
Contributor

Statistics and performance

Hello again ....

I'm in the testing phase, and I don't know if the results I'm getting are good or bad.

The only load in the tests is cloning a virtual machine hosted on a local disk of an ESX server to the FAS2020 array, and from the graphs I get the following measurements:

In the MbTX/s column of esxtop's network view (n), vmk2 and vmk3 (iSCSI) do not exceed 160 Mb. Is that normal?

In the NetApp graphs, the protocol latency is around 5 ms.

The array has 13 SAS 15k rpm disks.

uso 2020.jpg

uso clon.jpg
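
To keep the numbers from the clone test, I capture esxtop in batch mode and look at the vmk columns afterwards, more or less like this (the interval, count and file name are arbitrary):

esxtop -b -d 5 -n 120 > clone_test.csv    # batch mode: 5-second samples, 120 iterations, saved as CSV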

eloysierra
Contributor

Hello again....

I am continuing with my performance tests, and there is something I do not understand:

A VMware document states: "When you set up multipathing between two iSCSI HBAs and multiple ports on a NetApp storage system, give the two HBAs different dynamic or static discovery addresses to connect to the storage. The NetApp storage system only permits one connection for each target and each initiator. Attempts to make additional connections cause the first connection to drop. Therefore, a single HBA should not attempt to connect to multiple IP addresses associated with the same NetApp target."

On the other hand, on the FAS2020 I set the parameter "iscsi.max_connections_per_session" to 32 (the maximum), and yet the command "iscsi session show -p" tells me that for each session the "max connection" is 1.
Sin título.jpg
Also, if I generate load on the array, the "iscsi session show -v" command shows me the message "Seq / xxx ..... Scsidb_RD_WaitingBurst" over and over.

Is all this normal, or do I have to set some additional parameters?
iscsi sesion show v.jpg
Thanks