VMware Cloud Community
santomaurob
Contributor

VMware software iSCSI HBA Question

Hi all,

     I wanted to ask you guys a quick question. I recently started a new job at a school district. When I got onsite, I found a 3-node vSphere cluster. The nodes are Dell R740 servers. Each of these hosts has two built-in SFP+ ports (Intel X710) and one Intel X710 dual-SFP+ expansion card. It seems the previous team was never able to finish setting this all up, because nothing was configured properly: all traffic was flowing through the management NIC (VM client network, iSCSI, and management), and vMotion was not set up properly either. The first thing I did was add some DAC cables and set up vMotion, which is working fine. I then had our purchasing department order three more Intel X710 dual-SFP+ cards, which I installed in each host.

       Now yesterday I reconfigured the software iSCSI HBA in each host. Our Dell EMC SAN has two controllers (SP A & SP B), and each controller has four SFP+ connections back to our core switch. The iSCSI ports on our SAN are on two different VLANs, each with its own subnet. I read in the VMware docs that when setting up the software iSCSI HBA against targets on different subnets, network port binding should not be used if the vmks will each be on different subnets as well. So this is what I did: each host has one vmk on subnet A and one vmk on subnet B, and each vmk is attached to its own vSwitch. Now everything is working fine. I was able to set up the static targets with no issue, and rescanning the HBA updated all the paths. Per Dell's documentation, I set all iSCSI storage devices to use Round Robin as their multipathing policy. The first rescan on the first host took a little longer than usual, but everything after that was fine.
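
For reference, the multipathing policy can also be checked and set from the ESXi shell, something like this (naa.xxx is just a placeholder for one of your LUN device IDs):

    # show all devices with their Path Selection Policy;
    # Round Robin shows up as VMW_PSP_RR
    esxcli storage nmp device list

    # set Round Robin on a single device if it isn't already
    esxcli storage nmp device set --device naa.xxx --psp VMW_PSP_RR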

        My question is: is there any way to verify that the iSCSI traffic is going out the intended vmk ports? Is there a CLI command or an area in the vCenter GUI to confirm this? I was able to see on our Dell EMC that the hosts are now connected via one of the IP addresses from subnet A. It doesn't show subnet B, but since I didn't configure gateways on these vmk ports, I don't see any way for subnet A to reach subnet B from the hosts. Maybe I am overthinking this, but I just want to make sure that the iSCSI traffic is no longer going out over the MGMT vmk. Sorry for the long post, and thank you for your time.

compdigit44
Enthusiast

a_p_
Leadership

The two subnets that you've configured must not be routed, i.e. port 1 from the ESXi host connects to, e.g., port 1 on each SP, and ESXi port 2 connects to port 2 on each SP.

To monitor the traffic, you can either use the GUI or esxtop from the ESXi host's command line.
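
For example, something along these lines from the shell (vmhba64 is just a placeholder; esxcli iscsi adapter list will show the actual name of your software iSCSI adapter):

    # interactive view: press 'n' for the network panel, which shows
    # per-port Tx/Rx throughput and the uplink each vmk is using
    esxtop

    # list active iSCSI connections with their local (vmk) and
    # remote (SP) IP addresses
    esxcli iscsi session connection list --adapter vmhba64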

André

santomaurob
Contributor

Hi comp,

     Thank you. I will read through this documentation and give it a try.

santomaurob
Contributor

Hi Andre,

     Thank you for your reply. The SAN and the hosts do not connect directly to each other, but rather to our core switch stack. The switch ports have VLANs set up that are tied to the different subnets, so VLAN A is subnet A, B to B, etc. The ports from the SAN and the hosts that are connected to subnet A are configured for VLAN A only (no trunking, just access ports on that VLAN), and the same goes for the ports on subnet B. The vmks also do not have gateways configured, so I think ESXi just assigned them the default. I am not sure if this qualifies as what you were describing above about not being routed, but at this point directly connecting the hosts to the SAN is not really an option. I am going to try the traffic monitoring comp mentioned above and see if I can figure it out that way. Thanks again for your help.
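
If it helps, a quick way to sanity-check which vmk can reach which SP port is vmkping with an explicit interface (the vmk names and IPs below are just examples, not our real addresses):

    # force the ping out a specific vmkernel interface
    vmkping -I vmk1 10.0.10.50    # subnet A vmk -> subnet A SP port
    vmkping -I vmk2 10.0.20.50    # subnet B vmk -> subnet B SP port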

CityofRome
Contributor

     You should be able to look at the port stats for the management port(s) and the iSCSI ports. You should also be able to look at the port utilization of those ports and see which port(s) carry the most traffic. Of course, that all depends on how much real traffic you expect on your management ports. For us, we're using vSAN now, but we used to use Dell EqualLogic SANs. All of our production servers are VMs and they go through the management ports, but it's nowhere near the amount of traffic we used to see for iSCSI. Also, you should be able to initiate a large file copy or storage migration and look at the performance monitor for the host(s) involved to verify that traffic is going over the physical NICs you expect.
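
If you want to keep a record while the test runs, esxtop can also log to a file in batch mode (the filename here is just an example):

    # -b batch mode, -d seconds between samples, -n number of samples;
    # open the CSV afterwards and look at the per-vmnic Mb/s columns
    esxtop -b -d 5 -n 60 > iscsi-test.csv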

santomaurob
Contributor

Thank you for your reply, CityofRome. This has helped me a bit. Looking at this on each of the hosts, I was able to verify a decent amount of traffic running through the physical NICs the iSCSI vmks are attached to. The management NIC still has some traffic going through it, but that is because our VM client traffic is still going through that interface. We will soon be changing this and separating the VM client traffic off to its own 10 Gb links on its own vSwitch. Once I do that, I will continue to monitor the interfaces, and at that point I should see the traffic on the management NIC go down quite a bit. Thanks again.
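
For anyone curious, the separation I have in mind is roughly this from the shell (vSwitch2, vmnic4, and the port group name are placeholders for whatever we end up using):

    # new standard vSwitch for VM client traffic only
    esxcli network vswitch standard add --vswitch-name vSwitch2

    # attach a dedicated 10Gb uplink to it
    esxcli network vswitch standard uplink add --uplink-name vmnic4 --vswitch-name vSwitch2

    # port group the VMs will connect to
    esxcli network vswitch standard portgroup add --portgroup-name "VM Network 10G" --vswitch-name vSwitch2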

PraveenBatta1
VMware Employee

The following will help you understand the vSphere networking:

Log in to vCenter -> go to Networking -> select any virtual switch -> click on Ports. Here you can see which ports of that virtual switch are connected to which VLANs/subnets.

Now you can search for the port that is serving the iSCSI traffic.

IRIX201110141
Champion

Hello,
Because we are a Dell shop and a Dell partner as well, please tell us which Dell EMC SAN model you have and what your switch fabric looks like. If it's a Dell SC, aka "Compellent", then you have to understand the concept of Compellent "fault domains", for example.

On the other hand, speaking of SP A and SP B sounds more like Unity terminology.

Since ESXi has the concept of software iSCSI with and without port binding, how you have to configure ESXi depends on your SAN storage. It doesn't start on the ESXi side.
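
A quick way to see which mode a host is currently in (vmhba64 is again a placeholder for your software iSCSI adapter name):

    # lists vmkernel ports bound to the adapter; an empty list means
    # no port binding is in use, which is what you want for a
    # multi-subnet setup like yours
    esxcli iscsi networkportal list --adapter vmhba64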

Btw., in most of our iSCSI setups 4x10G is more than enough, and we don't see a need for more pNICs. Just take two of them for iSCSI and, when possible, avoid placing other kinds of traffic on them. The other 2x10G are for all kinds of LAN and ESXi traffic, separated by using VLANs. But having more pNICs rather than fewer never hurts 😉

Regards,
Joerg
