Hi,
I have a PS6000, two PowerConnect 5524 switches and two R710 servers.
Ports 1 and 2 of my SAN are on switch 1 and ports 3 and 4 on switch 2. The ports are tagged on VLAN 10 on both switches; the switches are not stacked.
The four ports of my SAN are:
Port 1 10.10.1.10 on switch 1
Port 2 10.10.1.20 on switch 1
Port 3 10.10.1.30 on switch 2
Port 4 10.10.1.50 on switch 2
Virtual SAN IP (group IP): 10.10.1.200
My R710 has vmnic3 on switch 1 and vmnic5 on switch 2; both ports are tagged on VLAN 10.
With this setup I can't see my SAN over 4 paths.
If I set up a VMkernel port iSCSI1 (10.10.1.50) on vmnic3/vmk0 on switch 1, I can see my SAN at 10.10.1.10, 10.10.1.20 or 10.10.1.200.
If I set up a VMkernel port iSCSI2 (10.10.1.60) on vmnic5/vmk1 on switch 2, I can't see my SAN.
If I delete iSCSI1, I can see my SAN through iSCSI2 and vmnic5.
If I set up iSCSI1 and iSCSI2 on separate VLANs, I can see my SAN over both NICs, so I have 4 paths to the SAN.
I saw that:
~ # esxcfg-route -l
VMkernel Routes:
Network Netmask Gateway Interface
10.10.1.0 255.255.255.0 Local Subnet vmk0
10.10.3.0 255.255.255.0 Local Subnet vmk4
10.10.4.0 255.255.255.0 Local Subnet vmk3
default 0.0.0.0 10.10.1.254 vmk0
Maybe a routing problem?
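To rule routing in or out, here is a quick sanity check (a sketch in Python; the addresses are copied from above) that every interface sits in the same /24, so the VMkernel routing table shouldn't need any extra routes:

```python
import ipaddress

# Addresses from the setup described above
san_ports = ["10.10.1.10", "10.10.1.20", "10.10.1.30", "10.10.1.50"]
vmk_ports = ["10.10.1.50", "10.10.1.60"]   # iSCSI1 / iSCSI2 VMkernel IPs
group_ip = "10.10.1.200"

subnet = ipaddress.ip_network("10.10.1.0/24")

def all_in_subnet(addrs, net):
    """Return True if every address falls inside the given network."""
    return all(ipaddress.ip_address(a) in net for a in addrs)

print(all_in_subnet(san_ports + vmk_ports + [group_ip], subnet))  # True
```

Everything is in 10.10.1.0/24, so traffic never needs a gateway; the question is whether VLAN 10 can actually pass between the two switches at Layer 2.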
Thx
No ideas?
You cannot use two different networks for iSCSI... see the configuration guide on the Dell EqualLogic site.
All interfaces and the group IP must be on the SAME network. You must connect the two PowerConnect switches with a LAG (I suggest at least 4 cables).
Andre
I've set up EQLs with vSphere many times. Dell and EQL recommend a one-to-one ratio of iSCSI VMkernel ports to physical NICs. If you have 2 NICs per host dedicated to iSCSI, you should have 2 VMkernel ports on each host (and remember, they need to be configured exactly the same on every host for vMotion to work). They also recommend the same number of NICs on each host as you have on the SAN controller: 4 SAN controller iSCSI NICs, 4 NICs per server for iSCSI.
Regardless of the physical paths and redundant NICs, each VMkernel port represents only one logical path.
Here's how I would do it with a 6000-series EQL (4 ports per controller):
R710 #1:
VMkernel Port 1 - Physical NIC 1 - Switch A
VMkernel Port 2 - Physical NIC 2 - Switch A
VMkernel Port 3 - Physical NIC 3 - Switch B
VMkernel Port 4 - Physical NIC 4 - Switch B
R710 #2:
VMkernel Port 1 - Physical NIC 1 - Switch A
VMkernel Port 2 - Physical NIC 2 - Switch A
VMkernel Port 3 - Physical NIC 3 - Switch B
VMkernel Port 4 - Physical NIC 4 - Switch B
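On ESX/ESXi 4.x, after creating those VMkernel ports, each one also has to be bound to the software iSCSI adapter or you still get only one logical path. A rough sketch (the vmhba33 name and the vmk numbers are examples, not taken from your host; list your adapters with esxcfg-scsidevs -a first):

```
# bind each iSCSI VMkernel port to the software iSCSI HBA
esxcli swiscsi nic add -n vmk0 -d vmhba33
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33
esxcli swiscsi nic add -n vmk3 -d vmhba33

# verify the bindings
esxcli swiscsi nic list -d vmhba33
```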
A 4-port 802.1Q LAG between Switch A and Switch B, allowing all VLANs to pass through. As Andre said, put all iSCSI NICs on the same VLAN; I can't see why you would need or want several iSCSI VLANs.
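For reference, a static 4-port LAG trunking VLAN 10 between the two PowerConnect 5524s would look roughly like this (a sketch from memory; the port numbers are placeholders and the exact CLI syntax varies by firmware, so check the 5524 CLI guide):

```
console# configure
console(config)# interface range gigabitethernet 1/0/21-24
console(config-if-range)# channel-group 1 mode on
console(config-if-range)# exit
console(config)# interface port-channel 1
console(config-if)# switchport mode trunk
console(config-if)# switchport trunk allowed vlan add 10
```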
Dell EQL:
Plug 2 NICs per controller into each switch, so each controller is also multipathed through both switches.
In this configuration, VMware will see 4 paths to the storage, which is what there should be with 4 NICs per controller. You also get full redundancy, since each host has 2 paths per switch.
I'm very familiar with EQLs, so if you have any other questions just ask. I'm actually setting up a PS4000 this week.
You mean on the same physical network?
I will test with a LAG between the switches.
It's OK with a trunk between the switches... :smileylaugh:
What is the best way, a LAG or stacking the switches?
Hi,
I would recommend a LAG, because then you have two switches that work independently of each other. The benefits of a stack are higher bandwidth (12 Gbps, if I recall correctly), fewer Ethernet ports used and a single config to maintain. But the LAG setup is often more stable when you need to reboot one switch or lose one switch.
I would also suggest that you look into installing the Dell EqualLogic MEM driver on each ESX/ESXi host. It can be downloaded from the support page at www.equallogic.com. The MEM package includes an installation script that will create and configure the VMkernel ports automatically.
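The MEM install is scripted; roughly like this (the flags are from memory of the MEM setup script and the host name is a placeholder, so treat them as assumptions and check the user guide that ships in the bundle):

```
# install the multipathing bundle on a host (example names)
setup.pl --install --server=esx01.example.local --bundle=dell-eql-mem-<version>.zip

# then let the script create and bind the iSCSI VMkernel ports
setup.pl --configure --server=esx01.example.local
```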
I have an Essentials license and MEM is not compatible with it.
OK, I will test with a LAG of 4 ports.
Everything is fine with a LAG between the 2 switches, thanks for your help!