egaulue
Contributor

Teaming trouble: Direct 10GbE connection to 2 NAS with HA (no switch)

Dear community,

I have 2 hosts ESXi1 and ESXi2 with 2 ports (10 GbE).

I have 2 NAS (Synology) in a High Availability pair (Active/Passive). Each NAS has its own IP addresses, and the pair also exposes HA (virtual) IP addresses that always belong to the currently active NAS.

By configuring /etc/hosts I managed to get my two ESXi hosts to communicate with the active NAS. But after a failover, the ESXi hosts can't connect to the previously passive NAS, now active.

Here is the configuration:

ESXi1 vmk2 192.168.253.1 attached to nic2 & nic3

ESXi2 vmk2 192.168.253.2 attached to nic2 & nic3

NAS1 eth4 192.168.253.31 --> linked to ESXi1 nic2

NAS1 eth5 192.168.253.32 --> linked to ESXi2 nic2

NAS2 eth4 192.168.253.41 --> linked to ESXi1 nic3

NAS2 eth5 192.168.253.42 --> linked to ESXi2 nic3

Virtual addresses on the currently active NAS (NAS1 in this example):

NASHA eth4 192.168.253.51

NASHA eth5 192.168.253.52

/etc/hosts on ESXi1 :

nasha 192.168.253.51

/etc/hosts on ESXi2 :

nasha 192.168.253.52

Presently, due to the NIC teaming and failover order on ESXi1 and ESXi2, only nic2 or nic3 appears to be used at any given time, not both. If I fail over to the other NAS, the link on nic2 is still considered UP, because the failover only swaps the Active and Passive roles on the two NAS units; the physical links never go down. So ESXi1 and ESXi2 can't reach NAS2 on nic3...
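For context, this is roughly how the teaming state can be inspected and adjusted from the ESXi shell. This is only a sketch: the vSwitch name (vSwitch1) and port group name (NFS-PG) are assumptions, not taken from my actual setup.

```shell
# Show the current teaming/failover policy on the storage vSwitch
# (vSwitch name "vSwitch1" is an assumption)
esxcli network vswitch standard policy failover get -v vSwitch1

# Set both uplinks as active on the vmk2 port group so neither NIC
# sits idle in standby (port group name "NFS-PG" is an assumption)
esxcli network vswitch standard portgroup policy failover set -p NFS-PG -a vmnic2,vmnic3
```

Note that link-state based failover cannot help here anyway, since the physical links to both NAS units stay UP during an HA switchover.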

Is there any simple configuration I'm missing, or is NFSv4 multipathing the only solution?
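If NFSv4.1 multipathing is indeed the way to go, I understand the mount would look something like this (a sketch, assuming ESXi 6.0+ with an NFS 4.1 export; the share path /volume1/datastore and datastore name nasha-ds are placeholders, and the Synology side must actually support NFS 4.1):

```shell
# Mount one datastore over NFS 4.1, giving ESXi both HA addresses
# so it can keep a session to whichever path is reachable
esxcli storage nfs41 add -H 192.168.253.51,192.168.253.52 -s /volume1/datastore -v nasha-ds

# Verify the mount and its configured server addresses
esxcli storage nfs41 list
```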
