VMware Cloud Community

Teaming trouble: Direct 10 GbE connection to 2 NAS with HA (no switch)

Dear community,

I have 2 hosts ESXi1 and ESXi2 with 2 ports (10 GbE).

I have 2 NAS (Synology) in High Availability (Active/Passive). Each NAS has its own IP addresses, and the HA cluster also exposes virtual HA IP addresses that always point to the active NAS.

By editing /etc/hosts, I managed to get both ESXi hosts to communicate with the active NAS. But after a failover, the hosts can't connect to the former passive NAS, which is now active.

Here is the configuration:

ESXi1 vmk2 attached to nic2 & nic3

ESXi2 vmk2 attached to nic2 & nic3

NAS1 eth4 --> linked to ESXi1 nic2

NAS1 eth5 --> linked to ESXi2 nic2

NAS2 eth4 --> linked to ESXi1 nic3

NAS2 eth5 --> linked to ESXi2 nic3

Virtual addresses on the currently active NAS (NAS1 in this example):

NASHA eth4

NASHA eth5

/etc/hosts on ESXi1:


/etc/hosts on ESXi2:


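For context, the entries look roughly like this (the IPs and hostnames below are placeholders for illustration, not my real ones):

```
# /etc/hosts sketch -- placeholder addresses
10.0.2.1   nasha        # HA virtual address on the link wired to this host's nic2
10.0.3.1   nasha-alt    # HA virtual address on the link wired to this host's nic3
```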
Currently, because of the NIC teaming on ESXi1 and ESXi2 and the teaming failover order, only nic2 or nic3 seems to be used at any one time, never both. If I fail over the active NAS, the link on nic2 is still reported as UP, because all I changed was the Active/Passive roles on the two NAS: the physical links never went down. So ESXi1 and ESXi2 can't reach NAS2 via nic3...
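This can be confirmed from the ESXi shell: the teaming policy uses link status for failure detection, and both uplinks keep reporting "Up" after the NAS failover, so no failover is triggered. A rough check (the vSwitch name is an assumption; substitute your own, and note ESXi names the uplinks vmnicN):

```shell
# Show the teaming/failover policy for the storage vSwitch
# (active/standby order and failure-detection mode):
esxcli network vswitch standard policy failover get -v vSwitch1

# List physical NIC link states -- after a NAS role swap both
# vmnics should still show "Up", which is exactly the problem:
esxcli network nic list
```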

Is there a simple configuration I'm missing? Or is NFSv4.1 multipathing the only solution?
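For reference, if it does come down to NFSv4.1 multipathing, the mount would look roughly like this on each host (the IPs, export path, and datastore name below are placeholders):

```shell
# Mount one NFS 4.1 datastore over both direct links; ESXi will
# multipath across the comma-separated addresses.
# 10.0.2.1 / 10.0.3.1 and /volume1/datastore1 are placeholders.
esxcli storage nfs41 add -H 10.0.2.1,10.0.3.1 -s /volume1/datastore1 -v nfs41-ds
```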
