Configuring the LAG is quite simple; follow this VMware Knowledge Base article.
One important thing to understand: if you have, for example, one source and one flow of traffic, the LAG will NOT aggregate your throughput. You will end up using only one NIC.
Have a read of this: VMware Knowledge Base
You will need to setup the "route based on IP hash" policy, but won't get both NICs active unless you have multiple IP sessions established between your VM and your file server.
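If you do go the static-EtherChannel route on a Standard vSwitch, the policy can also be set from the ESXi shell. A minimal sketch, assuming the vSwitch is named vSwitch0 (an example name) and the physical switch ports are already configured as a matching static EtherChannel, which IP hash requires:

```shell
# Set the teaming policy on the Standard vSwitch to IP hash
# (vSwitch0 is an example name; adjust to your environment).
esxcli network vswitch standard policy failover set \
    --vswitch-name=vSwitch0 \
    --load-balancing=iphash

# Verify the resulting failover policy
esxcli network vswitch standard policy failover get --vswitch-name=vSwitch0
```

Remember that even with IP hash, a single source/destination IP pair will still hash onto a single uplink.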
Note also the difference between Standard and Distributed vSwitches regarding LACP support.
Link Aggregation like this is unfortunately not supported on Standard vSwitches.
Note: LACP is only supported in vSphere 5.1, 5.5, 6.0, 6.5 and 6.7 using vSphere Distributed Switches (VDS) or the Cisco Nexus 1000v.
I just configured ESXi 7 Essentials with vCenter on this host. The host has four 1 Gb/s NICs, and I don't know what the best practices for network configuration are. 4 to 5 VMs will be installed on this host; one of them will be "network resources hungry", because it will be the VM with Veeam Backup and Replication, connected via iSCSI to a QNAP using the same switch between them.
My management and server VLAN are on the same network, so how should I configure my network interfaces?
NIC0 - management only and vCenter
NIC1 and NIC2 - set both as active and assign to the VM with Veeam (should I set up LACP / interface bonding here?)
NIC3 - for the rest of the VMs
Could you give me any advice?
Since there's no real benefit from bonding the NICs, I'd put this aside.
If you want to use a dedicated NIC for Veeam, but would like to have redundancy without wasting NICs, then consider attaching all four vmnics to vSwitch0 (all of them active), creating a dedicated VM port group for Veeam with e.g. vmnic3 active and the other vmnics as standby, and configuring the Management and default VM port groups with vmnic0-2 active and vmnic3 standby.
This way your traffic is separated, and each port group uses its standby vmnics only in case of a network connection issue.
vSwitch0 - vmnic0-3 (active) - Default settings "Route based on the originating virtual port ID"
Management - vmnic0-2 (active), vmnic3 (standby)
VM Network - vmnic0-2 (active), vmnic3 (standby)
Veeam VM Network - vmnic3 (active), vmnic0-2 (standby)
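For reference, the same layout can be applied from the ESXi shell with esxcli. The port-group names below are examples; adjust them to match your environment:

```shell
# Override the vSwitch-level failover order per port group
# (port-group names are examples; use your own).
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name="Management Network" \
    --active-uplinks=vmnic0,vmnic1,vmnic2 --standby-uplinks=vmnic3

esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name="VM Network" \
    --active-uplinks=vmnic0,vmnic1,vmnic2 --standby-uplinks=vmnic3

esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name="Veeam VM Network" \
    --active-uplinks=vmnic3 --standby-uplinks=vmnic0,vmnic1,vmnic2
```

Setting an explicit active/standby list at the port-group level is the CLI equivalent of ticking the "Override" checkbox in the Failover Order settings of the UI.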
Thanks Andre, but all this vSphere networking, port groups, and vmkernel stuff is over my head; I don't understand it yet.
Andre, I just added vmnic0 and vmnic3 to my vSwitch, both as Active. (For now I'll be working with two Ethernet ports.)
Now on management I have set vmnic0 as active and vmnic3 as standby.
But when I try to set the VM network to have vmnic3 as active and vmnic0 as standby, I can't, because everything there is grayed out.
Are you saying that even the "Override" checkbox in the "Failover Order" settings can't be enabled?
We have a lot of small customers with QNAPs or similar home NAS systems. Most of the time they are unable to fill a 1 Gbit pipe as soon as they deal with non-streaming write traffic, because of the lack of spindle count and CPU power. Yes, I have also seen "large" QNAPs.
For me it's not totally clear whether you use iSCSI from the QNAP to ESXi, or iSCSI into the guest OS?
- LACP/LAG is unsupported for iSCSI
- As mentioned, you need a vDS to support LAG
- Also as already mentioned, you need multiple "sender" addresses to get LACP working if your main goal is increasing the bandwidth. With a single TCP connection this will not work, and you will use one NIC/line all the time
If you use iSCSI into the guest, I can think of the following:
- Configure 2 vNICs in the Windows VM, connected to 2 different port groups, each with one active uplink
- Try to configure MPIO within Windows and see if you can create multiple iSCSI sessions in the initiator together with something like Round Robin. I'm an old EqualLogic guy and know that it's possible; I have also done it with an iSCSI bridge for an LTO drive
- Format the Windows disk with ReFS, not NTFS
- Don't use Reverse Incremental forever mode in Veeam
- Keep in mind that a Veeam proxy with HotAdd will compress the backup data about 2:1, which means filling a 1 Gbit pipe needs roughly 200 MB/s of read throughput on the ESXi side.
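As a rough sketch of the MPIO suggestion above, run inside the Windows guest in an elevated command prompt. The QNAP portal address is an example, and the exact mpclaim flags should be double-checked against Microsoft's documentation before use:

```shell
:: Sketch: in-guest iSCSI with MPIO (portal IP is an example).

:: Enable MPIO support for iSCSI-attached devices (-r triggers a reboot)
mpclaim -r -i -d "MSFT2005iSCSIBusType_0x9"

:: After the reboot: set the load-balance policy to Round Robin (policy 2)
mpclaim -L -M 2

:: Add the QNAP target portal and list discovered targets
iscsicli QAddTargetPortal 192.168.1.50
iscsicli ListTargets

:: Then log in once per source NIC so each session uses a different path
:: (session-to-NIC binding is easiest to do in the iSCSI Initiator GUI).
```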
I have a customer with a physical Windows server for backup with 2x 1 Gbit. We set up a team with dynamic LACP in Windows and also on the physical switch. We increased the number of proxy servers from one to three (one per ESXi host) to create multiple sender addresses (IP/MAC hashed), and we were able to increase network saturation from 0.9 up to 1.7 Gbit/s.
I didn't know that I had to check "Override"; now it's OK.