normalguy1234
Contributor

4 x NIC ESXi host configuration in cluster

Small office with 8-10 VMs max. Going to purchase 2 x hosts + SAN (using iSCSI). Each host has a 4-port copper NIC. The hosts will go into a cluster within vSphere. The switch stack will only have 1 Gbps links for all traffic. Considering the following options and would like some feedback from others.

Option 1

1 x NIC for Management and vMotion. Enable Network I/O Control and limit vMotion traffic to make sure it doesn't kill management traffic (note that NIOC requires a vSphere Distributed Switch)

1 x NIC for Guest VM Traffic

2 x NICs for iSCSI, configured for iSCSI port binding to take advantage of multipathing (see the sketch below the options)

Option 2

1 x NIC each for MGMT, vMotion, iSCSI and Guest VM Traffic
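For reference, the port binding in Option 1 boils down to something like the following from the ESXi shell. This is a sketch, assuming the two iSCSI VMkernel ports are vmk1/vmk2 on port groups ISCSI-1/ISCSI-2 with uplinks vmnic2/vmnic3, and the software iSCSI adapter is vmhba34 (all placeholder names; check yours with esxcli iscsi adapter list):

# Each bound port group must have exactly one active uplink and no standby
esxcli network vswitch standard portgroup policy failover set -p ISCSI-1 --active-uplinks vmnic2
esxcli network vswitch standard portgroup policy failover set -p ISCSI-2 --active-uplinks vmnic3

# Bind both VMkernel ports to the software iSCSI adapter for multipathing
esxcli iscsi networkportal add --adapter vmhba34 --nic vmk1
esxcli iscsi networkportal add --adapter vmhba34 --nic vmk2

# Verify the bindings
esxcli iscsi networkportal list --adapter vmhba34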

Accepted Solution
IRIX201110141
Champion

If this cluster has HA enabled, which I assume, then both options are insufficient, because from a host perspective the HA (management) network needs to be redundant, so you need 2 NICs there (Active/Active or Active/Standby).

- There is an advanced option (das.ignoreRedundantNetWarning) to suppress the "missing redundancy" warning that comes up, but that doesn't solve the problem

- You can use the HA advanced options to specify another IP (the default is the gateway address) as the isolation address, one that is reachable over the two iSCSI NICs. But you can't use the iSCSI VMKs directly, because you told us iSCSI port binding is used, which means two VMKs, each with exactly one Active/Unused NIC. So a VMK named e.g. "Dummy/HA" plus a VLAN IP on the physical switch is needed
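A rough sketch of that extra heartbeat VMK from the ESXi shell, assuming a port group named Dummy-HA on the iSCSI vSwitch, a free vmk3, and a placeholder address in the iSCSI VLAN (adjust all names and IPs to your environment):

# Create a port group for the HA heartbeat VMK (normal teaming, both uplinks usable)
esxcli network vswitch standard portgroup add -p Dummy-HA -v vSwitch1

# Add the VMkernel interface and give it a static IP in the iSCSI VLAN
esxcli network ip interface add -i vmk3 -p Dummy-HA
esxcli network ip interface ipv4 set -i vmk3 -I 192.168.50.13 -N 255.255.255.0 -t static

The matching isolation address is then set per cluster in the HA advanced options: das.isolationaddress0 pointing at the VLAN IP on the physical switch, plus das.usedefaultisolationaddress=false.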

Or...

vSwitch0 (2 Uplinks)

- Management(VLAN100)

- vMotion(VLAN110)

- LAN(VLAN120)

vSwitch1 (2 Uplinks)

- ISCSI-1

- ISCSI-2

Management and vMotion use the same Active/Standby NIC pair, and LAN uses the opposite Active/Standby combination. The drawback is that all VMs share only 1 Gbit of bandwidth.
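One way to express that teaming from the ESXi shell (a sketch, assuming the port group names above and uplinks vmnic0/vmnic1 on vSwitch0):

# Management and vMotion prefer vmnic0 and fail over to vmnic1
esxcli network vswitch standard portgroup policy failover set -p Management --active-uplinks vmnic0 --standby-uplinks vmnic1
esxcli network vswitch standard portgroup policy failover set -p vMotion --active-uplinks vmnic0 --standby-uplinks vmnic1

# LAN (the VM traffic) prefers vmnic1 and fails over to vmnic0
esxcli network vswitch standard portgroup policy failover set -p LAN --active-uplinks vmnic1 --standby-uplinks vmnic0

# Tag the VLANs on the port groups
esxcli network vswitch standard portgroup set -p Management --vlan-id 100
esxcli network vswitch standard portgroup set -p vMotion --vlan-id 110
esxcli network vswitch standard portgroup set -p LAN --vlan-id 120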

Next time add more NICs, or go 10GbE, or better yet use a shared SAS SAN for such small setups.

Regards
Joerg


3 Replies

HassanAlKak88
Expert

Hello,

With the existing hardware/specs, we'd prefer to deploy the below:

One virtual switch for ESXi management, vMotion & VM traffic, with two uplinks Active/Active

The other two uplinks go to a second virtual switch for iSCSI

With this scenario we ensure availability for every traffic type, but you still have the bandwidth limitation of the 1 Gbps interfaces. If the throughput of your VMs is small, the Active/Active configuration will be enough; otherwise you have to add more network adapters with higher speeds.
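For reference, that Active/Active teaming can be set once at the vSwitch level so every port group inherits it (a sketch, assuming vSwitch0 with uplinks vmnic0/vmnic1):

# Make both uplinks active for everything on vSwitch0;
# port groups inherit this policy unless they override it
esxcli network vswitch standard policy failover set -v vSwitch0 --active-uplinks vmnic0,vmnic1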


If my reply was helpful, I kindly ask you to like it and mark it as a solution

Regards,
Hassan Alkak
normalguy1234
Contributor

Thanks for your input. In this particular case, guest VM traffic on the network does not exceed 200 Mbps at any point when looking at those metrics, so we should be in good shape with 1 Gbps links.
