VMware Cloud Community
stoute
Enthusiast

Network design

Hello everyone,

I’m looking for some advice on how to redesign my network.

Currently I have a cluster of 3 ESXi nodes managed by vCenter. Every server has 8x 1GbE network adapters.

My storage is iSCSI-based and sits on a dedicated iSCSI network.

I’m looking to replace my storage with a new system, and I’ve been testing vSAN with a hybrid solution.

To replace my storage I would like to go to an all-flash vSAN environment, but this requires a 10GbE network.

I’ve been reading a lot online but can’t seem to make up my mind on what to do.

Personally I liked this idea, but I’m not sure about the impact of mixing MTU sizes.

vSAN Network Architecture: vSAN Architecture Series #9 - YouTube

What would you guys advise me to do?

To give you an idea, this is my current design:

[attached image: current network design]

For every server I have one dual-port 10GbE adapter that can replace the current quad-port 1GbE adapter. With the hardware I currently have, what would be the best solution?

I could do it like this, but what I’m wondering is whether this won’t generate a lot of error traffic, because my management network and VM network don’t use jumbo frames.

[attached image: proposed design with all traffic on the 10GbE uplinks]

Or would this option be better: just keep the management and VM networks on 2x 1GbE links?

[attached image: proposed design with management and VM networks kept on 1GbE]

sk84
Expert

I would put Management, VM Networks, and vSAN on the same dvSwitch with 2x 10Gbit uplinks, with each uplink connected to a separate physical switch. Personally, I would even configure the iSCSI traffic on the dvSwitch, but that also depends on your physical switch configuration. Otherwise, if you have an interruption on one of the uplinks, you may run into an HA problem in some cases, because in a vSAN setup the HA communication is handled over the vSAN network and not over the management network.
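To illustrate, here is a minimal PowerCLI sketch of that layout. All names (dvSwitch01, DC1, esx01, vmnic4/vmnic5) are placeholders, not taken from your setup:

```powershell
# Create a dvSwitch with two uplinks and jumbo frames enabled
$vds = New-VDSwitch -Name "dvSwitch01" -Location (Get-Datacenter "DC1") `
    -NumUplinkPorts 2 -Mtu 9000

# One port group per traffic type
"Management", "VM-Network", "vMotion", "vSAN" | ForEach-Object {
    New-VDPortgroup -VDSwitch $vds -Name $_
}

# Add a host and move its two 10GbE NICs to the dvSwitch uplinks
Add-VDSwitchVMHost -VDSwitch $vds -VMHost "esx01"
Get-VMHostNetworkAdapter -VMHost "esx01" -Physical -Name vmnic4, vmnic5 |
    Add-VDSwitchPhysicalNetworkAdapter -DistributedSwitch $vds -Confirm:$false
```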

In addition, I would recommend using NIOC and configuring traffic shares so vSAN doesn't suffer when the uplinks are saturated.
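Enabling NIOC itself can be scripted; the per-traffic-type shares are then set on the dvSwitch. A short sketch using the vSphere API through PowerCLI (the switch name and the share scheme are just examples, not mandatory values):

```powershell
# Enable Network I/O Control on the dvSwitch
$vds = Get-VDSwitch -Name "dvSwitch01"
$vds.ExtensionData.EnableNetworkResourceManagement($true)

# The shares per system traffic type (e.g. vSAN = High, vMotion and
# Virtual Machine = Normal, Management = Low) can then be configured under
# dvSwitch > Configure > Resource Allocation in the vSphere Client.
```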

You can configure the MTU at (d)vSwitch level and override it at vmkernel level. That is, the (d)vSwitch has an MTU of 9000 configured, the vmkernel ports for vMotion and vSAN as well, and for the management vmkernel port you can use 1500. What is important is that the endpoints have the same MTU configured; the switches in the middle may also support a larger MTU. So a problem with this setup would only arise if VM 1 had an MTU of 9000 configured and VM 2 (or an external firewall, computer, etc.) only had an MTU of 1500. But if both endpoint devices have an MTU of 1500, it doesn't matter that the physical switches in the middle and the dvSwitch support an MTU of 9000.
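As a sketch, assuming the placeholder names from above and that vmk2/vmk3 are the vSAN and vMotion ports on the host:

```powershell
# Switch-level MTU: 9000 on the dvSwitch
Get-VDSwitch -Name "dvSwitch01" | Set-VDSwitch -Mtu 9000

# vmkernel-level MTU: jumbo frames only for the vSAN and vMotion ports
Get-VMHost "esx01" | Get-VMHostNetworkAdapter -VMKernel -Name vmk2, vmk3 |
    Set-VMHostNetworkAdapter -Mtu 9000 -Confirm:$false

# The management vmkernel port (vmk0) simply stays at its default MTU of 1500
```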
Another pitfall is when your jumbo frame communication hits a switch or switchport that is not configured for a larger MTU, for example if a switch in the middle is not configured for jumbo frames while you are using jumbo frames for vSAN, vMotion, or iSCSI. In this case the packets are dropped. So you must ensure that *all* switchports in the path that are used for ESXi hosts and storage are configured correctly.
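You can verify the jumbo frame path end to end with an oversized ping with the don't-fragment bit set (8972 bytes of payload plus 28 bytes of IP/ICMP headers equals 9000). A sketch via PowerCLI's esxcli wrapper; the host name and target IP are placeholders:

```powershell
# Jumbo frame path test from esx01's vSAN vmkernel port
$esxcli = Get-EsxCli -VMHost "esx01" -V2
$ping = $esxcli.network.diag.ping.CreateArgs()
$ping.interface = "vmk2"           # vSAN vmkernel port
$ping.host      = "192.168.10.12"  # vSAN IP of another host (placeholder)
$ping.df        = $true            # don't fragment
$ping.size      = 8972             # 8972 + 28 bytes of headers = 9000
$esxcli.network.diag.ping.Invoke($ping)

# Equivalent directly on the ESXi shell:
#   vmkping -I vmk2 -d -s 8972 192.168.10.12
# If this fails while a normal ping works, a device in the path drops jumbo frames.
```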

To make it short:

- Set the MTU of the (d)vSwitch to 9000.
- Set the MTU of the vSAN and vMotion vmkernel ports to 9000.
- Configure each physical ESXi switchport with an MTU of 9000, and if you like, do the same for the iSCSI storage switchports, iSCSI vmkernel ports, and vSwitch.
- Make sure the virtual machines and the management vmkernel port stay at 1500, as well as the uplinks and other switchports of your physical infrastructure.

---
Regards, Sebastian
VCP6.5-DCV // VCP7-CMA // vSAN 2017 Specialist
Please mark this answer as 'helpful' or 'correct' if you think your question has been answered correctly.