Hello guys,
Could you help me with a recommendation for creating a Standard vSwitch?
In my current environment we have two ESXi hosts, two switches (SAN and Core), and one storage array. Each ESXi host has 4 interfaces connected to the SAN switch and 4 interfaces connected to the LAN switch. We have two Standard vSwitches (one for LAN and one for SAN). Is this a recommended configuration?
Recently we bought three servers, two SAN switches, and two storage arrays to build a new environment. In this new environment we will keep using the current core switch for the LAN and will build a new SAN with the two new switches.
For this new environment, is it recommended to create two Standard vSwitches as in the current environment, or can we create just one Standard vSwitch?
Best Regards,
Josué
In my current environment we have two ESXi hosts, two switches (SAN and Core), and one storage array. Each ESXi host has 4 interfaces connected to the SAN switch and 4 interfaces connected to the LAN switch. We have two Standard vSwitches (one for LAN and one for SAN). Is this a recommended configuration?
This is a common configuration.
For this new environment, is it recommended to create two Standard vSwitches as in the current environment, or can we create just one Standard vSwitch?
You should separate those out into separate vSSs because the duties they perform are different.
daphnissov, many thanks.
In addition to what has already been said, take a look at the Best Practices guides for the storage to see how the storage vendor recommends setting up networking. Some vendors recommend creating separate vSwitches for SAN connectivity, with one uplink each.
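As a rough illustration of the one-vSwitch-per-uplink layout (this is a sketch, not taken from any vendor guide; the vSwitch, port group, and vmnic names and the IP addresses below are placeholders):

```shell
# Sketch: one Standard vSwitch per iSCSI uplink, one VMkernel port each.
# Names, vmnic numbers, and addresses are assumptions -- adjust to your host.

# First iSCSI path
esxcli network vswitch standard add --vswitch-name=vSwitch-iSCSI-1
esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch-iSCSI-1
esxcli network vswitch standard portgroup add --portgroup-name=iSCSI-1 --vswitch-name=vSwitch-iSCSI-1
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI-1
esxcli network ip interface ipv4 set --interface-name=vmk1 --type=static --ipv4=192.168.10.11 --netmask=255.255.255.0

# Second iSCSI path (separate vSwitch, separate subnet)
esxcli network vswitch standard add --vswitch-name=vSwitch-iSCSI-2
esxcli network vswitch standard uplink add --uplink-name=vmnic3 --vswitch-name=vSwitch-iSCSI-2
esxcli network vswitch standard portgroup add --portgroup-name=iSCSI-2 --vswitch-name=vSwitch-iSCSI-2
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=iSCSI-2
esxcli network ip interface ipv4 set --interface-name=vmk2 --type=static --ipv4=192.168.20.11 --netmask=255.255.255.0
```

The same layout can of course be built in the vSphere Client; check the vendor guide for the exact subnets and MTU before committing to addresses.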
André
That's right, André. Thanks, I'll check it out.
We will configure two Dell Storage SCv3020 arrays. If you have a recommended configuration for this storage, could you send it to me, please?
Best Regards,
Josué
I installed an SCv3020 a few weeks ago, and followed Dell's recommendation to use a separate vSwitch per iSCSI uplink.
Please remember that because each Fault Domain uses a different subnet, you must not configure explicit port binding, i.e. do not add the iSCSI VMkernel port groups to the iSCSI adapter's Network Port Binding configuration.
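Without port binding, the host discovers its paths through dynamic (Send Targets) discovery instead. A minimal sketch, assuming a software iSCSI adapter named vmhba64 and a placeholder portal address (use your adapter name and the portal addresses of your fault domains):

```shell
# Sketch: add an SC Series iSCSI portal via dynamic (Send Targets) discovery.
# vmhba64 and the portal IP are placeholders -- substitute your own values.
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=192.168.10.1:3260

# Rescan so the host logs in to the discovered target ports
esxcli storage core adapter rescan --adapter=vmhba64
```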
In addition to this, I configured the following settings based on "Dell EMC SC Series Best Practices with VMware vSphere 2019-09.pdf":
Please note that the settings must be configured before presenting the first LUN to the ESXi host, and that the host needs to be rebooted for some of the settings to take effect.
Conditions for round robin path changes, all datastores on host (section 6.9.1.1)
esxcli storage nmp satp rule add -s VMW_SATP_ALUA -V COMPELNT -P VMW_PSP_RR -o disable_action_OnRetryErrors -e "Dell EMC SC Series Claim Rule" -O "policy=iops;iops=3"
esxcli storage core claimrule load
Software iSCSI Queue Depth (section 4.2.2)
esxcli system module parameters set -m iscsi_vmk -p iscsivmk_LunQDepth=255
Software iSCSI login timeout (section 4.2.2)
esxcli iscsi adapter param set -A=vmhba64 -k=LoginTimeout -v=5
Disable Delayed ACK (section 3)
vmkiscsi-tool -W -a delayed_ack=0 -j vmhba64
HA Cluster Settings (section 4.3)
esxcli system settings kernel set -s terminateVMonPDL -v TRUE
HA Cluster Settings (section 4.3)
esxcli system settings advanced set -o "/Disk/AutoremoveOnPDL" -i 1
Advanced options HA Cluster setting (section 4.3)
das.maskCleanShutdownEnabled = True
Note: If the iSCSI software adapter on your host is not "vmhba64", you'll have to modify the commands above.
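One way to find the adapter name is to list the host's iSCSI adapters; the software adapter is the one reported with the iscsi_vmk driver:

```shell
# List iSCSI adapters on the host; look for Driver "iscsi_vmk"
esxcli iscsi adapter list
```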
André