Hi Folks,
It may be an old topic, but I couldn't find what I needed. We have two hardware iSCSI ports in an ESXi box. We created two VMkernel port groups, each with a single physical NIC, and mapped each one to a hardware iSCSI adapter under the network configuration. I could not find anything better in the VMware documentation. I believe this way it is only active/passive. Is there a better configuration we can use to improve throughput?
Hi Hari,
What storage are you using?
The storage vendor normally publishes a reference architecture or deployment guide with best practices for deployment and configuration.
Such a document usually covers both the VMware side and the storage side.
VMware also has a design and deployment guide here: iSCSI Design Considerations and Deployment Guide
As per the VMware design & deployment guide, to improve throughput you can establish multiple iSCSI sessions with multiple iSCSI targets.
From the network perspective, you create multiple (at least two) iSCSI VMkernel ports, on either a single vSwitch or multiple vSwitches.
Then use iSCSI port binding and associate the 1st VMkernel with the 1st NIC (2nd NIC set to unused) and the 2nd VMkernel with the 2nd NIC (1st NIC set to unused).
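The steps above can be sketched with esxcli. This is a minimal, hedged example, not your exact config: the adapter names (vmhba64/vmhba65), interface names (vmk1/vmk2), port group names (iSCSI-A/iSCSI-B), and IP addresses are all placeholders to substitute. With independent hardware iSCSI adapters you typically bind one VMkernel per vmhba, as shown here; with software iSCSI you would bind both VMkernels to the single software adapter instead.

```shell
# Create two VMkernel interfaces, one per iSCSI port group
# (port group names "iSCSI-A"/"iSCSI-B" are assumptions)
esxcli network ip interface add --interface-name vmk1 --portgroup-name iSCSI-A
esxcli network ip interface add --interface-name vmk2 --portgroup-name iSCSI-B

# Give each an address on the storage network (example addresses)
esxcli network ip interface ipv4 set -i vmk1 -I 10.0.0.11 -N 255.255.255.0 -t static
esxcli network ip interface ipv4 set -i vmk2 -I 10.0.0.12 -N 255.255.255.0 -t static

# Bind each VMkernel port to its iSCSI adapter (port binding);
# vmhba64/vmhba65 are placeholders for your two hardware iSCSI HBAs
esxcli iscsi networkportal add --adapter vmhba64 --nic vmk1
esxcli iscsi networkportal add --adapter vmhba65 --nic vmk2

# Verify the bindings, then rescan for new paths
esxcli iscsi networkportal list --adapter vmhba64
esxcli iscsi networkportal list --adapter vmhba65
esxcli storage core adapter rescan --all
```

After the rescan, each LUN should show one path per binding, which is what lets multipathing spread I/O across both NICs.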
Hi Bayu, thank you for your response. Our current configuration is as shown in the second picture you attached (multiple vSwitches): a single pNIC is attached to each vSwitch, and each vSwitch is bound to an individual iSCSI hardware initiator. My question is: since we are facing severe congestion during svMotion and similar activities, is there any way to improve I/O throughput compared to the current configuration?
We are using a new EMC Unity box. I will check the vendor guidelines on VMware best practices.
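One thing worth checking while you review the vendor guide: with port binding in place, throughput across both paths also depends on the path selection policy. If the LUNs are on the default fixed/most-recently-used policy, only one path carries I/O, which looks exactly like the active/passive behaviour you describe. A hedged sketch of checking and changing this with esxcli follows; the device ID (naa.6006...) is a placeholder, and the IOPS=1 setting is something EMC guides have often recommended for their arrays, so confirm it against the Unity best-practice document before applying.

```shell
# List devices and their current path selection policy (PSP)
esxcli storage nmp device list

# Switch a device to Round Robin so I/O rotates across all active paths
# (naa.6006016012345678 is a placeholder device ID)
esxcli storage nmp device set --device naa.6006016012345678 --psp VMW_PSP_RR

# Optionally switch paths after every I/O instead of the default 1000;
# verify this value against the EMC Unity best-practice guide first
esxcli storage nmp psp roundrobin deviceconfig set \
  --device naa.6006016012345678 --type iops --iops 1
```

Round Robin with multiple bound paths is usually what turns the setup from effectively active/passive into active/active for a single datastore.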
Thank You!
Hari.