I am hoping one of you experts might be able to advise me on why my iSCSI/network performance is doing so badly...
I have just installed ESXi 6 in a new environment which has the following hardware config
ESXi Host (HP ML 150 G6)
2 x Intel Xeon 1.87 GHz CPUs
2 x 1 Gb NICs
HP Smart Array P212 Hardware RAID Controller (No cache)
4 x 7200 RPM SATA3 Drives in RAID 10 Config
QNAP 259 Pro+
2 x WD RED 7200 2TB Disks in RAID 1
2 x 1 Gb NICs (teamed)
3 iSCSI LUNS (VM, DATA, BACKUPS)
1 x Gigabit Unmanaged Netgear 16 port switch
1 = SATA RAID 10
2 = iSCSI
3 = NFS
I am having some problems with the network setup for my environment and was wondering if someone could advise a newbie on the best configuration for performance.
I recently configured iSCSI port binding, but for some reason I am only getting traffic through one of my NICs. I am guessing it is because the IP address I assigned to the VMkernel port needs to be associated with the physical adapter somehow?
Also, can anyone recommend some quick, low-cost performance gains that might help my network?
I appreciate it...
Here are some screenshots of my configuration:
Right now I only seem to get network traffic through my vmk0 Management Network.
Note: I know vmk0 is my management network, but it works for now until I can fix the problem.
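In case it helps, here is roughly how I can dump the relevant bits of my current config from the ESXi shell (the software iSCSI adapter name, e.g. vmhba33, is whatever the first command reports on your host — mine may differ):

```
# Show the software iSCSI adapter and its name (e.g. vmhba33)
esxcli iscsi adapter list

# Show which VMkernel ports are currently bound to the iSCSI adapter
esxcli iscsi networkportal list

# Show all VMkernel interfaces and their IPv4 addresses
esxcli network ip interface ipv4 get
```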
Hello, you need to create a distributed switch in order to create a port group with link aggregation. You can create this switch from vCenter Server.
VMware KB: Configuring LACP on an Uplink Port Group using the VMware vSphere Web Client
Please attach the subnet mask for your QNAP and for vmk1 and vmk2.
Hi, thanks for the reply...
The subnet mask is 255.255.255.0.
All machines are on the same network, 126.96.36.199/24.
We are not running vCenter, just the free ESXi 6.0 license, managed with the vSphere Client...
I would be really grateful if someone could help with this question. I am really stuck!
If you need any additional information please let me know!
From the pictures you posted, it looks like you have the wrong VMkernel ports bound in the iSCSI initiator: you have one management port (vmk0) and one iSCSI port (vmk1) instead of both iSCSI ports (vmk1 and vmk2).
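If you prefer the CLI over the vSphere Client, correcting the binding would look roughly like this (a sketch only — vmhba33 is an assumed software iSCSI adapter name; substitute whatever `esxcli iscsi adapter list` shows on your host):

```
# Unbind the management VMkernel port from the software iSCSI adapter
esxcli iscsi networkportal remove --adapter=vmhba33 --nic=vmk0

# Bind both dedicated iSCSI VMkernel ports
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

# Verify the result, then rescan the adapter
esxcli iscsi networkportal list
esxcli storage core adapter rescan --adapter=vmhba33
```

Note that for port binding to work, each iSCSI VMkernel port also needs to be on a port group with exactly one active uplink (vmk1 on one physical NIC, vmk2 on the other).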
I don't have any experience with QNAP, so I can't say whether or not you need to break the team and just have two separate 1 Gb connections (and I'm not sure about path policies, iSCSI discovery addresses, etc. on the VMware side).
I would also check to make sure that, if you do get ESXi to use both NICs, the physical switch/QNAP is actually balancing traffic across both NICs and not just sending everything down one.
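One thing you can check on the ESXi side is the multipathing policy: with two bound VMkernel ports you should see two paths per LUN, and switching the path selection policy to Round Robin spreads I/O across both NICs. A rough sketch (the `naa.xxxx` device ID is a placeholder — use the real IDs that the first command lists):

```
# List storage devices and their current path selection policy
esxcli storage nmp device list

# Set Round Robin on a given iSCSI device (placeholder device ID)
esxcli storage nmp device set --device=naa.xxxx --psp=VMW_PSP_RR

# Confirm both paths are active
esxcli storage core path list --device=naa.xxxx
```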