Connect to the target via 10Gb links.
Standard switches have been created.
The target is based on SSDs.
The latest patches are installed.
Tried various MPIO settings.
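For the MPIO settings, two things worth confirming on the ESXi side are the path selection policy and the Round Robin IOPS limit — the default of switching paths every 1000 I/Os can hold iSCSI throughput down on some arrays. A sketch (`naa.xxx` is a placeholder for your device ID; check your array vendor's recommended settings first):

```shell
# List devices and their current path selection policy
esxcli storage nmp device list

# Set Round Robin on the iSCSI LUN
esxcli storage nmp device set --device naa.xxx --psp VMW_PSP_RR

# Switch paths every I/O instead of every 1000 I/Os
esxcli storage nmp psp roundrobin deviceconfig set \
    --device naa.xxx --type iops --iops 1
```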
Testing with 4K blocks shows an incredible read speed - 3 GByte per second.
Testing with 4K blocks shows a very low write speed - 1 MB per second.
Testing with 64K blocks shows an incredible read speed - 3-3.5 GByte per second.
Testing with 64K blocks shows a very low write speed - 19 MB per second.
When connecting to the same target from Windows:
Testing with 4K blocks shows a good read speed - 1.4 GByte per second.
Testing with 4K blocks shows a good write speed - 1 GByte per second.
Testing with 64K blocks shows a good read speed - 1.8 GByte per second.
Testing with 64K blocks shows a good write speed - 1.2 GByte per second.
What could be the problem?
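A quick sanity check on numbers like these: converting throughput at a given block size into IOPS and implied per-I/O latency often shows whether writes are latency-bound rather than bandwidth-bound. For the 64K write result above:

```shell
# 19 MB/s at 64K blocks -> IOPS, and per-I/O latency assuming queue depth 1
awk 'BEGIN { mbps=19; bs_kb=64; iops=mbps*1024/bs_kb; printf "%.0f IOPS, %.2f ms per I/O\n", iops, 1000/iops }'
# prints: 304 IOPS, 3.29 ms per I/O
```

Roughly 3 ms per write at queue depth 1 points at per-I/O round-trip latency (e.g. synchronous, uncoalesced writes waiting on the target) rather than link bandwidth, which would match writes collapsing while reads remain fast.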
I also have a similar problem:
3 x Dell 730xd servers (2 x 10Gb dedicated NICs for iSCSI traffic to the Dell SAN)
Dell MD3860i SAN
2 x Dell N4032 switches
Jumbo frames enabled on switches/vSwitches, etc.
Destination LUN: 1 x 800GB SSD
Max write speed: 160 MB/s ??
Any help would be appreciated.
A few things to check:
1. If jumbo frames are used on the target side, make sure jumbo frames are enabled throughout the network: vNIC -> vSwitch -> top-of-rack switch, etc.
2. What type of vDisks are they? If they are lazy-zeroed or thin-provisioned, write performance will be much lower compared to eager-zeroed thick disks.
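Both points above can be verified from the ESXi host: vmkping can confirm jumbo frames pass end to end without fragmentation, and vmkfstools can eager-zero an existing lazy-zeroed thick disk in place. A sketch (paths and the target IP are placeholders):

```shell
# 8972 = 9000-byte MTU minus 28 bytes of IP/ICMP headers; -d forbids fragmentation,
# so this only succeeds if jumbo frames work along the whole path
vmkping -d -s 8972 <iscsi-target-ip>

# Zero out a lazy-zeroed thick disk in place (power the VM off first)
vmkfstools --eagerzero /vmfs/volumes/<datastore>/<vm>/<vm>.vmdk
```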
For what it's worth, I tested a Dell SCv2000 with 7200 RPM disks connected via dual 12Gb SAS cards:
Sequential Read (Q= 32,T= 1) : 521.674 MB/s
Sequential Write (Q= 32,T= 1) : 488.263 MB/s
Random Read 4KiB (Q= 32,T= 1) : 15.692 MB/s [ 3831.1 IOPS]
Random Write 4KiB (Q= 32,T= 1) : 9.760 MB/s [ 2382.8 IOPS]
Sequential Read (T= 1) : 387.146 MB/s
Sequential Write (T= 1) : 374.747 MB/s
Random Read 4KiB (Q= 1,T= 1) : 2.060 MB/s [ 502.9 IOPS]
Random Write 4KiB (Q= 1,T= 1) : 7.024 MB/s [ 1714.8 IOPS]
Tested inside a Win2012R2 VM with CrystalDiskMark 5.2.1.
ESXi version 6.5, build 4887370.
Did you manage to improve these speeds?
Are you writing to VMFS 6 volume?
What are the iPerf results between the ESXi hosts? And what are the SCP results?
scp testfile root@<ip address>:/tmp
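For the iPerf test specifically: recent ESXi builds ship an iperf3 binary with the vSAN health service. This is a sketch — the path can differ by version, and it is not guaranteed to be present on every 6.5 build:

```shell
# Copy the bundled binary first; some builds refuse to execute it in place
cp /usr/lib/vmware/vsan/bin/iperf3 /tmp/iperf3

# On one host, run a server:
/tmp/iperf3 -s

# On the other host, test against the vmkernel IP used for iSCSI:
/tmp/iperf3 -c <other-host-vmk-ip> -t 30
```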