malutka
Contributor

iSCSI issues and SSD cache on Synology: need some input

Hi

I really need expert advice from users who work professionally with iSCSI and vSphere 5 and upwards.
I have open cases with Synology, but so far that is not helping; the answers I get back are not correct.

My environment:

Two ESXi servers with 132 GB RAM
10 x 1 Gb/s NICs
4 dedicated to iSCSI
The rest for VM network and management

NAS
Synology RS3413xs+
Configuration: RAID 5 + spare
Disks: WD Red WD30EFRX 3 TB NAS hard drives, 3.5" SATA, 5400 RPM, 64 MB cache
Volume: file-level iSCSI
SSD read cache: Intel SSDSC2CW240A3, 240 GB each
RAM: 8 GB
DSM 4.2
4 x 1 Gb/s NICs

Switches
Three Cisco Catalyst 3750s
Three VLANs:
Default VLAN: normal traffic
VLAN 3: vMotion (trunk ports)
VLAN 5: iSCSI traffic (dedicated trunk ports)
ESXi-specific non-iSCSI ports trunked
Non-routed VLANs
PortFast enabled on all of the above ports
Flow control enabled on all of the above ports
Full 1 Gb/s throughput configured on the ports above
Unicast storm control disabled on the iSCSI ports; broadcast and multicast kept (see the IOS sketch below)
Cabling: Cat 6
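Roughly, the iSCSI port settings above look like this in IOS (the interface range and storm-control thresholds here are illustrative, not my exact running config):

    interface range GigabitEthernet1/0/1 - 4
     description iSCSI trunk ports to ESXi/NAS
     switchport trunk encapsulation dot1q
     switchport mode trunk
     switchport trunk allowed vlan 5
     spanning-tree portfast trunk
     flowcontrol receive desired
     ! unicast storm control is simply left unconfigured (off)
     storm-control broadcast level 20.00
     storm-control multicast level 20.00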

ESXi 5.1 setup with vCenter 5.1 Enterprise.
iSCSI MPIO with Round Robin enabled on both ESXi hosts, verified and tested according to VMware best practices for network design, using the same subnet for all iSCSI vmkernel ports. (How I verified it is sketched below.)
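For reference, this is roughly how I verified it from the ESXi shell:

    # List the vmkernel ports bound to the software iSCSI adapter
    esxcli iscsi networkportal list

    # Show the multipathing policy per device; Round Robin shows up as VMW_PSP_RR
    esxcli storage nmp device list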

Here come the questions/issues.

Nr 1.

SSD cache in the above environment does not work. The white paper says that you should have about 512 GB of SSD cache for 8 GB of RAM in the NAS, but the help file in DSM says 16 GB. I have verified with support that the DSM help file is wrong and that 8 GB is correct; that answer took seven days to get.
Using SSD cache with the above environment will actually slow down your traffic. Synology cannot confirm whether the SSD cache is working or not; today the SSDs are turned off in my system.
My SSDs are whitelisted for this NAS.
Question: has anyone on this forum verified a working SSD cache with the above configuration, or SSD caching of iSCSI LUNs and datastores using VMFS5?

Nr 2.

MPIO and Round Robin for iSCSI, with four dedicated NICs in each host connected to the NAS, which also has four dedicated NICs.
This is the correct way to set up iSCSI traffic; you should not use link aggregation as Synology support suggested, because you then lose both MPIO and Round Robin.

I see very bad speeds here. Each data stream initiated should start on a different NIC, theoretically giving speeds of up to 125 MB/s per NIC, times four.
This does not happen at all; the maximum is always 125 MB/s in total, even when additional data streams are started from the ESXi host.
Has anyone here verified MPIO and Round Robin with iSCSI and seen the correct speeds? Any suggestions are welcome. Synology points me to their wiki, which is wrong and useless for my configuration at the moment.
Can anyone verify that MPIO actually works with Synology, with LAN speeds of at least around 300 to 400 MB/s for this kind of traffic? (One tweak I have been looking at is sketched below.)
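One thing I have been looking at: by default the Round Robin PSP in ESXi 5.x only switches paths after 1000 I/Os, so a single stream tends to sit on one 1 Gb/s link at a time. Lowering the IOPS limit to 1 is a commonly suggested tweak; I have not confirmed it against Synology yet, and naa.xxx below is a placeholder for your own LUN:

    # Show the current Round Robin settings for the device
    esxcli storage nmp psp roundrobin deviceconfig get -d naa.xxx

    # Switch paths after every I/O instead of every 1000
    esxcli storage nmp psp roundrobin deviceconfig set -d naa.xxx --type=iops --iops=1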

Nr 3.

VAAI support and thin provisioning of LUNs.
DSM 4.2 forces you to use thin-provisioned LUNs to be able to use VAAI. In my case I have to turn this off per LUN, because backing up a thin-provisioned LUN on the NAS increases the backup time by 80 percent; it has to inflate and deflate the disk, so this function almost doubles the backup time.
This has been verified with both Veeam and vmProtect 8.

Can anyone verify better speeds with VAAI turned on against a Synology? I know that formatting volumes works: it took 2 minutes instead of an hour, but I cannot verify anything more. (A quick way to check the VAAI status per LUN is sketched below.)
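For what it is worth, you can check which VAAI primitives ESXi actually sees as supported on a LUN from the shell; naa.xxx is again a placeholder for the device ID:

    # Shows the ATS, Clone, Zero and Delete status for the device
    esxcli storage core device vaai status get -d naa.xxx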

PS:

I have tried jumbo frames as well (a quick end-to-end test is sketched below).
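For anyone else testing jumbo frames, the standard end-to-end check is a non-fragmented 8972-byte ping from the host to the NAS (9000 bytes minus 28 bytes of IP/ICMP headers); the IP below is just an example address on the iSCSI VLAN:

    # List vmkernel NICs and confirm their MTU
    esxcfg-vmknic -l

    # Fails unless MTU 9000 is set end to end: vSwitch, vmkernel port,
    # physical switch ports and the NAS interfaces
    vmkping -d -s 8972 192.168.5.20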

Anyway, thanks for reading this.

If you have any suggestions or tweaks that would give faster iSCSI operations and better MPIO throughput, please add them here.

Sincerely, Cabaj

erpomik
Contributor

Hi Cabaj

We have been using Synos for a long time, and they perform very well.

First thing to check:

How many paths do you see per LUN? You should see only one path per vmknic per LUN.
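A quick way to count them from the ESXi shell, with naa.xxx as a placeholder for the LUN's device ID:

    # Prints one block per path for the given LUN
    esxcli storage core path list -d naa.xxx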

Second:

You say you are "using the same subnet". This might be the issue if you see more MPIO paths than you should. Try separating each path into its own broadcast domain/VLAN, along the lines of the example below.
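Purely as an illustration (the VLAN numbers and subnets here are made up), a common layout is one VLAN and subnet per vmknic/NAS NIC pair:

    vmk1  VLAN 51  10.0.51.11/24  ->  NAS NIC 1  10.0.51.20
    vmk2  VLAN 52  10.0.52.11/24  ->  NAS NIC 2  10.0.52.20
    vmk3  VLAN 53  10.0.53.11/24  ->  NAS NIC 3  10.0.53.20
    vmk4  VLAN 54  10.0.54.11/24  ->  NAS NIC 4  10.0.54.20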

Regards

Ernst Mikkelsen
