christianZ
Champion

New !! Open unofficial storage performance thread

Hello everybody,

the old thread has become sooooo looooong - therefore, after a discussion with our moderator oreeh (thanks Oliver), I decided to start a new thread here.

Oliver will make a few links between the old and the new one and then he will close the old thread.

Thanks for joining in.

Regards,

Christian

574 Replies
Mnemonic
Enthusiast

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

TABLE OF RESULTS - Virtual Machine

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

SERVER TYPE: VM on ESX 3.5.0 Update 4

CPU TYPE: AMD Third-Generation Opteron Quad-Core 2378 2.4 GHz / NUMBER: 1 vCPU

HOST TYPE: HP Proliant BL495C G5, 64GB RAM, 2 CPU

SAN Type: HP EVA 4400 / Disk Type: 4Gb FC 450GB 15k / RAID LEVEL: Raid5 / Number: 15+1 Disks / Adaptor: QLogic QMH2462

##################################################################################

TEST NAME--Av. Resp. Time ms--Av. IOs/sek---Av. MB/sek----

##################################################################################

Max Throughput-100%Read.......______5___..........___9510__........____297___

RealLife-60%Rand-65%Read......_____11___..........___3699__........_____29___

Max Throughput-50%Read........_____43___..........____1214__........____38____

Random-8k-70%Read............._____13___..........____3427__........____27____

##################################################################################
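As a quick plausibility check on tables like the one above: assuming the thread's standard OpenPerformanceTest.icf (32 KB transfers for the Max Throughput specs, 8 KB for the random specs - an assumption about the config, not stated in this post), Av. MB/sek should roughly equal Av. IOs/sek times the transfer size:

```python
# Sanity check: MB/s should be about IOPS x transfer size (binary MB).
# Assumes 32 KB transfers for Max Throughput specs, 8 KB for random specs.

def mb_per_sec(iops: float, transfer_kb: int) -> float:
    """Convert an IOPS figure to MB/s for a given transfer size in KB."""
    return iops * transfer_kb / 1024.0

print(round(mb_per_sec(9510, 32)))  # Max Throughput-100%Read: ~297 MB/s
print(round(mb_per_sec(3699, 8)))   # RealLife-60%Rand-65%Read: ~29 MB/s
```

Both figures match the table above, which suggests the run used the standard ICF.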

-


++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

TABLE OF RESULTS - Physical Windows Server 2003 R2 Standard

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

SERVER TYPE: Physical

CPU TYPE: AMD Third-Generation Opteron Quad-Core 2378 2.4 GHz / NUMBER: 2

HOST TYPE: HP Proliant BL495C G5, 4GB RAM

SAN Type: HP EVA 4400 / Disk Type: 4Gb FC 450GB 15k / RAID LEVEL: Raid5 / Number: 15+1 Disks / Adaptor: 2 x QLogic QMH2462

##################################################################################

TEST NAME--Av. Resp. Time ms--Av. IOs/sek---Av. MB/sek----

##################################################################################

Max Throughput-100%Read.......______3___..........__21779__........____681___

RealLife-60%Rand-65%Read......______9___..........___3699__........_____30___

Max Throughput-50%Read........_____46___..........____1134__........____35____

Random-8k-70%Read............._____10___..........____3891__........____30____

EXCEPTIONS: Dual Fiber Channel Adaptors with MPIO Driver

Mnemonic
Enthusiast

Did you use one or two HBAs in the physical test?

With or without the MPIO driver?

radimf
Contributor

Hi, I saw your PS6000VX results - they look very good.

In my test case, though, we were unable to coax (configure) 4 older 5000 boxes into delivering decent performance...

There was some weird glitch in our Windows 2008 + EQL SAN config...

eagleh
Enthusiast

One HBA (dual-port) on my physical server, with the latest MPIO driver.

If you found this information useful, please kindly consider awarding points for "Correct" or "Helpful". Thanks!
Axis
Contributor

Just to get some feel for the test, I ran it on my workstation (while doing other stuff, sorry). Perhaps someone is interested in seeing how a single workstation SSD (around 300 EUR) performs.

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

TABLE OF RESULTS

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

SERVER TYPE: my desktop

CPU TYPE / NUMBER: CPU / 1

HOST TYPE: Dell Precision T3400, Q9550, 8GBG, 64-bit Vista

STORAGE TYPE / DISK NUMBER / RAID LEVEL: DAS single OCZ Vertex 120GB SSD

##################################################################################

TEST NAME--Av. Resp. Time ms--Av. IOs/sek---Av. MB/sek----

##################################################################################

Max Throughput-100%Read......___11.11____......._5328__........._166.49___

RealLife-60%Rand-65%Read.....___28.43____.......__2082__.........__16.27___

Max Throughput-50%Read.......___39.93____.......__1488__........._46.49___

Random-8k-70%Read............___24.35____.......__2446__.........__19.11___

EXCEPTIONS: CPU Util. 20% - 15% - 10% - 13%;

And here are some results from one of my production servers. It is in production alongside other database servers and web servers working on the SAN.

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

TABLE OF RESULTS

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

SERVER TYPE: database server

CPU TYPE / NUMBER: CPU / 2

HOST TYPE: Dell PowerEdge M600, 2*X5460, 32GB RAM.

STORAGE TYPE / DISK NUMBER / RAID LEVEL: Equallogic PS5000E / 14*500GB SATA in RAID10

##################################################################################

TEST NAME--Av. Resp. Time ms--Av. IOs/sek---Av. MB/sek----

##################################################################################

Max Throughput-100%Read......___10.29____......._5694__........._177.94___

RealLife-60%Rand-65%Read.....___31.75____.......__1382__.........__10.80___

Max Throughput-50%Read.......___10.51____.......__5664__........._177.02___

Random-8k-70%Read............___34.34____.......__1345__.........__10.51___

EXCEPTIONS: CPU Util. 20% - 15% - 10% - 13%;

Microsoft iSCSI initiator. Everything is connected to 2 M6220 switches, with jumbo frames enabled and flow control disabled (yes, I found out later that the other way around should give better results - no time to switch yet), and MPIO uses EQL's plugin with the 'least queue depth' algorithm. No further tweaking has been done.

5oadmin
Contributor

Right, well, I'm flabbergasted, because I'm not getting anywhere near the results I should. Maybe someone here has some advice?

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

TABLE OF RESULTS for alextest

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

SERVER TYPE: VIRT.

CPU TYPE / NUMBER: vCPU / 1

HOST TYPE: Proliant dl580 g5 2x2cores 2.33 xeons 5140, 24gb ram

OS: winxp

STORAGE TYPE / DISK NUMBER / RAID LEVEL: EMC SAN CX300 - 15kdrives 300gb 15disks raid5 FC2

##################################################################################

TEST NAME--Av. Resp. Time ms--Av. IOs/sek---Av. MB/sek----

##################################################################################

Max Throughput-100%Read........_56.43__..........___1061.03__........._33.157261__

RealLife-60%Rand-65%Read......_262.01__.........._227.15___.........__1.77_

Max Throughput-50%Read.........._89.548__..........__663.736302__.........__20.741759_

Random-8k-70%Read.................__263.06__..........__225.98__.........__1.76___

EXCEPTIONS: CPU Util. 21.16, 15.14, 18.80, 14.49

##################################################################################

Clearly that's horrible. The storage array cost something like GBP 20K per tray, never mind the SAN unit itself, so I was expecting something at least faster than my local single SATA drive :)

The test was run on an XP VM

Update: I'm getting these figures from Iometer 2006.07.27 using the ICF provided in this thread. I read the result numbers from the csv file generated at the end of the test, using the 'IOps', 'MBps' and 'Average Response Time' columns. I've also attached the VMware graphs generated while the test was running to this message.

Message was edited by: 5oadmin - added gfx and further info

Jcyou
Contributor

I got some results. I think the performance is very poor, but I'd really appreciate your advice on whether the results are good or bad for my system. Thanks. JC

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

TABLE OF RESULTS

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

SERVER TYPE: VM (ms 2003 server) 512 MB

CPU TYPE / NUMBER: VCPU / 1

HOST TYPE: hp dl385 G2, 16GB RAM for host esx server, AMD Opteron, 2 cpu's

STORAGE TYPE / DISK NUMBER / RAID LEVEL: sanmelody server on win2003 r2 raid 5 (1tb lun) 4gb ram

iscsi, sata disks 12 spindles

TEST NAME--Av. Resp. Time ms--Av. IOs/sek---Av. MB/sek----

Max Throughput-100%Read........___23_______..........___2533_______.........____79______

RealLife-60%Rand-65%Read......_____130_____.........._____450_____.........____3.51______

Max Throughput-50%Read.........._____86.77_____..........______661____.........____20.65______

Random-8k-70%Read.................___117_______..........____493______.........___3.85_______

EXCEPTIONS: All cpu is between 19% to 26.7%

##################################################################################

ablej
Hot Shot

Have you used NetApp mbralign on your disks? This greatly improved our performance.
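For anyone unfamiliar with what mbralign addresses: older Windows guests start their first partition at sector 63, so guest I/O straddles the storage array's block boundaries. A minimal sketch of the underlying alignment check (the 4 KB block size here is an illustrative assumption, not a figure from this thread):

```python
SECTOR_SIZE = 512   # bytes per logical sector
ARRAY_BLOCK = 4096  # assumed storage block size (illustrative only)

def is_aligned(start_sector: int, block_size: int = ARRAY_BLOCK) -> bool:
    """True if the partition's byte offset lands on a storage block boundary."""
    return (start_sector * SECTOR_SIZE) % block_size == 0

print(is_aligned(63))    # legacy MBR default offset -> False (misaligned)
print(is_aligned(2048))  # 1 MiB offset -> True (aligned)
```

A misaligned guest turns many single-block reads into two-block reads on the array, which is why realigning can noticeably improve random I/O.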

David Strebel

www.holy-vm.com

If you find this information useful, please award points for "correct" or "helpful"

Jcyou
Contributor

Thanks ablej.

No, I haven't used NetApp mbralign yet. I will do some more research on that. By the way, what kind of results do you think I should get, given our system configuration? Thanks for any input.

Jcyou

larstr
Champion

"I ran Iometer with the "OpenPerformanceTest.icf" found in this thread, but I can't figure out how to produce the summary "TABLE OF RESULTS" that users are posting here. What am I missing?"

1. After having run the test (4x5 minutes) you will have a file called results.csv. Open this file in your favorite spreadsheet application.

2. You will need to convert it into the table. In Excel 2007 this can be done by first selecting Column A, then choosing Data / Text to Columns / Delimited / Comma / Finish.

3. You now have a good view of the results in Excel. IOPS are in column G, MBps in column J, latency in column O and CPU usage in column AT.
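If you'd rather skip Excel, the same extraction can be sketched in Python. The column indexes follow the mapping above (G, J, O, AT); filtering on "ALL" assumes Iometer's usual aggregate result rows:

```python
import csv

# 0-based indexes for Excel columns G (IOPS), J (MBps), O (latency), AT (CPU)
COLS = {"iops": 6, "mbps": 9, "resp_ms": 14, "cpu_pct": 45}

def summarize(rows):
    """Collect the aggregate 'ALL' rows Iometer writes for each access spec."""
    results = []
    for row in rows:
        if row and row[0] == "ALL":
            results.append({name: float(row[i]) for name, i in COLS.items()})
    return results

# Usage sketch:
# with open("results.csv", newline="") as f:
#     for spec in summarize(csv.reader(f)):
#         print(spec)
```

This is a rough sketch, not something posted in the thread - double-check the column positions against your own results.csv, since they can shift between Iometer versions.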

Lars

dennes
Enthusiast

I am not in the office today. From Monday onwards I will be reachable again.

For urgent matters you can contact the office at: 013-5115088

This e-mail will not be read in the meantime.

Regards,

Dennes

Feju Automatisering BV

Nijverheidsweg 21 | 5071 NL Udenhout

013 - 511 5088 013 - 511 0138 http://www.feju.nl

Microsoft Certified Partner | Microsoft Small Business Specialist

VMware VIP Professional Partner | RICOH Partner


-


asp24
Enthusiast

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

TABLE OF RESULTS for asp24_bench

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

ESX4

SERVER TYPE: VIRT.

CPU TYPE / NUMBER: vCPU / 2

HOST TYPE: Supermicro Barebone, 4 x QuadCore Opteron 2,1 GHz, 32GB DDR2 667

OS: winxp

STORAGE TYPE / DISK NUMBER / RAID LEVEL: Infortrend S16E-G1130 - 16kdrives 300gb 15disks raid10 ISCSI

##################################################################################

TEST NAME--Av. Resp. Time ms--Av. IOs/sek---Av. MB/sek----

##################################################################################

Max Throughput-100%Read........_16.24__..........___3597.95__........._112.44__

RealLife-60%Rand-65%Read......_10.17__.........._5085.22___........._39.73

Max Throughput-50%Read.........._17.13__..........__3468.10__........._108.38

Random-8k-70%Read.................__10.096__..........__4636.27__.........__36.22___

##################################################################################

Single gigabit NIC for iSCSI traffic. I'm having some trouble getting MPIO to work properly - probably a SAN config issue - but random I/O is what matters most for us.

It looks like ESX4 iSCSI is a little bit faster (10-15% better on the random tests compared to ESX 3.5).

k995
Contributor

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

TABLE OF RESULTS for K9

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

ESXi3.5

SERVER TYPE: VIRT.

CPU TYPE / NUMBER: vCPU / 1

HOST TYPE: DL360 G5 , 4 x 2.5GHZ , 10GB RAM

OS: Windows 2003 R2 SE

STORAGE TYPE / DISK NUMBER / RAID LEVEL: Openfiler installed on DL380 G4 - 10kdrives 300gb / 5 disks / raid 5

##################################################################################

TEST NAME--Av. Resp. Time ms--Av. IOs/sek---Av. MB/sek----

##################################################################################

Max Throughput-100%Read........_16.53__..........___3560__........._111.2__

RealLife-60%Rand-65%Read......_52.57__.........._1051.89___........._8.2

Max Throughput-50%Read.........._13.98__..........__4030.01__........._125.93

Random-8k-70%Read.................__50.97__..........__1094.22__.........__8.52___

##################################################################################

Single gigabit nic for NFS traffic.

RogerAli
Contributor

SERVER TYPE: VM ON ESX 3.5.0 Update 4, 100GB VMDK, 1024MB RAM.

CPU TYPE / NUMBER: VCPU / 1

HOST TYPE: DELL 2950 32GB RAM; 2x XEON 5160, 3.0 GHz

STORAGE TYPE / DISK NUMBER / RAID LEVEL: IBM SVC (Backend: 2x DS4700 and 1x DS5100) / 45 SATA Disks @ 7200RPM / VIRTUAL RAID 5, BACKEND HANDLED BY SVC @ RAID 5

SAN TYPE / HBAs / Fabric : FC / 2x Emulex LPe11100 4 Gb /4 Gb Brocade 200E

###########################################################################

TEST NAME--Av. Resp. Time ms--Av. IOs/sek---Av. MB/sek----


###########################################################################

Max Throughput-100%Read...........__5.1443__.................__11161.50__...........__348.80__..............

RealLife-60%Rand-65%Read.......__5.3146__.................__751.69____...........__5.87____..............

Max Throughput-50%Read.............__0.7791__.................__2975.07___...........__92.97___..............

Random-8k-70%Read....................__37.8628_.................__294.11____............__2.30___...............

EXCEPTIONS: VCPU Util. Avg: 73.72%, 12.16%, 32.23%, 33.84%;

TESTING SETUP: Each test was run one by one using the latest stable release (09/29/2006) from the SourceForge Iometer page. The ESX server hosts about 3-4 VMs in addition to this one; they were powered on but sitting idle during the test (not sure if this has any bearing on the results). The guest VM runs Windows 2003 R2 with the latest hotfixes/SPs, VMware Tools, and McAfee AV v8.5.

Our datacenter just purchased the IBM SVC storage solution and is in the process of finishing the install and configuring some of the monitors. They've handed me some storage to test on the VMware platform, since we have over 100 VMs and this number is expected to triple in the coming year. From what was explained to me, the storage type is an IBM SVC and the disk allocated for the VMware environment is all SATA 7200 RPM. I was told that the IBM SVC brokers the connection to the backend controllers (in my case 2x DS4700 and 1x DS5100 - they said this wouldn't matter since the SVC is what I connect to).

The LUN provided to me for this test is a 300 GB RAID 5 array created off the SVC (so essentially virtual RAID over numerous smaller backend RAID 5 arrays). The backend config lives across 3 shelves of SATA disks with numerous 3-disk RAID 5 arrays, each using 1 disk from each tray. They suggested this was done for redundancy/protection, as we could lose 1 tray and still be operational.

I know these numbers are theoretical, but I'm concerned about the number of VMs I'll be able to run on my server (the eventual solution will be 4x Quad-Core Intels with 64 GB RAM per ESX server), especially with the RealLife test showing only 5.87 MB per second. Should I be concerned with these numbers, given that I'm looking to push about 35-40 VMs per ESX server?

Thanks,

Roger

rb2006
Contributor

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

TABLE OF RESULTS NetApp 2xFAS3140c Metro-Cluster configuration

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

SERVER TYPE: VM ON ESX 3.5 Update 2

CPU TYPE / NUMBER: VCPU / 1

HOST TYPE: DELL PowerEdge 2900, 2x Intel Xeon Dual Core 5160, 16GB RAM (4 GB allocated to VM)

STORAGE TYPE / DISK NUMBER / RAID LEVEL: 2x FAS3140c metro cluster(sync mirroring)/2x26 FC 450GB 15K HDD's/RAID DP

SAN TYPE / HBAs : Brocade 300 FC 4GB / QLA2432 HBAs

##################################################################################

TEST NAME--Av. Resp. Time ms--Av. IOs/sek---Av. MB/sek----

##################################################################################

Max Throughput-100%Read.......___4,62_........__12075___.....___377,36___

RealLife-60%Rand-65%Read...___5,54__......___8867___.....____69,27___

Max Throughput-50%Read........___2,91__......__14489___.....___452,80___

Random-8k-70%Read.............___5,84__......___8430___...._____65,86____

The size of the test file was 16 GB. During the test, 34 guest VMs were powered on; our AIX cluster with an Oracle DB and one Exchange server (iSCSI) were also running, but I think all systems were idle.

For comparison, below are the results from our old FAS3020c cluster, which we have replaced.

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

TABLE OF RESULTS NetApp 2xFAS3020c Metro-Cluster configuration

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

SERVER TYPE: VM ON ESX 3.0.1

CPU TYPE / NUMBER: VCPU / 1

HOST TYPE: DELL PowerEdge 2900, 2x Intel Xeon Dual Core 5160, 16GB RAM (2 GB allocated to VM)

STORAGE TYPE / DISK NUMBER / RAID LEVEL: 2x FAS3020c metro cluster / 2x26 FC 144GB 10K HDDs / RAID 4

SAN TYPE / HBAs : Brocade 3250 FC 2GB / QLA2432 HBAs

##################################################################################

TEST NAME--Av. Resp. Time ms--Av. IOs/sek---Av. MB/sek----

##################################################################################

Max Throughput-100%Read....__10,05___......___5814___.....___181,72___

RealLife-60%Rand-65%Read......___20,45__..........___2586___.........____20,21__

Max Throughput-50%Read..........____7,05____..........___6818___.........___213,07____

Random-8k-70%Read.................____25,88____..........___2073___.........__16,2____

Mnemonic
Enthusiast

Two tests on the same host, VM and SAN. The only difference is the NTFS formatting: one volume was formatted with the NTFS default cluster size, the other with a 32k cluster size.

TABLE SAMPLE

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

TABLE OF RESULTS

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

SERVER TYPE: VM on ESX 3.5.0 Update 4

CPU TYPE / NUMBER: VCPU / 1

HOST TYPE: HP Proliant DL385C G5, 32GB RAM; 2x AMD 2,4 GHz Quad-Core

SAN Type: HP EVA 4400 / Disks: 4GB FC 172GB 15k / RAID LEVEL: Raid5 / 38+2 Disks / Fiber 8Gbit FC HBA

##################################################################################

TEST NAME--Av. Resp. Time ms--Av. IOs/sek---Av. MB/sek----

##################################################################################

Max Throughput-100%Read.......______5___..........____8293__........___259____

RealLife-60%Rand-65%Read......______9___..........____5316__........____42____

Max Throughput-50%Read........_____49___..........____1162__........____36____

Random-8k-70%Read.............______9___..........____5431__........____42____

##################################################################################

TABLE SAMPLE

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

TABLE OF RESULTS

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

SERVER TYPE: VM on ESX 3.5.0 Update 4

CPU TYPE / NUMBER: VCPU / 1

HOST TYPE: HP Proliant DL385C G5, 32GB RAM; 2x AMD 2,4 GHz Quad-Core

SAN Type: HP EVA 4400 / Disks: 4GB FC 172GB 15k / RAID LEVEL: Raid5 / 38+2 Disks / Fiber 8Gbit FC HBA

##################################################################################

TEST NAME--Av. Resp. Time ms--Av. IOs/sek---Av. MB/sek----

##################################################################################

Max Throughput-100%Read.......______5___..........___10690__........___334____

RealLife-60%Rand-65%Read......______8___..........____5398__........____42____

Max Throughput-50%Read........_____49___..........____1452__........____45____

Random-8k-70%Read.............______9___..........____5390__........____42____

EXCEPTIONS: NTFS 32k Blocksize

##################################################################################
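To put the two runs side by side, here is a quick illustrative delta calculation on the IOPS figures from the two tables above (the test names are shorthand labels, not Iometer identifiers):

```python
# IOPS from the two runs above: NTFS default cluster size vs. 32k clusters
default_ntfs = {"max_100r": 8293, "reallife": 5316, "max_50r": 1162, "rand_8k": 5431}
ntfs_32k     = {"max_100r": 10690, "reallife": 5398, "max_50r": 1452, "rand_8k": 5390}

def pct_delta(before: float, after: float) -> float:
    """Percentage change from before to after."""
    return (after - before) / before * 100.0

for test in default_ntfs:
    print(f"{test}: {pct_delta(default_ntfs[test], ntfs_32k[test]):+.1f}% IOPS")
```

The sequential reads gain noticeably (roughly +29% and +25%), while the random 8k tests are essentially flat - which makes sense, since the cluster size mostly affects large sequential transfers.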

NZSolly
Contributor

Hi guys, my first I/O effort ever. I have direct-attached storage and am wondering if I can hook it up to an HP G2 as a SAN to look after our little firm of 2 hosts... We have a number of arrays; this one was RAID10 for our SQL data files... It doesn't look like the best performance, but we have fewer than 75 staff, so not a biggie!

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

TABLE OF RESULTS - VM - SQL DB

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

SERVER TYPE: VM on ESX 3.5.0 Update 2

CPU TYPE: 2x QC E5345 2.33 GHz / NUMBER: 2 vCPU w/ 3GB RAM

HOST TYPE: HP Proliant DL380 2 x E5345 QC 12GB RAM

DAS Type: MSA50 / Disk Type: 10 x 146GB 10k SAS/ RAID LEVEL: Raid10 / Number: 4 x 146 array / Adaptor: P800

##################################################################################

TEST NAME--Av. Resp. Time ms--Av. IOs/sek---Av. MB/sek----

##################################################################################

Max Throughput-100%Read.......______7___..........___7734__........____241___

RealLife-60%Rand-65%Read......_____30___..........___1400__........_____10___

Max Throughput-50%Read........_____7___..........____7621__........____238____

Random-8k-70%Read............._____33___..........____1346__........____10____

sscheller
Contributor

Hi,

I also get strange values with my IBM DS3300.

Jumbo frames are activated, over 2 HP 1800-8G switches.

But performance on the DS3300 is poor.

Any ideas?

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

TABLE OF RESULTS

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

SERVER TYPE: VM

CPU TYPE / NUMBER: VCPU / 1

HOST TYPE: x3650, 34GB RAM; 2x AMD Quad Core, 2,33 GHz

STORAGE TYPE / DISK NUMBER / RAID LEVEL: IBM DS3300 x 1 (iSCSI) / 8 Disks / RAID 6

##################################################################################

TEST NAME--Av. Resp. Time ms--Av. IOs/sek---Av. MB/sek----

##################################################################################

Max Throughput-100%Read........___72.88__..........__822.52__.........__25.70__

RealLife-60%Rand-65%Read......__129.78__..........__452.21__.........___3.53__

Max Throughput-50%Read.........._1193.98__..........___50.92__.........___1.59__

Random-8k-70%Read.................__130.34__..........__452.25__.........___3.53__

EXCEPTIONS: CPU Util.-XX%;

##################################################################################

christianZ
Champion

I would disable jumbo frames - the most important config for iSCSI is flow control - jumbo frames don't work well on every gigabit switch.

Are you using nics or hbas for iscsi?

Regards,

Christian

sscheller
Contributor

Hi,

sorry, what are HBAs? :)

We are using NICs.

On the switches the ports are set to auto, but on the ESX4 server they are set to 1000-Full.

On the DS3300 you can't change the controller settings.

Please tell me why jumbo frames are not so good, but flow control is a must?
