VMware Cloud Community
christianZ
Champion

New !! Open unofficial storage performance thread

Hello everybody,

The old thread seems to be sooooo looooong - therefore I decided (after a discussion with our moderator oreeh - thanks, Oliver) to start a new thread here.

Oliver will add a few links between the old thread and the new one, and then he will close the old thread.

Thanks for joining in.

Regards,

Christian

574 Replies
Mnemonic
Enthusiast

I think your performance numbers look about right. FATA disks are MUCH slower than SAS. You also have only half the RPM, but the big difference, I reckon, is that FATA gives poor RAID5 performance. I think you should try to run RAID10 on them. I think you will get a much better result, but of course you will lose much more capacity.
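
To put rough numbers on the RAID5 vs. RAID10 argument, here is a quick back-of-the-envelope sketch (my own illustration, not measured data from this thread) using the usual rule-of-thumb write penalties; the per-disk IOPS figures and disk counts are assumptions for illustration only:

```python
# Rule-of-thumb estimate of host-visible random IOPS for a disk group.
# Illustrative assumptions only: write penalty 2 for RAID10, 4 for RAID5, 6 for RAID6,
# and rough per-disk random IOPS figures for 7.2k FATA/SATA and 15k SAS drives.

WRITE_PENALTY = {"RAID10": 2, "RAID5": 4, "RAID6": 6}

def effective_iops(disks, iops_per_disk, raid, read_fraction):
    """Host-visible random IOPS: each read costs 1 backend IO, each write costs the write penalty."""
    backend_iops = disks * iops_per_disk
    penalty = WRITE_PENALTY[raid]
    return backend_iops / (read_fraction + (1.0 - read_fraction) * penalty)

if __name__ == "__main__":
    # 74x FATA 7.2k disks (assume ~75 IOPS each) vs. 4x 15k SAS (assume ~175 IOPS each),
    # for a 65% read / 35% write random mix like the "RealLife" test in this thread.
    print("74x FATA, RAID5   :", round(effective_iops(74, 75, "RAID5", 0.65)))
    print("74x FATA, RAID10  :", round(effective_iops(74, 75, "RAID10", 0.65)))
    print("4x 15k SAS, RAID10:", round(effective_iops(4, 175, "RAID10", 0.65)))
```

Controller cache, stripe size and the exact workload mix will move these numbers around a lot, so treat this only as a sanity check.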

ericmba
Contributor

Why would RAID 10 give better performance? I'm curious to know how throwing expensive disks at the problem is going to give better performance.

Eric

Mnemonic
Enthusiast

Feel free to correct me if I am wrong. :)

In my experience it gives better performance with low-end disks. I am not sure why, but it might be because there is no parity calculation and no read-modify-write involved in the process.

MKguy
Virtuoso

Yeah, a FATA disk at 7,200 RPM isn't very fast by itself, but there are 74 of them. Even with vRAID5-configured LUNs, I would expect better performance than twice what 4 local 15k SAS disks in RAID10 can deliver. Maybe I'll ask the SAN guys whether they can supply us a LUN in vRAID1 to test this. There is also a 48-disk 10k RPM FC disk group on the array, but they seem reluctant to provide some of that space to something else like virtualization.

The SAN disks are also used for other stuff besides virtualization, but those aren't very IO-intensive for the most part.

Obviously, RAID1 is supposed to yield better performance than RAID5 with the same number of disks.

-- http://alpacapowered.wordpress.com
Mnemonic
Enthusiast

As I understand FATA/IDE/ATA disks vs. SAS/SCSI, the problem is that the low-end disks have no built-in intelligence, so the controller has to initiate every little input/output operation itself. Therefore I think you could double the number of disks in your FATA array without getting the performance you're expecting.

But double-check with your SAN people, and please post the results of the RAID1+0 tests.

ericmba
Contributor

I am the SAN guy :) We ONLY use SATA disks for the VMware guys because of the different abstraction layers between the SAN layer and VMware. It's true that RAID and RPM are major factors in performance, but one often overlooked factor is latency: is your application latency-sensitive? In 95% of cases it typically isn't. Also, SAN is vastly different from NAS; however, NAS protocols like NFS are only marginally slower in performance and CPU usage compared to FC, according to e.g. NetApp and VMware.

Other factors to check are the read/write patterns: read/write ratio, sequential vs. random reads and writes, etc. My point is that merely looking at RPM and RAID vastly oversimplifies the scenario.
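
To illustrate the latency point with a worked example (my own sketch, not part of the original post): by Little's law, the IOPS you can reach is bounded by the number of I/Os kept outstanding divided by the per-I/O latency. The queue depths and latencies below are assumed values, not measurements:

```python
# Little's law: achievable IOPS <= outstanding I/Os / average per-I/O latency.
# The queue depths and latencies below are assumed values for illustration only.

def max_iops(outstanding_ios, latency_ms):
    """Upper bound on IOPS for a given queue depth and average latency."""
    return outstanding_ios / (latency_ms / 1000.0)

if __name__ == "__main__":
    for qd in (1, 8, 32, 64):
        for lat_ms in (0.5, 5.0, 20.0):  # e.g. cached read, fast random IO, slow random IO
            print(f"queue depth {qd:2d}, {lat_ms:4.1f} ms latency -> at most {max_iops(qd, lat_ms):8.0f} IOPS")
```

That is why a deeply queued, latency-insensitive workload can look fine on SATA while a single-threaded, latency-sensitive application suffers.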

Cheers,

Eric

Mnemonic
Enthusiast

That was not my point.

My point is that FATA disks are not for production applications like VMware, in my opinion. If you can afford an EVA 8000, you should be able to afford decent disks. No offense. Sorry.

I am just saying that in my experience FATA disks do not perform very well with RAID5, and I would bet that if you take ALL 75 FATA disks and turn them into a RAID10, you will get more than double the performance of the RAID5 results MK posted. You are most welcome to prove me wrong, but I am not going to join a purely verbal debate, so unless you actually test this I am out of the discussion.

ericmba
Contributor

Hi again,

Your points are fair; I agree that FATA does not meet every production environment's performance needs. Given that this is the VMware forum, I thought I'd mention that we use SATA in our VMware production environment and have had NO complaints about performance. We have even moved some production workloads FROM FC to SATA. Still no complaints. We also use a solution that offers a better TCO than the EVA (hint: RAID 10 is expensive); the EVAs are being phased out. IBM and NetApp remain.

Like you, I don't intend to start a debate based on opinions; we agree on that. Yet my point that you oversimplify the "problem" remains. You are focused on RAID as the only factor and forgetting all the others; it's too simplistic.

Finally, I am not here to prove you wrong, I am here to learn. I had never heard of FATA disks, but now I know they are what we call SATA disks.

http://en.wikipedia.org/wiki/FATA_(hard_drive)

Cheers,

Eric

Mnemonic
Enthusiast

FATA is like SATA, only Fiber attached.

larstr
Champion

There's an absolutely excellent article comparing the inner details of different disk types here:

http://www.snia.org/education/tutorials/2008/spring/storage/Whittington-W_Desktop_Nearline_Enterpris...

The advantage of FATA over SATA is that FATA's transfer speed is faster as long as you are working with data that is already in cache. Still, the SATA wire protocol (rev 3) is catching up and is now 6 Gb/s while FC is at 8 Gb/s (the same goes for SAS).
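
As a rough conversion of those link speeds into payload bandwidth (my own sketch; it assumes both links use 8b/10b encoding, i.e. 10 bits on the wire per data byte, and ignores protocol overhead):

```python
# Nominal payload bandwidth of a serial link using 8b/10b encoding:
# 10 bits on the wire carry 1 data byte, so MB/s ~= Gbit/s * 1000 / 10.
# Illustrative only; protocol overhead and the disks keep real throughput lower.

def payload_mb_per_s(gbit_per_s):
    return gbit_per_s * 1000 / 10

for name, gbit in [("SATA rev 3", 6), ("8 Gb FC", 8)]:
    print(f"{name:10s} {gbit} Gb/s -> ~{payload_mb_per_s(gbit):.0f} MB/s payload")
```

In practice the disks behind the link, not the wire speed, are the bottleneck for random I/O.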

Lars

joachims1
Contributor

We are currently testing a Sun Storage 7410 as a replacement for our NetApp FAS3020. I just thought I'd post my findings. The production 7410 (if any) will be connected via 10 Gbit.

The results were obtained from a Windows 2008 VM with a single vCPU. I have tested the NetApp with both NFS and FC.

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
TABLE OF RESULTS
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

SERVER TYPE: VM ON VI4
CPU TYPE / NUMBER: VCPU / 1
HOST TYPE: Supermicro, 64 GB RAM; 4x Xeon E5430, 2.66 GHz, QC
STORAGE TYPE / DISK NUMBER / RAID LEVEL: NetApp FAS3020 / 14x 300 GB 10k / R6
SAN TYPE / HBAs: NFS over 1 Gbit NIC

TEST NAME                      Av. Resp. Time (ms)   Av. IOs/sec   Av. MB/sec
Max Throughput-100%Read                 17               3391          105
RealLife-60%Rand-65%Read                38               1359           10
Max Throughput-50%Read                  17               3472          108
Random-8k-70%Read                       50               1051            8

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
TABLE OF RESULTS
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

SERVER TYPE: VM ON VI4
CPU TYPE / NUMBER: VCPU / 1
HOST TYPE: Supermicro, 64 GB RAM; 4x Xeon E5430, 2.66 GHz, QC
STORAGE TYPE / DISK NUMBER / RAID LEVEL: NetApp FAS3020 / 14x 300 GB 10k / R6
SAN TYPE / HBAs: FC, QLogic 8 Gb (the NetApp is only 2 Gb, I think)

TEST NAME                      Av. Resp. Time (ms)   Av. IOs/sec   Av. MB/sec
Max Throughput-100%Read                 10               5508          172
RealLife-60%Rand-65%Read                22               1303           10
Max Throughput-50%Read                   9               6397          199
Random-8k-70%Read                       30                764            6

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
TABLE OF RESULTS
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

SERVER TYPE: VM ON VI4
CPU TYPE / NUMBER: VCPU / 1
HOST TYPE: Supermicro, 64 GB RAM; 4x Xeon E5430, 2.66 GHz, QC
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Sun 7410 / 11x 1 TB + 18 GB SSD write + 100 GB SSD read
SAN TYPE / HBAs: NFS over 1 Gbit NIC

TEST NAME                      Av. Resp. Time (ms)   Av. IOs/sec   Av. MB/sec
Max Throughput-100%Read                 17               3421          106
RealLife-60%Rand-65%Read                 6               7771           60
Max Throughput-50%Read                  11               5321          166
Random-8k-70%Read                        6               2662           60

AGrueninger
Contributor

This is a response to Joachim. The reply button is missing in his message.

The results for the Sun 7410 can be misleading. The original settings in the .icf file define a 400 MB test file.

I assume you have at least 64 GB RAM installed in the Sun 7410 and that you used SATA disks in a mirrored configuration (= 4 vdevs) with 3 spares.

You should pay attention to the following:

- The 400 MB test file will be completely cached in the ARC, which can hold about 24 GB of cached data.

- You have the Readzilla, where additional data is cached in the L2ARC.

- You use NFS, where the VMDKs are thin provisioned by ZFS.

If you start the tests right away you will get the best results. As the thin-provisioned VMDK fills out, the I/Os will drop.

You should then use an Iometer test file that has the size of your expected workload. Of course, you don't know that in advance.

For the same tests I used a 100 GB test file, and I defined a test in Iometer that writes 32 KB blocks, 100% sequential. I set recordsize=4k and compression=on for the ZFS pool.

After running this test for 10 h I started the tests for this performance thread.

You should use at least 1.5 * (RAM * 0.75 / 2 + size_of_readzillas) as the size of the test file. And you will have a hard time getting it rolled out to this size.

This will give you the performance when the box is heavily loaded.

The (unknown) truth will lie between these two extreme scenarios.
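
For reference, a small sketch of that sizing rule as I read it, 1.5 * (RAM * 0.75 / 2 + size of the Readzillas); the 64 GB RAM and 100 GB Readzilla figures are the assumptions from earlier in this post:

```python
# Sketch of the sizing rule above:
#   minimum test file size = 1.5 * (RAM * 0.75 / 2 + size of the Readzillas)
# i.e. 1.5x the data that could sit in ARC plus L2ARC.
# The 64 GB RAM and 100 GB Readzilla figures follow the assumptions in this post.

def min_test_file_gb(ram_gb, readzilla_gb):
    arc_cacheable_gb = ram_gb * 0.75 / 2  # ~24 GB of cacheable data with 64 GB RAM
    return 1.5 * (arc_cacheable_gb + readzilla_gb)

print(f"Minimum Iometer test file: ~{min_test_file_gb(64, 100):.0f} GB")  # ~186 GB
```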

Best regards

Andreas

sima3
Contributor

Hi all

It's my first post, but I've read a lot.

SERVER TYPE: VM Windows Server 2003, 1 GB RAM
CPU TYPE / NUMBER: 1 vCPU
HOST TYPE: HP DL380 G5, 24 GB RAM; 2x E5320, 1.86 GHz, QC
STORAGE TYPE / DISK NUMBER / RAID LEVEL: IBM DS3400 (512 MB cache / dual ctrl) x 1 / 11x SAS 15k / R5
SAN TYPE / HBAs: FC, Emulex LPe 11000 HBA

##################################################################################
TEST NAME                      Av. Resp. Time (ms)   Av. IOs/sec   Av. MB/sec
##################################################################################
Max Throughput-100%Read                4.459             9987.71       312.12
RealLife-60%Rand-65%Read              23.89              2050.38        16.02
Max Throughput-50%Read                12.03              4832.69       151.02
Random-8k-70%Read                     19.52              2347.00        18.34
##################################################################################

STORAGE TYPE / DISK NUMBER / RAID LEVEL: IBM DS3400 (512 MB cache / dual ctrl) x 1 / 12x SAS 15k / R6
SAN TYPE / HBAs: FC, Emulex LPe 11000 HBA

##################################################################################
TEST NAME                      Av. Resp. Time (ms)   Av. IOs/sec   Av. MB/sec
##################################################################################
Max Throughput-100%Read                4.963             9330.99       291.59
RealLife-60%Rand-65%Read              27.20              1787.03        13.96
Max Throughput-50%Read                13.23              4366.65       136.48
Random-8k-70%Read                     21.49              2024.82        15.82
##################################################################################

STORAGE TYPE / DISK NUMBER / RAID LEVEL: IBM DS3400 (512 MB cache / dual ctrl) x 1 / 12x SAS 15k / R10
SAN TYPE / HBAs: FC, Emulex LPe 11000 HBA

##################################################################################
TEST NAME                      Av. Resp. Time (ms)   Av. IOs/sec   Av. MB/sec
##################################################################################
Max Throughput-100%Read                4.751             9838.42       307.45
RealLife-60%Rand-65%Read              19.80              2639.02        20.62
Max Throughput-50%Read                11.82              4926.51       153.95
Random-8k-70%Read                     16.03              3088.49        24.13
##################################################################################

STORAGE TYPE / DISK NUMBER / RAID LEVEL: IBM DS3400 (512 MB cache / dual ctrl) x 1 / 11x SATA 7.2k / R5
SAN TYPE / HBAs: FC, Emulex LPe 11000 HBA

##################################################################################
TEST NAME                      Av. Resp. Time (ms)   Av. IOs/sec   Av. MB/sec
##################################################################################
Max Throughput-100%Read                5.078             9254.85       289.21
RealLife-60%Rand-65%Read              68.68               775.75         6.06
Max Throughput-50%Read                13.27              4310.58       134.70
Random-8k-70%Read                     74.39               717.43         5.60
##################################################################################

STORAGE TYPE / DISK NUMBER / RAID LEVEL: IBM DS3400 (512 MB cache / dual ctrl) x 1 / 12x SATA 7.2k / R6
SAN TYPE / HBAs: FC, Emulex LPe 11000 HBA

##################################################################################
TEST NAME                      Av. Resp. Time (ms)   Av. IOs/sec   Av. MB/sec
##################################################################################
Max Throughput-100%Read                5.722             8878.52       277.45
RealLife-60%Rand-65%Read              80.62               652.97         5.10
Max Throughput-50%Read                13.47              4280.33       133.76
Random-8k-70%Read                     83.81               612.65         4.79
##################################################################################

STORAGE TYPE / DISK NUMBER / RAID LEVEL: IBM DS3400 (512 MB cache / dual ctrl) x 1 / 12x SATA 7.2k / R10
SAN TYPE / HBAs: FC, Emulex LPe 11000 HBA

##################################################################################
TEST NAME                      Av. Resp. Time (ms)   Av. IOs/sec   Av. MB/sec
##################################################################################
Max Throughput-100%Read                5.045             9305.28       290.79
RealLife-60%Rand-65%Read              49.86              1119.75         8.75
Max Throughput-50%Read                12.78              4492.33       140.39
Random-8k-70%Read                     55.87               972.68         7.60
##################################################################################

best regards

martin

Mnemonic
Enthusiast

It would be nice to know the size of your disks.

sima3
Contributor

The SAS 15k disks are 300 GB; the SATA 7.2k disks are 750 GB.

The LUN size on the SAS RAIDs is 500 GB with a 2 MB block size; on the SATA RAIDs it is 1 TB with a 4 MB block size.

Mnemonic
Enthusiast

Thank you. It is interesting to see the RAID comparisons.

sima3
Contributor

Then it was worth testing. Thank you for your interest.

meistermn
Expert

So the RAID triangle is still valid for the different RAID types: RAID 10 is faster than RAID 5, which is faster than RAID 6. :)

This still holds for hard disks. But what about the new SATA SSDs, PCI SSDs (Fusion-io) or flash storage (RamSan)?


Sebi_1
Contributor

Hi,

So here are my test results. This is a production environment with 16 VMs on 3 LUNs and 10 VMs on the ESX hosts' local storage. For the test I used a fourth, free LUN with 600 GB and a 256 KB segment size. I'm new to the whole storage and IOPS thing, but I can see that my performance isn't very good. At first I thought it was my RAID6, but then I saw the tests from sima3, which are much better.

Is there a difference between XP and 2003 for Iometer tests? Does anyone have any tips for me? Thanks

SERVER TYPE: VM Windows XP, 2 GB RAM
CPU TYPE / NUMBER: 2 vCPU
HOST TYPE: IBM x3650, 18 GB RAM, 2x 5430, 2.6 GHz, QC
STORAGE TYPE / DISK NUMBER / RAID LEVEL: IBM DS3400 (1024 MB cache / dual ctrl) / 11x SAS 15k 300 GB / R6
SAN TYPE / HBAs: FC, QLA2432 HBA

##################################################################################
TEST NAME                      Av. Resp. Time (ms)   Av. IOs/sec   Av. MB/sec
##################################################################################
Max Throughput-100%Read                 20                2925           91
RealLife-60%Rand-65%Read               173                 345            2
Max Throughput-50%Read                  35                1687           52
Random-8k-70%Read                      211                 283            2
##################################################################################

regards

Sebastian

tfapps
Enthusiast

Try disabling your antivirus software. I was shocked when my numbers tripled just from disabling Symantec AV Auto-Protect during the test.
