Hello everybody,
the old thread seems to be sooooo looooong, so I decided (after a discussion with our moderator oreeh - thanks, Oliver) to start a new thread here.
Oliver will make a few links between the old and the new one and then he will close the old thread.
Thanks for joining in.
Regards,
Christian
Hi again,
after tfapps gave me the AV tip, I created a new VM with Windows 2003, and wow, my storage is fast. Here are my new results.
SERVER TYPE: VM Windows 2003, 1GB RAM
CPU TYPE / NUMBER: 1 VCPU
HOST TYPE: IBM x3650, 18GB RAM, 2x 5430, 2,6 GHz QC
STORAGE TYPE / DISK NUMBER / RAID LEVEL: IBM DS3400 (1024MB CACHE/Dual Cntr) 11x SAS 15k 300GB / R6
SAN TYPE / HBAs : FC, QLA2432 HBA
##################################################################################
TEST NAME--Av. Resp. Time ms----Av. IOs/sek---Av. MB/sek----
##################################################################################
Max Throughput-100%Read_____5_____11010_____344
RealLife-60%Rand-65%Read_____20_____1642_____12
Max Throughput-50%Read_____11_____5029_____157
Random-8k-70%Read_____20_____1790_____13
##################################################################################
Does anybody know how I can use IOmeter to get the current IOPS from a system? With the Idle specification maybe? I want to check some Notes and DB2 servers that are physical.
Thanks
Sebi
Just use esxtop or vscsistats for VMs.
http://communities.vmware.com/docs/DOC-10084
http://communities.vmware.com/docs/DOC-10095
For physical systems, use tools like iostat or perfmon. Note that some tools report IOPS under a different name.
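To add to the physical-side answer: on Linux you can derive the same number yourself from the cumulative counters in /proc/diskstats (field 4 = reads completed, field 8 = writes completed); perfmon's "Disk Transfers/sec" counter is the Windows equivalent. A minimal sketch; the device name `sda` is just an example:

```python
# Sketch: measure current IOPS on a physical Linux box by sampling
# /proc/diskstats twice and taking the delta of the completed-I/O counters.
import time

def parse_diskstats(text):
    """Return {device: completed reads + completed writes} from /proc/diskstats content."""
    ios = {}
    for line in text.splitlines():
        f = line.split()
        if len(f) < 8:
            continue
        # f[2] = device name, f[3] = reads completed, f[7] = writes completed
        ios[f[2]] = int(f[3]) + int(f[7])
    return ios

def current_iops(device, interval=1.0):
    """Sample twice, `interval` seconds apart, and return I/Os per second."""
    with open("/proc/diskstats") as fh:
        before = parse_diskstats(fh.read())
    time.sleep(interval)
    with open("/proc/diskstats") as fh:
        after = parse_diskstats(fh.read())
    return (after[device] - before[device]) / interval

# Usage on a Linux box: print(current_iops("sda"))
```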
SERVER TYPE: Windows XP VM w/ 1GB RAM on ESXi 4
CPU TYPE / NUMBER: VCPU / 1
HOST TYPE: Sun SunFire x4150, 48GB RAM; 2x XEON E5450, 2.992 GHz
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Two EQL PS6000E's with / 14+2 SATA Disks / R50
##################################################################################
TEST NAME | Av. Resp. Time ms | Av. IOps | Av. MB/s
##################################################################################
Max Throughput-100%Read | 15.025 | 3915.89 | 122.37
RealLife-60%Rand-65%Read | 12.20 | 3324.92 | 25.97
Max Throughput-50%Read | 13.18 | 4460.97 | 139.40
Random-8k-70%Read | 13.40 | 3033.14 | 23.69
EXCEPTIONS: CPU%= 44 - 66 - 40 - 63
Using iSCSI with the software initiator. 4 NICs, each with a VMkernel port assigned to it.
##################################################################################
This is from my pair of Equallogic PS6000E's.
The test was performed on a virtual disk that had not been formatted with NTFS.
I'm not sure if this was the right way to go about doing this test, or whether it should have been formatted with NTFS first.
Do these numbers seem right?
SERVER TYPE: VM Windows Server 2008, 1GB RAM
CPU TYPE / NUMBER: 1 VCPU
HOST TYPE: SUN X4170, 24GB RAM, 2.5 GHz, Dual XEON NEHALEM
STORAGE TYPE / DISK NUMBER / RAID LEVEL: PILLAR DATA SYSTEMS AXIOM 500 (24 GB CACHE / 8 Ctrl) / 30 x SATA 7200 RPM /R5
SAN TYPE / HBAs : 1 x FC 4, QLOGIC 2432 using single path
##########################################################################
TEST NAME | Av. Resp. Time ms | Av. IOps | Av. MB/s | CPU
##########################################################################
Max Throughput-100%Read | 5.4486 | 10965 | 342.67 | 21%
RealLife-60%Rand-65%Read | 1.4193 | 34207 | 286.78 | 47%
Max Throughput-50%Read | 4.7475 | 12176 | 379.71 | 24%
Random-8k-70%Read | 1.3620 | 34443 | 300.25 | 46%
First test on the new ESX 4.0; results are as expected compared to our old tests with ESX 3.
SERVER TYPE: Win2k8 64bit SP1 VM (4GB RAM, 100GB vmdk) on ESX 4.0
CPU TYPE / NUMBER: VCPU / 4
HOST TYPE: HP DL380 G6 - 60GB RAM - 2x Xeon 5560 2.8GHz Quadcore
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Equallogic PS3700X / 16x 400GB 10k 3,5" SAS / Raid 50
SAN TYPE / HBAs : iSCSI; 3x vmk <-> 3x GB pNIC (3 different Intel ET 1000 Quad Port), Jumbo Frames, VMware Round Robin
##################################################################################
TEST NAME | Av. Resp. Time ms | Av. IOps | Av. MB/s
##################################################################################
Max Throughput-100%Read | 14.3370 | 4169.41 | 130.18
(VMware shows 140MB/s read on that vmhba while the benchmark is running)
RealLife-60%Rand-65%Read | 13.9840 | 3302.82 | 25.80
Max Throughput-50%Read | 12.5621 | 4768.15 | 149.00
Random-8k-70%Read | 14.8072 | 3104.66 | 24.26
EXCEPTIONS: CPU Util. 10-5-5-10%
Hi all - here's the result of several tests over the course of the day, averaged together, for our EVA 6000.
What I can't figure out is why the "Max Throughput-50%Read" IOPS are so low and the response time so high... any ideas? The rest of the numbers look good AFAIK.
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
SERVER TYPE: VM – Windows 2003 STD r2 SP2
CPU TYPE / NUMBER: VCPU / 1
HOST TYPE: HP DL585 G2, 128GB RAM; 4x DualCore AMD Opteron 8220, 2.8GHz
STORAGE TYPE / DISK NUMBER / RAID: EVA 6100 / 98x 300GB 10k FC / R5
Notes: 12GB RDM attached to VM, IOmeter run on RAW disk (no FS)
Test Name | Av Resp Time ms | Av IOPS | Av MB/s | Av CPU Utl |
Max Throughput-100%Read | 5.73 | 10361 | 24 | 41 |
RealLife-60%Rand-65%Read | 10.16 | 3918 | 31 | 56 |
Max Throughput-50%Read | 73.84 | 788 | 25 | 14 |
Random-8k-70%Read | 11.89 | 2798 | 22 | 61 |
Your results are similar to (but slightly below) those I had on a sync-mirrored pair of EVA6400's with 23 and 48 disks here: http://communities.vmware.com/thread/197844?start=90&tstart=0
Do you have any HP agents installed on your ESX hosts? By default they issue SCSI reservations every 60 seconds, which can affect your performance negatively.
See here: http://docs.hp.com/en/5991-2731/ch02s04.html
hpimafcad polls the FC HBAs, which in turn results in SCSI reservations.
Lars
Lars... We are indeed running the HP agents (8.2.5), so I went ahead and disabled all the SNMP agents during testing, and I'm getting the same results, so I don't think they are skewing anything.
I did, however, realise that I have read caching and write-through enabled on the test LUN, so I turned that off, and I'm now getting MUCH lower IO readings and much higher latencies.
My question is: are the results posted on this board (yours with the 6400, for instance) taken with read and write caching on or off for the LUN? Thanks.
My results are also going to be skewed because I am running the IOmeter test on production ESX boxes that are running workloads of 20-30 VMs each at the time of testing; that's why I tried to do tests at different times of the day and average them all together.
Sorry for the ignorance. I am very new to IOmeter and I'm really trying to understand how to run these tests. A few questions...
1. When running these tests from inside a VM, should I be running them on a yellow icon (the VM's C: drive itself), or should I be creating a new unformatted drive that will show up in blue? Are people standardizing here?
2. Should I be using the .icf file from the "original" storage performance thread?
3. When running these tests, are people running them on LUNs in production with VMs on them? Off-hours, so disk activity is minimal? For example, my ESX box is connected to an iSCSI SAN with a single LUN/datastore on it that has 10 VMs. Should I just run the tests during normal hours? Or are people shutting down all VMs on the datastore before running these, etc.?
4. I am really confused about the results people are posting. Everyone lists an "Av. IOs/sek" column and an "Av. MB/sek" column, but I don't even see those in my CSV results file (??). Are those the same as "Total I/Os per Second" and "Total MBs per Second" in my spreadsheet? If not, where am I supposed to look?
5. Which of the 4 tests is the best indicator of overall VM performance?
Sorry for the ignorance.
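Regarding question 4: the "Av. IOs/sek" and "Av. MB/sek" figures people post are just the summary IOPS and MBps from the results CSV. A minimal sketch of pulling them out, assuming your file has a header row with "IOps" and "MBps" columns and an "ALL" summary row (the exact header names vary between IOmeter versions, so check your own file); the sample values are taken from results posted earlier in this thread:

```python
# Sketch: extract the summary row from an IOmeter results CSV.
# Assumes a header row containing an "IOps" column and a summary row
# whose first field is "ALL" -- verify against your own results file.
import csv, io

def summary_row(csv_text):
    """Return the 'ALL' summary row as a {column name: value} dict."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    header = next(r for r in rows if "IOps" in r)
    summary = next(r for r in rows if r and r[0] == "ALL")
    return dict(zip(header, summary))

sample = (
    "'Target Type','Target Name',IOps,MBps,'Average Response Time'\n"
    "ALL,All,3915.89,122.37,15.03\n"
)
result = summary_row(sample)
print(result["IOps"], result["MBps"])
```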
ESXi 4
MD3000i Dual Controller w/300GB 15K SAS (Hitachi)
2 x Dell R610 (32GB, 4x E5520 @ 2.27GHz, 4x Broadcom Embedded, 1 Intel Quad GbE Card)
2 x PowerConnect 5424 dedicated to iSCSI
Using Configuration Guidelines from Dell Business Ready Solutions Guide:
Windows 2003 Server R2 (32Bit)
6/1 Disk - Raid 5
Jumbo Frames
Test Name | IOPs | MB/s | Avg RT | CPU |
MAX Throughput-100%Read | 3702.74 | 115.71 | 16.26 | -- |
RealLife-60%Rand-65%Read | 2019.69 | 15.78 | 23.47 | -- |
Max Throughput-50%Read | 3649.02 | 114.03 | 16.80 | -- |
Random-8k-70%Read | 2062.32 | 16.11 | 21.37 | -- |
Windows 2003 Server R2 (32Bit)
4 Disk - Raid 10
Jumbo Frames
Test Name | IOPs | MB/s | Avg RT | CPU |
MAX Throughput-100%Read | 4010.49 | 125.33 | 14.95 | 19.21 |
RealLife-60%Rand-65%Read | 1766.44 | 13.80 | 28.42 | 33.38 |
Max Throughput-50%Read | 3449.38 | 107.79 | 17.67 | 15.88 |
Random-8k-70%Read | 1733.58 | 13.54 | 28.24 | 36.15 |
Windows 2008 Server R2 (64Bit)
8 Disk - Raid 10
Jumbo Frames
Test Name | IOPs | MB/s | Avg RT | CPU |
MAX Throughput-100%Read | 4026.03 | 125.81 | 14.89 | 19.45 |
RealLife-60%Rand-65%Read | 2959.78 | 23.12 | 16.69 | 34.38 |
Max Throughput-50%Read | 3778.54 | 118.08 | 15.78 | 18.81 |
Random-8k-70%Read | 3002.24 | 23.45 | 16.17 | 36.97 |
Windows 2008 Server R2 (64Bit)
8 Disk - Raid 10
No Jumbo Frames
Test Name | IOPs | MB/s | Avg RT | CPU |
MAX Throughput-100%Read | 3991.21 | 124.21 | 15.02 | 19.95 |
RealLife-60%Rand-65%Read | 2969.25 | 23.20 | 16.45 | 35.02 |
Max Throughput-50%Read | 4984.29 | 155.76 | 12.01 | 20.39 |
Random-8k-70%Read | 3078.36 | 24.05 | 15.68 | 38.43 |
Windows 2003 Server R2 (32Bit)
8 Disk - Raid 10
Jumbo Frames Enabled
Test Name | IOPs | MB/s | Avg RT | CPU |
MAX Throughput-100%Read | 4012.90 | 125.40 | 14.95 | 18.72 |
RealLife-60%Rand-65%Read | 3083.42 | 24.09 | 16.33 | 32.47 |
Max Throughput-50%Read | 3453.48 | 107.92 | 17.66 | 15.84 |
Random-8k-70%Read | 3218.32 | 25.14 | 15.30 | 36.15 |
Windows 2003 Server R2 (32Bit)
8 Disk - Raid 10
No Jumbo Frames
Test Name | IOPs | MB/s | Avg RT | CPU |
MAX Throughput-100%Read | 3976.41 | 124.26 | 15.08 | 17.74 |
RealLife-60%Rand-65%Read | 3081.79 | 24.08 | 16.33 | 32.59 |
Max Throughput-50%Read | 3756.98 | 117.41 | 16.35 | 15.68 |
Random-8k-70%Read | 3199.10 | 24.99 | 15.35 | 36.79 |
Simultaneous Tests (VM1 on ESX1 and VM2 on ESX2)
VM1(ESX1): Windows 2008 Server R2 (64Bit)
VM2(ESX2): Windows 2003 Server R2 (32Bit)
8 Disk - Raid 10
Jumbo Frames Enabled
Test Name | IOPs | MB/s | Avg RT | CPU |
VM1: RealLife-60%Rand-65%Read | 1578.28 | 12.33 | 29.56 | 38.72 |
VM1: Max Throughput-50%Read | 3497.63 | 109.30 | 16.30 | 18.56 |
VM2: RealLife-60%Rand-65%Read | 1600.80 | 12.51 | 29.70 | 37.30 |
VM2: Max Throughput-50%Read | 2928.81 | 91.53 | 20.15 | 18.69 |
VM1(ESX1): Windows 2008 Server R2 (64Bit)
VM2(ESX2): Windows 2003 Server R2 (32Bit)
8 Disk - Raid 10
No Jumbo Frames
Test Name | IOPs | MB/s | Avg RT | CPU |
VM1: RealLife-60%Rand-65%Read | 1633.00 | 12.76 | 28.40 | 38.79 |
VM1: Max Throughput-50%Read | 4278.38 | 133.70 | 13.16 | 19.69 |
VM2: RealLife-60%Rand-65%Read | 1552.82 | 12.13 | 30.94 | 36.87 |
VM2: Max Throughput-50%Read | 2801.06 | 87.53 | 20.40 | 21.64 |
Hrm. I'm trying to run this, but I'm getting really out-there numbers.
Like 41,000 IOPS and over 1200MB/s on a 6x 300GB 15k array in RAID 6 (local storage) on Server 2008 64-bit.
How many iSCSI links did you have from your host servers to your storage for those numbers? Just curious.
Thanks!
Hi, we have a pretty basic setup, and probably far from best practice, but here are our results. Our array takes a massive hit on 60% random / 65% read. Looks like our results are pretty average. I think I should look at separating the iSCSI traffic, at least onto its own VLAN. I won't be able to get a dedicated switch.
VM tested is on our busiest ESX host and LUN.
Antivirus was on during test.
ESXi 4
MD3000i Dual Controller w/146GB 15K SAS
2 x Dell 2950 (32GB, 2 x E5450@3.00GHz (8CPU), 2 x Broadcom Embedded, 1 Intel Quad GBe Card)
1 x Cisco 3560G for all traffic. iSCSI is not separated onto its own VLAN unfortunately.
2 x Gb nic for iSCSI traffic using software initiator and 2 x Gb nic for VM network traffic on each host.
VM: Windows 2003 Server R2 (32Bit)
7 Disk - Raid 5
No Jumbo Frames
(Results table garbled in posting; tests run were VM1: Max Throughput-100%Read, RealLife-60%Rand-65%Read, and Max Throughput-50%Read; the IOPs and CPU values were lost.)
Sorry for formatting.
Hello All!!
I am working on benchmarking a new storage product in my environment. I have built out a VMmark environment; however, I am getting very low scores, and it seems that VMmark does not really push the SAN too hard. I really want to get SAN benchmark data from a typical ESX environment, which is why I wanted to use VMmark. I stumbled upon this thread and have some questions.
Are you just running IOmeter in a Windows VM and posting the results? Am I missing something? Can anyone suggest other ways of getting SAN performance numbers out of ESX? Perhaps by leveraging my existing VMmark environment to generate a load, but then using something else to show scores/ratings?
Any and all ideas would be appreciated. I have considered ramping up the VMmark load and then watching esxtop; however, I don't know how to increase the load across all of the VMs in VMmark.
THANK YOU!!!
Hi,
I got an EXP3000 with 12x 300GB 15k HDDs for my DS3400. I tested some RAID levels and so on with the new free space.
Now I need some tips from you on how to configure my new expansion. I need some fast IOs because I want to set up a DB2 and a Notes server. For the log files I thought I'd take 4 HDDs in a RAID 10 and be done, but after my tests I saw that the performance isn't very good. I don't want to take 10 HDDs in RAID 10 with 1.5 TB only for my log files.
So should I build a new RAID 5 or 6 with the 11 HDDs? Or maybe expand the RAID 6 on my DS3400 and get 22 HDDs in a RAID 6?
I can't test the 22-HDD RAID 6, so I hope someone has some info for me.
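A back-of-envelope way to compare the layouts you're considering is the classic write-penalty formula: effective IOPS is roughly (spindles x per-disk IOPS) / (read% + penalty x write%). The numbers below (about 175 IOPS per 15k spindle; penalties of 2/4/6 for RAID 10/5/6) are rule-of-thumb assumptions, not measurements:

```python
# Rough spindle-count comparison for the RealLife profile (60% random,
# 65% read). DISK_IOPS and the penalties are rule-of-thumb assumptions.
DISK_IOPS = 175          # assumed small-block IOPS per 15k SAS spindle
PENALTY = {"RAID10": 2, "RAID5": 4, "RAID6": 6}

def host_iops(disks, level, read_frac=0.65):
    """Estimated host-visible IOPS for a given spindle count and RAID level."""
    write_frac = 1.0 - read_frac
    raw = disks * DISK_IOPS
    # Each logical write costs PENALTY back-end I/Os; reads cost one.
    return raw / (read_frac + PENALTY[level] * write_frac)

for disks, level in [(4, "RAID10"), (10, "RAID10"), (11, "RAID5"), (22, "RAID6")]:
    print(f"{disks:2d} disks {level}: ~{host_iops(disks, level):.0f} IOPS")
```

An array with a write-back cache will beat these estimates in practice, as the measured DS3400 numbers show; the sketch is only useful for ranking the layouts against each other.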
And here are some results from my tests:
SERVER TYPE: VM Windows 2003, 1GB RAM
CPU TYPE / NUMBER: 1 VCPU
HOST TYPE: IBM x3650 M2, 34GB RAM, 2x X5550, 2,66 GHz QC
STORAGE TYPE / DISK NUMBER / RAID LEVEL: IBM DS3400 (1024MB CACHE/Dual Cntr) 11x SAS 15k 300GB / R6 + EXP3000 (12x SAS 15k 300GB) for the tests
SAN TYPE / HBAs : FC, QLA2432 HBA
##################################################################################
RAID10- 10HDDs -
Av. Resp. Time ms----Av. IOs/sek---Av. MB/sek----
##################################################################################
Max Throughput-100%Read_______5,8_______________9941_______310
RealLife-60%Rand-65%Read_____16,7______________3083_________24
Max Throughput-50%Read________12,6______________4731________147
Random-8k-70%Read___________15,5______________3201________25
##################################################################################
##################################################################################
RAID10- 4HDDs -
Av. Resp. Time ms----Av. IOs/sek---Av. MB/sek----
##################################################################################
Max Throughput-100%Read_______5,6_______________10402_______325
RealLife-60%Rand-65%Read_____36,8______________1467_________11
Max Throughput-50%Read________12,1______________4873________152
Random-8k-70%Read___________37,2______________1427________11
##################################################################################
##################################################################################
RAID5- 10HDDs -
Av. Resp. Time ms----Av. IOs/sek---Av. MB/sek----
##################################################################################
Max Throughput-100%Read_______5,9_______________9656_______301
RealLife-60%Rand-65%Read_____20,7______________2374_________18
Max Throughput-50%Read________7,8______________4937________154
Random-8k-70%Read___________20,4______________2551________19
##################################################################################
##################################################################################
RAID6- 10HDDs -
Av. Resp. Time ms----Av. IOs/sek---Av. MB/sek----
##################################################################################
Max Throughput-100%Read_______5,7_______________9827_______307
RealLife-60%Rand-65%Read_____23,2______________1850_________14
Max Throughput-50%Read________12______________4858________151
Random-8k-70%Read___________21,3______________2005________16
##################################################################################
Thanks
Sebi
Here are my results:
SERVER TYPE: VM Windows 2003 SP2, 1GB RAM
CPU TYPE / NUMBER: 2 VCPU
HOST TYPE: ESXi 4 U1, HP DL380 G6, 64GB RAM, 2x E5520, 2,27 GHz QC
STORAGE TYPE / DISK NUMBER / RAID LEVEL: HP EVA 4400 (2048MB Cache per Controller) 24x FC 10k 300GB
SAN TYPE / HBAs : FC, HP FC1142SR QLogic HBA, HP StorageWorks 8/8 San Switches
##################################################################################
RAID5- 24HDDs -
Av. Resp. Time ms----Av. IOs/sek---Av. MB/sek----
##################################################################################
Max Throughput-100%Read_______5,3______________10900_______340,6
RealLife-60%Rand-65%Read______14,8______________2999________23,4
Max Throughput-50%Read________32,3______________1627________50,8
Random-8k-70%Read____________16,2______________2836________22,1
##################################################################################
It's strange that the EVA seems to perform so poorly in the Max Throughput-50%Read test. This, however, is not the case when performing the test on a physical host with Windows Server 2008. I have seen that other users with EVAs see similar impacts in the 50%/50% test. Any ideas why this might be the case?
Here are my results: a brand-new Equallogic PS4000, half filled.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
TABLE OF RESULTS
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
SERVER TYPE: 2008 R2 VM ON ESXi 4.0 U1
CPU TYPE / NUMBER: VCPU / 1 / 1GB Ram
HOST TYPE: Dell PE R710, 24GB RAM; XEON X5550 2,66 GHz, Dual Quad
STORAGE TYPE / DISK NUMBER / RAID LEVEL: EQL PS4000 x 1 / 7 +1 Raid 5 10K SAS Drives
SAN TYPE / HBAs : iSCSI, SWISCSI, 2x Intel 1000PT Dual Port Nics, One connection on each
MPIO enabled - Jumbo Frames Enabled - 6 iSCSI connections to Volume - 2x Dell PC 5424 Switches
##################################################################################
TEST NAME | Av. Resp. Time ms | Av. IOps | Av. MB/s
##################################################################################
Max Throughput-100%Read | 15 | 3776 | 118
RealLife-60%Rand-65%Read | 13 | 3345 | 26
Max Throughput-50%Read | 21 | 2683 | 83
Random-8k-70%Read | 18 | 2477 | 19
EXCEPTIONS: n/a
Wow, talk about a yo-yo...
-
SERVER TYPE: 2008 R2 VM ON ESX 4.0 U1
CPU TYPE / NUMBER: VCPU / 1 / 2GB Ram
HOST TYPE: HP BL460 G6, 32GB RAM; XEON X5520
STORAGE TYPE / DISK NUMBER / RAID LEVEL: EMC CX4-240 / 3x 300GB 15K FC / RAID 5
SAN TYPE / HBAs: 8Gb Fiber Channel
Test Name | Avg. Response Time | Avg. I/O per Second | Avg. MB per Second | CPU Utilization |
Max Throughput - 100% Read | 5.03 | 12,029.33 | 375.92 | 21.87 |
Real Life - 60% Rand / 65% Read | 42.81 | 1,074.93 | 8.39 | 19.57 |
Max Throughput - 50% Read | 3.63 | 16,444.30 | 513.88 | 29.67 |
Random 8K - 70% Read | 51.44 | 1,039.38 | 8.12 | 14.01 |
-
SERVER TYPE: 2008 R2 VM ON ESX 4.0 U1
CPU TYPE / NUMBER: VCPU / 1 / 2GB Ram
HOST TYPE: HP BL460 G6, 32GB RAM; XEON X5520
STORAGE TYPE / DISK NUMBER / RAID LEVEL: EMC CX4-240 / 5x 1TB 7.2K SATA / RAID 5
SAN TYPE / HBAs: 8Gb Fiber Channel / QLogic
Test Name | Avg. Response Time | Avg. I/O per Second | Avg. MB per Second | CPU Utilization |
Max Throughput - 100% Read | 5.05 | 11,896.71 | 371.77 | 55.42 |
Real Life - 60% Rand / 65% Read | 90.51 | 574.87 | 4.49 | 29.05 |
Max Throughput - 50% Read | 3.99 | 14,371.41 | 449.10 | 70.61 |
Random 8K - 70% Read | 109.86 | 482.12 | 3.76 | 27.25 |
I've seen this too on an EVA 8000: (http://communities.vmware.com/message/1350705#1350705)
Someone suggested it might be because of vRAID5 on the LUN we are using. Which vRAID are you using for that LUN?