Hello everybody,
The old thread has gotten sooooo looooong, so after a discussion with our moderator oreeh (thanks, Oliver) I decided to start a new thread here.
Oliver will make a few links between the old and the new one and then he will close the old thread.
Thanks for joining in.
Regards,
Christian
Test name | IOps | MBps | Latency | % CPU Utilization |
---|---|---|---|---|
Max Throughput-100%Read | 6774 | 211.7 | 8.8 | 10.8 |
RealLife-60%Rand-65%Read | 3917 | 30.6 | 11.7 | 21.3 |
Max Throughput-50%Read | 6896 | 215.5 | 8.7 | 10.6 |
Random-8k-70%Read | 3496 | 27.3 | 12.5 | 23.1 |
Latency numbers are basically cut in half. I read somewhere (though maybe not in this forum) that you should use as many workers in IOmeter as the VM has CPUs and cores, but judging by other numbers posted here, I'm guessing most people are only using 1 worker.
Hi there
Need some advice regarding my setup.
Starting with the Servers :
2 x Dell R710, each with a quad-port NIC reserved for iSCSI
48GB Ram Each
San:
Dell MD 3220i setup as :
2 x Controllers with 2GB Cache each
8 x 300GB 10K RPM in RAID 10 in slots 1-8, owned by controller 0
8 x 146GB 15K RPM in RAID 10 in slots 17-24, owned by controller 1
No Hot Spare
2 LUNS :
1 x 1.1TB ( the 8 x 300GB )
1 X 550GB ( the 8 x 146GB )
Now, as you can see, I have space in the SAN for another 8 SAS drives in slots 9-16.
Which of these would be the better solution?
1: Buy 8 x 300GB and create another 1.1TB LUN, owned by one of the existing controllers = 3 LUNs total, with one controller owning 2 LUNs
2: Buy 4 x 300GB and 4 x 146GB and expand both RAID 10 groups already in place = same 2 LUNs, 1 LUN per controller
3: Buy 4 x 300GB and 4 x 146GB, expand both RAID 10 groups, but create 4 LUNs = 2 LUNs per RAID 10, 2 LUNs per controller
Space is not an issue: total data for the business is under 1TB, we still have massive space left, and we run around 10 virtual machines.
Right now both LUNs can achieve ~12,000-13,000 IOPS in IOmeter at 100% read across 4 network cards, and about 250Mbps at 50% read.
Tell me your thoughts.
Regards
OpenPerformanceTest32 against a Nimble CS240; not bad for a bunch of 7.2K drives with some SSD for read cache.
Test name | Latency | Avg iops | Avg MBps | cpu load |
---|---|---|---|---|
Max Throughput-100%Read | 15.89 | 3779 | 118 | 0% |
RealLife-60%Rand-65%Read | 4.44 | 12898 | 100 | 1% |
Max Throughput-50%Read | 11.50 | 5027 | 157 | 0% |
Random-8k-70%Read | 3.78 | 15343 | 119 | 0% |
Old style VMTN communities table:
SERVER TYPE: Windows 2008 R2 VM
CPU TYPE / NUMBER: 5620
HOST TYPE: Cisco C200 M1
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Nimble CS240 / Hybrid
|*TEST NAME*|*Avg Resp. Time ms*|*Avg IOs/sec*|*Avg MB/sec*|*% cpu load*|
|*Max Throughput-100%Read*|15.89|3779|118|0%|
|*RealLife-60%Rand-65%Read*|4.44|12898|100|1%|
|*Max Throughput-50%Read*|11.50|5027|157|0%|
|*Random-8k-70%Read*|3.78|15343|119|0%|
Hi everyone!
I would like your opinion on my results. I feel I am doing something incorrectly, since the MBps in the RealLife and Random cases are dramatically lower. Is that possible? Also, is it normal to get 0% CPU utilization in all the tests?
CPU type: Intel Xeon L5638 HC 2GHz (x2)
Host type: ESXi 5 / 96 Gb RAM / OpenSUSE 12.1 Kernel 3.1.0
Storage type: EMC2 VNX
Disk type: Pool of 30 x 550GB 15K RPM / RAID 5
LUN: 500 GB
Interface: FCoE SW driver (from ESXi vmware): ixgbe
Test name | IOps | MBps | Latency | % CPU Utilization |
---|---|---|---|---|
Max Throughput-100%Read | 3434 | 107 | 17 | 0 |
RealLife-60%Rand-65%Read | 516 | 4 | 115 | 0 |
Max Throughput-50%Read | 2790 | 87 | 21 | 0 |
Random-8k-70%Read | 416 | 3 | 144 | 0 |
To end, I posted some questions about IOmeter in another thread that maybe someone could answer : http://communities.vmware.com/thread/397984
Thanks
Hi all,
You can compare the reports for both storage arrays below:
EMC VNXe 3100:
SERVER TYPE: VM
CPU TYPE / NUMBER: vCPU / 1
HOST TYPE: Cisco UCS 200 M, 24GB RAM; 6 CPUs x Intel Xeon E5645 2.40 GHz
STORAGE TYPE / DISK NUMBER / RAID LEVEL: EMC VNXe 3100 / 6 disks (7,200 RPM) x 868.961 GB / RAID 6
Test name | Avg Resp. Time ms | Avg IOs/sec | Avg MB/sec |
---|---|---|---|
Max Throughput-100%Read | 21.40 | 2825.00 | 88.28 |
RealLife-60%Rand-65%Read | 93.48 | 562.42 | 4.39 |
Max Throughput-50%Read | 15.76 | 3766.27 | 117.69 |
Random-8k-70%Read | 118.96 | 440.12 | 3.43 |
EXCEPTIONS: CPU Util. 32% - 15% - 18% - 15%
SUN STORAGE 7110:
SERVER TYPE: VM
CPU TYPE / NUMBER: vCPU / 1
HOST TYPE: HP DL320 G5p, 8GB RAM; 4 CPUs x Intel Xeon X3320 2.50 GHz
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Sun Storage 7110 / 16 disks (10,000 RPM) x 137 GB / double parity
Test name | Avg Resp. Time ms | Avg IOs/sec | Avg MB/sec |
---|---|---|---|
Max Throughput-100%Read | 21.71 | 2770.693 | 86.58 |
RealLife-60%Rand-65%Read | 84.02 | 699.9 | 5.47 |
Max Throughput-50%Read | 15.56 | 3735.425 | 116.73 |
Random-8k-70%Read | 77.47 | 741.82 | 5.79 |
EXCEPTIONS: CPU Util. 32% - 15% - 18% - 15%
Please, could you provide feedback on the results achieved?
Ingens,
Best regards
SERVER TYPE: VM
CPU TYPE / NUMBER: VCPU / 2
HOST TYPE: DELL R610, 64GB RAM; 2 CPUs x Intel Xeon CPU E5540 2.37 Ghz
STORAGE TYPE / DISK NUMBER / RAID LEVEL: ORACLE PILLAR AX600 r3 - AxiomOne 5.3 / 180 Disks (7.2K RPM) x 512 GB / RAID 5 / QOS : LOW (4 RAID GROUPs)
Test name | Avg Resp. Time ms | Avg IOs/sec | Avg MB/sec |
---|---|---|---|
Max Throughput-100%Read | 4.0021 | 14316.08 | 447.37 |
RealLife-60%Rand-65%Read | 1.632 | 27211.02 | 213.67 |
Max Throughput-50%Read | 2.2343 | 22773.40 | 710.83 |
Random-8k-70%Read | 1.6974 | 24757.88 | 194.68 |
SERVER TYPE: VM
CPU TYPE / NUMBER: VCPU / 2
HOST TYPE: DELL R610, 64GB RAM; 2 CPUs x Intel Xeon CPU E5540 2.37 Ghz
STORAGE TYPE / DISK NUMBER / RAID LEVEL: ORACLE PILLAR AX600 r3 - AxiomOne 5.3 / 180 Disks (7.2K RPM) x 512 GB / RAID 5 / QOS : PREMIUM (30 RAID GROUPs)
Test name | Avg Resp. Time ms | Avg IOs/sec | Avg MB/sec |
---|---|---|---|
Max Throughput-100%Read | 3.8216 | 15547.60 | 484.32 |
RealLife-60%Rand-65%Read | 1.8760 | 29324.05 | 229.83 |
Max Throughput-50%Read | 1.8841 | 28894.64 | 909.73 |
Random-8k-70%Read | 1.7808 | 31189.50 | 243.60 |
Update: HBA QLA2462, queue depth = 64. The SAN was running 400 other VMs (100 servers + 300 View desktops) at the same time. CPU usage 1 to 3%.
Enjoy!
Test name | Latency | Avg iops | Avg MBps | cpu load |
---|---|---|---|---|
Max Throughput-100%Read | 2.78 | 21405 | 668 | 0% |
RealLife-60%Rand-65%Read | 10.40 | 3651 | 28 | 0% |
Max Throughput-50%Read | 4.46 | 13429 | 419 | 0% |
Random-8k-70%Read | 10.28 | 3392 | 26 | 1% |
Max Throughput-100%Read | 2.76 | 21509 | 672 | 0% |
RealLife-60%Rand-65%Read | 10.56 | 3655 | 28 | 1% |
Max Throughput-50%Read | 4.39 | 13546 | 423 | 0% |
Random-8k-70%Read | 10.38 | 3419 | 26 | 1% |
Do these results look normal? What do you think?
I've run the test twice with about the same results.
Thanks !
Pretty good numbers. In my opinion they are in a range which you can consider as normal but I would need more info. How are you accessing the storage (iSCSI, NFS...)? How is the link to the storage (1GbE, 10GbE...)? Does your server have a specific storage interface or are you using a software driver?
Cool Thanks
The blade chassis is linked to the IBM DS3512 storage through two 8 Gbps Brocade Fibre Channel switches.
Each FC card has two links to the storage.
The IBM DS3512 is connected to my 3 expansion shelves by SAS cables.
Hope I've given enough details.
Thanks !
Your numbers don't seem bad, but I would expect a bit better performance for the random access heavy tests with 48 15k RPM disks.
Is this ESXi5 U1? By the way, one vCPU should be sufficient for the test VM. Is it using pvscsi?
Is there significant other normal IO going on on the array besides your benchmark, and could you put the test VM on its own LUN?
Also, are you using the round robin path selection policy, and have you tried setting the IOPS switchover parameter to 1?
(See esxcli http://pubs.vmware.com/vsphere-50/index.jsp?topic=%2Fcom.vmware.vcli.examples.doc_50%2Fcli_advanced_...
like "esxcli storage nmp psp roundrobin deviceconfig set --type "iops" --iops 1 --device [device ID goes here]")
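As a sketch of the full sequence (the `naa.` device ID below is a placeholder; substitute your own from `esxcli storage nmp device list`), the script only prints the esxcli invocations so you can review them before running anything on the host:

```shell
# Placeholder device ID -- replace with your LUN's real naa identifier.
DEVICE="naa.xxxxxxxxxxxxxxxx"

# Switch the device to the round robin path selection policy.
RR_CMD="esxcli storage nmp device set --device $DEVICE --psp VMW_PSP_RR"

# Set the IOPS switchover parameter to 1 so the path alternates on every IO.
IOPS_CMD="esxcli storage nmp psp roundrobin deviceconfig set --type=iops --iops=1 --device=$DEVICE"

# Print the commands for review instead of executing them.
echo "$RR_CMD"
echo "$IOPS_CMD"
```

Note the setting is per device, so you would repeat it (or loop) for each LUN you benchmark.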
Test name | Latency | Avg iops | Avg MBps | cpu load |
---|---|---|---|---|
Max Throughput-100%Read | 8.16 | 7235 | 226 | 18% |
RealLife-60%Rand-65%Read | 23.26 | 1839 | 14 | 45% |
Max Throughput-50%Read | 28.30 | 2065 | 64 | 14% |
Random-8k-70%Read | 23.32 | 1848 | 14 | 44% |
The SANs are currently in production, so I don't know if that is throwing my numbers off. Also, we just put in some new Force10 S50 switches, but our latency appears high. What do you guys think?
Josh
Test name | Latency | Avg iops | Avg MBps | cpu load |
---|---|---|---|---|
Max Throughput-100%Read | 0.00 | 20411 | 637 | 2% |
RealLife-60%Rand-65%Read | 1.12 | 392 | 3 | 0% |
Max Throughput-50%Read | 97.01 | 5917 | 184 | 0% |
Random-8k-70%Read | 0.83 | 339 | 2 | 0% |
Old style VMTN communities table:
SERVER TYPE: Win2008 x64 VM running on Proliant DL380 host (part of VSA cluster)
CPU TYPE / NUMBER: 2 vCPUs E5605
HOST TYPE: Proliant DL380 G7
STORAGE TYPE / DISK NUMBER / RAID LEVEL: RAID6 on the P410i array, 8 x SAS 10K RPM drives, and software RAID0 from the VMware VSA appliance (software SAN)
|*TEST NAME*|*Avg Resp. Time ms*|*Avg IOs/sec*|*Avg MB/sec*|*% cpu load*|
|*Max Throughput-100%Read*|0.00|20411|637|2%|
|*RealLife-60%Rand-65%Read*|1.12|392|3|0%|
|*Max Throughput-50%Read*|97.01|5917|184|0%|
|*Random-8k-70%Read*|0.83|339|2|0%|
32GB Test results
Test name | Latency | Avg iops | Avg MBps | cpu load |
---|---|---|---|---|
Max Throughput-100%Read | 0.00 | 21448 | 670 | 2% |
RealLife-60%Rand-65%Read | 1.11 | 387 | 3 | 0% |
Max Throughput-50%Read | 99.00 | 6040 | 188 | 0% |
Random-8k-70%Read | 0.83 | 337 | 2 | 0% |
Old style
SERVER TYPE: Win2008 x64 VM
CPU TYPE / NUMBER: 2 vCPUs E5605
HOST TYPE: Proliant DL380 host (part of VSA cluster)
STORAGE TYPE / DISK NUMBER / RAID LEVEL: RAID6 on the P410i Array
Test name | Latency | Avg iops | Avg MBps | cpu load |
---|---|---|---|---|
Max Throughput-100%Read | 0.00 | 6269 | 195 | 0% |
RealLife-60%Rand-65%Read | 19.90 | 6936 | 54 | 0% |
Max Throughput-50%Read | 76.49 | 4666 | 145 | 0% |
Random-8k-70%Read | 13.38 | 5444 | 42 | 0% |
Hi Chris,
When you are using Openfile are you not concerned about: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=102659... ?
David
davidarnold wrote:
Hi Chris,
When you are using Openfile are you not concerned about: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=102659... ?
David
It was more of a proof of concept, as I was struggling to get the performance I knew should be possible. In reality, we have 10G from a Thecus N8800Pro to our physical (non-VM) server, which is the only one using iSCSI. The ESXi machines only use local storage for (running) VMs.
Not sure if anyone has already posted EqualLogic PS6100XS hybrid array stats, but here goes.
Test name | Latency | Avg iops | Avg MBps | cpu load |
---|---|---|---|---|
Max Throughput-100%Read | 0.00 | 7251 | 226 | 29% |
RealLife-60%Rand-65%Read | 18.22 | 6362 | 49 | 4% |
Max Throughput-50%Read | 129.56 | 7912 | 247 | 31% |
Random-8k-70%Read | 16.30 | 6630 | 51 | 4% |
darking, you have 17 x 600GB + 4 x 400GB SSD, right?
7 x 400GB SSDs and 17 x 600GB 10K SAS drives, running RAID 6 Accelerated.
Around 13TB usable capacity.