Hello everybody,
the old thread has become sooooo looooong - therefore I decided (after a discussion with our moderator oreeh - thanks, Oliver) to start a new thread here.
Oliver will add a few links between the old thread and the new one and then he will close the old thread.
Thanks for joining in.
Reg
Christian
Both VMs have the same configuration except for jumbo frames.
Both are on the same 500GB LUN; that LUN is connected through 2 dedicated GbE connections in a vSwitch.
Both VMs have 2 NICs for iSCSI traffic (MPIO round robin); each NIC is on a separate vSwitch, each of which also has 2 GbE connections.
All of that is connected to 2 stacked PowerConnect 6224 switches.
At the time there was almost no load from the rest of the servers.
Sweet Results.. What is the Net Bandwidth?
I'm trying to obtain some benchmarks on accessing our HP EVA5000 and EVA6000 SAN using IOmeter. Could someone please specify what parameters need to be set to get an accurate measurement of throughput on the HBA cards of my ESX & VCB servers?
On the VCB server I want to test HBA throughput to the EVA5000 LUN (NTFS), which is also my VM snapshot staging area.
What do I set for Access Specification?
How long should I run the test for?
Thanks
This is the standard test configuration for this forum: http://www.mez.co.uk/OpenPerformanceTest.icf
Just open it in IOmeter.
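For anyone who wants to know what's in that .icf before opening it: the four access specifications correspond roughly to the parameters below. The block sizes are my assumption, inferred from the test names and from numbers posted in this thread (e.g. ~16000 IOs/sec at ~500 MB/sec only works out with 32 KB transfers), so verify against the file itself.

# Rough summary of the four IOmeter access specs used in this thread.
# Block sizes are assumed/inferred, not copied from the .icf - check the file.
ACCESS_SPECS = {
    "Max Throughput-100%Read":  {"block_kb": 32, "read_pct": 100, "random_pct": 0},
    "RealLife-60%Rand-65%Read": {"block_kb": 8,  "read_pct": 65,  "random_pct": 60},
    "Max Throughput-50%Read":   {"block_kb": 32, "read_pct": 50,  "random_pct": 0},
    "Random-8k-70%Read":        {"block_kb": 8,  "read_pct": 70,  "random_pct": 100},
}

def mb_per_sec(iops, block_kb):
    # Convert an IOmeter "IOs/sec" figure into MB/sec for a given block size.
    return iops * block_kb / 1024.0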
TABLE SAMPLE
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
TABLE of RESULTS
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
SERVER TYPE: Phys Windows Server 2003
CPU TYPE / NUMBER: AMD Opteron 2.4 GHz / 1 CPU, dual-core
HOST TYPE: HP BL465c G1, 4GB RAM
SAN TYPE: HP EVA 4100 / DISKS: 4GB FATA 500GB 7200rpm / RAID LEVEL: RAID 5 / 12 Disks / QLogic QMH2462 Fiber
##################################################################################
TEST NAME--...........Av. Resp. Time ms.........Av. IOs/sec.........Av. MB/sec
##################################################################################
Max Throughput-100%Read........______3____..........__16016___.........___500____
RealLife-60%Rand-65%Read......._____35____..........___1029___........._____8____
Max Throughput-50%Read........._____54____..........____903___.........____28____
Random-8k-70%Read.............._____36____..........___1045___........._____6____
EXCEPTIONS:
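A quick note on reading the three result columns: the MB/sec column should follow from the IOs/sec column and the block size of the access spec. A sanity check on the sample above, assuming the 32 KB sequential block size mentioned earlier:

# 16016 IOs/sec at an assumed 32 KB block size:
print(16016 * 32 / 1024)   # ~500 MB/sec, matching the Max Throughput-100%Read row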
Greetings. Just thought I'd post my Dell MD3000i stats. I'm running dual controllers connected to stacked ProCurve 2510G gigabit switches. No jumbo frames end to end.
SERVER TYPE: VM (Windows 2008 x32 Datacenter Edition)
CPU TYPE / NUMBER: 2 vCPU
HOST TYPE: Dell PE1950 III, 16GB RAM; 2x XEON 51xx, 2.5 GHz, DC
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Dell PV MD3000i SATA 500 GB RAID5 7+1
##################################################################################
TEST NAME--...........Av. Resp. Time ms.........Av. IOs/sec.........Av. MB/sec
##################################################################################
Max Throughput-100%Read........__17.37___.........._3399.37___.........__106.23____
RealLife-60%Rand-65%Read......_71.18__..........___669.21___.........____5.22____
Max Throughput-50%Read..........__15.92____.........._3725.54____........._116.42___
Random-8k-70%Read.................__76.57____.........._691.60___.........____5.40____
EXCEPTIONS: CPU Util.-XX%;
##################################################################################
Some results for a straight-out-of-the-box PS6000XV. I'm not sure what to make of this yet. Max throughput numbers are much lower than I've seen in other tests. Performance of R10 and R50 is almost identical. Real & random tests show 20% improvement when going from RAID 6 to RAID 50.
SERVER TYPE: PHYS
CPU TYPE / NUMBER: CPU / 2
HOST TYPE: DL380 G3, 4GB RAM; 2X XEON 3.20 GHZ
STORAGE TYPE / DISK NUMBER / RAID LEVEL: PS6000XV / 14+2 DISK (15K SAS) / R10
NOTES: 2 NIC, MS iSCSI, no-jumbo, flowcontrol on
##################################################################################
TEST NAME--...........Av. Resp. Time ms.........Av. IOs/sec.........Av. MB/sec
**UPDATE**
Here are the updated results after a few adjustments. The previous tests were run while RAID verification was still running.
Some findings from my tests:
- flow control slightly decreased sequential throughput
- jumbo frames slightly decreased performance while lowering CPU utilization
- MPIO round robin increased sequential throughput
- NTFS 64k allocation size increased sequential throughput while slightly decreasing random throughput (see the alignment sketch below)
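For anyone reproducing the NTFS 64k setting: the point of aligning is simply that the partition offset (and the allocation unit) should be an exact multiple of the array stripe. A minimal sketch of that check; the offsets below are made-up example values, not taken from my config:

def is_aligned(offset_bytes, unit_bytes):
    # A partition offset is aligned when it is an exact multiple of the
    # allocation unit / stripe size you care about.
    return offset_bytes % unit_bytes == 0

# Hypothetical examples: a 1 MB starting offset is aligned for a 64 KB unit,
# the legacy 63-sector (31.5 KB) offset is not.
print(is_aligned(1024 * 1024, 64 * 1024))   # True
print(is_aligned(63 * 512, 64 * 1024))      # False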
SERVER TYPE: PHYS
CPU TYPE / NUMBER: CPU / 2
HOST TYPE: DL380 G3, 4GB RAM; 2X XEON 3.20 GHZ
STORAGE TYPE / DISK NUMBER / RAID LEVEL: PS6000XV / 14+2 DISK (15K SAS) / R50
NOTES: 2 NIC, MS iSCSI, no-jumbo, flowcontrol off, ntfs aligned w/ 64k alloc, mpio-rr
##################################################################################
TEST NAME--...........Av. Resp. Time ms.........Av. IOs/sec.........Av. MB/sec
I will leave it up to christianZ to make the new template first. Maybe if I run into unemployment, I will consider taking on the task.
Wow, that's funny - it's ALWAYS a great idea UNTIL it becomes your responsibility, then all of a sudden it's not a good idea any more. Amazing.
So if Christian were to make the template and he did all the work, then he can take credit for the idea too, right?
What does it take to make the template? Maybe I will do it. If it's worth doing, then what's the problem?
Here's our report. It seems the RealLife-60%Rand-65%Read result is not very good; if anybody has comments I would love to hear them.
SERVER TYPE: VM
CPU TYPE / NUMBER: VCPU / 1
HOST TYPE: IBM HS21XM, 32GB RAM; 4x XEON 5300, 2.66 GHz, DC
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Netapp 3140 / 14+2 Disks / Raid6-DP
##################################################################################
TEST NAME (FC)--......Av. Resp. Time ms.........Av. IOs/sec.........Av. MB/sec
##################################################################################
Max Throughput-100%Read........____4.1_____..........____12003.6_______.........___375.1_______
RealLife-60%Rand-65%Read......___12.3_______..........___502.0_______.........____3.92______
Max Throughput-50%Read..........___.9_______..........___2514.8_______.........____78.5______
Random-8k-70%Read.................__4.0________..........__769.6________........._6.01________
EXCEPTIONS: CPU Util.-XX%;
###############################################################################
David Strebel
ablej,
Can you run another test from a physical machine?
The following information would also be useful:
jumbo frame: on or off
san connectivity: sw iscsi, hw iscsi, FC or NFS
test disk: vmfs, raw, or guest connection
This is FC using a VMFS LUN. I can't currently run a test on a physical box; we don't have any physical host connected to the SAN.
The latency is low and read throughput is good. 50% Read takes a big hit and the RealLife and Random tests are just awful. This indicates that the SAN can't keep up with the writes.
Was the system under heavy load from other hosts when you performed this test? I'm not too familiar with NetApp, but is it possible that it was running some cleanup/defrag/verification jobs in the background? I also heard that performance can degrade as the aggregate gets close to full capacity.
Maybe someone with better knowledge of NetApp can contribute.
How many VMs are you running on the same LUN? Any real hard-hitting VMs? It sometimes only takes one. It could be SCSI reservation issues. Being over-cautious, I tend to put really hard-hitting VMs on a dedicated LUN. In pre-implementation testing we saw somewhat similar numbers when we put certain VMs on the same LUN with other certain VMs.
Our testing on all this was with a completely different SAN, so it may have no relation to your issue.
If you want to view the performance as seen from the SAN, vmktree can provide you with IO/s and other stats for your LUNs as seen from ONTAP.
Lars
Hello, here are my results for a Nexenta storage appliance (Supermicro storage server) with Intel X25-E SSDs.
I have trouble reaching better sequential results - any hints on MPIO configuration are welcome.
The disks are doing 600 MB/s sequential internally, so a network misconfiguration is the likely culprit.
The random numbers are very nice, as expected from SSDs.
SERVER TYPE: PHYS
CPU TYPE / NUMBER: CPU / 2
HOST TYPE: DL380 G4, 8GB RAM; 2X old XEON 3.60 GHZ
STORAGE TYPE / DISK NUMBER / RAID LEVEL: Nexenta 1.1.7 COMSTAR, Supermicro storage server with SAS backplane, LSI1068, 3x Intel X25-E SSDs in RAID-Z1 (RAID 5), 8GB RAM, 1x Xeon 2.5GHz QC
NOTES: 2 Intel onboard 1Gbit NICs, MS iSCSI, jumbo frames, flow control ON, NTFS aligned at 64, default allocation size, MSFT MPIO on the same subnet (probably a wrong config)
##################################################################################
TEST NAME--...........Av. Resp. Time ms.........Av. IOs/sec.........Av. MB/sec
##################################################################################
Max Throughput-100%Read........____17.7____..........___3344____.........___104_____ CPU 20.65
RealLife-60%Rand-65%Read.......___9.49____..........___5732____.........___44.8___ CPU 17.8
Max Throughput-50%Read.........____10.64____..........___5148____.........___160_____ CPU 26.6
Random-8k-70%Read..............___13.02____..........___4107____.........___32.08___ CPU 14.31
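A rough ceiling calculation for why the sequential numbers can't get anywhere near the 600 MB/s the disks do internally: two 1 GbE links top out around 220 MB/s for iSCSI even with MPIO round robin spreading the load perfectly (the ~12% protocol overhead factor is my estimate, not a measured figure):

# Rough iSCSI sequential ceiling over N gigabit links.
# 1 Gbit/s = 125 MB/s raw; efficiency is an assumed ~12% TCP/iSCSI overhead.
def iscsi_ceiling_mb_s(nics, efficiency=0.88):
    return nics * 125 * efficiency

print(iscsi_ceiling_mb_s(2))   # ~220 MB/s best case

The ~104 MB/s on the 100% read test is roughly one link's worth, which fits the suspicion above that MPIO round robin on a single subnet isn't actually spreading reads across both NICs.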
Impressive results on a new Nehalem-based server: 45% improvement in RealLife & Random over a DL380 G5 with the same disk configuration, and 100% improvement in Max Throughput-50%Read. Most likely thanks to DDR3.
SERVER TYPE: PHYS (X5560)
CPU TYPE / NUMBER: CPU / 2
HOST TYPE: DL380 G6, 36GB RAM; 2X XEON 5560, 2.80 Ghz QC
STORAGE TYPE / DISK NUMBER / RAID LEVEL: DAS (Smart Array P410, 512MB BBWC) / 8 Disks (10K SAS) / R10
##################################################################################
TEST NAME--...........Av. Resp. Time ms.........Av. IOs/sec.........Av. MB/sec
Hi there,
I have done two tests, one on a physical server and one on a VM. Any comments? Is there anything that is very "unexpected"? It seems the Average Response Time is much higher compared with other results people posted here. I may run other tests if you are interested. Thanks.
One comparison on my DG1 (16x500G, 7200 rpm FATA disks)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
TABLE of RESULTS - Physical Machine
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
SERVER TYPE: PHYS.
CPU TYPE / NUMBER: 1 (dual core) Intel Xeon 3.4GHz
HOST TYPE: HP DL360 G4, 4GB RAM
STORAGE TYPE / DISK NUMBER / RAID LEVEL: HP SAN EVA4000 / 16x500G Disks FATA 7200 rpm / RAID5
##################################################################################
TEST NAME--...........Av. Resp. Time ms.........Av. IOs/sec.........Av. MB/sec
##################################################################################
Max Throughput-100%Read........___4.86______..........___11678.58_______.........___364.95_______
RealLife-60%Rand-65%Read......___41.24_______..........____499.07______.........___3.89_______
Max Throughput-50%Read..........___17.59_______..........___498.37_______.........__15.57________
Random-8k-70%Read.................___47.65_______..........___500.71_______.........____3.91______
##################################################################################
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
TABLE of RESULTS -Virtual Machine
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
SERVER TYPE: VM on ESX3.5.
CPU TYPE / NUMBER: 1 vCPU
HOST TYPE: HP DL360 G5, 2GB RAM
STORAGE TYPE / DISK NUMBER / RAID LEVEL: HP SAN EVA4000 / 16x500G Disks FATA 7200 rpm / RAID5
DRIVE CONNECTION: raw device mapping (RDM)
##################################################################################
TEST NAME--...........Av. Resp. Time ms.........Av. IOs/sec.........Av. MB/sec
##################################################################################
Max Throughput-100%Read........___11.95______..........___4738.54_______.........___148.07_______
RealLife-60%Rand-65%Read......___58.57_______..........____499.91______.........___3.90_______
Max Throughput-50%Read..........___2.17_______..........___2080.48_______.........__65.01________
Random-8k-70%Read.................___68.63_______..........___463.551_______.........____3.62______
Hello,
any chance that you could post RAID 1 numbers for the FC drives?
Regards,
Radim
Sorry, all my SAN disks are configured in RAID 5. There are two disk groups (DG):
1) 16 disks (FATA 7200 rpm)
2) 32 disks (FC 10k rpm)
Thanks,
Eagle
Another comparison on my DG2 (32x146G, 10k rpm FC disks)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
TABLE of RESULTS - Physical Machine
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
SERVER TYPE: PHYS.
CPU TYPE / NUMBER: 1 (dual core) Intel Xeon 3.4GHz
HOST TYPE: HP DL360 G4, 4GB RAM
STORAGE TYPE / DISK NUMBER / RAID LEVEL: HP SAN EVA4000 / 32x146G Disks FC 10k rpm / RAID5
##################################################################################
TEST NAME--...........Av. Resp. Time ms.........Av. IOs/sec.........Av. MB/sec
##################################################################################
Max Throughput-100%Read........___4.88______..........___11704.48_______.........___365.76_______
RealLife-60%Rand-65%Read......___10.08_______..........____500.58______.........___3.91_______
Max Throughput-50%Read..........___6.10_______..........___918.49_______.........__28.70________
Random-8k-70%Read.................___11.98_______..........___502.17_______.........____3.92______
##################################################################################
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
TABLE of RESULTS - Virtual Machine
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
SERVER TYPE: VM on ESX3.5.
CPU TYPE / NUMBER: 1 vCPU
HOST TYPE: HP DL360 G5, 2GB RAM
STORAGE TYPE / DISK NUMBER / RAID LEVEL: HP SAN EVA4000 / 32x146G Disks FC 10k rpm / RAID5
##################################################################################
TEST NAME--...........Av. Resp. Time ms.........Av. IOs/sec.........Av. MB/sec
##################################################################################
Max Throughput-100%Read........___11.38______..........___5066.89_______.........___158.34_______
RealLife-60%Rand-65%Read......___11.48_______..........____501.67______.........___3.91_______
Max Throughput-50%Read..........___1.65_______..........___2177.76_______.........__68.05________
Random-8k-70%Read.................___12.82_______..........___502.07_______.........____3.92______