I recently purchased a 3PAR F200 with 16 FC drives and three ProLiant DL380 G6 servers (X5570 CPUs, 40GB RAM each), each with two 81Q HBAs connected to redundant QLogic 5800v switches over 50/125 cables. The hosts are running vSphere 4.0 Update 1.
I ran some Iometer tests using a file I found on this site and was curious whether I was headed in the right direction performance-wise. The numbers look pretty good to me, but this is all new to me, and I'm sure there are configuration changes that could improve performance, and possibly other tests I could run. I've pretty much followed the 3PAR Implementation Guides, the VMware Cookbook, and other VMware documentation.
One question I do have about my configuration: everything is running at 8Gb/s except the links between the FC switches and the F200, which run at 4Gb/s. Should I lock the HBAs in the servers to 4Gb/s so everything runs at the same speed? Does running mixed speeds create a buffering problem on the switches?
There's very little you need to do (or can do) with a 3PAR that will improve it - they are pretty great out of the box. Those are great numbers for 16 drives.
No need to lock the HBAs at a specific speed - unless you are completely overloading your switches overall, you'll be fine.
VCP, vExpert, Unix Geek
The numbers will depend on queue depth and test file size (hence cache hit ratio). Looking at the 8K random 70% read test, run this with 128 outstanding IOs with a large test file (say 30GB) with 1 minute ramp and 10 mins run time to get a number close to the real-life throughput in IOPS terms.
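To see why the outstanding-IO count matters so much, Little's Law ties queue depth, average latency, and IOPS together. A quick sketch (the latency figure is purely hypothetical, just to show the relationship):

```python
# Little's Law for storage: IOPS = outstanding IOs / average latency (seconds).
# The latency value below is a made-up example, not a measured number.

def expected_iops(outstanding_ios, avg_latency_ms):
    """Steady-state throughput implied by a given queue depth and latency."""
    return outstanding_ios / (avg_latency_ms / 1000.0)

# With 128 outstanding IOs and a hypothetical 10 ms average latency:
print(expected_iops(128, 10.0))  # 12800.0 - the IOPS ceiling at that latency
```

The point is that with only a few outstanding IOs, the array never gets enough concurrent work to show its real throughput, which is why the 128-outstanding-IO run gives a more realistic number.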
But as said, the numbers look pretty good. Possibly the test file is not big enough, since the 50% read test result looks a bit like the writes may have been mostly cached.
Please award points to any useful answer.
I changed the parameters and ran the test again. I haven't changed the queue depth values. In my previous test I used a 60-second ramp time and a 5-minute run time. I ran the test on a second 40GB drive with the file size set to 0 so it fills the entire drive. The results are slightly lower than the previous run. The results are the average of three runs.
I read the 3PAR VMware Implementation Guide a second time and reconfigured my zoning. I have three ESX hosts with redundant HBAs. Port 1 on the hosts and controller 1 on the 3PAR connect to a QLogic 5800v (sanbox1). Port 2 on the ESX hosts and controller 2 on the 3PAR connect to a second QLogic 5800v (sanbox2). I have a Windows 2008 server outside my virtual environment with redundant HBAs connected in the same way: port 1 to sanbox1, port 2 to sanbox2. The 5800v switches are stackable, but they are not connected to each other.
Is this configuration and zoning correct? I've attached files with the zoning config from each switch. I believe I've done the zoning correctly, but a second opinion would be helpful. The Implementation Guide says the supported fabric zoning relationship in a multi-host-to-one-storage-server setup is one where an individual zone is created between each host server HBA and the InServ server port, which is what I did.
Thank you for your help.
Keep this KB handy when dealing with queue depth throttling on 3PAR arrays...
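For reference, on ESX 4.0 with QLogic HBAs like the 81Q, queue depth is adjusted via a qla2xxx module option from the service console - check the KB and your driver version for the exact parameter name before applying this, and note the value 64 below is just an example, not a recommendation:

```
# Set the per-LUN queue depth for QLogic HBAs (example value only)
esxcfg-module -s ql2xmaxqdepth=64 qla2xxx

# The change takes effect after a host reboot
```

This is only worth touching if the array is reporting queue-full conditions; otherwise the defaults are usually fine.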
VCP 3&4, MCTS(Hyper-V), SNIA SCP.
Please award points, if helpful