VMware Cloud Community
SReuter
Contributor

Disk write performance on IBM DS4700 SAN

Hello,

I did some benchmarks on our IBM DS4700 SAN and I am quite disturbed by the "bad" write performance inside the VMs.

But let me first explain my tests:

First I did a benchmark with IOmeter on a physical box (dual Xeon 3 GHz) that is connected to a SAN LUN via 2 x 2 Gbit FC adapters.

I used a 256 KB transfer size for reads and writes with 16 outstanding I/Os and 1 worker thread (500 MB test file).

I ran a 100% read and a 100% write test.
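(For anyone who would rather reproduce the pattern with fio on a Linux box instead of IOmeter, the command below is roughly equivalent. It is only a sketch and not what I actually ran; the test file path is a placeholder and libaio is assumed so that the queue depth actually takes effect.)

fio --name=seqwrite --filename=/mnt/santest/iotest.dat --ioengine=libaio --direct=1 --rw=write --bs=256k --iodepth=16 --numjobs=1 --size=500m

(Use --rw=read for the read pass.)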

Results for the physical box:

100% Read: 390 MB/s

100% Write: 165 MB/s

So these values are even better than those reported in the SPC-2 benchmark for the DS4700 (RAID 5) published by the Storage Performance Council.

Here I am very satisfied.

Then I did the same test in a similarly sized VM with the disk shares setting on high.

The ESX host is a 2-CPU quad-core machine with 32 GB of RAM and 2 x 4 Gbit FC adapters, connected to the same SAN and using the same RAID set (RAID 5) as in the first test. For this test I did not use RDMs, but a normal VMDK for the VM.

Results for the VM:

100% Read: 370 MB/s (hmm... OK, that's fine)

100% Write: 65 MB/s (... any comments on THIS value? ...)

Edit: Actually the write performance is about 100 MB/s (with 16 outstanding I/Os), as my first measurement was taken against a SAN-mirrored disk (mirrored from SAN to SAN). Sorry for that.

Why is the write performance inside the VMs so much slower than on the physical box?

Does the ESX host use both FC adapters in a load-balanced fashion (like the physical W2K3 host did)? Does the problem lie there?

And even then, a single 4 Gbit adapter should easily handle 160 MB/s of writes.

Are there any parameters to tune the ESX host for better write performance inside the VMs?

Or are these results to be expected? I would appreciate your comments on the issue.

8 Replies
kjb007
Immortal

First, was this the only VM on the host when you tested? If it was, then disk shares do not come into play. Shares only matter when there is contention for the resource, so while the setting may not hurt you, it's not helping you either.

Second, ESX will not load balance automatically between paths. There is support for round-robin load balancing in 3.5, but it is only experimental; check here for that: http://www.vmware.com/pdf/vi3_35_25_roundrobin.pdf
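If you want to experiment with it anyway, the policy can be switched per LUN from the service console. The commands below are only a sketch from memory (double-check them against esxcfg-mpath -h and the PDF above), and vmhba1:0:1 is just a placeholder LUN name:

# esxcfg-mpath -l
# esxcfg-mpath --lun=vmhba1:0:1 --policy=rr

The first command lists your LUNs and paths so you can pick the right one; the second sets the experimental round-robin policy on it.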

There is an open thread on the forums for performance on various storage appliances and arrays, check here for that: http://communities.vmware.com/thread/73745

-KjB

vExpert/VCP/VCAP vmwise.com / @vmwise -KjB
PNZForum
Contributor

We have had similar issues with our SAN. Well, we thought it was the SAN; it was in actual fact the IBM server. Certain models of IBM server are not compatible with 2 CPUs in a virtual environment. We removed one CPU and the performance of the SAN and virtual machines increased tenfold. So the moral of the story is: try to stay away from IBM equipment when working in a virtual environment.

SReuter
Contributor

There is an open thread on the forums for performance on various storage appliances and arrays

I studied that thread, but as far as I can see, it does not contain pure 100% write disk performance tests, which is the problem in my case.

The reads are perfectly fine, but the write speed in a VM is 1/3 of that of the physical box.

Dave_Mishchenko
Immortal

What sort of server and FC card are you using, and what do you measure with esxtop while you're running the I/O test?
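For example, a batch capture taken while IOmeter is running would let you look at the disk latency counters afterwards; the interval and iteration count below are arbitrary:

# esxtop -b -d 5 -n 60 > /tmp/iometer_run.csv

In the interactive disk views (d for adapters, plus u/v on 3.5 for devices and VMs), DAVG/cmd, KAVG/cmd and QUED are the columns worth watching during the write test.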

Berniebgf
Enthusiast

I would be cautious about setting up round robin with the DS4700, as it is an active/passive array and will not take kindly to LUN handover from controller to controller.

However, if you are using all 4 host ports (or more on the model 72) in a mesh environment there may be some advantage, but I do not know how you would control round robin for each logical drive within the boundary of each storage processor.

(Does that make sense?)

Also, depending on your HBA card (QLogic or Emulex), be sure to set the correct queue depth...

QLogic:

To set the maximum queue depth, first back up the file /etc/vmware/esx.conf and open it for editing.

The file looks similar to the following example:

/device/002:02.0/class = "0c0400"

/device/002:02.0/devID = "2312"

/device/002:02.0/irq = "19"

/device/002:02.0/name = "QLogic Corp QLA231x/2340 (rev 02)"

/device/002:02.0/options = ""

/device/002:02.0/owner = "vmkernel"

/device/002:02.0/subsysDevID = "027d"

/device/002:02.0/subsysVendor = "1014"

/device/002:02.0/vendor = "1077"

/device/002:02.0/vmkname = "vmhba0"

Find the options line right under the name line and modify it to specify the maximum queue depth, as follows (where nn is the maximum queue depth; 64 in this example):

/device/002:02.0/options = "ql2xmaxqdepth=64"

NOTE: The second character in ql2xmaxqdepth is a lowercase L (l), not the digit one.
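If you prefer not to hand-edit esx.conf, the same option should also work via esxcfg-module. This is just a sketch; the module name varies with the ESX release (qla2300_707 on 3.0.x, qla2300_707_vmw on 3.5, if I remember right), so confirm it first:

# vmkload_mod -l | grep qla
# esxcfg-module -s ql2xmaxqdepth=64 qla2300_707_vmw
# esxcfg-boot -b

Then reboot the host for the new queue depth to take effect.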

Emulex

For a single instance of an Emulex HBA on the system, run the following commands. The example shows the lpfcdd_7xx module; use the appropriate module name for your driver version.

# esxcfg-module -s lpfc0_lun_queue_depth=16 lpfcdd_7xx

# esxcfg-boot -b

In this case, the HBA represented by lpfc0 will have its queue depth set to 16.

For multiple instances of an Emulex HBA on the system, run the following commands:

# esxcfg-module -s "lpfc0_lun_queue_depth=16 lpfc1_lun_queue_depth=16" lpfcdd_7xx

# esxcfg-boot -b

In this case, both HBAs lpfc0 and lpfc1 will have their queue depths set to 16.

Make sure you have set your host type correctly, and try different stripe sizes and RAID types on the DS4700 to see what effect they have.

regards

Bernie

http://sanmelody.blogspot.com

SReuter
Contributor

Sorry, I had to correct a value in my measurements. The VM was writing to a disk that had an active synchronous mirror to a second SAN.

After stopping that mirror, the write performance jumped up to about 100 MB/s.

I was even able to increase the VM's write performance further by reducing the outstanding I/Os from 16 to 8 (in IOmeter).

Then I get about 135 MB/s write, which I would call nearly satisfactory.
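One related knob I came across while reading up on this, but have not tuned myself, so treat this purely as a sketch: once several VMs share the same LUN, ESX also caps the outstanding I/Os per VM with the advanced setting Disk.SchedNumReqOutstanding, which can be read and set from the service console:

# esxcfg-advcfg -g /Disk/SchedNumReqOutstanding
# esxcfg-advcfg -s 32 /Disk/SchedNumReqOutstanding

The usual advice seems to be to keep it in line with the HBA queue depth mentioned above.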

Thanks everyone so far for the good suggestions.

SRENMAN
Contributor

I'd really appreciate finding out which IBM servers don't perform well in the virtual world, from your experience. I am getting ready to deploy a mid-size VI and it's all IBM gear with 2 CPUs and more. My primary ESX hosts are x3650s with dual 3 GHz quad-cores, but I do have some older x346s which I might look at converting to ESX hosts in VirtualCenter. Was it the machines themselves or the processors (quad-core, dual-core, single-core with Hyper-Threading) that you had issues with?

thanks

WilderB
Contributor

SRENMAN, first of all: we have installed ESX 2.0, 2.5, 3.0.2 and 3.5 on more than 500 IBM servers, from single-processor x3200s to 16-processor x3950s, and not in a single one did we have any kind of "virtualization problem" related to SMP or anything else. In fact, some IBM machines post top TPC benchmark results, which is very good for VMware. Not satisfied, I contacted IBM 2nd-level support (I have a friend there) and asked about this 2-CPU problem... no one had ever heard of it. So I can safely say: use IBM in your virtualized environment, even with 8 or more processors... you won't regret it.

About the DS4700 multipath question: there is no active/active multipathing in the DS4xxx class of storage. Only one controller owns a specific LUN, and the other controller will only take it over if the first controller fails. If you set round-robin multipathing in ESX, it will cause repeated AVT transfers on the DS4xxx, stopping ALL I/O, and you will lose access to your storage (at least until the next reboot). The way to balance access across the HBAs is to create more than one LUN, set the preferred path of each one to alternating controllers (one to CTRL A and the other to CTRL B), and tie them together inside ESX (spanning volumes), as sketched below. This way you will see both HBA ports working and both controllers performing cache/disk I/O, which will improve performance.
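To make the balancing part concrete, a couple of service-console commands that go with this approach. Treat them as a sketch only: the vmhbaX:T:L(:P) names are placeholders, the preferred controller of each logical drive is set on the DS4700 side in Storage Manager, and the spanning is normally done with Add Extent in the VI Client rather than on the command line:

# esxcfg-mpath -l
# vmkfstools -Z vmhba2:0:2:1 vmhba1:0:1:1

The first command shows which HBA and controller each LUN is currently reached through, so you can confirm both HBAs carry traffic; the second (if I remember the syntax right) adds the second LUN's VMFS partition as an extent of the first, giving you one spanned volume served by both controllers.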
