jmmarton
Enthusiast

iSCSI, Linux I/O scheduler, VMFS

I've built a SLES10 SP2 server running the iscsitarget package to provide storage for an ESX 3.5 server at our DR site. I'm looking at optimizing the disk that iscsitarget will present to ESX to maximize performance. With SLES, and perhaps other distros as well, there are four I/O schedulers: noop, deadline, anticipatory, and cfq; the default in SLES is cfq. Is there any performance benefit to using a different I/O scheduler for the iSCSI disk, which will be used as one big VMFS datastore?
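For anyone following along: before benchmarking anything, it helps to see which schedulers the kernel offers and which one is active. A minimal sketch, assuming the RAID10 volume shows up as /dev/sda (substitute your real device name):

```shell
#!/bin/sh
# Print the I/O schedulers available for a block device; the active one
# is shown in square brackets, e.g. "noop anticipatory deadline [cfq]".
# "sda" is an assumption -- replace it with the device backing your RAID10.
dev="${1:-sda}"
sched_file="/sys/block/$dev/queue/scheduler"
if [ -r "$sched_file" ]; then
    cat "$sched_file"
else
    echo "cannot read $sched_file (wrong device name?)" >&2
fi
```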

Underlying system:

two dual-core AMD Opteron 2.00GHz CPUs

4GB RAM

12 x 500GB SATA disks in one big RAID10

SLES10 SP2 x86_64

2TB iSCSI disk for ESX

Thanks!

Joe

4 Replies
fejf
Expert

Hi,

Perhaps you should start with the SAN design guide: http://www.vmware.com/pdf/vi3_san_design_deploy.pdf - see page 73, which discusses whether it's better to make one big VMFS or several smaller ones. Typical values are around 15 to 25 virtual machines per VMFS. The right design can be a bigger advantage than the Linux I/O scheduler.

The problem is that it also depends on the I/O load of your VMs, so perhaps the best way is trial and error.

--

There are 10 types of people: those who understand binary, the rest, and those who understand Gray code.
jmmarton
Enthusiast

> perhaps you should start with the SAN design guide: http://www.vmware.com/pdf/vi3_san_design_deploy.pdf - e.g. page 73 where there is something about whether it's better to make one big vmfs or smaller ones. E.g. normal values are around 15 to 25 virtual machines per VMFS. The right design can be a bigger advantage than the Linux IO scheduler.
>
> The problem is that it also depends on the IO load of your VMs... So perhaps the best way is trial and error...

In my case, I won't be running many VMs. A few mission-critical VMs from our main site will be replicated to this location for DR purposes, but we won't actually run them unless we're in DR mode. There will still be a few VMs running in production at this site, but by "a few" I mean half a dozen at most. It's really a small location that doesn't need much storage, but we have this existing box that I can redeploy, so I'm just trying to get an idea of how to optimize it.

So, worst case, if we were in DR mode we might have to run as many as 10-15 VMs, but at that point we'd care more about data accessibility than performance. For performance, then, I'm just looking at optimizing for around 5-6 production VMs running at any one time. The VMs will be a mix of Linux and Windows, mostly Linux.

Joe

mcowger
Immortal

We have generally used the deadline scheduler for disk-intensive applications.

cfq isn't really a good choice, because you don't have a bunch of unrelated tasks competing.

noop obviously isn't a good choice.

anticipatory also isn't a good choice, because reads from an iSCSI VMFS are going to be mostly random.
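Switching schedulers doesn't require a reboot, so it's cheap to try each one under your real load. A sketch, assuming the datastore LUN sits on /dev/sda (adjust the device name, and run as root):

```shell
#!/bin/sh
# Switch a single block device to the deadline scheduler at runtime.
# Takes effect immediately; does not persist across reboots.
# "sda" is an assumption -- use the device backing your iSCSI LUN.
dev="${1:-sda}"
sched_file="/sys/block/$dev/queue/scheduler"
if [ -w "$sched_file" ]; then
    echo deadline > "$sched_file"
    cat "$sched_file"   # "deadline" should now appear in brackets
else
    echo "not writable: $sched_file (need root, or wrong device?)" >&2
fi
```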

--Matt VCDX #52 blog.cowger.us
danpritts
Contributor

Red Hat suggests use of "noop" for virtualized guests:

http://kbase.redhat.com/faq/docs/DOC-5428

This leaves I/O scheduling to ESX and/or your storage. Your guest knows very little about the actual disk layout; it's many layers of virtualization deep, so it makes little sense to have the guest try to optimize in this situation.

It probably won't hurt anything, but it might; and in any case it's a waste of time for the guest to do reordering that VMware or the storage controller is going to redo anyway.
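If you do settle on noop, you'd want it to survive a reboot rather than re-applying it by hand. On kernels of this era the usual approach is the elevator= boot parameter. A sketch of the relevant GRUB line (the root= value is a placeholder for your own):

```
# /boot/grub/menu.lst -- append elevator=noop to the kernel line to make
# noop the default I/O scheduler for all block devices at boot:
kernel /boot/vmlinuz root=/dev/sda2 elevator=noop
```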
