tcimello
Contributor

iSCSI SATA RAID vs Local SATA RAID

Jump to solution

I believe I've figured this out from reading posts, but I'd like someone to confirm. I am running ESXi 3.5 U2 on a Supermicro X7DWN+ server: dual 5410 Xeons, 24 GB RAM, and an E200i RAID controller with two arrays of 4 x 500 GB HDDs in RAID 5 (three live, one spare each). I feel like the I/O performance is not where it should be. I've enabled caching for the drives, and the unit has a battery. From what I've been reading, iSCSI may make this faster because the OS should be more adept at handling the disks than ESXi is. Is this a generally correct assumption? (Same disks, same card, probably a dual-CPU Dell server I've got kicking around as the box.)

If my assumptions are correct, is there open source iSCSI software that someone would hazard to say is good enough to use?

1 Solution

Accepted Solutions
drummonds
Hot Shot

Use Iometer. I wrote an article here on the community to offer a few pointers: Storage System Performance Analysis with Iometer.

More information on my blog and on Twitter: http://vpivot.com http://twitter.com/drummonds

View solution in original post

8 Replies
drummonds
Hot Shot

When you say, "the OS should be more adept at handling the disks than ESXi", you mean Windows' iSCSI driver versus ESX's software iSCSI driver? Before reading the next paragraph, fill me in on why you think that Windows' driver might be more efficient than ESX's.

We've done quite a bit of testing between these two. We call using Windows' iSCSI driver the "in-guest iSCSI" configuration and ESX's iSCSI driver the "software initiator". As things stand today with our current products, it's true that there is a slight improvement in total efficiency when using the in-guest configuration versus ESX's software initiator. This is not so much a comment on the iSCSI driver as on the incredible efficiency of our network drivers. The in-guest approach uses ESX's network stack, while the software initiator uses entirely separate code in the storage stack, and going through ESX's network stack via a guest driver is a heavily optimized path. The results of internal analysis I've seen on the code that will become our next product lead me to believe that there will be parity in efficiency between these two methods in subsequent releases.

Incidentally, note that I'm talking about efficiency here and not necessarily performance (throughput/latency). The overwhelming majority of data from our performance analysis says that with storage, if your accesses fall outside of your disk's or array's cache, then the bottleneck lies in the underlying platters. The small differences in efficiency between the in-guest and software initiator approaches don't dominate the experiment; the number of spindles on the storage does.
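The spindle-count point can be put into rough numbers. This is a back-of-envelope sketch, not anything measured in this thread: the per-disk IOPS figure and the RAID 5 write penalty are common rules of thumb, and the function name is illustrative.

```python
# Back-of-envelope estimate of array IOPS when a random workload misses
# cache and the platters are the bottleneck. Rule-of-thumb assumptions:
# a 7,200 rpm SATA disk handles roughly 80 random IOPS, and each RAID 5
# random write costs ~4 back-end disk operations (read old data, read
# old parity, write new data, write new parity).

SATA_7200_IOPS = 80      # assumed per-spindle random IOPS (rule of thumb)
RAID5_WRITE_PENALTY = 4  # back-end operations per host random write

def raid5_iops(spindles, read_fraction):
    """Approximate host-visible random IOPS for a RAID 5 set."""
    raw = spindles * SATA_7200_IOPS
    # Each host write consumes RAID5_WRITE_PENALTY back-end operations.
    return raw / (read_fraction + (1 - read_fraction) * RAID5_WRITE_PENALTY)

# Three active 500 GB spindles (the hot spare contributes nothing),
# at an assumed 70% read mix:
print(round(raid5_iops(3, 0.7)))  # -> 126
```

With numbers like these, a three-spindle SATA RAID 5 set tops out at roughly a hundred-odd random IOPS no matter which initiator sits in front of it, which is the poster's real constraint.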

More information on my blog and on Twitter: http://vpivot.com http://twitter.com/drummonds
tcimello
Contributor

I can't find the article or post now, but the gist was that ESXi, being stripped to the bone, was not as good at dealing with disk performance as something like OpenFiler. Looking through the posts, articles, etc., it seems that a lot of people have issues with disk performance in ESXi.

drummonds
Hot Shot

One part of my job is assisting customers who are considering signing a license agreement with us and are having performance problems. Often they'll set up their own internal analysis, get a poor number, and blame ESX. As the "new kid on the block", virtualization is often blamed for poor performance.

I've got no idea what's going on in the thread you referenced. But I can show you a variety of resources that showcase ESX performance:

  1. VROOM! article where we hit 100,000 IOPS on a single server (800 MB/s).

  2. Storage protocol paper showing wire speed on every supported protocol.

  3. Hell, even EMC's Exchange solution supported 16,000 mailboxes, which works out to 64 MB/s.

Spend some time perusing our resources and you'll see dozens of examples of ESX performance greatly exceeding the limits quoted in that thread. There are no practical limits on ESX storage performance, but there are a hundred other things that can go wrong and be wrongly blamed on ESX.
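As a quick check that the quoted figures hang together (assuming the usual 1 MB = 1024 KB convention), the implied I/O size and per-mailbox rate can be derived directly:

```python
# Sanity check of the throughput figures quoted above.

# 100,000 IOPS at 800 MB/s implies the per-I/O block size:
iops = 100_000
throughput_mb_s = 800
io_size_kb = throughput_mb_s * 1024 / iops
print(io_size_kb)        # -> 8.192, i.e. ~8 KB per I/O

# 16,000 mailboxes at 64 MB/s implies the per-mailbox rate:
mailboxes = 16_000
exchange_mb_s = 64
per_mailbox_kb_s = exchange_mb_s * 1024 / mailboxes
print(per_mailbox_kb_s)  # -> 4.096 KB/s per mailbox
```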

More information on my blog and on Twitter: http://vpivot.com http://twitter.com/drummonds
tcimello
Contributor

True.

I need concrete numbers though.

I've set up an OpenFiler iSCSI box and configured storage. I've moved a Windows XP VM to it, and now I just need to figure out how to benchmark the VM's disk speed.

drummonds
Hot Shot

Use Iometer. I wrote an article here on the community to offer a few pointers: Storage System Performance Analysis with Iometer.
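For readers without Iometer handy, a rough Python stand-in can give a first number. Everything here (block size, file size, read count) is an arbitrary illustrative choice, and at 16 MiB the test mostly measures the OS cache rather than the disks; a real Iometer run uses far larger files, longer durations, and multiple outstanding I/Os.

```python
# A minimal stand-in for an Iometer-style random-read test: write a
# scratch file, time 4 KiB random reads against it, report IOPS.
import os
import random
import tempfile
import time

BLOCK = 4096        # 4 KiB, a typical small random I/O size
FILE_BLOCKS = 4096  # 16 MiB scratch file (far too small to defeat caches)
READS = 2000        # number of timed random reads

def random_read_iops(path):
    """Time READS random 4 KiB reads from the file and return IOPS."""
    with open(path, "rb") as f:
        start = time.perf_counter()
        for _ in range(READS):
            f.seek(random.randrange(FILE_BLOCKS) * BLOCK)
            f.read(BLOCK)
        elapsed = time.perf_counter() - start
    return READS / elapsed

fd, path = tempfile.mkstemp()
try:
    with os.fdopen(fd, "wb") as f:
        f.write(os.urandom(BLOCK * FILE_BLOCKS))
    print(f"{random_read_iops(path):.0f} IOPS (mostly cached at this size)")
finally:
    os.unlink(path)
```

Run the same script inside the VM on local RAID and on the iSCSI datastore and compare the two numbers; for anything you'd act on, use Iometer as described in the linked article.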

More information on my blog and on Twitter: http://vpivot.com http://twitter.com/drummonds


tcimello
Contributor

My testing showed that on my machine software iSCSI is just over 50% slower than the local RAID storage. Now I'll have to look at some similar configs in the benching thread and see how mine compares.

Thanks!

rlund
Enthusiast

Before I went that route, I'd try benchmarking the iSCSI target with another platform.

I have a blog post about setting up SLES as an iSCSI host.

Roger Lund
Minnesota VMUG leader, blogger, VMware and IT evangelist
My Blog: http://itblog.rogerlund.net & http://www.vbrainstorm.com