VMware Cloud Community
khughes
Virtuoso

Iomega StorCenter ix4-200d NAS Server performance?

I was curious if anyone is running the Iomega StorCenter ix4-200d NAS Server? I'm planning on picking one up for our R&D and DR testing and was wondering how the performance is. Granted, I'm not going to be running production on it, and I'll be using it mainly for proof of concept and testing, but how much can 4 spindles really handle?

  • Kyle

-- Kyle "RParker wrote: I guess I was wrong, everything CAN be virtualized "
qimen
Contributor

qmacker - it really depends. Performance engineering is a sophisticated area and I'm no expert, but I'll try to throw a few thoughts into this:

One of the major benefits of RAID10 is data redundancy due to mirroring, which does not necessarily translate into performance benefits. With 4-drive RAID10, you get 2 mirrored sets that are striped together. Each mirror set is like a virtual disk, so you practically have only half (2) disk spindles to use (with the exception of some types of read where each copy of the mirror can serve reads and thus all four spindles may be serving at the same time). The stripe size plays a very important role in performance, especially during sequential I/Os. Random I/Os tend to hit both virtual disks (mirror sets) more evenly.

First, let's assume your application I/O size is no greater than half of the stripe size. Say the I/O size is 8K and the stripe size is 16K. The first write will be placed on virtual disk 1, the second write on virtual disk 2, and so on. Each write is therefore going to one virtual disk at a time, the same as being written to a simple RAID 1 or a single disk. When it comes time to read that data back, each read is again served from one virtual disk. No performance gain, even though more disk spindles are available. If the I/O size is bigger than half of the stripe size, say 16K, then each write and read operation can use both virtual disks and should see some performance gain.

It gets a little more complicated when the I/O size and stripe size do not align. Let's say the I/O size is now 12K. The first 8K is written to virtual disk 1 and the remaining 4K to virtual disk 2. While that 4K is being written, the next write comes in and its first 4K also goes to virtual disk 2, so disk contention is created, which may very well degrade overall performance.
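To make that concrete, here's a rough toy model in Python. The 8K stripe unit per mirror set and the I/O sizes are just illustration numbers, not anything measured on the ix4:

    # Toy model of a 4-drive RAID 10: two mirrored pairs striped together.
    # Stripe unit and I/O sizes are made-up illustration values.
    STRIPE_UNIT_KB = 8     # data written to one mirror set before moving on
    MIRROR_SETS = 2        # 4 drives -> 2 mirrored pairs

    def mirror_sets_touched(offset_kb, io_kb):
        """Which mirror sets does a sequential I/O starting at offset_kb hit?"""
        first = (offset_kb // STRIPE_UNIT_KB) % MIRROR_SETS
        last = ((offset_kb + io_kb - 1) // STRIPE_UNIT_KB) % MIRROR_SETS
        return {first, last}

    # 8K I/Os each land on a single mirror set -- no per-I/O striping benefit.
    print(mirror_sets_touched(0, 8), mirror_sets_touched(8, 8))   # {0} {1}
    # A 16K I/O spans both mirror sets, so both pairs work on it at once.
    print(mirror_sets_touched(0, 16))                             # {0, 1}
    # Misaligned 12K I/Os straddle a stripe unit, so back-to-back writes
    # overlap on the same mirror set -- the contention described above.
    print(mirror_sets_touched(0, 12), mirror_sets_touched(12, 12))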

The above scenarios describe a perfect world. The metadata consumes disk space too, which might skew alignment, and the disks may be fragmented in places. I'm not saying whether your test results are good or not; it is what it is. I just wanted to throw in my two cents about being careful when comparing performance numbers unless it's really apples-to-apples.

qmacker
Contributor

REVISED REVIEW:

Okay, I think I'm going to have to backtrack on my previous review a bit. Notwithstanding the "HD Tune" test results, I think in some ways they're not truly representative of VM "activity" or I/O traffic.

I copied my SBS 2008 server over last night. The 160GB datastore-to-datastore copy (DAS to NAS/NFS) took about 3 or 4 hours. I also copied over my Windows Server 2008 TS and my Windows 2003 TS, as well as our trusty old XP VM. So, 4 VMs...

I booted up the SBS 2008 first. (In case you don't know, SBS 2008 is 64-bit Windows Server 2008 + Exchange 2007 + a bunch of other stuff.) I only waited a minute or so, maybe not even that long, before I fired up the Windows 2008 TS and the 2003 TS simultaneously. They were all ready to log on in about a minute, with the earlier-started SBS 2008 just pipping the other two to the Ctrl-Alt-Delete logon screen.

To be honest, they're running fine! I can't see any difference from running direct-attached. I fired up my Outlook, sent a bunch of large attachments through Exchange, and at the same time opened QuickBooks 2007 (a real dog of a program to start up under the best of circumstances) in two sessions on the 2003 TS... On the same TS I opened Outlook/Exchange, Office 2007 (Word, Excel) and Firefox in one session and IE in another session... no problems. Fired up a new session on the 2008 TS. No slowness there either.

I'm going to run like this for a couple of weeks. See how we get on. I'll keep you all updated. This is looking better!

P.S. qimen - Do you think I might have been better off leaving it as RAID-5 out of the box? I figured RAID-10 would be a bit faster, or maybe around the same. I just feel that if you're running all these VMs, you've got better redundancy with RAID-10. If you lose a disk, you should have no performance hit - right? With RAID-5, you'll take a heck of a performance hit if you lose 1 disk out of 4 in the array. Correct me if I'm wrong. Thanks so much for your input. -- qmacker.

qimen
Contributor

qmacker, RAID 5 would have been the most economical configuration for you, but you said you don't really care about usable capacity. I think RAID 10 does give you better data protection, as you said, in case of disk failure. Performance-wise, I doubt there is much difference between RAID 5 and RAID 10 with these SATA drives. In general, RAID 5 might give you a slight performance gain on reads, but on the other hand a performance loss on writes due to the parity calculation. Again, I don't believe there is a significant difference either way. You can find a lot of articles online comparing these two RAID types. Given that you are more sensitive to data protection, I think you made the right choice with RAID 10.
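To put very rough numbers on the write penalty, here's a back-of-the-envelope sketch. The per-disk IOPS figure and the 70/30 read/write mix are guessed typical-SATA values, not ix4-200d measurements, and it uses the textbook penalties (2 backend writes per logical write for RAID 10, 4 for RAID 5):

    # Back-of-the-envelope RAID 10 vs RAID 5 comparison for 4 disks.
    # Per-disk IOPS and the read/write mix are assumed example values.
    DISKS = 4
    DISK_IOPS = 75                              # rough 7200rpm SATA figure
    WRITE_PENALTY = {"RAID 10": 2, "RAID 5": 4} # textbook write penalties

    def effective_iops(level, read_fraction):
        """Host-visible IOPS once the backend write penalty is accounted for."""
        raw = DISKS * DISK_IOPS
        write_fraction = 1.0 - read_fraction
        return raw / (read_fraction + write_fraction * WRITE_PENALTY[level])

    for level in ("RAID 10", "RAID 5"):
        print(level, round(effective_iops(level, read_fraction=0.7)),
              "IOPS at a 70/30 read/write mix")

With a more read-heavy mix the two come out much closer, which matches the "not much difference either way" point above.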

Powermna
Contributor

This thread really sounds promising. qmacker - can you post an update on how this setup is performing?

khughes
Virtuoso

I've had the results for a couple of days now but hadn't sat down to type them out, so here they are. Again, just a disclaimer: these tests were run over a 100Mb/Full network, so they are most likely skewed, but they give a decent comparison of NFS vs. iSCSI and of the performance of the device right out of the box.

ix4-200d Performance

Hope this helps out some people. I've been using it and haven't seen too much of a performance hit when running many VMs on it during DR restore tests.

  • Kyle

-- Kyle "RParker wrote: I guess I was wrong, everything CAN be virtualized "
dave_hood
Contributor

Here's hoping qmacker comes back with some more news...

I am in exactly the same boat as he is, along with a lot of folk here, trying to find a cheap(ish!) iSCSI-based SAN to run a typical SBS-type setup, with a couple of Windows servers, 30 users, Exchange, etc.

The IOMEGA looks like it may fit the bill nicely.

Cheers

Dave

khughes
Virtuoso

Currently, on my 100Mb/Full network, I have two domain controllers, a file server, a Citrix Presentation Server, a Citrix publishing server and a utility server running on the NAS, and later this week I'll be restoring some of our databases onto it. Those are mixed across iSCSI LUNs and NFS exports. I would say that if you run that on a gigabit network you should have no issues in an SMB environment.

  • Kyle

-- Kyle "RParker wrote: I guess I was wrong, everything CAN be virtualized "
Powermna
Contributor

Great. How many users do you have in this environment?

awliste
Enthusiast

Been lurking on this thread for a while, figured I'd offer some insight to assist others. I've learned much by watching so many others here, and I want to say thanks to the previous posters on this thread.

Last night, I finally dialed our StorCenter in. In the spirit of 'just to see what will happen' and using a DSL VM build and a minimal XP Pro build, I put the spurs to it. Long and short, I was able to deploy a mixed bag of VMs in excellent time onto it without affecting their performance. Just for giggles, I decided to really try to mess it up so I had the StorCenter create an iSCSI 'folder' (its method of creating a LUN) while I was building. No noticeable impact.

Very, very impressed with this little guy. Properly configured, it's definitely money well spent.

My two.

- abe

Integritas! Abe Lister Just some guy that loves to virtualize ============================== Ain't gonna lie. I like points. If what I'm saying is something useful to you, consider sliding me some points for it!
Julian2007
Contributor

Hi, qmacker, how are you getting on? Been reading with interest just now...

Cheers

Julian.

ItimKevinW
Contributor

OK, first post, and I hope this is the right place for it :)

I'm new to both vSphere and setting up iSCSI storage on it. I can usually work things out without too much trouble and a little googling, but on this occasion I'm stumped.

My issue is that I have created a 2.2TB iSCSI drive on an ix4-200d StorCenter and have configured the ESX server to use its software iSCSI initiator to connect to this storage device. I can see the 2.2TB space from the ESX server, however when I try to create a datastore in this space from the ESX server I can only create one with a maximum size of 225GB. I can't work out what I've done wrong, so any guidance would be appreciated.

Powermna
Contributor

Hi there,

Use an 8 MB block size when creating the VMFS datastore. That will give you up to 2 TB.
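As far as I know, the block size only raises the per-file (VMDK) ceiling; the commonly cited VMFS-3 / vSphere 4 figures work out as sketched below (these are the usual published numbers, not anything specific to the ix4). There is also a separate roughly-2TB-per-LUN limit in vSphere 4, which is probably why a 2.2TB target shows up with such an odd size.

    # Sketch of the commonly cited VMFS-3 (vSphere 4) block-size limits:
    # the maximum file (VMDK) size is roughly block_size * 256K blocks,
    # minus 512 bytes.  Illustrative only -- check the vSphere docs.
    for block_size_mb in (1, 2, 4, 8):
        max_file_gb = block_size_mb * 256   # 1 MB -> 256 GB ... 8 MB -> ~2 TB
        print(f"{block_size_mb} MB block size -> ~{max_file_gb} GB max file size")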

ItimKevinW
Contributor

Hi, thanks for the prompt reply, but I've tried that and it still insists that 225GB is the max size I can have :( Is it possible that by (probably stupidly) creating an iSCSI space of 2.2TB I've hit some sort of limit imposed by vSphere?

awliste
Enthusiast

Didn't have this problem with mine.

Assuming you made iSCSI targets on the StorCenter? It takes a while.

(Update: Sorry man - read that too fast - keep your iSCSI targets to 2TB or less. vSphere 4's VMFS can't use more than that per LUN anyway, and smaller targets are a dang good idea for performance too.)

Also, you could split it and make both an iSCSI target and an NFS target. I've done this to ours and am very impressed with its performance - assuming NFS works for you?

Don't forget to enable jumbo frames as well - the ix4 supports them. Hopefully your switch fabric does as well.

Good luck!

Integritas!

Abe Lister

Just some guy that loves to virtualize

==============================

Ain't gonna lie. I like points. If what I'm saying is something useful to you, consider sliding me some points for it!

ItimKevinW
Contributor

Yes, the 2.2TB target was created on the StorCenter, and yes, it did take a while (all night, to be exact). Jumbo frames are enabled, but they're not much use to me if I can only get a 225GB drive out of a 2.7TB SAN box :(

I think I'm going to start again with a smaller size (probably 500GB) and see if that makes any difference. Thinking about it logically for a mo, one great lump of storage space doesn't make a lot of sense anyway, so redoing it isn't that bad a deal if it solves the problem. But if I still get the size issue I'm gonna be really confused and will be back here looking for some more help :).

awliste
Enthusiast

Don't be discouraged boss. It takes time to learn.

1) stick to jumbo framing. Keep that in your mind - don't strike that one out. It stays.

2) If you're not sure about your storage setup, check here: http://virtualgeek.typepad.com/virtual_geek/2009/09/a-multivendor-post-on-using-iscsi-with-vmware-vs...

==> Great place to start. The iSCSI superfriends podcast a few weeks ago is REALLY informative (vmware communities roundtable podcast)

3) You're going to find that making a 500GB LUN on the thing is equally slow. At least, I did. I split ours into JBOD: two 1.5TB LUNs, with whatever was left as NFS.

4) You won't see the issue arise. I'm pretty sure I know why you got that issue, but I need to research it further to confirm, and I don't have a lot of time today to do so.

Good luck trooper. If you need help on specifics, don't hesitate to DM me.

Regards,

- abe

Integritas!

Abe Lister

Just some guy that loves to virtualize

==============================

Ain't gonna lie. I like points. If what I'm saying is something useful to you, consider sliding me some points for it!

ItimKevinW
Contributor

Nice post, very helpful :)

I've rebuilt the iSCSI drive on the Iomega box as 500GB and all is well with the ESX server now - I have 499.99GB to use :) - so it must have been something to do with making it a 2.2TB iSCSI target that wasn't liked.

Thanks for all the help, and apologies in advance for all the noob questions I'm bound to ask in the future :)

khughes
Virtuoso

A 2.2TB LUN is a very large LUN to create and use. If it were in a production environment it would most likely be a large bottleneck on your network. Breaking your LUNs down into manageable sizes is the best way to go, so I think you're on the right track with 500GB LUNs. When we built out our network we were young and dumb and did 2 x 1.3TB LUNs, and I wish I could change that now. Oh well.

I've been using the ix4-200d for a while now and have done a lot of moving around and testing on it. It's been a great purchase for the price of the device.

  • Kyle

-- Kyle "RParker wrote: I guess I was wrong, everything CAN be virtualized "
qmacker
Contributor

Okay, a sort of "final" report back.

This device has been nicely running: 1 x SBS 2008 Server (5GB RAM), 1 x Windows Server 2008 TS 32-bit (1GB RAM), 1 x Windows Server 2003 R2 TS 32-bit (1GB RAM), 1 XP SP3 VM (512MB RAM).

They are all running off NFS. No performance "hit" that I can really notice, now that I've been running them for a while. They seem just as good as they were on direct-attached storage. I'm sure they are a tiny bit slower if I measured them, but it's really not noticeable. The only thing I don't have is a lot of users simultaneously hitting the machines. I suppose that's the ultimate test. I have decided I will deploy this at a small client first, though - 7 to 10 heavy-ish users - and we'll see how that pans out. I feel confident that it will be fine. This is a great little box.

Now, I've a question, but I'll put it in my next reply to this post...

qmacker
Contributor

I didn't want to mention it above, but I've found a slight problem with using the Iomega ix4-200d with a UPS in a power outage. It may be my misconfiguration, but it's a potential issue. It's certainly not a "fault" with the ix4-200d. Here's the problem:

  • Power returns after a power outage:

  • ESXi server starts up. As you know, this doesn't take very long - maybe a minute.

  • Iomega ix4-200d also starts up: This takes some time - about 2 minutes! This is a problem!! Because...

  • ESXi server goes to boot up the first VM from the NAS - NAS isn't ready yet!

  • All other VM's are not ready yet either, so ESXi skips quickly over them.

  • ESXi gives up trying, after the last one.

  • 2 minutes later: NAS is ready. Too late though...ESXi has already given up.

I've tried changing the "Default Startup Delay" time in the ESXi "Configuration/Virtual Machine Startup/Shutdown" settings, but that only works for delaying SUBSEQUENT startups after the first VM starts. The first VM tries to start as soon as the ESXi server is ready. I can't find a way to delay the startup of the first VM. Am I missing something?

Right now, to get around this, I've put an unneeded XP VM on direct-attached storage and set that to start up first. By the time it starts up, the NAS is ready. Anyone got a better solution? Maybe I should move this question to another forum?
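One other idea I've been toying with (just an untested sketch - the NAS address and VM IDs below are placeholders, and it assumes the host's bundled Python plus the standard ping and vim-cmd tools on the ESXi console): skip the automatic startup list entirely and have a small script wait until the NAS answers pings before powering the VMs on in order.

    # Untested sketch: wait for the NAS to respond, then power VMs on via
    # vim-cmd.  NAS_IP and VM_IDS are placeholders -- list your own IDs
    # with `vim-cmd vmsvc/getallvms` on the ESXi host.
    import subprocess
    import time

    NAS_IP = "192.168.1.50"      # placeholder: the ix4-200d's address
    VM_IDS = ["16", "32", "48"]  # placeholder VM IDs, in desired boot order

    def nas_is_up():
        # Single ping; return code 0 means the NAS answered.
        return subprocess.call(["ping", "-c", "1", NAS_IP]) == 0

    while not nas_is_up():
        time.sleep(15)           # keep waiting while the StorCenter boots

    for vm_id in VM_IDS:
        subprocess.call(["vim-cmd", "vmsvc/power.on", vm_id])
        time.sleep(60)           # crude stagger between power-ons

Even if that's not the cleanest approach, it would at least remove the dependency on that throwaway XP VM living on direct-attached storage.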
