VMware Cloud Community
khughes
Virtuoso

Iomega StorCenter ix4-200d NAS Server performance?

I was curious if anyone is running the Iomega StorCenter ix4-200d NAS server. I'm planning on maybe picking one up for our R/D and DR testing, but I was wondering how the performance is. Granted, I'm not going to be running production on it and will use it mainly for proof of concept and testing, but how much can four spindles really handle?

  • Kyle

-- Kyle "RParker wrote: I guess I was wrong, everything CAN be virtualized "
0 Kudos
101 Replies
dave_hood
Contributor

Thanks for getting back to us, Qmacker. I know there are a lot of other folks watching this thread to see how you got on - sounds positive!

One quick question - is there a reason you are using NFS and not iSCSI? Did you try both and find that NFS performs better, or was it something else?

thanks for the info!

Cheers

Dave

0 Kudos
Jasemccarty
Immortal

Dave,

I also have an IX4-200d, and with regard to NFS vs iSCSI... I feel that using NFS doesn't require "partitioning" of drive space the way iSCSI does on the unit.

There appear to be two types of presented "storage": a folder or an iSCSI target. Folders can be used for NFS, CIFS, etc., while iSCSI targets can only be used for iSCSI connections. That being said, since the IX4-200d is a general-purpose SoHo NAS, it is easier to manage using NFS/CIFS/etc. rather than iSCSI.
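
If anyone wants to script the NFS side on the ESX host, here is a minimal sketch - just an illustration in Python wrapping the classic ESX service console esxcfg-nas command, with a placeholder hostname, share path, and datastore label rather than real values:

# Minimal sketch: register one of the ix4-200d's NFS folders as an ESX datastore.
# Assumes the classic ESX service console tool esxcfg-nas is available on the host;
# the hostname, share path, and label below are placeholders, not real values.
import subprocess

NAS_HOST = "ix4-200d.example.local"   # hypothetical NAS hostname or IP
NFS_SHARE = "/nfs/vmstore"            # hypothetical exported folder on the unit
DATASTORE_LABEL = "ix4-nfs01"         # label the datastore will show up under

def add_nfs_datastore(host: str, share: str, label: str) -> None:
    """Add an NFS export as a datastore using esxcfg-nas -a."""
    subprocess.run(
        ["esxcfg-nas", "-a", "-o", host, "-s", share, label],
        check=True,
    )

if __name__ == "__main__":
    add_nfs_datastore(NAS_HOST, NFS_SHARE, DATASTORE_LABEL)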

Cheers,

Jase

Jase McCarty

http://www.jasemccarty.com

Co-Author of VMware ESX Essentials in the Virtual Data Center (ISBN:1420070274) Auerbach

Co-Author of VMware vSphere 4 Administration Instant Reference (ISBN:0470520728) Sybex

Please consider awarding points if this post was helpful or correct

Jase McCarty - @jasemccarty
0 Kudos
dave_hood
Contributor

Hi Jase,

Thanks for the reply - so it is more a management choice than a performance-based one? I thought performance might be better with iSCSI, but perhaps not. Good to know either way.

Cheers

Dave

0 Kudos
qmacker
Contributor

Hi Jase,

I used NFS for ease of use and backup purposes. In my earlier tests I did find that iSCSI was marginally faster. This was my first time out the gate with NAS storage and VMware though, so I decided to stick with NFS for its versatility.

0 Kudos
BrandonJ
Enthusiast

qmacker, I'm a little confused here. I've read over your numbers several times and it looks to me like NFS was quite a bit faster than iSCSI in your tests. Am I missing something?

0 Kudos
erwinrivera
Contributor

Link not found, Eric - could you possibly post the results, please? I am looking at getting this or the Netgear NVX for a home lab.

0 Kudos
Powermna
Contributor

I am unfortunately out sick.

For urgent matters, please contact the BusinessMann ServiceDesk at tel. 45168080 or by email at servicedesk@businessmann.dk

Kind regards

Ibrar Mohammad

0 Kudos
Valley911
Contributor

Not sure if the OP is still following this thread, or if anyone else can comment.

I have gone through the whole thread and appreciate the feedback everyone has provided on this device. I am looking at purchasing this unit for use in my home ESXi lab and am sure the performance will be more than enough. However, one thing I did not see mentioned regarding iSCSI performance: is it possible to team or bond the two 1 GbE links together for better performance? I see it mentioned that it is possible HERE. On my ESXi host I have two free ports I could use to direct-connect the storage and was wondering if that would provide a little better performance.

Thanks in advance,

-Jason

0 Kudos
awliste
Enthusiast

Yes, the ports can be bonded. Let us know how your experience goes!

Integritas!

Abe Lister

Just some guy that loves to virtualize

==============================

Ain't gonna lie. I like points. If what I'm saying is something useful to you, consider sliding me some points for it!

0 Kudos
khughes
Virtuoso

Yes, the two NICs on the unit can be bonded together for a faster connection, or they can be configured for failover. Obviously you would use bonding in a home lab. All the performance stats in the document I created - http://communities.vmware.com/docs/DOC-10925 - were gathered with the two NICs bonded together for faster performance. I'm still finding neat little things about the ix4; really glad I got it for our company R/D lab.
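
For a rough idea of what bonding can buy you in theory, here's a quick back-of-envelope sketch (pure wire-speed arithmetic, ignoring protocol overhead and the disks themselves):

# Back-of-envelope network ceilings for the ix4's connectivity options.
# Theoretical wire speeds only; real NFS/iSCSI throughput will be lower
# once protocol overhead and the four spindles come into play.

def wire_speed_mb(link_mbits: float, links: int = 1) -> float:
    """Convert link speed in megabits/s into an aggregate ceiling in MB/s."""
    return link_mbits * links / 8

print("100 Mb switch:      %6.1f MB/s ceiling" % wire_speed_mb(100))
print("Single 1 GbE link:  %6.1f MB/s ceiling" % wire_speed_mb(1000))
print("Two 1 GbE bonded:   %6.1f MB/s ceiling" % wire_speed_mb(1000, 2))

Keep in mind that a single NFS or iSCSI session generally still travels over one physical link, so bonding mostly helps with multiple simultaneous streams (or failover).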

  • Kyle

-- Kyle "RParker wrote: I guess I was wrong, everything CAN be virtualized "
0 Kudos
Valley911
Contributor

Kyle,

Thanks for the link to your document. I know you stated that you completed the tests over a 100 Mb network. Any chance you have been able to run them over a gigabit connection? :)

Thanks for the help.

-Jason

0 Kudos
khughes
Virtuoso

If I could convince my company to spend some more money on a gigabit switch for the R/D network, I would, but I have no good reason to ask for the funding at the moment, sorry. It sucks, trust me - I wish I had a gigabit switch there, but I'm using what I have available to me. :)

  • Kyle

-- Kyle "RParker wrote: I guess I was wrong, everything CAN be virtualized "
0 Kudos
Valley911
Contributor

Yeah, I can understand that! :)

I will be placing an order today to purchase mine, and I'll post back with my results. Thanks again.

-Jason

0 Kudos
rnourse
Enthusiast

I've been reading this thread with interest as I have an ix4-200d array.... I wanted to bring up a point that leapt to mind when I read posts about users having issues with 2.25 TB LUNs. No one has yet mentioned the obvious here....

The maximum extent size for ESX 3 or 4 is 2.0 TB minus 512 bytes. If you define a single LUN larger than that, you will have issues, as folks have discovered. This limit applies to Raw Device Mappings as well and is clearly documented in the Configuration Maximums guide available on the documentation pages for 3.5 and 4. If you want a single VMFS filesystem larger than 2 TB minus 512 bytes, you must define multiple LUNs that fall under the maximum and create a VMFS filesystem with multiple extents.

I'm told this is not a VMware limitation but rather a limit of the addressing space available under the SCSI-2 protocol. I wouldn't profess to understand why it's there, but I can assure you it is.
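
To put numbers on that, here's a small sketch (assuming a hypothetical 2.25 TB LUN like the ones mentioned earlier in the thread, and binary terabytes) showing why such a LUN has to be split into multiple smaller LUNs/extents:

# Why a 2.25 TB LUN trips the ESX 3/4 extent limit, and how many smaller
# LUNs it would take to build one multi-extent VMFS volume instead.
import math

TB = 1024 ** 4                      # bytes per terabyte (binary TB assumed here)
MAX_EXTENT = 2 * TB - 512           # ESX 3/4 maximum extent size: 2 TB minus 512 bytes

lun_size = int(2.25 * TB)           # hypothetical 2.25 TB LUN from the thread
if lun_size > MAX_EXTENT:
    print("Over the single-extent limit by %d bytes" % (lun_size - MAX_EXTENT))

# Splitting it into smaller LUNs keeps each extent under the limit:
extents_needed = math.ceil(lun_size / MAX_EXTENT)
print("Minimum extents needed: %d (e.g. two LUNs of ~1.125 TB each)" % extents_needed)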

rn

0 Kudos
Powermna
Contributor

I am on paternity leave and expect to be back 15 February 2010.

For urgent matters, please contact the BusinessMann ServiceDesk at tel. 45168080 or by email at servicedesk@businessmann.dk

Kind regards

Ibrar Ashraf

0 Kudos
Valley911
Contributor

Hey All,

I received my ix4-200d on Monday and was able to run some baseline testing on the unit using an NFS mount. I purchased the 2 TB model (4x500 GB) and left the RAID configuration at the default, RAID 5. The tests were run over a "direct connect" 1 Gb crossover cable from my whitebox ESX host (dual-core AMD Opteron 2.4 GHz with 8 GB of RAM), with the VM configured as Win2K8 with 1 GB of RAM and 1 vCPU. I used IOMeter to perform the tests with the configuration file provided in the VMware Communities "Open Unofficial Storage Performance Thread" found HERE.

NFS (IOMeter results)

Test                       | Avg MBps | IOPS    | Avg IO Response Time (ms) | Max IO Response Time (ms) | CPU Utilization (%)
MaxThroughput 100% Read    | 86.81    | 2778.03 | 21.37                     | 521.52                    | 31.47
MaxThroughput 50% Read     | 63.37    | 2027.93 | 29.68                     | 779.72                    | 27.2
RealLife 60% Rand 65% Read | 0.95     | 121.33  | 486.55                    | 5890.4                    | 16.19
Random 8K 70% Read         | 0.88     | 122.85  | 533.37                    | 5908.72                   | 13.96

As satisfied as I am with these numbers for this unit (and it's being used for a home lab), I am going to do some additional testing with jumbo frames enabled.
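
As a sanity check on the table above, throughput and IOPS should tie together through the block size of each access pattern. Here's a minimal sketch; the block sizes are my assumption of how that community IOMeter profile is usually configured (32 KB for the MaxThroughput patterns, 8 KB for the RealLife/Random ones):

# Sanity check: MB/s should roughly equal IOPS x block size.
# Block sizes are assumptions about the community IOMeter profile
# (32 KB sequential patterns, 8 KB random patterns), not measured values.

results = {
    # test name:                  (measured IOPS, assumed block size in KB)
    "MaxThroughput 100% Read":    (2778.03, 32),
    "MaxThroughput 50% Read":     (2027.93, 32),
    "RealLife 60% Rand 65% Read": (121.33, 8),
    "Random 8K 70% Read":         (122.85, 8),
}

for test, (iops, block_kb) in results.items():
    implied_mbps = iops * block_kb / 1024.0   # KB/s -> MB/s
    print("%-28s ~%6.2f MB/s implied" % (test, implied_mbps))

The implied numbers line up closely with the measured Avg MBps column, so the figures look internally consistent.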

-Jason

0 Kudos
maxadmin
Contributor

Thx to everyone for a bunch of great information on this NAS.

Question: even on small networks, it concerns me to run a group of servers on one SMB NAS. We are thinking about purchasing VMware Essentials Plus for a small deployment (2-3 physical servers). What would we be able to do for redundancy if we ran two of the StorCenter NAS devices with Essentials Plus? Is it possible to replicate the VMs between the two NAS units, or even just back them up to both?

0 Kudos
qmacker
Contributor

Okay, I thought I'd add another post on this device, or devices (I've bought a few of them), a year down the road.

I've changed my opinion of the ix4-200d a bit over the last year. I'd have to say, I don't really trust it. I certainly don't trust it in a production environment. The main reason for this is that they kind of "bug out" every so often. By that I mean:

I get messages every so often saying that the device has "lost" its connection. Here's an example:

Subject: Possible problems with 'ESX-Iomega' device (120)

Content: Ethernet connection is lost on LAN '2'. Device Information: , 2.1.9.46472, Iomega StorCenter ix4-200d, 0KAK34003D

I woke up to 30 messages like this today. When that happens, and it's happened several times, I cannot access the web interface of the device. The only way to get the web interface back - which is the only way to control and configure the device - is to power it off by pressing the power button. Not a very elegant solution. The VMs are unaffected by the initial error, but obviously before I power-cycle the NAS, I have to go into ESX and power all the VMs off, then power off the NAS itself. This requires a visit to the site.

This alone makes me nervous. It would nearly be enough to put me off using it. What really does put me off, though, are the other things I've seen happen to a few of these devices now.

I've bought 3 of them (4, including the one I returned and had replaced - see item 2 below):

The 1st one was for myself, and it has always worked fine, apart from the "Ethernet connection is lost" messages from time to time.

The 2nd one I bought for a client (to run SBS 2008, a TS, a BBY server, and a few other VMs), and it arrived DOA. The replacement has worked fine, except for one time showing the flashing red light, and the web interface has bugged out twice. It also emails me the alarming "Ethernet connection is lost" messages every so often. Additionally, I once got a "disk is degraded" error and could not access the web interface. Manually shutting down the VMs and power-cycling the device fixed this (the red light/error went away), but it has since made me anxious about the reliability of these devices (plus the other issues above and below).

The 3rd one I bought again for myself. I had sent my original (good) device over to a client site to perform an emergency P2V. Their old physical server was dying, and I wanted to do a quick P2V onto a NAS device that I could easily move to our new co-lo facility (I'm getting out of the "on-site" game, and setting up my clients with virtual desktops using "View" running off a "real" expensive NAS, and "real" hard-core servers). We are still in the process of getting this done, and some VMs are still running on the ix4-200d. Just today, I got a bunch of those "Ethernet connection is lost" messages, and I can't access the web interface. The VMs are still running though. (Time to speed up that move to the co-lo).

Anyway, I ordered Number Three as a replacement for the 1st one that went to the client. This one has given me nothing but trouble. When it arrived, I changed the disk configuration to RAID 10. After that, it said "Data is being reconstructed." It would get to 100%, then start the cycle all over again. Oddly enough, I could run a VM on it all of this time. After a week or two, I got fed up looking at the flashing red light and decided to upgrade the firmware. I downloaded the latest revision from the site. That "appeared" to go alright; then it wanted to reconstruct the data again. "Fine, have at it," I thought. This time, it would take AGES (like, 24 hours) to get to 97%. At 97% it would just stop. I couldn't access the web interface - AND MORE SERIOUSLY - I lost the connection to the NFS datastore from my test ESX server. I had to physically unplug the NAS; after it started up again, ESX could see the datastore, and my VM started. However, the "Data is being reconstructed" nonsense started again - and again bugged out at 97%. Then the NAS died, and the red light came on. The device is inaccessible, and a message on the front says that a drive has failed. So much for RAID!

BOTTOM LINE: This device needs to be taken off the VMware HCL. It is NOT SUITABLE for a production environment, and therefore shouldn't be on the HCL. The HCL is something I have trusted up until now, because I always thought VMware was very rigorous about its standards.

I really, really wanted to like this device. It is very cool in so many ways - when it works. It has great media-sharing capabilities and a really nice interface - when the interface is accessible. The performance is also pretty good. But with a NAS and VMs running on that NAS, you have to - HAVE TO, HAVE TO - be able to depend on it. And sadly, I cannot say that about this device.

So, sorry....but thumbs now firmly down.

Message was edited by: qmacker

0 Kudos
davidmarkley
Contributor

qmacker, I wholeheartedly agree with your conclusion.

I hate to dig up a (somewhat) old thread, but I wanted to make sure people knew more of the facts, and the alternatives to this buggy Iomega product.

Our company (a VAR/MSP) has pushed several of these out and trusted that, because of the "VMware" sticker and Iomega being acquired by EMC, we were going to be OK and this would be a reliable unit. However, we have had nothing but problems: DOA units, and degraded arrays that were ONLY fixed by doing a factory reset (reformat) of the unit. This is UNACCEPTABLE! In most cases we were pushing these out as inexpensive disk backup targets, using vRanger, Veeam, etc. to do VM backups to the units. However, we can NOT rely on these units due to the overwhelming issues we have had.

We are currently attempting to return units to Iomega, and they are fighting us every step of the way, saying that they don't have enough 'evidence of problems to warrant a unit refund', even though they have to know full well the issues these devices have had. We are not impressed with Iomega at all, and it definitely doesn't help EMC to be associated with a company whose products are this bad, with still no admission on Iomega's part to own up to it.

________________________________________

We have been using Synology's DS1010+ and are extremely happy with the performance and feature set we gain from this product. You can even attach an extra 5-bay expansion unit to increase the capacity. They are VMware Ready (unfortunately, we now tend to discount this label). One thing you will find is that there are many positive reviews of just about every Synology unit, whereas the reviews of the Iomega are more often than not negative.

All in all, I wouldn't recommend either unit for what our company considers 'production' use as a VM datastore, but for short-term needs or for backup, Synology looks to be a really great NAS, and the company is very easy to work with as well.

-David

Message was edited by: davidmarkley

0 Kudos
qmacker
Contributor

Thanks David. Sorry to hear about your woes, but kind of glad to know it's not just me!

So I've abandoned these devices, but like you, I have a fairly new one that is mostly unused, just sitting here uselessly with a failed array. Let me see if I can at least get Iomega to swap it out. If you actually get a working model, they're kind of useful for "transporting" VMs from site to site (although you could use a laptop with FreeNAS, I suppose).

Good luck!

0 Kudos