Thanks for the link to your document. I know you stated that you completed the tests over a 100Mb network. Any chance you have been able to do it over a gig connection?
Thanks for the help.
If I could convince my company to spend some more money on a gig switch for the R&D network, I would. I have no good reason to ask for the funding at this time, sorry. It sucks; trust me, I wish I had a gig switch there, but I'm using what I've got available to me.
Ya, I can understand that!
I will be placing an order today to purchase mine and will post back with my results. Thanks again.
I've been reading this thread with interest as I have an ix4-200d array.... I wanted to bring up a point that leapt to mind when I read posts about users having issues with 2.25 TB LUNs. No one has yet mentioned the obvious here....
The maximum extent size for ESX 3 or 4 is 2 TB minus 512 bytes. If you define a single LUN larger than that, you will have issues, as folks have discovered. This limit applies to Raw Device Mappings as well and is clearly documented in the Configuration Maximums guide available on the documentation pages for 3.5 and 4. If you want a single VMFS filesystem larger than 2 TB minus 512 bytes, you must define multiple LUNs that each fall under the maximum and create a VMFS filesystem with multiple extents.
I'm told this is not a VMware limitation but rather a limit of the addressing space available under the SCSI-2 protocol. I wouldn't profess to understand why it's there, but I can assure you it is.
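A quick back-of-the-envelope check of where that odd "2 TB minus 512 bytes" number comes from, assuming 512-byte sectors and a 32-bit block-count field (which is the usual explanation for the SCSI-2 era limit):

```python
# With 512-byte sectors, a 32-bit block-count field tops out at
# (2**32 - 1) blocks, which lands 512 bytes short of 2 TiB.
SECTOR = 512
max_blocks = 2**32 - 1            # largest value a 32-bit count can hold
max_bytes = max_blocks * SECTOR

two_tib = 2 * 1024**4             # 2 TiB in bytes
print(max_bytes)                  # 2199023255040
print(two_tib - max_bytes)        # 512 -> exactly "2 TB minus 512 bytes"
```

Which is why a 2.25 TB LUN is over the line while, say, two 1.2 TB LUNs joined as extents are fine.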
I am on paternity leave and expect to be back 15 February 2010.
For urgent matters, please contact the BusinessMann ServiceDesk on tel.: 45168080 or by mail at firstname.lastname@example.org
Kind regards
I received my ix4-200d on Monday and was able to run some baseline testing on the unit using an NFS mount. I purchased the 2TB model (4x500GB) and left the RAID configuration at the default, RAID 5. The tests were run with a "direct connect" 1Gb crossover cable from my whitebox ESX host (dual-core AMD Opteron 2.4GHz with 8GB of RAM) and the VM configured as Win2K8 with 1GB of RAM and 1 vCPU. I used IOMeter to perform the tests with the configuration file provided in the VMware Communities "Open Unofficial Storage Performance Thread" found HERE.
Tests run (Avg IO Response Time and Max IO Response Time were recorded for each):
- MaxThroughput 100% Read
- MaxThroughput 50% Read
- RealLife 60% Rand 65% Read
- Random 8K 70% Read
As satisfied as I am with these numbers for this unit (being used for a home lab), I am going to do some additional testing with jumbo frames enabled.
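For anyone else trying jumbo frames on classic ESX 3.5/4, they have to be enabled per vSwitch and per VMkernel interface from the service console; a sketch (the vSwitch name, port group name, and IP addresses are made up for illustration):

```shell
# Assumed names: vSwitch1, port group "NFS"; IPs are examples.
# Set the vSwitch MTU to 9000 (affects all uplinks on that vSwitch).
esxcfg-vswitch -m 9000 vSwitch1

# Recreate the VMkernel NIC with a 9000-byte MTU (the MTU of an
# existing vmknic cannot be changed in place on ESX 3.5/4).
esxcfg-vmknic -d "NFS"
esxcfg-vmknic -a -i 192.168.10.10 -n 255.255.255.0 -m 9000 "NFS"

# Verify end to end with an unfragmented large ping to the NAS
# (8972 = 9000 minus 28 bytes of IP/ICMP headers).
vmkping -d -s 8972 192.168.10.20
```

The switch (or crossover link) and the NAS both have to support 9000-byte frames as well, or the vmkping above will fail.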
Thx to everyone for a bunch of great information on this NAS.
Question: Even in small networks, it concerns me running a group of servers on one SMB NAS. We are thinking about purchasing VMware Essentials Plus for a small deployment (2-3 physical servers). What would we be able to do for redundancy if we ran two of the StorCenter NAS devices with Essentials Plus? Is it possible to replicate the VMs between the two NAS devices, or even just back them up to both?
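One low-tech idea we're considering for the backup half of that question (not a claim about any StorCenter feature; the hostnames and share paths below are made up): mount both NAS NFS exports on any Linux box and rsync the datastore contents on a schedule. The VMs should be powered off or snapshotted first, since copying live VMDKs gives crash-consistent files at best.

```shell
# Hypothetical hosts/paths: nas1 is the primary, nas2 the backup.
mkdir -p /mnt/nas1 /mnt/nas2
mount -t nfs nas1:/nfs/datastore /mnt/nas1
mount -t nfs nas2:/nfs/backup   /mnt/nas2

# Mirror the datastore; -a preserves permissions and timestamps,
# --delete keeps the copy exact so removed VMs disappear from nas2 too.
rsync -a --delete /mnt/nas1/ /mnt/nas2/
```

It's not real replication (no failover, and only as current as the last run), but it would at least give a second copy on independent hardware.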
Okay, I thought I'd add another post on this device, or devices (I've bought a few of them), a year down the road.
I've changed my opinion of the ix4-200d a bit over the last year. I'd have to say, I don't really trust it. I certainly don't trust it in a production environment. The main reason for this is that they kind of "bug out" every so often. By that I mean:
Messages every so often, saying that the device has "lost" its connection. Here's an example:
Subject: Possible problems with 'ESX-Iomega' device (120)
Content: Ethernet connection is lost on LAN '2'. Device Information: , 18.104.22.168472, Iomega StorCenter ix4-200d, 0KAK34003D
I woke up to 30 messages like this today. When that happens, and it's happened several times, I cannot access the web interface of the device. The only way to get the web interface back - which is the only way to control and configure the device - is to power it off by pressing the power button. Not a very elegant solution. The VMs are unaffected by the initial error, but obviously before I power-cycle the NAS, I have to go into ESX and power all the VMs off, then power off the NAS itself. This requires a visit to the site.
This alone makes me nervous. It would nearly be enough to put me off using it. What does really put me off though, are the other things I've seen happen, to a few of these devices now.
I've bought 3 of them (4, including the one I returned and had replaced - see item 2 below):
The 1st one was for myself, and has always worked fine, apart from the "Ethernet connection is lost" messages, from time to time.
The 2nd one I bought for a client (to run SBS 2008, a TS, a BBY server, and a few other VMs) and it arrived DOA. The replacement has worked fine, except for one time showing the flashing red light, and the web interface has bugged out twice. It also emails me the alarming "Ethernet connection is lost" messages every so often. Additionally, I once got a "disk is degraded" error and could not access the web interface. Manually shutting down the VMs and power cycling the device fixed this (the red light/error went away), but it has since made me anxious about the reliability of these devices (on top of the other issues above and below).
The 3rd one I bought again for myself. I had sent my original (good) device over to a client site to perform an emergency P2V. Their old physical server was dying, and I wanted to do a quick P2V onto a NAS device that I could easily move to our new co-lo facility (I'm getting out of the "on-site" game, and setting up my clients with virtual desktops using "View" running off a "real" expensive NAS, and "real" hard-core servers). We are still in the process of getting this done, and some VMs are still running on the ix4-200d. Just today, I got a bunch of those "Ethernet connection is lost" messages, and I can't access the web interface. The VMs are still running though. (Time to speed up that move to the co-lo).
Anyway, I ordered Number Three as a replacement for the 1st one that went to the client. This one has given me nothing but trouble. When it arrived, I changed the disk configuration to RAID 10. After that, it said "Data is being reconstructed." It would get to 100%, then start the cycle all over again. Oddly enough, I could run a VM on it all of this time. After a week or two, I got fed up looking at the flashing red light and decided to upgrade the firmware. I downloaded the latest revision from the site. That "appeared" to go alright; then it wanted to reconstruct the data again. "Fine, have at it," I thought. This time, it would take AGES (like, 24 hours) to get to 97%. At 97% it would just stop. I couldn't access the web interface - AND MORE SERIOUSLY - I lost the connection to the NFS datastore from my test ESX server. I had to physically unplug the NAS; after it started up again, ESX could see the datastore, and my VM started. However, the "Data is being reconstructed" nonsense started again - and again bugged out at 97%. Then the NAS died, and the red light came on. The device is inaccessible, and a message on the front says that a drive has failed. So much for RAID!
BOTTOM LINE: This device needs to be taken off the VMware HCL. It is NOT SUITABLE for a production environment, and therefore shouldn't be on the HCL. The HCL is something I have trusted up until now, because I always thought VMware were very rigorous in their standards.
I really, really wanted to like this device. It is very cool in so many ways - when it works. It has great media-sharing capabilities and a really nice interface - when the interface is not inaccessible. The performance is also pretty good. But with a NAS, and VMs running on that NAS, you have to - HAVE TO, HAVE TO - be able to depend on it. Sadly, I cannot say that about this device.
So, sorry....but thumbs now firmly down.
qmacker, I wholeheartedly agree with your conclusion.
I hate to dig up a (somewhat) old thread, but I wanted to make sure people knew more facts and alternatives to the buggy Iomega product.
Our company (VAR / MSP) has pushed several of these out and trusted that, because of the "VMware" sticker and the acquisition by EMC, we were going to be OK and this would be a reliable unit. However, we have had nothing but problems: DOA units, and degraded arrays that were ONLY fixed by doing a factory reset (reformat) of the unit. This is UNACCEPTABLE! In most cases we were pushing these out as inexpensive disk backup systems, using vRanger, Veeam, etc. to do VM backups to the units. However, we can NOT rely on these units due to the overwhelming issues that we have had.
We are currently attempting to return units to Iomega, and they are fighting us every step of the way, saying that they don't have enough 'evidence of problems to warrant a unit refund' - even though they have to know full well the issues that these devices have had. We are not impressed with Iomega at all, and it definitely doesn't help EMC to be associated with a company whose products are so bad and that still won't own up to it.
We have been using Synology's DS1010+ and are extremely happy with the performance and feature set that we gain from this product. You can even strap on an extra 5-drive bay unit to increase the capacity. They are VMware Ready (unfortunately, we now tend to discount this label). One thing you will find is that there are many positive reviews of just about every Synology unit, whereas reviews of the Iomega are negative more often than not.
All in all, I wouldn't recommend either unit for what our company considers 'production' use as a VM Datastore, but for the short-term need or for backup, Synology looks to be a really great NAS, and the company is very easy to work with as well.
Thanks David. Sorry to hear about your woes, but kind of glad to know it's not just me!
So, I've abandoned these devices, but like you, I've got a fairly new one that is mostly unused, just sitting here uselessly with a failed array. Let me see if I can at least get Iomega to swap it out. If you actually get a working model, they're kind of useful for "transporting" VMs from site to site (although you could use a laptop with FreeNAS, I suppose).
I have this device now connected to my ESXi 5.0 machine. I am simply using a NIC on the host connected to a private network dedicated to this SAN and this host's connection. I connect with Windows Server 2008 R2's iSCSI initiator.
The thing keeps maxing out at 5.5x MB/sec transfers, period (gigabit card and switch).
It always bursts at first at what I would say is the proper gigabit iSCSI speed, and then about 10 seconds later it stops and drops down. This makes absolutely no sense. I tried everything, including all updates on the ESXi 5.0 host.
What's funny is that all my legacy USB 2 drives that I back up to, which worked fine before I upgraded, also seem to get stuck at 5.5x MB/sec, across multiple different hosts.
I don’t get it.
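For context on how far off that is: a gigabit link is 1000 Mbit/s, i.e. 125 MB/s raw, and typically somewhere around 100-115 MB/s of payload after Ethernet/IP/iSCSI overhead. A quick sanity check on the numbers in this thread:

```python
# Rough sanity check: observed throughput vs. gigabit line rate.
link_mbit = 1000                     # gigabit Ethernet line rate
raw_mb_per_s = link_mbit / 8         # 125 MB/s before any overhead
observed = 5.5                       # MB/s reported after the initial burst

print(raw_mb_per_s)                              # 125.0
print(round(observed / raw_mb_per_s * 100, 1))   # 4.4 (% of line rate)
```

Sitting at roughly 4% of line rate, on iSCSI and USB alike, points at something host-side throttling the writes rather than the network itself.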
I have complained since 5.0 came out that many of my devices (especially USB) are limited to 5 MB/sec, and no one has ever replied. I think I have recently found that if I format drives IN 5.0, it fixes that problem. My Iomega has been pushing out 30 MB/sec for over a year now with absolutely no problems. I don't, however, use it for VMs; I use it as a secondary store for my Veeam backups. I have pumped a LOT of data to this device over the past year and it really has held up to the task. Consider that I have the 8 TB version and 5 TB of it is full of Veeam VM backups done twice a day on 35 servers. I am very happy with it so far. Wish it were a bit faster, but it's not bad. It takes a long time to back up some of my 500 gig NAS servers when I do a full.