rstoecker
Contributor

Affordable NAS solution?

I'm looking for a NAS appliance to use for a home lab setup, something under $500 if possible. I saw a post where someone was using a single-drive QNAP, but according to VMware and QNAP it's not supported. I don't want to build a PC with Linux; I'd rather stick with an appliance of some sort. Thanks

31 Replies
vmroyale
Immortal

Hello and welcome to the forums.

Have you checked out OpenFiler?

Good Luck!

Brian Atkinson | vExpert | VMTN Moderator | Author of "VCP5-DCV VMware Certified Professional-Data Center Virtualization on vSphere 5.5 Study Guide: VCP-550" | @vmroyale | http://vmroyale.com
rstoecker
Contributor

Thanks for the link but I'm hoping to find a NAS box that I don't have to build from scratch.

jjkrueger
VMware Employee

I would check out the Iomega StorCenter ix2 - it's on the HCL and should be well within your budget. They come in 1TB and 2TB varieties (both default to a mirrored config, providing just shy of half the on-the-box capacity - gotta love rounding - and can be reconfigured as a JBOD if you need the space). Management is through a web browser.

Performance for a lab is pretty good, even acceptable for a small production environment. There is a rumor that later this year, via a firmware upgrade, they'll get both jumbo frames (which should help performance a bit) and iSCSI target support, so they may not be stuck as NFS-only devices forever.

I replaced my home file server with a pair of these - one dedicated for the ESX boxes in the lab, and the other for general data storage for everything else.

Just my 2 cents,

-jk

TAZ99
Contributor

Can you please post how you set up the Iomega StorCenter IX2 share?

We have enabled NFS and created a share called VMDSK1 on the IX2.

When we try to add Storage to ESXi, what should we put for the Folder?

The only documentation from Iomega shows the mount point should be /nfs/sharename, but we get an error:

Error during the configuration of the host: NFS Error: Unable to Mount filesystem: Unable to connect to NFS server.

We have this working with an older StorCenter (not an IX2), but on that device I was able to specify the mount path on the Iomega itself.

jjkrueger
VMware Employee

When I set up my device, this is what I did:

1. Removed all of the default shared folders in the "Shared Folders" tab.

2. Created a new share (I called mine "vmware"), and did not check the "Enable Security" checkbox.

3. Went to the "Settings" tab, selected "Network Services", then selected "NFS"

4. Checked the "Enable NFS Service" box, and clicked "Apply" (I do not recall if this required restarting the ix2)

5. Verified the configuration of the VMkernel port on my ESX hosts

6. Added a new NFS datastore using the IP address of the ix2 and the folder name "/nfs/vmware"

I've got two of these on two different networks - one private network for my VM storage, one on another network for access to ISOs.

The share name is case sensitive - if you named the share "VMDSK1" on the ix2, you'll need to mount "/nfs/VMDSK1" from ESX - "/nfs/vmdsk1" will not work.
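
For reference, the same mount can also be made from the ESX service console instead of the vSphere Client. A minimal sketch, assuming the ix2 answers at 192.168.1.50 and the share is named "vmware" (substitute your own IP, share name, and datastore label):

# Add the NFS export as a datastore named "ix2-vmware"
esxcfg-nas -a -o 192.168.1.50 -s /nfs/vmware ix2-vmware

# List the configured NAS datastores to confirm the mount
esxcfg-nas -l

On ESXi there's no service console, but I believe the Remote CLI ships an equivalent vicfg-nas command for the same job.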

rstoecker
Contributor

Thanks for the info. I just ordered one of these. I had passed on this unit before because someone posted that performance was very slow and the drives are not user serviceable, but it looks like the best fit for a lab.

jjkrueger
VMware Employee

Performance actually isn't bad in my lab, but I only ever have a small handful of VMs running at a time. It could certainly be quicker, but you really only have a single spindle to write to if you've configured the device as a mirrored set. Be sure not to enable write caching - that did really bad things for performance in my case. It seems rather counter-intuitive, but I've learned to just live with it. :)

So long as you understand where the performance bottleneck is, it's pretty usable. The only thing I think I should have changed is purchasing 1TB models instead of the 2TB model; that way, if I had a bunch of VMs running, I could take advantage of multiple storage devices instead of one. You live and you learn, I guess.

Something that may help performance is the rumored firmware update these devices are supposed to get later this year, adding jumbo frame support (though that only helps if you can run jumbo frames end-to-end in the storage path - switches, ESX hosts, and the NAS). The same rumor says iSCSI target support is coming for the ix2, which will be nice to play with but probably won't change performance much one way or the other.

That said, it's almost perfect for a lab. It's inexpensive, easy to manage, and has a number of great features (SNMP, email notifications, etc.). Performance isn't that of my Win2k8 NFS server with a 4-spindle RAID or a full-on storage system, but it's not supposed to be.

The drives are not user serviceable (under warranty - I'm sure you could tear the device apart and do what you will), but Iomega support has been pretty good to me so far (the ix2 I picked up for a home media server showed up with a dead drive). They sent a replacement quickly and have been very reasonable to deal with.

And the biggest reason I got into this device - it's on the supported compatibility guide. :)

-jk

TAZ99
Contributor

Thanks for the info... we were able to get it to work like that.

However, there is now no security on any of the folders, correct? It appears you can get to the share from Windows and do whatever you want to the files (maybe OK for a lab environment).

Once we turn on security, it no longer works. The documentation from Iomega states that the only security is by IP address (which would be alright for us), but even that doesn't work.

I seem to remember seeing an article about changing the user ID that ESXi uses to mount the share from root to something else with its own password (if anyone knows how to do this, please let me know). I wonder if that would work here - adding a user with the same name and password to the Iomega IX2.

We tried to add the user 'root' to the Iomega, but it said that user already existed.

rstoecker
Contributor

Is there a setting for no root squash?

jjkrueger
VMware Employee

Right - there is no security beyond the permissions ESX places on the files. It also does not appear that CIFS can be disabled.

At this point, I would probably have to defer to Iomega support for a definitive answer on why the IP address security isn't working for you.

I've seen a couple of references to an experimental feature in ESXi that would allow you to change the user that NFS uses, but I can't seem to find any documentation about how or where to change that. I'm not running ESXi here, so I can't do any poking around to find it, either.

Also keep in mind that, for IP storage in production environments, a best practice is to isolate the storage network (at the very least through VLANs or a separate, non-routable IP subnet). That would also mitigate the worry that a Windows machine could just mount the NAS and muck with your VMs' files.
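
For anyone setting that up, here's a rough sketch of the isolation from the ESX service console - the vSwitch name, VLAN ID, and subnet are just placeholders for illustration:

# Create a port group for NFS traffic on vSwitch1 and tag it with VLAN 20
esxcfg-vswitch -A NFS vSwitch1
esxcfg-vswitch -v 20 -p NFS vSwitch1

# Put the VMkernel interface on that port group, in a non-routable storage subnet
esxcfg-vmknic -a -i 192.168.20.11 -n 255.255.255.0 NFS

The ix2 then gets an address in that same subnet, and nothing else needs a route into the storage network.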

-jk

jjkrueger
VMware Employee

By all appearances, root access is allowed by default on the ix2, negating the need for a "no_root_squash" option.
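
For anyone curious what that option actually does: on a generic Linux NFS server, root squash is controlled per export in /etc/exports. The ix2 doesn't expose this file, so the line below is purely illustrative of what an ESX-friendly export looks like elsewhere:

# /etc/exports on a generic Linux NFS server (not something you edit on the ix2)
/nfs/vmware  192.168.20.0/24(rw,no_root_squash,sync)

With no_root_squash in place, requests from root on the ESX host aren't remapped to an anonymous user, which is what lets the VMkernel create and lock files on the export.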

-jk

Dave_Mishchenko
Immortal

> I've seen a couple of references to an experimental feature in ESXi that would allow you to change the user that NFS uses, but I can't seem to find any documentation about how or where to change that. I'm not running ESXi here, so I can't do any poking around to find it, either.

That feature hasn't made it to ESXi yet.

TAZ99
Contributor

Not through the normal user interface.

There is a 'working model' of the actual interface here:

jjkrueger
VMware Employee

That would explain why there's not much talk about it yet! :)

Thanks,

-jk

TAZ99
Contributor

We contacted Iomega support about this already.

Their response was that they only support the StorCenter IX2 on Windows, not on any other (third-party) operating systems, and thus they couldn't help us.

Given that statement, I'm amazed it made it onto VMware's HCL.

sakacc
Enthusiast

TAZ99 - Sorry about that, I'll point Iomega at this thread; that's ABSOLUTELY not the right answer.

BTW - the ix4 is now officially on the HCL as well - it's a 4-drive unit with user-serviceable drives and a more powerful processor.

CotswoldsOli
Contributor

Hi jjkrueger

Could you possibly expand on your comment about buying the 1TB version instead of the 2TB version?

I'm on the verge of buying the 2TB version primarily for use with ESXi, to store ISOs and to back up my PC, but your comment is making me hesitate.

Many thanks for your help

Oli

jjkrueger
VMware Employee

If you're just going to use the device for ISOs and PC backups, the 2TB should be fine (I've got one doing just that).

I've also got a 2TB unit dedicated for VMs. Were I to go on a spending spree and buy gear like this again, I would still buy the 2TB unit for unstructured data, but for VMs, I'd look to the smaller unit. What my previous post essentially means is that I didn't architect my lab environment well. I leapt before I looked.

Unstructured data (ISO images, the contents of your PC, My Documents folders, etc.) is generally written once and read only occasionally. This type of data does not need a high-performance interface on either the front end (network) or the back end (disk). For a PC, a single NAS device with a marginal processor, minimal RAM, a single GigE network interface, and mirrored SATA disks is more than acceptable. If I had to put a number on it, the ix2 could probably support the unstructured data needs of 10-15 PC users.

Virtual machines are a slightly different type of data. In many respects they still share most of the traits of unstructured data, but it's all inside one big VMDK file, which means many I/Os go to or from a single file. So the VMDK file is really a write-many, read-many data structure, and we need somewhat better performance than common unstructured data typically requires. The ix2 can still fit the bill for small numbers of VMs, but due to the limited I/O capacity of the device, you'll hit its I/O ceiling sooner with VMs than with unstructured data.

In other words, 100-200 IOPS (I/O operations per second) at 1Gb/sec is more than enough for a couple of users storing and occasionally retrieving unstructured data, but it may not be enough to run more than a couple of VMs. One ix2 will deliver somewhere in that 100-200 IOPS range; more devices mean more IOPS. And since we're looking at NAS devices here, if you have NIC teaming for your IP storage VMkernel port, more devices could also mean more network throughput (depending on your NIC teaming configuration).
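
To put some very rough numbers on that (back-of-the-envelope, assuming a typical VM I/O size of around 8KB):

150 IOPS x 8KB per I/O = roughly 1.2MB/sec of random I/O
1Gb/sec GigE link = roughly 125MB/sec theoretical

The network has headroom to spare - it's the pair of SATA spindles that runs out of steam first, which is why adding devices (spindles) helps more than adding bandwidth.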

Hope that helps clear things up a bit. From the sounds of it, the 2TB device will suit your particular needs.

-jk

CotswoldsOli
Contributor

Thanks jk. That's a very helpful answer.

I think the 2TB version is the one for me. I don't think I'll be pushing the unit too hard with a handful of mostly-idle test VMs.

I'm also pleased to hear that iSCSI might be on its way to this device via a future firmware update. That will be a useful thing to have in my test environment. I have an iSCSI SAN to play with at work, but since it's in production I don't have as much scope for experimentation as I'd have with a test box.

Thanks again for your help.

Oli
