VMware Cloud Community
BoneTrader
Enthusiast

NAS for VMware Homelab

Hi,

I'm looking for a NAS for my homelab; I'm thinking of QNAP's TS-459 Pro+ (http://www.qnap.com/pro_detail_feature.asp?p_id=162).

Will it be able to run 3-5 VMs?

Does anybody have experience with these boxes?

15 Replies
DSTAVERT
Immortal

They can certainly work in a small environment, but they will not provide much disk performance in a virtual environment, especially since you must disable disk caching for ESX(i).

You can do better using an old PC you have available and adding disks. Use a standard Linux distribution and set it up as an NFS server, or use something like Openfiler, FreeNAS, Open-E, or a number of other free or free-ish software packages to get a pretty interface on top of NFS or iSCSI connectivity.
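
On a stock distro the NFS part is only a few lines. A rough sketch, assuming Debian/Ubuntu, a /srv/vmstore directory on the data disks and a lab subnet of 192.168.1.0/24 (adjust all of those to your own setup):

    # install the kernel NFS server
    apt-get install nfs-kernel-server

    # /etc/exports -- export the datastore directory to the lab subnet;
    # no_root_squash is needed because ESX(i) mounts NFS as root
    /srv/vmstore  192.168.1.0/24(rw,sync,no_root_squash,no_subtree_check)

    # activate the export table and verify
    exportfs -ra
    exportfs -v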

-- David -- VMware Communities Moderator
BoneTrader
Enthusiast

Yep, I know... but the thing is, even if I use a barebone PC it still has a power consumption of roughly 200-250 watts.

One of those little boxes only draws 20-30 watts...

Is there some other/better NAS for the same amount of money?

unexpected
Contributor

I am using the iOmega StorCenter IX4-200D and I am very satisfied with it. It works very well with VMware and it is on the HCL. You can use it over NFS or iSCSI (I use NFS at the moment).

Geek, tech-enthusiast, VCP3, VCP4 & blogger @ http://www.unexpected.be | twittering @ http://twitter.com/unexxx
J1mbo
Virtuoso

PC power consumption depends on what's in it: an old Dell Optiplex P4 3 GHz with one SATA disk, running Debian Linux with CPU power management (cpufrequtils), idles at about 45 W. An Atom-based solution would be considerably lower.
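
For reference, the cpufrequtils part is only a couple of commands; this is a sketch from a Debian box, so the package name and available governors may differ on other distros:

    # install the userspace frequency tools
    apt-get install cpufrequtils

    # show the current driver, frequency range and governor
    cpufreq-info

    # put each core on the ondemand governor so it clocks down at idle
    cpufreq-set -c 0 -g ondemand
    cpufreq-set -c 1 -g ondemand

    # make it stick across reboots (Debian convention)
    echo 'GOVERNOR="ondemand"' >> /etc/default/cpufrequtils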

Bear in mind that each disk will probably use around 8 W 'at the wall' (typical 3.5" 7,200 rpm SATA drive).

ldentella
Contributor

Well, I use the same storage (iOmega StorCenter IX4-200D) via iSCSI, but I have to warn you about some suspicious reboots I noticed when high I/O was performed (for example during a Storage vMotion or a snapshot).

BoneTrader
Enthusiast

Hi, sorry it took me a while to reply.

First I wanted to buy one of those Iomega boxes, but I read a lot about them (and their problems).

So I decided to go with the TS-459 Pro+, which should do for testing (I hope)...

I talked to my hardware supplier and he told me that it won't work because it's too "slow" (sure, maybe... but it isn't an IBM SAN, and it's only for testing...).

Anyway, at the end of the month I'll order one, and if anybody is interested in some test results/pics, I could post them...

Greetings, Bone

cdc1
Expert

I'm interested in hearing how it went for you with your TS-459 Pro+, if it's not too much trouble.

  • Any "gotchas" to be aware of?
  • How is performance for you?
  • How many VMs do you have running on it?
  • How did you carve up the storage (i.e., how many datastores and how big are they)?
  • Easy to install?
  • Easy to manage?
  • Are firmware updates easy to perform (any unexpected behavior during an update, for example)?
  • Which version of ESX/ESXi do you have talking to it? Were there any problems getting them talking to it?
  • Anything else you can think of?

Thanks.

ctrotter
Contributor

Has anyone heard about a replacement for the IX4-200d? It was released ca. 2009, so you would figure it is due for a refresh. The reason I ask is that I'll be purchasing two, but not if there's a new one around the corner.

Google so far has not turned up anything useful, and Iomega's site has no news about a replacement.

mjcar
Contributor

I have been using the Thecus N4200 as an iSCSI target and it's working well; the dual Gigabit ports work nicely.

J1mbo
Virtuoso

For fast homelab shared storage, look no further than Linux, NFS, and an SSD: http://blog.peacon.co.uk/esx-lab-hardware-shared-storage/
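
Mounting it on the ESX(i) side is then a one-liner from the console or vMA; the IP, share path and datastore label below are just examples:

    # add the NFS export as a datastore on ESX(i) 4.x
    esxcfg-nas -a -o 192.168.1.50 -s /srv/vmstore LabNFS

    # list the NAS datastores to confirm the mount
    esxcfg-nas -l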

Obviously the viability of an SSD depends on how much capacity you need.

ctrotter
Contributor

Thanks for the link; I was actually thinking about that after I posted: doing something with RAID 1 or RAID 10 SSDs.

A little birdie also mentioned that holding off for a while on the Iomega couldn't hurt.

Here's another option I was considering, although it would have performance ramifications:

  • ESX host with lots of fast local storage.
  • For shared storage use VSAs that take the local storage and turn it into iSCSI or NFS.

So your I/O path would be something like:

  1. VM whose disk is on the iSCSI target
  2. ESX host is connected to the iSCSI target of the VSA
  3. Network (iSCSI VLAN)
  4. ESX host hosting the VSA
  5. ESX host local storage backing the VSA

Versus:

  1. VM whose disk is on the iSCSI target
  2. ESX host is connected to the iSCSI target of the NAS
  3. Network (iSCSI VLAN)
  4. iSCSI target (LUN) on the NAS

So that's one more layer to go through; I'd be curious to see how it impacts I/O.

Another consideration for RAID 10 SSDs would be saturation... a 1 Gb link could easily be saturated by four SSDs, so you'd really want to be able to leverage MPIO, or at least bonding of interfaces.
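
Rough numbers to put that in perspective, assuming roughly 250 MB/s sequential per SSD (ballpark for current SATA drives, so treat these as estimates):

    4 x SSD in RAID 10:  reads up to ~4 x 250 MB/s = ~1000 MB/s,
                         writes ~2 x 250 MB/s = ~500 MB/s
    single 1 Gb link:    ~125 MB/s theoretical, ~110-115 MB/s usable

    -> even the write side is several times what one GbE link can carry,
       hence the need for MPIO or bonding.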

Thus: VSA on local ESX host storage providing an iSCSI target for the other ESX hosts in the lab.

Have I missed anything?  Bad idea?

J1mbo
Virtuoso

A NAS running as a VM itself, or a VSA of course, should perform fine. RAID 0 may be an option for a lab environment to keep the costs down.

BoneTrader
Enthusiast

Hi cdc1,

Sorry that I didn't answer earlier; I've got a lot of work at the moment.

The TS-459 Pro+ is, in my opinion, the ultimate NAS box... it's not exactly cheap, but that's how it is with the good stuff...

Q: Are firmware updates easy to perform (any unexpected behavior during an update, for example)? Which version of ESX/ESXi do you have talking to it? Were there any problems getting them talking to it?

A: My current setup is:

    Firmware: V3.4 (firmware update configured as auto-update -> works pretty well)

    HDD: WD1003FBYX 1 TB RE4 24/7

    Mode: RAID 1 (currently; will upgrade to RAID 10)

    Switch: Cisco SLM-2008EU

    ESXi box: AMD Athlon 5200+, 4 GB RAM, Intel Pro GT Desktop Adapter, 8 GB USB stick for ESXi

Q: How is performance for you? How many VMs do you have running on it? How did you carve up the storage?

A: I'm using the 459 Pro+ as an iSCSI target for my ESXi (4.0) host, currently storing 4 VMs (XP, W2K8, W2K8, Ubuntu Server).

    The CPU usage with all 4 up and running is between 2.5 and 3.5.

    Yesterday I increased the size of one LUN from 300 to 500 GB (two LUNs in total: 500 GB and 100 GB) -> no problems.
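
For what it's worth, after growing a LUN on the QNAP I just rescan the software iSCSI adapter from the ESXi console and then grow the VMFS volume in the vSphere Client. Commands from memory on ESXi 4.0, and the vmhba number is just an example (check yours first):

    # enable the software iSCSI initiator (only needed once)
    esxcfg-swiscsi -e

    # rescan the software iSCSI adapter so ESXi picks up the new LUN size
    esxcfg-rescan vmhba33

    # list the paths/LUNs to check the result
    esxcfg-mpath -l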

If you need/want screenshots of the web interface, tell me 😉

stgepopp
Hot Shot

Hi,

I've been using a QNAP SS-439 Pro for a year now. I've put in 4x 2.5" 750 GB 7,200 rpm SATA II notebook disks in a RAID 5 configuration, and the performance is acceptable.

Both iSCSI and NFS are possible; NFS performs a little bit better.

I use that box mainly for demonstrating VMware products like vSphere, Lab Manager, View and vCloud Director.

A few weeks ago I built a complete vCloud environment with:

  • 1 physical ESX host
  • 2 virtual ESX hosts
  • 1 vCenter Server
  • 1 vCD server
  • 1 vCD DB server
  • 1 vShield server
  • 4 VMs (XP, W2k8)

All objects are on the NFS share, and the overall performance is very acceptable.

regards

Erich

harryj
Contributor

When it comes to NFS performance and reliability, you can't beat OpenSolaris.
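
If anyone wants to try it: carving out an NFS share from a ZFS pool on OpenSolaris is only a handful of commands. Pool, dataset, disk and host names below are just examples:

    # create a mirrored pool from two disks
    zpool create tank mirror c8t0d0 c8t1d0

    # dataset for the VMs, shared over NFS; ESX(i) mounts as root,
    # so grant root access to the host (see the share_nfs options)
    zfs create tank/vmstore
    zfs set sharenfs=rw,root=esx01 tank/vmstore

    # verify the share
    zfs get sharenfs tank/vmstore
    share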
