VMware Cloud Community
nullian
Enthusiast

Does a NAS do the job?

Hello!

I'm looking for some help on storage, since I'm new to this :P

We are thinking about increasing the storage capacity for our 2 ESX hosts (which currently only have local disk storage). What I want to ask is whether a NAS like e.g. the Iomega StorCenter ix4-200r ( https://iomega-eu-en.custhelp.com/cgi-bin/iomega_eu_en.cfg/php/enduser/std_adp.php?p_faqid=21841 ) would be too slow to serve as storage for 2 ESX hosts running ~7-10 VMs (some small ones, one SQL Server for our CRM, an AD server, a terminal server).

Both servers have Gbit NICs and a SCSI adapter for connecting external SCSI solutions, so maybe someone knows of an alternative that's not too expensive (let's say 3-4k) that works well, or can point me in the right direction ;)

I'm not all that happy with the NAS option, since most of the ones I have looked at aren't expandable for future upgrades, but then again I'm a noob 😕

15 Replies
bulletprooffool
Champion

Ideally you want one of three things:

iSCSI

FC

NFS

There are multitudes of solutions... you could even use an appliance like OpenFiler or FreeNAS, but 4k is a tight budget for a commercial-grade solution.
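For a rough idea, a minimal NFS setup for ESX with a plain Linux box (OpenFiler does the same via its web UI) might look like the sketch below; the IPs, paths and datastore label are made up for illustration:

```shell
# On the NFS server: export a directory with root access allowed.
# ESX mounts NFS shares as root, so no_root_squash is required.
echo '/vmstore 192.168.1.0/24(rw,no_root_squash,sync)' >> /etc/exports
exportfs -ra

# On the ESX host's service console: add the NFS datastore
# (syntax: esxcfg-nas -a -o <server> -s <share> <label>).
esxcfg-nas -a -o 192.168.1.50 -s /vmstore nfs-datastore
esxcfg-nas -l    # verify the mount
```

This is just a sketch of the moving parts; performance will still depend entirely on the hardware behind the export.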

One day I will virtualise myself . . .
JayStone
Contributor

We recently tried an IX4-100 via NFS to an ESX 4 host and the performance was terrible; we ended up returning the unit. We were running the VMs themselves on internal storage and had planned to use the IX4 as file-server storage for one of the VMs. Interestingly enough, performance via CIFS (or whatever it's called) directly from a Windows server/workstation to the IX4 was about twice as fast, but still slow. Supposedly the IX4-200r is a bit beefier (more cache), but I'd make sure you have the option to return the device if it doesn't work out.

nullian
Enthusiast

Yeah, the 4k limit gives me a hard time thinking of a solution I can be satisfied with...

That's why I turned to a NAS like the one mentioned above. It can serve as an iSCSI target or NFS and is VMware certified. But then again it is in no way upgradeable, and with 4 bays I can't even expand the disk space further in the future.

But the main point is still the performance. I don't have any experience with external storage, so I can only take wild guesses that the performance might not be that good. Maybe not even usable for a production environment with 2 servers accessing the storage.

"We recently tried an IX4-100 via NFS to an ESX 4 host and the performance was terrible, we ended up returning the unit. We were running the VMs themselves on internal storage and had planned to use the IX4 as file server storage on one of the VMs. Interestingly enough, performance via CIFS (or whatever its called) direct from a Windows server/workstation to the IX4 was about twice as fast, but still slow. Supposedly the IX4-200r is a bit beefier (more cache), but I'd make sure you have an option to return the device if it doesn't work out."

Gah... that's what I was afraid of.

Cache, yet another point I didn't have on my checklist. Thanks for sharing your experience :)

Message was edited by: nullian

nullian
Enthusiast

Ah, another question came up while thinking about a more expensive solution:

Is the difference between SAS disks and SATA II disks on a network storage connected at 1 Gbit/s worth the additional cost?

nullian
Enthusiast

No other opinions / experiences to share? :(

bulletprooffool
Champion

Like I said before - NFS, iSCSI or FC.

You could use an NFS appliance... and really, you could even host NFS on a Windows server... but the performance won't be great.

Try OpenFiler or FreeNAS?

I have heard of guys using those budget Buffalo NAS boxes, but they're not the best.

Good luck

One day I will virtualise myself . . .
nullian
Enthusiast

Hm, any guess at what "performance won't be great" means?

The idea of having a low-budget NAS working in the background sounds nice, but I'm not sure the performance is good enough to put all our servers on just one of them 😐

Perhaps I'm just getting too worked up about the performance issues.

I'm sorry for all the questions, but since I'm currently an apprentice and this will be my final project, I'm very careful ^^ (not to mention that I'll be the one who has to manage the "crap")

bulletprooffool
Champion

A cheap option:

http://justin-bennett-msjc.blogspot.com/2008/11/esx-server-v301-connected-buffalo.html

Not really production scale, but it could work for a small environment.

One day I will virtualise myself . . .
J1mbo
Virtuoso

The question is, what do you want from shared storage? It sounds to me like internal storage will be the way to go, unless you need HA, FT or vMotion. If you just want somewhere to provide a general shared store between the two machines, for backups for example, then as said, an old Windows server with Services for Unix should suffice (NFS share).

Provided the internal RAID has BBWC, the performance should be pretty good too :)

TimPhillips
Enthusiast

Hi there! Because you are using SQL Server, you'd be better off with an iSCSI solution. Since you say you're a noob, I'll explain the difference: NFS uses file-based access, which creates additional network overhead and lowers network performance, while iSCSI uses block-based access (so-called raw-disk mode) with very low network overhead. Moreover, for apps such as SQL, Exchange, etc. it is recommended to use a SAN rather than a NAS solution (SAN meaning iSCSI or FC). You can find additional info in my blog.
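On the ESX side, enabling the software iSCSI initiator is a short job from the service console. A sketch for ESX 3.x/4.x follows; the vmhba number varies per host, and the target itself is added under the adapter's dynamic discovery settings in the VI Client:

```shell
# Enable the software iSCSI initiator on the ESX host
esxcfg-swiscsi -e

# Check that it is enabled
esxcfg-swiscsi -q

# After entering the target's IP under Storage Adapters >
# iSCSI initiator > Dynamic Discovery, rescan for the new LUN:
esxcfg-rescan vmhba32
```

The discovered LUN can then be formatted as a VMFS datastore like any local disk.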

And also: if you are new to this, I wouldn't recommend free solutions such as OpenFiler or FreeNAS. They are free, but support costs too much, and for a noob they are too complicated to use.

Brechreiz
Contributor

Here in Germany there are some semi-pro NAS offerings; have a look at xtivate and their exonas (fits your 4k price limit).

kghammond2009
Enthusiast

I have been running OpenFiler in the lab with iSCSI, and performance has been adequate. I don't know if I would want to rely on a "free" solution for production, though. I guess you can spend 4K on hardware to get performance, or split the cost between hardware and support. It all depends on what you are comfortable with.

ESX likes SAS better, but we are running 150+ VMs all on SATA disks. We have 96 SATA disks spread across 2 clusters, so SATA can work.
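A back-of-envelope calculation shows why spindle type can still matter even behind a 1 Gbit link. The IOPS figures below are common rules of thumb, not vendor specs, and the 8 KB I/O size is an assumption typical of SQL Server random workloads:

```python
# Rough per-spindle random-I/O figures (rules of thumb, not vendor specs).
LINK_MB_S = 125          # 1 Gbit/s ~= 125 MB/s theoretical ceiling
IO_KB = 8                # assumed random I/O size (typical for SQL Server)

drives = {
    "SATA 7.2k": 80,     # ~75-100 random IOPS per spindle
    "SAS 15k": 180,      # ~175-200 random IOPS per spindle
}

for name, iops in drives.items():
    mb_s = iops * IO_KB / 1024
    print(f"{name}: ~{iops} IOPS/spindle -> ~{mb_s:.1f} MB/s of random I/O")

# Even a handful of spindles stays far below the 125 MB/s wire speed for
# random workloads, so the disks, not the GbE link, are the bottleneck.
```

In other words, for random I/O the GbE link rarely saturates; enough SATA spindles (as in the 96-disk setup above) can make up the IOPS gap that SAS would otherwise cover.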

If you go with OpenFiler, one upside is that it uses RAM to cache NFS. So with a good RAID card and a slew of RAM, you should be able to get some decent caching on NFS. Google OpenFiler and ESX and you should find quite a bit of information. One other note on OpenFiler: it does not cache block-level access. iSCSI is block-level, so if you go that route, RAM in the OpenFiler box will not help.

Hope this helps,

Kevin

TimPhillips
Enthusiast

Yes, OpenFiler is free as a product, but support for it costs too much; even the manual costs 40 euros. That's too much. Better to test some other solution. This is a situation where commercial is better than free. And about disks, I can say that in your case you do not need SAS; SATA disk performance will be absolutely enough.

nullian
Enthusiast

Thanks a lot for all the information you guys provided!

I'll be looking into this a bit more and will keep all the stuff you gave me in mind ;)
