tekhie
Contributor

Moving from SAN to NAS

Hi everyone - I wonder if anyone can assist with the following scenario...

I currently have 3 ESX hosts attached to an MSA1000 (RAID 1+0) with 256MB read and 256MB write cache. I have extracted the read/write rate and read/write request information from vCenter for each host. I have been doing this daily and am getting stats at a 5-minute interval. Each day, the combined throughput of all 3 hosts to all my datastores is as follows:

Maximum read/write rate = 80,000 KB/s. This happens overnight when our vRanger backups run.

Average read/write rate = about 20,000 KB/s during the day.

Maximum number of read/write commands (IOPS) = 2,000 (at the same time the data throughput peaks).

Average number of read/write commands = 750 during the day.

My first question is: what is the maximum amount of data that can be read/written per read/write command? The SAN has a 128KB stripe size (is this the same as block size?), whereas the VMFS datastores are formatted with 1MB, 2MB or 4MB block sizes - so if I issue 1 write command, what is the maximum amount of data that can be written by that command?
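For what it's worth, a rough sanity check on my own numbers (a sketch, assuming the vCenter counters really are per-second averages over each 5-minute sample): dividing throughput by IOPS gives the *average* I/O size per command at that moment, though not the maximum.

```python
# Rough sanity check: derive the average I/O size per command from
# throughput / IOPS. Values are the peaks from the vCenter stats above
# (assumed to be KB/s and commands/s respectively).
peak_throughput_kb_s = 80_000   # combined read/write rate during backups
peak_iops = 2_000               # read/write commands at the same time

avg_io_size_kb = peak_throughput_kb_s / peak_iops
print(f"Average I/O size at peak: {avg_io_size_kb:.0f} KB")  # 40 KB
```

So at peak my hosts are averaging roughly 40 KB per command, which is a useful number to carry into any sizing discussion.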

We have been specced a new filer to replace the SAN (a NetApp 3210 HA cluster) with 24 x 1TB SATA and a 256GB read cache card. I'm being advised that the controller is capable of 5,000 IOPS - but is IOPS the best way to assess whether the performance will be acceptable, and how much room I will have to grow into?
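As a back-of-the-envelope check on the spindles themselves (a sketch, not a NetApp spec: ~75 IOPS per 7.2K RPM SATA disk is only a common rule of thumb, and this deliberately ignores the controller's read cache and any write coalescing):

```python
# Back-of-the-envelope: raw spindle IOPS of the proposed shelf vs. my
# measured peak. 75 IOPS/disk is an assumed rule of thumb for 7.2K SATA,
# not a vendor figure; controller cache effects are ignored.
disks = 24
iops_per_sata_disk = 75          # assumption: typical 7.2K RPM SATA
measured_peak_iops = 2_000       # from my vCenter stats above

raw_spindle_iops = disks * iops_per_sata_disk
headroom = raw_spindle_iops / measured_peak_iops
print(f"Raw spindle IOPS: {raw_spindle_iops}, vs peak: {headroom:.1f}x")
```

If that rule of thumb is anywhere near right, the raw spindles alone sit close to my current peak, so the quoted 5,000 IOPS must be leaning heavily on the cache - which is exactly the kind of thing I'd want the vendor to demonstrate.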

I do see a direct correlation between data throughput and the number of read/write commands in my graphs, so I'm confident that my data extraction methods are correct 😉

Maybe data throughput is a better way of assessing it? Any advice would be most welcome.

My managers do not want to spend all that money only to find out that either a) the new hardware cannot handle the current SAN load, or b) there is no room to grow into. I'm sure that neither a nor b will be an issue - but I need to prove this!

Thanks

2 Replies
bulletprooffool
Champion

tekhie,

I'd suggest 2 things.

1) Get some tests running if you have the hardware available

2) Get the vendor of the new hardware to 'prove' to you that they can handle the workload (whether this means demo kit or whitepapers is up to you - most vendors will have a way of demoing, even if it means a visit to their labs).

Do not make this decision until you have proof that your new solution is going to be fit for purpose.

Personally, I have been both impressed and happy with the NetApps I have used in the past. I've never had speed issues, but my experience alone is not enough - get some hands-on time!

One day I will virtualise myself . . .
AndreTheGiant
Immortal

It's not so simple to compare block-based storage with file-based storage.

IOPS can tell you the maximum you could reach... but usually there are a lot of bottlenecks before that 🙂

NFS could be interesting in some cases (for example, VDI).

But for huge VMs with high levels of I/O, I prefer a block-based solution.

Andre

Andre | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro