VMware Cloud Community
m1kkel84
Contributor

Storage considerations - recommendations

Hello.

I am about to replace our old storage, because it is too slow.

I have some doubts about the following:

I have narrowed it down to the following 2 boxes because of price and the features needed. I do not need deduplication, flash copies, etc. I just need a damn fast storage for my 4 ESX servers (soon expanding to more) with approx. 28 virtual servers (Server 2008 R2 terminal servers, DCs, and Exchange).

I have chosen to focus on 24x 10K rpm 300 GB 2.5" SAS disks instead of 12x 3.5" disks. I believe I will get more IO from 24 disks than from 12.

Therefore I have to choose between these two:

IBM DS3524, dual FC controllers, 8 Gbit

Fujitsu Eternus DX80, dual FC controllers, 8 Gbit

Both boxes can be expanded with 4 shelves or something similar.

Fujitsu only supports 3 Gbit/s SAS, while IBM supports 6 Gbit/s SAS. Other than that, the amount of cache on the controllers is the same. The Fujitsu does not need batteries in the controllers; the IBM does.

The IBM has 8 FC ports; the Fujitsu has 4.

So, what to choose?

Is 8 Gbit FC worth it over 4 Gbit FC?

Will I benefit from 6 Gbit SAS disks instead of 3 Gbit SAS?

Am I buying old wine in new bottles with the Fujitsu (because of no 6 Gbit SAS support)?

Do you agree that the wisest choice is 24x 10K disks instead of 12x 15K disks?

FYI: We are not talking about Near Line disks!

Which box would you recommend?

Thanks

Regards, Mikkel

ThompsG
Virtuoso

Hi Mikkel,

Your question is probably going to open a can of worms 😉

Is 8 Gbit FC worth it over 4 Gbit FC?

Obviously, unless you have end-to-end 8 Gbit FC it is not worth it. If you are going to pay extra for 8 Gbit FC then I would put the money into extra spindles rather than FC speed. We are running an HP EVA8400 with 176 15K drives that averages around 350 MB/s, which is not enough to flood a single 4 Gbit FC port. Peak is well above this, but even then it is not enough to flood 2x 4 Gbit FC ports, so 8 Gbit would be a waste. Now if we had SSDs....
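For a rough feel for those numbers, here is a back-of-the-envelope sketch. It assumes the usual approximation that FC carries about 100 MB/s of usable payload per nominal Gbit (because of 8b/10b encoding); the 350 MB/s figure is the EVA8400 average from above.

```python
# Back-of-the-envelope check: does array throughput justify 8 Gbit FC?
# Assumption: FC's 8b/10b encoding leaves roughly 100 MB/s of payload
# per nominal Gbit (~400 MB/s at 4 Gbit, ~800 MB/s at 8 Gbit).

FC_PAYLOAD_MB_PER_GBIT = 100  # approximate usable MB/s per nominal Gbit

def link_utilisation(array_mb_per_s: float, link_gbit: int, ports: int = 1) -> float:
    """Fraction of the FC link(s) that the array's throughput would occupy."""
    capacity = FC_PAYLOAD_MB_PER_GBIT * link_gbit * ports
    return array_mb_per_s / capacity

# The EVA8400 example from the post: ~350 MB/s sustained.
print(f"4 Gbit, 1 port : {link_utilisation(350, 4):.0%}")    # ~88%
print(f"4 Gbit, 2 ports: {link_utilisation(350, 4, 2):.0%}")  # ~44%
print(f"8 Gbit, 1 port : {link_utilisation(350, 8):.0%}")    # ~44%
```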

Do you agree that the wisest choice is 24x 10K disks instead of 12x 15K disks?

Yes and no. If you look at the average IOPS per drive, a 10K drive will give you about 120 IOPS whereas a 15K drive will give about 160 IOPS. Working on the numbers of drives you are looking at, you will be 960 IOPS better off with the 10K drives, but if you purchase another 6x 15K drives then the balance shifts. If it were me I would lean towards the 15K drives, taking an initial "drop" in performance but looking to future expansion. The other reason to go with the 15K drives is that their access times are significantly better than the 10K drives'. Given that VMware workloads tend to be random IO, having drives with lower access times helps you get to the data quicker.

Please note that the IOPS figures above are theoretical and may change depending on your storage vendor, but they will be fairly accurate.
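To make that concrete, here is the arithmetic as a minimal sketch, using the same rule-of-thumb per-drive figures:

```python
# Spindle-count arithmetic from above: rule-of-thumb per-drive IOPS.
IOPS_10K = 120  # average IOPS for a 10K rpm SAS drive
IOPS_15K = 160  # average IOPS for a 15K rpm SAS drive

print(24 * IOPS_10K)                   # 2880 IOPS for 24x 10K drives
print(12 * IOPS_15K)                   # 1920 IOPS for 12x 15K drives
print(24 * IOPS_10K - 12 * IOPS_15K)   # 960 IOPS in favour of the 10K option
print(18 * IOPS_15K)                   # 2880 IOPS: six extra 15K drives close the gap
```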

Will I benefit from 6 Gbit SAS disks instead of 3 Gbit SAS?

At the upper limit of the disk, yes, 6 Gbit will be better. Also confirm whether the drives are dual-ported, as this helps as well.

Not sure if this has helped or hindered.

Kind regards.


depping
Leadership

I agree. One thing to note here is that the focus is heavily on performance; the question is whether the workload will actually push the array that hard, or whether you are sizing for what it possibly might.

I hardly ever see anyone pushing the boundaries of 10 disks, let alone 24. Now, the question also is: if you have 24 disks, would they be a single disk group/aggregate serving multiple LUNs/datastores? And if so, how will the IOPS be carved up? How much will a single workload need, and will a single LUN be able to serve that amount? Obviously there is a lot to consider here, and on top of that you also have caching coming into play.
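To make the carving-up question concrete, here is a hypothetical sketch. The VM counts and relative IO weights are made-up illustrations, not measurements; the point is only that the aggregate spindle IOPS gets divided, so a single heavy workload may see far less than the headline number:

```python
# Hypothetical carve-up of aggregate spindle IOPS across LUNs/workloads.
# VM counts and per-VM IO weights below are illustrative guesses only.

TOTAL_IOPS = 24 * 120  # 24x 10K drives at ~120 IOPS each

workloads = {
    # name: (number of VMs, relative IO weight per VM) -- assumptions
    "exchange": (4, 5.0),   # mail servers tend to be IO-heavy
    "terminal": (14, 2.0),  # terminal servers, moderate IO
    "dc/infra": (10, 1.0),  # DCs and light SQL, light IO
}

total_weight = sum(n * w for n, w in workloads.values())
for name, (n, w) in workloads.items():
    share = TOTAL_IOPS * n * w / total_weight  # this group's slice of the pool
    print(f"{name:8s}: {n:2d} VMs -> ~{share:4.0f} IOPS total, ~{share / n:3.0f} per VM")
```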

Duncan (VCDX)

Available now on Amazon: vSphere 4.1 HA and DRS technical deepdive

m1kkel84
Contributor

Hello Guys, and thanks for answering.

I am sorry for my extremely late reply, but last week was busy!

Depping, you say: "I hardly ever see anyone pushing the boundaries of 10 disks".

Well, my DS3200 with 12x 15K SAS is performing badly. It is 2x 1 GbE iSCSI, and I think that might be the reason.

I have no idea how I would create the LUNs if I had 24 disks. Maybe 8x 3 in RAID 10? Or RAID 5? Or 2x 12? I have no requirements for how to do this. If I create two 12-disk LUNs, I guess the IO will be queued, waiting to be read or written. So basically, I will do whatever is the smartest and best way.
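For comparison, here is a minimal sketch of how two common layouts trade usable capacity against write cost. It assumes ~120 IOPS per 10K drive, the standard RAID write penalties (2 for RAID 10, 4 for RAID 5), and a 70/30 read/write mix; the mix is a guess, not a measurement:

```python
# Rough comparison of RAID layouts for 24x 300 GB 10K SAS drives.
# Assumptions: ~120 IOPS/drive, write penalty 2 (RAID 10) / 4 (RAID 5),
# and a 70/30 read/write mix (a guess for this workload).

DRIVES, SIZE_GB, IOPS_PER_DRIVE = 24, 300, 120
READ, WRITE = 0.7, 0.3

def effective_iops(write_penalty: int) -> float:
    """Host-visible IOPS once the RAID write penalty is paid."""
    raw = DRIVES * IOPS_PER_DRIVE
    return raw / (READ + WRITE * write_penalty)

print(f"RAID 10 (12 mirror pairs) : {DRIVES // 2 * SIZE_GB} GB usable, "
      f"~{effective_iops(2):.0f} IOPS")
print(f"RAID 5  (4x 6-disk groups): {(DRIVES - 4) * SIZE_GB} GB usable, "
      f"~{effective_iops(4):.0f} IOPS")
```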

I have 30 virtual servers, consisting of a few really light SQL loads, some DCs serving files, 4 Exchange servers, and a bunch of Server 2003 and Server 2008 R2 terminal servers.

In the morning, opening files, MS Word, etc. is slower than it is a few hours into the working day at our customers. That is why I concluded my existing SAN is overloaded. Also, there is no VMware 4.1 support from IBM! 😞 I even moved a few VMs to the local ESX datastores to speed things up.

I have been looking at some more SANs:

HP P2000 dual-controller FC. I could actually save a lot of money if I bought SAS HBAs and bet that a SAS switch will be available soon, to plug in more than 4 servers. Anyway, the price on the SANs is almost the same whether I choose 24 or 12 disks. 24 will give me more space, and more IO!

On the other hand, I do not understand why ThompsG says that seek times are slower on the 2.5" disks, when the diameter of the spinning platter is so much smaller?

If I go with fibre and a SAN with dual controllers, will I need dual-port HBAs to load balance? Or can a single-port HBA access both FC controllers if everything is plugged into an FC switch? (I am aware I will lose redundancy, but I am okay with that for starters.)

The IBM DS3512/3524 is NOT an active/active array. What do you think about that?

The others are active/active.

The Fujitsu is the cheapest, but the backplane is only 3 Gbit SAS.

Damn this is a jungle!

And now I've met someone talking about NexentaStor using 18x 500 GB SATA + 6 SAS drives, which will give me 300 GB of cache. I won't get that anywhere else, but I don't feel safe betting on an open-source system from a small vendor. HP, IBM, etc. will after all give me 4-hour onsite hardware service. NexentaStor would give me free snapshots, deduplication, etc., but those are things I do not use! I have asked the seller to run an IOMeter test with the test from the "Unofficial vmware storage thread" so I can compare. He claims it will knock out any 12-disk SAS system in IO! The big advantage of the Nexenta is that it moves frequently used data onto the fast storage, while less frequently used data is moved to the SATA disks. But as depping stated, VMware is RANDOM IO...

Let me know what your opinions are on all this. Thanks

ThompsG
Virtuoso

Hi,

I'll answer more fully later on, but for the moment I would like to clarify something quickly.

On the other hand, I do not understand why ThompsG says that seek times are slower on the 2.5" disks, when the diameter of the spinning platter is so much smaller?

My comments were not in relation to the physical size of the drive but to the rotational speed, i.e. 10K vs. 15K. It just happens that in your case the 3.5" drives are 15K and the 2.5" drives are 10K. Getting 15K drives in the small form factor (SFF) normally adds a significant amount to the end price.

Kind regards,

Glen

DSTAVERT
Immortal

Just remember that NexentaStor is just a server with a bunch of disks, not a dual-controller SAN. You would need to look at getting two of those devices and doing device replication to get the same fault tolerance as a dual-controller SAN.

-- David -- VMware Communities Moderator
depping
Leadership

I will make it even more difficult: have you looked at EMC's VNXe? I have seen the prices on those, and I must say that for SMB they seem pretty compelling. Especially starting with NFS/iSCSI can and will make your life easier from a migration/learning-curve perspective.

Again, it would be good to identify the bottleneck in your current environment before you make decisions about your new SAN. For instance, if it is the 1 GbE link that is causing the slowdown, that is something you will need to factor in. If it is the number of spindles backing your LUN, you might want to go for faster disks.
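One way to gather that evidence, as a hedged sketch: run esxtop in batch mode on the ESX host (for example "esxtop -b -n 60 > stats.csv") and scan the output for sustained high device latencies. The counter-name fragment below is an assumption; adjust it to whatever latency columns your build actually emits:

```python
# Scan esxtop batch output (CSV) for the worst storage latencies seen.
# The substring below is an assumed fragment of the latency counter names
# ("... MilliSec/Command"); adjust it to match your esxtop build.

import csv

LATENCY_SUBSTRING = "MilliSec/Command"

with open("stats.csv", newline="") as f:
    reader = csv.reader(f)
    header = next(reader)
    cols = [i for i, name in enumerate(header) if LATENCY_SUBSTRING in name]
    worst = {header[i]: 0.0 for i in cols}
    for row in reader:
        for i in cols:
            try:
                worst[header[i]] = max(worst[header[i]], float(row[i]))
            except (ValueError, IndexError):
                pass  # skip empty or malformed samples

# Sustained device latency above ~20 ms usually points at the array/spindles,
# while low device latency plus slow guests points elsewhere (e.g. the network).
for name, value in sorted(worst.items(), key=lambda kv: -kv[1])[:10]:
    print(f"{value:6.1f} ms  {name}")
```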

I would also recommend looking for an array that has VAAI capabilities, as that will reduce some of the "pressure" on the environment by offloading specific tasks.

All in all, it is always difficult to make decisions like these.

Duncan (VCDX)

Available now on Amazon: vSphere 4.1 HA and DRS technical deepdive

antonior14
Contributor

Hi! I'm considering building a VMware solution for my company using an HP ProLiant with local storage....

But what's the best choice for storage performance and redundancy (iSCSI, FC, NFS)?

Thanks a lot!
