VMware Cloud Community
dragon20
Contributor

ESXi Server Design question (DELL 2950)

Hi all,

I have a question for you ESX experts, hoping you can point me in the right direction.

To start off with, I will say that we can't afford a NAS or SAN.

We are looking to purchase a DELL 2950 with the following Spec

2x Xeon E5420

32GB RAM

2x 146GB SAS Drives in Mirror (Install ESXi)

4x 1TB SATA Drives in RAID10 (Store Guest OS)

I had the thought that, as long as I had RAM, I could create as many VMs as I liked on the 2TB RAID 10 array.

They are all Windows 2003/2008 servers running limited applications. None are dedicated SQL Servers. A lot of these will probably have small footprints; the server with the highest footprint would likely be Windows SBS 2008.

Now, I was speaking to a colo reseller, and they told me I would be wasting the 2950 as there would be a large I/O bottleneck. They said I might as well buy a couple of DELL R300s and split the load across them.

From what I understood, there wasn't much of a performance bottleneck on RAID10 with SATA drives, and the benefit of putting in 15k SAS drives was marginal and really depended on what kind of applications you were running?

There was a thread I saw in the VMware communities discussing different RAID levels and different drive types (e.g. SATA vs. SAS) and their benefits and drawbacks.

I just thought I'd get some advice from the experts: if you had the choice to either spend on a souped-up DELL 2950 or perhaps 2-3 DELL R300s,

What would you choose?

Any advice would be greatly appreciated.

Thank you

9 Replies
wila
Immortal

Hi,

SATA local storage will not give you the same performance as SAS local storage. As soon as you want to run more than a couple of VMs, your performance goes down big time.

The first question is whether your SATA array will even be recognized, although I expect it might work with ESX 3.5 Update 4.

It depends; did you check the storage controller against the HCL?

Like your reseller, I expect you will have I/O problems with this setup (first in getting it working, then with performance).

It will work for a lab, not so much for production.

Also, make sure that your storage controller has a battery-backed cache.



--

Wil

_____________________________________________________

Visit the VMware developers wiki at http://www.vi-toolkit.com

| Author of Vimalin. The virtual machine Backup app for VMware Fusion, VMware Workstation and Player |
| More info at vimalin.com | Twitter @wilva
depping
Leadership

I would rather have 2 or 3 hosts than just a single host. Load-balance the environment! This will give you more flexibility and uptime.

Duncan

VMware Communities User Moderator


If you find this information useful, please award points for "correct" or "helpful".

dragon20
Contributor

Hi guys,

Thanks for the information :)

I appreciate you clarifying the SATA issues and potential problems.

Cheers!

wila
Immortal

Hi,

Well, support for SATA disks has only been added recently, so the issue can be that there simply is no driver in the package; it depends on the storage controller you will be using.

You could check against the HCL here: http://www.vmware.com/resources/compatibility/search.php

The other problem I expect to see is scalability, but if you don't run many VMs you might not notice it.

SATA just doesn't cut it when you want to run many virtual machines; RAID 10 might mitigate it a little, but SAS disks will be better.
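
To put rough numbers on that, here is a tiny Python sketch. The per-spindle IOPS figures are the usual rules of thumb, not measurements; actual results depend heavily on the controller, cache and workload:

# Rule-of-thumb random IOPS per spindle (ballpark assumptions, not vendor specs).
IOPS_PER_SPINDLE = {"7.2k SATA": 80, "10k SAS": 130, "15k SAS": 175}

SPINDLES = 4  # the 4-drive array being discussed

for drive, iops in IOPS_PER_SPINDLE.items():
    print(f"{drive:<9}: ~{SPINDLES * iops} aggregate random read IOPS")

With those assumptions the 4-drive SATA set tops out around 320 random IOPS, while the same count of 15k SAS drives would give you roughly double that.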



--

Wil

_____________________________________________________

Visit the VMware developers wiki at http://www.vi-toolkit.com

| Author of Vimalin. The virtual machine Backup app for VMware Fusion, VMware Workstation and Player |
| More info at vimalin.com | Twitter @wilva
TomHowarth
Leadership

Moved to the ESXi forum.

If you found this or any other answer useful, please consider awarding points using the Helpful or Correct buttons.

Tom Howarth VCP / VCAP / vExpert
VMware Communities User Moderator
Blog: http://www.planetvm.net
Contributing author on VMware vSphere and Virtual Infrastructure Security: Securing ESX and the Virtual Environment
Contributing author on VCP VMware Certified Professional on VSphere 4 Study Guide: Exam VCP-410
kjb007
Immortal

If you are running low-intensity workloads, you will probably be fine. If you have the budget to go higher and spread the load over multiple servers, then do so. I run a few blades with SATA storage; they host light workloads and run fine. As pointed out already, generally speaking SAS will give you better performance than SATA, and you are limited by disk speed, but the storage capacity makes a huge difference.

I would also not dedicate two 146 GB drives to ESXi. They will barely be used; ESXi is a small-footprint hypervisor. Take a small chunk out of the SATA array for your ESXi boot image, and use the SAS drives for your SBS server to give it the fastest storage you can. Also, since you're already using SATA, I would not use RAID10 either; use RAID5. You'll split your writes over more drives and will likely get adequate write performance.
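
If it helps to see that trade-off in numbers, here is a quick back-of-envelope sketch to run when weighing the two layouts (assuming ~80 random IOPS per 7.2k SATA spindle, a common rule of thumb; your controller and its cache will shift these numbers). The classic write penalties apply: each front-end random write costs 2 back-end I/Os on RAID10 and 4 on RAID5:

# Back-of-envelope comparison of RAID10 vs RAID5 on 4 x 1 TB SATA drives.
# The 80 IOPS/spindle figure is a ballpark assumption, not a measurement.
N_DRIVES, DRIVE_TB, SPINDLE_IOPS = 4, 1.0, 80

# (usable capacity in TB, back-end I/Os per front-end random write)
layouts = {
    "RAID10": (N_DRIVES / 2 * DRIVE_TB, 2),    # mirroring halves capacity
    "RAID5":  ((N_DRIVES - 1) * DRIVE_TB, 4),  # read data+parity, write data+parity
}

backend_iops = N_DRIVES * SPINDLE_IOPS
for name, (usable_tb, write_penalty) in layouts.items():
    print(f"{name}: {usable_tb:.0f} TB usable, ~{backend_iops} random read IOPS, "
          f"~{backend_iops // write_penalty} random write IOPS")

So RAID5 buys you an extra terabyte of capacity, at the cost of lower sustained random-write throughput; which matters more depends on your workload mix.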

-KjB

vExpert / VCP / VCAP | vmwise.com / @vmwise
Erik_Zandboer
Expert

The two SAS drives are almost a waste: those kinds of drives can actually perform quite well for running VMs. SATA... not so much. Luckily, it is not rocket science :) Imagine you had four physical servers, each with a single SATA drive. How well would they perform (not looking at redundancy, just performance)? If you put those four SATA drives in an ESX(i) server, you could expect about the same performance. Keeping that in mind, you could run 4, 5, maybe even 10 VM servers from them without issues.

A lot of people expect to be able to run 40 VMs from them. If you point out that 10 VMs would then share a single spindle, that usually puts their mind in the right frame :) ... Just do not expect that your 50 physical servers (having 100+ mostly SCSI disks!) will run from 4 SATA drives, and you'll be fine.

Also, look at cost versus spindle size. You can go for very large SATA drives; in your case you would end up with 2TB. If you run 10 VMs from them, each using 15GB of disk, you can only fill your SATA drives up to 10 x 15GB = 150GB! You should leave the rest EMPTY. That's OK if you have a master backup-to-disk scenario to run at night, but otherwise I would look at 4 x 146GB / 15k, which will get you much further performance-wise...

Bear in mind, this is not an exact science (for example, it is hard to tell how many spindles are actually used in a RAID10 config; basically it is only two, but the RAID1 sets might add read performance on top of that if you have a smart controller)... Cache will also help a lot. It is merely meant to help you understand the general nature of things.
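
To make the spindle maths concrete, here is a minimal sketch that turns it into a VM-count estimate. Every constant is an assumption (the per-spindle IOPS, the 70/30 read/write mix, and the ~25 IOPS a light Windows server might average), so substitute your own figures:

# Rough "how many VMs will this array carry" estimate, in the spirit of
# the spindle reasoning above. All constants are ballpark assumptions.
SPINDLES = 4           # 4 x 1 TB SATA in RAID10
SPINDLE_IOPS = 80      # rule of thumb for a 7.2k SATA drive
WRITE_PENALTY = 2      # RAID10: each front-end write costs 2 back-end writes
READ_FRAC, WRITE_FRAC = 0.7, 0.3
IOPS_PER_VM = 25       # guess for a light Windows 2003/2008 server

backend = SPINDLES * SPINDLE_IOPS
# Each front-end I/O costs READ_FRAC*1 + WRITE_FRAC*WRITE_PENALTY back-end I/Os.
frontend = backend / (READ_FRAC + WRITE_FRAC * WRITE_PENALTY)
print(f"~{frontend:.0f} usable front-end IOPS -> roughly {frontend / IOPS_PER_VM:.0f} VMs")

With those assumptions you land at roughly 10 VMs, which lines up with the "a handful of VMs per spindle" intuition above.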

Visit my blog at http://erikzandboer.wordpress.com

Visit my blog at http://www.vmdamentals.com
AsherN
Enthusiast

I would put 6 x 300GB SAS drives (15k; 10k might work) in a RAID6 array.

As well, if you are looking at ESXi, why not get the embedded USB version from Dell and free up drive bays for the VMs?

Kallex
Enthusiast

Hi!

I'll go into quite specific (and biased) detail, because I was just about to buy a PE 2950 myself, but when Dell introduced the new-generation replacement, the R710, I went with that instead. The R710 gives you the option of going up to 72 GB of RAM with 4 GB sticks, or just staying at 32 GB now and upgrading later. There's also the CPU capacity of the new Intel architecture alongside its memory bandwidth; it's more than double that of the earlier generation. Yes, double.

1. For a small budget, I'd rather go for one decently equipped and well-managed server (with proper 4-hour critical support) than two lower-class ones. In either case you're semi-screwed for the duration of any downtime, but the bigger server has more redundancy options. And you need backups and a recovery plan either way.

2. ESXi should not require much I/O once it's up and running. The SAS is wasted there, whereas it would be much better utilized for VMs. I have personally been happy with our current PE 2900 RAID 10 of 8 x 15k RPM SAS disks, which can start up 10 VMs simultaneously without choking. I'd say that with 4x SATA (or actually nearline SAS, same price though) you'll have plenty of capacity, but the drives can't handle as much simultaneous I/O as a decent real-SAS RAID set would.

That said, I just ordered the following on much the same basis (can't afford, or it's not worth, a separate drive system):

An R710 with 2x Xeon E5520, 72 GB of RAM, redundant power supplies, and 8x 300 GB 2.5" 10k rpm SAS (seek times/performance close to 3.5" 15k) in RAID 10. At this early stage our Dell rep didn't see an option for the "free" embedded ESXi; only the higher-level ESXi versions, Standard and up, were available. However, that's still quite a valid option, as it lets you run ESXi without using any of the RAID spindles for it.

We accept, however, that since we require full ESX (not ESXi) for our virtual lab anyway, we will set it up on the same RAID 10 array the VMs will be on. I will also investigate the option of kicking off a ramdisk during boot and loading/running ESX from the ramdisk...

Regardless, as disk count is a limitation of those 2U Dells, I consider having ESX on the same set as the VMs acceptable. I have a certain confidence in ESX's I/O scheduling after seeing multiple VMs go heavy on the same RAID array (those 8 disks on the PE2900), with each getting enough of a slice that every workload kept progressing and none starved.

Br,

Kalle
