VMware Cloud Community
Hoesi
Contributor

Dell R300 and ESXi

Hi,

I am interested in entering the world of virtualization for a non-profit community I run, and I am currently in the process of ordering new hardware.

We'd like to order a couple of 1U servers from Dell, and our current candidates are the R300 and the PE 1950. The latter is officially supported for ESX/ESXi, but it is also significantly more expensive. Another factor with the PE 1950 is that I am under a strict 0.5 amp (240 V) limit on power consumption. I know the R300 fits under that, but I am not sure the PE 1950 will, even with a single L-series (50 W) quad-core processor option. Logically I would assume so, but I have heard it is a heavyweight in terms of power use.
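
For reference, here is the back-of-the-envelope arithmetic I am working from (the PSU efficiency figure is my own assumption, not a Dell number):

# Quick sanity check on the 0.5 A / 240 V limit (illustrative numbers only).
volts = 240.0
amp_limit = 0.5
budget_w = volts * amp_limit             # 120 W total at the wall

cpu_tdp_w = 50.0                         # the L-series quad-core option
psu_efficiency = 0.80                    # assumed; varies by PSU and load
cpu_wall_w = cpu_tdp_w / psu_efficiency  # ~63 W at the wall at full TDP

print(f"budget: {budget_w:.0f} W, CPU at full TDP: ~{cpu_wall_w:.0f} W at the wall")
print(f"left for board/RAM/disks/fans: ~{budget_w - cpu_wall_w:.0f} W")

So the question is really whether the rest of a PE 1950 can live on what remains.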

Anyway, to get to the point: does anyone have experience running ESXi on Dell R300s? Will it work? Are there any issues? Does it perform well?

I need this for real-time, latency-sensitive applications that are quite resource intensive, with heavy CPU and network use. Game servers would be a good example. Does anyone have experience running any form of game server under ESXi?

Any help would be much appreciated! Thank you.

Kind regards,

Hoesi.

s1xth
VMware Employee

Wow, this is quite the configuration you are trying to do. Is it possible? Yes. But hear me out.

Without going too much into virtualization basics, let me throw out a couple of things that jump out at me. The Dell R300/1950 is only a TWO hard disk server, and anything resource intensive or latency sensitive needs more than two disks. I am not sure I understand the power requirement though, I mean 0.5 amps?! Yikes. The R300 may IDLE at 0.5, but at full usage across the board you're looking at 0.8-1 amp according to my readings; this also depends on the hardware you choose. I would go with an E-series Xeon for energy efficiency and energy-efficient power supplies to try to stay under the limit, but it's going to be tough. Don't forget that spinning hard drives at full load, plus controller cards and fans, can pull quite a bit if this box is going to get hit like you say it will.

As far as the R300 is concerned, all of the hardware meets the HCL for ESXi; heck, there are a lot of people here with it installed on whiteboxes. I have it running on an R200, and the boards are similar; the hardware across Dell boxes is pretty similar anyway (RAID controllers, NICs, video). The biggest concern I would have is disk latency; 2x 15,000 RPM 300 GB drives would be my first and only choice in this configuration.

The other option you have is simply getting a 2950, which will shortly be replaced by the new R705s (mmmm, Nehalem!), getting E-series processors, loading up on RAM, and getting either six 15K or eight 10K 2.5-inch drives.

If you have more questions, shoot me a PM.

sixth

http://www.virtualizationimpact.com http://www.handsonvirtualization.com Twitter: @jfranconi
Hoesi
Contributor

Hi s1xth,

Thank you for your reply.

When you go "yikes" on the hard drives, is this because virtualization puts significantly more stress on the hard drives, or because it requires a lot more responsiveness to perform well? I already have a server environment with both normal 7,200 RPM SATA II disks and 10,000 RPM disks, and these are perfectly fine for the services I run in a normal non-virtualized setup (traditional OS installed on disk, communicating directly with hardware). For the record, I am planning to fit the servers with an SSD offering 200 MB/s read, 160 MB/s write and an access time of 0.10 ms (versus something like 3-3.5 ms for a 15K RPM SAS drive). Do you think this will be a bottleneck, considering that even a 7,200 RPM disk is adequate without virtualization?

I am only going to run two virtual servers on each machine; the virtualization is merely to make disaster recovery easier and to ease and improve server management. When I think about it, I guess what you mean is that for every virtual machine I add, there is an extra OS accessing the disks, causing more disk use and more overhead compared to running the same services on a single normal OS? What is most critical here in that case: the actual read/write bandwidth of the disks, or the random access time? The SSD is far superior on access time, but it has about half the maximum transfer rate on large files. These disks are expensive, so I would ideally want to cope with one, but do you think this will become an issue?
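
To put rough numbers on the access-time side, this is the crude comparison I have in mind; it ignores queue depth, caching and controller behaviour, so treat it as illustrative only:

# Crude random-I/O estimate from average access time alone.
def iops_from_access_ms(access_ms):
    # Roughly one random operation completes per access time.
    return 1000.0 / access_ms

ssd_iops = iops_from_access_ms(0.10)   # ~10,000 ops/s
sas_iops = iops_from_access_ms(3.5)    # ~286 ops/s

print(f"SSD:     ~{ssd_iops:,.0f} random ops/s")
print(f"15K SAS: ~{sas_iops:,.0f} random ops/s")

If two guest OSes turn mostly-sequential access into random access, that ratio seems to matter far more than the sequential transfer rate.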

When it comes to power usage, I must underline that I am talking about 0.5 amps on a 240 V (European) circuit, as mentioned in the original post. From past experience the R300 fits under this (I already have a few), and this is also accepted by my host, who use these servers themselves.

Unfortunately I am not able to take any of the current R300s out of "production" to test whether ESXi will work properly.

Are you sure the R300 hardware meets the HCL? You say it is very similar to the R200, but that is not really the case as far as I know. The R300 has an Intel 5X00 chipset with Socket 771, FB-DIMM memory, and what I would call more server-grade hardware. The R200 has an Intel 3X00 chipset, an LGA775 socket, cheaper memory, etc. They are both great servers, but fairly different in hardware, I believe. Does anyone here use the R300 specifically and can confirm whether it works well or not?

As space is both limited and costs a lot of money, a server bigger than 1U is unfortunately not an option for me, so that rules out the 2950 or R705. I also need the servers very soon, so I don't think waiting for Nehalem processors is going to be an option. That is still months away in terms of availability from Dell, right?

Anyone else is of course also welcome to answer or provide insight on any of the above. Your help will be greatly appreciated.

Kind regards,

- Hoesi.

Hoesi
Contributor

Anyone, please? :)

Does ESX/ESXi put a lot more stress on HDDs, and is the mentioned SSD likely to cope with it?

What is more important for the HDDs, random access time or transfer rate/bandwidth?

Do you run ESX or ESXi on a Dell R300? What are your experiences?

Your help is much appreciated. Thanks!

s1xth
VMware Employee

Hoesi,

I will try to answer all of your questions; if I miss any, just ask again! :)

First, let me say this: there is NO REASON why ESXi should not run on the R300. When I said the R300 and R200 were similar, I was referring to the RAID controllers, NICs, hard drives and processors. Yes, the boards are different and use different sockets, but these are all on the whitebox HCL posted here: http://www.vm-help.com/index.html. There isn't any hardware inside the R300 that is flat-out unsupported. With ESXi, the biggest components you need to worry about are the RAID controllers, which are SAS 6/PERC 6 cards used across ALL PowerEdge lines; if they work in one, they work in another. The processors should be fine because they are Intel and have good compatibility; again, they are listed on the whitebox HCL as working. I would have a hard time believing the R300 would not work with ESXi, as there isn't a single component I can think of that will give you a problem.

As far as SSDs go, I would not use them, personally. Does Dell even sell SSDs with the R300? I wasn't aware of this. I looked at the configuration and the only options are 3.5-inch drives, so you can't even go with lower-voltage 2.5-inch drives. I am not aware of anyone even running ESXi on SSDs, or what VMware's stance on SSDs is; maybe someone else can chime in on that.

As far as what is more important regarding hard drives, it comes down to what you are running on the server. Anyone on here will agree that both are important: file servers doing large transfers need better transfer rates, while a SQL box needs better access times. What RAID levels are you planning on using? Your decision comes down to what you are running; 2x 15K 300 GB SAS drives are good. I have a PE 1950 with 2x 15Ks in RAID 1, one quad-core processor, 16 GB RAM and six Windows XP VMs doing back-end job processing, and they run fine. Granted, they don't get hit hard on the hard drive side, but they run well.
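
If it helps, here is a rough way to think about the two-drive RAID trade-off; this is a simplified model (write-back cache and the controller itself will move these numbers around), and the per-drive IOPS figure is just a ballpark:

# Simplified effective-IOPS model for two-drive RAID 0 and RAID 1.
def raid_iops(per_drive_iops, level):
    if level == 0:   # striping: reads and writes spread across both drives
        return 2 * per_drive_iops, 2 * per_drive_iops
    if level == 1:   # mirroring: reads from either drive, writes hit both
        return 2 * per_drive_iops, per_drive_iops
    raise ValueError("only two-drive RAID 0/1 modeled here")

drive = 180          # ballpark random IOPS for a single 15K SAS drive
reads, writes = raid_iops(drive, 1)
print(f"2x 15K RAID 1: ~{reads} read IOPS, ~{writes} write IOPS")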

To sum it up, I have a hard time believing ESXi won't run on the R300. I have it installed on non-HCL Dell hardware working fine: 850s, 860s (with P4s! test boxes), dual-cores and quads not even listed, and they all work fine. You have to remember Dell uses a lot of the SAME hardware across the board; the 5000 chipset is in the 1950, I believe, with another variant in the 2950, and those work fine.

-sixth

http://www.virtualizationimpact.com http://www.handsonvirtualization.com Twitter: @jfranconi
Hoesi
Contributor

Is there any particular (technical) reason why you would not use an SSD? Dell does not supply them with the servers, at least not as standard, but that does not matter. The bays are 3.5-inch and the disks are 2.5-inch, but that is easily solved with an adapter such as the ICY DOCK MB882SP-1S-1B. The R300 HDD bays are cabled, so it won't be a problem anyway, but even with hot-swap bays the adapter would fit perfectly. The SSD's read/write performance beats SATA drives by a mile; it has no moving parts, less chance of failure, is more energy efficient, and has an access time far superior to even the fastest SAS drives. SSDs use the SATA interface and, afaik, communicate with the controller like any other SATA storage device, so I cannot see any potential compatibility issues here. Do you have anything to suggest otherwise?

It's an application server, and the only disk access will be for the OS (Windows Server 2003) with pagefiles etc., some minor file-based database work, and some basic data logging (maybe 3-4 MB per 30 minutes for all the applications in total). No major file downloads or anything like a download server; probably more like a small database server in terms of disk access.
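
Even rounded up generously, that logging load is tiny:

# Sustained write rate of the logging, rounded up.
mb_per_30_min = 4
bytes_per_sec = mb_per_30_min * 1024 * 1024 / (30 * 60)
print(f"~{bytes_per_sec / 1024:.1f} KB/s sustained")   # ~2.3 KB/s

So for my workload it is clearly random access on small reads and writes that matters, not bandwidth.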

Anything critical will be stored in databases on separate machines (with redundancy and backups), and the content on the application servers is expendable. RAID for redundancy is therefore not cost effective for my use, and striping is something I want to avoid unless absolutely necessary (it increases the risk of failure and uses more power).

When you've installed on non-supported machines, have you had to follow any particular procedures? I'm limited to working remotely, using a Dell KVM-over-IP unit, plus datacenter remote hands if needed.

Thank you for the help so far, s1xth. :)

If anyone has first-hand experience with ESX/ESXi on the Dell R300s, your input would also be valuable!

s1xth
VMware Employee

The only reason I would not use an SSD is that I would want to know what VMware thinks of them being used with ESXi. If you are using one for just the installation and not for local storage of the VMs, then you should be fine, considering the embedded versions of ESXi run from flash drives, so an SSD should be more than fine. I was just stating that I don't THINK VMware has certified SSDs for use. Can you use them? Most definitely; that decision is up to you. But if you want to save some money I would just get the SAS drives; they are still very fast, and we are talking access-time differences that (in my eyes) aren't that big of a deal for the virtual machines you are running anyway. SSDs haven't been tested enough in the enterprise for ME personally to go ahead and say, USE THEM NOW!

As far as the other machines that are not 'certified' or on the HCL, I did not do anything special to get them to run. I just booted up the disc and installed, like on any other machine. Since Dell uses PERC cards across its platforms, compatibility is very good.

Glad I could be of some help. Maybe someone else can chime in on the R300 with ESXi, but it should be perfectly fine.

http://www.virtualizationimpact.com http://www.handsonvirtualization.com Twitter: @jfranconi
Hoesi
Contributor

I can't think of a reason why VMware should have any logical objection to using SSDs for VM local storage. As long as the drive communicates with the controller like any other SATA storage device, there would not be any compatibility issues with ESX/ESXi from a hardware perspective. I.e. it would be no different from using any other SATA drive, apart from a much faster access time, faster read/write speeds, and better reliability. If you get a motor or head failure in a spindle-based drive, that drive is rendered useless, and you face a complete loss of data unless you send it in for costly recovery. With an SSD you have no moving parts and a much smaller chance of failure; if sectors of the flash memory start to fail over time, the drive will still be usable until you can replace it, and you can still recover large portions of the data. So, with drives that use the SATA interface, are faster, and have less chance of failure, why would VMware oppose their use? An even better question: why should people care, as long as there is no technical reason why it would not function well? ;)

SSDs have not become widespread yet because the technology is new and still in development, so the high-performance drives have been very expensive, and the cheaper ones had problems with very low read/write speeds. Companies such as OCZ are now releasing drives with read/write speeds of 200/160 MB/s and an access time of 0.10 ms at a very reasonable price. A 60 GB drive at the mentioned spec is just below £200, while a 73 GB 15K SAS drive costs £150 plus around £100 for the required controller card. If the SSD lives longer, uses less power and outperforms the SAS drive in the areas that matter for your applications, the choice should be pretty simple. I intend to give it a shot, rather than sit and wait for others to make the discoveries for me some time in the future.
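
For what it's worth, the cost arithmetic from those quotes:

# Price-per-gigabyte comparison from the quotes above (GBP).
ssd_cost, ssd_gb = 200.0, 60.0
sas_cost, sas_gb = 150.0 + 100.0, 73.0   # drive plus the controller card

print(f"SSD:     £{ssd_cost / ssd_gb:.2f}/GB")   # ~£3.33/GB
print(f"15K SAS: £{sas_cost / sas_gb:.2f}/GB")   # ~£3.42/GB

At near-identical cost per gigabyte, the access-time advantage comes essentially for free.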
