RoblLaw80
Contributor

Advice on hardware for virtualization

Well, I've been researching the possibility of converting our existing physical servers into virtual servers, and I've come up with a possible solution. So I figured I'd post it and see what input, if any, the community has.

Our current physical setup is this:

One Exchange/file server (30 Exchange users). Perfmon shows some very erratic disk access, usually below 300 on the Disk Transfers/sec counter, but with occasional jumps above 500.

One domain controller (35 users here, with 15 more users at another site with their own DC).

One SQL server (currently with almost no disk access, but that will change as we migrate to our new line-of-business application; no more than 50 users max, though).

One Terminal Services server (15 users currently, but it will grow by a few more soon). We've outgrown the CPU capacity of this server and definitely need to replace it with something.

All of these servers are running older single-core 3 GHz Prescott-era processors.
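Those perfmon numbers can be turned into a rough aggregate IOPS demand for the consolidated hosts. Here's a back-of-the-envelope sketch: only the Exchange peak (~500 transfers/sec) is actually measured above, so the figures for the other three servers are illustrative guesses, and the per-spindle numbers are common rules of thumb, not vendor specs.

```python
# Rough consolidation sizing from the perfmon figures above.
# Only the Exchange/file server peak is measured; the other
# per-server numbers are assumptions for illustration.
peak_iops = {
    "exchange_file": 500,      # measured perfmon peak (Disk Transfers/sec)
    "domain_controller": 50,   # assumed: light AD traffic
    "sql": 100,                # assumed: will grow with the new LOB app
    "terminal_services": 150,  # assumed: 15+ interactive users
}

total_iops = sum(peak_iops.values())

# Rule-of-thumb per-spindle throughput for small random I/O.
IOPS_10K_SAS = 140
IOPS_15K_SAS = 180

drives_10k = -(-total_iops // IOPS_10K_SAS)  # ceiling division
drives_15k = -(-total_iops // IOPS_15K_SAS)

print(f"Aggregate peak demand: {total_iops} IOPS")
print(f"~{drives_10k} x 10K or ~{drives_15k} x 15K spindles (reads only, no RAID write penalty)")
```

Even with generous guesses, the combined peak lands around 800 IOPS, which is a useful yardstick when comparing shared-storage options later in the thread.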

Proposed new hardware:

2x Dell PowerEdge 2950 III servers

Each with two 3.0 GHz Xeon X5450 processors

16 GB RAM

PERC 6/i SAS RAID controller

Four 73 GB 10K RPM 2.5 in SAS drives in RAID 10

Two Intel PRO/1000 PT dual-port NICs
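For sizing purposes, the four-drive RAID 10 set above works out as follows. This is a quick sketch; the per-drive IOPS figure and the 70% read mix are assumptions, and the standard RAID 10 write penalty of 2 is applied.

```python
# Usable capacity and effective IOPS of a RAID 10 set.
# Assumptions: mirrored pairs striped together; RAID 10 write
# penalty of 2 (each logical write hits both mirror members).
def raid10(n_drives, drive_gb, drive_iops, read_pct=0.7):
    usable_gb = n_drives // 2 * drive_gb
    raw_iops = n_drives * drive_iops
    # Effective IOPS for a mixed workload: reads are served by
    # either mirror member, writes cost two backend I/Os.
    effective = raw_iops / (read_pct + 2 * (1 - read_pct))
    return usable_gb, raw_iops, round(effective)

# Four 73 GB 10K SAS drives at ~140 IOPS each (rule of thumb):
usable, raw, eff = raid10(4, 73, 140)
print(f"{usable} GB usable, {raw} raw IOPS, ~{eff} effective IOPS at 70% reads")
```

So the local set gives roughly 146 GB usable, which is plenty if the VMs themselves end up living on shared storage.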

We'll run the VMware Infrastructure Standard High Availability (HA) Acceleration Kit on both machines, with the VMotion option if it's possible to add that, and create VMs for each of our servers, plus a few additional VMs for low-usage applications.

The part I'm not really sure about is the iSCSI storage device. Our rep at CDW recommended a StoreVault (NetApp) S500 with 10 drives. Does anyone have any experience with the performance of this device?

Thanks for the input.

4 Replies
EllettIT
Enthusiast

My company is a little larger than yours (200+ users), and we have virtualized 13 of our servers, which run the gamut of Exchange 2003, SQL 2000, AD controllers, app servers, file servers, print servers, etc., with no issues (well, none that are related to VMware). All of these are still running on the older GSX software, with plans to move it all to VI3 and a SAN this summer.

With that said, I'm no expert with this stuff; however, if you do plan on using HA and VMotion, you'll need some kind of consolidated storage (iSCSI, NAS, Fibre Channel, etc.) to make that happen. I'd take a look at the SAN HCL, start contacting vendors, and see what turns up. I'm probably going to go with EqualLogic, but I've also had good conversations with NetApp about the FAS2020. Think about how much usable storage you need and what features you want from your SAN to help narrow the choices down. Also, if you find a brand and model you like, do some searches here to see what people's experiences with it have been.

As far as your proposed hardware is concerned, it looks good and should give you lots of room to grow. It's overkill for what you're wanting to virtualize, but if you can afford it, I'd leave it as is. You'll need multiple NICs in each server to allow for your SAN, service console, VMs, etc., so make sure to include those. The consensus on the number of physical ports you need seems hard to nail down, but I think most folks would say at least six (so either two dual-port NICs or a quad-port NIC in addition to the onboard ports). Also, since your VMs are going to be running from a SAN, I would go with two 15K drives per server in a RAID 1 config, as the SAN is where drive/RAID performance will matter.

RoblLaw80
Contributor

Thanks for the input. I was just looking at dropping the processors in those two machines to two 2 GHz quad-cores; that knocks about $2k off each server's price. Unfortunately, I've been put into the position where we HAVE to do something with our servers in the next few months, so this is just one of the possible solutions I've been working on. However, I think it's the best solution for us long-term, so I have to make it as attractive as possible to upper management.

I talked with our CDW rep about an EqualLogic unit, but unfortunately I'm pretty sure its $30k+ price tag would make it very unlikely to be accepted.

On another note, up to this point I had been assuming the easiest way to handle licensing for the MS server OS is to get a Datacenter license for each physical processor involved. Am I correct in that thinking?

EllettIT
Enthusiast

I think you'd be just fine with 2 GHz quad-core processors, and you might be able to get by with 8 GB of RAM; that would reduce your costs as well. If you don't want HA and VMotion, you could just use local storage and not get a SAN.

If you're a Dell shop, then you could look at the MD3000i; it just got added to the HCL. I priced one out with a single controller and 300 GB 10K SAS drives (bare minimums on the other options) at around $18,000. That might be an option. Then again, iSCSI is iSCSI, so you could just build a box of your own (something like www.aberdeeninc.com, maybe, or SANmelody?) if you don't mind not being on the HCL.

RoblLaw80
Contributor

Thanks for the tip! With twelve 146 GB 15K RPM drives, the price comes out to less than the StoreVault unit. We really won't require huge storage capacity; I just want to make sure it performs well in our production environment.
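As a sanity check on that twelve-drive config, a quick sketch comparing it to the measured Exchange peak from the first post (180 IOPS per 15K spindle is a rule of thumb, not a vendor spec, and this ignores RAID write penalty and controller overhead):

```python
# Back-of-the-envelope check: can twelve 15K spindles cover the
# ~500 transfers/sec Exchange peak plus the other consolidated VMs?
N_DRIVES = 12
IOPS_PER_15K = 180     # rule of thumb, not a vendor spec
MEASURED_PEAK = 500    # Exchange perfmon peak from the first post

raw_iops = N_DRIVES * IOPS_PER_15K
headroom = raw_iops / MEASURED_PEAK
print(f"{raw_iops} raw IOPS, ~{headroom:.1f}x the measured Exchange peak")
```

Roughly 4x headroom over the single biggest measured workload, which suggests spindle count shouldn't be the bottleneck even after adding RAID overhead.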
