VMware Cloud Community
Atamido
Contributor

Best practice suggestions for Dell hardware

We are going to be setting up our first virtualized production systems and I need some advice about which equipment/setup would be most appropriate. Our plan is to move existing backend applications that are all running on two servers onto about 10 virtual machines.

We are a Dell shop, so I was looking at getting two dual quad core PowerEdge 2950's and running VI 3.0.1 Standard or Enterprise on them. They would have 16GB RAM (might expand to 32GB when prices come down) and 80GB SATA in RAID 1 for the host.

The problem is that I'm not sure what is appropriate for storage. We don't want to spend too much, but at the same time we don't want to start down a dead-end path. At most, we might add one more Dell box in a year or so if it ends up working well. We are a small local government that is relatively close to hitting its maximum size, so we don't have to plan for too much growth. The applications themselves are each pretty low load, but are more on the critical side. We may also want to put a Windows file server in a VM that just serves up files over a regular SMB file share.

Ideally, whatever storage box we have would be arranged in a RAID6 with a hot spare, to absolutely minimize the chances of a critical failure. We don't have any network-attached storage, and we're not likely to ever get any more than this, so investing in fibre channel switches and cards seems like overkill for just two systems.

I don't HAVE to go Dell/EMC on the storage, but it is an easier sell from a support standpoint. We don't have much SAN experience, so support for setup and troubleshooting could be a concern. Also, if I recommend we drop >$10k USD on a product, it doesn't work, and the vendors all point fingers at each other, things could get real ugly for me.

For the storage, I was looking at the Dell|EMC AX150i iSCSI box. It only offers RAID5, but it seems like it might be an 'okay' box. Still, paying >$12k for something that is just 'okay' seems a bit odd. (Or maybe I'm just mystified by the cost of a SAN. Why does a SAN cost so much more than a decent server?)

1. Would an additional 2 port gigabit NIC (in addition to the 2 onboard) in each of the servers be all I need to get these servers connected to the SAN?

2. Are there any issues with this type of setup that an inexperienced person such as myself should be made aware of?

3. Could I just purchase this with the base 3 drives and then buy some more 500GB drives from somewhere else for half the price?

4. Are there other options I should be looking at?

15 Replies
Dave_Mishchenko
Immortal

Here's a related thread to look at:

http://www.vmware.com/community/thread.jspa?messageID=630096&#630096

1. Would an additional 2 port gigabit NIC (in addition to the 2 onboard) in each of the servers be all I need to get these servers connected to the SAN?

At a minimum you would just want to add one NIC port to dedicate to your iSCSI traffic, but it would be better to add 2 ports and configure a NIC team for redundancy. So you could create a team of one onboard NIC and one port from the 2-port NIC for VM traffic, and a copy of that for iSCSI traffic and the service console. If you plan to use VMotion in the future, another NIC could be dedicated to service console and VMotion traffic.
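To make that layout concrete, here is a small illustrative sketch (plain Python; the vmnic names are placeholders, not taken from a real host) of the two teams described above, with a check that each team spans both the onboard ports and the add-in card, so a single card failure can't take out a whole traffic type:

```python
# Illustrative NIC-team plan for the layout described above.
# The vmnic names are placeholders; only the pairing idea comes from the post.

ONBOARD = {"vmnic0", "vmnic1"}   # the two integrated ports on the 2950
ADD_IN = {"vmnic2", "vmnic3"}    # the two ports on the extra dual-port NIC

teams = {
    "VM traffic": {"vmnic0", "vmnic2"},
    "iSCSI + Service Console": {"vmnic1", "vmnic3"},
}

for name, nics in teams.items():
    spans_both_cards = bool(nics & ONBOARD) and bool(nics & ADD_IN)
    print(f"{name}: {sorted(nics)} -> survives a single card failure: {spans_both_cards}")
```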

2. Are there any issues with this type of setup that an inexperienced person such as myself should be made aware of?

I'd go with SAS drives in the 2950, as SATA won't match the performance. If you do go with SATA drives, make sure Dell guarantees that ESX will install and be supported on that config.

Have you done any capacity planning for what you expect your VMs to use for storage, CPU and memory? As a general rule, you can run 5-8 light VMs per processor core, and you're looking at 8 cores. Likewise on the memory.
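As a rough worked example of that rule of thumb (the figures below are the ones mentioned in this thread, not measurements):

```python
# Back-of-the-envelope headroom check for one dual quad-core 2950 with 16GB RAM.
# All numbers are the rule-of-thumb figures quoted in this thread.

cores = 2 * 4                          # two quad-core sockets
light_vms_per_core = (5, 8)            # "5-8 light VMs per processor core"
cpu_low, cpu_high = (cores * n for n in light_vms_per_core)

host_ram_gb = 16
service_console_gb = 1                 # assumed overhead for ESX itself
ram_per_vm_gb = 1                      # the planned 1GB per VM
ram_limit = (host_ram_gb - service_console_gb) // ram_per_vm_gb

print(f"CPU headroom: roughly {cpu_low}-{cpu_high} light VMs")   # 40-64
print(f"RAM headroom: roughly {ram_limit} VMs at 1GB each")      # 15
# With these numbers memory, not CPU, is the first limit you hit.
```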

3. Could I just purchase this with the base 3 drives and then buy some more 500GB drives from somewhere else for half the price?

4. Are there other options I should be looking at?

Atamido
Contributor

I'd go with SAS drives in the 2950, as SATA won't match the performance. If you do go with SATA drives,

Will the drives in the 2950 matter? I had assumed that once they load the ESX server, they don't really matter any more. Is that incorrect?

make sure Dell guarantees that ESX will install and be supported on that config.

Dell sells VMware as an installed OS on that system, so support should be fine.

Have you done any capacity planning for what you expect your VMs to use for storage, CPU and memory? As a general rule, you can run 5-8 light VMs per processor core, and you're looking at 8 cores. Likewise on the memory.

We were looking at having 1GB of RAM per VM, and 10-15 VMs. At the moment we want to be able to start up all of the VMs on a single box in a pinch, so 16GB. We can put in more RAM later if it's needed, and it will be cheaper by then. We are maxing out the cores since they are a small part of the cost and not something that is easy to upgrade later if we want to throw more at these boxes.

For space, none of the VMs are likely to be over 10GB, except the file server if we move it there, in which case we would want 500GB for that. A SAN with 1TB of usable space should be more than enough for the foreseeable future.
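A quick sanity check on those numbers (illustrative only; the per-VM sizes are the estimates above):

```python
# Quick check of the sizing estimates above (illustrative, not a measurement).

vm_count = 15
ram_per_vm_gb = 1
per_vm_disk_gb = 10          # "none of the VMs are likely to be over 10GB"
file_server_gb = 500         # the possible Windows file server VM

ram_needed = vm_count * ram_per_vm_gb
disk_needed = vm_count * per_vm_disk_gb + file_server_gb

print(f"RAM to run everything on one host in a pinch: {ram_needed} GB of 16 GB")
print(f"Disk needed: {disk_needed} GB; fits in 1 TB usable: {disk_needed <= 1000}")
```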

crazex
Hot Shot
Accepted Solution

We are also a Dell shop, and we are currently running 4x 2950's with the Intel X5355 quad core chips, 16GB of RAM, and 4 NICs. With a configuration like this you will run out of RAM resources much faster than CPU, especially if all of your VMs can be single vCPUs. We are running our ESX cluster on an FC SAN, so we are able to get away with 4 NICs, though I would prefer at least 2 more.

Since you are planning for iSCSI, I would start with no fewer than 6 NICs (integrated Broadcom plus a 4-port Intel PRO 1000). You will use 2 of the NICs for iSCSI traffic, 1 for the Service Console, 2 for VMs, and 1 for VMotion/SC failover.

If most of your servers aren't very CPU intensive, you may want to look into the Woodcrest dual cores to save some money, as you'll be able to cut costs here, and then use the SAS drives for the OS. The MTBF on the SAS drives is much better than SATA.

We are using a Compellent SAN, which gives us the ability to use both FC and iSCSI. I am not sure how cost effective this would be for your environment. I haven't heard many good things about the EMC AX150, but since you are planning for iSCSI, why not look into EqualLogic? Many people on these boards use them, and they seem to be tied in pretty closely with VMware. I can't imagine that a 1-2TB EqualLogic will be all that expensive, and the EQ will be much more scalable than the AX150.
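For reference, the six-port split described above laid out as a small sketch (the vmnic names are placeholders; only the 2/2/1/1 allocation comes from the post):

```python
# Sketch of the six-NIC allocation described above. The vmnic names are
# placeholders; only the split (2 iSCSI, 2 VM, 1 Service Console,
# 1 VMotion / Service Console failover) is taken from the post.

nic_plan = {
    "iSCSI": ["vmnic2", "vmnic3"],
    "Virtual machines": ["vmnic4", "vmnic5"],
    "Service Console": ["vmnic0"],
    "VMotion / SC failover": ["vmnic1"],
}

assert sum(len(ports) for ports in nic_plan.values()) == 6
for role, ports in nic_plan.items():
    print(f"{role:22s} {', '.join(ports)}")
```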

-Jon-
VMware Certified Professional
glynnd1
Expert

Atamido,

You mention just getting the base of three drives; that implies just one storage controller, which is not exactly a good option for critical applications. While the drives in an array may look identical to the ones you'll see in the local store, they typically come with custom firmware - I know that the firmware differs between the physically identical drives used in the Clariion and Symmetrix arrays. So that nixes buying "normal" drives. In the rest of the Clariion range it is possible to have hot spare drives, and I'm surprised this does not exist for the AX150. This is different from RAID6, as the hot spare can be used by any of the RAID groups.

For running just 10 VMs you can go with slower quad core CPUs, and memory is something that you can add in the future as needed, but I would spend the money on local SCSI drives.

If money is really tight, drop the SAN altogether and drop down to ESX Starter. Add some additional local disk capacity and then have the VMs replicated between the two hosts on a frequent basis (Vizioncore). That way, in the event of the loss of a host, for whatever reason, all the VMs exist on the other host and all you need to do is register and power them on - which could be scripted. If you do something like this you would need to size each server to be able to host the full load of all 10 VMs, but that is straightforward enough.
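The scripted "register and power on" step could look roughly like the sketch below. This is an assumption, not a tested recipe: it presumes the vmware-cmd utility available in the ESX 3.x service console, and the datastore paths are made up.

```python
# Hedged sketch of the scripted failover step described above: register the
# replicated VMs on the surviving host and power them on. Assumes the ESX 3.x
# service console's vmware-cmd utility; the .vmx paths are placeholders.

import subprocess

REPLICA_VMX_FILES = [
    "/vmfs/volumes/local-replicas/vm01/vm01.vmx",   # hypothetical paths
    "/vmfs/volumes/local-replicas/vm02/vm02.vmx",
]

for vmx in REPLICA_VMX_FILES:
    subprocess.run(["vmware-cmd", "-s", "register", vmx], check=True)  # register VM
    subprocess.run(["vmware-cmd", vmx, "start"], check=True)           # power it on
```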

Atamido
Contributor

I haven't heard many good things about the EMC AX150, but since you are planning for iSCSI, why not look into EqualLogic? Many people on these boards use them, and they seem to be tied in pretty closely with VMware. I can't imagine that a 1-2TB EqualLogic will be all that expensive, and the EQ will be much more scalable than the AX150.

They are still pretty expensive boxes. Again, I am mystified as to why these are all so expensive.

I could get another Dell PowerEdge 2950 with their fastest dual core CPU and 6x 300GB 15k RPM SAS drives with an OS for about $10k. Add on SANmelody to make it an iSCSI device, and I'm suddenly paying less than an AX150i of the same size, but this has a bigger cache and much faster drives. Plus, I can repurpose the SAN to a server later if needed.

Am I missing something here?

ngrundy
Enthusiast

I work in an environment where we use Dell servers for x86 work.

For the config you talk about we run 2950's, as they offer the PCI expansion needed. Make sure you have an absolute minimum of 4 gigabit NICs installed in the chassis. Use one for the service console, another for VMotion, and the remaining two for production data. If you have a networking group that isn't anal-retentive, have them run you a PortChannel and deliver an 802.1q trunk to your server if you have more than one VLAN on the network.

We run FC-AL SANs for storage. We had a look at iSCSI as an option, and while it is compelling, the FC infrastructure was only a few thousand more in the end once you factored in the extra network switches (Cisco 3750's) compared to McData 4400's, in the case of the project we were doing.

While we are a Dell shop we don't use any of their storage equipment. Long story short, when we went to tender for SAN gear Dell/EMC put a CX300 up against a Hitachi Data Systems 9570V. To compete on capacity and capability grounds they would have needed to offer a CX700 at the time.

We have recently purchased an AMS200 with 6x 300GB FC-AL drives (1.2TB RAID5) plus a hot spare, and a second shelf with 15x 500GB SATA drives (2x 3TB RAID5) plus a hot spare.

If you don't have the budget to stretch to something like an AMS200, then I can only suggest looking at the EqualLogic platform. I would have, but I can't find a vendor in Australia. :(

If you look at the HDS arrays, I'd suggest bypassing the WMS100 unless you know for a fact that you will never be doing work like databases or Exchange/Notes mail systems. SATA drives just can't deal with the workload of these applications.

mcwill
Expert

I could get another Dell PowerEdge 2950 with their fastest dual core CPU and 6x 300GB 15k RPM SAS drives with an OS for about $10k. Add on SANmelody to make it an iSCSI device, and I'm suddenly paying less than an AX150i of the same size, but this has a bigger cache and much faster drives. Plus, I can repurpose the SAN to a server later if needed.

Am I missing something here?

No, we are using SANmelody and it's working very well for us. So well, in fact, that I'm looking into their mirroring options, but that's another story.

A couple of suggestions though...

a) Drop the top-of-the-line CPU for a cheaper variant. We have it running on a Xeon 5130 and that handles the load without any trouble. I would go for the cheapest dual core that has a 1333MHz FSB for memory speed.

b) If you have the rack space, look at a rack-mounted 2900 instead. Granted, it is 5U instead of 2U, but it is cheaper than the 2950 and it has 10 3.5" drive bays.

Atamido
Contributor

b) If you have the rack space, look at a rack-mounted 2900 instead. Granted, it is 5U instead of 2U, but it is cheaper than the 2950 and it has 10 3.5" drive bays.

I didn't even realize the 2900 could be rack mounted. Good call.

1. Do you know if the flex bay drives 9 and 10 can be part of the same RAID array?

2. Would you suggest using the onboard PERC 5/i RAID controller for the whole thing, or getting one of the secondary RAID controllers for handling everything except the OS?

AndrewJarvis
Enthusiast

Plus, the 2900 gives you a shedload of PCIe slots and more DIMM bays, so cheaper, smaller DIMMs are an option.

mcwill
Expert

1. Do you know if the flex bay drives 9 and 10 can be part of the same RAID array?

I believe they can.

2. Would you suggest using the onboard PERC 5/i RAID controller for the whole thing, or getting one of the secondary RAID controllers for handling everything except the OS?

I would use the PERC 5/i for it all. It appears to be a fairly high-powered LSI controller; what I don't know is whether it comes with the battery backup option or if that has to be ordered separately.

CoreyIT
Enthusiast

A little off-topic here, but I would like to see a 2900 rack-mounted. I know they have the rack mount kit; most tower-to-rack conversion kits come with new side panels and face plates to re-orient the front panel and include the shoulder nuts for the rails.

It would be interesting to see one in a rack.

Atamido
Contributor

Well, if we get the 2900 for the SANmelody box (for the extra drive bays) and for the VMware boxes (for the extra DIMM slots), I'll send you a picture. They are 5U each though, and 15U is a lot of space in the rack.

CoreyIT
Enthusiast

Yeah, we have about 25 of them in our company, spread out across our remote sites. Just thought it would be interesting to see the faceplate. I'm guessing they have face plates with the diagnostic LCD and instead mimic the 6850's.

glynnd1
Expert

Corey, we recently got the kit to convert a regular 2900 to rack mount - long story, but someone didn't notice it wasn't a rack mount until it arrived.

Part of the work does involve moving the LCD panel so it is in the right orientation, along with a new front panel and lots of screws.

CoreyIT
Enthusiast

Does the replacement face plate say 2900 on it? I am guessing the LCD screen gets oriented to align in the upper left like the 1950/2950/6950?
