VMware Cloud Community
rlaurnoff
Contributor

Dell users - what configuration are you using

For all you Dell shops out there using either Server or ESX Starter, would you mind sharing your server specs/configuration and how you are using either Server or ESX Starter?

19 Replies
canadait
Hot Shot

Hi,

We aren't using Starter, but if I were, I would be looking at a 2 x quad-core 1950, 2950, or 2970 (AMD) with 8 GB of RAM. I personally would not run Server, as ESX simply rocks!

femialpha
Enthusiast

I have several 2950s with 16 GB of RAM running ESX Enterprise.

rlaurnoff
Contributor

What type of RAID configuration and controller are you using?

canadait
Hot Shot

We are using RAID 1 for ESX, and everything else is on the SAN (Enterprise license).

I would suggest that you dedicate a couple of disks for ESX using RAID 1 and then use the rest of the drives in either a RAID 5 or a RAID 10.

It is going to depend on which server you are looking at buying.

What are your plans for the servers and what business need are you trying to solve?

rlaurnoff
Contributor

This is the earlier thread I started looking for pointers, as this is a new project we are looking at.

http://www.vmware.com/community/thread.jspa?threadID=88417&tstart=0

There will be no SAN option unless everything works the way we hope it will. Initially we want to virtualize some PE 750/850s, as we have a number of them, and dropping them down to 1 or 2 PE1950/2950s would free up some much needed space in our data center.

Most systems are 2000 Pro or 2003 Server and are very underutilized. I would say 9 out of 10 systems do not store any data locally. We just want to spec a 1950 or 2950 that will allow us to run as many VMs as possible, but with some redundancy in the server hardware and possibly a plan in the event of the hardware crashing. The next step would be to have identical hardware at an offsite data center for DR purposes.
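A rough way to frame the consolidation question above is to work backwards from host RAM. This is just a back-of-envelope sketch; the per-VM RAM figure, Service Console overhead, and headroom factor are all assumptions, not numbers from the thread.

```python
# Back-of-envelope consolidation sizing (illustrative numbers only):
# how many lightly loaded VMs might fit on one host, going by RAM?

def vms_per_host(host_ram_gb, sc_overhead_gb, vm_ram_gb, headroom=0.9):
    """Estimate VM count per host, reserving RAM for ESX and the
    Service Console and keeping some headroom for spikes/failover."""
    usable = (host_ram_gb - sc_overhead_gb) * headroom
    return int(usable // vm_ram_gb)

# e.g. a 2950 with 16 GB RAM, ~1 GB reserved for ESX/Service Console,
# and 512 MB per underutilized 2000 Pro / 2003 Server VM:
print(vms_per_host(16, 1, 0.5))   # -> 27
```

CPU is rarely the bottleneck for underutilized boxes like these, so sizing by RAM first is usually the more conservative estimate.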

canadait
Hot Shot

Looks good. You just have to determine how much disk space you need on each host and load up with drives to satisfy that requirement.

For my first local ESX server I had two 73 GB drives in RAID 1 for ESX and at least three 146 GB drives for the VMs. Maybe add in a hot spare as well. Should work great.
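The usable capacity from a layout like that is easy to sanity-check. A minimal sketch of the arithmetic for simple single-span RAID sets (drive counts and sizes here are examples, not a recommendation):

```python
# Usable-capacity math for the local disk layout suggested above:
# a RAID 1 pair for ESX, remaining drives in RAID 5 or RAID 10.

def usable_gb(n_drives, drive_gb, level):
    """Usable capacity for a simple single-span RAID set."""
    if level == "raid1":
        return drive_gb                    # mirrored pair
    if level == "raid5":
        return (n_drives - 1) * drive_gb   # one drive of parity
    if level == "raid10":
        return (n_drives // 2) * drive_gb  # mirrored stripes
    raise ValueError(level)

print(usable_gb(2, 73, "raid1"))    # ESX install pair -> 73
print(usable_gb(3, 146, "raid5"))   # VMFS datastore   -> 292
```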

I personally like the 2950 or 2970 as they have the most capacity in a 2U format.

rlaurnoff
Contributor

What are you using as the controller, PERC5/i ?

canadait
Hot Shot

I think that is your only choice if you want SAS and RAID. It is a great choice!!

scarpozzi
Contributor

3 Dell 2950s, each configured with a total of 4 physical network ports, 1 DRAC, and 1 HBA to connect to our Dell/EMC CX320. The network ports are configured as follows:

Port 1 = Service Console

Port 2 = VMotion

Ports 3&4 = VM Networks

They have whatever Dual Core processors we could get at the time and 8GB of RAM in each.

When we order additional hardware, we will order 3-4 servers at a time as our infrastructure grows to continue VMotion support.

thechicco
Enthusiast

Several 2900's.

2 x Quad-Core 2.66's

24GB RAM

2 x 73GB SAS 15k RAID-1 (esx)

4 x 500GB SATA Western Digital RAID-10 on an Adaptec 4800SAS (local VMFS)

QLogic 4052C iSCSI HBA (Equallogic SAN)

Intel Pro/1000 Quad Port PCIe

4 x Ports - VM Network / Service Console

2 x Ports - VMotion

ESX Enterprise.

ngrundy
Enthusiast

Cluster 1:

4x 6850's 4x3.66GHz Single Core / 16GB RAM

Each box is fitted with two QLE2360 HBA's and 2 Dual Port GigE cards.

These boxes are connected to a Hitachi 9570V for storage.

This cluster currently has 70 VM's running on it.

Cluster 2:

2x 6950's 4x2GHz Dual Core / 32GB RAM

Each box is fitted with two QLE2460 HBA's and two Dual Port GigE cards.

Again connected to our 9570V

This cluster is for 64-bit VMs and is only now being set up. We expect around 40 VMs per box.

In a smaller site (~7k users)

3x 2950's 2x1.6GHz Dual core C2D / 16GB RAM

Each box is fitted with two QLE2460 HBA's and a Dual port GigE card

This is connected to a second Thunder 9570V

Cluster is capable of around 40VM's

In our smallest site to have VMware deployed to date:

2x 2950's 2x1.6GHz Dual Core C2D / 16GB RAM

Each box is fitted with two QLE2460 HBA's and a Dual port GigE card

This is connected to a second Hitachi AMS200

Cluster is capable of around 15VM's

Disk config wise, we use RAID 5 4+1 sets in the Thunders, and the AMS is running a RAID 10 disk set. Internally each box has 2 x 73 GB drives on a PERC4/i or PERC5/i, depending on hardware age.

Networking wise, our network core is comprised of twin Cisco 6509s. The onboard NICs are used as active/passive Service Console NICs, one to each chassis. The four PCI-card-based NICs are used for VMs + VMotion. They run active/active, with one port per PCI card going to each of the chassis. The links are channel-bonded and run 802.1q trunks.

We don't use quad-core CPUs, as they are a waste of CPU resource and cash for us. An example: our 6850 cluster with 70-odd VMs is running 25% CPU on each box. On the 6950s we'd expect the same sort of situation, 25% CPU usage, given twice the number of cores and twice the amount of RAM. Our biggest problem is that we can't get enough RAM into the boxes at a reasonable rate. A 6950 as configured above will set us back $22k; a 6950 with the full 64 GB of RAM comes in at $90k. 4 GB DIMMs HURT.

bevirtual
Contributor

Speaking of configurations, did you have to use the setup assistant or just the Ctrl-M configuration utility to set up your storage? I have been looking at the Dell website to find what they recommend for local storage setup prior to installing ESX, but I cannot find documentation on it. If Windows was installed prior to ESX, then there are some funny partitions out there, and I don't know if they should be erased or left alone.

Thanks.

glynnd1
Expert

TheChicco,

Why so much local VMFS? I can see some being used for ISOs etc. and an MSCS VM, but not a TB.

I do like the 2900 for the extra RAM capacity without the 4 GB DIMM penalty, but it does come with a 5U penalty, which may be an issue for some.

ngrundy,

I don't think the quad cores are a complete waste. There is an $800 cost to go from dual-core 2.66 to quad-core 2.66, and while the doubling in cores does not provide a doubling in CPU power, it does provide about a 50% boost. Now if you don't have the additional RAM to make use of this CPU power, then it is to a degree wasted. Though I would imagine that ESX spends less time moving VMs on and off the cores.
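One way to weigh that trade-off is cost per unit of usable CPU. The $800 delta is from the post; the baseline server price and the ~50% boost are illustrative assumptions:

```python
# Sanity-checking the dual-core vs quad-core argument with
# illustrative numbers. Only the $800 upgrade delta comes from
# the thread; the rest are assumptions for the sketch.

dual_core_price = 8000    # hypothetical dual-core 2950 config price
upgrade_delta   = 800     # quoted cost to step up to quad-core
perf_boost      = 1.5     # ~50% more usable CPU, per the post

cost_per_perf_dual = dual_core_price / 1.0
cost_per_perf_quad = (dual_core_price + upgrade_delta) / perf_boost

print(round(cost_per_perf_dual))   # -> 8000
print(round(cost_per_perf_quad))   # -> 5867
```

On these numbers the quad-core wins on price/performance, but only if there is enough RAM in the box to actually load those extra cores, which is exactly the caveat above.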

Dell has a short VMmark results document. There's not much data in it, but it uses the 2950 with dual cores & 32 GB as a baseline; the 2900 with quad cores & 48 GB performs about 50% better (going on memory). No big shocker, as it has 50% more memory & CPU resources. The surprise, at least for me, was that it performed equally as well as the 6850 & 6950 with dual dual-cores & 64 GB of RAM.

bevirtual,

I would blow away any existing partitions, including the Dell utility partition; they are of no use in my opinion.

When we met with our Dell reps a few months back, they mentioned that Dell will be coming out with a box especially designed for virtualization. The big changes from any of their current boxes will be additional NICs and an upper RAM capacity greater than ESX can currently handle - I don't know if this is done with more slots or 8 GB DIMMs - and all in just 2U.

thechicco
Enthusiast

Well, when the project started we didn't have enough £ for a 2nd EQL. So we use esXpress to back up certain critical VMs to local VMFS in case the SAN goes 'bang'. Works fairly well, albeit a bit slow.

The 2900 is a great box but rack space for us is a premium. I am now moving across to C-Class blades and a NetApp setup.

Fedde
Enthusiast

"When we met with our Dell reps a few months back, they mentioned that Dell will be coming out with a box especially designed for virtualization. The big changes from any of their current boxes will be additional NICs and an upper RAM capacity greater than ESX can currently handle - I don't know if this is done with more slots or 8 GB DIMMs - and all in just 2U."

According to our Dell rep it will have four integrated Gb NICs.

It will also have four slots for expansion cards - for FC HBAs, extra NICs, or something else.

It will also have 16 slots for DIMMs.

The only downside is possibly that it will only have 2 slots for disk drives.

A possible downside is also that it is only a 2-socket machine.

/Fedde

glynnd1
Expert

I don't think two drives is an issue. ESX requires very little local space in general, so a pair in RAID 1 is sufficient, as the rest is on the SAN.

Having only 16 DIMM slots means that to max out the memory configuration we'll need 8 GB DIMMs - I hope the 4 GB DIMMs come down in price.

TheChicco,

That makes sense, nice solution.

deploylinux
Enthusiast

We're also using 2900s and they've worked out great! Going forward, we do want to migrate to smaller systems, but currently the 2900s are really the only option for the many cases where we need lots of PCIe slots or local high-performance RAID on each host.

The only real issues we've had with the 2900's are:

a) Intel NICs... there appear to be some stability issues with any virtual machines that are using the Intel PCIe-based NICs at high sustained traffic (we have already applied all current patches, but they don't seem to solve the issues).

b) Support requests have to be logged through Dell rather than VMware, since the VMware licenses/support were purchased with the boxes.

c) We've seen a few cases where the Broadcom NICs built in to the 2900s will not recover if the switch they are connected to reboots. The interfaces have to be manually restarted.

d) The lack of keyboard/mouse ports can be a pain; our KVM seems to lock up when installing ESX unless cables are plugged into the front USB ports of the box, which gets in the way of the rack bezel.

dlcrouch
Contributor

2950 Quad Core (9G) with 32 GB RAM. We use 2 x 146 GB in a RAID 1 mirror for the OS. All VMs are stored on an EMC CX380 - we use RAID 10 for OS LUNs and RAID 5 for data stores.

We also install a quad NIC in the remaining PCI slot to allow us to spread the network load.

We are building out our first 2 clusters this week.

glynnd1
Expert

Matthew,

Could the KVM issue you've seen be the same as the one seen here?

http://www.vmware.com/community/thread.jspa?threadID=90051&tstart=0
