Looking to get a new Dell R710 with 32GB of RAM, two Xeon E5520s (2.26GHz), and four 450GB 15K drives in RAID 10, and trying to find out if ESXi will load on it OK or if I need to drop back to a PE 2950.
I will be putting on it a VM with Exchange 2007, two VMs with basic Win 2003 R2 Server, a backup AD VM, and possibly a VM with a test SQL 2008 instance.
Anything (other than a SAN) that you would look to get/add to this?
I think a SAN is out because of pricing (looks to be $8-10K).
Any advice is greatly appreciated.
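As a rough sanity check on whether 32GB covers that mix, here's a back-of-the-envelope memory budget (the per-VM figures below are illustrative guesses, not sizing recommendations):

```python
# Back-of-the-envelope RAM budget for the proposed host.
# All per-VM allocations are illustrative assumptions.
HOST_RAM_GB = 32

vm_ram_gb = {
    "Exchange 2007":         8,  # Exchange tends to want the most
    "Win 2003 R2 server #1": 4,
    "Win 2003 R2 server #2": 4,
    "Backup AD VM":          2,
    "Test SQL 2008":         4,
}

allocated = sum(vm_ram_gb.values())
headroom = HOST_RAM_GB - allocated  # left for ESXi itself plus overhead

for name, gb in vm_ram_gb.items():
    print(f"{name:24s} {gb:2d} GB")
print(f"{'Total allocated':24s} {allocated:2d} GB of {HOST_RAM_GB} GB "
      f"({headroom} GB headroom)")
```

With guesses like those you'd still have roughly 10GB spare, so 32GB looks workable for that VM count.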
No problem at all s1xth - I've done the same thing before (my fault for not creating a new thread with a new subject). It's such a huge help having other people to talk to, and I'm grateful to you all for taking the time to even reply!
The server has 4 x 73GB 15K SAS drives in a RAID 5 configuration (giving approx. 203GB of local storage) connected to a PERC 6/i integrated controller. The original Dell server quote shows the PERC 6/i integrated controller, and the VI Client also lists the storage controller as PERC 6/i Integrated under Health Status.
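For what it's worth, the approx. 203GB figure checks out: RAID 5 across four drives gives (n - 1) drives of usable space, and the remaining gap is just the decimal-GB versus binary-GiB unit conversion:

```python
# RAID 5 usable capacity for 4 x 73GB drives: (n - 1) disks' worth of data.
n_drives, drive_gb = 4, 73
usable_gb = (n_drives - 1) * drive_gb    # 219 GB as drives are sold (decimal)
usable_gib = usable_gb * 10**9 / 2**30   # ~204 GiB as the OS reports it
print(f"{usable_gb} GB decimal ~= {usable_gib:.0f} GiB")
```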
As for the firmware/BIOS of the server, it has not been updated since purchase (approx. Feb 2009). The server is located in a data centre remote from where we are situated, making firmware/BIOS updates etc. more difficult.
We have never experienced any issues relating to hard drives, RAID controllers, etc. on this box, and we would really hope there is not an underlying issue with the hardware.
Again sincere thanks for your help and advice on this ... so very much appreciated.
Gavin thank you for your reply also.
Can you confirm that rebooting your HOST also "temporarily" fixed the issue and that you could regain FULL control of the HOST via the VI Client and power on your VMs as normal? (I see that you mentioned you suspended them - did you ever shut them down completely?)
Thank you again.
Rebooting the host would correct the issue for some period of time, then it would return. When the problem occurred, I would suspend the guests (since I could not get to their consoles via the VI Client at that point to shut them down), then reboot the host, then resume the guests. The actual operation of the guests was never affected - just the VI Client access to their consoles, and the host management interfaces like log viewing, etc.
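For anyone wanting to script that suspend-everything step today rather than clicking through the VI Client, here's a rough sketch using the pyVmomi SDK (the host name and credentials are placeholders, and it assumes the management agents still answer API calls even when the consoles don't):

```python
# Sketch: suspend all powered-on guests before rebooting a host (pyVmomi).
# Host name and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skips cert validation
si = SmartConnect(host="esx-host.example.com", user="root",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        if vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
            print(f"Suspending {vm.name} ...")
            vm.SuspendVM_Task()  # resume later with vm.PowerOnVM_Task()
    view.Destroy()
finally:
    Disconnect(si)
```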
I still have an issue where the host status indicators (fan speeds, etc.) mostly never show in the VI Client. Everything was perfect for the first couple of days after we got the R710, but later it just quit working reliably.
We just bought three Dell R710s, each with 24GB of 1333MHz RAM, two Xeon X5550s, and two 146GB 15K drives (mirrored). These boxes are all connected to a couple of EMC FC SANs, hence the low hard drive space (the local drives are dedicated to ESX and template/ISO storage).
I've only had them up and running for about 2 weeks now, but they are running like a charm - currently hosting 12 of our "core" servers (still in the process of migrating systems), barely touching their resources yet.
You should, however, be aware of a current 'pending' bug with ESX 4 (unsure if it's happening with 3.5) when hosting on Nehalem processors. All of my VMs are currently reporting higher memory usage than they're actually using (e.g. one server is actually using about 300MB of memory yet it's reporting 93% utilization of 1.5GB). This is an issue you're going to see on any box running the new Nehalem processors. I (along with many others) currently have a support ticket open with VMware and am hoping for a bug fix sooner rather than later. It's more of an annoyance than anything else. Since this is really our first 'major' VMware environment (upgrading from running a single host for the last year as a 'proof of concept'), I'm really interested to see some of the REAL statistics of how everything is running...
Seriously, I wouldn't let the current bug be a deterrent from the R710's. They're great boxes - they've just gotta get that bug fixed...
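If you want to see the skewed numbers for yourself, here's a rough pyVmomi sketch that compares each VM's active guest memory (quickStats) against its configured memory - the connection details are placeholders:

```python
# Sketch: per-VM active guest memory vs. configured memory (pyVmomi).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skips cert validation
si = SmartConnect(host="vcenter.example.com", user="admin",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        active_mb = vm.summary.quickStats.guestMemoryUsage or 0
        cfg_mb = vm.summary.config.memorySizeMB or 0
        if cfg_mb:
            print(f"{vm.name:30s} active {active_mb:6d} MB of {cfg_mb} MB "
                  f"({100 * active_mb / cfg_mb:.0f}%)")
    view.Destroy()
finally:
    Disconnect(si)
```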
Is it purely a reporting bug, or is it affecting performance too?
On a side note, what network cards did you get? I am about to buy a couple of boxes with dual quad-port Intel 1000VTs, but they look to be supported only on ESX 3.0.2?
As far as I know it's purely reporting, no performance hit. I haven't NOTICED any performance problems, at least. Hell - I've actually seen an increase since moving these VMs to their new servers...
As far as the NICs go, on top of the four embedded ones, we also got an Intel PRO/1000 PT 1GbE dual-port NIC that we use for our management and VMkernel networks (all four embedded ports are teamed and all VM traffic goes over them).
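To double-check a teaming layout like that from a script, here's a rough pyVmomi sketch that lists each vSwitch's active uplinks per host (connection details are again placeholders):

```python
# Sketch: list each vSwitch and its active uplinks (pyVmomi).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skips cert validation
si = SmartConnect(host="esx-host.example.com", user="root",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        for vsw in host.config.network.vswitch:
            policy = vsw.spec.policy
            active = (policy.nicTeaming.nicOrder.activeNic
                      if policy and policy.nicTeaming and policy.nicTeaming.nicOrder
                      else [])
            print(f"{host.name}: {vsw.name} active uplinks: {list(active)}")
    view.Destroy()
finally:
    Disconnect(si)
```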
I installed ESXi 4 on a Dell R710 and the installation process went very smoothly. But installing virtual machines (Win 2003 and XP) under ESXi 4 is unbearably slow: after two hours the installation progress has not changed. Both the installation media and the virtual machine are on the ESXi local hard disk.
Host configuration: 2 x Intel Xeon E5520 quad-core CPUs (2.26GHz, 8M cache, QPI up to 5.86 GT/s, Turbo); 16GB (8 x 2GB) 1066MHz dual-rank RDIMMs in a 2-processor configuration; 6 x 300GB 15K RPM 6Gbps SAS 3.5-inch hot-plug hard drives in RAID 5 on a PERC 6/i or H700. Can anyone help?
We are now replacing our 2950s with R710s, and as others have said, we're also getting a lot of bang for our buck (or pound over here!)
We have ESX 4 and ESXi 4 working fine on these hosts. We recently priced up 2 x 6-core Xeon 5600 processors and they were only £500 more, and we got the low-voltage CPUs (which carry a higher price premium). The 6-core processors are covered under your vSphere licensing, so your VM density will increase even more. Talking of density, we can pack a load of VMs onto these boxes: we've got numerous application, IIS, and SQL VMs, we're getting about 30 VMs per host at the moment, and they're not even breaking a sweat - and that's on a 48GB host.
You need lots of memory, especially if you're considering Windows 2K8 VMs.
We've recently purchased (awaiting delivery):
2 x L5600 6-core 2.26GHz CPUs
2 x Intel quad-port NICs
2 x 15K 3.5" SAS hard drives for local storage and ESX
Dual-port HBA
Don't forget the server has a quad-port onboard NIC. Onboard we have one port for management, one for vMotion, and one for fault tolerance.
Then the two Intel NICs provide two ports for production LAN traffic, two ports for the external DMZ, and two ports for the internal DMZ. One of the spare ports on the Intel NICs is then used for a second service console for HA connectivity. (We're trying to achieve PCI compliance, hence the extensive network connections!)
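Spelled out, that port allocation looks something like the map below (twelve ports total; the vmnic numbering is just an assumed enumeration order, not necessarily how ESX labels them on this box):

```python
# Assumed port-to-role map for 4 onboard + 2 x quad-port Intel NICs.
# vmnic numbering is illustrative, not taken from an actual host.
port_roles = {
    # onboard quad-port NIC
    "vmnic0":  "management (service console)",
    "vmnic1":  "vMotion",
    "vmnic2":  "fault tolerance",
    "vmnic3":  "spare",
    # Intel quad-port NIC #1
    "vmnic4":  "production LAN",
    "vmnic5":  "production LAN",
    "vmnic6":  "external DMZ",
    "vmnic7":  "external DMZ",
    # Intel quad-port NIC #2
    "vmnic8":  "internal DMZ",
    "vmnic9":  "internal DMZ",
    "vmnic10": "second service console (HA)",
    "vmnic11": "spare",
}
for nic, role in port_roles.items():
    print(f"{nic:8s} -> {role}")
```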
We decided to use dual-port HBAs as we then have path fault tolerance. Obviously if the whole HBA fails we still get downtime, but HBAs are not hot-pluggable on this server anyhow, so if a single-port HBA failed we reasoned the server would have to be powered off (we don't know for sure, though!) and would be out of action anyhow.
This cost us just over eight and a half grand (£8.5K).
I'd strongly consider the 6-core processors: they make your licensing work harder and increase your VM density. And get more RAM too - it gets eaten!
But I also note that I've clicked the link for R710s on these forums and I see that you need ESX 3.5. You may well not be able to use the 5600 processors... Thought of ESX 4? 😛