I'm in the process of converting a small network from physical to virtual. There are six (6) older Dell servers (2550-2850 models) that each have a single dual-core Intel Xeon processor (1.8 GHz) and approximately 2 GB of memory. For the most part, the network just consists of file servers, although there is an Exchange 2003 server that will be upgraded to Exchange 2010. The end users are perfectly happy with performance the way things are, so anything better would be great!
I'm going to buy two (2) new servers that will be ESXi hosts and I'm wondering if I need 2 processors per server...or would 1 processor be fine? The processor I'm thinking of using for the servers is the Intel Xeon E5-2620 ( http://ark.intel.com/products/64594/Intel-Xeon-Processor-E5-2620-%2815M-Cache-2_00-GHz-7_20-GTs-Inte... ).
My *guess* is that since these servers and processors are anywhere from 5-8 years old, a server with two of the E5-2620 processors would run rings around all of the existing servers. I'm very confident of that, but I'm wondering if a single CPU would also be sufficient to P2V the existing servers and work just fine.
From the specs of the old servers I'm pretty sure the new hosts (which are hopefully on VMware's HCL) with a single processor will be powerful enough to handle the workload and will still have available resources for additional VMs. If I understand this correctly, you are not going to use shared storage, but place the VMs on the hosts' local disks!? In this case make sure you get a supported RAID controller with at least 512 MB of battery-backed or flash-backed write cache!
I agree with AP for the current scenario, but if you move to Exchange 2010, I'd consider getting a host with the second socket populated. Exchange 2010 is definitely heavier than 2003. That's if budget allows for it, of course, but between that and the unknown utilization numbers of the existing hosts, if the price difference is not too bad (and your vSphere license allows for it), the second socket will help you more in the long term than going without.
I will be using shared storage - I'm thinking of: http://www.ixsystems.com/storage/ix/iscsi/titan-212i.html but if I understand it correctly, the performance that the end users experience is largely due to the performance of the ESXi "host" and NOT based on the performance capability of the "target" storage...correct?
Is there a systematic way of planning (i.e., "making an educated guess") when you're trying to decide what type of processor, how many cores, etc. one needs? Surely there is, isn't there?
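There's no single formula, but the usual approach is to turn the old boxes' specs and measured peaks into an aggregate demand number and compare it against the candidate host. Here's a minimal sketch using the figures from this thread (6 servers, one dual-core 1.8 GHz Xeon each); the peak-utilization and headroom values are assumptions you'd replace with numbers from a real capacity-planning tool:

```python
# Rough CPU sizing sketch for the P2V consolidation described above.
# Server counts/clocks come from this thread; PEAK_UTILIZATION and
# HEADROOM are assumptions -- measure the real peaks before deciding.

OLD_SERVERS = 6
OLD_CORES_PER_SERVER = 2
OLD_CORE_MHZ = 1800
PEAK_UTILIZATION = 0.40   # assumed simultaneous peak CPU utilization
HEADROOM = 1.25           # 25% growth/failover headroom

# Worst case: every old box hits its peak at the same time.
required_mhz = (OLD_SERVERS * OLD_CORES_PER_SERVER * OLD_CORE_MHZ
                * PEAK_UTILIZATION * HEADROOM)

# One E5-2620: 6 cores @ 2.0 GHz (ignoring Hyper-Threading and turbo).
host_mhz_single_cpu = 6 * 2000

print(f"Required: {required_mhz:.0f} MHz, "
      f"one E5-2620 supplies: {host_mhz_single_cpu} MHz")
print("Single socket sufficient:", host_mhz_single_cpu >= required_mhz)
```

With these assumed numbers a single socket squeaks by (10800 MHz required vs. 12000 MHz available), which is exactly why the measured peaks matter: a different utilization figure flips the answer.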
I'm thinking of: http://www.ixsystems.com/storage/ix/iscsi/titan-212i.htm
I don't see this system on http://www.vmware.com/go/hcl. Please make absolutely sure the system is supported for the ESXi version you are going to use to avoid trouble and possible data loss!
... but if I understand it correctly, the performance that the end users experience is largely due to the performance of the ESXi "host" and NOT based on the performance capability of the "target" storage
It's based on both: the processing and networking power of the host, and the storage for the virtual disks (local or shared). It doesn't make a difference whether the hypervisor (ESXi) itself is installed on disks or even a USB device or SD card; it's only the datastore (connectivity, number of disks, IOPS, ...) that matters for disk performance.
I should have been clearer - the storage appliance will run FreeNAS (it's really a SuperMicro 2U server under the hood) as its host OS and will be used as the iSCSI target, so it doesn't matter if it's on the HCL if all it's being used for is storage for the VMs...does it?
For storage planning you can check these basics:
1- While using iSCSI, jumbo frames give good performance, and the NICs must be capable of supporting them
2- How many paths are there to the iSCSI storage?
3- How many uplinks, that is pNICs, are given to iSCSI?
4- If it is a hardware iSCSI HBA, test with adaptive queue throttling enabled in ESX/ESXi
5- What is the forwarding rate or processing power of the physical switches?
6- How many physical switches are there for the iSCSI storage?
7- How many spindles are there, and are you using FC, SAS 6G 15K, NL-SAS, etc.?
8- Is it a hardware iSCSI target or a software iSCSI target?
9- What is the multipathing policy?
10- Is the iSCSI storage active/active or active/passive?
In general, more HDDs mean more IOPS; more paths give better load balancing, and more NICs, more physical switches, and more iSCSI targets will spread the traffic.
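The spindle question (item 7 above) can be turned into a rough number with the usual rule-of-thumb formula: back-end IOPS is spindles times per-disk IOPS, and the RAID write penalty reduces what the front end actually sees. A small sketch, where the per-disk IOPS figure and the 70/30 read/write mix are typical assumptions, not measured values:

```python
# Rule-of-thumb front-end IOPS estimate for a RAID set.
# Per-disk IOPS and the read/write mix below are assumptions.

RAID_WRITE_PENALTY = {0: 1, 1: 2, 5: 4, 6: 6, 10: 2}

def usable_iops(spindles, per_disk_iops, raid_level, read_ratio=0.7):
    """Front-end IOPS a RAID set can sustain, given its write penalty."""
    backend = spindles * per_disk_iops          # raw spindle throughput
    penalty = RAID_WRITE_PENALTY[raid_level]    # extra I/Os per write
    write_ratio = 1.0 - read_ratio
    return backend / (read_ratio + write_ratio * penalty)

# Example: 12 x 15K SAS disks (~175 IOPS each, a common estimate).
print(round(usable_iops(12, 175, 10)))  # RAID 10
print(round(usable_iops(12, 175, 5)))   # RAID 5, note the drop
```

It's only an estimate (caches, controllers, and workload patterns all shift the real number), but it shows why RAID level matters as much as spindle count.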
ESXi performance is based on all the hardware - CPU, RAM, network, and storage are all tightly bound to each other. If you neglect any one, it will affect the entire system, so careful planning is needed. First do some capacity planning: check the peak load of CPU, RAM, IOPS, etc., then set those peak values as a baseline and start the design from there. Then select the hardware and the resource allocation; there are lots of best-practices guides on the VMware site.
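The "set the peak as a baseline" step above can be sketched in a few lines: given utilization samples collected from each physical server (e.g. exported from perfmon or a capacity-planning tool), take the peak of each metric as the sizing baseline. The metric names and sample values below are made up for illustration:

```python
# Minimal sketch: reduce collected utilization samples to per-metric
# peaks, which become the sizing baseline for the new hosts.
# Sample data is illustrative, not from a real collection.

def peak_baseline(samples):
    """samples: list of dicts (metric -> value); returns metric -> peak."""
    baseline = {}
    for snapshot in samples:
        for metric, value in snapshot.items():
            baseline[metric] = max(baseline.get(metric, 0), value)
    return baseline

samples = [
    {"cpu_pct": 35, "ram_mb": 1400, "iops": 120},
    {"cpu_pct": 62, "ram_mb": 1750, "iops": 310},
    {"cpu_pct": 48, "ram_mb": 1600, "iops": 95},
]
print(peak_baseline(samples))
# -> {'cpu_pct': 62, 'ram_mb': 1750, 'iops': 310}
```

Summing those per-server peaks across all six boxes gives the worst-case demand the new hosts must cover.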
ESXi has special requirements that the storage system has to support (that's actually why there is an HCL for storage systems). If the target does not support these requirements (i.e., does not communicate with ESXi as expected), you may have issues.