Hi,
I am shortly going to start virtualising most of my physical server estate into virtual servers.
The hardware I've got is a Dell blade enclosure with currently 4 blades and a Dell EqualLogic PS5000XV with 16 x 450GB HDDs - I'm going to format it RAID 10 for performance.
I'm going to be putting systems on it such as Exchange, SQL, Citrix, file & print etc.
My question is that I've heard you shouldn't put too many VMs on one LUN as you can overload it. I've got about 3.1TB to play with for space, and I'm not sure how many LUNs to create.
The largest systems I have are Exchange, which will take approx 400GB, and my largest file and print server, which takes roughly 600GB.
I plan to create separate VMDK files for the Exchange DB, transaction logs, SMTP queue etc., and I'm going to break the file server down into maybe 150GB VMDK files rather than one 600GB file.
Hope someone can give some advice...
Regards
Dave
Did you complete a virtualisation assessment to check the CPU, memory and disk I/O, to ensure all the physical boxes are suitable candidates?
From the LUN-planning point of view, you will need to consider the VMFS block size for any VMDKs that are bigger than 256GB. The SAN you have will achieve around 2,800 IOPS in a 50/50 read/write profile. There are quite a few threads on here debating the number of VMDKs per LUN, and whether to have all system partitions on one LUN or to mix and match some low-I/O disks with high-I/O disks. Remember to plan swap space, and if you are using VCB, allow for snapshot overhead in your LUN sizing.
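As a rough guide to the block-size point above: on VMFS-3, the block size chosen when the datastore is formatted caps the largest single file (VMDK) it can hold. A quick sketch, using the standard VMFS-3 1/2/4/8MB limits:

```python
# VMFS-3: block size chosen at datastore creation caps the max file size.
VMFS3_MAX_FILE_GB = {1: 256, 2: 512, 4: 1024, 8: 2048}  # block size (MB) -> max file (GB)

def min_block_size_mb(vmdk_gb):
    """Smallest VMFS-3 block size that can hold a VMDK of the given size (GB)."""
    for block_mb, max_gb in sorted(VMFS3_MAX_FILE_GB.items()):
        if vmdk_gb <= max_gb:
            return block_mb
    raise ValueError("files over 2TB are not supported on VMFS-3")

print(min_block_size_mb(200))  # 1 - a 200GB VMDK fits the default 1MB block size
print(min_block_size_mb(400))  # 2 - a 400GB Exchange disk needs a 2MB block size
```

So if any single VMDK might grow past 256GB, pick the larger block size up front - you can't change it without reformatting the datastore.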
Couple things to keep in mind...
The more spindles you have behind a datastore, the better.
You don't want to put more than 25 .vmdk files per datastore.
How many systems do you plan to virtualize?
Jase McCarty
Co-Author of VMware ESX Essentials in the Virtual Data Center
(ISBN:1420070274) from Auerbach
Please consider awarding points if this post was helpful or correct
We did have a VMware capacity plan done, and all servers seemed good candidates for virtualisation.
I don't think I'm going to have any VMDK larger than 256GB, to be honest. I would have thought it was better to have more, smaller ones - say a maximum size of 200GB?
Jase,
Over three years I plan to virtualise about 40 to 50 servers, but I will buy another EqualLogic box before I get to that point.
I've got 16 disks to play with; I lose two for hot spares, and then with RAID 10 I'll lose half the remaining disks to mirroring.
You shouldn't put more than 200 VMDKs per datastore....
It depends......
You are losing a lot of disk by using RAID 10. How active are your exchange/file server machines? Personally, if they've been designated good candidates for virtualization, then I couldn't imagine their I/O being huge. That would lead me to use RAID5/6 for the disks, as opposed to RAID10. You are losing more than half of your disk capacity to RAID, which seems an awful waste to me.
I'd create multiple RAID5 sets with some hot spares, and then split my disks into 200-300 GB chunks, and give exchange and the file server either their own RDM, or map the iSCSI target from within the vm itself.
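To put numbers on the capacity argument: a sketch comparing RAID 10 against RAID 5 usable space on the same 16 disks, assuming two hot spares and (my assumption) two RAID 5 sets with one parity disk each:

```python
disks, disk_gb, hot_spares = 16, 450, 2
data_disks = disks - hot_spares                 # 14 disks in the RAID set(s)

raid10_gb = (data_disks // 2) * disk_gb         # mirroring: 3150GB usable
# Assumed layout: two RAID5 sets of 7 disks, one parity disk per set.
raid5_gb = (data_disks - 2) * disk_gb           # 5400GB usable
print(raid5_gb - raid10_gb)                     # 2250 - RAID5 buys back ~2.25TB
```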
-KjB
The Dell EqualLogic box only allows one RAID type across the whole array, so I would have to go all RAID 5 or all RAID 10.
I have factored in that I would lose a lot of storage, but hopefully I should be gaining good performance.
If I use the iSCSI initiator to connect to the SAN directly, I wouldn't be able to use VMware snapshots, would I?
The performance difference you actually see may not be as much as you would think, but I'll leave it to you to work out the numbers. Personally, I would have gone with RAID 5.
You are correct: if you use the initiator within the VM, you would probably get better performance, but you would lose VMware snapshot capability.
-KjB
If you connect the initiator directly to the SAN, you gain the EqualLogic ASM tool's functionality of VSS-integrated snapshots and replicas for both Exchange and SQL. I would go RAID 50 rather than RAID 5 on the EqualLogics, which will give you similar performance and faster rebuild times.
I've done a number of VMware implementations on EqualLogic RAID 50 running SQL/Exchange without any problems; in a RAID 50 config with a 50/50 read/write profile you should get around 1,700 IOPS. If you don't need the extra performance, why waste all the space that RAID 10 will lose you?
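For a back-of-the-envelope comparison, the textbook write-penalty formula looks like this. Note the per-disk IOPS figure is my assumption (15K SAS is often quoted around 150-180), so the results won't match the EqualLogic vendor figures quoted in this thread exactly:

```python
# Standard RAID write penalties: RAID10 mirrors each write (2 back-end I/Os);
# RAID5/50 need read-parity, read-data, write-parity, write-data (4 I/Os).
WRITE_PENALTY = {"raid10": 2, "raid5": 4, "raid50": 4}

def frontend_iops(disks, per_disk_iops, read_frac, raid):
    """Estimated front-end IOPS for a given disk count and read/write mix."""
    raw = disks * per_disk_iops
    write_frac = 1 - read_frac
    return raw / (read_frac + write_frac * WRITE_PENALTY[raid])

# 14 data disks, assumed 180 IOPS each, 50/50 read/write mix:
for level in ("raid10", "raid50"):
    print(level, round(frontend_iops(14, 180, 0.5, level)))
```

The exact numbers matter less than the shape: at a 50/50 mix, the RAID 10 estimate comes out well ahead, but for a mostly-read workload the gap narrows considerably.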
If I chose to connect directly with the Windows iSCSI initiator, how would I get on with VMotion of the operating system?
Using VMotion with an in-guest iSCSI initiator is fairly straightforward. The first step is to ensure you are using a dedicated vSwitch for the iSCSI network. Then connect a second virtual NIC to your VM that has access to the iSCSI vSwitch, so the VM has one virtual NIC on the production network and one on the iSCSI network. Use the guest iSCSI initiator to connect volumes inside the VM. As long as the vSwitches are configured the same on both ESX hosts, there should be no issue during VMotion: the iSCSI virtual NIC will attach to the correct vSwitch after the VMotion completes.