Hi
I have 3346 GB (vraid5) on my new FSCSI storage for the virtual machines I want to move over (can't wait). I was going to allocate the whole 3346 GB and present it to all my ESX servers. Is there a reason I would not want to present all the storage to my ESX servers? I did notice in some testing that VirtualCenter adds additional storage as extents, which is why I'm presenting all the storage at once to all the ESX hosts. I have FATA drives that will hold the ISO images.
Thanks
Derek
You can't have a device larger than 2 TB attached to an ESX host. You will have to split the 3.3 TB volume into at least two LUNs, and then create datastores on them.
Marcelo Soares
VMWare Certified Professional 310/410
Virtualization Tech Master
Globant Argentina
Consider awarding points for "helpful" and/or "correct" answers.
How will this affect VMotion and HA if I have to split the 3.3 TB into two?
If you are presenting these disks to more than one ESX host, VMotion and HA will not be affected.
Marcelo Soares
It won't affect HA or VMotion at all if all hosts have access to both LUNs.
As already said, you have to split your array into two LUNs of less than 2 TB minus 512 bytes each. After that you can combine them into one big datastore via extents if you want.
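As a rough sanity check (a sketch, not VMware-supplied tooling; binary GB/TB units are assumed), splitting the 3346 GB array evenly keeps each LUN well under the 2 TB minus 512 bytes device limit:

```python
# Sketch: verify an even split of the array stays under the classic
# per-device VMFS limit of 2 TB minus 512 bytes (binary units assumed).
MAX_LUN_BYTES = 2 * 1024**4 - 512   # 2 TiB minus 512 bytes

total_gb = 3346                      # the array discussed in the thread
lun_gb = total_gb / 2                # split into two equal LUNs
lun_bytes = lun_gb * 1024**3

print(lun_gb)                        # 1673.0 GB per LUN
print(lun_bytes < MAX_LUN_BYTES)     # True: each LUN fits comfortably
```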
---
MCSA, MCTS, VCP, VMware vExpert '2009
Yes, that makes sense. So I could use Storage VMotion to move between the two LUNs.
Is it recommended or best practice to combine the two LUNs into one big datastore via extents?
To migrate VMs from one datastore to the other you will need Storage VMotion, yes. You can combine both into a single datastore... but I would not recommend it. If you lose one of the LUNs, you risk losing the whole datastore, to name just one possible problem.
Marcelo Soares
You can make this one big storage pool; you just have to present it in parts to the ESX hosts. For performance purposes, with that amount of space I assume it's something like ten 400 GB (450 GB) drives.
So you will want all 10 spindles (or as many as you can) to be the underlying structure for your VMs.
The ESX documentation specifically says not to make more than one VMFS per LUN, but nothing says you can't make multiple LUNs per volume. So you can pretty much make the volume as big as you want, to give it as many spindles as possible, and make it RAID 10 if you can... but then you lose 50% of your disk space; the next best thing is RAID 5 (if you don't want to give up all that space).
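To make the RAID space trade-off concrete, here is a quick sketch of usable capacity, using the hypothetical ten 400 GB drives assumed above (simplified: no hot spares, no formatting overhead):

```python
# Usable capacity for the assumed 10 x 400 GB drive set
# (simplified: ignores hot spares and formatting overhead).
drives = 10
drive_gb = 400

raid10_usable = (drives // 2) * drive_gb   # mirrored pairs: half the raw space
raid5_usable = (drives - 1) * drive_gb     # one drive's worth lost to parity

print(raid10_usable)  # 2000 GB usable
print(raid5_usable)   # 3600 GB usable
```

So RAID 5 recovers 1600 GB over RAID 10 on this drive set, which is the "don't want to waste all that space" trade-off above.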
And if this device is NAS, you can simply make it an NFS store, with no limit on the size. So if it's a pure Ethernet store, and all ESX hosts connect via NFS, you will have no problem making it one big storage area.
One thing to keep in mind is that best practice is to have no more than 10-15 VMs per VMFS volume, because every time you snapshot a VM it locks the entire LUN for a few milliseconds... but if something takes a while to snap, it can cause SCSI reservation conflicts and VM performance problems.
My VMs are typically around 40-60 GB in total size, so I usually go for 800 GB volumes and try to stick to 10-12 VMs, leaving extra space on the volumes for snaps, clones, etc.
If that space is an entire LUN, then remember that if you create multiple VMFS volumes on it, snapshots are going to lock the LUN, not the volumes.
It's best to split up at the LUN level, not the volume (VMFS) level.
Personally, I would probably split the disks into 2 separate RAID groups (so you do not have disk contention between the VMFS volumes) and then create a separate VMFS volume on each RAID group. That should get you around 1.x TB per VMFS volume, and assuming 60 GB per server (Windows 2008 needs far more space than 2003 did), that gives you about 900-odd GB of used space with 15 VMs. If the extra space is 2-300 GB, that's a pretty safe ratio to have in case a snap doesn't get removed and you don't notice for a while.
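The sizing arithmetic above can be sketched out (the per-VM size and VM count are the assumptions stated in the post, and the volume size comes from splitting the 3346 GB array in two):

```python
# Sketch of the capacity math above: 15 VMs at ~60 GB each
# on one of two volumes carved from the 3346 GB array.
vms_per_volume = 15
gb_per_vm = 60                         # Windows 2008-era sizing assumed above

used_gb = vms_per_volume * gb_per_vm   # allocated VM disk
volume_gb = 3346 // 2                  # ~1.6 TB per volume after the split
headroom_gb = volume_gb - used_gb      # slack for snapshots that linger

print(used_gb)      # 900 GB used
print(headroom_gb)  # 773 GB of headroom
```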
I run more than 15 VMs per LUN with no issues. It's not that bad - with all the new tricks like optimistic locking, the performance impact is much less than it was in the ESX 3.0 days.
---
MCSA, MCTS, VCP, VMware vExpert '2009