We are currently planning a deployment for ESX, and I was wondering what the best strategy is for VMFS deployment:
1. One big LUN for, let's say, 30 virtual machines.
Or
2. One LUN per virtual machine.
The average size of a virtual machine will be 16 GB. I am aware of the LUN size spreadsheet provided by Ron Oglesby.
We are planning to run 30 virtual machines on each ESX host. One farm of 4 to 8 ESX hosts will be created (VMotion should be possible between all ESX hosts).
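To put rough numbers on the two options, here is a minimal back-of-envelope sketch using only the averages stated above (16 GB per VM, 30 VMs per host, the upper end of 8 hosts); real VMs will of course vary:

```python
import math

# Assumptions taken from the post above: averages only.
vm_size_gb = 16        # average VM disk footprint
vms_per_host = 30      # planned consolidation ratio
hosts = 8              # upper end of the planned farm

total_vms = vms_per_host * hosts              # VMs across the whole farm
total_storage_gb = total_vms * vm_size_gb     # raw vmdk space needed

# Option 1: one big shared VMFS LUN per ~30 VMs
luns_shared = math.ceil(total_vms / 30)       # LUNs of roughly 480 GB each

# Option 2: one LUN per VM
luns_per_vm = total_vms                       # every LUN must be zoned and managed

print(total_vms, total_storage_gb, luns_shared, luns_per_vm)
# -> 240 3840 8 240
```

The management-overhead difference between 8 shared LUNs and 240 per-VM LUNs is the crux of the question.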
Massimo:
Your setup makes me think of a mainframe thing called Parallel Sysplex.
I wish I could run VMware on one of those! :) Isn't that the one with synchronized everything, with sync'd CPUs on different machines running the same instructions at the same time, so that if one machine sneezes the other quickly takes its place, even in the middle of a transaction? That's something that even HyperTransport 3.0, which I'm sure you have already looked into, can only dream of...
If these things happen there (rarely, fortunately), I guess they might be happening (certainly less rarely) on this x86 platform.
Yes, I get your point. I'm not sure how tolerant VMFS is of a server disaster during a metadata change... and I mean a disaster on the server actually doing the change and holding the lock, not the others. I guess we might end up with a lot of broken bits (pun intended).
Whenever I hear about all ESX nodes bundled in a tight cluster layout, I tend to feel uncomfortable...
You've got me uneasy, too. I'll consider splitting our farm into two smaller ones when we upgrade to VI3. Maybe it won't be as smooth and optimized as it could be, but you have a point: if something goes horribly wrong, we will still have the other half running... Thanks for the insight!!!
Paulo
Brad,
I like your method; I've been thinking about a similar layout. I don't like the idea of presenting one large MetaLUN and only being able to use one path on an HBA to access it at a time. I would rather use all the paths.
This is a little off topic but..
It looks like VI3 has improved the use of extents over 2.5.x: the ability to lose a LUN without losing the VMFS volume, and ESX servers are now "aware" of VMFS changes on extents instead of having to be rebooted.
It might make sense to go back and revisit extents to see if there are real improvements that make them worth using now.
Thanks for the detailed description and the methods behind it.
While we don't have a separate LUN for every VM, we do have a separate LUN for each partition. For instance:
LUNxx-Prod-C-1 (system)
LUNxx-Prod-D-1 (data)
LUNxx-Prod-E-1 (logs)
LUNxx-Prod-S-1 (swap)
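A trivial sketch of that naming convention as I read it (the zero-padded LUN number and the trailing instance digit are my assumptions; the real scheme may differ):

```python
# Hypothetical generator for the LUNxx-<env>-<letter>-<n> convention above.
# The partition letters map to roles: C=system, D=data, E=logs, S=swap.
ROLES = {"C": "system", "D": "data", "E": "logs", "S": "swap"}

def lun_name(number, letter, instance=1, env="Prod"):
    # e.g. lun_name(1, "C") gives "LUN01-Prod-C-1"
    return f"LUN{number:02d}-{env}-{letter}-{instance}"

for letter in ROLES:
    print(lun_name(1, letter), "->", ROLES[letter])
```

The point of the scheme is that the name alone tells you the I/O profile of everything on the LUN.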
Hey Paul.b (or anyone else who knows this answer)
When you set your infrastructure up like this (one LUN for all C drives, one LUN for all D drives, etc.), are you still able to VMotion from one host to another?
Respectfully,
Yes, you can do all the VMotion you like, and all the other software, including HA and DRS, works with no issues.
I'm going to talk about this in my session at VMworld...
ADC9591 VMware ESX Server and Storage Architecture Best Practices for Performance, Backup, and Disaster Recovery
Is there anything wrong with putting the OS and data vmdks on the same LUN? VI3 supports a folder structure, does it not? Are there problems with putting OS and data on a LUN inside a folder? It seems it would be easier from a management point of view: open a folder, and there are all your vmdk files for the VM in question.
That brings me to another question. Where's the best place in VI3 to put the .vmx files?
Everything goes on the SAN or VMFS volume, and by default all in the same folder.
It was just yesterday that I was reading about this. I believe the VMFS2 file format is based on a flat file structure (for speed of access). Therefore, just as sbeaver mentioned above, everything is on the same playing field.
I assume the VMFS3 file format is the same.
Respectfully,
No, in VMFS2 you could not create any folders because, as you said, it was a flat file system. In VMFS3 you can create folders, and because of that all of a VM's files now go into the VM's own folder on the VMFS partition by default.
Thank you for the quick reply, sbeaver. Is it still good practice in VI3 to do what m8trixdg had suggested?
He takes things to another level, but there is nothing wrong with that. Unless you are really getting hammered with I/O, the performance gain IMHO would not be that much. KISS always works well, as does having different LUNs on different RAID groups for different performance needs.
ESX3 by default will try to put all of a VM's files in the VM's folder on one VMFS partition. You can easily split this up across multiple LUNs; just know what ESX does by default so you can plan and document better.
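As a rough sketch of what that default layout looks like on disk: the file names below are typical for an ESX 3 VM, but the exact set varies by VM and build, so treat this as illustrative only:

```python
# Illustrative sketch of the ESX 3 default: one folder per VM on a VMFS3
# datastore, holding all of that VM's files. Names are typical, not exhaustive.
def default_vm_paths(datastore, vm):
    folder = f"/vmfs/volumes/{datastore}/{vm}"
    files = [
        f"{vm}.vmx",        # configuration file
        f"{vm}.vmdk",       # disk descriptor
        f"{vm}-flat.vmdk",  # disk data
        f"{vm}.vswp",       # VM swap file
        f"{vm}.nvram",      # BIOS settings
        "vmware.log",       # VM log
    ]
    return [f"{folder}/{name}" for name in files]

for path in default_vm_paths("datastore1", "web01"):
    print(path)
```

Splitting across LUNs means relocating some of these files (e.g. the data vmdk) out of that default folder, which is why knowing the default matters for documentation.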
Oh yeah, that is what I love about IT, (learning) 10 new things a day!
Steve,
I want to make sure I get this straight. (I told Scott to let me know when his next book comes out, because I am going to buy it before the ink dries; his current one rocks!)
...in VI3, when you create a new VM, it creates one folder in the VMFS3 file structure for all the files associated with that VM?
I am not completely straight on this yet.
Respectfully,
Yes, that is the correct behavior of ESX by default.
I bet the original poster has had his question answered already, because it's over half a year old, but I just wanted to throw in the design I'm thinking of right now.
Normally our systems have two disks: C for system and D for data. I consider the C drives low I/O and the D drives high I/O. I try to stick to the rule that a LUN holds at most 16 high-I/O vmdks or 32 low-I/O vmdks. With C at 12 GB and D at 20 GB (average), and aiming at virtualizing 450 systems, I calculated that I would need 43 LUNs of 335 GB each. Add snapshots, swapfiles, and some extra free space, and I set each LUN to 500 GB.
I'm planning on mixing high- and low-I/O vmdks on the same LUN, but I count a high I/O as two low I/Os and keep it at the max of 32. Per VM, the C and D vmdks will not be on the same LUN. Further, I will not mix DEV and production vmdks, because of the increase in SCSI reservations.
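Gabrie's arithmetic checks out; here is a quick sketch of it using his figures (450 systems, 12 GB C-drives counted as low I/O, 20 GB D-drives counted as high I/O, 16 high-I/O or 32 low-I/O vmdks per LUN):

```python
import math

systems = 450
c_gb, d_gb = 12, 20          # average drive sizes from the post above
high_per_lun = 16            # max high-I/O vmdks per LUN
low_per_lun = 32             # max low-I/O vmdks per LUN

# D-drives are high I/O, C-drives are low I/O; no mixing in this estimate.
luns = math.ceil(systems / high_per_lun + systems / low_per_lun)
avg_lun_gb = systems * (c_gb + d_gb) / luns

print(luns, round(avg_lun_gb))   # 43 LUNs of ~335 GB before headroom
```

Rounding each LUN up from 335 GB to 500 GB is what buys the snapshot, swapfile, and free-space headroom he mentions.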
Just my 2cents
Gabrie
I'm glad I found this thread
We are in the process of setting up 3 x VI3 Enterprise servers using DRS and VMotion. The estimated total number of VMs that can be run on this setup, based on current performance trends, is around 100.
Our plan is to create datastores consisting of 200 GB LUNs and presenting them as shared LUNs to the 3 servers.
i.e. datastore1 = 200 GB = LUN1, datastore2 = 200 GB = LUN2, etc
Each VM's vmdk files will be stored together in the same folder for ease of management.
Is there any advantage / performance increase in doing this, rather than creating one datastore and adding multiple 200 GB LUNs as extents?
i.e. datastore1 = LUN1(200GB) + LUN2(200GB) + .....
or creating 3 datastores (1 per server) and adding more LUNs as extents to those datastores?
i.e. datastore1 = LUN1 + LUN2, datastore2 = LUN3 + LUN4
I'm trying to understand the relationship (performance wise) between datastores, extents and LUNs.
Thanks.
I'm not aware of a performance difference between one VMFS per LUN and one VMFS spanning multiple LUNs. Be aware, however, that losing one LUN would cause the whole VMFS to fail.
But then again, they say a SAN seldom fails.
Gabrie
W37b,
Rule of thumb: no more than 13 high-I/O VMs per LUN. You may want to consider a larger LUN size; we are using LUNs in the 512 GB range. Remember that with VI3 all your files go on the LUN, including snapshots, configuration, swap space, etc. You can also only have 16 LUNs per host, I think it is.
As you add more VMs to a LUN, you slow down. At the same time, you don't want to waste SAN storage with small LUNs either.
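A hedged sketch of that sizing logic, combining the 512 GB LUN and the 13 high-I/O rule of thumb above (the 1 GB swap allowance and 20% snapshot headroom figures are my assumptions, not juchestyle's):

```python
import math

lun_gb = 512              # LUN size from the post above
vmdk_gb = 16              # average VM disk from earlier in the thread
swap_gb = 1               # assumed: .vswp roughly equal to guest RAM
snap_overhead = 0.20      # assumed: headroom reserved for snapshots

per_vm_gb = vmdk_gb * (1 + snap_overhead) + swap_gb   # ~20.2 GB per VM
fit_by_space = math.floor(lun_gb / per_vm_gb)          # VMs that fit by capacity

high_io_cap = 13          # rule of thumb above
vms_per_lun = min(fit_by_space, high_io_cap)           # the I/O limit wins here

print(fit_by_space, vms_per_lun)   # 25 by space, 13 after the I/O cap
```

The point: with high-I/O VMs the I/O rule, not raw capacity, is usually what caps VMs per LUN.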
Respectfully,
I definitely agree with juchestyle about watching the I/O on each LUN. The 16 is not the ratio of LUNs to hosts, though; it is the host-to-LUN ratio, meaning only 16 hosts can see one LUN.
As for the size of the volumes: the size should be whatever is manageable for your SAN and for yourself. It is different for each company, and you will find that standardizing on LUN sizes definitely helps in maintaining the environment.
One of the great things with VI3 is that you can move any of the files, whether it is the config files, snapshots, or vmdks, to different LUNs, so if there is a performance bottleneck you can just tweak the environment.
Typically, 512 GB is the largest VMFS I recommend for my customers, and sizes range from 200 to 500 GB depending on the guidelines I wrote above.
Thanks,
Brad
Can anyone confirm that they have a single ESX (VI3) host attached to both an EMC (preferably DMX) and a NetApp storage system simultaneously? And if so, whether it is iSCSI or FC?
Thanks.
Well, this thread shows how no single "rule of the SAN" holds everywhere.
Everyone has different uses and performance issues.
At the shop I'm at now, we have a CX700 with 1.5 TB assigned to one of the VMware farms. That's one 999.8 GB volume and one 499 GB volume. They are extended within VMware to make one giant volume for the four boxes in the farm. Note that I'm adding more space now because it's full.
We have 80 virtual machines running in this space. Some servers, some workstations, all production. Performance is great.
Unless you're trying to run a file server as a virtual machine, which I personally wouldn't do, most things aren't all that disk intensive. Even web servers, web proxies, and e-mail servers, as long as they are sized correctly, aren't that heavy on the disk. Most of the VMs we have on there aren't heavy on the disk but tend to need disk space.
We have had zero performance problems.
I couldn't imagine the (admin) overhead of constantly carving out little 100 GB LUNs for everything. The way I figure it, moving forward we're using all 300 GB FC disks or 500 GB ATA disks now, and it just doesn't make sense. The CX700 is real fast, FC disks are real fast, the machines are real fast...
We favour a mix of 500 GB LUNs and some smaller LUNs, along with RDMs, after understanding what the apps are doing. But again, rarely are two sites identical.