I have a few questions regarding the fear of using extents.
1. Is there any documented performance hit by using extents?
-- Many LUNs with a single VMFS volume spanning all LUNs - (page 37 - http://www.vmware.com/pdf/vi3_esx_san_cfg.pdf)
2. Why the fear of losing a LUN? If all LUNs are presented from the same SAN, you don't typically just "lose" LUNs.
3. I think of extents like volume groups, multiple LUNs that are load balanced over the fiber adapters using multipathing, is this not a safe assumption?
This is my intended design:
4 x 100GB LUNs made into a 400GB VMFS datastore (two LUNs assigned to each HBA)
As far as I know there is not a very great performance hit, if any. There is no striping going on or anything like that. When the first LUN fills up, you just start using the second LUN.
If both LUNs reside on the same underlying diskset, the problems you could expect are minimal. But say the two LUNs reside on different physical disksets (logical disks, logical volumes, logical arrays, depending on the manufacturer). If you lost one of those disk arrays, there is NO telling which part of your extended VMFS is still accessible and which part is not, because you never really know what resides where. If you had gone with two separate VMFS volumes, you would have lost only one of them. It's very much like using a spanned-disk array on a RAID controller (which I would never do).
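The "no telling which part survives" problem can be sketched with a toy model. This is only an illustration of concatenation in general, not actual VMFS internals; the LUN names and VM offsets are made up:

```python
# Illustrative model (NOT actual VMFS internals): a spanned datastore
# concatenates LUNs, so each block lands on whichever LUN covers its offset.

LUN_SIZE_GB = 100
LUNS = ["LUN0", "LUN1", "LUN2", "LUN3"]  # 4 x 100GB -> one 400GB datastore

def lun_for_offset(offset_gb):
    """Return which LUN backs a given offset in the concatenated volume."""
    return LUNS[offset_gb // LUN_SIZE_GB]

# A VM's virtual disk may have been allocated anywhere in the address space,
# and the admin has no control over (or view of) where it ended up.
vm_disks = {"vm-a": 50, "vm-b": 150, "vm-c": 250, "vm-d": 350}  # offsets in GB

failed = "LUN2"  # suppose the diskset backing LUN2 is lost
survivors = [vm for vm, off in vm_disks.items() if lun_for_offset(off) != failed]
print(survivors)  # -> ['vm-a', 'vm-b', 'vm-d']; vm-c happened to live on LUN2
```

With two separate 200GB VMFS volumes instead, the blast radius of a failed diskset would at least be known in advance.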
My question is, WHY would you ever want to put 4x 100GB LUNs together to create one single VMFS? If you lose a single LUN, effectively the whole VMFS volume should be considered lost (there is no telling what will remain working and what will not). Could you tell me more about the setup you want to create?
1) I think I have seen something in a presentation from last year's VMworld.
2) I agree with the argument. I think there can still be a problem when multiple servers access one VMFS and the extension process is not done properly. The knowledge base has at least one article on VMFS extents.
3) A multi-extent does NOT act like a volume group! It is a single file system made by concatenated partitions.
The presentation I mentioned claims that there is ONE I/O queue for the whole VMFS. If the queue is full, you will not gain more performance by adding LUNs, unlike a Unix-style volume group, which can do real striping.
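The difference between concatenation and real striping is easy to see in a small sketch. This is a generic model of the two layouts, with made-up sizes, not a description of any particular volume manager:

```python
# Sketch of why concatenation adds capacity but not parallelism:
# sequential blocks on a concatenated volume hit one LUN until it fills,
# while a striping volume manager round-robins them across all LUNs.

NUM_LUNS = 4
BLOCKS_PER_LUN = 1000

def concat_target(block):
    return block // BLOCKS_PER_LUN      # fill LUN 0 completely, then LUN 1, ...

def stripe_target(block):
    return block % NUM_LUNS             # spread consecutive blocks across LUNs

first_100 = range(100)
concat_luns = {concat_target(b) for b in first_100}
stripe_luns = {stripe_target(b) for b in first_100}
print(concat_luns)  # -> {0}: one LUN (and its queue) does all the work
print(stripe_luns)  # -> {0, 1, 2, 3}: striping keeps every LUN's queue busy
```

So a VMFS extent adds space the way `concat_target` does; it never behaves like `stripe_target`.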
I agree with BUGCHK: if you are trying to gain performance by extending the volume, I'm sorry to say that this will NOT help. It is much better to recreate the separate LUNs as a single one; if they are located on different disk arrays, you should consider making one big disk array. More spindles, more speed. Also check your block size: I have tested on an EONstor for the optimum block size, and the maximum (256KB) performed best. That did not surprise me, since the minimum block size of VMFS is 1MB anyway.
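For reference, on the ESX 3.x service console the VMFS block size is chosen at creation time with vmkfstools. The device paths below are placeholders, not real addresses from this thread; check the adapter:target:LUN:partition names on your own host before running anything:

```sh
# Create a VMFS3 volume labelled "bigstore" with a 1MB block size
# (the minimum; the options are 1m, 2m, 4m, 8m).
# vmhba1:0:0:1 is a placeholder for your own device path.
vmkfstools -C vmfs3 -b 1m -S bigstore vmhba1:0:0:1

# Span an additional LUN onto the existing VMFS -- this is what creates
# an extent. WARNING: any data on the added partition is wiped, and the
# extent cannot be removed later without destroying the whole VMFS.
vmkfstools -Z vmhba1:0:1:1 vmhba1:0:0:1
```

Note that the VMFS block size governs the maximum file size (a 1MB block size caps a VMDK at 256GB on VMFS3), while the 256KB figure above is the array's own stripe/block size tuned on the EONstor; the two are independent settings.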
Please also note the rule of one VMFS per LUN. SCSI-2 reservation locking in ESX locks the entire LUN, not a partition, so it is best to have only one VMFS per LUN (local storage is an exception). When using extents, you are gathering multiple LUNs, not multiple partitions per LUN, under one logical VMFS.
Removing an extent requires you to destroy the entire VMFS.
With these caveats, I do not advocate the use of extents. It is far better to use multiple LUNs with individual VMFS.
Extents are however great if all you have is small LUNs (<200GB) and want to make them one big LUN.
Edward L. Haletky, author of the forthcoming 'VMware ESX Server in the Enterprise: Planning and Securing Virtualization Servers', publishing January 2008, (c) 2008 Pearson Education
The idea behind the 100GB LUNs is really so I don't have to reserve all the space at once. SAN space is quite expensive, and I don't want to end up with a bunch of datastores holding a bunch of free space. If you create your LUNs on two different disksets and assign them to the same VMFS, you really shouldn't be involved with VMware or SAN provisioning.
The KB article you provided states that you shouldn't add the extent to the same datastore from more than one host, but rather add it with one host and rescan on the remaining hosts. I do realize that it doesn't stripe or technically offer any performance advantage. The idea is that you'll fill your datastore (obviously), so on a 400GB datastore made up of 100GB LUNs you'll have 200GB going out one HBA port and 200GB out another. Also, you don't have to assign all 400GB at once; you can do 100GB at a time as you migrate from physical machines to VMs.
You can't add more than one VMFS per LUN. I'm talking about adding more LUNs per VMFS. Also, once you add the extent why would you want to remove it?
Your Quote: "Extents are however great if all you have is small LUNs (<200GB) and want to make them one big LUN.", so, technically you're saying that extents are ok now.
I still haven't seen any evidence that extents are bad...
Extents are neither "bad" nor "good". Sometimes they just offer an ability you would otherwise miss out on. The REAL problem with extents is when you map everything together. You then have two possible problems:
1) If you mix LUNs with different speeds, you never know about the performance of your extended VMFS (it will differ depending on where on the VMFS you read/write); 2) If a single underlying LUN fails, you basically lose ALL data instead of only that LUN.
As long as you keep these in mind, extents will usually work pretty smoothly, provided each underlying LUN is used by one VMFS only. Problem 2) is easily solved if all of your LUNs run from a single set of spindles; indeed, if something DOES go wrong there, it usually hits the entire logical volume, and then it does not matter whether you used 4 LUNs or a single LUN on that logical volume to create your VMFS.
You can have more than one VMFS per LUN. It is actually very easy to do, just not recommended. Many people want to use extents to fill up the rest of a LUN they just resized: they add a partition, make it a VMFS, and use extents to extend the original VMFS to cover this partition. This is a very bad thing to do. If you are using full LUNs, it is not an issue.
Some administrators use extents as a temporary solution, hence the comment about removing extents: removal requires the VMFS to be completely deleted. If this is a permanent solution, however, it is not really an issue.
When you use extents you should be aware of the caveats above. They are neither good nor bad. If you are using them in a permanent fashion then they work as advertised. I once consulted with a company that had to gather together a bunch of 4GB LUNs to make a larger LUN for ESX. This worked for them but presented a number of performance issues that Erik already mentioned.
I think it really depends on the reliability of your SAN. Our EMC has the data spread out across so many different disks that the probability of losing a whole LUN is very, very small, so I go with extents to make things cleaner. If, say, you had a SAN where there was a greater potential for a LUN to die, then I would think keeping them separate would be beneficial.
Also, as was noted in one of the posts, it is important to make sure your SAN admin creates all your LUNs on the same type of storage. You don't want one on ATA and another on Fibre Channel SCSI. That would be bad. But that should go without saying.
It's also a personal preference. Each admin is going to go with a different setup, which is why all the options exist. I don't believe VMware would offer the ability to add extents, or especially make it so convenient to do so, if there were going to be consequences. Anyway, that's my two cents.
Discovered a new caveat: if you install ESX on a new box and forget to unplug the fibers, you could end up deleting your VMFS volumes (or at least marking the partitions as no longer there). I saw this problem at a customer's site last week. VMware support did great in recovering the lost VMFS volumes. However, one VMFS would not function completely: some VMs could start, others could not (address error). It turned out that that VMFS had been constructed out of two LUNs, extended together. It is NOT possible to retrieve this. Only the primary extent is recovered; the additional extents are lost forever.
Basically: another reason NOT to use extents, in my opinion. I would suggest that if you need to extend (a permanent extension), move the data away, destroy the VMFS, create a new, bigger one, and then move the data back.