Hi All,
My first question on the VMFS forums, so apologies if this one is a little basic or has been answered before (I have looked - honest!).
The question(s) I have are with regard to VMFS extents and whether to use them or not. The environment in question is made up of HP DL585 G5 servers, each with two HBAs connecting to a DMX3 SAN. When requesting storage we are given allocations of around 135GB.
I am aware of the general pros and cons of using extents, but because of the reasonably small LUN size in the environment, I was considering using extents to make up larger blocks of storage (1.2TB each, built from 9 LUNs). The problem is that I am a little concerned about any potential performance pitfalls.
My first question is how many VMs you should have per extent (I believe the number is between 5 and 16 per datastore, depending on the VMs). Am I getting my wires crossed in thinking that if I have an extent of 9 LUNs, then I shouldn't really have more than 20 VMs on it?
Secondly, can anyone confirm that there is only one I/O queue for each VMFS volume (the whole extent), no matter how many LUNs I have? I take it that no matter how many storage controllers and LUNs I have on the back end, it will make no difference to overall performance?
Finally, what is the impact on performance with regard to SCSI locks across multiple LUNs? I understand that if there is a file that traverses an extent, then SCSI reservations will take place across multiple LUNs. Will this have any significant impact? Should I be worried about excessive SCSI reservations, reservation conflicts, and bus resets making my extents inaccessible?
Thanks
Hello,
Welcome to the forums!
The question(s) I have are with regard to VMFS extents and whether to use them or not. The environment in question is made up of HP DL585 G5 servers, each with two HBAs connecting to a DMX3 SAN. When requesting storage we are given allocations of around 135GB.
Ask for no less than 512GB. If they only give you 135GB, complain, as 135GB is too small for anything worthwhile. Get the SAN team to change their allocations and refuse to use extents; they are a management nightmare.
I am aware of the general pros and cons of using extents, but because of the reasonably small LUN size in the environment, I was considering using extents to make up larger blocks of storage (1.2TB each, built from 9 LUNs). The problem is that I am a little concerned about any potential performance pitfalls.
There are no performance pitfalls when dealing with extents; there are, however, management pitfalls. To delete an extent you must delete the ENTIRE VMFS, not just the single extent.
My first question is how many VMs you should have per extent (I believe the number is between 5 and 16 per datastore, depending on the VMs). Am I getting my wires crossed in thinking that if I have an extent of 9 LUNs, then I shouldn't really have more than 20 VMs on it?
You want 12-15 VMs per VMFS. Extents are best avoided due to the management issues.
Secondly, can anyone confirm that there is only one I/O queue for each VMFS volume (the whole extent), no matter how many LUNs I have? I take it that no matter how many storage controllers and LUNs I have on the back end, it will make no difference to overall performance?
There are multiple I/O queues, one per LUN, but there is only one metadata location, on the first extent of the VMFS. This metadata is important because when it changes, the entire VMFS, including all extents, gets locked.
Finally, what is the impact on performance with regard to SCSI locks across multiple LUNs? I understand that if there is a file that traverses an extent, then SCSI reservations will take place across multiple LUNs. Will this have any significant impact? Should I be worried about excessive SCSI reservations, reservation conflicts, and bus resets making my extents inaccessible?
Any time the metadata is updated, the entire VMFS (including all extents) will get locked at the LUN level. This has the same impact as if you had one large LUN; extents do not bypass the locking performed.
It is best to size out your LUNs for 12-15 VMs per LUN and use multiple LUNs for multiple VMFS locations. Extents are not recommended due to the management issues surrounding them. However, if your SAN team is not going to give you larger LUNs (I would work this by going up to the appropriate manager), then you may be forced to use extents. Just remember that removing an extent requires you to destroy the entire VMFS, which will destroy everything on it. You never want your VMFS to be more than 80% full, either.
Size of VM = Size of VMDKs + Size of Memory + 15MB + 4GB for logs
Size of VMFS = 12 x Size of VM x 1.25 (80% full value)
1 LUN = 1 VMFS
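The rules of thumb above can be turned into a quick sizing calculation. A minimal sketch, assuming 12 VMs per LUN and the 80% fill target from this post (the helper names and example VM shape are mine, not from the thread):

```python
# Sizing rule of thumb from the post above:
#   Size of VM   = VMDKs + memory + 15MB overhead + 4GB for logs
#   Size of VMFS = 12 * Size of VM * 1.25  (keeps the datastore <= 80% full)

def vm_size_gb(vmdk_gb, mem_gb):
    """Estimated on-disk footprint of one VM, in GB."""
    return vmdk_gb + mem_gb + 15 / 1024 + 4

def vmfs_size_gb(vmdk_gb, mem_gb, vms_per_lun=12):
    """Recommended VMFS/LUN size for vms_per_lun VMs of this shape, in GB."""
    return vms_per_lun * vm_size_gb(vmdk_gb, mem_gb) * 1.25

# Example: guests with a 20GB VMDK and 2GB of RAM
print(round(vmfs_size_gb(20, 2), 1))  # 390.2 -- well above a 135GB LUN
```

Even for fairly small guests, the formula lands far above the 135GB allocations being handed out, which is the point: one 135GB LUN per VMFS doesn't hold 12-15 VMs.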
Use multiple VMFS volumes for your hosts to get the most benefit and avoid SCSI reservation issues.
Best regards,
Edward L. Haletky
VMware Communities User Moderator
====
Author of the book 'VMware ESX Server in the Enterprise: Planning and Securing Virtualization Servers', Copyright 2008 Pearson Education.
CIO Virtualization Blog: http://www.cio.com/blog/index/topic/168354
As well as the Virtualization Wiki at http://www.astroarch.com/wiki/index.php/Virtualization
Extents are logical extensions to a file system, usually implemented as concatenations of disk partitions or logical volumes. Extents are typically used to grow a file system beyond the constraints of its logical or physical device.
For instance, in ESX, you can grow a VMFS file system by adding extents to the original LUN (or Master Extent) on which the file system is based.
In the VMFS implementation it is possible for files to span multiple volumes. The implication is that a single write to a file traversing extents can result in SCSI reservations across multiple LUNs, with an associated degradation in overall performance.
In VMFS, all metadata is located on the master extent (typically the first LUN on which you created the datastore). If one of the extents goes offline, its data becomes inaccessible, e.g. entire .vmdk's, or in the worst case, those partial .vmdk's that span the extent. If the master extent goes offline, the entire datastore becomes inaccessible.
By default, VMFS will attempt to keep individual files confined to an extent. Nonetheless, considering that file data on a VMFS file system consists of more than just our defined .vmx and .vmdk's, you can see how, for example, a dynamically created swap file, or a dynamically growing snapshot file for a VM could potentially span extents.
Finally, VMFS issues SCSI bus resets rather than target or LUN resets, which can have a more global effect on the attached fabric. In a poorly designed environment employing extents and experiencing excessive SCSI reservations, reservation conflicts, and bus resets, this can result in extents becoming inaccessible.
My advice is thus to avoid using extents. If you need additional datastore space, create a new datastore. Plan storage allocation for your VM's associated data in an evenly distributed manner across multiple LUNs.
If you found this information useful, please consider awarding points for "Correct" or "Helpful". Thanks!!!
Regards,
Stefan Nguyen
iGeek Systems Inc.
VMware, Citrix, Microsoft Consultant
Thanks for the responses!
Texiwill, I will request larger LUNs as suggested, but I know the answer will be “How often do you plan to delete Extents?” Are there any further management issues other than deleting extents (the more reasons I have, the greater the chance of success)?
With regard to the I/O queues, I had read in a previous post http://communities.vmware.com/message/769106#769106 (3rd comment down, by BUGCHK) that there is one I/O queue per VMFS. I was looking to confirm whether this is true (if so, it would be another reason not to use extents).
Azn2kew, thanks for the post! I read through Tim Warden's white paper (http://www.las-solanas.com/storage_virtualization/esx_san_performance_guide.php), including the section on VMFS extents that you mentioned. What it doesn't state is how to best use extents if you are forced to have a large number of small LUNs!
From what I understand, it may be best to steer away from Extents if possible….
Hello,
Texiwill, I will request larger LUNs as suggested, but I know the answer will be “How often do you plan to delete Extents?” Are there any further management issues other than deleting extents (the more reasons I have, the greater the chance of success)?
It is more complex for both the virtualization administrator and the storage administrator; there is more to be aware of. Remember: 12-15 VMs per LUN, max.
With regard to the I/O queues, I had read in a previous post http://communities.vmware.com/message/769106#769106 (3rd comment down, by BUGCHK) that there is one I/O queue per VMFS. I was looking to confirm whether this is true (if so, it would be another reason not to use extents).
In the vmkernel there is one I/O queue per VMFS, but at the hardware level there is one queue per LUN.
From what I understand, it may be best to steer away from Extents if possible….
Absolutely.
Best regards,
Edward L. Haletky
VMware Communities User Moderator
====
Author of the book 'VMware ESX Server in the Enterprise: Planning and Securing Virtualization Servers', Copyright 2008 Pearson Education.
CIO Virtualization Blog: http://www.cio.com/blog/index/topic/168354
As well as the Virtualization Wiki at http://www.astroarch.com/wiki/index.php/Virtualization
I am aware of the general pros and cons of using extents, but because of the reasonably small LUN size in the environment, I was considering using extents to make up larger blocks of storage (1.2TB each, built from 9 LUNs). The problem is that I am a little concerned about any potential performance pitfalls.
I believe, though I've failed to find a quotable source for this, that there is a limit to the number of times you can extend a VMFS partition. While you can stretch a VMFS partition over 32 volumes, the act of extending can only be performed four times. So plan ahead.
In any case, it does sound like the storage team needs a little talking to. My assumption is that they had EMC PS carve up the DMX3 into 135GB LUNs at install time, because 135GB made sense for an existing application. Bringing VMware into the mix changes all that, and bringing PS back in isn't cheap.
Another reason is that a host can see only 256 LUNs in total. Since you want all of the hosts in the cluster to be able to see the same storage, that means that your entire cluster is limited to 256 LUNs. If you allocate small LUNs, then you're going to hit that threshold pretty quickly.
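Ken's 256-LUN ceiling is easy to quantify. A rough illustration using the numbers from this thread (the function name is mine; this only bounds addressable capacity, not performance):

```python
# How the 256-LUN-per-host limit interacts with LUN size. Since every host
# in the cluster must see the same storage, the whole cluster shares it.
MAX_LUNS_PER_HOST = 256

def max_addressable_tb(lun_gb):
    """Upper bound on storage the cluster can see with uniform LUN sizing."""
    return MAX_LUNS_PER_HOST * lun_gb / 1024

print(max_addressable_tb(135))  # 33.75 TB ceiling with 135GB LUNs
print(max_addressable_tb(512))  # 128.0 TB with 512GB LUNs
```

With 135GB allocations the entire cluster tops out under 34TB of visible storage before any LUN can even be presented, which is another argument to take to the SAN team.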
Ken Cline
Technical Director, Virtualization
TVAR Solutions, A Wells Landers Group Company
VMware Communities User Moderator
Here are some findings about datastores. Each datastore is actively connected over one of the paths presented through the HBAs, which is how your ESX host communicates with it. If you extend your LUNs to a bigger size, for example 1TB or 2TB each, you may end up storing a lot of VMs per datastore, and that may not be a good idea for performance. You should always calculate and monitor to make sure you don't hit performance issues when you try to utilize a big storage LUN. On the back-end SAN side, you of course have to follow the usual best practices when defining your RAID groups, spindles, MetaLUNs, etc. You may also need to plan the physical HBA connections from each ESX host to your fibre switches.
