VMware Cloud Community
olegarr
Enthusiast

Thin Provisioning through vSphere vs. Thin Provisioning through SAN, or Using Both at the Same Time

Hello:

I’m going to use Thin Provisioning and am wondering what will be more efficient…

I have a SAN that allows me to allocate thin provisioned LUNs for my ESX hosts, and at the same time I can use Thin Provisioning through vSphere (when building/migrating VMs).

What would be the more professional way to use Thin Provisioning: through the SAN, through VC, or through both at the same time?

Thanks

17 Replies
mikepodoherty
Expert

The advice we're getting from the SAN engineers is to thin provision via the SAN and not through vSphere. Of course, they would prefer you use thick provisioning, but when we indicated we were determined to test thin provisioning, they explained that their experience supporting customers on their hardware shows that SAN-based thin provisioning is better.

We anticipate testing this after the first of the year.

Rumple
Virtuoso

Thin provisioning on the SAN is probably a better solution, as it takes any processing load off the VMFS and ESX hosts...

olegarr
Enthusiast

Thank you very much for your replies...

Could there be any benefits/issues with using both solutions (thin provisioning through the SAN and through VC) at the same time?

Thanks

mikepodoherty
Expert

Maybe for test systems but I wouldn't try it for production. As Rumple points out, you don't want to add that processing load to your hosts.

We're in the process of upgrading to significantly larger systems because we've found that small hosts, while they work, limit the effectiveness of HA and DRS. We'll try Fault Tolerance once multiple processors are supported.

HTH

Mike

s1xth
VMware Employee

My opinion: use both. There is no real reason why you can't do both in production. Many blog sites have posted that the overhead is not much, especially on newer Nehalem hardware. On older hardware I would not do both; on newer hardware I do.

http://www.virtualizationimpact.com http://www.handsonvirtualization.com Twitter: @jfranconi
Rumple
Virtuoso

Guess it comes down to efficiency...

Does running thin provisioning on top of a thin provisioned volume actually provide any benefit at all?

I know that when running backups to a dedupe volume you normally disable the backup software's dedupe... I would expect running both isn't going to make any difference at all...

DSeaman
Enthusiast

According to this article, vSphere thin provisioning does not result in any performance hits:

http://blogs.vmware.com/performance/2009/11/performance-study-of-vmware-vstorage-thin-provisioning-....

If your array can do thin provisioning, then I'd use thick disks with ESX. This way you can use 'sdelete' to zeroize freed space in your VM if your SAN array supports 'zero page detection' (like HDS and 3PAR do) to reclaim previously provisioned space. If your array doesn't feature zero page detection, I'd still use thick disks so you only need to monitor your array's disk space and not worry about VMFS space.
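To make the "zeroize freed space" step concrete, here is a minimal Python sketch of the zero-fill idea, roughly what a tool like sdelete does inside the guest. The file name, chunk size, and free-space margin are illustrative assumptions, not sdelete's actual parameters:

```python
# Rough zero-fill sketch: write zeros over the guest's free space, then delete the
# file. On an array with zero page detection, the zeroed blocks can then be reclaimed.
import os
import shutil

def zero_free_space(path="zerofill.tmp", chunk_mb=64, keep_free_mb=512):
    """Fill free space with zeros until only keep_free_mb remains, then remove the file."""
    chunk = b"\x00" * (chunk_mb * 1024 * 1024)
    try:
        with open(path, "wb") as f:
            while shutil.disk_usage(".").free // (1024 * 1024) > keep_free_mb:
                f.write(chunk)          # zeros land on previously used, now-freed blocks
                f.flush()
                os.fsync(f.fileno())
    finally:
        if os.path.exists(path):
            os.remove(path)             # the underlying blocks stay zeroed after deletion

if __name__ == "__main__":
    zero_free_space()
```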

Derek Seaman
AndreTheGiant
Immortal

> Could there be any benefits/issues with using both solutions

IMHO, there isn't any benefit in using both solutions, but there also isn't any big performance issue.

If the storage thin provisioning works the right way, a thin VMDK will use only the allocated blocks (which are usually the used blocks).

Note that there is the same problem with thin VMDK disks: when you delete some files from the guest OS, the space will not be released and you need a tool like sdelete.

Andre

Andrew | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro
s1xth
VMware Employee

Andre... you bring up some good and very valid points. It just comes down to how you want to manage the infrastructure AND whether your array can detect empty blocks when using thick disks on a thin provisioned volume. I will have to test my EQL PS4 on this to see if in fact it can see zero blocks or if it will allocate the entire volume.

http://www.virtualizationimpact.com http://www.handsonvirtualization.com Twitter: @jfranconi
RParker
Immortal

Well, I have a different opinion...

A) There is NO overhead from VMFS / ESX to provide thin provisioning. All it does is use only what the OS actually NEEDS; until the OS needs more, there will be NO overhead. Then the file system in the OS grows in 2GB increments until the limit is reached.

B) SAN thin provisioning is the same; there is no difference. You allocate by block.

In reality there is no difference between the two. However, the SAN MIGHT give a slight advantage since that's where ALL the VMs reside and it can better allocate and manage the storage, but in the end a 40 GB VM that is thin provisioned is thin provisioned; it doesn't matter WHAT device did it. Space is space, and it takes no overhead to manage. The ONLY time it would need resources is when the file is growing, and that takes milliseconds to complete.
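As a small illustration of the "space is space" point, here is a toy Python model of thin allocation: blocks are backed only when first written, so the only work happens at grow time. The block size and disk sizes below are made up for the example, not VMFS's actual values:

```python
# Toy thin-provisioning model: storage is allocated lazily, one block at a time,
# only when the guest first writes to that block.
class ThinDisk:
    def __init__(self, provisioned_gb, block_mb=8):
        self.provisioned_gb = provisioned_gb
        self.block_mb = block_mb
        self.allocated = set()                       # block numbers already backed by storage

    def write(self, offset_mb, length_mb):
        first = offset_mb // self.block_mb
        last = (offset_mb + length_mb - 1) // self.block_mb
        new = set(range(first, last + 1)) - self.allocated
        self.allocated |= new
        return len(new)                              # grow events: the only time overhead exists

    def allocated_gb(self):
        return len(self.allocated) * self.block_mb / 1024

disk = ThinDisk(provisioned_gb=40)
grew = disk.write(offset_mb=0, length_mb=10 * 1024)  # guest writes 10 GB into a 40 GB thin disk
print(f"provisioned {disk.provisioned_gb} GB, allocated {disk.allocated_gb():.1f} GB, "
      f"blocks grown on this write: {grew}")
```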

RParker
Immortal

> the space will not be released and you need some tools like sdelete.

Or Guest OS Defrag . . . .

RParker
Immortal

> According to this article, vSphere thin provisioning does not result in any performance hits:

and this one..

http://www.virtualpro.co.uk/tag/thin-provisioning/

VMware environment on enterprise SAN technologies from the likes of EMC or NetApp to notice minimal performance impact with thin provisioning as SAN memory caches help take up the strain.

DSeaman
Enthusiast

RParker,

I would disagree with you, depending on the array in question. In general you are right; it doesn't really matter whether you have the array do the TP or ESX. However, as I previously mentioned, some arrays can do zero page detection and reclaim previously allocated space, returning it to the pool of unused space. Maybe you can reclaim space with TP VMDKs (Storage vMotion?), but it's probably not as easy as having the array do it. For arrays with zero page detection you just need to run sdelete inside the guest VM (you could schedule it, too) and unused disk space is automatically reclaimed. Couldn't be much easier.

Very few arrays have this feature (HDS USP, 3PAR, IBM XIV), so the vast majority of people can't take advantage of array reclamation. Also, I don't see how a guest OS defrag does anything to reclaim space, as it does not zeroize blocks.

My concern with using both SAN and ESX TP is that now you need to monitor TWO disk space allocation thresholds. Why make extra work for yourself? If your SAN can do TP, then use that. If your disk subsystem can't do TP, then use ESX. I really like ESX TP on DAS, as it really saves on disk space.
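To show what the "TWO thresholds" concern looks like in practice, here is a hypothetical monitoring sketch. The layer names, capacities, and 80% warning level are invented; real numbers would come from vCenter and from the array's own management tools:

```python
# With thin-on-thin, both the VMFS datastore and the array pool behind it can fill up,
# so both usage figures have to be watched.
def check_thresholds(layers, warn_pct=80):
    alerts = []
    for name, used_gb, capacity_gb in layers:
        pct = 100.0 * used_gb / capacity_gb
        if pct >= warn_pct:
            alerts.append(f"{name}: {pct:.0f}% used ({used_gb} of {capacity_gb} GB)")
    return alerts

# Made-up example: a thin-provisioned datastore and the array pool it lives on.
layers = [
    ("VMFS datastore DS01", 820, 1000),   # thin VMDKs growing inside the datastore
    ("Array pool POOL-A",  4100, 5000),   # thin LUNs growing inside the pool
]

for alert in check_thresholds(layers):
    print("ALERT:", alert)
```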

Derek Seaman
s1xth
VMware Employee

DSeaman - When I deploy a VM that is TP, I always set disk alarms in vCenter to alert me to high disk usage on those VMs, so I know what is going on and don't need to check it every day. Same on my SAN side.

http://www.virtualizationimpact.com http://www.handsonvirtualization.com Twitter: @jfranconi
DSeaman
Enthusiast

Yup, alarms are an excellent way to monitor disk usage. But in large environments with dozens or hundreds of LUNs, various arrays, etc., doing it in one place may make more sense. In smaller environments where you can easily configure both, it's probably not a big deal. The more complex you make a system, the more likely human error becomes. So I like to keep things as simple as possible, where practical.

Derek Seaman
AndreTheGiant
Immortal

> I will have to test my EQL PS4 on this to see if in fact it can see zero blocks or if it will allocate the entire volume.

I've only done some tests with a PS5xxx.

But with "normal" thick disk (not the eagerzeroedthick disk) the eql thin provision works fine.

I suppose that block size will be the same of VMFS (cause eql see only the blocks and not the filesystem over them).

But probably there is the same problem also with vmdk thin disk cause the filesystem is still VMFS (unless you are using a NFS datastore).

Andre

Andrew | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro
Rumple
Virtuoso

The only thing I could see making a difference is that, with a large number of VMware thin provisioned disks on VMFS volumes, you will be creating SCSI reservations each time a disk grows. Couple this with an oversubscribed VMFS volume and lots of VMs, and you might see a performance impact due to too many SCSI reservations.
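To put a rough number on that, here is a back-of-the-envelope Python sketch: each time a thin VMDK grows by one VMFS block the host needs a metadata lock (historically a SCSI reservation), so the number of grow events scales with the data written divided by the block size. The VM count, data written, and block size below are purely illustrative:

```python
# Rough count of grow events (each one needing a VMFS metadata lock / SCSI reservation)
# generated by a set of thin VMDKs as they fill up. All figures are illustrative.
def grow_events(vm_count, written_gb_per_vm, vmfs_block_mb=8):
    events_per_vm = written_gb_per_vm * 1024 // vmfs_block_mb
    return vm_count * events_per_vm

# 30 thin-provisioned VMs each writing 20 GB of new data on one oversubscribed VMFS volume.
print(grow_events(vm_count=30, written_gb_per_vm=20))   # -> 76800 grow events on that LUN
```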
