VMware Cloud Community
csimwong
Contributor

Lab Manager Chain Performance

I understand that performance will decrease as the chain length of a VM configuration increases, but let's say I have a single base VM configuration saved in the Library with a chain length of 0, and 15 people have checked out a copy of this particular VM. Will these 15 people, working at the same time, see a performance issue since each of their checked-out configurations is chained to the same base VM in the Library? Aside from each of the VMs using up the ESX host's resources, my guess is that there could be some slower performance due to disk I/O since each VM is chained to the base. Would it be any different if there were 15 different VM configurations in the Library and each of the 15 people uniquely checked out one of those for themselves?

Any feedback would be greatly appreciated. Thanks.

4 Replies
skishi
Expert

Chain length is NOT a big factor in VM performance. The performance impact of COW disks is complex, with potential performance hits and gains, and at times they can perform better than monolithic disks. You should test the performance of Lab Manager VMs in your specific use case -- most of the time the performance impact is negligible.

All shared files in Lab Manager are open only for read, not write. Because of this, the performance impact of concurrency is minimal. And if the storage array serves the shared files from its cache rather than from disk, you get a big boost in performance.

Here's a more general overview of performance of the chain of COW disks created and managed by Lab Manager: There will be a small delay on startup during metadata caching, performance hits during certain I/O operations, and potential performance gains from storage array caching.

Metadata cached into memory on VM startup describes the sparse structure of the COW disk -- which file to hit to get which data. Since this metadata can get quite big, we only cache a portion of it. When a VM does a virtual SCSI read and we hit the metadata cache, each virtual read results in a single physical read. If we have a cache miss, however, we first have to do a metadata read before doing the actual data read. Hence, we may do more than one physical read per virtual read if a VM is reading from a wide range of disk sectors.
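To make the hit/miss behavior concrete, here is a hypothetical Python sketch (not Lab Manager code, just an illustration of the mechanism described above): each virtual read must first find which file in the chain owns the block; a metadata-cache hit costs one physical read, while a miss costs an extra metadata read first. The `CowChain` class and its block maps are invented for this example.

```python
# Hypothetical model of reads against a chain of COW disks.
class CowChain:
    def __init__(self, chain):
        # chain[0] is the base disk; later entries are COW deltas.
        # Each element maps block number -> data.
        self.chain = chain
        self.metadata_cache = {}   # block -> index of the owning file
        self.physical_reads = 0

    def _lookup(self, block):
        """Find which file in the chain owns the block (newest wins)."""
        for i in range(len(self.chain) - 1, -1, -1):
            if block in self.chain[i]:
                return i
        raise KeyError(block)

    def read(self, block):
        if block in self.metadata_cache:
            owner = self.metadata_cache[block]   # cache hit: 1 physical read
        else:
            self.physical_reads += 1             # cache miss: metadata read first
            owner = self._lookup(block)
            self.metadata_cache[block] = owner
        self.physical_reads += 1                 # the actual data read
        return self.chain[owner][block]

base  = {0: "base0", 1: "base1"}
delta = {1: "delta1"}                  # block 1 was overwritten in the COW file
disk = CowChain([base, delta])

print(disk.read(1))          # miss: 2 physical reads, returns "delta1"
print(disk.read(1))          # hit: only 1 more physical read
print(disk.physical_reads)   # 3
```

Reading scattered blocks keeps missing the cache, which is why a VM touching a wide range of sectors can see more than one physical read per virtual read.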

Another potential performance impact of COW disks is copying and SCSI locking during writes. When a block on the disk is written to for the first time, it is copied into the COW disk file. The sparse COW disks grow as data is written to the disk (in 16MB chunks), and the files must be locked while they are being grown. This is not seen with normal ESX VMs, which have a monolithic base disk. These impacts will be most apparent when writing to a wide range of disk sectors or during continuous writes that cause the COW disks to grow rapidly.
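The write path above can be sketched the same way. This is a hypothetical model, not Lab Manager internals: the first write to a block copies it into the COW file, and the sparse file grows in 16MB chunks, with each grow standing in for a file-lock event. The block size constant is an assumption for illustration.

```python
# Hypothetical model of the COW write path: copy-on-first-write plus
# growth of the sparse file in 16MB chunks (each grow implies a lock).
CHUNK = 16 * 1024 * 1024   # growth granularity mentioned in the post
BLOCK = 4096               # assumed block size, for illustration only

class CowDisk:
    def __init__(self, parent):
        self.parent = parent     # base disk: block -> data (read-only)
        self.blocks = {}         # blocks already copied into this COW file
        self.allocated = 0       # bytes allocated to the sparse file
        self.grow_locks = 0      # times the file was grown (i.e., locked)

    def write(self, block, data):
        if block not in self.blocks:
            # copy-on-write: this block enters the COW file for the first time
            used = len(self.blocks) * BLOCK
            if used + BLOCK > self.allocated:
                self.allocated += CHUNK   # grow the sparse file -> lock it
                self.grow_locks += 1
        self.blocks[block] = data

disk = CowDisk(parent={0: "old"})
disk.write(0, "new")
print(disk.grow_locks)   # 1 -- the first write forced a 16MB grow
disk.write(0, "newer")   # rewriting an existing COW block: no grow, no lock
print(disk.grow_locks)   # still 1
```

Rewrites of already-copied blocks skip both the copy and the grow, which is why steady writes to new sectors hurt more than repeated writes to the same ones.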

Then there is a potentially large performance benefit to using COW disks due to storage array caching. Many storage arrays cache commonly used files in memory, so commonly used/shared COW disks may be read from memory rather than disk. Lab Manager is a use case that can take strong advantage of this benefit.
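A back-of-the-envelope sketch of why sharing one base helps, under an idealized assumption (an unbounded array cache; the numbers are illustrative, not measurements): 15 VMs chained to one base re-read the same blocks, so only the first VM pays for physical reads, whereas 15 independent disks share nothing.

```python
# Hypothetical cache model: count physical reads (cache misses) when
# num_vms VMs each read blocks_per_vm blocks, with or without a shared base.
def physical_reads(num_vms, blocks_per_vm, shared):
    cache = set()    # blocks the array has already read into memory
    misses = 0
    for vm in range(num_vms):
        for b in range(blocks_per_vm):
            # shared base: all VMs read the same block IDs;
            # independent disks: every VM has its own distinct blocks
            key = b if shared else (vm, b)
            if key not in cache:
                misses += 1
                cache.add(key)
    return misses

print(physical_reads(15, 1000, shared=True))    # 1000  -- base cached once
print(physical_reads(15, 1000, shared=False))   # 15000 -- nothing shared
```

This is the scenario in the original question: 15 checkouts of one Library configuration can actually be friendlier to the array's cache than 15 unrelated configurations.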

swamy
Enthusiast

If the VM is on local disk, does it provide any benefit?

I know that in practice most people use SAN or iSCSI for this, but what if my VM is on local disk?

skishi
Expert

From an application standpoint there is no benefit to local disk. In fact, it will create a maintenance nightmare as you grow your ESX server pool.

From a performance standpoint the answer is "it depends" (as you can expect). It's safe to say, however, that in a well-architected iSCSI or FC implementation you will likely get better performance than with local disk. Storage arrays provide a lot of benefits from purpose-designed hardware, distribution of data across spindles, better data path management, caching, etc.

oreeh
Immortal

FYI: this thread has been moved to the Performance forum.

Oliver Reeh

VMware Communities User Moderator (http://communities.vmware.com/docs/DOC-2444)
