Is there anything else that can protect a datastore from running out of space besides the persistent warning? Something like a soft limit that, once hit, prevents the datastore from overrunning the hard limit.
We have a customer whose runaway jobs (usually log files) have blown up his space.
Thanks
Don't promise a datastore space it doesn't have. This is a problem for people who thin provision everywhere and then run into major overcommit issues. Follow good practices - never be 100% committed, so there is headroom for snapshot overhead - and you can't run into that problem.
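The "never 100% committed" rule is simple arithmetic: total space promised to thin disks versus physical capacity. A toy check (the disk sizes and datastore capacity below are invented for illustration):

```python
def commitment_ratio(provisioned_gb, capacity_gb):
    """Fraction of the datastore's capacity already promised to thin disks."""
    return sum(provisioned_gb) / capacity_gb

# Hypothetical datastore: 2 TB capacity, three thin-provisioned disks.
vm_disks_gb = [800, 600, 700]   # GB promised to each thin disk
ratio = commitment_ratio(vm_disks_gb, 2048)
print(f"committed: {ratio:.0%}")  # over 100% = overcommitted, no snapshot headroom
```

If the ratio is at or above 1.0 you are overcommitted and a few runaway guests can fill the datastore with no warning margin left.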
Monitoring - either by software or by human.
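For the software side, the "soft limit" the original poster asked about can be approximated with a simple free-space poller. A minimal sketch - the thresholds are made up, and the path would be whatever mount you care about:

```python
import shutil

# Hypothetical soft/hard thresholds: warn at 80% used, alarm at 90%.
SOFT_LIMIT = 0.80
HARD_LIMIT = 0.90

def check_space(path):
    """Return 'ok', 'soft', or 'hard' for the filesystem holding `path`."""
    usage = shutil.disk_usage(path)
    used_fraction = (usage.total - usage.free) / usage.total
    if used_fraction >= HARD_LIMIT:
        return "hard"
    if used_fraction >= SOFT_LIMIT:
        return "soft"
    return "ok"

# Example: check the current filesystem and feed the result into alerting.
print(check_space("."))
```

Run something like this from cron or your monitoring system so the soft threshold fires long before the datastore actually fills.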
If a datastore is running out of space because log files are not behaving themselves, then there is only one answer:
- fix the problem at the root - or throttle the logging if you find out that you cannot suppress a flood of false alarms.
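Throttling inside the guest application is often the easiest fix. As a sketch, if the runaway jobs log via Python's standard logging module, a rotating handler bounds the total space the log can ever take (file name and sizes here are just for illustration):

```python
import logging
from logging.handlers import RotatingFileHandler

# Cap the log at 5 files x 10 MB = 50 MB total, no matter how hard
# the application floods it; old records are rotated out, not accumulated.
handler = RotatingFileHandler("app.log",
                              maxBytes=10 * 1024 * 1024,
                              backupCount=4)
logger = logging.getLogger("app")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("disk usage for this log is now bounded")
```

Most logging frameworks (log4j, syslog-ng, logrotate on the OS side) offer an equivalent size cap, so the same idea applies outside Python.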
Yes.... you can group your datastores into a "Datastore Cluster", and when Storage DRS is enabled vSphere can automatically migrate single VMs or even individual vDisks from one datastore to another where more resources are available.
But to me it looks like simply avoiding thin provisioning is the way to go in your situation.
Regards,
Joerg
