25TB isn't much on a per-host basis at all if you're talking about multiple large file servers (>4TB), multiple Exchange servers, or multiple database servers. Since it's a per-host limit, i.e. the combined open capacity of all VMs running on a host, not a per-VM limit, it is a very small limit indeed. When running Oracle RAC on vSphere the recommended configuration is to use VMFS, which means all hosts in the RAC cluster share all the VMDKs on the VMFS; therefore all data for the RAC must total less than 25TB, and you couldn't, for example, run other VMs on those hosts. These days 25TB is nowhere near enough, especially when a single VMFS volume can be 64TB.
I'm not arguing there's no use case for 25TB of active VMFS per host; I too work with many terabytes of data on my hosts. My point is more that VMware seems to be assuming people use RDMs or NFS in situations where that much data is required. If this were such an issue, why wouldn't VMware have a resolution? I agree they aren't accounting for the (in the grand scheme of things) relatively rare cases needing >25TB of active VMFS per host, but maybe they have a good reason for it, or they're simply assuming people aren't using VMFS for file servers of that size (which isn't good practice anyway).
The default heap size was further increased in ESXi 5.0 Patch ESXi500-201303001 to 640MB, which should allow for 60TB of open virtual disk capacity on a single ESX/ESXi host.
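For anyone hitting this limit, here's a rough sketch of how you'd check and raise the heap on an ESXi 5.x host, plus a back-of-the-envelope estimate. The ~100GB-of-open-VMDK-per-MB-of-heap ratio is my own approximation inferred from the published figures (256MB → ~25TB, 640MB → ~60TB), not an official VMware formula:

```shell
# On an ESXi 5.x host you can inspect and raise the VMFS heap with
# (requires host access; a reboot is needed for the change to apply):
#   esxcli system settings advanced list -o /VMFS3/MaxHeapSizeMB
#   esxcli system settings advanced set  -o /VMFS3/MaxHeapSizeMB -i 640

# Rough estimate (assumption: ~100GB of open VMDK capacity per MB of
# heap, inferred from 256MB -> ~25TB and 640MB -> ~60TB):
open_tb=40                            # open virtual disk capacity in TB
heap_mb=$(( open_tb * 1024 / 100 ))   # ~100GB addressable per MB of heap
echo "~${heap_mb} MB of VMFS heap for ${open_tb} TB open"
```

With the patched 640MB default you'd have headroom for this 40TB example, but anything past ~60TB per host would still be out of reach on VMFS.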