VMware Cloud Community
knuter
Enthusiast

VMFS5 max heap size?

Hi all,

Does anyone know what the max heap size for VMFS5 is or if it shares the maximum for VMFS3?
I've found this KB article: http://kb.vmware.com/kb/1004424 but I'm not sure whether it applies only to VMFS3 volumes or to both VMFS3 and VMFS5.

aravinds3107
Virtuoso

Welcome to the Community,

In the KB you mentioned, the maximum heap size for VMFS5 is given as 256 MB and the default is 80 MB:


In ESXi 5.x, the maximum heap size is 256 MB. This allows a maximum of 64TB of open storage.

Cooldude09
Commander

The same KB mentions it: "In ESXi 5.x, the maximum heap size is 256 MB. This allows a maximum of 64TB of open storage." :)

knuter
Enthusiast

Thanks :)

Yes, for ESXi 5.x and VMFS3 I understand the max heap size is 256 MB.
So you're saying that "VMFS3.MaxHeapSizeMB" should be thought of as "VMFSx.MaxHeapSizeMB", since it applies to both VMFS3 and VMFS5?

Cooldude09
Commander

You got it :)

knuter
Enthusiast

Great! Thanks :)

Cooldude09
Commander

Feel free to mark it as Helpful or Correct if you found it useful :)

aravinds3107
Virtuoso

Yes. When you look at the advanced VMFS parameters on your host you should see this setting.
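Just to add, you can also check or change it from the ESXi Shell with esxcli. This is from memory, so please double-check the option path on your build:

# show the current VMFS heap setting (the VMFS3 name also covers VMFS5 volumes)
esxcli system settings advanced list -o /VMFS3/MaxHeapSizeMB

# raise it to the 256 MB maximum (a host reboot is needed for it to take effect)
esxcli system settings advanced set -o /VMFS3/MaxHeapSizeMB -i 256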

deebsr
Contributor

Sorry to bump this older thread, but I have some questions about this issue.

Checking the same KB article, it seems that VMware has updated it.

http://kb.vmware.com/kb/1004424

As well as here: http://www.boche.net/blog/index.php/2012/09/12/monster-vms-esxi-heap-size-trouble-in-storage-paradis...

It now states that the max is 8 TB by default and can be increased up to 25 TB! 25 TB per host (which also translates to per VM) is a major limitation.

I just don't see VMware thinking this limitation is OK, as most "monster" VMs (like Exchange and DB servers) could easily exceed it.

I notice that this article http://virtualkenneth.com/2011/02/15/vmfs3-heap-size-maxheapsizemb/ explains that the block size of the VMFS changes these limitations.

Is this still true in ESXi 5.0? Could we just set up a new VMFS datastore as VMFS3 with an 8 MB block size first and then upgrade it to VMFS5 (which would supposedly keep the 8 MB block size) to get around the 25 TB limit?

Thanks for your help

smishch
Contributor

Hi, all.

I have the same question. I found a way to create a VMFS-5 datastore with an 8 MB block size and an extent larger than 2 TB (create VMFS-3, upgrade, increase the size; rough steps below). But would it help with the 25 TB per host limit on ESXi 5.0? A quick internet search doesn't give me an answer.
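Roughly, the steps are something like this (the device path and label here are just placeholders, and I'm writing from memory, so verify the exact syntax before trying it):

# create a VMFS-3 datastore with an 8 MB block size on the empty LUN
vmkfstools -C vmfs3 -b 8m -S BigDS /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx:1

# upgrade it in place to VMFS-5; the 8 MB block size is kept
esxcli storage vmfs upgrade -l BigDS

# then grow the extent past 2 TB from the vSphere Client (or with vmkfstools -G)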

It would be nice to get answer in this thread. Thanks anyway.

deebsr
Contributor

Definitely seems that there has been an update to the information regarding this heap size issue. Only problem is that the level of detail is somewhat vague.

To me this is a major issue, as 25 TB is a very small amount of storage for one host (which also translates to the max for one VM) to support. I just don't see how VMware could be happy with this, unless there is more info about it that we just don't see.

They say it's 25 TB of "active" storage. Does that mean a max of 25 TB of total reads/writes happening at any given time? In other words, you could have, say, 100 TB of VMDKs but only 25 TB can be read from or written to at once?

deebsr
Contributor

Just to add, I found this article: http://longwhiteclouds.com/2012/09/17/the-case-for-larger-than-2tb-virtual-disks-and-the-gotcha-with...

So it sounds like this is the limit: 25 TB, that's it!

At this moment it seems that Hyper-V 2012 does not have such a low limit. Hopefully VMware will get their act together quickly and resolve this problem!

smishch
Contributor

Not quite. I'm testing an 8 MB block size VMFS-5 (upgraded from VMFS-3). Here is what I observe:

1. After starting a VM, the host reserves some memory from the heap for its VMDKs.

2. Once you start writing/reading the disks, the host reserves some more memory (for previously unmapped blocks).

3. If you have just 1 VM, this limit may be larger (I got about 40 TB written on VMFS-5 with a 1 MB block size).

4. Once the host has mapped all it can (heap memory exhausted), the VM gets stuck on I/O.

So you can attach any number of VMDKs to a VM, but while accessing them you will get "out of memory" once you have accessed roughly that 25 TB limit. You can reset the VM, or migrate it to another host (that seems to work), to release the reserved memory and start the reservations from the beginning.

With a VMFS-3 volume with an 8 MB block size upgraded to VMFS-5 it looks better. I have already accessed about 45 TB of data and the host has still only used about 34 MB of heap. So it can be a way around this limit, but I don't know how VMware Support would react to such a construction :D

deebsr
Contributor

Nice work!

So it looks like this limitation is definitely about the "active" storage I/O in use. Would this be correct? I wonder if the host has some sort of way to release this heap memory as the blocks become dormant again?

On my test system (actually it's a system that we have yet to put into production), I set up a VM with about 25 TB using the VMFS5 1 MB block size.

I did not see a single error or problem in any of the logs described in the VMware KB. Based on that KB, I would assume I should see issues with the default of 80 MB and 25 TB of storage in use.
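If anyone wants to check the same thing, something along these lines against the vmkernel log should catch any heap-related warnings (the exact message text in the KB may differ, this is just a broad filter):

grep -i heap /var/log/vmkernel.log | grep -i vmfs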

I hope VMware will release an update to that KB clarifying a bit further when this issue might come into effect.

smishch
Contributor

First of all, from my tests it looks like 1 host with a 256 MB heap can handle 1 VM with about 40 TB of data (if you run more VMs this limit can drop to 25 TB).

>So it looks like this limitation is definitely the "active" storage IO in use.

I think "was accessed" is a better description.

>I wonder if the host has some sort of way to release this heap memory as the blocks become dormant again?

I don't know, but from what I experienced on a working system, my impression is that no, ESXi doesn't clear the heap by itself, but you can clear it by:

1. restarting the VM on any host, including the running one (example below)

2. migrating the VM to another host (the target host starts to fill its heap, the source releases its own)
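For the restart, something like this from the ESXi Shell should do it (just an illustration, the Vmid comes from the first command):

# list registered VMs and note the Vmid of the affected one
vim-cmd vmsvc/getallvms

# hard-reset it; in my tests this releases the heap memory reserved for its disks
vim-cmd vmsvc/power.reset <Vmid>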

>I did not see a single error or problem in any of the logs described in  the Vmware KB. I would assume based on that KB that I should see issues  with the default of 80MB and using 25TB of storage.

You may try to look at the heap usage with

memstats -r heap-stats | grep "\(vmfs\)\|\(size\)"

on ESXi.

name   dynGrow lowerLimit upperLimit reserved numRanges dlMallocOvdh used      avail   size      max       pctFreeOfCur maxAvail pctFreeOfMax lowPctFreeOfMax
vmfs3  1       0          -1         0        20        2448         266193312 2243120 268436432 268436432 0            2243120  0            0

At this point my VM was already dead :)

So I tried to kill a VM on VMFS-5 with an 8 MB block size the same way, but as I expected, after accessing about 72 TB the heap usage is still only about 50 MB.

MichaelW007
Enthusiast

Just be aware that there are limitations around VAAI when you're using a VMFS3 volume that has been upgraded to VMFS5. With a 1 MB block size, VMFS5 will support up to 25 TB active per host, as in my blog article. This is due to the number of 1 MB blocks and the heap size, so yes, with a larger block size more storage could effectively be supported. It is common practice to reformat datastores as freshly created VMFS5 to ensure VAAI effectiveness. As with a lot of things, there is no one right answer or only one solution. Take a look at my article on longwhiteclouds and provide some feedback, especially around the alternatives I've proposed that will work around the problem. I'm interested in your thoughts on this and also on the argument to support >2TB per VMDK.
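As a rough back-of-the-envelope illustration, working backwards from the published numbers rather than from any documented per-block figure:

256 MB heap for ~25 TB of open storage at a 1 MB block size  =>  roughly 10 bytes of heap per open file block
so with 8 MB blocks the same 256 MB heap covers about 8x the open storage, on the order of 200 TB

Treat the per-block overhead as an inference from those two numbers, not a VMware-documented value.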

bpadams52
Contributor

In your ESXi host > Configuration tab > Advanced Settings > Mem, does anyone know how the Mem.AdmitHeapMin value affects VMs that are receiving "Cannot Allocate Memory" errors when powering on? The description of this value is "Free heap space required to power on virtual machine", with a minimum of 256, a maximum of 10240 and a default of 1024.

I have 25+ TB presented to each host, which results in these error messages caused by heap size issues. Will lowering this value be a workaround for the memory allocation errors?

MichaelW007
Enthusiast

Lowering that value won't help. The only solutions that are currently available are documented in my blog article - http://longwhiteclouds.com/2012/09/17/the-case-for-larger-than-2tb-virtual-disks-and-the-gotcha-with...

bpadams52
Contributor

Is there a command or any way to view VMFS heap usage and availability statistics?

PhillyDubs
Enthusiast

Try this from the CLI:

~ # vsish
/> cat /system/heaps/vmfs<press Tab here to auto-complete the rest of the heap name>/stats

This should give you an output of the VMFS heap memory statistics.

Regarding 25 TB not being enough... that is an awful lot of storage that isn't NFS or RDM.

VCP5