VMware Communities
stevecs
Enthusiast

vmware SVGA memory size (>8GB) and General Performance Improvement Question (host mem bandwidth?)

 

I have/use VMs as my primary desktop (my host is Ubuntu 22.04 or Windows Server 2022 and 2019). I am a *VERY* heavy multi-task worker and have numerous applications and VMs running. On a single VM I have the screen spanned across five 2560x1440 displays (in vertical mode), so 7200x2560 resolution to the guest(s).

Even with 8GB of SVGA memory I see issues where not everything can be stored in shared graphics memory on the guest. It's close, and depends on the applications I'm running. Does anyone know if you can override this to provide more SVGA memory to the guest?
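For reference, the guest graphics memory size is normally set in the VM's .vmx file. A sketch of the relevant entries, assuming current Workstation key names and the documented ~8GB ceiling (the exact keys and cap may differ by product version, so verify against your build's documentation):

```
# Hypothetical .vmx fragment - values in KB; 8388608 KB = 8 GB
svga.graphicsMemoryKB = "8388608"
# VRAM size for the legacy framebuffer, in bytes (128 MB shown)
svga.vramSize = "134217728"
```

Setting these higher than the device supports is generally ignored or clamped rather than honored.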

Second, but related, question. Since this is "shared graphics memory" on the guest, I am assuming that it is stored in the host's RAM and not directly mapped to the host's GPU? I.e., the host may map it to the GPU at a later point, but the guest/hypervisor stores it in the host's main memory?

The reason I'm asking is: IF that hypothesis is true, would moving the host to a platform with more memory bandwidth improve graphics performance in the guest? I.e., as opposed to a consumer platform with 2-channel memory, would a server/workstation platform with 4-12 channel memory help here, since the memory bandwidth would be greatly increased?
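To put rough numbers on the channel-count question, theoretical peak bandwidth scales linearly with channels (transfer rate and the DDR5-4800 figures below are illustrative assumptions, not measurements):

```python
def peak_bandwidth_gbs(channels: int, mt_per_s: int, bus_bytes: int = 8) -> float:
    """Theoretical peak memory bandwidth in GB/s.

    channels   -- number of populated memory channels
    mt_per_s   -- transfer rate in megatransfers/second (e.g. 4800 for DDR5-4800)
    bus_bytes  -- bytes per transfer per channel (64-bit bus = 8 bytes)
    """
    return channels * mt_per_s * bus_bytes / 1000

desktop = peak_bandwidth_gbs(2, 4800)  # 2-channel desktop: 76.8 GB/s
server = peak_bandwidth_gbs(8, 4800)   # 8-channel server: 307.2 GB/s
print(desktop, server)
```

Real-world gains depend on whether the workload is actually bandwidth-bound, as the reply below notes.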

Has anyone done any testing with this?

2 Replies
banackm
VMware Employee

5 * 2560 * 1440 = 18,432,000 pixels, so at 4 bytes per pixel that's only ~70MB. So we shouldn't have a problem just fitting the screens in.
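The arithmetic above can be checked directly (a minimal sketch; 4 bytes/pixel assumes a 32-bit framebuffer format):

```python
def framebuffer_bytes(width: int, height: int, bytes_per_pixel: int = 4) -> int:
    """Raw framebuffer size for one display at the given resolution."""
    return width * height * bytes_per_pixel

# Five 2560x1440 displays, as described in the original post
total = 5 * framebuffer_bytes(2560, 1440)
print(total, total / 2**20)  # 73728000 bytes, ~70.3 MiB
```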

When you say you see issues, what kinds of problems are you seeing specifically?

But no, our virtual graphics device doesn't support more than 8GB of graphics memory at a time.  The Guest and Host have some ability to swap things in and out of the GPU, so with multiple applications it should be more of a soft limit than a hard limit.

As far as how much memory bandwidth helps, it's very workload-dependent, so it's hard to say generally.

stevecs
Enthusiast


Umm. There are these things called 'Textures'. 🙂 so it's more than just pixel space. Sorry, couldn't resist a snarky response.

Anyway, what I'm seeing is that textures don't have enough space to load for detailed CAD, rendering, and game-development work. Running natively on cards with 24GB or more, I don't see this; it's similar to what I saw back when I had to move from 4GB and 8GB cards to ones with larger memory.

Since Workstation *appears* to use the host's physical RAM as a shared memory buffer that is then mapped to the host's GPU, I figure it should be possible to allow a larger reserved space for that mapping. It's relatively easy to have a host with 256GB or more of RAM, so dedicating, say, 64GB of RAM to a guest (32GB of that for shared graphics memory) is not really a hardware issue.

So I was just trying to see if there is a way to remove or expand the software limit.
