That configuration will change how we draw the final contents into the window, but not how the actual graphics rendering is done.
So on your card, rendering is still done on your GPU with Vulkan; it's only when the finished frame has to be drawn into the window that X11 is used for the last step (usually called "presentation", hence the config option name).
The cost is proportional to the display resolution and the frame rate of the workload, so how much of a performance hit it is will depend on your system and workload. Applications running very graphics-heavy workloads that are already getting low frame rates probably won't notice it at all, whereas as the graphics workload gets lighter and the resolution/FPS increase, the difference between the two modes grows.
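To get a feel for why the cost scales with resolution and frame rate, here's a rough back-of-envelope sketch. It assumes the presentation path adds one extra full-frame copy per presented frame at 4 bytes per pixel; the function and the specific numbers are my own illustration, not measurements of any actual implementation:

```python
# Rough estimate of the extra memory traffic from one additional
# full-frame copy per presented frame (assumed 4 bytes/pixel, BGRA).
# Illustrative only -- the real overhead depends on the driver and path.

def copy_bandwidth_gbps(width: int, height: int, fps: int,
                        bytes_per_pixel: int = 4) -> float:
    """Extra memory traffic in GB/s for one full-frame copy per frame."""
    return width * height * bytes_per_pixel * fps / 1e9

for w, h, fps in [(1920, 1080, 60), (2560, 1440, 144), (3840, 2160, 240)]:
    print(f"{w}x{h} @ {fps} Hz: {copy_bandwidth_gbps(w, h, fps):.2f} GB/s")
```

At 1080p/60 the extra copy is around half a GB/s, which is noise next to typical GPU memory bandwidth, but at 4K/240 it's pushing 8 GB/s, which starts to matter — hence the gap widening as resolution and FPS go up.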
On my system, I can't tell the difference between the two paths until my graphics workload gets high enough to saturate the host GPU, at which point performance falls off a cliff. So if your application has a frame-rate limiting option, you could try lowering the FPS; ironically, your performance might improve. The difference between the two paths might be greater on lower-end GPUs, though.