I have never seen any performance tests of Storage vMotion for something like this, to be honest, and I doubt it will matter much either way. I would probably recommend keeping it simple and going with a single larger VMDK; vSAN will slice it up into components during placement as required.
I appreciate the response. One test I just tried was timing how long it takes to snapshot VMs with a similar OS configuration (256GB RAM, 4 vCPUs) but a differing number of VMDKs: one with a single 4TB VMDK and another with 4 VMDKs totaling about 4TB (0.4TB, 1.1TB, 1.2TB, 1.3TB). The snapshot times were about the same (execution time: 1h 44m 54s for the single-VMDK system and 1h 43m 51s for the 4-VMDK system). In this case it looks pretty consistent, which is good, but this is only one test case.
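For anyone wanting to repeat the timing test, this is roughly how I ran it from the ESXi shell. The VM ID (12 here) is just a placeholder; the guard lets the script degrade gracefully if it isn't run on an ESXi host:

```shell
# Hypothetical reproduction of the snapshot timing test (ESXi shell).
# Find your VM's ID first with: vim-cmd vmsvc/getallvms
if command -v vim-cmd >/dev/null 2>&1; then
    # args: vmid, snapshot name, description, includeMemory (0/1), quiesced (0/1)
    time vim-cmd vmsvc/snapshot.create 12 "timing-test" "" 0 0
else
    echo "vim-cmd not found (run this on the ESXi host)"
fi
```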
This brought up one concern: why would it take nearly 2 hours to create a snapshot? Each VM was essentially idle, and they were the only two VMs running on the 2-node cluster. From what I could tell (I'd consider myself a VMware/vSAN noob), the bottleneck appeared to be networking: during a snapshot, network utilization (combined transmit and receive rates) per host was about 43,000-53,000 KBps with one snapshot running and around 90,000 KBps with two. Is this a tuneable somewhere?
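To put a number on the per-vmnic rates I was seeing, I captured counters with esxtop in batch mode and pulled out the network columns. A minimal sketch (the output file path and sample interval are arbitrary choices):

```shell
# Capture two batch-mode esxtop samples, 10 seconds apart, then list the
# vmnic-related counter columns so the transmit/receive rates can be graphed.
if command -v esxtop >/dev/null 2>&1; then
    esxtop -b -n 2 -d 10 > /tmp/esxtop.csv
    head -1 /tmp/esxtop.csv | tr ',' '\n' | grep -i 'vmnic' | head
else
    echo "esxtop not found (run this on the ESXi host)"
fi
```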
I also came across the following post by Cormac (https://cormachogan.com/2016/02/19/vsan-6-2-part-4-iops-limit/), which gives an example of one of the items I was looking for: vSAN can apply an IOPS limit per VMDK via storage policy, which may mean multiple VMDKs are better because each one can be tuned individually.
Another possibility I am still looking into is tuning the Linux VM itself for specific I/O patterns (minimally by choosing a different scheduler, e.g. noop vs. deadline, but I'd like to know if there are others). As I understand it, those tuneables are set per block device, which might also lend itself better to using multiple VMDKs. As one example, we typically use LVM (Logical Volume Management) to create separate volumes for the Oracle data files and their resultant RMAN backup files. The volume with the data files would see mixed read/write and more random I/O, while the volume with the RMAN backup files would probably be mostly sequential writes. So my thought was that with separate VMDKs for these I might be able to tune them differently in the guest OS (even though the underlying block devices are all handled by vSAN). Am I off the mark here?
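The per-device scheduler tuneable I mean lives under sysfs in the Linux guest. A read-only sketch to inspect what each block device is currently using (the `sdb` device name in the comment is just an example):

```shell
# List the current I/O scheduler for every block device (read-only).
# The scheduler in [brackets] is the active one.
for dev in /sys/block/*/queue/scheduler; do
    [ -e "$dev" ] || continue
    printf '%s: %s\n' "$dev" "$(cat "$dev")"
done
# To switch one device (e.g. the VMDK backing the RMAN volume) as root:
#   echo noop > /sys/block/sdb/queue/scheduler
echo "scheduler check complete"
```

This is why separate VMDKs could matter here: each one shows up as its own /sys/block entry, so the data-file and backup devices can be set independently.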
I like the idea of keeping it simple, but if there is a way to avoid some future pitfalls now, before the system goes to production, I'd like to identify them and understand the trade-offs we might be making.
Thanks again for any insight into this.