Has anyone come across research comparing IOPS performance between a virtual machine and a physical one?
I am interested in information about three configurations (assuming the OS sits on storage that is not under test):
1. IO from a physical OS to a drive.
2. IO from a virtual OS to a drive that is part of a datastore.
3. IO from a virtual OS to a drive that is passed through to the virtual machine.
I found information about CPU and memory usage, but I didn't find a straight answer to my questions.
A theoretical answer is also appreciated.
Hello,
IOPS (input/output operations per second) is the standard unit of measurement for the maximum number of reads and writes to non-contiguous storage locations.
IOPS is frequently referenced by storage vendors to characterize performance in solid-state drives (SSD), hard disk drives (HDD) and storage area networks.
IOPS numbers are affected by the size of the data block and the workload mix, so it's unlikely that vendors use standardized variables when listing IOPS. Even if a standard system were used to determine IOPS, with a set block size and read/write mix, that number means nothing unless it matches up to a specific workload.
In conclusion, this is not specific to a physical or virtual OS.
Please consider marking this answer "CORRECT" or "Helpful" if you think your question has been answered correctly.
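To see concretely how much the block size alone moves the number, here is a minimal sketch. Everything in it is my own assumption for illustration (the function name, file sizes, and the fsync-per-write policy); it is not how any vendor measures IOPS, only a demonstration that the same disk yields very different "operations per second" at different block sizes.

```python
import os
import tempfile
import time


def measure_write_iops(block_size: int, total_bytes: int) -> float:
    """Write total_bytes in block_size chunks, forcing each write to
    storage with fsync, and return completed operations per second."""
    ops = total_bytes // block_size
    buf = os.urandom(block_size)
    fd, path = tempfile.mkstemp()
    try:
        start = time.perf_counter()
        for _ in range(ops):
            os.write(fd, buf)
            os.fsync(fd)  # each op must reach the device, as IOPS tests intend
        elapsed = time.perf_counter() - start
        return ops / elapsed
    finally:
        os.close(fd)
        os.remove(path)


if __name__ == "__main__":
    # The same drive will report far fewer "IOPS" at 64 KiB than at 4 KiB,
    # which is why a vendor figure without a block size is meaningless.
    for bs in (4096, 65536):
        print(f"{bs:6d}-byte blocks: {measure_write_iops(bs, 4 * 1024 * 1024):8.0f} ops/s")
```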
Cheers,
VCIX6-NV|VCP-NV|VCP-DC|
Hi
I know what IOPS are.
I'm just wondering whether the virtualization layer somehow reduces that number when comparing the performance of a virtual platform against a physical one.
I think there should be some delay because of the Virtual Machine Monitor or the VMkernel. I just want to confirm or deny my logic.
To create a VM you need to select one of the following options:

| Option | Overhead of VMFS filesystem | IO-performance benchmarks (put your figures here) | Overhead for scheduling resources | Overhead for virtual hardware inside VM | Overhead for NTFS filesystem | Reacts to power failures like a physical ... | Readable when VMFS layer fails |
|---|---|---|---|---|---|---|---|
| VM uses a thick VMDK stored on VMFS | small | ? | small | small | significant | almost as good as Windows | requires precautions |
| VM uses a thick VMDK stored on VMFS via iSCSI | small | ? | small | small | significant | almost as good as Windows | requires precautions |
| VM uses a thin VMDK stored on VMFS | small - very large | ? | small | small | significant | ESXi | no |
| VM uses a thin VMDK stored on VMFS via iSCSI | small - very large | ? | small | small | significant | ESXi | no |
| VM uses a thin VMDK stored on NFS | none | ? | small | small | significant | like a NFS server | yes |
| VM uses RDM-VMDK | very small | ? | small | small | significant | Windows | yes |
| VM uses a separate SCSI device in pass-through | none | ? | small | small | significant | Windows | yes |
Folks use IO benchmarks in the hope that they help to decide the question:
What VMDK type should I use in my case?
I hope my little table helps to put the IO-benchmark results into perspective.
I highly recommend deciding this question with the highest priority given to the column "Reacts to power failures like a physical ...".
IO benchmarks may be useful - but don't ignore the other factors I listed.
The smaller the environment is, the more relevant the other factors are.
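If you want to fill in the benchmark column with your own figures, here is a minimal sketch you could run unchanged on the physical host, inside a VM on a VMFS datastore, and inside a VM with a pass-through disk, then compare the three numbers. It is not a substitute for a real tool such as fio or Iometer, and the names, sizes, and duration are arbitrary assumptions; in particular, reads may be served from the OS page cache, so treat the result as an upper bound unless you use a file much larger than RAM.

```python
import os
import random
import tempfile
import time

BLOCK = 4096                   # 4 KiB, a common IOPS test block size
FILE_BYTES = 8 * 1024 * 1024   # deliberately small; enlarge for a real run
DURATION = 1.0                 # seconds to measure


def random_read_iops(path: str) -> float:
    """Issue random 4 KiB reads against path for DURATION seconds and
    return the achieved operations per second."""
    blocks = FILE_BYTES // BLOCK
    fd = os.open(path, os.O_RDONLY)
    try:
        ops = 0
        start = time.perf_counter()
        while time.perf_counter() - start < DURATION:
            os.pread(fd, BLOCK, random.randrange(blocks) * BLOCK)
            ops += 1
        return ops / (time.perf_counter() - start)
    finally:
        os.close(fd)


if __name__ == "__main__":
    # Create a scratch file, measure, clean up.
    fd, path = tempfile.mkstemp()
    os.write(fd, os.urandom(FILE_BYTES))
    os.close(fd)
    try:
        print(f"random 4 KiB read rate: {random_read_iops(path):.0f} ops/s")
    finally:
        os.remove(path)
```

Running the identical script in each of the three configurations gives you comparable figures for the "?" column, which is more informative than any vendor number measured under unknown conditions.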