As with most performance tests, the answer is: "it depends".
What blocksize are you using? Are you using a filesystem or raw volume(s) as the Iometer target? How many paths are you using?
Obviously it depends, but I'm not asking for help with my own settings.
What I would like to hear is what kind of SAN, how many disks, shelves, back-end loops, front-end loops, block sizes, etc., you are using if you are getting near 200 MB/s or better.
The basic guest VM in our environment is a Windows 2003/2008 x32/x64 server with an O/S .vmdk and a data .vmdk in the same folder on the same LUN, with a single active path. We do have standby paths. I can get multiple VMs, or a single VM with multithreaded I/O, to go past 250 MB/s over a single active path to the SAN. That is why I am asking about single-threaded I/O in your environment.
My goal is to determine whether single-threaded I/O at or above 200 MB/s is possible with a Windows guest VM using .vmdk files on a VMFS3 volume, and if so, what it takes to get there.
Well, I don't have a SAN with shelves or loops. I have a SAN with two ISEs and fabric. I just measured single-stream performance from a Dell 1950 (ESX 3.5 U4), Windows 2003 R2 VM, single path, single HBA (QLogic 24xx), to a single NTFS volume, single-threaded with queue depth = 1, using Iometer at a 256 KB blocksize, and got 182 MB/sec.
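For anyone who wants a quick sanity check without firing up Iometer, a single-threaded sequential read at the same 256 KB blocksize can be approximated with a small script. This is only a rough sketch, not an Iometer replacement: the file path and file size below are placeholders, and the read goes through the OS cache rather than raw I/O, so treat the number as a ballpark.

```python
import os
import tempfile
import time

def measure_seq_read(path, block_size=256 * 1024):
    """Read `path` sequentially in `block_size` chunks from a single
    thread (effectively queue depth 1) and return throughput in MB/s."""
    total = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(block_size)  # one outstanding request at a time
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return total / elapsed / 1e6

if __name__ == "__main__":
    # A scratch file stands in for the volume under test (placeholder);
    # on a real run you would point `path` at the LUN you care about.
    fd, path = tempfile.mkstemp()
    with os.fdopen(fd, "wb") as f:
        f.write(os.urandom(64 * 1024 * 1024))  # 64 MB test file
    try:
        print(f"{measure_seq_read(path):.1f} MB/s")
    finally:
        os.remove(path)
```

Because the file was just written, much of it may be served from cache, which is why a tool like Iometer that can bypass the filesystem cache gives more honest numbers for SAN work.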
Sorry to hear that your similarly configured VM's performance is 90 MB/sec.
Thanks, that tells me it's possible to get there. Anyone else out there got any additional info?
BTW, yes, we also used the 256 KB blocksize in our Iometer tests, so that's good to know.