During my recent performance tests I measured that any Windows guest can access its VMFS volume, which resides on a single SAS HDD on the WINHOST, at a sequential I/O rate of approx. 40 MB/s for both reads and writes. The same throughput can also be achieved when I connect to my Thecus RAID6 NAS.
If the read access can benefit from the 256 MB read/write cache (with BBU), the throughput reaches 250 MB/s or higher.
In physical deployments I normally install one DC as a file server with two logical SCSI HDDs behind an LSI MegaRAID controller. The second server is the TS that supports all the remote users, and the third part is some local XP Professional workstations, which access the shared volume residing on the DC file server.
But if I design an equivalent arrangement in a virtual environment, the data throughput (measured with CrystalDiskMark or IOmeter) from the TS to the shared volume on the DC shows much lower values (about 20 MB/s) over 1 GbE devices.
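For anyone who wants to sanity-check such numbers without CrystalDiskMark or IOmeter, a rough sequential benchmark can be scripted. This is only a minimal sketch (file path, file size, and block size are arbitrary assumptions, and the read pass may be inflated by the OS page cache), not a replacement for the real tools:

```python
import os
import time

def sequential_throughput(path, size_mb=64, block_kb=1024):
    """Write then read size_mb MiB sequentially at `path`;
    return (write_mb_s, read_mb_s)."""
    block = b"\0" * (block_kb * 1024)
    n_blocks = (size_mb * 1024) // block_kb

    start = time.time()
    with open(path, "wb") as f:
        for _ in range(n_blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())            # make sure the data actually hit the disk
    write_rate = size_mb / (time.time() - start)

    start = time.time()
    with open(path, "rb") as f:
        while f.read(block_kb * 1024):  # sequential read until EOF
            pass
    read_rate = size_mb / (time.time() - start)

    os.remove(path)
    return write_rate, read_rate
```

Run it against a file on the volume under test; the read figure mostly reflects the cache unless the file is much larger than RAM.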
So far I have not found a way to raise the data throughput to values close to the 250 MB/s. I tried several hardware versions in VMware Server 1.0.4 and VMware Server 2 Beta 1 (and now RC1), and I tried several installations with vlance, e1000, and even vmxnet.
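For reference, the virtual NIC type is selected per adapter in the guest's .vmx file; a sketch for the first adapter (the index and the value "vmxnet" are just an example, the alternatives being "e1000" and "vlance"):

```
ethernet0.present = "TRUE"
ethernet0.virtualDev = "vmxnet"
```

The guest has to be powered off before editing, and the matching VMware Tools driver must be installed for vmxnet to work.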
I tried to access my DC from virtual Windows XP guests and from a virtual Windows 2003 TS. I tried jumbo frames and several trunking scenarios. I installed the original Broadcom dual-port server NIC and an additional Intel PRO/1000 dual-port PCIe card to test all the different types of TCP optimizations, both in real equipment against my NAS boxes and in the virtual system.
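One thing that helped me separate the TCP stack from the disk subsystem is a pure network throughput test (the same idea as iperf). A minimal sketch in Python, which can be split into its server and client halves and run in two guests (here it runs over loopback; host, port, and transfer size are assumptions):

```python
import socket
import threading
import time

def measure_tcp_throughput(size_mb=32):
    """Push size_mb MiB through a TCP connection over loopback and
    return the throughput in MB/s, with no disk involved at all."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))          # let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def drain():
        conn, _ = srv.accept()
        while conn.recv(1 << 16):       # read until the sender closes
            pass
        conn.close()

    receiver = threading.Thread(target=drain, daemon=True)
    receiver.start()

    chunk = b"\0" * (1 << 20)           # 1 MiB per send
    cli = socket.create_connection(("127.0.0.1", port))
    start = time.time()
    for _ in range(size_mb):
        cli.sendall(chunk)
    cli.close()
    receiver.join()                     # wait until everything is received
    elapsed = time.time() - start
    srv.close()
    return size_mb / elapsed
```

If this raw TCP number between two guests is already stuck near 20 MB/s, the bottleneck is the virtual network device, not SMB or the disks.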
1: Has anyone set up a connection between two Windows guests over virtual 1 GbE that comes close to the maximum possible throughput from a shared volume on one of the guests?
2: Has anyone reached higher throughput to a shared volume of a virtual NAS (such as FreeNAS) that also resides on the same virtual server?
3: Has anyone reached higher throughput with virtual 1 GbE devices on VMware Server using any of the newer features such as Broadcom's TCP offload engine (TOE) or jumbo frames?
4: Has anyone reached higher throughput on 1 GbE virtual devices using ESX Server, which implements all the above-mentioned TCP performance features and even Intel® I/O Acceleration Technology (Intel® I/OAT)?
5: Is it possible to use 10 GbE virtual devices (with the ixgbe driver) in an ESX server ONLY for communication between two VMs residing on the same host, to reach higher throughput WITHOUT having a physical 10 GbE device installed? (Assuming no 10 GbE traffic needs to leave the host.)
6: Is it possible to implement 10 GbE virtual devices even in VMware Server v2 or Workstation v6.5, if no 10 GbE connection to the physical world has to be established, just to overcome bottlenecks in the TCP stack for the above-mentioned file sharing?
Fujitsu Siemens RX300 S3, 2x QC 1.6 GHz, 10 GB RAM, LSI, 6x 144 GB SAS 10k / WINHOST64 / VMware Server 2.0 RC1 / Win DC 32-bit, Win TS 32-bit, WinXP Pro, Knoppix