Scenario 1: Windows XP Pro VM created using local storage for the OS vmdk file. A volume on a Compellent SAN is presented to the ESX server's iSCSI software adapter and added to ESX storage. The VM has a 2nd hard drive/vmdk file on the aforementioned SAN volume for performance testing. (For this particular test, this is the only vmdk file on the volume.)
Scenario 2: Windows XP Pro VM created using local storage for the OS vmdk file. A volume on the Compellent SAN is presented to the Microsoft iSCSI initiator running inside the VM.
Using IOmeter and a test configuration file that I obtained from another performance-related thread, I got the following results:
"Max Throughput - 100% Read"
ESX iSCSI = 1,258 IOPS
MS iSCSI = 3,062 IOPS
"Max Throughput - 50% Read"
ESX iSCSI = 1,423 IOPS
MS iSCSI = 2,453 IOPS
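To put a number on the gap, here is a throwaway sketch that computes the MS-to-ESX IOPS ratio from the results above (values copied from this post):

```python
# Ratio of MS iSCSI to ESX iSCSI IOPS, values copied from the results above.
results = {
    "Max Throughput - 100% Read": (1258, 3062),  # (ESX iSCSI, MS iSCSI)
    "Max Throughput - 50% Read":  (1423, 2453),
}
for name, (esx, ms) in results.items():
    print(f"{name}: MS initiator is {ms / esx:.1f}x faster")
```

So the MS initiator comes out roughly 2.4x ahead on the 100% read test and 1.7x ahead on the 50% read test.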
Is it normal to see such a performance difference between the ESX iSCSI Software Adapter results and the MS iSCSI initiator? Might I have an ESX configuration issue that needs to be resolved to increase performance?
Any insight would be appreciated.
I have been trying to run some tests before migrating some of my disk-IO-heavy systems to ESX, comparing the MS initiator running in the guest against a raw device mapped disk from ESX. I have had some difficulty getting numbers I trust. Some tests report no difference, others report a gigantic difference - among the latter IOmeter and SQLIO, with the MS initiator performing best. Of course, the guest CPU load is higher.
Does anybody have any data regarding this? Paul says this is known behaviour, but others are sure it should be the other way around - see http://www.vmware.com/community/thread.jspa?messageID=545525
I have been suspecting that the ESX initiator simply does not push the iSCSI storage as hard as it could. When I generate heavy IO on my guests and monitor with esxtop, I see very nice response times and low queue depths, but the total number of IOs is not as high as I expect - or as high as I know the storage can deliver with a physical Windows server attached. How does this correspond with others' experiences?
I'm not sure, but it's probably the settings that ascheler uses in this thread:
Myself, I just created an access definition using 8 KB transfers, 67% read, 100% random.
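For anyone who wants to reproduce something similar outside IOmeter, here is a minimal sketch of that access pattern - 8 KB transfers, 67% read / 33% write, 100% random block-aligned offsets. This is not IOmeter itself; the file name, file size, and op count are made up for illustration:

```python
import os
import random

# Sketch of the access definition above (not IOmeter):
# 8 KB transfers, 67% read / 33% write, 100% random aligned offsets.
BLOCK = 8 * 1024               # 8 KB transfer size
FILE_SIZE = 64 * BLOCK         # tiny test file; a real test uses a much larger one
PATH = "access_sketch.dat"     # hypothetical test-file name

# Pre-allocate the test file.
with open(PATH, "wb") as f:
    f.write(b"\x00" * FILE_SIZE)

reads = writes = 0
with open(PATH, "r+b") as f:
    for _ in range(1000):
        # Pick a random block-aligned offset (100% random pattern).
        f.seek(random.randrange(FILE_SIZE // BLOCK) * BLOCK)
        if random.random() < 0.67:       # 67% read mix
            f.read(BLOCK)
            reads += 1
        else:                            # 33% write mix
            f.write(os.urandom(BLOCK))
            writes += 1

os.remove(PATH)
print(f"issued {reads} reads and {writes} writes")
```

Note this goes through the OS page cache, so it won't stress the SAN the way IOmeter does; it only illustrates the read/write mix and the random access pattern.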
This is known behaviour (guest iSCSI faster than ESX iSCSI), and I've been told the performance of ESX iSCSI should be much quicker in the future.
That is correct. NetApp even recommends booting the OS from the ESX iSCSI client and running the application from the MS iSCSI client for scalability.
Not exactly an ideal setup...
We are aware of the scalability issues and are working hard to improve both the iSCSI client and the VMkernel IP stack. Perhaps we should have waited to release iSCSI support until it had gone through some more tuning, but iSCSI was high on the feature request list from customers. The performance difference is there, but for most usage you won't notice it.
Anders, thanks a lot for your answer. Most of my heavy-IO servers (file server and Exchange) are already running the MS iSCSI client, so I think at first I will keep them on the MS iSCSI client when I migrate.
I know this is an old thread, but what is the latest info on the ESX iSCSI initiator? Has it been beefed up so that it is now comparable to the Microsoft iSCSI initiator inside Windows Server 2003? We are still trying to decide which way to go. (We don't have VMotion or DRS.)