3 Replies Latest reply on Apr 13, 2009 3:48 PM by mcowger

    VMware ESX 3.5 and SAN performance

    jwnchoate Enthusiast

      Currently we are using Compellent with software iSCSI. Up to now, it's been sufficient for 52 VMs on five 8-core hosts.


      Up to now, our I/O across each host is smooth. We run approximately 5,000-12,500 kB/s at peak times; most of the time I/O runs at 2,000-3,000 kB/s or less. For software iSCSI this has been sufficient.



      Management decided they wanted the benefits of virtualization for a very busy accounting system based on QAD/Progress. Currently this server is a 4-core host with hyperthreading, so there are 8 simultaneous threads (hyperthreading has been determined to help on this system). The storage is a straight 10-disk array, RAID 10, with 8 of the disks for the databases; it runs on a 3G SCSI connection.



      Software iSCSI is on 1 Gb Ethernet, and the Compellent SAN is directly attached to the Cisco back-end switch on the blade chassis. We are configured so that there is no traffic on that link other than I/O from host to Compellent.



      From testing we determined that under an extreme load, such as a DB reindex, a backup, or some kind of massive I/O test, the VM host will max out somewhere around 60,000 kB/s. The VM guest itself can show I/O of nearly 200,000 kB/s internally. The guest comes to a crawl, and a lot of %iowait time is spent waiting for the system to catch up. Day-to-day running on the production box is nowhere near 10,000 kB/s yet, but the DBAs are freaked out about the back-end admin work they have to do.
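      For anyone who wants to reproduce a rough version of this test inside a guest, here is a minimal sequential-write sketch (a simplified stand-in for the reindex/backup-style load described above; the file path, block size, and count are arbitrary placeholders, not values from this setup):

```python
import os
import time

def write_throughput(path="testfile.bin", block_kb=1024, blocks=100):
    """Write blocks sequentially and return the observed rate in kB/s."""
    buf = b"\0" * (block_kb * 1024)
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())  # force data to storage, not just the page cache
    elapsed = time.time() - start
    os.remove(path)
    return (block_kb * blocks) / elapsed

print("sequential write: %.0f kB/s" % write_throughput())
```

      Run it inside the guest during quiet hours and again alongside a reindex; comparing the two numbers shows how much headroom the datastore really has. Note that guest-internal numbers include cache effects, which is why the guest can report far higher rates than the host actually pushes to the SAN.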



      Obviously, we're overrunning the software iSCSI's capability.
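      The back-of-envelope math points the same way. A quick sketch of the wire-speed ceilings (the ~400 MB/s usable figure for 4 Gb FC is the commonly quoted number after 8b/10b encoding, and the overhead caveats are my assumptions, not measurements from this setup):

```python
def raw_kBps(gbps):
    """Convert a nominal link rate in Gbit/s to kB/s (no protocol overhead)."""
    return gbps * 1_000_000 / 8

gige = raw_kBps(1)   # 1 Gb Ethernet: 125,000 kB/s before TCP/iSCSI overhead
fc4 = 400_000        # 4 Gb FC: ~400 MB/s usable is the commonly quoted figure

observed = 60_000    # the ~60,000 kB/s ceiling seen on the VM host

print(f"GigE raw ceiling: {gige:,.0f} kB/s")
print(f"4 Gb FC usable:   {fc4:,.0f} kB/s")
print(f"observed ceiling: {observed:,} kB/s ({observed / gige:.0%} of GigE raw)")
```

      Hitting roughly half of raw GigE is not unusual for a software initiator once TCP/IP, iSCSI, and host CPU overhead are paid, which is consistent with the 60,000 kB/s ceiling above.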



      What I need is to hear from you lucky souls who use 4 Gb Fibre Channel and HBAs. It's clear they are much faster, but I would like to know what kind of I/O I can get with them. Bossman wants some hard info before he ponies up the big pile of greenbacks. Let's assume the SAN can handle it and that the iSCSI is the bottleneck, because it is. If you are using Compellent, even better!