Our current setup is as follows.
8 blades -- HP ProLiant BL480c G1
Dual quad-core Xeon processors -- 8 cores total per blade
24 GB of RAM per blade
4 gigabit network cards per blade
2 HBAs per blade
Currently we connect via Fibre Channel to the NetApp 3070 (4 Gb fibre connections to the NetApp). We also use an NFS mount as a file store.
Issues we are having: we have an indexing process that is CPU- and RAM-intensive, and we are noticing a huge performance hit when a VM is built over Fibre Channel. We have noticed an I/O performance increase when the VMs are built over NFS.
I am looking for a VMware stress tester, and all I can find is Windows apps -- nothing that really helps me when we run only CentOS.
We are also running into a lot of issues with our clone speeds. Over Fibre Channel it takes us roughly 1.5 hours to clone a 12 GB box; over NFS it takes about 1 hour. These speeds seem insane to me. I have worked at a place prior to this that didn't have this issue -- you could clone a box in a matter of minutes, which is the point of VMware: to be able to rapidly deploy VMs.
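For context, those times work out to very low effective throughput. A quick back-of-the-envelope check (the 12 GB figure is from above):

```shell
# Effective clone throughput: 12 GB in 1.5 h (FC) vs 1 h (NFS)
awk 'BEGIN {
  size_mb = 12 * 1024                       # 12 GB clone, in MB
  printf "FC:  %.1f MB/s\n", size_mb / (1.5 * 3600)
  printf "NFS: %.1f MB/s\n", size_mb / (1.0 * 3600)
}'
# FC:  2.3 MB/s
# NFS: 3.4 MB/s
```

A 4 Gb FC link can sustain on the order of 400 MB/s, so these clones are running at well under 1% of link capacity -- the bottleneck is somewhere other than the wire.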
My questions :
1. How do we solve the slow clone speeds?
2. Where can we find a virtual machine stress tester that can produce performance reports?
3. Which is best practice, NFS or FC? We notice serious slowdowns over both.
Any help is always appreciated -- if you have any questions, please ask.
A few items to set you off in the right direction:
It appears that your performance problems are hardware-related. If you're getting good speed on one connection (NFS) and not the other (FC), I'd check the hardware first. I recommend looking at the device and kernel latencies in esxtop (DAVG and KAVG) to see whether you're seeing unusually high latencies on the FC device. This is detailed in VMware's performance analysis methods paper.
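A sketch of capturing those counters for offline analysis, using esxtop's batch mode on the ESX host (the interval and sample count below are arbitrary):

```shell
# Run on the ESX service console: -b batch mode, -d seconds per sample,
# -n number of samples. Guarded so it is a no-op where esxtop is absent.
if command -v esxtop >/dev/null 2>&1; then
    esxtop -b -d 5 -n 30 > esxtop-batch.csv   # import into a spreadsheet
else
    echo "esxtop not found -- run this on the ESX host"
fi
```

In the resulting CSV, DAVG/cmd is time spent at the device and KAVG/cmd is time spent in the VMkernel; sustained high DAVG on the FC HBA points at the array or fabric rather than ESX.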
It's easy to perform storage stress testing on your system to help identify the problem. On Windows guests we use Iometer. If you're unable to create even one Windows guest on your system, try aiostress.
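If aiostress isn't handy, a dependency-free first pass inside a CentOS guest is a sequential dd run against each datastore (the file path and sizes below are arbitrary placeholders; point the file at the disk backed by the datastore under test):

```shell
#!/bin/sh
# Crude sequential write/read throughput check inside a guest.
# TESTFILE is a placeholder -- put it on the datastore you want to measure.
TESTFILE=/tmp/ddtest.bin

# Sequential write: 256 MB in 1 MB blocks; conv=fsync flushes to disk
# so the number isn't just page-cache speed. dd reports MB/s on stderr.
dd if=/dev/zero of="$TESTFILE" bs=1M count=256 conv=fsync 2>&1 | tail -1

# Sequential read of the same file
dd if="$TESTFILE" of=/dev/null bs=1M 2>&1 | tail -1

rm -f "$TESTFILE"
```

Run the same script on a VM backed by the FC datastore and one backed by NFS; a large gap in the write numbers will mirror the clone-speed gap you're seeing.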
NFS and FC protocols are equivalent in performance from an ESX Server perspective; this was documented in our recent performance comparison of protocols paper. Of course, the FC links can sustain a higher total throughput.