We're trying to move away from direct-attached storage across our enterprise wherever it's feasible. We were confident our VI (VMware Infrastructure) setup would be fine, but now I'm not so sure: backups that normally took a few hours are taking 12+ hours to complete, and the guests go unresponsive while the backups run. I'm having trouble figuring out where the bottleneck is, or whether we're just pushing the limits of what NFS-hosted guests can do.
NFS can be a solid storage protocol when deployed properly. Can you provide some more detail here?
How many ESX 3.5 hosts?
How much direct attached storage is in each ESX host and how is it configured?
What is your NFS server running on? Is it on the HCL?
How is the NFS server built out - disks (types, speed), RAID levels, network connectivity (links, speed), number of IP addresses or aliases used to access?
How are you connecting to the NFS server from your ESX hosts - vSwitch setup, link speeds, dedicated or shared network, physical switch setup, speeds, etc.?
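To gather most of the host-side answers in one pass, something like the following on each ESX 3.5 service console should cover it. These are the standard esxcfg utilities; the NFS server IP at the end is a placeholder for your environment:

```shell
# Run on each ESX 3.5 host's service console:
esxcfg-nics -l           # physical NIC link speed and duplex
esxcfg-vswitch -l        # vSwitch and port group layout
esxcfg-vmknic -l         # VMkernel interfaces (the ones NFS traffic uses)
esxcfg-nas -l            # NFS datastore mounts: server, export path, label
vmkping <nfs-server-ip>  # reachability over the VMkernel network path
```

Paste the output back here and it'll be much easier to spot a misconfigured link or a shared uplink.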
How were the "old" backups that took a few hours done, and how are the new 12+ hour backups being done?
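One quick sanity check while you gather that: a single GbE link carries at most roughly 110 MB/s of payload, which puts a hard floor under any backup window that has to cross it. A rough calculator (the 2 TB figure below is purely an illustrative assumption, not from your setup):

```python
# Back-of-envelope: minimum wall-clock time to move a given amount of
# backup data over a single gigabit link (which the NFS guest I/O may
# also be sharing during the backup window).

GBE_PAYLOAD_MBPS = 110  # approximate practical payload ceiling for 1 GbE, in MB/s

def min_backup_hours(data_gb, link_mbps=GBE_PAYLOAD_MBPS):
    """Lower bound on hours to push data_gb gigabytes over one link."""
    seconds = (data_gb * 1024) / link_mbps
    return seconds / 3600

# Example: 2 TB of guest data (hypothetical figure) over one GbE link
print(round(min_backup_hours(2048), 1))  # ~5.3 hours, best case
```

If your backup traffic and guest NFS traffic share one uplink, the guests stalling during backups would be exactly what you'd expect.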
2 ESX hosts
The only direct-attached storage on each host is the boot volume (mirrored 146 GB drives, I believe).
The NFS server is an EMC Data Mover. I'm pretty sure it's on the HCL (same parent company), but I'll have my storage guy check.
There should only be one IP address for the NFS server; we're accessing it via an alias.
I'll have to get more info on the last two questions from my storage and ESX guys.
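While you're gathering that, running esxtop on each host during a backup window will show whether a NIC is saturating. In ESX 3.x the interactive views are switched with single keys (a sketch of what to look at, not a full esxtop walkthrough):

```shell
# On the ESX 3.5 service console, while a backup is running:
esxtop
# then press:
#   n  - network view: per-vmnic transmit/receive rates
#   d  - disk adapter view: latency and throughput per adapter
#   c  - CPU view (the default)
# Sustained rates pinned near a vmnic's link speed in the network view
# point at the wire, rather than the Data Mover, as the bottleneck.
```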