Let the games begin! I am sure Mustafa at VMware would argue for VMFS.... :smileygrin:
Who uses NFS? Who uses VMFS? What arguments can be made for each?
I actually make use of both in our environment. We use FCP/VMFS for all of our production and the more conventional test environments, and use NFS for our lower-performance-requirement stuff. In production, that'd be our templates/ISO volume, which we replicate across multiple datacenters; in test, it's VM storage. I don't necessarily think there's a right or wrong here, just what's best for a particular environment. The NFS implementation in the vmkernel in ESX 3.0.x leaves a lot to be desired from an I/O performance standpoint, so you don't want to run any of your high-visibility stuff on it, as many disk-related performance problems can and will be attributed to using NFS.
I use both.
I use NFS on NetApp filers and to be honest I have no complaints.
Performance is great, we use snapshots on the filer so we don't have to worry about VCB, disk I/O is great, and we can resize on the spot.
I can't say there's a big difference in performance between the VMs I have on NFS and the ones running on VMFS.
The one big differentiator I see is thin provisioning...
I admin in a prototyping lab and we are always cloning out VMs for work...
Building a prototyping environment in VM is a bit different than building up a corporate infrastructure...
I still need mail servers, domain controllers, etc... but in addition, I get to keep large groups of VMs from individual efforts around and operational, in case the staff have to reinvestigate a problem they were prototyping against, etc...
As a result, I have LOTS of thick vmfs disks lying around on my SANs, just eating space and being completely idle/powered off.
Before Converter came out, moving these VMs off and converting them back to thin disks was a PITA... Now that Converter is available, it's easy to move the disks off to cheaper storage, and they get written out in thin format as an added bonus.
With NFS, I could keep them on "cheap" storage all the time and not have to shuffle things around.
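For anyone wanting to do the thick-to-thin conversion by hand rather than through Converter, `vmkfstools` on the ESX 3.x service console can clone a disk into thin format. This is just a sketch; the datastore and VM names below are invented, and the VM should be powered off first:

```shell
# Clone a thick-provisioned VMDK to thin format on cheaper storage.
# Datastore/VM paths here are examples only -- substitute your own.
vmkfstools -i /vmfs/volumes/san-lun1/protovm/protovm.vmdk \
           -d thin \
           /vmfs/volumes/nfs-archive/protovm/protovm-thin.vmdk
```

You'd then repoint the VM's configuration at the new disk (or register a copy of the VM from the archive datastore) before deleting the thick original.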
I also don't agree with his SAN performance numbers, as I'm easily pushing 90MB/sec average in/out over 2Gb Fibre Channel from my ESX hosts to the SAN.
On average, my VMs perform disk operations faster than my physical servers, with access to a LOT more files (I run multiple distributed filesystems in a few clusters).
I replied on his site, we'll see if he approves my comment and replies with his testing methods. I fear he had some form of RAID plaiding going on which might have impacted his results.
I see a lot of talk and reports about using ESX with NFS on "specialized" appliances such as the NetApps. But what about using NFS exports on a "regular" Linux server? Does anyone know how that might affect performance compared to running the VMs off of a NetApp device or local storage? Obviously there is going to be some degradation in performance compared to local storage, but how much? Small businesses shouldn't miss out on the benefits of VMotion and DRS just because they can't afford the higher-end appliances. Or has VMware overcome the need for shared storage to use VMotion?
It's supported on a few regular OSs, but not many. Its performance is entirely dependent on the underlying hardware... a pizza box with 2 drives will perform very differently than a pizza box backed by a 700-disk FC SAN.
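For what it's worth, serving NFS to ESX from a plain Linux box doesn't need anything exotic. A rough sketch of the usual starting point (the IP range, hostname, path, and datastore label below are all made up for illustration):

```shell
# On the Linux server: add a line like this to /etc/exports.
# ESX accesses the share as root, so no_root_squash is required;
# sync trades some speed for safety on writes.
#
#   /vmstore  192.168.10.0/24(rw,no_root_squash,sync)

exportfs -ra      # re-read /etc/exports and apply the new export

# On each ESX 3.x host: mount the share as a NAS datastore.
esxcfg-nas -a -o linux-nfs.example.com -s /vmstore nfs-vmstore
esxcfg-nas -l     # list configured NAS datastores to confirm
```

As the posts above note, the resulting performance will be entirely a function of the disks and network behind that export.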
VMotion still requires (and probably always will require) shared storage.