VMware Cloud Community
huff-n-puff
Contributor

VMFS on NFS

Hi

We have just got ESX 3.5 with a NetApp SAN.

Can we use an NFS presentation from the NetApp to put the VM files on, so that it can support vMotion, HA and DRS?

I believe that to support this we need a VMFS volume, but I don't know how to access the NFS export in this way.

Any guidance appreciated, or pointers to guides.

Thanks

11 Replies
weinstein5
Immortal

Yes, you can use NFS to house your VMs, and as long as the NFS datastore is shared between your ESX hosts you will be able to use vMotion, DRS and HA. You do not need VMFS for this functionality; in fact, you cannot place a VMFS datastore on a NAS device.

If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful
JBradford
Contributor

NFS storage is not like iSCSI or Fibre Channel: it can't be formatted as VMFS, because the file system is hosted by a non-VMware server, in your case the NetApp. Basically you will need to create your NFS export on the NetApp and make sure that ESX has read/write (and root) access to it. You will also need to create a VMkernel port on an existing virtual switch, or create a new switch, with access to the VLAN your NAS is using. Finally, you need to create the new datastore from the VI client: on the host's Configuration tab, select Storage (SCSI, SAN, and NFS), click Add Storage, choose Network File System, and fill in the fields with the needed information (NFS server address, folder, datastore name, etc.).
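
If you prefer doing it from the service console instead of the VI client, the rough equivalent looks something like this. The IP addresses, vSwitch, port group and volume names below are only placeholders for your own values, so treat it as a sketch rather than a recipe.

On the NetApp (a persistent export with rw and root access for the ESX VMkernel addresses):

exportfs -p rw=10.0.0.11:10.0.0.12,root=10.0.0.11:10.0.0.12 /vol/vm_nfs

On each ESX host (a VMkernel port group, then the NFS mount itself):

esxcfg-vswitch -A "VMkernel-NFS" vSwitch1
esxcfg-vmknic -a -i 10.0.0.11 -n 255.255.255.0 "VMkernel-NFS"
esxcfg-nas -a nfs_vms -o 10.0.0.50 -s /vol/vm_nfs

The Add Storage wizard in the VI client does the same NFS mount step through the GUI; the VMkernel port is created under the Networking section either way.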

You may not want to use this for high-I/O servers, as NFS is not as fast as either iSCSI or SAN storage, but it is great for low-priority VMs. As far as vMotion, HA and DRS go, I've never personally tried that. I would think that you could do it as long as all ESX hosts have access to the NFS share. I'd have to say test before deploying, though! ;)

huff-n-puff
Contributor

OK, you have confused me somewhat. I have read many documents saying that NFS is in fact better than the other choices, as it has much lower latency, better scalability, etc.

But that aside, we do have the NFS export visible to all of the hosts, and we can put the VM files on it, but vMotion did not work when we tried to manually migrate from one host to another.

If you can't use NFS for the actual VMs, then there is little left for it other than ISOs and other files.

Am I just asking the wrong question?

khughes
Virtuoso

Are the NFS shares presented to the ESX hosts exactly the same? Same mount location with the same case-sensitive datastore name? I have used NFS for our R&D area and was able to vMotion around using the NFS share, so it works.

Also make sure you have a VMkernel connection on both ESX hosts to talk to the NFS shares.
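
A quick way to double-check both of those from the service console on each host (the filer IP here is just an example):

vmkping 10.0.0.50     # proves the VMkernel port can actually reach the filer
esxcfg-nas -l         # lists the NFS mounts; label, host and share should match on every host

If the names or paths differ between hosts, the vMotion validation step will normally fail on the datastore check.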

As for NFS vs. iSCSI/FC, I would go with iSCSI or FC just about any day for performance reasons.

  • Kyle

-- Kyle "RParker wrote: I guess I was wrong, everything CAN be virtualized "
weinstein5
Immortal

What error do you receive when you try to vMotion? Do you make it through the validation? Storing your VM on an NFS datastore will not prevent it from vMotioning. Confirm the questions Kyle has asked.

In terms of NFS vs. SAN, with the underlying disks/hardware being the same, FC SAN should give you the best performance for your shared data storage.

If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful
qmcnetwork
Enthusiast

Just my 2 cents, but we've had great performance with NFS vs. iSCSI. In fact, we just switched a small VI3 environment (3 ESX servers) over from an iSCSI target to a Solaris NFS share. The ESX servers are in an HA/DRS cluster and we vMotion/Storage vMotion between NFS shares and local storage just fine.

With just a single 1 Gb NIC we see up to 76 MB/s reads and 45-50 MB/s writes. With NetApp arrays my understanding was that NFS performance was faster than iSCSI in most cases, plus you get the added benefits of sparse disks, being able to back up files directly without needing the service console to mount VMFS, and support for more VMs per storage volume because there is less of the I/O contention that comes with VMFS.

I agree that for now at least it's probably not where you want your Tier 1 production VMs, but it's hard to argue with the value and convenience.

Jonathan

huff-n-puff
Contributor

All, I will check the settings and report back. As I am in the UK, I will have to wait until tomorrow.

One thing, though: I did have to set up the NFS storage on each host machine individually. Is that the correct method, or should it be done another way?

M

qmcnetwork
Enthusiast

Yes. I believe you define the NFS storage on each ESX server.

You just need the host, share name and description. Keep these uniform between systems.
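
For what it's worth, we just run the identical service-console line on every host so the name, filer address and export path line up (the values here are only examples):

esxcfg-nas -a nfs_vms -o 10.0.0.50 -s /vol/vm_nfs

As Kyle said above, if the datastore name differs by even one character, or by case, the hosts will treat it as two different datastores.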

Jonathan

weinstein5
Immortal

That is correct. It needs to be configured on each ESX host.

If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful
team-ip-service
Enthusiast

Hi,

I guess the myth that iSCSI/FC (VMFS) is faster (the best choice) than NFS will die out in the future. :)

I think that most VMware guys prefer FC, accept iSCSI, and don't believe in NFS.

But our experience shows that NFS (on NetApp) is as fast as iSCSI and more flexible than VMFS datastores, and so on. In fact, NFS could even be faster than VMFS because of the single I/O queue per VMFS LUN.

As always it depends on your requirements.

Markus

team-ip-service
Enthusiast

Hi huff,

just a short brainstorm.

NFS with NetApp is a good choice.

I was on a NetApp VMware course, and at that time we already had NFS in place. After the course I knew that we had made a good decision.

If you want to use NetApp Snapshots, you will lose more disk space with VMFS LUNs (iSCSI/FC) than with NFS, at least if you stay with the defaults. I can't remember if you can adjust the fractional reserve (default 100%) for LUNs.
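
If it is adjustable, it should be something along these lines on the filer; the volume name is just an example, so check your ONTAP release and the NetApp docs before relying on it:

vol options vmfs_vol
vol options vmfs_vol fractional_reserve 0

The first line shows the current volume options (including fractional_reserve), the second drops the overwrite reserve for that volume. With NFS you don't have to think about that setting at all, which is part of why the Snapshot space overhead is lower.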

The single I/O queue per LUN will push you towards smaller LUNs (not so many VMs per LUN), but you can only have 256 LUNs per ESX host. If you use a cluster, you can only have 256 LUNs per cluster.

It took the trainer about three days to cover most of the advantages and disadvantages of NFS, so I would be a fool to believe I can summarize it all in a post.

My point is: ask yourself what is important to you, look at which of the protocols fulfils your requirements, and then decide what to use.

You could also simply jump into the water, as we did, and just pick one protocol. We started with iSCSI and moved to NFS.

Pro NFS

  • Thin provisioning

  • Better Snapshots with NetApp

  • Flexible storage planning: extend/reduce volume size on the fly

  • No single I/O queue

  • Fewer volumes. With VMFS you are supposed to make more, smaller volumes because of the single I/O queue.

Con NFS

  • VMware snapshot creation/deletion is slower

  • Security, especially if you use the wrong setup on the NetApp. But even with a correct setup, I'm of the opinion that NFS is not as secure as FC.

Markus

PS

http://viroptics.blogspot.com/2007/11/why-vmware-over-netapp-nfs.html
