rickardnobel
Champion

Overhead with software iSCSI vs NFS?

If using NFS to store your virtual machines' VMDK files, can anything be said about the CPU overhead compared to software iSCSI?

With FC or hardware iSCSI it seems most of the work can be offloaded to the HBA, but what about NFS, which must be handled in software? Will it be simpler than iSCSI, since the VMkernel does not have to handle lower-level block access?

My VMware blog: www.rickardnobel.se
4 Replies
vmroyale
Immortal

Hello.

The difference is very minimal. Check out the Comparison of Storage Protocol Performance in VMware vSphere 4 white paper.

Good Luck!

Brian Atkinson | vExpert | VMTN Moderator | Author of "VCP5-DCV VMware Certified Professional-Data Center Virtualization on vSphere 5.5 Study Guide: VCP-550" | @vmroyale | http://vmroyale.com

rickardnobel
Champion

Thanks. I was a bit surprised that the CPU overhead was higher with NFS than with software iSCSI. I would have imagined that block-level access would actually be more demanding.

There is a lot of information about multipathing with iSCSI, but very little about NFS configuration. Can multipathing not be done with NFS, or is it simply not relevant for file-based access?

(I can't even find which document covers the NFS configuration.) :)

My VMware blog: www.rickardnobel.se
vmroyale
Immortal

> Thanks. I was a bit surprised that the CPU overhead was higher with NFS than with software iSCSI. I would have imagined that block-level access would actually be more demanding.

> There is a lot of information about multipathing with iSCSI, but very little about NFS configuration. Can multipathing not be done with NFS, or is it simply not relevant for file-based access?

It's interesting, and it can be accomplished with NIC teaming and IP hash load balancing on the ESX(i) side. It seems that most of the really good information around NFS and VMware is supplied by the storage vendors. NetApp's TR-3428 is a classic example.
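
A rough sketch of that setup, assuming hypothetical names and addresses (vSwitch1, port group "NFS", uplinks vmnic1/vmnic2, vmk1 at 192.168.50.11) and using the esxcli syntax from ESXi 5.x (on ESX(i) 4.x the IP hash policy is instead set in the vSphere Client, on the vSwitch NIC Teaming tab):

# Create a vSwitch for NFS traffic and attach two physical uplinks
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch1

# Add a port group and a VMkernel interface for the NFS traffic
esxcli network vswitch standard portgroup add --portgroup-name=NFS --vswitch-name=vSwitch1
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=NFS
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.50.11 --netmask=255.255.255.0 --type=static

# Switch the teaming policy to "Route based on IP hash"
# (this also requires a static EtherChannel on the physical switch ports)
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch1 --load-balancing=iphash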

> (I can't even find which document covers the NFS configuration.) :)

What information there is about NFS is mostly tucked into the ESXi Configuration Guide.
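
And just for reference, mounting the NFS export itself is a one-liner; the server address, export path, and datastore name below are hypothetical:

# ESXi 5.x syntax
esxcli storage nfs add --host=192.168.50.100 --share=/vol/vmware_nfs --volume-name=nfs_ds01

# Equivalent on classic ESX(i) 4.x
esxcfg-nas -a -o 192.168.50.100 -s /vol/vmware_nfs nfs_ds01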

Brian Atkinson | vExpert | VMTN Moderator | Author of "VCP5-DCV VMware Certified Professional-Data Center Virtualization on vSphere 5.5 Study Guide: VCP-550" | @vmroyale | http://vmroyale.com
rickardnobel
Champion

> It's interesting, and it can be accomplished with NIC teaming and IP hash load balancing on the ESX(i) side.

Is it common for the NFS side to have several IP addresses, so that the load can be spread by the IP hash policy? Or is it easy to publish several IPs on the NAS side to get this, if desired?
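
Something like this is what I am imagining, with hypothetical addresses where the NAS answers on two IP aliases:

# Two datastores mounted against two different IP aliases on the same NAS,
# so the IP hash policy can resolve each mount to a different uplink
esxcli storage nfs add --host=192.168.50.100 --share=/vol/nfs_ds01 --volume-name=nfs_ds01
esxcli storage nfs add --host=192.168.50.101 --share=/vol/nfs_ds02 --volume-name=nfs_ds02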

> It seems that most of the really good information around NFS and VMware is supplied by the storage vendors. NetApp's TR-3428 is a classic example.

Thanks, that seems to be a very interesting document.

> What information there is about NFS is mostly tucked into the ESXi Configuration Guide.

Strange, not even two pages on the subject, compared to the 100-page documents on iSCSI and FC. Of course the configuration is quite simple from the ESX side, but a little more would be nice. :)

My VMware blog: www.rickardnobel.se