Hi Guys,
A few questions:
1. For those of you who are familiar with NFS storage: would you recommend using HBA cards to offload the ESX host CPUs from TCP/IP traffic, or will regular NICs do just fine?
2. Did anybody do an actual comparison and witness a difference when using HBAs?
3. What HBAs would you recommend for Dell 2950s accessing NFS volumes on a NetApp 3040? Make / model / approx. price, please.
4. Are iSCSI HBAs and NFS HBAs the same thing?
Credits will be given for helpful answers! :smileycool:
Thanks,
Alex
I re-ordered the questions with answers.
1. For those of you who are familiar with NFS storage: would you recommend using HBA cards to offload the ESX host CPUs from TCP/IP traffic, or will regular NICs do just fine?
NFS is different from iSCSI. Normally no special cards are required for NFS, but it does require VMkernel ports, and a good design offloads the VMkernel traffic to a dedicated NIC. Given that, you might want to add an Intel quad-port NIC from Dell when you buy your servers. I have a FAS 3040 with Dell 2950s, and we just use the Intel quad-port NICs we bought with the Dells. We have 10 TB of NFS storage spread across 10 ESX nodes (5 cluster pairs), and we haven't seen any performance issues.
NFS storage is easily added to an ESX server, and it has advantages over iSCSI for management and overhead offload (the NetApp leverages NFS very well). For most situations, I prefer NFS.
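As a rough sketch of the setup described above, using the ESX 3.x service-console commands (the vSwitch name, port group name, IP addresses, filer hostname, and export path below are all hypothetical examples, not anything from the thread):

```shell
# Create a port group for VMkernel (NFS) traffic on an existing vSwitch
# (vSwitch1 is assumed to carry the dedicated storage NIC)
esxcfg-vswitch -A "NFS-VMkernel" vSwitch1

# Create the VMkernel NIC on that port group with a storage-network IP
esxcfg-vmknic -a -i 192.168.10.11 -n 255.255.255.0 "NFS-VMkernel"

# Mount the NetApp export as an NFS datastore
# (filer hostname and export path are example values)
esxcfg-nas -a -o netapp3040 -s /vol/vmware_ds1 nfs_datastore1

# List NFS datastores to verify the mount
esxcfg-nas -l
```

These commands only run on an ESX 3.x host itself; on a multi-NIC design like the one above, the point is that the VMkernel port group sits on a vSwitch whose uplink is separate from the VM traffic NICs.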
4. Are iSCSI HBAs and NFS HBAs the same thing?
No. An iSCSI HBA offloads iSCSI processing from the ESX host and makes ESX think it has direct-attached storage. You can also use TCP offload (usually built into a NIC) with ESX, which improves over-the-wire performance. I have seen a small increase in throughput, with a decrease in ESX overhead, on a loaded system using this, but that was under test loads. My production systems aren't loaded heavily enough to see the difference.
That said, I have never heard of a card billed as an "NFS offload" HBA, but I don't know everything that is out there.
3. What HBAs would you recommend for Dell 2950s accessing NFS volumes on a NetApp 3040? Make / model / approx. price, please.
I don't use iSCSI HBAs, so I can't help with that. I have used the iSCSI initiator built into the ESX kernel and have had no issues. We were using Dell 2950s with the Intel quad-port NICs bought with the servers and a Dell AX150 iSCSI array, and had no performance issues.
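For reference, enabling the software iSCSI initiator built into the ESX 3.x kernel looks roughly like this from the service console (the target IP and the vmhba number are hypothetical examples; the software initiator's vmhba number varies by host):

```shell
# Enable the software iSCSI initiator in the ESX kernel
esxcfg-swiscsi -e

# Add the array's iSCSI portal as a discovery (send targets) address
# on the software initiator's adapter (IP and vmhba40 are examples)
vmkiscsi-tool -D -a 192.168.20.5:3260 vmhba40

# Rescan the adapter so newly presented LUNs show up
esxcfg-rescan vmhba40
```

No extra hardware is involved here; the trade-off versus an iSCSI HBA is that the ESX host CPU does the iSCSI and TCP/IP work.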
2. Did anybody do an actual comparison and witness a difference when using HBAs?
I haven't, but the offload benefit for iSCSI HBAs is pretty well documented out there.
-
-Andrew Stueve
Hello,
As of this moment, VMware VI3 does not support an NFS HBA. I'm not even sure one exists, as you generally use normal Ethernet cards for NFS.
VI3 supports only a few iSCSI HBAs. iSCSI and NFS are definitely not the same thing.
Best regards,
Edward L. Haletky
VMware Communities User Moderator
====
Author of the book 'VMware ESX Server in the Enterprise: Planning and Securing Virtualization Servers', Copyright 2008 Pearson Education.
CIO Virtualization Blog: http://www.cio.com/blog/index/topic/168354
As well as the Virtualization Wiki at http://www.astroarch.com/wiki/index.php/Virtualization
Thank you both, for the great info!
This answered my questions on what is needed to move forward with implementing VI with NFS.
Good Day,
Alex