thomasross
Enthusiast

virtual networking design with NFS storage

We are installing an ESXi 4 cluster using HP ProLiant 380 G6s, each with two 1 Gb adapters and two Intel 10 Gb adapters.

The servers will be connected to Nexus 5000 switches.

And the storage will be NFS datastores on two NetApp filers.

We plan on using the 1 Gb adapters for the management network and vMotion in an active/standby configuration, each on its own VLAN.

The 10 Gb Intel adapters will handle the VM port groups and the IP storage VMkernel port(s).

We plan on using a vSwitch for the service console and vMotion, and a vDS for everything else.

The vDS will be connected to the two 10 Gb adapters, and the vSwitch to the two 1 Gb adapters.

We want to use Originating Port ID for teaming on both the vSwitch and the vDS in an active/standby configuration. We'll have the VM port groups using one 10 Gb port and the IP storage using the other.
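
To make that concrete, here's a minimal sketch of the layout I have in mind (switch names, vmnic numbers, and VLAN IDs are all made up for illustration, not pulled from the hosts):

```python
# Rough model of the planned layout (all names, vmnic numbers and VLAN IDs are
# hypothetical). Active/standby per port group is what pins each traffic type
# to one physical NIC while keeping the other as failover.
planned_layout = {
    "vSwitch0": {   # 2 x 1 Gb uplinks
        "Management": {"active": ["vmnic0"], "standby": ["vmnic1"], "vlan": 10},
        "vMotion":    {"active": ["vmnic1"], "standby": ["vmnic0"], "vlan": 20},
    },
    "dvSwitch0": {  # 2 x 10 Gb uplinks
        "VM-Networks": {"active": ["vmnic2"], "standby": ["vmnic3"], "vlan": 30},
        "NFS-Storage": {"active": ["vmnic3"], "standby": ["vmnic2"], "vlan": 40},
    },
}

def uplink_for(switch, portgroup, failed=frozenset()):
    """Return the uplink a port group uses: first healthy active NIC, else standby."""
    pg = planned_layout[switch][portgroup]
    for nic in pg["active"] + pg["standby"]:
        if nic not in failed:
            return nic
    raise RuntimeError("no healthy uplink left for " + portgroup)

print(uplink_for("dvSwitch0", "NFS-Storage"))                     # vmnic3
print(uplink_for("dvSwitch0", "NFS-Storage", failed={"vmnic3"}))  # falls back to vmnic2
```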

We'll have 7 datastores to start with, but that will soon double or triple. We'll be accessing between 5-10 TB of data.

I understand that per host there will be one TCP/IP connection for each datastore.

My question is: is there a benefit to using multiple VMkernel ports for IP storage, each on their own subnet or VLAN? Will I get better throughput if I use more than one VMkernel port and additional subnets or VLANs?
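
For context, my working model (an assumption on my part, not something I've confirmed in a VMware doc) is that the VMkernel simply picks the vmknic whose subnet matches the NFS target address, so extra VMkernel ports would only matter if the filer also has addresses on those extra subnets. A rough sketch with made-up addresses:

```python
import ipaddress

# Hypothetical VMkernel ports and NFS datastore targets (all addresses invented).
vmknics = {
    "vmk1": ipaddress.ip_interface("192.168.10.11/24"),  # storage subnet A
    "vmk2": ipaddress.ip_interface("192.168.20.11/24"),  # storage subnet B
}

datastore_targets = {
    "ds01": ipaddress.ip_address("192.168.10.50"),  # filer address on subnet A
    "ds02": ipaddress.ip_address("192.168.10.50"),
    "ds03": ipaddress.ip_address("192.168.20.50"),  # filer alias on subnet B
}

def vmknic_for(target):
    """Pick the vmknic whose subnet contains the target (first vmknic as a
    stand-in for 'follow the default route' when nothing matches)."""
    for name, iface in vmknics.items():
        if target in iface.network:
            return name
    return next(iter(vmknics))

for ds, ip in datastore_targets.items():
    print(ds, "->", vmknic_for(ip))
# ds01 -> vmk1, ds02 -> vmk1, ds03 -> vmk2
# A second vmknic on the *same* subnet as the filer wouldn't change anything;
# a second subnet only helps if the filer exports datastores on that subnet too.
```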

Any other advice will be greatly appreciated.

Thanks in advance!!

Tom

jbruelasdgo
Virtuoso

This can point you in the right direction: http://virtualgeek.typepad.com/virtual_geek/2009/06/a-multivendor-post-to-help-our-mutual-nfs-custom...

Please award points if the answer is helpful/correct.

kind regards

Jose B Ruelas

http://aservir.wordpress.com

Stu_McHugh
Hot Shot

Check the vendor's recommendations in the last post. I have a NetApp SAN, and for my configuration they recommend a round-robin configuration on the connected NICs. Also be careful with the sizes of your NFS volumes. If you make them too large you could get lots of file locking, but too small makes them more difficult to manage. Back to the recommendation documentation!

Stuart

Please award points to any useful answer.
thomasross
Enthusiast

Thanks, Jose,

This has tons of great information and will be very helpful in designing the storage. But I'm still puzzled about how many VMkernel ports I should use for NFS storage.

Should I use multiple NFS IP vmkernel ports if all the NFS IP traffic is going out one vmnic?

The NIC is an Intel 10 Gb adapter connected to a Nexus 5000 switch. All the VM port traffic and NFS IP storage will be going out the two vmnics associated with this 10 Gb adapter.

The vendors' recommendations conflict. Cisco recommends EtherChannel, which requires IP Hash. Intel recommends an active/standby configuration with all the VM port groups tied to one vmnic and the NFS IP storage to the other, which of course rules out EtherChannel and requires Port ID. NetApp shows how to configure the virtual networking using either EtherChannel with IP Hash or non-EtherChannel with active/standby, but assumes you have two vmnics fully dedicated to storage... I don't.
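
For my own sanity, here's how I read the difference between the two policies (the exact hash ESX uses is my assumption, so treat this as a sketch rather than VMware's documented formula):

```python
import ipaddress

def ip_hash_uplink(src_ip, dst_ip, uplink_count):
    """IP Hash as I understand it: a hash of the source/destination pair picks
    the EtherChannel member, so each vmknic-to-filer-IP pair sticks to one
    link, but different pairs can land on different links. The XOR below is an
    assumption, not the documented algorithm."""
    s = int(ipaddress.ip_address(src_ip))
    d = int(ipaddress.ip_address(dst_ip))
    return (s ^ d) % uplink_count

# Hypothetical: one vmknic, a filer with two IP aliases, two 10 Gb uplinks.
for filer_ip in ("192.168.10.50", "192.168.10.51"):
    print(filer_ip, "-> uplink", ip_hash_uplink("192.168.10.11", filer_ip, 2))

# With Originating Port ID there is no per-pair hashing: the vmknic is a single
# virtual port, so all of its NFS sessions ride whichever uplink is active.
```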

The best practice guide for this adapter recommends sending all the VM port group traffic out one vmnic and the IP storage out the other, using active/standby on the vmnics. I understand that, and understand this also eliminates EtherChannel.

What I do not know is whether I will be better off using more than one VMkernel port for the IP storage.

The storage consists of two NetApp FASes containing, at the moment, 7 NFS datastores, each averaging about 1 TB.

The NFS storage section in the latest NetApp/VMware best practices guide recommends either EtherChannel with IP Hash and one VMkernel port, or non-EtherChannel using two NICs configured for active/standby, Port ID, and two VMkernel ports on different subnets. But it assumes you have two vmnics dedicated to storage.

But NetApp does not provide an answer for my question.

I will use non-EtherChannel active/standby, Port ID, and tie the VM port groups to one vmnic and the NFS storage to the other. This will provide network segmentation and failover, and should have adequate bandwidth.

My question remains unanswered by all vendors... Should I have more than one VMkernel port for the NFS traffic if all the NFS traffic is going out one vmnic to two FASes and multiple datastores?

Thanks again!

Tom

thomasross
Enthusiast

Thanks, Stu.

I understand. I know EtherChannel for NFS storage would be a close equivalent to a SAN round-robin configuration.

My dilemma is that EtherChannel would work great if I had two vmnics dedicated to NFS IP storage. But I do not. I have one Intel 10 Gb adapter that will be used for both VM port group traffic and NFS IP storage traffic.

Why not use EtherChannel for both VM port group traffic and NFS IP storage traffic, you ask? Because the first shot at this crashed and burned using that configuration, and the 10 Gb vendor (Intel) recommends active/standby with one vmnic dedicated to VM port group traffic and the other to NFS storage traffic.

Please see my previous post. Cisco and Intel offer conflicting solutions. And NetApp assumes I have two vmnics dedicated to NFS IP storage. But I do not.

My question lies unanswered... one VMkernel port for NFS storage, or multiple VMkernel ports?

Thanks again! I appreciate your response.

Tom
