thomasross
Enthusiast

Load balancing with NFS?

If I have two NetApp filers with four 1 TB NFS datastores each, and say nine ESX 4 servers that need to access both filers, can I use a separate VLAN for each datastore? What are the pros and cons of this?

Would I be better off putting each filer on its own VLAN and just maintaining two VLANs?

I understand that in both cases I still have one TCP/IP connection for each NFS datastore. But in the first option I could use one vmkernel port per datastore, so each host would have 8 vmkernel ports dedicated to NFS.

Thanks!

Tom

AntonVZhbankov
Immortal

>can I use a separate vlan for each datastore? What are the pros and cons of this?

No need. Just separate NFS traffic from VM traffic by putting it into a "Storage VLAN".


---

MCSA, MCTS, VCP, VMware vExpert '2009

http://blog.vadmin.ru

RussellCorey
Hot Shot

Let's assume you've got 4 physical NICs per ESX host for the sake of discussion, and that we aren't using IP hashing.

Now, you could create 1 VLAN per datastore (requiring you to use 802.1q trunking), but this limits your NIC failover options a bit. What we've done with a couple of our customers (as outlined in NetApp TR-3749) is use 1 VLAN with multiple subnets inside of it. Since the subnets don't need to route anywhere (you don't want to route storage traffic anyway), this should be sufficient.

Then you create 4 vmk ports per ESX host on 1 vSwitch. This vSwitch will have 4 NICs bound to it (in this example vmnic1, vmnic2, vmnic3, vmnic4).

Then for each vmk you use a portgroup override as follows (a scripted version of the same mapping appears after the list):

vmk0 active: vmnic1 standby: vmnic2, vmnic3, vmnic4 IP: 192.168.1.10 mask: 255.255.255.0

vmk1 active: vmnic2 standby: vmnic1, vmnic3, vmnic4 IP: 192.168.2.10 mask: 255.255.255.0

vmk2 active: vmnic3 standby: vmnic1, vmnic2, vmnic4 IP: 192.168.3.10 mask: 255.255.255.0

vmk3 active: vmnic4 standby: vmnic1, vmnic2, vmnic3 IP: 192.168.4.10 mask: 255.255.255.0
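If you're rolling this out to nine hosts, that override table is worth generating rather than typing. Here's a minimal Python sketch that just produces the mapping above; the vmk/vmnic names and the 192.168.x.10 addressing are the example values from this post, and the real settings still get applied through your normal tooling:

VMNICS = ["vmnic1", "vmnic2", "vmnic3", "vmnic4"]

def build_overrides(vmnics, host_octet=10):
    """vmkN is active on its own uplink and standby on all the others."""
    overrides = []
    for i, active in enumerate(vmnics):
        overrides.append({
            "vmk": f"vmk{i}",
            "active": active,
            "standby": [n for n in vmnics if n != active],   # every other uplink
            "ip": f"192.168.{i + 1}.{host_octet}",           # one subnet per vmk
        })
    return overrides

for o in build_overrides(VMNICS):
    print(f"{o['vmk']}  active: {o['active']}  "
          f"standby: {', '.join(o['standby'])}  ip: {o['ip']}/255.255.255.0")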

On your NFS server you create 4 interfaces, with an IP address in each of the above subnets, and export your volumes. When you go to mount, you'll mount round-robin across those IPs (a quick script to generate the mounts follows the list). Something like this:

192.168.1.254:/vol/vol1

192.168.2.254:/vol/vol2

192.168.3.254:/vol/vol3

192.168.4.254:/vol/vol4
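To make the round-robin idea concrete, here's a small Python sketch that pairs each volume with a filer IP and prints the corresponding ESX 4 service-console mount commands. The "_ds" labels are made up, and you should double-check the esxcfg-nas flags against your build:

FILER_IPS = ["192.168.1.254", "192.168.2.254", "192.168.3.254", "192.168.4.254"]
VOLUMES = ["/vol/vol1", "/vol/vol2", "/vol/vol3", "/vol/vol4"]

for i, vol in enumerate(VOLUMES):
    ip = FILER_IPS[i % len(FILER_IPS)]       # round-robin: one datastore per interface
    label = vol.rsplit("/", 1)[-1] + "_ds"   # hypothetical datastore label
    print(f"esxcfg-nas -a -o {ip} -s {vol} {label}")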

Repeat for the other filer/NFS server. What this will do is force NFS to use specific NICs, allowing you to load balance without IP hash while maintaining failover. When you lose, say, half of your network cards, the vmk ports will just share the remaining 2 interfaces until things return, at which point you should be able to fail back (the sketch below walks through that failover).
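Here's a toy simulation of that failover behavior, purely to illustrate the teaming policy above; ESX's teaming code makes the real decision. Note the sharing isn't perfectly even, since each vmk simply walks its standby list in order, but every datastore keeps a live uplink:

VMNICS = ["vmnic1", "vmnic2", "vmnic3", "vmnic4"]

def effective_uplink(vmk_index, failed):
    """First alive NIC in the vmk's failover order: its active uplink, then standbys."""
    active = VMNICS[vmk_index]
    standby = [n for n in VMNICS if n != active]
    for nic in [active] + standby:
        if nic not in failed:
            return nic
    return None  # every uplink is down

failed = {"vmnic1", "vmnic3"}  # lose half the cards
for i in range(len(VMNICS)):
    print(f"vmk{i} -> {effective_uplink(i, failed)}")
# prints: vmk0 -> vmnic2, vmk1 -> vmnic2, vmk2 -> vmnic2, vmk3 -> vmnic4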

Using a VLAN per datastore gives you isolated broadcast domains, but it will require you to use 802.1q VLAN tagging on your vmkernel NICs and on your NFS server (you'll need to use trunk ports). This is because you're going to want to ensure that you still have failover/availability. It also adds more complexity to your network.

IP hash is another option, but you have to use EtherChannel in static mode, which means losing a NIC could effectively down the entire group.

Make sense?

thomasross
Enthusiast

Thanks.

That makes a lot of sense.

Tom
