caustic386
Contributor

Opinions on iSCSI vs NFS with VNXe 3100 & Catalyst 2960-S?


I'm setting up my first SAN, a VNXe 3100 with 2 NICs per SP.  We chose a stack of Cisco 2960-S switches for network-level redundancy, and during the planning phase I had chosen to use NFS.  The VNXe does not support deduplication with iSCSI (does anybody?), and performance is about equal for this environment.  It's 3 hosts with about 15 production VMs; I expect a utilization rate of about 20% per host based on statistics gathered from the current production environment.

The glitch is that the 2960-S is limited to 6 port-channels, even in a stack!  In the beginning, my plan was straightforward enough, and a sales engineer confirmed it: create port-channels on each host for vMotion/HA, VM traffic, and storage traffic, each consisting of 2 gigabit NICs in a dedicated VLAN.  But now that I only have 6 port-channels to work with, what's the best solution?  I'd like to go with NFS if possible, but I can't figure out a good way to provide high availability and/or load balancing at the network level (yes, I know the effectiveness of IP hashing in a port-channel is debatable).
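For context, each static port-channel consumed on the 2960-S stack looks roughly like the sketch below (interface numbers and the VLAN ID are invented for illustration; note that a vSphere standard vSwitch with IP-hash teaming expects a static channel-group, "mode on", rather than LACP):

```
! Hypothetical config: one cross-stack port-channel for a host's
! storage uplinks, one member port per stack switch.
port-channel load-balance src-dst-ip
!
interface range GigabitEthernet1/0/1, GigabitEthernet2/0/1
 switchport mode access
 switchport access vlan 100
 channel-group 1 mode on
!
interface Port-channel1
 switchport mode access
 switchport access vlan 100
```

With three hosts and three traffic types, nine such channels would be needed before even counting the SAN-facing links, which is how the six-channel limit bites.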

In the past, I have set up iSCSI multipathing in test environments with good results, but it is a little more complex than I'd like to get for such a small environment, and we'll lose dedupe. 

Back to the original question - is it possible to do NFS, highly available, without link aggregation?  I'm referring to every piece of the stack - host, network, and SAN.  Is there another method you would recommend, and if so, why?

A few thoughts I had:

Would it be better to put the vMotion/HA NICs on access ports with 1 NIC in standby, and use the port-channels for NFS instead?  Once the environment is fully migrated, I expect vMotion will only occur during failures and maintenance periods.

If I assign an IP to an NFS store on SP A, and it fails, will SP B remain passive until a failure and then assume control of that particular IP/share?  Or will the NFS store show up twice in my datastores list?

Thanks for your input!


Accepted Solutions
chriswahl
Virtuoso

Here is my bad attempt at taking a picture to help visualize this (I had to redact the names and IPs):

MGMT uses vmnic1 as primary, vmnic5 as backup. It's on VLAN 125.

vMotion uses vmnic5 as primary, vmnic1 as backup. It's on VLAN 126.

Both vmnic1 and vmnic5 are trunked at the physical switch level to allow both VLANs 125 and 126.

[Attachment: network_mgt.png]

VCDX #104 (DCV, NV) ஃ WahlNetwork.com ஃ @ChrisWahl ஃ Author, Networking for VMware Administrators

6 Replies
chriswahl
Virtuoso

Hi there,

Create port-channels on each host for vMotion/HA, VM traffic, and storage traffic

No need for port channels on vMotion, HA, or VM traffic. It doesn't really do anything beneficial, and adds a lot of complexity. Just throw them in different VLANs for broadcast segregation and you're good to go.

I'd like to go with NFS if possible, but I can't figure out a good way to provide high availability and/or load balancing at the network level (yes I know the effectiveness of IP hashing is debatable in a port-channel).

In the past, I have set up iSCSI multipathing in test environments with good results, but it is a little more complex than I'd like to get for such a small environment, and we'll lose dedupe. 

Back to the original question - is it possible to do NFS, highly available, without link aggregation?  I'm referring to every piece of the stack - host, network, and SAN.  Is there another method you would recommend, and if so, why?

I wrote an article on this very topic. Basically, the answer is no because of the way VMware works. You either use a port channel (etherchannel) or throw in some Nexus 1000Vs and use LACP. There is a third option using VIFs and load-based teaming, but I found it to be too complicated.

http://wahlnetwork.wordpress.com/2011/06/08/a-look-at-nfs-on-vmware/

Would it be better to put the vMotion/HA NICs on access ports with 1 NIC in standby, and use the port-channels for NFS instead?  Once the environment is fully migrated, I expect vMotion will only occur during failures and maintenance periods.

If I assign an IP to an NFS store on SP A, and it fails, will SP B remain passive until a failure and then assume control of that particular IP/share?  Or will the NFS store show up twice in my datastores list?

I tend to use a pair of NICs in a team, with one port group for Management and the other for vMotion. Each port group should have the opposite uplink as primary, with the other as standby (so, for example, vmnic0 is primary for vMotion but standby for Management, and vmnic1 vice versa). There's no advantage to having them on a port channel.
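On an ESXi host, this crossed active/standby layout can be expressed from the command line roughly as follows. This is a sketch only: the vSwitch, port group, vmnic names, and VLAN IDs are hypothetical, and the syntax shown is the ESXi 5.x standard-switch esxcli namespace:

```
# Create the two port groups on one standard vSwitch.
esxcli network vswitch standard portgroup add \
    --vswitch-name vSwitch0 --portgroup-name Management
esxcli network vswitch standard portgroup add \
    --vswitch-name vSwitch0 --portgroup-name vMotion

# Tag each port group with its VLAN (VST).
esxcli network vswitch standard portgroup set \
    --portgroup-name Management --vlan-id 125
esxcli network vswitch standard portgroup set \
    --portgroup-name vMotion --vlan-id 126

# Opposite failover order on each port group; no port channel involved.
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name Management --active-uplinks vmnic0 --standby-uplinks vmnic1
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name vMotion --active-uplinks vmnic1 --standby-uplinks vmnic0
```

Because each uplink is active for one port group and standby for the other, both NICs carry traffic in normal operation, and either can fail without an outage.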

Your storage should be configured for takeover: if SP A fails, SP B takes over the IP of the failed controller and the session is re-established against SP B. You will not see the datastore twice. VMware only reaches the datastore by IP; it has no idea which SP is servicing the volume.

Hope this helps.

caustic386
Contributor

Thanks for that insight - it gives me quite a bit to think about.

No need for port channels on vMotion, HA, or VM traffic. It doesn't really do anything beneficial, and adds a lot of complexity. Just throw them in different VLANs for broadcast segregation and you're good to go.

I think I might be getting confused when we discuss port groups vs port channels.  How can they be in different VLANs when configured as access ports, yet still rely on each other for primary/standby configurations?

I tend to use a pair of NICs in a team, with one port group for Management and the other for vMotion. Each port group should have the opposite uplink as primary, with the other as standby (so, for example, vmnic0 is primary for vMotion but standby for Management, and vmnic1 vice versa). There's no advantage to having them on a port channel.

When you say 'team', are you referring again to the active/standby configuration?  I'm confused because you say there's no advantage to having them in a port-channel, even though you have them teamed.

Also, I plan on implementing jumbo frames for storage and vMotion traffic - I don't think you can set jumbo frames per VLAN, only per vSwitch?  So that idea, as originally described, might not work out.
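For reference, on a standard vSwitch the MTU is indeed set per vSwitch and per VMkernel interface, not per VLAN; something like the sketch below (vSwitch1 and vmk1 are hypothetical names, using the ESXi 5.x esxcli syntax):

```
# Sketch only: jumbo frames are configured per vSwitch and per
# VMkernel interface, never per VLAN.
esxcli network vswitch standard set --vswitch-name vSwitch1 --mtu 9000
esxcli network ip interface set --interface-name vmk1 --mtu 9000
```

The 2960-S side would also need jumbo frames enabled globally (system mtu jumbo 9000, which only takes effect after a reload).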

chriswahl
Virtuoso

Port channel is a term for EtherChannel (switch ports bonded together into one logical interface). Port group is a VMware term.

I think I might be getting confused when we discuss port groups vs port channels.  How can they be in different VLANs when configured as access ports, yet still rely on each other for primary/standby configurations?

Trunk the ports on the physical switch, then use VST (virtual switch tagging) on the VMware port groups. This just means assigning a VLAN to a port group, easily set in the GUI: set the VLAN type to "VLAN" and the ID to the VLAN number. If you have a lot of VLANs, you can do what's called "pruning" on the trunk (so that it only accepts specific VLANs).
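On the physical switch side, such a trunk with a restricted allowed-VLAN list might look like this (the interface and VLAN numbers are invented for illustration):

```
! Hypothetical host-facing trunk on the 2960-S. Only the VLANs the
! host actually needs are allowed ("pruning" the trunk).
interface GigabitEthernet1/0/10
 switchport mode trunk
 switchport trunk allowed vlan 125,126
 spanning-tree portfast trunk
```

The ESXi port groups then tag their own VLAN IDs (125 and 126 here), so no access-port configuration is needed on these uplinks.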

When you say 'team', are you referring again to the active/standby configuration?  I'm confused because you say there's no advantage to having them in a port-channel, even though you have them teamed.

Also, I plan on implementing jumbo frames for storage and vMotion traffic - I don't think you can set jumbo frames per VLAN, only per vSwitch?  So that idea, as originally described, might not work out.

Teaming is the policy on the vSwitch or port group that determines how the uplinks are combined: by virtual port ID, MAC address, IP hash, or load-based. It's also where you set the failover order (active, standby, and unused).

I wouldn't bother with jumbos. I tried them on both my NFS 10GbE and 1GbE networks and saw no benefit at all, and had all sorts of performance problems until I turned them off.

caustic386
Contributor

This raises an interesting point - we are using the "old fashioned" virtual switch.  Are distributed switches required for the config you're describing?

chriswahl
Virtuoso

No, I just don't have any vSwitches to show you.

VCDX #104 (DCV, NV) ஃ WahlNetwork.com ஃ @ChrisWahl ஃ Author, Networking for VMware Administrators