unsichtbare
Expert

NSX Design for IP Storage (iSCSI)


Hi all,

I was just considering the implications of VMware NSX on IP Storage (iSCSI) Design.

If we create three clusters: Edge, Mgmt., and Compute, each in its own rack:

  • Each cluster/rack is associated with (2) 10Gb Leaf Switches
  • There are two (2) 10Gb Spine switches for the installation
  • Each Leaf switch is connected to each spine switch (Leaf port 47 to Spine switch #1 and Leaf port 48 to Spine switch #2)
  • 10Gb iSCSI SANs are provisioned in the Mgmt. cluster/rack and connected to the 10Gb Leaf switches in the Mgmt. cluster/rack

How then do we facilitate IP storage for the ESXi hosts which are not located in the same rack as the SAN?

Let's say we define VLAN 99 for IP storage (per Figure 4, NSX Design Guide v2.1) and follow the recommendation that "VLAN trunking on the link between leaf and spine is not allowed" (p. 65, NSX Design Guide v2.1). The inevitable conclusion is that the SAN would not be accessible from the Edge or Compute cluster/rack.

How can we define an exception that allows us to trunk VLAN 99 (no gateway, no compute, not routable) up to the Spine, so IP storage is accessible from all ESXi hosts?

THX in ADV!

+The Invisible Admin+ If you find me useful, follow my blog: http://johnborhek.com/

Accepted Solutions
hs77
Enthusiast

Here you have to use IP routing for inter-rack communication, configured at the ESXi host level.

For example, on rack 1 the VMkernel interface for storage is 10.77.1.10 in VLAN 77.

On the ToR leaf switch we terminate this VLAN as an SVI with the IP 10.77.1.1.

For inter-rack IP storage communication, we add the following command on the rack 1 hosts:

esxcli network ip route ipv4 add -n 10.77.0.0/16 -g 10.77.1.1

(Note: the VMkernel interfaces for hosts in different racks will be in different subnets, so in rack 2 the VMkernel interface for storage will be 10.77.2.10.)

(Source: NSX Design Guide v2.1, pp. 79-80)
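Putting that scheme side by side for both racks, the host-level config might look like the sketch below. The addressing follows the example above; the rack 2 SVI address (10.77.2.1) is an assumption by symmetry, not something stated in the design guide excerpt.

```shell
# Rack 1 hosts: storage VMkernel is 10.77.1.10/24 in VLAN 77, leaf SVI is 10.77.1.1
esxcli network ip route ipv4 add -n 10.77.0.0/16 -g 10.77.1.1

# Rack 2 hosts: storage VMkernel is 10.77.2.10/24, leaf SVI assumed to be 10.77.2.1
esxcli network ip route ipv4 add -n 10.77.0.0/16 -g 10.77.2.1

# Verify the route was installed on the host
esxcli network ip route ipv4 list
```

The /16 summary route covers every per-rack storage /24, so one static route per host reaches storage targets in any rack via the local leaf SVI.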

12 Replies
RussH
Enthusiast

Hi,

I think you have two options:

1)     Connect the iSCSI target to a leaf, put a gateway on the target, and then route the iSCSI traffic across the network (after adding VMkernel routes on the ESXi hosts as per the design doc). You'll need to ensure you have sufficient spine/leaf interconnects for the storage traffic you expect.

2)     Connect an interface from the iSCSI target to each of the leafs that need iSCSI.
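With either option, it's worth verifying reachability and path MTU from a host before presenting datastores. A quick check might be the following (the VMkernel interface name vmk1 and the target address are assumptions for illustration):

```shell
# Ping the iSCSI target from the storage VMkernel interface (vmk1 is an assumed name)
vmkping -I vmk1 10.77.2.10

# Check the path MTU: -d sets don't-fragment, -s is the ICMP payload size
# (1472-byte payload + 28 bytes of headers = a full 1500-byte packet)
vmkping -I vmk1 -d -s 1472 10.77.2.10
```

If the don't-fragment ping fails while the plain ping succeeds, an MTU mismatch somewhere on the routed path is the likely culprit.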

unsichtbare
Expert

Thanks for the replies.

It seems like using VLANs to aggregate iSCSI traffic would provide more paths. I am struggling with why VMware specifies no VLAN trunking between Leaf and Spine.

hs77
Enthusiast

It would be great if VMware could publish a deployment guide for Design Guide 2.1 that uses the L3 leaf-spine architecture.

NimishDesai
VMware Employee

Hmm, the design guide does cover leaf-spine architecture. What specifics do you have in mind?

Nimish

hs77
Enthusiast

Nimish, it would be great if you could publish config details for each ToR leaf switch as well as the spine switches for the case of L3 between leaf and spine.

It doesn't matter whether the switch is Arista, Juniper, Cisco, or Cumulus; there is a lot of confusion around this.

The best option would be a deployment guide covering the config for each compute cluster, management cluster, and edge cluster, plus all the config details of the leaf and spine switches.

This would help us communicate clearly with our networking team.

It would also help the NSX team understand how routing from the compute cluster to the storage system happens.

NimishDesai
VMware Employee

That is a fair point. We can put out an addendum to cover the config best practices.

thx

Nimish Desai | desain@vmware.com | +19193062270 VMware Inc.

unsichtbare
Expert

I think my question at this point is: If you do use L3 between Leaf and Spine, does it break the NSX configs?

I am concerned because NSX goes outside the IEEE 802.1 specifications (as I am aware of them) in both frame size and header content.

NimishDesai
VMware Employee

Hi,

The short answer is that it does not break anything. Each ToR gets its own subnet/VLAN for all the traffic you want to carry. The VLAN has local significance, except that for VXLAN traffic you have to use the same VLAN ID on all the L3 ToRs.

The MTU requirement is 1600 bytes (actually slightly less) for all the links in the fabric, so for consistency 1600 is the recommended minimum MTU.
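On the host side, that MTU requirement can be sketched with esxcli as below (the vSwitch and VMkernel interface names are assumptions; the physical leaf and spine links must be raised to at least 1600 separately, on the switches themselves):

```shell
# Raise the MTU on the standard vSwitch carrying VXLAN traffic (vSwitch0 is an assumed name)
esxcli network vswitch standard set --vswitch-name vSwitch0 --mtu 1600

# Raise the MTU on the VTEP VMkernel interface (vmk3 is an assumed name)
esxcli network ip interface set --interface-name vmk3 --mtu 1600

# Confirm the interface MTUs
esxcli network ip interface list
```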

HTH


unsichtbare
Expert

Cool! Thanks for the input.

But what if we want or need to go outside VMware's recommendations and use a VLAN (at L3) to route iSCSI traffic across the ToRs?

I do understand what VMware says, but since the VLAN tag is embedded in the VXLAN-encapsulated frame: if, due to external design requirements, we needed to route iSCSI traffic between ToRs (via the Spine) with 802.1Q, would that break NSX?

THX again!

NimishDesai
VMware Employee

Hi,

Not sure I follow your comment. However, just to be clear: NSX does not care about any other traffic you are provisioning via VMkernel interfaces. That said, with an L3 topology (i.e., each ToR is an L3 boundary and VLANs terminate at the ToR) you will need to submit an RPQ request for vMotion and iSCSI. Keep in mind that for iSCSI traffic there are other restrictions imposed by vSphere, and limitations of MPIO, etc. It would be a good idea to consult your local VMware contact on design and support requirements.

thx

unsichtbare
Expert

When using SVIs and adding routes to each ESXi host, it stands to reason that a route will also have to be added on the SAN?

Does anyone know offhand the method to make an EMC VNXe 3300 conform to this design?

THX
