VMware Cloud Community
mlubinski
Expert

multiple racks network design (storage)

Hi,

I am struggling with the proper network design for a future multi-rack storage setup (iSCSI with MPIO).

I would like to hear opinions/advice from people who have implemented such a setup and know the details of their design and its pros/cons/limitations. That would really help.

I attached a picture of what I have in mind, but I am not sure the design is correct... So here are the assumptions:

1. There will be one rack with a storage server (or maybe 2 storage servers acting separately, no HA). The switch will be a managed 1 Gbit switch (probably a Cisco 2960G or some managed HP switch).

2. Each rack with ESXi hosts (we want to start with 2 host racks) has its own managed 1 Gbit "storage" switch (Cisco 2960G or HP).

3. Each host has 2 pNICs connected to the storage switch for iSCSI. MPIO will be configured from the host shell for iSCSI (later on).

4. The storage server (iSCSI target) will have 4x 1 Gbit NICs (n1, n2, n3, n4) configured in the following way: n1 & n2 -> left rack hosts, n3 & n4 -> right rack hosts. n1=172.16.0.200/24, n2=172.16.1.200/24, n3=172.16.2.200/24, n4=172.16.3.200/24.

5. Each ESX host in the left rack would have the following vmk configuration: vmk1: 172.16.0.1->.30, vmk2: 172.16.1.1->.30 (one address per host).

6. Each ESX host in the right rack would have the following vmk configuration: vmk1: 172.16.2.1->.30, vmk2: 172.16.3.1->.30 (see the host-side sketch after this list).

7. On the network switches (in the places marked 2 & 3 on the picture, and of course on the opposite side for the right rack) I think a port channel should be defined (to get 2-3 Gbit of throughput)? If yes, exactly where should it be defined, and can the port channel consist of these 3 ports (between racks) to give enough throughput?

8. If point 7 is correct, then how do I make sure that the iSCSI traffic coming from the ESXi hosts utilizes ALL the port-channel links (and not only 1)? (See the switch-side sketch below.)
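
To make points 3-6 concrete, this is the kind of per-host shell configuration I have in mind; a minimal sketch only, where the iSCSI-A/iSCSI-B portgroup names and the vmhba33 adapter name are assumptions for illustration (each portgroup must have exactly one active uplink for the port binding to work):

    # left-rack host example: one vmkernel port per storage subnet
    # (iSCSI-A/iSCSI-B portgroup names and vmhba33 are assumptions)
    esxcfg-vmknic -a -i 172.16.0.1 -n 255.255.255.0 iSCSI-A
    esxcfg-vmknic -a -i 172.16.1.1 -n 255.255.255.0 iSCSI-B
    # bind both vmkernel ports to the software iSCSI adapter (ESXi 5 syntax;
    # ESX/ESXi 4.x uses "esxcli swiscsi nic add -n vmkX -d vmhba33" instead)
    esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
    esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2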

These are my concerns... I hope someone can offer some good ideas or corrections to this setup. I want to build a setup that is not limited by a 1 Gbit uplink between racks. I guess on the ESX side I don't have to configure anything special to make it work?
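
On the switch side (points 7 & 8), this is roughly what I picture, assuming Cisco 2960G on both ends of each channel (the port numbers are made up):

    ! 3-port channel between a rack switch and the central switch (sketch)
    interface range GigabitEthernet0/25 - 27
     channel-group 1 mode active
    !
    ! global setting, needed on both switches: hash on source+destination
    ! IP so different host<->target IP pairs can use different member links
    port-channel load-balance src-dst-ip

From what I read, any single iSCSI session still rides one member link (1 Gbit max); it is only the mix of vmk/target IP pairs that can fill the other links.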

Thank you in advance for your answers and comments.

Marek

AndreTheGiant
Immortal

What kind of iSCSI storage do you have? Have you checked the vendor's recommendations first?

Some arrays need a dual (isolated) network for iSCSI... others a flat one.

The final switch configuration also depends on this kind of requirement.

Andre

Andrew | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro
mlubinski
Expert

Andre,

I was planning to use StarWind iSCSI. In its MPIO guides this vendor says you only need to configure each MPIO NIC with an IP from a different subnet (like 192.168.0.1 / 192.168.1.1), so I think my points hold: the first 2 NICs on the StarWind box serve the left rack, the other 2 the right rack, and if more racks come I would add a dual-port NIC to StarWind to give it 2 more IPs.
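
If I follow that scheme, every new rack just adds one NIC pair and two subnets on the StarWind side, something like this (the third-rack row is only my extrapolation):

    n1 172.16.0.200/24 + n2 172.16.1.200/24 -> left rack hosts  (vmk1/vmk2)
    n3 172.16.2.200/24 + n4 172.16.3.200/24 -> right rack hosts (vmk1/vmk2)
    n5 172.16.4.200/24 + n6 172.16.5.200/24 -> third rack hosts (future)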

do you have any comments on that?

AndreTheGiant
Immortal

In this case you do not need the central switch (for iSCSI).

You can have half of the storage NICs connected to the left switch and half to the right one.

Andre

mlubinski
Expert

yes, that's true as well, but if you have more than 2 racks, and they are at longer distances (like 20-30 meters), then cabling is a "bitch"... so I want to come up with something nicer :) don't you think it's possible using a "central" storage switch?

[I]If you found this or any other answer useful please consider the use of the Helpful or correct buttons to award points[/I]
AndreTheGiant
Immortal

Of course you can use a central switch... in that case a VLAN partition could be useful.

Andre

mlubinski
Expert

yes, but how about multiple Ethernet links between each rack and the central switch? How should the port channel be defined, and how about the IP design? I read that with a port channel you really need to match the IPs "on both ends", because the load-balancing mechanisms on switches use XOR algorithms... So I was wondering if anyone has done such a thing and could share their thoughts/ideas on it... because otherwise, if no EtherChannel is used, the real limit is the 1 Gbit link between a rack and the central switch...
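
As an example of what I mean, assuming the hash simply XORs the low-order IP bits (how many bits is platform-specific, so this is only illustrative):

    vmk 172.16.0.1 <-> target 172.16.0.200 : 1 XOR 200 = 201 (odd)  -> link 2
    vmk 172.16.0.2 <-> target 172.16.0.200 : 2 XOR 200 = 202 (even) -> link 1

and from what I read, with 3 ports in a channel most Catalysts spread their 8 hash buckets as 3-3-2, so the distribution is never perfectly even anyway...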

anyway, thank you for the answer, Andre.

AndreTheGiant
Immortal

A channel across physical switches is "transparent" to the VMs and to the iSCSI target and initiators.

Check your switch documentation to see which solution is recommended (some vendors call it LAG, others EtherChannel).
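
Once the channel is up, you can verify it on the Cisco side: the first command below shows the bundle state per member port, the second shows which hash (src-dst-ip, src-mac, ...) is active.

    show etherchannel summary
    show etherchannel load-balance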

Andre

mlubinski
Expert

yeah thanks, I know it's transparent, but I was hoping that someone who already has such a thing implemented (and is managing it) could share some good ideas on how to design the IPs for proper load balancing across the ports in an EtherChannel, as calculating the XOR for each ESX host by hand is kinda time-consuming :D
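
A throwaway shell loop could at least do the arithmetic; this assumes the hash is just a last-octet XOR modulo the link count, while a real Catalyst hashes more bits into 8 buckets, so treat it as a rough sanity check only:

    # predict which of 2 links each left-rack host (.1-.5) would hash to
    TARGET=200; LINKS=2
    for HOST in 1 2 3 4 5; do
      echo "host .$HOST -> link $(( ((HOST ^ TARGET) % LINKS) + 1 ))"
    done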
