VMware Cloud Community
J0S3M
Contributor

Brand/model switch for iSCSI storage network

I'm building a small single-rack (but dense and powerful) DC with HPE ProLiant DL servers, MSA 2062 iSCSI storage array SANs, and VMware on top.

Looking for the best value-for-money switches for the iSCSI SAN network, and ToR switches for vSphere management and VM networks.

Any brand/model suggestions (prioritizing quality over price)? Things should run smoothly.

Thanks for reading.

stadi13
Hot Shot

Hi @J0S3M 

I would go with the HPE SN2010M switch for iSCSI traffic. It's the best quality-for-the-money model in the HPE lineup. You can also connect the hosts' vSphere management and VM networks to them. They come as two switches racked side by side in a 1U rack mount kit, but physically they are independent switches.

Regards

Daniel

J0S3M
Contributor

Thanks @stadi13, I actually took a look at the SN2010M; great value. But since we are talking about 16 servers and 4 MSA SANs, I guess I would need a denser model, or perhaps breakout cables.

I would also have to change the SANs to the controller model with SFP+ ports, since I was considering going all Base-T to save some cabling money.

But I will think carefully about this option. Honestly, this is my first project of this kind, and as you surely know, the options are almost infinite.

Thanks again for your help.

stadi13
Hot Shot

Hi @J0S3M 

Yes, you would run out of ports with that many servers. As far as I understand, you plan to connect the hosts via Base-T while the storage stays on SFP+ (as far as I know, the HPE MSA is not available with Base-T ports; the MSA SAN controller accepts both FC and iSCSI SFP+ modules, and you can switch the port mode on the array). Is this correct?

Regards

Daniel

J0S3M
Contributor

No, I'm just digging; I can still change whatever I want, so I can move away from Base-T. The MSA 2062 models R0Q81A and R0Q82A both have 2 controllers, each with 4 SFP+ ports.

In theory I could use 4-to-1 splitter cables like the JG329A to connect the 4 ports of the first controller of each of the 4 SANs 😂😬 to the 4 QSFP+ ports (at 40Gbps) of the first SN2010M switch, and do the same for the second fabric... so the numbers would match perfectly. Then I would use the remaining 18 SFP28 ports of each switch for the hosts... of course I could not add more SANs, and I'd be able to connect up to 18 hosts.

That's just the theory; this would still have to pass a compatibility check.
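
A quick sanity check of that fan-out math (just a sketch in Python; the 18x SFP28 + 4x QSFP28 layout of the SN2010M and the JG329A-style 4x10G breakout are my assumptions here):

# Sketch: does the breakout plan fit one SN2010M per fabric?
# Assumes 4 SANs, one controller per fabric, 4 SFP+ ports per
# controller, and QSFP+ -> 4x SFP+ breakout cables (JG329A-style).
SANS = 4
SFP_PER_CONTROLLER = 4        # SFP+ ports on each MSA controller
QSFP_PORTS = 4                # QSFP28 ports per SN2010M, run at 40Gb
SFP28_PORTS = 18              # SFP28 host-facing ports per SN2010M

storage_ends_needed = SANS * SFP_PER_CONTROLLER   # 16 SFP+ ends
storage_ends_offered = QSFP_PORTS * 4             # 4 breakouts x 4 = 16

print(storage_ends_needed == storage_ends_offered)  # True: exact fit
print(f"hosts per fabric: up to {SFP28_PORTS}")      # 18 hosts max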

stadi13
Hot Shot

Hi @J0S3M 

We always connect the MSAs with two ports per controller for iSCSI, so you are already at 16 ports. I have seen that the MSA 2062 is also available in a Base-T configuration (R7J70A); I was not aware of this.

For your design, the most important point is whether you will use a dedicated iSCSI network with dedicated switches, or combine those ports onto the same ToR switches.

Each MSA will have 2x 1Gb for management plus 2x 10Gb per controller (only 10Gb is available for Ethernet traffic); with 4 MSAs that's 16 iSCSI ports. Per ESX host you will have 2 ports for network & management and 2 ports for iSCSI. This means you need 32 network ports for network & management (excl. iLO and MSA management) and 48 10Gb iSCSI ports.
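
The same tally in Python, if it helps (just a sketch; 16 hosts and 4 MSAs assumed, as above):

# Sketch of the port budget above: 16 hosts, 4 MSAs,
# 2 iSCSI ports per MSA controller (our usual cabling).
HOSTS, MSAS = 16, 4
host_net   = HOSTS * 2            # network & management uplinks
host_iscsi = HOSTS * 2            # iSCSI uplinks
msa_iscsi  = MSAS * 2 * 2         # 2 controllers x 2 ports each

print(host_net)                   # 32 network/mgmt ports (excl. iLO/MSA mgmt)
print(host_iscsi + msa_iscsi)     # 48 x 10Gb iSCSI ports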

Regards

Daniel

J0S3M
Contributor

Thanks again, @stadi13 Daniel.

From what I read, if you assign all your disk groups to a single storage pool, so that one controller handles all the traffic and the other is on standby, I guess 2 ports per controller is fine. But if you use both pools to squeeze all the IOPS out of the SAN (full of SSDs + 10k HDDs), I don't know if 2 ports per controller is enough.
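
My back-of-the-envelope in Python (a sketch only: line-rate 10GbE with no protocol overhead, and the pool throughput is a placeholder to swap for real numbers from the sizing tool):

# Sketch: could 2x 10GbE per controller bottleneck an active pool?
# 10GbE moves ~1.25 GB/s raw; ~1.0 GB/s is a safer planning figure
# after iSCSI/TCP overhead. The pool throughput below is hypothetical.
PORTS = 2
GB_PER_PORT = 1.0                  # conservative usable 10GbE throughput
pool_gb_s = 3.0                    # placeholder SSD-heavy pool estimate

ceiling = PORTS * GB_PER_PORT      # 2.0 GB/s per controller
print(pool_gb_s > ceiling)         # True -> 2 ports would be the limit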

I'm more inclined to use 2 big HPE SN2410M ToR switches (48x SFP28 + 8x QSFP28, 25GbE; great value) and converge everything there, upgrading to 25GbE at the same time. Of course I would need to add some 1/10 Gbps access switches.
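
If it helps, the converged port budget per ToR would look roughly like this (a sketch, assuming each host and each MSA splits its ports evenly across the two switches):

# Sketch: do 16 hosts + 4 MSAs fit on a pair of SN2410Ms (48x SFP28 each)?
# Assumes 1 net/mgmt + 1 iSCSI port per host per switch, and
# 1 port per MSA controller per switch.
HOSTS, MSAS, SFP28_PER_SWITCH = 16, 4, 48

host_ports = HOSTS * 2            # per switch: 1 net/mgmt + 1 iSCSI
msa_ports  = MSAS * 2             # per switch: 1 per controller
used = host_ports + msa_ports

print(used)                       # 40 of 48 SFP28 ports per switch
print(SFP28_PER_SWITCH - used)    # 8 spare for uplinks/access switches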

The thing is, I'm new to DCs of this size, and throwing all the traffic (especially storage) at a single pair of switches bothers me. I'm aware the SN2410Ms are absolute beasts, but I can't tell from any experience of my own.

Another point of concern is security. It feels somewhat risky to me to mix dev admin, vSphere admin, and storage traffic (even VLAN-separated) with external (Internet) traffic on a single physical device. Although I might be wrong, since we are in the HCI era.

stadi13
Hot Shot

Hi @J0S3M 

Yes, regarding IOPS you can do the calculation here and see when you need both controllers: https://ninjaonline.ext.hpe.com

Regarding iSCSI traffic, you should use two separate switches so there is no single point of failure.

Regards

Daniel

J0S3M
Contributor

Thanks @stadi13 

I already used the NinjaOnline tool for the MSA... awesome tool, highly recommended.

Regarding traffic separation: yes, I will use a couple of SN2010Ms just for the storage network.

Regards

PS: Using 1-to-4 QSFP breakout cables for the MSA connections, so the port count is OK.

 
