VMware Cloud Community
marcin8
Contributor

HP NC360T vs NC380T for iSCSI SAN connection (HP 2012i DC)

Do you think there will be a big difference in performance if I use the NC380T rather than the NC360T?

According to "A 'Multivendor Post' to help our mutual iSCSI customers using VMware" I should keep it simple and "In general, use the Software Initiator except where iSCSI boot is specifically required".

Do you have any suggestions about which switches I should use? I was advised to get the HP ProCurve 2510G-24, but I'm not sure. People have mentioned the Dell 52xx series. Has anybody used them (HP/Dell) in a working environment?

Thanks!

4 Replies
BenConrad
Expert

According to KB 1006143 (released 6/2008), some TOE NICs are supported, but the TOE offload feature is not yet implemented.

With that said, I think you should go with (2) 360T's (for redundancy) or just go with (2) NC110T's.

The downside to SW iSCSI is that you are limited to 1Gb/s per target IP address. If your SAN has multiple target IPs you can get > 1Gb/s using multiple LUNs, but you can't control the load balancing. With HW iSCSI HBAs (like QLogic) you spend about $700 per HBA and can then offload all iSCSI processing to the HBA; you can also perform per-LUN load balancing manually.
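To put rough numbers on that, here's a minimal sketch (plain Python, illustrative only), assuming the software initiator drives at most one 1Gb/s path per target IP as described above:

# Rough best-case aggregate throughput for SW iSCSI over 1Gb/s links.
# Assumption (from the discussion above): one 1Gb/s path per target IP,
# so the ceiling is bounded by distinct target IPs and NIC ports.
LINK_GBPS = 1.0

def sw_iscsi_ceiling(num_target_ips, num_nic_ports):
    """Best-case aggregate throughput in Gb/s (ignores protocol overhead)."""
    return min(num_target_ips, num_nic_ports) * LINK_GBPS

print(sw_iscsi_ceiling(num_target_ips=1, num_nic_ports=2))  # 1.0 -> one target IP caps you at 1Gb/s
print(sw_iscsi_ceiling(num_target_ips=2, num_nic_ports=2))  # 2.0 -> two target IPs can use both ports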

As for switches, the 2510 does not have a very large packet buffer (384KB). You should consider switches with larger packet buffers like the HP 2900, which has 13MB of buffer, or pick something in between. The more buffer space you have, the fewer issues you are likely to encounter with dropped packets and flow control.
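If it helps to see why the buffer size matters, here's a rough sketch of how long each switch could absorb a full-rate burst before dropping or asserting flow control (the 384KB and 13MB figures are the ones above; real switches allocate buffers per port or per ASIC, so treat this as back-of-the-envelope arithmetic only):

# Time a buffer can absorb a burst arriving at 1Gb/s line rate.
LINE_RATE_BYTES_PER_SEC = 1e9 / 8  # 1Gb/s in bytes per second

def burst_absorb_ms(buffer_bytes):
    return buffer_bytes / LINE_RATE_BYTES_PER_SEC * 1000

print("2510G (384KB): %.1f ms" % burst_absorb_ms(384 * 1024))       # ~3 ms
print("2900   (13MB): %.1f ms" % burst_absorb_ms(13 * 1024 * 1024))  # ~109 ms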

Ben

marcin8
Contributor

Ben,

Thanks for your reply. I forgot to mention that I'm going to use 2x NC360T. I had a limited budget (I bought a new DL380 G5, the SAN, and switches), which is why I picked those NICs rather than hardware HBAs.

I have 9 x 300GB HDs for that SAN. Do you think it's better to create one big LUN (RAID5) with one target, or to split them into two (for example, 2 x 4-drive RAID5 + 1 spare)? Based on what you wrote, two targets would allow me to get >1Gb/s but with no load balancing. Could I use two ports for load balancing if I create only one big LUN? Which is the better choice in terms of performance? I'm also mindful of VMware's 2TB limitation.

Thank you for your help,

Marcin.

BenConrad
Expert

Regarding the targets, this is what I was describing:

Create LUN1, the SAN puts that LUN on its IP 10.1.1.100

Create LUN2, the SAN puts that LUN on its IP 10.1.1.101

Create LUN3, the SAN puts that LUN on its IP 10.1.1.100

Create LUN4, the SAN puts that LUN on its IP 10.1.1.101

If your SAN is able to spread the LUNs out over its own IP addresses, the S/W iSCSI initiator will be able to load balance. If the SAN puts all 4 LUNs on 10.1.1.100, you will be stuck using only one link at 1Gb/s.
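A trivial sketch of that mapping (the LUN names and IPs are just the hypothetical ones from the example above):

# Group LUNs by the target IP the SAN assigned them to; each distinct
# target IP is a separate 1Gb/s-capable path for the SW initiator.
from collections import defaultdict

lun_to_ip = {
    "LUN1": "10.1.1.100",
    "LUN2": "10.1.1.101",
    "LUN3": "10.1.1.100",
    "LUN4": "10.1.1.101",
}

luns_per_ip = defaultdict(list)
for lun, ip in lun_to_ip.items():
    luns_per_ip[ip].append(lun)

for ip, luns in sorted(luns_per_ip.items()):
    print(ip, sorted(luns))
# Two target IPs in use -> up to ~2Gb/s aggregate across both links.
# If every LUN landed on 10.1.1.100 -> one link, ~1Gb/s total.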

The RAID layout is up to you. Multiple LUNs spread across multiple spindles give each LUN the capability to perform more I/O. But if all the LUNs are constantly busy, you have to decide whether you want LUNs spread across larger or smaller numbers of disks. As for storage allocation, 500-800GB VMFS volumes are fairly standard.
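For your 9 x 300GB drives specifically, the raw capacity math looks roughly like this (a sketch only; whether the single big LUN fits under the 2TB limit you mentioned depends on decimal vs binary GB and the exact per-LUN limit of your ESX version):

# Usable capacity of the two RAID5 layouts discussed above (9 x 300GB drives).
# RAID5 usable capacity = (number of drives - 1) * drive size.
DRIVE_GB = 300

def raid5_usable_gb(drives):
    return (drives - 1) * DRIVE_GB

# Option A: one 8-drive RAID5 + 1 hot spare -> a single large LUN / single target
print("8-drive RAID5:", raid5_usable_gb(8), "GB")          # 2100 GB, right around the ~2TB limit

# Option B: two 4-drive RAID5 sets + 1 hot spare -> two LUNs / two possible targets
print("2 x 4-drive RAID5:", raid5_usable_gb(4), "GB each")  # 2 x 900 GB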

Ben

marcin8
Contributor

Thanks Ben.
