tyanni
Contributor

Moving From iSCSI to FC - Experiences?

Jump to solution

We currently have a rather old iSCSI SAN (NetApp FAS270) that we are looking to move off of. I now have to decide whether to move to a newer iSCSI SAN or migrate to a Fibre Channel one. Has anyone here done a similar move, and if so, did you notice a big difference with the new FC SAN? Did you see better performance? What were your reasons for moving to FC?

Please note that one of the big reasons we are considering this is that we are looking into clustering our SQL servers, which is not yet supported with iSCSI. Since we have to purchase a new SAN anyway, it potentially makes sense to go with FC, since it has a larger feature set from a VMware perspective. Additionally, performance with iSCSI has been rather poor, although if we stuck with iSCSI we would do a lot of performance tuning and would probably move to 10GbE.

Thanks!

Tim

0 Kudos
1 Solution

Accepted Solutions
kac2
Expert

I agree with a lot of these guys. I would stick with an Ethernet-based solution; IMO, it's the future. 10GbE will make your life a whole lot easier as well. With FC there is a lot of overhead cost: HBAs, switches, rip-and-replace upgrades, etc. Seeing as you have a decent budget, it would be a smart move to make the jump to 10GbE and start cleaning up that mess of cables.

View solution in original post

0 Kudos
8 Replies
AndreTheGiant
Immortal

Storage performance depends on a lot of things, so it's not true that FC is always better than iSCSI.

With vSphere the iSCSI stack works well, even with the software initiator.
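For reference, enabling the vSphere software iSCSI initiator takes only a few commands. A rough sketch, assuming vSphere 5-style esxcli syntax; the adapter name (vmhba33) and the target portal address are placeholders for your own environment:

```shell
# Enable the software iSCSI initiator on the ESXi host
esxcli iscsi software set --enabled=true

# Point the initiator at the array's send-targets discovery portal
# (vmhba33 and 192.168.1.10 are example values - substitute your own)
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.1.10:3260

# Rescan the adapter so any new LUNs show up
esxcli storage core adapter rescan --adapter=vmhba33
```

For decent performance you would also want dedicated VMkernel ports bound to the iSCSI adapter for multipathing, but that setup varies per environment.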

So probably the better questions are: which features do you want, what kind of workload do you have, and what's your budget? :)

In some cases iSCSI could be cheaper than FC.

For the clustering question, it's true that VMware's documentation supports only the FC case, but you can use a software iSCSI initiator inside the guest nodes. A cluster built on iSCSI this way is supported by Microsoft, and from VMware's point of view you simply have two VMs with an additional iSCSI network.
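To illustrate the in-guest approach: each Windows cluster node connects to the shared LUN with the Microsoft software initiator over a dedicated iSCSI network. A hedged sketch using the built-in `iscsicli` tool; the portal address and the target IQN below are placeholders, not real values for your array:

```shell
REM Start the Microsoft iSCSI initiator service inside the guest
sc start msiscsi

REM Register the array's target portal (example address)
iscsicli QAddTargetPortal 192.168.1.10

REM Log in to the target backing the shared quorum/data LUN (example IQN)
iscsicli QLoginTarget iqn.1992-08.com.netapp:sn.12345

REM Verify the session is up
iscsicli SessionList
```

You would repeat this on each cluster node and make the login persistent, then present the disk to the cluster as usual; the VMs themselves just need an extra vNIC on the iSCSI VLAN.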

Andre

Andre | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro
0 Kudos
tyanni
Contributor

As always, our budget isn't big. But I expect we are looking at around 100k for the SAN plus relevant FC switches if necessary. If we went with iSCSI, we could probably get away with spending a lot less.

Our workload/environment is as follows - 3 physical hosts, approx. 100 VMs. Several highly utilized web servers, along with a few highly utilized database servers. Looking at doing virtual desktops and applications in the near future also.

Tim

0 Kudos
AndreTheGiant
Immortal

The environment is quite common.

A good iSCSI solution is probably enough (have you also looked at the DataCore solution?).

In a similar case (4 nodes and 100 VMs) I've moved from an FC solution (CX300) to an iSCSI solution (EqualLogic PS6000XV + PS4000E) with no performance issues.

I suppose the web servers have a high load, but not a high load at the storage level.

For the DBs, you have to see whether a good number of 15k SAS disks in a RAID 1+0 configuration would be enough...

For VDI, a storage system with a good cache could be very useful. Deduplication could be very important if you plan to use only View Enterprise (without Composer).

Are you also considering a disaster recovery site with storage replication?

Andre

Andre | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro
raadek
Enthusiast

Hi,

Bear in mind that the NetApp FAS270 is a really old and really slow box! ;) So, again, I don't believe your dilemma is really between iSCSI and FC.

You should see a vast improvement if you move to e.g. a FAS2040 with 15k SAS drives, and this should fit easily into your budget. One benefit of sticking with NetApp is that you can: a) fairly easily migrate LUNs from the old box to the new one (e.g. using temporary SnapMirror licences), b) leave the FC vs. iSCSI decision for later, since you can connect to the same LUN with either protocol, and c) explore NFS as another interesting connectivity option.

Regards,

Radek

kac2
Expert

I agree with a lot of these guys. I would stick with an Ethernet-based solution; IMO, it's the future. 10GbE will make your life a whole lot easier as well. With FC there is a lot of overhead cost: HBAs, switches, rip-and-replace upgrades, etc. Seeing as you have a decent budget, it would be a smart move to make the jump to 10GbE and start cleaning up that mess of cables.


0 Kudos
tyanni
Contributor

Okay, I think I screwed up the point assignment, but hopefully everyone can forgive me :) First time really using this. Anyway, thanks for all of the help - we will probably stick with iSCSI and go with a different clustering solution for SQL.

0 Kudos
kac2
Expert

Instead of clustering, you can always go with Enterprise Plus licensing and use Fault Tolerance. Just a suggestion.

Kendrick Coleman

www.kendrickcoleman.com

twitter: @KendrickColeman

0 Kudos
raadek
Enthusiast

OK, I actually forgot about your plan to cluster SQL - good point.

It's an interesting story though - FC is the only supported protocol, yet if you browse Communities you can actually find a number of people saying that they run VMs in Microsoft Clusters over iSCSI & they have no issues (other than not running a formally supported config).

On top of that, there's the age-old argument - why bother with guest-level clustering when you already have HA? (OK, OK, I know - the need for frequent patching in the most critical environments can justify it.)

Regards,

Radek

0 Kudos