VMware Cloud Community
vmproteau
Enthusiast

iSCSI or Fibre Channel

I understand the basics of this argument and, although it is as much a religious debate as anything else, I'm looking for guidance on a new environment. Our environment has always been Fibre Channel and I have limited experience with iSCSI. I'm designing a new environment and I think the nature of the proposed migration will dictate NOT changing from what we know and use successfully.

That being said, I am trying to envision what an iSCSI solution might look like. In our case it would be an HP (LeftHand) P4000. Here are some environment estimates. I realize determinations can't be made solely on this information, but I wonder if any of these factors makes one option better or rules one out.

  • 20-30 hosts
  • 1,000-2,000 VMs
  • 200 TB storage
  • Workload and application profiles generally unknown
  • Multi-site SAN replication, potentially SRM
  • Possible Cisco Nexus
  • Possible VMware View VDI
  • Possible vCloud Director and vCloud Service Request Manager
  • There will also be a physical server presence for high-end DB servers, etc.

In all likelihood we would have both at some point. Then production server workloads could be moved over and tested thoroughly to make these determinations. I just can't see a complete forklift move to new technologies in either direction.

0 Kudos
7 Replies
mlubinski
Expert

I think iSCSI would work very well for your plan. As far as I know, the P4000 already has VAAI support, so you could easily place 100 VMs on one LUN (thus reducing management overhead), meaning you would end up with around 20 datastores :)
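The datastore count above is just ceiling division; a quick sketch, where the 2,000-VM figure is the upper estimate from the original post and ~100 VMs per VAAI-enabled LUN is this reply's assumption:

```python
# Rough datastore count, assuming VAAI hardware-assisted locking makes
# ~100 VMs per LUN practical (vs. a much lower density without it).
vms = 2000                 # upper estimate from the original post
vms_per_datastore = 100    # assumed safe density with VAAI

datastores = -(-vms // vms_per_datastore)  # ceiling division
print(datastores)  # 20
```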

Would you have the hosts placed in one rack or split between multiple racks? Do you plan to use a 1 Gbit or 10 Gbit network?

[I]If you found this or any other answer useful please consider the use of the Helpful or correct buttons to award points[/I]
0 Kudos
idle-jam
Immortal

With a 200 TB requirement and so many VMs, I believe FC is the way to go, as 10 GbE might not be cheap and 1 GbE iSCSI is too little for you.

0 Kudos
AndreTheGiant
Immortal

Your numbers are very high... (especially 1000-2000 VMs)

Do not use a "low enterprise" storage array... or you will have performance issues.

In your case you need high-end enterprise storage (for example Compellent, with a lot of disks, and I also suggest some SSDs for VDI) or a lot of mid-size arrays (like EqualLogic).

But, IMHO, I suggest talking with at least two different vendors (or consultants) and asking each for a complete project proposal.

Andre

Andrew | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro
0 Kudos
mlubinski
Expert

I can't agree with that. The cost of FC infrastructure is about the same as 10 GbE, and if he plans his network wisely (with MPIO) it should run very fast :) even for 2k VMs. Of course, I am assuming the LeftHand will have multiple 1 Gbit interfaces (and not only one).

[I]If you found this or any other answer useful please consider the use of the Helpful or correct buttons to award points[/I]
0 Kudos
bulletprooffool
Champion

I'd definitely go FC here - unless you have 10 GbE networks for the iSCSI.

I'd also consider my consolidation ratios - you are looking at up to 100 VMs per host (on average) - you're either going to need VERY powerful hosts or low-spec VMs.

In addition - 200 TB of cross-site replication is not going to be easy to manage (if there is a high rate of change).

Sounds like an awesome project to be involved in though!

One day I will virtualise myself . . .
0 Kudos
mlubinski
Expert

Why would you go for FC? I mean, what is the real difference? Most FC implementations are based on 2 Gbit FC HBAs, so basically you have "only" double the throughput of 1 Gbit iSCSI, and if you take 4x 1 Gbit in MPIO, then you beat the FC storage at far lower cost (especially since he said he uses the LeftHand P4000). I am always curious why I should use FC storage instead of iSCSI/NFS if I don't saturate my links. Do you have other arguments that point to FC storage (other than "you must have FC because it's enterprise" - that's bullshit :) )?

I would say: if your 1 Gbit links to the storage system are NOT saturated beyond 70% (~600-700 Mbit), then I would never go with FC.
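The comparison and the 70% rule of thumb above can be sketched numerically (illustrative figures only, assuming 4x 1 Gbit MPIO paths against a single 2 Gbit FC HBA):

```python
# Back-of-the-envelope link comparison; all figures are illustrative.
fc_gbit = 2                        # common 2 Gbit FC HBA of the era
iscsi_paths, path_gbit = 4, 1      # 4 x 1 Gbit NICs aggregated via MPIO

iscsi_aggregate_gbit = iscsi_paths * path_gbit   # 4 Gbit in total
saturation_mbit = 0.70 * path_gbit * 1000        # ~700 Mbit per-link ceiling

print(iscsi_aggregate_gbit > fc_gbit)  # True: aggregate beats the 2 Gbit HBA
print(saturation_mbit)                 # the "time to reconsider" threshold
```

The aggregate only helps if the multipathing policy actually spreads I/O across all four paths, which is the "plans his network wisely" caveat above.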

[I]If you found this or any other answer useful please consider the use of the Helpful or correct buttons to award points[/I]
0 Kudos
vmproteau
Enthusiast

And so ends the June 2nd, 2011 iSCSI-FC Peace Summit. Actually, this was more civilized than many of these discussions, although there were reports of troops amassing at lubinski's borders :) These were all helpful, and given the proper amount of time we would want to give iSCSI a thorough look, but due to time constraints and a mandate to "do no harm" to our existing tenants, we'll likely go with what we know, which is Fibre Channel.

We're an HP shop so it looks like it will be 3PAR. As far as VMs per host, bulletprooffo… you're right, and the high-end VM estimates may require additional hosts. Hosts will probably be ProLiant DL380 G7 (dual hex-core) with 160-192GB of memory.
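The consolidation concern raised earlier is easy to quantify from the original estimates (192 GB is the high end of the memory spec quoted above):

```python
# Worst-case consolidation ratio from the figures in this thread.
hosts_min = 20       # fewest hosts proposed
vms_max = 2000       # most VMs proposed
host_ram_gb = 192    # high end of the DL380 G7 memory spec

vms_per_host = vms_max / hosts_min          # worst-case density
ram_per_vm_gb = host_ram_gb / vms_per_host  # before any memory overcommit
print(vms_per_host, ram_per_vm_gb)  # 100.0 1.92
```

Under 2 GB per VM before overcommit supports the point above: either the VM count estimate is high, or more (or larger) hosts will be needed.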

I would love to bring in some iSCSI at some point because I think there are use cases for both. My layman's opinion is that iSCSI has caught up to the point where a "properly implemented" 10 GbE iSCSI solution could handle the majority of workloads from a performance perspective. It is still true that iSCSI lags slightly behind FC/FCP, mainly due to the overhead required to encapsulate SCSI commands within the general-purpose TCP/IP networking protocol, so for extremely high transactional I/O, FC still has the edge.
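That encapsulation overhead can be roughly estimated per Ethernet frame. A hypothetical sketch, assuming standard header sizes (IP 20 bytes, TCP 20 bytes, iSCSI basic header segment 48 bytes, plus Ethernet framing) and ignoring optional digests and TCP options:

```python
# Hypothetical wire efficiency of iSCSI data payload per Ethernet frame.
def iscsi_efficiency(mtu):
    headers = 20 + 20 + 48           # IP + TCP + iSCSI BHS, in bytes
    payload = mtu - headers          # SCSI data carried in the frame
    # Ethernet header (14) + FCS (4) + preamble (8) + inter-frame gap (12)
    wire = mtu + 14 + 4 + 8 + 12
    return payload / wire

print(round(iscsi_efficiency(1500), 3))  # standard frames: 0.918
print(round(iscsi_efficiency(9000), 3))  # jumbo frames:    0.986
```

Jumbo frames narrow the gap considerably, which is one reason 10 GbE iSCSI deployments typically enable a 9000-byte MTU end to end.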

I've said it before, but I've yet to find a better user community forum than VMware's. Thanks again for all the helpful comments.

0 Kudos