hillda01
Enthusiast

FC or iSCSI???? - I just can't decide.....

Hi,

I currently use ESX in conjunction with local SCSI storage and Direct Attached Storage.

We are looking to grow our ESX environment and pretty much go completely virtual, and I'm at the point of choosing either FC or iSCSI - I really can't decide...

We are considering going Dell/EqualLogic iSCSI or Dell FC, and also looking at HP iSCSI or HP FC...

I'm really concerned about the performance of iSCSI versus FC - I've never dealt with either, but I do know FC to be faster...

We will be virtualising our Citrix servers, which host roughly 30 to 40 users on each box - we have about 8 Citrix servers, an Exchange server that hosts around 900 mailboxes, quite a few SQL servers, and a number of file/print servers. We will be using ESX 3.5...

Hope someone can point me in the right direction...

I'm in contact with Dell and they are going to get a VMware architect to contact me, but I wanted independent advice on it...

Cheers

Dave

dmaster
VMware Employee

For this kind of configuration I would suggest using FC.

hillda01
Enthusiast

What are your reasons for recommending FC over iSCSI?

vheff
Enthusiast

Hi Dave,

I think the choice between FC or iSCSI depends on the following factors:

1) Cost

2) Skills / expertise (remember someone has to support it)

3) Performance

If cost wasn't an issue and I was simply given a choice, then I would go Fibre Channel. Why...? Speed, pure and simple. I recently implemented a new HP SAN that supports 4Gb FC, so if you are likely to be comparing performance (especially when virtualising Citrix) then this might be your best option. I currently use Dell/EMC, by the way, but also HP (MSA).

I mention skills because some organisations seem to adapt better to iSCSI if they already have skilled Cisco/networking people, for example. In most cases, introducing FC will require fibre patch panels and new fibre switches... but for me that is the fun bit :)

Let me know how you get on!

Ray

dmaster
VMware Employee

Mostly performance - the FC protocol is designed for heavy loads.

ESX now also supports 10Gbit iSCSI, and FC is currently at 4Gbit, but most storage experts I've met are convinced that FC will soon have faster cards available, and that the overall performance of FC is the best.

Also, most FC storage arrays have more cache memory available than iSCSI arrays.

It's better to put small environments or low-performance test VMs on an iSCSI target, or just use it for storing ISO images or backups.

Texiwill
Leadership

Hello,

Actually, FC is now at 8Gb/s. Also remember that with IP-based protocols there is roughly 30% overhead, so at most you can get about 700Mb/s out of a 1Gb link. The same holds true for 10G, so at most you may get about 7Gb/s. There is no such overhead for FC. If you can afford 10G, then performance-wise I think 8Gb FC vs 10Gb iSCSI is a wash - but I have no hardware on which to test that premise.

Performance, as always, will depend on load, configuration, and the number of spindles available to each. Since most people do not have 10G pNICs or 8Gb FC, the practical comparison is FC against a 1G pNIC - and FC is still faster.
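That overhead arithmetic can be sketched as a quick calculation (a rough back-of-the-envelope; the ~30% overhead is the approximation quoted above, not a measured figure):

```python
# Effective usable bandwidth after IP/TCP protocol overhead.
# The ~30% overhead figure is the rough approximation quoted above.

def effective_gbps(link_gbps, overhead=0.30):
    """Usable bandwidth in Gb/s after protocol overhead."""
    return link_gbps * (1 - overhead)

# 1Gb iSCSI link yields roughly 0.7 Gb/s usable; a 10Gb link roughly 7 Gb/s.
for link in (1, 10):
    print(f"{link}Gb iSCSI link: ~{effective_gbps(link):.1f} Gb/s usable")
```

By the same logic, 8Gb FC with no such overhead and 10Gb iSCSI minus overhead land in roughly the same place, which is the "wash" described above.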


Best regards,

Edward L. Haletky

VMware Communities User Moderator

====

Author of the book 'VMware ESX Server in the Enterprise: Planning and Securing Virtualization Servers', Copyright 2008 Pearson Education.

CIO Virtualization Blog: http://www.cio.com/blog/index/topic/168354

As well as the Virtualization Wiki at http://www.astroarch.com/wiki/index.php/Virtualization

williambishop
Expert

Enterprise class, I always recommend FC. I know plenty of people who spread the joy of iSCSI, but I've never seen it faster. On the other hand, FC does require in-house talent, and the devices aren't exactly cheap. But if absolute performance is your goal: FC.

Ita feri ut se mori sentiat
hillda01
Enthusiast

Hi... Thanks all for your responses...

Since everyone so far has said FC, I think I'm going to go FC as well... I had concerns about the speed of iSCSI and didn't want to end up with an expensive coffee table if I went that way.

Ray,

What FC kit do you have specifically? I see HP are promoting their MSA2000FC device for a reasonable price...

I agree with you that the FC bit is fun... I don't currently have any experience with FC but hope to learn fast :)

Nick_F
Enthusiast

I'd second the FC vote too, but more due to simplicity - I just find it much easier to deal with than iSCSI, even though I have 10 years of IP experience and only a year of FC.

In terms of performance, I'd say that will come down more to how you organise the SAN disks (e.g. RAID type, number of spindles, number of VMs per LUN, etc.). The Gb rating is fairly meaningless in most scenarios IMO, as you usually hit response-time and contention issues rather than a bandwidth bottleneck.
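The point about spindles and RAID type can be illustrated with a rough sizing sketch (the per-spindle IOPS and RAID write-penalty figures below are common planning rules of thumb, not vendor specs):

```python
# Rough front-end IOPS estimate for a disk group, showing why spindle
# count and RAID type usually matter more than the link's Gb rating.
# Write-penalty values are the usual planning rules of thumb.

RAID_WRITE_PENALTY = {"RAID10": 2, "RAID5": 4, "RAID6": 6}

def usable_iops(spindles, per_spindle_iops, raid, read_pct):
    """Front-end IOPS a disk group can sustain for a given read/write mix."""
    penalty = RAID_WRITE_PENALTY[raid]
    write_pct = 1 - read_pct
    # Each front-end write costs `penalty` back-end IOs; reads cost 1.
    backend_per_frontend = read_pct + write_pct * penalty
    return spindles * per_spindle_iops / backend_per_frontend

# 8 x 15k spindles (~180 IOPS each is a typical planning number),
# RAID5, at a 70/30 read/write mix:
print(round(usable_iops(8, 180, "RAID5", 0.7)))
```

Doubling the spindle count, or moving from RAID5 to RAID10 for a write-heavy LUN, changes this number far more than moving from a 2Gb to a 4Gb fabric would.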

Dell have an interesting doc on iSCSI vs FC performance for ESX, though, which basically concludes that iSCSI performs as well and recommends it.

hillda01
Enthusiast

Thanks again for the responses...

I've got a conference call with Dell this afternoon to discuss iSCSI and FC....

I still think FC will be best, as I want the best performance...

Cheers

Dave

Nick_F
Enthusiast

Attached is the doc I mentioned, in case you've not already seen it.

hillda01
Enthusiast

Thanks for that doc - it makes for interesting reading....

It says that iSCSI is not miles apart from FC in performance...

Now I'm really stumped!

Nick_F
Enthusiast

Yep, I wouldn't make your decision based on performance (unless you were doing something that demanded high bandwidth, such as video streaming). We already had the FC infrastructure in place, so it wasn't a whole lot more expensive to use FC than iSCSI, and I was more comfortable with FC. If you don't have FC in place, then given the cost of switches etc. I'm not sure you could justify using it over iSCSI.

williambishop
Expert

But FC isn't a lot more expensive than a dedicated iSCSI network either. The switches are comparable, the HBAs are comparable... Unless you're using software initiators, you can build an FC network for an equitable amount of money. To me, it comes down to personnel. FC is its own beast, and licensing for some FC gear can get steep. But you can do it if you shop around.

Ita feri ut se mori sentiat
Chamon
Commander

If you are still thinking about iSCSI, take a look at LeftHand Networks' hardware/software, just as a comparison to the big guys.

hillda01
Enthusiast

Our current server infrastructure is roughly made up of the following:

Exchange 2003 with 900 mailboxes, the largest being 1GB

18 Citrix servers

15 SQL servers

10 to 15 file and print servers...

MalcO
Contributor

You don't need dedicated switches for iSCSI - configure VLANs to separate the traffic.
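As a sketch, putting iSCSI traffic on its own VLAN on ESX 3.x could look something like this from the service console (the vSwitch name, uplink, VLAN ID and addresses are all illustrative, not from the post):

```shell
# Create a vSwitch for storage traffic and link a dedicated uplink
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1

# Add a VMkernel port group for iSCSI and tag it with VLAN 20
esxcfg-vswitch -A iSCSI vSwitch1
esxcfg-vswitch -v 20 -p iSCSI vSwitch1

# Give the VMkernel port an address on the storage VLAN
esxcfg-vmknic -a -i 10.0.20.11 -n 255.255.255.0 iSCSI
```

The matching physical switch port would then be trunked to carry that VLAN, keeping storage traffic logically separate without dedicated hardware.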

jeremypage
Enthusiast

It always amazes me how many people are ready to shell out extra cash for FC before really thinking it through. For the VAST majority of ESX implementations you're spindle-bound for I/O... so the pipe you push that I/O through makes much less of a difference. Before you start worrying about 1Gb versus 2/4/8Gb, you may want to look at the amount of I/O you're generating on your physical servers first.
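To put a number on that, converting a measured IOPS load into bandwidth shows how little of the pipe a typical VM workload actually uses (the figures here are illustrative, not from the post):

```python
# Bandwidth generated by a given IOPS load at a given block size -
# a quick way to sanity-check whether the link is really the bottleneck.

def load_gbps(iops, block_kb):
    """Gb/s pushed by `iops` operations of `block_kb` KB each."""
    return iops * block_kb * 1024 * 8 / 1e9

# Even a hefty 5,000 IOPS at a typical 8 KB block size is only ~0.33 Gb/s,
# well inside a single 1Gb link - long before 2/4/8Gb FC would matter.
print(f"{load_gbps(5000, 8):.2f} Gb/s")
```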

FWIW, we're running just under 400 VMs on NFS over 10GigE; it works like a charm and makes BC/DR a breeze, not to mention I can clone a running server in a matter of seconds. Good stuff.

williambishop
Expert

A lot of environments use a dedicated set of switches, including ours. With 2,100 VMs and counting, but 30,000 devices using IP, I'm pretty certain I want a separate setup instead of vying for cycles. A lot of networks will have QoS enabled, sure, so that VoIP and the like are guaranteed bandwidth, but almost every medium and large environment keeps separate data/storage networks.

Ita feri ut se mori sentiat
williambishop
Expert

While I don't doubt your statement, in larger or heavier-I/O environments spindle count can be really high, and let's not forget that most decent storage systems have a lot of cache (I have 100GB of cache on one array), which means you can FAR exceed what the spindles alone would deliver.

FC is not ridiculously expensive. You can get a switch for 10 grand or less, and HBAs for about a grand. TOE iSCSI HBAs aren't cheap, and a decent Cisco Ethernet switch plus licensing isn't going to be much cheaper.

On the other hand, I believe everything will move to Ethernet sooner or later, so unless performance is the absolute goal I would recommend going iSCSI... but I would pretty much never share the same switch fabric between my desktops and my iSCSI implementation, for the reasons stated above.

Ita feri ut se mori sentiat