VMware Cloud Community
TahoeTech
Contributor

Openfiler w/ iSCSI? Any performance hit compared to a commercial iSCSI SAN?

Experimenting with VMware ESXi and want to set up a SAN to store all the virtual machines. Are there any performance issues with Openfiler compared to a commercial iSCSI SAN? Aside from HCL-supported hardware, are there any speed issues? I plan to use hardware failover and load balancing, but need a SAN to accomplish this. I'm on a budget, so just wondering: all else aside, will an Openfiler solution be any slower than a commercial iSCSI SAN...

Accepted solution: nick_couchman's reply below.

26 Replies
kooltechies
Expert

Hi,

The main technical reason is that Openfiler is a software implementation that doesn't have hardware-based targets, while a commercial iSCSI SAN will have hardware-based targets. Moreover, Openfiler is good for training purposes and should not be used in production environments.

Thanks,

Samir

Blog : http://thinkingloudoncloud.com || Twitter : @kooltechies || P.S : If you think that the answer is correct/helpful please consider rewarding points.
TahoeTech
Contributor

What if I were to use iSCSI HBAs in the Openfiler appliance?

I am mainly concerned about the speed of the VMs running off of an iSCSI SAN... We do have an older SCSI U160 SAN, but I think iSCSI would be faster than U160?

Will VMs run fast enough off of iSCSI storage?

nick_couchman
Immortal

Well, maybe...

I'd agree with this if you were running other stuff on the same machine as Openfiler, but if you've dedicated a machine to serving out iSCSI volumes via Openfiler, then you should get pretty good performance - maybe even comparable to "hardware-based" iSCSI implementations. I have a PE2800 with a single processor and 4GB of RAM that is my Openfiler iSCSI SAN/NAS head. The performance is just fine - may not be quite as good as some of the commercial solutions, but performs very well.

Furthermore, a lot of the commercial solutions are software-based, not hardware-based. The small-time, SOHO ones are "firmware"-based, but the iSCSI is still usually done in software - a lot of times via a Linux daemon on the box. Sure, that's the only thing it's doing, but just because it's software doesn't mean it's going to perform poorly.

I use Openfiler in a production environment - no problems. You can also purchase support for Openfiler, which makes it even more production-viable. Don't knock it just because it's software, just because it's open source, or just because it's Linux. There are plenty of people out there using Microsoft Windows Storage Server in production - why not Openfiler?
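
To make the "Linux daemon" point concrete: on Openfiler that daemon is the iSCSI Enterprise Target (ietd), and exporting a LUN ultimately boils down to a few lines of configuration. A minimal sketch - the IQN and volume path here are made up for illustration, and the web UI normally writes this file for you:

    # /etc/ietd.conf - one target exposing one logical volume as LUN 0
    Target iqn.2009-01.local.openfiler:vmware.lun0
            # blockio passes I/O straight to the block device instead of going through the page cache
            Lun 0 Path=/dev/vg_data/lv_vmfs,Type=blockio

ESX then discovers that IQN with its software (or hardware) iSCSI initiator and formats the LUN with VMFS.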

TahoeTech
Contributor

Nick, thanks for the helpful response... What kind of NICs do you have in your Openfiler system? Are they iSCSI HBAs or regular Gigabit NICs? I am looking at using a dedicated machine for Openfiler.

What kind of switch should I use in an iSCSI environment? I was looking at the HP 1800 series (24G). It is a Gigabit switch and supports jumbo frames... I saw a post back around Jan 2, 2009 that said it is better to use flow control than jumbo frames? Is that accurate?

Can anyone recommend a good 24-port Gigabit switch in the $200-300 range? It does NOT need to have GBICs.

Thanks again...

nick_couchman
Immortal

iSCSI HBAs in the Openfiler machine will definitely help - if you can get the iSCSI Offload Engine and/or TCP Offload Engine running under Linux. I believe QLogic has Linux drivers for their iSCSI HBAs that allow them to be used as targets and make use of the IOE/TOE capability, but that's something you'll have to do some research on. Generally speaking, the Linux community has not been receptive to the idea of TOE - you can read all about it in a few places out there on the Internet.

As I said in my other post, I use iSCSI on Openfiler for several of my production volumes, one of them being an ESX datastore. I have a bonded interface that has two GigE controllers as members for the iSCSI traffic, and another one for management and NAS (NFS) traffic. Assuming your Openfiler machine has enough RAM to do some decent caching and buffering, you ought to be okay. You may have to look around for kernel and sysctl parameter tweaks that allow things to function a little faster, but it should work. You'll probably have to do your own tests to determine if it's "fast enough."
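
The sysctl tweaks in question are mostly TCP socket buffer sizes. A rough sketch of the sort of settings people experiment with - the values are illustrative starting points, not tuned recommendations:

    # /etc/sysctl.conf - larger socket buffers for GigE iSCSI/NFS traffic (illustrative values)
    net.core.rmem_max = 16777216
    net.core.wmem_max = 16777216
    # min / default / max TCP buffer sizes, in bytes
    net.ipv4.tcp_rmem = 4096 87380 16777216
    net.ipv4.tcp_wmem = 4096 65536 16777216

Apply with "sysctl -p" and benchmark before and after; on a single GigE link the gains are usually modest.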

As far as U160 goes, that would probably be okay, assuming the correct RAID level. With U160 I would try to do a RAID10 or something similar on that - you're going to want to make sure the drive writes occur rather rapidly.

nick_couchman
Immortal

I use Intel Pro/1000 Server NICs in my Openfiler machine - no iSCSI Engine or TOE, just standard GigE NICs.

Our switch is a Foundry Networks EIF48G switch - not the best one Foundry ever made, but it works just fine, supports jumbo frames, etc. I don't do any flow control on that switch or anything like that - pretty standard config (I haven't spent a lot of time optimizing for iSCSI traffic).

I think you're going to have a hard time finding a decent GigE switch in that price range. Dell sells some pretty good ones, but they're not up to par with the Cisco, Foundry, Extreme, etc., switches.

Jackobli
Virtuoso

As I said in my other post, I use iSCSI on Openfiler for several of my production volumes, one of them being an ESX datastore. I have a bonded interface that has two GigE controllers as members for the iSCSI traffic

Nick, sorry for dropping in, but did you really achieve more than one Gbit on iSCSI by bonding?

I am running into issues while bonding between one ESX host and one fileserver. All I ever read is that network bonding for aggregating bandwidth does not work in a one-to-one situation. All my tests (using iperf) confirmed that.

I am using Intel Gbit NICs too and an HP ProCurve 1800-24.

TahoeTech
Contributor

I was online, ready to order the HP 1800, when I saw your post... Is the 1800-24G a decent "budget" quality switch? I also saw Dell has a 2724 that can be had for $150-280 on eBay...

You know, I just thought of something else too... You mentioned ESX. I will be running ESXi --- can I run jumbo frames in ESXi? It's turned on in the SAN and can be turned on in the switch - BUT where do you turn it on in the ESXi host?

dilidolo
Enthusiast

I heard the 1800 has very small buffers, which makes it no good for iSCSI. I have a 2724; the only thing missing is 802.3ad, but it supports static mode.

What's more important here are the NICs and spindles in Openfiler. ESX is not bandwidth-hungry; IO/s is a lot more important.

Jackobli
Virtuoso

I am online ready to order the HP 1800 when I saw your post... Is the 1800 24G switch a decent "budget" quality switch? I also saw DELL has a 2724 that can be had from $150-280 on ebay...

I tried a Netgear first, but had no luck with it. I never got bonding and VLANs working, and it had a noisy fan too.

The HP has no fan (turned off by HP). Configuration is done through a browser, and it works not only with IE 6 but also Firefox (and other browsers). I have heard pros (no noise, easy admin) and cons (failures), but HP does give lifetime support (or had this option).

You know i just thought of something else too... You mentioned ESX. I will be running ESXi --- Can I run JUMBO FRAMES in ESXi? Its turned on in the SAN and can be turned on in the switch - BUT where do you turn it on in the ESXi host?

Jumbo frames have been discussed here... not working, working... but there are rumors that they did not bring more performance. Perhaps it depends on the situation too.
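
For reference, on ESX the vSwitch MTU is raised from the command line, and on ESXi the same esxcfg-style commands are exposed through the Remote CLI instead of a local console. A sketch, assuming a release that supports jumbo frames on VMkernel ports - the IP address, netmask, and port group name are made up:

    # raise the MTU on the vSwitch that carries the iSCSI VMkernel port
    esxcfg-vswitch -m 9000 vSwitch1
    # the VMkernel port cannot be changed in place: remove it and re-add it with the larger MTU
    esxcfg-vmknic -d "iSCSI"
    esxcfg-vmknic -a -i 192.168.10.11 -n 255.255.255.0 -m 9000 "iSCSI"

The physical switch ports and the Openfiler NICs have to be set to the same MTU end to end, otherwise you will see worse throughput than with standard frames.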

I am awaiting some more disks here to do more tests. I tend to go with NFS on a Linux server; it should make backups and clones easier. But I ran into that one (ESXi) to one (Linux NFS) situation, where bonding does not really boost performance.

I will see if I can run more benchmarks saturating the network.

TahoeTech
Contributor

dilidolo,

can you elaborate a little more... I found the 2724 for $100-$200 on eBay. Is it the better choice for a switch in my scenario? Are the buffers better than the HP's? Should I possibly get separate switches, one for LAN traffic and one for iSCSI traffic? Or will the 2724 support a VLAN with jumbo frames, leaving the default VLAN alone (no jumbo frames) for regular LAN traffic?

ALSO --- can you elaborate a little more on IO/s? I am still new to the world of Openfiler. What is a good NIC to run in Openfiler, and what are spindles?

Really appreciate the help!

dilidolo
Enthusiast

I think it's an OK switch, of course not as good as Cisco. I don't know your usage, so I can't say much. I use port-based VLANs and tag-based VLANs; they work fine for me (you need a router, btw). I didn't use jumbo frames as I didn't find much difference.

I have 6 disks in OF, 4 of them in RAID-5 for VMware, exported through NFS and iSCSI; typical performance is about 60-70 MB/s write, 70-80 MB/s read, good enough for me. A spindle is just a disk. The more disks you have, the more IO/s OF can provide.

Use Intel NICs; even the desktop ones are not bad. I don't know what box you will be using to host OF; modern hardware is good, since the SATA controller and NICs don't go through the shared PCI bus. Old hardware will be very slow, as everything shares the PCI bus.
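
To put rough numbers on the spindle point - rule-of-thumb figures, not a benchmark: a 7200 rpm SATA disk handles very roughly 75-100 random IO/s, so a 4-disk RAID-5 set gives on the order of 300-400 random read IO/s. A random write on RAID-5 costs about four back-end I/Os (read data, read parity, write data, write parity), so the same four disks are good for only roughly 75-100 random write IO/s. That is why adding spindles (or choosing RAID-10 over RAID-5) usually helps a VM datastore more than adding network bandwidth.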

TahoeTech
Contributor

dilidolo,

I plan to use the following:

CPU: Opteron 248 (2.2 GHz)

RAM: 4GB PC2100

Motherboard: http://www.tyan.com/archive/products/html/thunderk8spro_spec.html (onboard Broadcom BCM5704C dual-channel GbE, 2 LAN ports)

RAID Controller: 3ware 9550SX-16ML (16-port SATA II RAID controller)

Disks: 16 x 500GB Maxtor MaxLine Pro 7200 RPM 3.0Gb/s SATA

Can I expect decent results with this setup, or should I go with an Intel NIC? I will have 2 extra PCI-X slots that I can fit NICs in...

nick_couchman
Immortal

I'm not sure about bonding on ESX(i), but on Openfiler (Linux) it works great - looking at traffic stats I see the traffic distributed. I'm not sure that I've ever pushed any single connection over 1Gb/s, but I've definitely pushed multiple connections to the point where the traffic was greater than 1Gb/s. Also, this usually only happens in bursts - despite buffering and caching on both the Openfiler machine and the RAID arrays, it doesn't take long for things to slow down to disk speeds.
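
For anyone trying to reproduce this: the Openfiler/Linux side is the standard kernel bonding driver. Openfiler's web UI normally sets it up, but under the hood it amounts to something like the following - RHEL-style file locations and illustrative names:

    # /etc/modprobe.conf - load the bonding driver for bond0
    alias bond0 bonding
    options bond0 mode=balance-alb miimon=100

    # /etc/sysconfig/network-scripts/ifcfg-eth0 (repeat for eth1) - enslave each GigE port
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    BOOTPROTO=none
    ONBOOT=yes

The catch Jackobli ran into is that both balance-alb and 802.3ad place a given peer or connection on a single slave NIC, so one ESX host talking to one Openfiler box over a single iSCSI session still tops out at one link's line rate; the aggregate only passes 1Gb/s when several initiators or sessions are active at once, which matches what is described above.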

TahoeTech
Contributor

I am still looking for advice on a budget switch choice. I know I should have separate LAN and iSCSI switches, but currently can NOT afford it. I plan to segregate LAN/iSCSI traffic with VLANs until I can afford a second switch. Which of the following is the better choice?

Dell 2724: 24-port Gigabit managed switch (frame buffer per port???) -- $230

HP 1800-24G: 24-port Gigabit managed switch w/ 500KB frame buffer per port --- $350

Are there any other rackmount Gigabit switches I should be looking at in this price range? I would also consider 2 unmanaged switches if the lack of jumbo frame and flow control tuning on the switch won't hurt me performance-wise...
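
On the ESX(i) side, pinning iSCSI to its own VLAN is just a tag on the port group. A sketch - VLAN ID 20 and the port group name are made up, and the switch ports facing the host must carry that VLAN tagged as well:

    # tag the iSCSI port group on vSwitch0 with VLAN 20
    esxcfg-vswitch -v 20 -p "iSCSI" vSwitch0
    # list vSwitches and port groups to verify the VLAN assignment
    esxcfg-vswitch -l

On the Openfiler box the matching step is either a dedicated NIC in an access port on that VLAN or an 802.1Q sub-interface; either way, jumbo frame and flow control settings then only have to be applied to the iSCSI VLAN's ports.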

dilidolo
Enthusiast

I have a Tyan 2892 at home, same Broadcom 5704 NIC; it works great, and I was able to get over 960mb/s connection speed. You can try the onboard 5704, and if you encounter issues, put Intel NICs in.

I have also used the 9550 a lot; good card.

KellyOlivier
Enthusiast

I bought a D-Link DGS-1224T smart switch for my lab. It's gig and supports VLANs and 802.3ad, which was what I wanted. It was 250 bucks from Newegg. It also supports jumbo frames. I use Openfiler too for my cluster VMFS, but I don't see how people could use Openfiler in production, since you can't VMotion the Openfiler VM --- chicken-and-egg scenario.

nick_couchman
Immortal

I don't run Openfiler in a VM; I run it on a dedicated machine with some Fibre Channel-attached storage...

TahoeTech
Contributor

I have TYAN 2892 at home, same broadcom 5704 nic, works great, I was able to get over 960mb/s connection speed. You can try the onboard 5704, and if you enconter issues, put Intel nics in.

I also used 9550s a lot, good card.

So the Broadcom 5704 works well! Bummer, because from everything I was reading, Intel NICs were the "recommended de facto" choice, so I ended up ordering a dual-port PCI-X Intel Pro/1000 MT card... I suppose having 4 NICs won't be a bad thing? Maybe I could set both pairs (the dual Broadcom and the dual Intel) up in "teaming" and have 2 teamed NIC connections to my iSCSI SAN? Would I see any benefit from "teaming" each set of NICs I have?

When you say 960mb/s, do you mean megabytes or megabits? Either way, that is fast! I hope I can get 3/4 of that!!!
