VMware Cloud Community
TahoeTech
Contributor

Openfiler w/ iSCSI? Any performance hit compared to a commercial iSCSI SAN?

Experimenting with VMware ESXi and want to set up a SAN to hold all the virtual machines and storage. Are there any performance issues with Openfiler compared to a commercial iSCSI SAN? Aside from HCL-supported hardware, are there any speed issues? I plan to use hardware failover and load balancing, but need a SAN to accomplish this. I'm on a budget, so I'm wondering: all else aside, will an Openfiler solution be any slower than a commercial iSCSI SAN...

26 Replies
TahoeTech
Contributor

I don't run Openfiler in a VM, I run it on a dedicated machine with some Fibre-attached storage...

Wait, this is getting better by the minute... I was planning on using a Rackable Systems 16-drive enclosure (3U) with a Promise SATA RAID PCI-Express card and 16 x 500 GB Maxtor SATA drives. CAN I ALSO pop in a SCSI card and use an older Promise VTrak I have (VTrak 15100) and add another 15 drives?! That would be awesome, but I don't know if the external SCSI enclosure would be as fast as the internal drives. Maybe for EXTRA NFS storage it would be worth digging the VTrak out of storage?

And just to double-check: Openfiler will support vMotion as long as it is a dedicated machine, and not running on an ESX/i host? Correct?

patrickds
Expert

And just to double-check: Openfiler will support vMotion as long as it is a dedicated machine, and not running on an ESX/i host? Correct?

To ESX it would just be an iSCSI SAN, so no problem for vMotion.

If running it in a VM, you'd have to put it on other shared storage or on local disks, and I don't think you'll get decent performance.

patrickds
Expert

I don't run Openfiler in a VM, I run it on a dedicated machine with some Fibre-attached storage...

Why this setup?

If you have fibre-attached storage, you could use it from your ESX hosts directly and get better performance.

Jackobli
Virtuoso

So the Broadcom 5704 works well! Bummer, because from everything I was reading, Intel NICs were the de facto recommendation, so I ended up ordering a PCI-X dual-port Intel PRO/1000 MT card...

The Intel PROs are as good a choice as the Broadcoms (57xx series). Both are built for server-class / production environments.

I suppose having 4 NICs won't be a bad thing? Maybe I could set up both sets (the dual Broadcom and the dual Intel) in "teaming" and have two teamed NIC connections to my iSCSI SAN? Would I see any benefit from teaming each set of NICs?

As I wrote earlier to Nick, there are situations where you can't aggregate throughput. Have a look here. There are some ASCII pictures at the bottom that explain how aggregation works.

But if you set it up for failover, with one onboard NIC and one NIC on the card, you at least get more resilience against network problems.
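For the failover case, the usual approach on the Openfiler (Linux) side is an active-backup bond. A minimal sketch, assuming a Red Hat-style ifcfg layout; the device names and IP are placeholders, and the exact file locations can vary by Openfiler release:

```
# /etc/sysconfig/network-scripts/ifcfg-bond0  (illustrative)
DEVICE=bond0
IPADDR=192.168.1.50
NETMASK=255.255.255.0
ONBOOT=yes
BONDING_OPTS="mode=active-backup miimon=100"

# /etc/sysconfig/network-scripts/ifcfg-eth0  (onboard NIC)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth1  (NIC on the add-in card)
DEVICE=eth1
MASTER=bond0
SLAVE=yes
ONBOOT=yes
```

Note that mode=active-backup gives failover only, no extra throughput; 802.3ad (mode=4) can aggregate across links, but as explained above a single iSCSI session still hashes onto one physical link, so it won't exceed one NIC's speed.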

Nadrek
Contributor

At home I use an SMC SMCGS24-Smart switch, 24 ports plus 4 GBIC slots. It's the usual datacenter noise level, and is right in the under $300 price range. I have yet to actually try VLANs or Jumbo frames, though with my recent foray into iSCSI and ESXi, it looks like those will definitely be experimented with soon. Mine's been running for some time without issue, but also without significant load. SMC hasn't updated the firmware in quite some time, and the unit is still available new.

Smosschops
Contributor

Can anyone help, please? I'm getting very slow performance from my Openfiler storage. I'm running ESX as a VM on Workstation with 1024 MB RAM, and the Openfiler appliance with 256 MB of RAM. I'm using HP GbE NICs (rebadged Intel GbE) in all the physical boxes. The physical box running VMware Workstation has a Q6600 CPU with 4 GB RAM, and the physical switch is a ProCurve 1800-24.

When I run hdparm on the ESX VM I'm only getting buffered disk reads of 10 MB/sec (it was worse, 6 MB/sec, until I added another vCPU to the ESX VM). If I run hdparm on the Openfiler appliance itself I get at least 33 MB/sec, sometimes 60 MB/sec. Is anyone running ESX VMs with Openfiler and getting better performance?
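For anyone wanting to reproduce numbers like these, here is a minimal sketch of a sequential-read check using dd as a cross-check to hdparm. The directory and file size are placeholders, not from my actual setup; point TESTDIR at a directory on the disk under test:

```shell
#!/bin/sh
# Rough sequential-throughput check with dd, usable inside any Linux VM.
# TESTDIR is a placeholder; default to the current directory for illustration.
TESTDIR="${TESTDIR:-.}"

# Write 64 MB with fdatasync so the write rate isn't flattered by the page cache.
dd if=/dev/zero of="$TESTDIR/ddtest.bin" bs=1M count=64 conv=fdatasync

# Read it back; the rate dd reports approximates sequential read throughput.
# (hdparm -t /dev/sda gives a similar buffered-read figure at the device level.)
dd if="$TESTDIR/ddtest.bin" of=/dev/null bs=1M

# Clean up the test file.
rm -f "$TESTDIR/ddtest.bin"
echo "benchmark complete"
```

Bear in mind that dd measured from inside a VM includes the whole stack (guest, hypervisor, network, iSCSI target), whereas hdparm on the Openfiler box measures only the local disks, so a gap between the two is expected; a 3-6x gap like the one above suggests a bottleneck somewhere in between.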

Thanks

Steve

nick_couchman
Immortal

Two things:

1) Search for this in the forum - it's been discussed before.

2) If you still can't find an answer, start a new thread about it - don't "hijack" this one. You'll get better help, anyway, if you start a new discussion, especially since this thread is marked as "answered."
