VMware Communities
VMTN_Admin
Enthusiast

OpenFiler
13 Replies
jasonboche
Immortal

OpenFiler 2.2 iSCSI has been working flawlessly for many weeks in my test lab. VMotion, DRS, HA, etc.

VCDX3 #34, VCDX4, VCDX5, VCAP4-DCA #14, VCAP4-DCD #35, VCAP5-DCD, VCPx4, vEXPERTx4, MCSEx3, MCSAx2, MCP, CCAx2, A+
estradar
Contributor

OpenFiler 2.2 iSCSI has been working flawlessly in my production environment: VMotion and HA using 3 ESX servers and 4TB of storage. Combination of rsync and robocopy for DR replication. Getting ready to bring up the second site.
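For anyone curious, the rsync half of a setup like that can be sketched roughly as below. This demo mirrors a local temp directory so it runs anywhere; in production the source would be the Openfiler volume mount and the destination an ssh target (host and paths here are hypothetical, not estradar's actual config):

```shell
#!/bin/sh
# Sketch of one-way DR replication with rsync. Demo uses local temp dirs;
# in production SRC would be the Openfiler volume and DEST an ssh target
# such as druser@drhost:/mnt/vg0/vols/ (hypothetical names).
SRC=$(mktemp -d)
DEST=$(mktemp -d)
echo "vm config" > "$SRC/vm1.cfg"
# -a preserves permissions/times/ownership; --delete keeps the DR copy an
# exact mirror (add -z to compress when replicating over a WAN link).
rsync -a --delete "$SRC/" "$DEST/"
ls "$DEST"
```

Run from cron on whatever interval your recovery point objective allows; --delete means files removed at the primary site disappear from the DR copy on the next pass.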

mcadmin
Contributor

One issue I find annoying with OpenFiler and iSCSI: when you modify your iSCSI config by adding or removing a volume, the iet.conf file reverts, making everything LUN 0, so you have to manually copy back another file or actually edit the file itself.
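A small guard against that revert might be to snapshot the target config before touching the web UI, so the LUN lines can be diffed back in afterwards. A minimal sketch (the default path is an assumption — IET builds commonly use /etc/ietd.conf, but check where your Openfiler keeps it):

```shell
#!/bin/sh
# Sketch: keep timestamped copies of the iSCSI target config so the LUN
# numbering can be restored after the web UI rewrites it. The default
# path is an assumption; pass your actual config file as the argument.
backup_iet_conf() {
    conf=${1:-/etc/ietd.conf}
    [ -f "$conf" ] || { echo "no $conf to back up" >&2; return 0; }
    cp -p "$conf" "$conf.$(date +%Y%m%d-%H%M%S).bak"
}
backup_iet_conf "$@"
```

Run it (or cron it) before adding or removing volumes, then diff the newest .bak against the rewritten file to recover the LUN assignments.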

Joe

Joe Suma Consultant PSO Americas Southeast Region VMware, Inc. VCP, MCSE, MCSA, MCP, CCA, A+, ITIL http://www.linkedin.com/in/jsuma
rcs2k
Contributor

I am setting up a similar lab, but I'm having trouble creating volumes in Openfiler 2.2. The physical disks show up fine in the web admin page; however, when I click on "create new volume", no physical volumes are found. The physical disk is present when I run "fdisk -l" at the command line.

Do you have any suggestions?

Thanks

johnwilk
Contributor

Can you point me at your guide for using rsync with Openfiler? I want to replicate data across sites for DR also.

andy_mac
Enthusiast

2.2 & 2.3 are both solid performers. iSCSI works well and performs flawlessly for all the usual applications as well as VCB/vRanger.

NFS and CIFS performance is incredible - the more memory (cache) you have, the faster it goes...

Snr_Whippy
Contributor

Hi estradar,

Just out of interest, what kind of VM servers have you been hosting on the box? IIS, app servers, SQL Server, or Oracle?

I have seen SQL Server running on it using NFS without any issues so far, but it's hard to tell how much it's being pushed.

What kind of spec is the hardware for your Openfiler system: 10k or 15k drives? Memory? CPU?

Have you managed to push it past its limit?

ebowser
Contributor

Just wanted to give another thumbs up on Openfiler.

Here's our config:

5 x Dell 2900 ESX Servers running ESXi 3.5 Update 3. Software iSCSI initiator on integrated Broadcom GB NIC

OpenFiler array:

White box - 2 x quad-core Xeon, Intel 5000 chipset motherboard, LSI 8-channel SATA/SAS RAID card. Bonded onboard GbE NICs (802.3ad) dedicated to iSCSI traffic, add-in NIC for management.

2 x 80GB SATA for OS/swap on motherboard SATA ports running software RAID1.

8 x 750GB SATA storage drives in two separate RAID10 arrays on the LSI card.

VMFS3 - 2 extents in one datastore giving a total of 2.73TB. Formatted with 2MB block size.
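For reference, an 802.3ad bond like the one in that config is typically set up on RHEL-style distros (which Openfiler resembles) with ifcfg fragments along these lines. Device names and addresses are placeholders, and the exact file locations on your Openfiler build may differ:

```shell
# /etc/sysconfig/network-scripts/ifcfg-bond0  (hypothetical address)
DEVICE=bond0
IPADDR=192.168.10.5
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
# mode=802.3ad (a.k.a. mode=4) is LACP; miimon polls link state every 100ms
BONDING_OPTS="mode=802.3ad miimon=100"

# /etc/sysconfig/network-scripts/ifcfg-eth0  (repeat for eth1)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
```

Note that 802.3ad needs the switch ports configured for LACP as well, and a single TCP stream still tops out at one link's worth of bandwidth - the aggregation helps across multiple initiator sessions.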

We run about 120-140 virtual machines and use VCB and Storage VMotion. There is a good mix of small Exchange servers, MS-SQL, MySQL (both Linux & Windows), PostgreSQL (again, Linux & Windows), application/terminal servers, plus web, DNS, Postfix, Qmail, and Sendmail servers. Absolutely no storage bottlenecks so far.

Yes, this is all on SATA, not SAS, and it runs like a top. We tested several different systems we have here as OpenFiler servers before going into production, and I would highly recommend it as long as you are running multiple cores and a hardware RAID card. We only tested iSCSI, so I can't say how NFS performs.

gippnet
Contributor

Are you really booting 120-140 machines from one OpenFiler server that's attached at 2 Gbit?

I have spent a couple of days reading about OpenFiler and Linux iSCSI in particular, and I see lots of people saying that the gigabit network is a real bottleneck...

But if I understand you right, this is great news!!!

Do you know if OpenFiler performance increases with more CPUs, or is it single threaded?

Thanks for sharing!

/mathias

ebowser
Contributor

The bottleneck at this point is the disks, so we've just ordered hardware to build another one and will be splitting the load. More spindles = more performance.

Average network utilization doesn't usually surpass 300 Mbit, so I'd say that 2Gbps is not a bottleneck at all.
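One quick way to sanity-check a utilization figure like that on your own filer is to sample the interface byte counters in /proc/net/dev (Linux-only). The sketch below defaults to `lo` so it runs anywhere; point IF at your iSCSI bond (e.g. IF=bond0) on a real box:

```shell
#!/bin/sh
# Rough receive-throughput check: read the interface's byte counter twice,
# one second apart, and convert the delta to Mbit/s. IF defaults to lo so
# the sketch runs anywhere; use IF=bond0 (or your iSCSI NIC) in practice.
IF=${IF:-lo}
rx1=$(awk -v dev="$IF:" '$1 == dev {print $2}' /proc/net/dev)
sleep 1
rx2=$(awk -v dev="$IF:" '$1 == dev {print $2}' /proc/net/dev)
echo "$IF: $(( (rx2 - rx1) * 8 / 1000000 )) Mbit/s received"
```

Watching that during your busiest backup/VCB window gives a much better answer than the average - it's the peaks that tell you whether the links are a bottleneck.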

We're usually 90% idle across all 8 cores. It seems like the iSCSI target server has two threads that use CPU cycles, and the Linux bonding driver also consumes some CPU cycles. If I had to guess, performance wouldn't increase past 4 cores.

I also hear a rumor that VMFS really performs better on a 256 stripe size, so we will be testing this on the new array before going live.

Good luck, and let me know how things turn out for you!

gippnet
Contributor

Thanks for the reply.

I am about to order new hardware for a new SAN and a couple of Host machines for virtualization.

It's really hard to get decent information regarding iSCSI/FC and SANs in particular; there's so much religion :)

Hearing your words about network utilization is really good news for me. Of course, it all depends on what you are doing on your servers :)

The CPU information is also very helpful.

I am trying to decide how much memory and how much CPU power I'll need in the SAN server. I will only have about 2TB in this SAN.

This is a really good post about iSCSI and VMware: http://virtualgeek.typepad.com/virtual_geek/2009/01/a-multivendor-post-to-help-our-mutual-iscsi-cust...

ebowser
Contributor

Like I said, I think 4 cores should be sufficient. We have 4GB RAM in that box, mainly because we couldn't use it anywhere else. Right now there is less than 400MB being used.

I've never tried running ESX/ESXi itself on unsupported hardware... our Dell 2900s are workhorses though, and reasonably priced.

gippnet
Contributor

Thanks! I really appreciate your information.

I'll drop a post here when we have ordered and tested our setup.

We will definitely buy PE2950s; we have had only good experiences with both 1950s and 2950s. They are, like you said, workhorses...

Cheers!
