VMware Cloud Community
natedev
Contributor

Using Solaris 10 x86 as an iSCSI target

As an experiment, I'm going to (try to) set up a Dell PowerEdge 1950 with a 2.6 GHz quad-core processor, 4 GB of RAM, an Adaptec RAID 3085 HBA (which has Solaris 10 x86 drivers), a Dell PowerVault MD1000 (with 15 x 750 GB SATA II drives), and Solaris Express, Community Edition build 61, and see what kind of iSCSI target it makes for ESX 3.0.1.

One of the things I'll have to decide is whether to treat the drives in the PowerVault as a JBOD and set up RAID-Z, or leverage the Adaptec's hardware to do RAID 6 or possibly 60. I'd think performance would be much better if I let the Adaptec do the job, but ZFS does a nice job of monitoring the drives in an array for problems (and correcting issues if/when they arise).
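If the RAID-Z route wins out, a minimal sketch of the setup on Solaris Express might look like the following (pool name, zvol name, volume size, and device names are all hypothetical; the `shareiscsi` property should be available in this build, but verify on your system):

```shell
# Hypothetical device names -- substitute what format(1M) reports.
# Create a double-parity RAID-Z2 pool from some of the PowerVault disks.
zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0

# Carve out a ZFS volume (zvol) to present to ESX as a LUN.
zfs create -V 500g tank/esxlun0

# On Solaris Express, setting shareiscsi=on on a zvol automatically
# creates an iSCSI target backed by that volume.
zfs set shareiscsi=on tank/esxlun0

# Verify the target was created.
iscsitadm list target
```

From there the ESX software initiator just needs the Solaris box's IP as a send-targets address.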

Is anyone successfully using Solaris Express w/ ZFS as an iSCSI target for ESX 3.0.1? Any thoughts or advice regarding my upcoming attempt?

5 Replies
Dave_Mishchenko
Immortal

Given that you're using SATA II drives, I would go with RAID 10 if that's an option, so that you'll have better performance. I would also stick with hardware RAID.

natedev
Contributor

Yeah, it's kind of a balancing act between performance and reliability. I'm worried about the SATA II drives' longevity. These are Seagate Barracuda ES 750 GB drives, which are theoretically more reliable than the ones designed for desktop units, but they're expected to have a much shorter service life than a SAS drive (and they were all from the same lot). Since I have 15 of them, I was thinking I might set up 14 of the drives in a RAID 60 with a single hot spare. I'm not sure how much slower that would be than RAID 10, but it would be a little safer.

GCR
Hot Shot

Hello,

Not with Solaris, but with Red Hat, yes.

I used the iscsitarget driver from SourceForge:

http://iscsitarget.sourceforge.net/

Cheers

natedev
Contributor

Found some very interesting benchmarks of RAIDZ on ZFS versus various hardware RAID configurations:

http://milek.blogspot.com/2006/08/hw-raid-vs-zfs-software-raid.html

http://milek.blogspot.com/2006/08/hw-raid-vs-zfs-software-raid-part-ii.html

These results suggest that getting a non-RAID HBA and using ZFS software RAID will give you better performance and save you some money.

VADA-john
Contributor

I know this is an old topic; however, I have a good deal of experience developing and supporting ZFS-based iSCSI targets.

As you have already pointed out, you are better off using ZFS to control the RAID. This is primarily due to ZFS's use of the ARC, which uses all of your memory above 1 GB as a read/write cache; most HBA cache is limited to 128 MB or 256 MB.
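For what it's worth, you can watch how much memory the ARC is actually consuming via its kstats (a quick sketch; the reported values are in bytes and obviously system-dependent):

```shell
# Current ARC size, its adaptive target, and its hard ceiling.
kstat -p zfs:0:arcstats:size zfs:0:arcstats:c zfs:0:arcstats:c_max
```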

There is also a web-based GUI for managing ZFS that lets you create your targets much more easily. The biggest problem with this solution is verifying that all of your hardware will work with Solaris.

I also generally use the iSCSI initiators of the OS in the VM rather than ESX's; however, that is because I move VMs to VMware Workstation and VMware Server boxes a lot as part of my testing. That way I do not need to re-establish the targets on each box.

As for your drives, I would definitely maintain at least one online spare, and keep your stripe sets to five or fewer disks. As discussed in the other threads, SATA drives have higher failure rates, and you do not want to lose everything to a double disk failure.
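One way the advice above could translate into a layout for the 15 drives (device names are examples only): two 5-disk and one 4-disk RAID-Z group keep every stripe set at five disks or fewer, leaving one drive as an online spare.

```shell
# Hypothetical 15-drive layout: 5 + 5 + 4 RAID-Z groups plus one spare.
zpool create tank \
  raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 \
  raidz c2t5d0 c2t6d0 c2t7d0 c2t8d0 c2t9d0 \
  raidz c2t10d0 c2t11d0 c2t12d0 c2t13d0 \
  spare c2t14d0

# Confirm the vdev layout and spare assignment.
zpool status tank
```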

John
