VMware Cloud Community
qwkhyena
Contributor

Open Source iSCSI Targets; should I use IET or SCST??

I've just recently gotten IET to work with my ESXi vmhost, but while debugging my current setup I saw numerous posts recommending SCST over IET. Is that still accurate? I also see there's STGT out there, but most folks steer clear of it due to poor performance compared to SCST and IET.

Is SCST difficult to get running, and are there any tricks I should know about ahead of time? (For example: IET really needs BLOCKIO vs. FILEIO, and you really want to change /sys/block/<device>/queue/scheduler from cfq to deadline. I saw a HUGE jump in performance from that change alone!)
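For context, here's roughly what those two IET tweaks look like on my target. The device (/dev/sdb) and the IQN below are just placeholders for whatever your setup actually uses:

    # switch the backing device's I/O scheduler from cfq to deadline
    # (not persistent across reboots -- put it in boot.local or a udev rule)
    echo deadline > /sys/block/sdb/queue/scheduler
    cat /sys/block/sdb/queue/scheduler        # verify which scheduler is active

    # /etc/ietd.conf (or /etc/iet/ietd.conf, depending on the package):
    # export the LUN with blockio instead of the default fileio
    Target iqn.2010-11.local.san:storage.disk01
        Lun 0 Path=/dev/sdb,Type=blockio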

Lastly, I'm using 6 GbE NICs on the SAN, with 5 of them in a bonded interface using ALB load balancing. Is it better to break up the NICs and assign them static IPs within the same subnet, so my ESXi box (using MPIO with multiple vmkernel ports) can have more paths to the target?

I've read Chad's excellent post here, btw. It was extremely helpful, and if I ever meet the guy, I'm going to buy him a beer!

Thanks for your help!

-Jeff

Just for giggles, here's my current setup:

Switch: HP ProCurve 2810-24G. It does jumbo frames and flow control, though sadly, as most have already figured out, not at the same time. Currently using just flow control on the SAN VLAN ports.

vmhost: ESXi 4.1 with 4 GbE NICs, 3 of them dedicated to SAN use. I created one vSwitch with 3 vmkernel ports, like Chad described. I'm using Round Robin for my PSP with type = bytes and bytes = 11; I got better performance with this than with Round Robin with type = iops and iops = 1. I currently have 3 paths to the target. With IET, I've seen throughput as high as 210 MBps with two Windows 7 VM guests running HD Tune at the same time.
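For reference, here's roughly the command-line equivalent of that setup on ESXi 4.1 (I did parts of it through the vSphere Client; vmhba33 and the naa ID below are placeholders for your own software iSCSI adapter and device):

    # bind each iSCSI vmkernel port to the software iSCSI adapter
    esxcli swiscsi nic add -n vmk1 -d vmhba33
    esxcli swiscsi nic add -n vmk2 -d vmhba33
    esxcli swiscsi nic add -n vmk3 -d vmhba33
    esxcli swiscsi nic list -d vmhba33          # verify the bindings

    # set Round Robin on the device and tune the path-switch trigger
    esxcli nmp device setpolicy --device naa.xxxxxxxx --psp VMW_PSP_RR
    esxcli nmp roundrobin setconfig --device naa.xxxxxxxx --type bytes --bytes 11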

SAN: Adaptec 3805 RAID card with 8 x 500 GB Samsung HDs. The OS is openSUSE 11.2 with the iscsitarget package (it's IET; I just don't know the version). It has 6 GbE NICs, with 5 of them in a bonded interface. I'm seriously stressing over the added latency of the bonded interface when my vmhost boxes using MPIO can already do the round robin on the iSCSI initiators!
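For anyone curious, the bond is defined roughly like this in /etc/sysconfig/network/ifcfg-bond0 on openSUSE (the IP and interface names below are specific to my box, so treat them as examples):

    STARTMODE='auto'
    BOOTPROTO='static'
    IPADDR='192.168.50.10/24'
    BONDING_MASTER='yes'
    BONDING_MODULE_OPTS='mode=balance-alb miimon=100'
    BONDING_SLAVE0='eth1'
    BONDING_SLAVE1='eth2'
    BONDING_SLAVE2='eth3'
    BONDING_SLAVE3='eth4'
    BONDING_SLAVE4='eth5'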

4 Replies
mreed9
Enthusiast

From the research that I've done, SCST offers better performance because it hooks into the kernel rather than running in user space. I believe IET still has an issue with properly handling storage accessed by multiple ESX hosts simultaneously, because of how it handles SCSI reservations. I'm definitely not an expert, but from what I've read, installing SCST requires recompiling the kernel, which I've found to be a little difficult. Here is a link for some information on SCST.

http://scst.sourceforge.net/index.html
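From what I've seen, once the kernel pieces are built, the configuration itself is fairly compact. A rough, purely illustrative sketch of an scst.conf for a blockio-backed iSCSI LUN (the device name, path, and IQN are made up, and the exact format depends on the SCST version you build, so double-check it against their docs):

    # /etc/scst.conf -- loaded with: scstadmin -config /etc/scst.conf
    HANDLER vdisk_blockio {
        DEVICE disk01 {
            filename /dev/sdb
        }
    }

    TARGET_DRIVER iscsi {
        enabled 1

        TARGET iqn.2010-11.local.san:disk01 {
            enabled 1
            LUN 0 disk01
        }
    }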

qwkhyena
Contributor

Thanks, mreed9, for your advice. I'll definitely start working with SCST (and doing kernel recompiles doesn't really scare me too much; I once played around with the LinuxFromScratch project for half a year just to get a better understanding of kernel compiles and Linux in general).

I do have another question, however: have you done much with NIC bonding on a Linux target before? I'm really nervous about adding any latency to my SAN, because the bonding interface has to make a judgment call on whether to offload incoming traffic to another NIC or let it stay on the primary NIC. I'm only bringing this up because it appears that if you're using MPIO correctly on the initiator, you should be able to add all of the target NICs' static IPs manually somehow and simply let the vmhost create even more paths to the target. Sadly, I don't know enough to even talk intelligently about what I'm asking for here!

Also, some useful info you may want to forget as soon as possible (your call): the reason I'm using openSUSE 11.2 for my iSCSI target is that it has a better implementation of NIC bonding than CentOS/RHEL. I'm primarily a CentOS/RHEL "whore," as some would say, but CentOS/RHEL did something to the NIC driver I was using on the motherboard that has left ALB bonding totally borked from version 5.1 to the present.

Cheers!

mreed9
Enthusiast

I definitely have not done much work with NIC bonding on a Linux target. I don't think there's really an advantage in this scenario to having the NICs bonded to load-balance incoming traffic, since that logic is taken care of by the ESX host once MPIO is implemented on the host.

qwkhyena
Contributor

Agreed. I just don't know how to set it up on the target side of things (to allow for multipathing, that is).
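My best guess, for anyone who wants to sanity-check it: break the bond, give each NIC its own static IP in the same subnet (ietd appears to listen on all local addresses by default, so each IP becomes its own portal), then add each of those IPs as a send-targets address under Dynamic Discovery on the ESXi software initiator so Round Robin picks up the extra paths. Something like this on the openSUSE side, with made-up interface names and IPs:

    # /etc/sysconfig/network/ifcfg-eth1 (repeat for eth2..eth5 with .12, .13, ...)
    STARTMODE='auto'
    BOOTPROTO='static'
    IPADDR='192.168.50.11/24'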
