s1xth
VMware Employee

How is iSCSI fast enough for VM access??

This may be a stupid question (no question is ever stupid! ha)... I have always worked with local storage and SANs connected via HBAs, but how is iSCSI fast enough for virtual environments? Since the hosts are essentially connected over copper 1 Gig links, how is that enough bandwidth? I have been doing a lot of research on it (my environment hasn't moved to iSCSI... yet) to get more information. I just set up an Openfiler box with three hosts in a cluster and pointed them at it over iSCSI... everything works well (my test environment is on a 100 MB switch; I still need to pick up a new gigabit switch and VLAN my connections)... just wondering if someone could explain a little more about how iSCSI is able to handle the traffic so well. Thanks!!

http://www.virtualizationimpact.com http://www.handsonvirtualization.com Twitter: @jfranconi
16 Replies
jbruelasdgo
Virtuoso

With the right tuning and configuration, iSCSI can be a pretty good option.

check this: http://virtualgeek.typepad.com/virtual_geek/2009/01/a-multivendor-post-to-help-our-mutual-iscsi-cust...

hope it points you in the right direction

regards

Jose Ruelas

Jose B Ruelas http://aservir.wordpress.com
s1xth
VMware Employee

Wow, what a great article... gotta read this... it will help me with my iSCSI research and testing... thanks so much!!

http://www.virtualizationimpact.com http://www.handsonvirtualization.com Twitter: @jfranconi
rmrobert
VMware Employee

1 Gbit networking is pretty fast. That's 125 MB/s, which is faster than just about any single spinning disk can stream. For latency, your disk seeks are much slower than the network overhead. Many of our customers are using 10 Gbit networking as well, which eliminates any chance of the network being the bottleneck, even with super fast SSDs.

Many people get by without iSCSI just using NFS. Whenever I run VMs around work, I always use NFS and haven't had any problems, although I'm sure this is less performant than proper iSCSI/SAN.
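To put rough numbers behind the first paragraph, here is a quick back-of-the-envelope sketch; the disk throughput, seek time, and LAN round-trip figures are illustrative assumptions, not measurements from any particular array:

# Rough arithmetic only -- disk throughput, seek time, and LAN round trip
# below are typical ballpark assumptions, not measured values.

link_gbit = 1                        # 1 GbE uplink
link_mb_s = link_gbit * 1000 / 8     # ~125 MB/s of raw line rate

disk_seq_mb_s = 100                  # a single spinning disk streaming sequentially
disk_seek_ms = 8                     # average seek on a 7.2K SATA drive
lan_rtt_ms = 0.3                     # round trip on a switched GbE LAN

print(f"GbE line rate:        {link_mb_s:.0f} MB/s")
print(f"Single disk streams:  ~{disk_seq_mb_s} MB/s")
print(f"Disk seek vs LAN RTT: {disk_seek_ms} ms vs ~{lan_rtt_ms} ms")

# The network only becomes the bottleneck once the array can push more
# than ~125 MB/s or respond faster than the LAN round trip, which is why
# 10 GbE mostly matters for SSD-backed arrays.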

jbruelasdgo
Virtuoso

if the info is helpful, please close the thread

regards

Jose Ruelas

Jose B Ruelas http://aservir.wordpress.com
williambishop
Expert

Generally, it's best not to close threads. The topic usually continues, as there may be data given that you haven't considered yet, and the number one rule is that you never make up your mind until you have all of the data in hand... and even then you have to be willing to change your mind when new data runs contrary to what you believe, or want to believe.

I won't even get into the religious war that is iSCSI vs. everything else. Some people love it, some people hate it. For me it's never the bandwidth, it's always the latency.

--"Non Temetis Messor."
s1xth
VMware Employee

Thanks guys... I am just doing a lot of research before my move to shared storage, deciding whether I want to go with iSCSI or an HBA-attached SAN (MD3000).

Thanks!

http://www.virtualizationimpact.com http://www.handsonvirtualization.com Twitter: @jfranconi
williambishop
Expert

Just run the numbers. Do a search on this site for the unofficial storage performance thread, and don't forget to look at the SPC-2 numbers for the different arrays.

Personally, I already have fiber, so I prefer to stick with it since latency is ultra low and the performance is better (how much of that TCP packet is payload again?)... but it makes sense in a lot of cases to use iSCSI.
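For anyone wondering about the payload question above, here is a rough sketch of the framing overhead. The header sizes are the standard ones; assuming a basic 48-byte iSCSI header with no digests and ignoring that a PDU can span several TCP segments:

# Rough framing-overhead arithmetic -- the 48-byte iSCSI basic header
# segment and "no digests" are assumptions.

def payload_fraction(mtu):
    eth_overhead = 14 + 4 + 8 + 12   # Ethernet header + FCS + preamble + inter-frame gap
    ip_tcp = 20 + 20                 # IPv4 + TCP headers, no options
    iscsi_hdr = 48                   # basic header segment, no header/data digests
    payload = mtu - ip_tcp - iscsi_hdr
    on_wire = mtu + eth_overhead
    return payload / on_wire

print(f"1500-byte MTU: {payload_fraction(1500):.1%} payload")   # ~92%
print(f"9000-byte MTU: {payload_fraction(9000):.1%} payload")   # ~99% with jumbo frames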

--"Non Temetis Messor."
AndreTheGiant
Immortal

The MD3000i is a nice entry-level enterprise storage solution.

Just be sure to use at least 2 different disk groups, and to buy the dual-controller version.

Andre

**if you found this or any other answer useful please consider allocating points for helpful or correct answers

Andrew | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro
s1xth
VMware Employee

Andre... what do you mean when you say "use two different disk groups"?

http://www.virtualizationimpact.com http://www.handsonvirtualization.com Twitter: @jfranconi
AndreTheGiant
Immortal

Andre... what do you mean when you say "use two different disk groups"?

The MD3000i, AX, CX and other storage arrays have 2 controllers that can work active/active, but NOT on the same LUN.

For this reason, a good idea is to have at least 2 LUNs and assign them to different controllers.

Another good idea (on this type of storage) is to use a different physical disk group for each LUN (just to separate I/O operations).
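As a minimal illustration of that layout (group names, disk counts, and sizes below are hypothetical; only the structure of one LUN per controller, each on its own physical disk group, reflects the advice above):

# Hypothetical layout -- names, disk counts, and sizes are made up; the
# point is one LUN per controller, each on its own physical disk group.

layout = {
    "DiskGroup-A": {"disks": "5 x 300GB", "lun": "LUN0", "preferred_owner": "Controller 0"},
    "DiskGroup-B": {"disks": "5 x 300GB", "lun": "LUN1", "preferred_owner": "Controller 1"},
}

for group, cfg in layout.items():
    print(f"{group}: {cfg['lun']} -> {cfg['preferred_owner']} ({cfg['disks']})")

# Both controllers stay busy because each owns one LUN, and the two LUNs
# never compete for the same spindles.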

Andre

**if you found this or any other answer useful please consider allocating points for helpful or correct answers

Andrew | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro
jasonlitka
Enthusiast

I've got an MD3000i. It's an excellent starter system for those that don't want to roll their own with something like OpenFiler. I've got mine filled with 300GB 15K RPM drives in RAID 10 for OS disks and high-priority storage, and then have an MD1000 attached to it filled with 1TB 7.2K SAS drives in RAID 6 for lower-priority (read: users' MP3s) storage.

Jason Litka

Jason Litka http://www.jasonlitka.com
Chuck8773
Hot Shot

We have been using SW iSCSI for two years now. The first year was great; the first half of the second year was not. We used EqualLogic SATA arrays. I spent a ton of time researching disk I/O metrics, and we replaced the SAN that serves the system volumes with EqualLogic SAS arrays, and performance was incredible. Prior to upgrading to SAS, rebooting most of the VMs at the same time, about 200, resulted in VMs booting up in 40-50 minutes. This created many issues for those VMs as services did not start. After upgrading to SAS, the same type of massive reboot resulted in VMs booting in about 5 minutes.

A few specs:

SATA SAN Random reads were around 800 IO/sec.

SAS SAN Random reads are around 4000 IO/sec.

These are both over a single Gb link with ESX 3.5. In my testing on a 7-SATA-disk array in our test lab, I see an increase in sequential reads from 100 MB/sec to 240 MB/sec when going from a single path in 3.5 to 3 MPIO paths in 4.0. The IO/sec did not change, but I am attributing that to the 7 SATA disks. Both tests resulted in about 1000 IO/sec.

Hope this helps. In my view, SATA may work well when using MPIO. In our experience with a single path, SATA was not able to support the load of booting many VMs.
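Rough arithmetic that lines up with the boot times above (the per-VM IO count during boot is an assumed round figure, not a measurement):

# Back-of-the-envelope boot-storm math -- the per-VM IO count is an
# assumption chosen only to show how array IOPS dominates the result.

vms = 200
ios_per_vm_boot = 10_000    # assumed IOs issued per VM while booting

for label, array_iops in [("SATA (~800 IO/sec)", 800), ("SAS (~4000 IO/sec)", 4000)]:
    minutes = vms * ios_per_vm_boot / array_iops / 60
    print(f"{label}: ~{minutes:.0f} minutes to drain the boot storm")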

Charles

Charles Killmer, VCP4 If you found this or other information useful, please consider awarding points for "Correct" or "Helpful".
jayctd
Hot Shot

Updating

You addressed the point I was going to make more accurately than I did.

##If you have found my post has answered your question or helpful please mark it as such##
onsik
Contributor

Here is some info about the MD3000i.

Personally I would say it is a good entry-level product.

Please also consider that SAS drives run at 10K or 15K RPM, while SATA drives run at 7.2K RPM.

jayctd
Hot Shot

I would like to mention that EqualLogic will re-certify used hardware.

We personally have not done it and are unsure of the price, but you can pick up a used PS300 for around 10,000 and get it certified with a full warranty (we have not re-certified, but we have older PS100s which we have continued to keep under warranty).

They are good about treating older hardware with just as much effort as brand new hardware

Jered Rassier

##If you have found my post has answered your question or helpful please mark it as such##

touimet
Enthusiast

Hello Chuck8773,

What iSCSI target are you using to get those performance numbers? What tool are you using to capture that data (IOMeter?) I ask because I've been working with an MD3000i (SATA drives in RAID 0) and a NexSAN SATABeast (SATA drives in RAID 0) for a while, trying to get performance numbers like what you are getting. I set it up in multiple configurations and always get sucky results (~20-40 MB/sec). Here are my test cases, which involve no other VMs running on the MD3000i or NexSAN:

1. ESX 3.5U4 connected to a Dell 6248 switch connected to MD3000i (both with & without jumbo frames)

2. ESX 3.5U4 crossover to MD3000i (both with & without jumbo frames)

3. ESX 4 connected to a Dell 6248 switch connected to MD3000i

4. ESX 3.5U3 crossover to NexSAN SATABeast

Every combination above results in very poor performance. In the case of the NexSAN I definitely know it's not the storage device, because I set up a Microsoft Windows 2008 box with the iSCSI software initiator and was getting sick speeds (~120-140 MB/sec)! It seems like I am missing some magical advanced iSCSI performance tweak on the ESX hosts. What could I be missing?

Desperately seeking speed,

Todd
