VMware Cloud Community
rettingr
Contributor

ESX and iSCSI

Hi,

We are interested in setting up a VMware ESX based infrastructure with an iSCSI SAN as the storage unit, but Dell recommended against using iSCSI for the moment because VMware does not support:

jumbo frames

TCP-offload

Dell SAN Management Software

and some other things.

Does anybody have experience with VMware ESX and an iSCSI SAN that they can share?

Our idea was to go with two Dell PowerEdge 2950 servers and an AX150 iSCSI SAN.

Do you have other ESX/iSCSI SAN installations from different vendors, such as HP or FSC, that you can recommend?

Thanks

Rolf

20 Replies
happyhammer
Hot Shot

Rolf

While the VMware software initiator does not support jumbo frames, the QLogic QLA4050 and QLA4052 (dual-port) iSCSI HBAs do. You can use these iSCSI HBAs to connect to your iSCSI storage; they also offload the TCP processing.

We have an EqualLogic PS100E iSCSI array connected to two Dell 2950 servers and a Cisco 3750G switch, and we have been impressed with the performance.

BUGCHK
Commander

I cannot recommend HP iSCSI storage for VMware ESX.

The EVA iSCSI option simply does not support it - they have only recently qualified Apple Mac OS X in addition to Windows and Linux.

The MSA1510i is a big disappointment to me. VMware ESX has only recently been qualified, and only in a single-controller configuration, because it is still an active/passive controller device - you even need a proprietary DSM for Microsoft's Windows MPIO, which is incompatible with any other solution.

I talked with our EqualLogic partner manager this morning and he told me that they have quite a number of partners in Germany.

lightfighter
Enthusiast

iSCSI works fine!

We have 16 servers connected to a NetApp.

christianZ
Champion

jumbo frames

Only with the QLA405x iSCSI HBAs and some switch models (not possible with all models) - but more important is flow control; you need iSCSI-certified switches here (e.g. HP 2848, Cisco 37xx or higher).

TCP-offload

Always when using an iSCSI HBA.

Dell SAN Management Software and some other things.

Dell blabbering.

Our idea was to go with two Dell PowerEdge 2950 servers and an AX150 iSCSI SAN.

It will work, but not especially fast.

We are a Dell shop too - we use EqualLogic now and are satisfied with it. The SATA models are faster than the AX150 but more expensive too. I like the easy administration of the EQL and that all licences are included.

femialpha
Enthusiast

Dell is just trying to lure you into spending more money. I have the AX150i with 3 hosts and 20 low-utilization guests and it works fine. I have tested with my EqualLogic arrays over iSCSI as well, and you will get much better performance from them. iSCSI is only going to get better, so if you don't have an FC infrastructure in place I would recommend iSCSI. Just note that the AX150 is not a high-I/O appliance.

Osm3um
Enthusiast

I am using a Dell 1800 and a Dell 2800 connected to an EMC AX150i. I have yet to have a single problem.

We have 65 users, and I am running a SharePoint 2003 server (300,000+ documents), Exchange 2003 (40 GB store), a file server, an AV server, and a print server.

I can't compare it to anything else, but I have heard that the Equallogic units are the best.

Bob

PS: The RAID card on my 1800 died the other day and wiped the local HDs holding my ESX installation. I simply restarted my VMs on the 2800 and have been running quite nicely since. A good example of a reason to use shared storage!

Paul_Lalonde
Commander

All good points!

Most technical people tend to be cautious around "iSCSI" because they think the speed of the Gigabit Ethernet link will somehow hinder their performance.

In the world of storage, performance is mostly determined by the characteristics of the drive array, not the transport medium between the host server and the storage head. Believe it or not, most people think that FC is faster than SCSI and that both are faster than iSCSI. Well, in terms of raw transport speed, yes: Ultra320 SCSI is 320MB/s, 2Gbps FC is 250MB/s, and iSCSI is 125MB/s.

But if you take the same drive array and interchange the transport (ie. change from SCSI to FC or to iSCSI) the overall performance stays mostly the same. Why?

Because disk performance isn't dependent solely on transport, it's dependent on the drive array configuration itself.

As a consultant, when I perform server performance analyses, my customers are always amazed at how relatively poor their server disk subsystem performance is. They think that, because they have three to five 15K Ultra320 SCSI disks on a high-end Smart Array (for example), they're going to have exceptional disk performance. When I show them the actual performance numbers, they're shocked. Why are the numbers so low?

Because the configuration of the drive array is suboptimal! In order to achieve near wire-speed performance of any disk array, you need 1) enough disk spindles to saturate the I/O bus, and 2) effective array caching mechanisms to reduce the effect of latency.

Long story short, if your disk subsystem is only capable of pushing a sustained rate of 100MB/s -- and most fail to realize that this is MORE than adequate 99% of the time -- why *wouldn't* you choose iSCSI?

Paul
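
To put rough numbers behind Paul's point about spindles and caching, here is a minimal back-of-the-envelope sketch; the per-spindle figures and the 8 KB random I/O size are illustrative assumptions, not measured values:

# Ballpark disk-subsystem throughput; per-spindle numbers are assumptions,
# roughly in line with mid-2000s 15K rpm SCSI/FC drives, not vendor specs.
PER_DISK_RANDOM_IOPS = 175        # assumed random IOPS per 15K spindle
PER_DISK_SEQUENTIAL_MBPS = 70     # assumed sustained sequential MB/s per spindle
RANDOM_IO_SIZE_KB = 8             # assumed average random I/O size

TRANSPORT_CEILING_MBPS = {        # nominal transport ceilings from the post above
    "iSCSI over 1 GbE": 125,
    "2 Gb/s Fibre Channel": 200,
    "Ultra320 SCSI": 320,
}

def array_estimate(spindles):
    """Print ballpark random and sequential throughput for `spindles` disks."""
    random_mbps = spindles * PER_DISK_RANDOM_IOPS * RANDOM_IO_SIZE_KB / 1024
    sequential_mbps = spindles * PER_DISK_SEQUENTIAL_MBPS
    print(f"{spindles:2d} spindles: random ~{random_mbps:5.1f} MB/s, "
          f"sequential ~{sequential_mbps:4d} MB/s")

for n in (3, 5, 14):
    array_estimate(n)
for transport, ceiling in TRANSPORT_CEILING_MBPS.items():
    print(f"transport ceiling - {transport}: {ceiling} MB/s")

For a random workload, a three-to-five disk array lands in the single-digit MB/s range, far below any of the transport ceilings, which is exactly why the transport choice rarely shows up in real-world numbers.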

acr
Champion

Totally agree, Paul, nicely said. IMHO, far too many people still believe iSCSI is not a choice for storage, or are at least overly concerned about its place in a tiered storage architecture.

iSCSI is most definitely a great choice.

BUGCHK
Commander

2Gbps FC is 250MB/s,

2-Gigabit FC transfers 2.125 gigabits per second on the wire. If you strip off the 8b/10b encoding and the header overhead, you are left with 200 megabytes per second.
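
For anyone who wants to reproduce that arithmetic, a minimal sketch using the standard FC line rates (only the 8b/10b step is computed; frame headers and other protocol overhead account for the rest of the gap down to the commonly quoted round figures):

# Strip the 8b/10b encoding (10 line bits per 8 data bits) from the
# Fibre Channel line rate to get the raw data rate in MB/s.
FC_LINE_RATE_GBAUD = {"1 Gb FC": 1.0625, "2 Gb FC": 2.125, "4 Gb FC": 4.25}

for name, gbaud in FC_LINE_RATE_GBAUD.items():
    data_gbps = gbaud * 8 / 10              # remove 8b/10b overhead
    data_mb_per_s = data_gbps * 1000 / 8    # gigabits/s -> megabytes/s
    print(f"{name}: {gbaud} Gbaud on the wire -> ~{data_mb_per_s:.1f} MB/s "
          f"before frame headers")

That puts a 2 Gb link at about 212 MB/s of raw data, and the header overhead BUGCHK mentions brings it down to the roughly 200 MB/s he quotes.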

lightfighter
Enthusiast

I have both FC and iSCSI storage in my farm, and the performance difference is something you will not notice.

I haven't noticed one at this point.

BUGCHK
Commander

I did not (intend to) claim that you will notice a difference, but I do claim that you cannot simply divide the number of transmission bits on the cable by 8 to get a meaningful value.

Paul_Lalonde
Commander

You are right; I was generalizing, but since I'm not an FC guy, I didn't know the true overhead.

The actual performance figures for 1 Gb, 2 Gb, and 4 Gb Fibre Channel are:

"full speed" - 100 MB/s (1,063 Mbps)

"double speed" - 200 MB/s (2,126 Mbps)

"quad speed" - 400 MB/s (4,252 Mbps)

I stand corrected! :)

Paul

letoatrads
Expert

Listen to the above posters... iSCSI is a perfect solution. I have both FC and iSCSI in production, and plenty of iSCSI in production with VMware.

Ease of use, no need for SAN/FC fabric experience, and in some iSCSI boxes, GREAT performance.

BUGCHK
Commander

That's OK, Paul.

I, too, learn something new every day and I really appreciate your contributions.

I don't know what the bit rate on a Gigabit Ethernet cable is (and I didn't invest much time trying to find out), but I have my doubts about 125 megabytes/sec for iSCSI, too. As far as I know, GbE took the 8b/10b encoding from FC.

Paul_Lalonde
Commander

Actually, Gigabit Ethernet's signalling rate is 1.25 Gb/s per the standard, and the effective data rate is 1 Gb/s after 8b/10b encoding.

As for 10 GigE, it's a 12.5 Gb/s signal rate with 10 Gb/s usable for data transmission.

But the best throughput I've personally attained on a GigE link is 950 Mbps.

Paul
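
To tie Paul's measured ~950 Mbps back to the encoding discussion, here is a small sketch of the standard-frame arithmetic; it assumes plain 1500-byte-MTU Ethernet frames and option-less TCP/IPv4 headers:

# After 8b/10b, 1 GbE carries 1000 Mb/s of data; per-frame framing and
# header overhead then takes roughly another 5% away from TCP payload.
GBE_DATA_RATE_MBPS = 1000          # data rate after 8b/10b (1.25 Gbaud on the wire)

MTU = 1500                         # bytes of IP payload per Ethernet frame
ETH_OVERHEAD = 8 + 14 + 4 + 12     # preamble+SFD, MAC header, FCS, inter-frame gap
IP_TCP_HEADERS = 20 + 20           # IPv4 + TCP headers without options

wire_bytes_per_frame = MTU + ETH_OVERHEAD
tcp_payload_per_frame = MTU - IP_TCP_HEADERS
efficiency = tcp_payload_per_frame / wire_bytes_per_frame

print(f"TCP payload efficiency with standard frames: {efficiency:.1%}")
print(f"Best-case TCP goodput on 1 GbE: ~{GBE_DATA_RATE_MBPS * efficiency:.0f} Mb/s "
      f"(~{GBE_DATA_RATE_MBPS * efficiency / 8:.0f} MB/s)")

That comes out at roughly 949 Mb/s, right around the figure Paul has seen, and it also backs BUGCHK's doubt about the round 125 MB/s number for iSCSI on a single GbE link (iSCSI's own PDU headers take a further small slice).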

jared1
Contributor

Paul -

I am new to the SAN/iSCSI discussion. I thought it would be a great tool if there were an end-to-end layout of the throughput at the various points along the path of data retrieval from disk - some kind of diagram with the speeds converted to a common measurement. Is this conceivable?

app - os - interface - hba - switch - array - disk, etc.

I'm sure I'm leaving pieces out, but does the concept make sense?

Thanks,

Jared
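
As a rough illustration of what Jared is describing, here is a minimal sketch that converts each hop's nominal ceiling to a common unit (MB/s) and flags the bottleneck; the stages and every number in it are hypothetical placeholders, not a real configuration:

# Hypothetical end-to-end path; each stage's ceiling is a placeholder in MB/s.
path = [
    ("guest application / OS buffer",  400),   # assumed
    ("virtual SCSI interface",         300),   # assumed
    ("iSCSI HBA (1 GbE)",              119),   # ~949 Mb/s TCP goodput / 8
    ("GbE switch port",                119),
    ("array front-end port (1 GbE)",   119),
    ("RAID group, 5 spindles, random",   7),   # ~5 * 175 IOPS * 8 KB
]

bottleneck = min(path, key=lambda hop: hop[1])
for stage, mbps in path:
    marker = "  <-- bottleneck" if (stage, mbps) == bottleneck else ""
    print(f"{stage:32s} ~{mbps:4d} MB/s{marker}")

With placeholder numbers like these, a random-I/O workload ends up limited by the disks rather than the GigE link, which echoes Paul's earlier point.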

xunillator
Contributor

Listen to the above posters... iSCSI is a perfect solution. I have both FC and iSCSI in production, and plenty of iSCSI in production with VMware.

Ease of use, no need for SAN/FC fabric experience, and in some iSCSI boxes, GREAT performance.

I'm curious about your setup.

You say you have FC and iSCSI in production. Does this mean you have ESX hosts running a combination of iSCSI and FC on the same box? Or does this mean you have ESX hosts that are iSCSI only and ESX hosts that run FC only?

The reason I ask is that I have been doing some testing in our lab with iSCSI targets (SLES 10, 25 MB/s). iSCSI performance has been very good so far. I would like to house our VMs on an iSCSI datastore and provide the VMs with FC RDM LUNs.

I read a white paper from EMC. They state that iSCSI and FC on the same ESX host is not supported. I think that's a crock. I can see where I might have issues implementing HA and VMotion if the ESX hosts can't see all the storage.

Comments?

bjsthans
Enthusiast

Do any of you iSCSI "people" have experience with using iSCSI from within a VM?

If that's the case, what is the performance like? I would assume that the VM would be better off processing application CPU cycles, not disk I/O cycles, when it's actually scheduled to run on the CPU.

Regards

terets
Contributor

The reason that you can't do iSCSI and FC on the same host with an EMC SAN is that the SPs cannot present both to one host. It's a crock, but only in the sense that EMC has not implemented this ability within Navisphere. You can only go iSCSI or FC with a host, not both. It's not a VMware or ESX specific case; it's just a limitation of the SAN.

NetApp, however, can do this.

As for a VM running iSCSI on an ESX server, why would you do this? If the datastore is already located on an iSCSI LUN, just present an RDM (assuming, of course, that you've implemented iSCSI HBAs and are not relying on the service console to handle iSCSI traffic to the SAN).
