VMware Cloud Community
chriswright128
Contributor

How do you connect an iSCSI SAN to a server?

Hi Guys,

I know this isn't strictly just a VMware question, so sorry if this is the wrong place to post it, but I'm in need of answers fast and I'm guessing you guys will know a fair bit about virtualisation technologies in general :)

Basically, we are looking at creating a new virtual infrastructure and we want to use 2 Dell PowerEdge 1950 III servers to host several VMs on each of them. Right now I am trying to give our accountants an exact price of how much this will cost, so I need to know exactly what we will need to make this work. So far I have the 2 servers with 2 processors and a lot of RAM etc., but we also want to use a SAN, and the cheapest option I can see is an iSCSI SAN such as the Dell MD3000i.

What I need to know is how you actually connect these to the servers. Does it require some kind of additional controller / interface to be purchased for both of our servers, or does it use Ethernet, or what? The Dell MD3000i comes with the option to include cables that connect it to a SAS HBA, but that doesn't make much sense to me if it's an iSCSI device (I don't know a lot about iSCSI, but I didn't think it had anything to do with SAS).

Can someone enlighten me on exactly what we need to make this solution work? (or if you have any better/cheaper suggestions for SAN-like storage for our 2 physical servers to connect to I am all ears!)

Thanks

Chris

26 Replies
AndreTheGiant
Immortal

A Dell MD3000i (or another similar solution) can be very simple.

You only need an Ethernet switch (the best solution is 2 dedicated switches, but it can also work with just one).

On the ESX side you only need some spare NICs.

Have a look at:

http://www.dell.com/downloads/global/solutions/md3000i_esx_deploy_guide.pdf

Andre

**if you found this or any other answer useful please consider allocating points for helpful or correct answers

Andrew | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro
JeffDrury
Hot Shot

The short story is that you can use a traditional Ethernet NIC to connect to your iSCSI storage. There are iSCSI HBAs that can connect to the SAN at a lower level and allow you to do things like booting ESX from the SAN, but for simplicity's sake I would stick with a regular NIC and the VMware iSCSI software adapter. One thing that I would recommend is adding a dual or quad port network card to your servers so that you can segregate iSCSI traffic from your production network. You may also want to look at purchasing an additional switch to dedicate to your iSCSI traffic.
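For example, on an ESX 3.5 host the usual approach is to create a VMkernel port for iSCSI and then enable the software initiator from the service console. A rough sketch only - the vSwitch name, vmnic number, IP addresses and the array's discovery address below are placeholders for your own environment, and the software adapter may show up under a different vmhba number:

# Create a vSwitch with an uplink and a VMkernel port for iSCSI
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -A "iSCSI" vSwitch1
esxcfg-vmknic -a -i 192.168.10.11 -n 255.255.255.0 "iSCSI"

# Enable the software iSCSI initiator and point it at the array's discovery address
esxcfg-swiscsi -e
vmkiscsi-tool -D -a 192.168.10.100 vmhba32
esxcfg-rescan vmhba32

(On 3.5 the service console also needs to be able to reach the iSCSI network, and the swISCSIClient firewall port has to be open.)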

Designing a virtual infrastructure can be a large task and I would recommend building two or three architectures at different price/functionality points. Remember you get what you pay for, especially when virtualizing. Cheap storage, cheap servers and cheap networks often result in poor performance. I would suggest contacting your local VMware vendor to assist you with your architecture to ensure that your purchase will be able to do what you want it to do.

chriswright128
Contributor

Thanks for the replies guys.

We are trying not to go for just the cheapest of everything - I've hopefully convinced them that just using a single physical server to run all of our VMs is asking for trouble (i.e. no SAN or other server to fail over to or anything). I just know the cost of the SAN is tempting them to go with the single-server option, so I'm trying to keep the costs for that as low as possible.

But back to the question - you pretty much answered it completely, so thanks :) Just to clarify: basically I should make sure we have a normal Ethernet port plus a dual-port Ethernet card on each of our servers (to be used for the SAN connection), and that's pretty much all we need on the hardware side of things, yeah?

Thanks again

Chris

JeffDrury
Hot Shot

The server should have 4 to 6 Ethernet ports, and more if possible. Ideally you would have 2 NICs on your prod network, 2 for iSCSI and 2 for vMotion. You can condense that to 4 NICs total, with 2 on prod and 2 for iSCSI/vMotion.
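Roughly, the six ports might be split up per host like this (the vSwitch and vmnic numbers are only examples):

# vSwitch0: production / management traffic on two uplinks
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic1 vSwitch0

# vSwitch1: iSCSI VMkernel traffic on two uplinks
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1

# vSwitch2: vMotion on the remaining two uplinks
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic4 vSwitch2
esxcfg-vswitch -L vmnic5 vSwitch2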

jayctd
Hot Shot

You have a few options here for connecting your virtual machines to iSCSI volumes:

1) Use the ESX software iSCSI initiator, make the LUN a VMFS volume and create VMDKs for your servers

2) Use HBAs to do the same as option 1 (more expensive)

3) Use option 1 or 2 for your system volumes (C: drives) and present your SAN network directly to the virtual machine, using the Microsoft iSCSI initiator (or another software iSCSI initiator) inside the guest

Option 3 is the most common for large SAN-connected virtual machines on 3.5, as it allows you to utilize MPIO and get more than 1 Gbps of SAN connectivity on those volumes.

Option 1 is probably the least amount of work and configuration, but you don't get the advantages of MPIO in 3.5.

Things change in vSphere 4, where the software iSCSI initiator supports MPIO and option 1 gives you just as much performance as option 3.
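For reference, in vSphere 4 the multipathing for the software initiator is set up by binding VMkernel ports to the software iSCSI adapter from the CLI, along these lines (the vmk and vmhba numbers are just examples; check yours with esxcfg-vmknic -l and esxcfg-scsidevs -a):

# Bind two VMkernel ports (each with its own uplink) to the software iSCSI adapter
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33

# Verify the bindings
esxcli swiscsi nic list -d vmhba33

After that you would normally set an appropriate path selection policy on the LUNs so that both paths actually get used (check what your array supports).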

Hope that helps. I can explain in more detail if needed.

##If you have found my post has answered your question or helpful please mark it as such##

DSTAVERT
Immortal

I would also look at NFS storage. For equal hardware the speed difference is minimal. Configuration and management can be much simpler, and depending on where your budget ends up it could be much less expensive.
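And if you do go NFS, mounting an export as a datastore is a one-liner from the service console (the filer name and export path below are made up; you still need a VMkernel port on the storage network, same as with iSCSI):

# Mount an NFS export as a datastore named nfs-datastore1
esxcfg-nas -a -o filer01.example.local -s /vol/vmware nfs-datastore1

# List the mounts to confirm
esxcfg-nas -l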

-- David -- VMware Communities Moderator
Josh26
Virtuoso

Does your current switch do VLANs?

If it does, you can avoid buying any additional switches and still keep the environments separate.
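On the ESX side, tagging the iSCSI port group is a one-liner (the VLAN ID, port group and vSwitch names are just examples):

# Put the iSCSI port group into VLAN 20
esxcfg-vswitch -v 20 -p "iSCSI" vSwitch1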

jbenton8939
Contributor

You really want to separate the iSCSI and LAN traffic. You also want to get a switch that is good at iSCSI, like the Dell PowerConnect 5424. You should also enable jumbo Ethernet frames on both the switch and the NICs. Then your SAN will fly.

The best configuration is two iSCSI switches, a 6-NIC SAN and 4-port Ethernet adapters for your host servers. That way you can have 2 ports dedicated to storage traffic and 2 ports dedicated to LAN traffic. You get speed and redundancy.

Run 4 cables from one switch to the other for better throughput.
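On the ESX side, jumbo frames have to be enabled on both the vSwitch and the VMkernel port, roughly like this (names and IPs are examples; note that jumbo frame support for the software iSCSI initiator on 3.5 was limited, so check the docs for your version, and the switch and array must be set to the same MTU):

# Set MTU 9000 on the iSCSI vSwitch
esxcfg-vswitch -m 9000 vSwitch1

# Recreate the VMkernel port with MTU 9000
esxcfg-vmknic -d "iSCSI"
esxcfg-vmknic -a -i 192.168.10.11 -n 255.255.255.0 -m 9000 "iSCSI"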

chriswright128
Contributor

Thanks, I've put the quote we got from Dell to our accounts department and will see if we get the go-ahead :)

I quoted for:

2 x PowerEdge 1950 III servers, each with the one built-in NIC and one dual-port NIC (we only need one NIC for LAN traffic as this isn't a huge network)

PowerConnect 5424

MD3000i

I know it's far from the best virtualisation setup, but it's going to be hard enough to get them to go for this; anything more expensive would just get laughed at!

Thanks again

Chris

JohnADCO
Expert

Actually, this is an excellent virtualization platform.

We run 2950s with MD3000i's, with up to 24 Windows Server VMs per host without issue. The speed is actually impressive.

With the MD3000i, at a minimum I really suggest two cheapo unmanaged switches devoted to the iSCSI network only, with no LAN-side connectivity to it. Only the hosts' paths to the SAN(s).

We do the exact same thing: the onboard NIC is a dual and we added a dual, and we only use three NICs, two for iSCSI and one for LAN and management. The network interface to the vSwitch, as it's called, is very fast. It handles 24 Windows Server VMs well, including FoIP, heavy-duty SQL and some heavy-duty Exchange too.

jayctd
Hot Shot

While it is not a bad setup, I would caution against "cheapo unmanaged" for the SAN network. If you can't afford managed switches that do jumbo frames and flow control, I would recommend putting the SAN traffic into a VLAN on the front-side switches (assuming the front side does jumbo frames and flow control).

The best solution would be to get SAN switches that have those features and keep that traffic separate, though.

##If you have found my post has answered your question or helpful please mark it as such##

JohnADCO
Expert

I did add "at a minimum" to that post of mine above. :)

But I must say the two 16-port D-Link DGS-1016D switches we use have been excellent throughput-wise. I mean, our iSCSI seems as good as anybody else's iSCSI.

So I can say that switches like the above, which only cost $100-ish, work decently well.

vxxxbazaaz
Contributor

Jumbo frames, yes, but flow control is designed to control transmission on links between two switches, one of which is not wire-speed. I would have thought most production-grade servers should be able to handle GigE throughput?

jayctd
Hot Shot

Not necessarily. Flow control is used between any two nodes (not necessarily switches) and comes as a hard recommendation from both of the major iSCSI vendors I have worked with (EqualLogic and NetApp).

To be honest, early in our EqualLogic deployment their techs made it a point to say that flow control is more important than jumbo frames in controlling traffic back to the ESX hosts. To the point that on one of the smaller deployments, where we only had an HP switch (a model notorious for supporting flow control or jumbo frames but not both at the same time), they had us disable jumbo frames in favor of flow control.

http://www.cns-service.com/equallogic/pdfs/tr-cisco-3750-2970-switches.pdf

"It is recommended that you configure Flow Control on each switch port that handles iSCSI

traffic. PS Series storage arrays will correctly respond to Flow Control if enabled on a

switch. If your server is using a software iSCSI initiator and NIC combination to handle

iSCSI traffic, you must also enable Flow Control on the NICs to obtain the performance

benefit."

That one is based on EqualLogic, but every other storage vendor I have worked with has made it a must.
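On classic ESX with a service console, flow control on the NICs used by the software initiator can usually be checked and set with ethtool, something like the below (the vmnic number is an example, not every driver honours it, and the setting may not persist across reboots):

# Check the current pause / flow control settings
ethtool -a vmnic2

# Enable receive and transmit flow control
ethtool -A vmnic2 rx on tx on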

##If you have found my post has answered your question or helpful please mark it as such##

JohnADCO
Expert

I say both are more fluff than anything these days.

Even though it is said to support flow control, I am pretty sure the MD3000i doesn't even respond to flow control, looking at the traffic in testing. Jumbo frames? I have tested with and without, and made sure it passed all the way through to the VMs; it made no difference and actually seemed to hurt some of the smallest random I/O a little.

So my personal recommendation stands: at a minimum, get two cheapo gigabit switches and devote them to the iSCSI. Use one switch for one subnet to both controllers, and the other switch for the other subnet to the other two ports on both controllers.

I can only say we are not using either, and it's fine. Our MD3000i's are pretty darn slammed with heavy traffic too.

WSBETA
Contributor

I spoke to a NetApp tech who did a lot of direct testing and said the same: jumbo frames did very little and hurt in some cases.

vxxxbazaaz
Contributor

Very interesting re flow control. All it does is allow the receiving device to ask the sending device to pause because its receive buffers are full, avoiding packet loss that would otherwise have to be handled at upper layers of the stack.

With the software iSCSI stuff, the clear implication is that it is not wire-speed on most hardware, even with a single GigE link. That in itself is a clear argument for a dedicated HBA in my opinion.

Re cheap switches - I would test very carefully with these, particularly if using software iSCSI (and hence flow control). Certain switches (Linksys SRW, for example) do not maintain flow control settings when rebooted, even though the interface indicates otherwise....

Josh26
Virtuoso

What you're missing is that "cheapo" switches, despite being gigabit, usually have a backplane nowhere near fast enough to allow the switch to actually push gigabit speeds on more than a few ports simultaneously. This is not the environment you want to be using iSCSI in.

I suspect a "cheapo" switch implementation of jumbo frames was the main reason it was felt there was no performance improvement. Common sense says that when you remove a lot of per-packet header overhead, the ratio of useful traffic to waste improves, so performance HAS to go up, assuming the switch can actually deliver that performance.

jbenton8939
Contributor

Exactly, this can bring apps that need IOPS to their knees.

Joshua Benton
