VMware Cloud Community
jredwine2857
Enthusiast

How do you have your EqualLogic SAN set up?

We are getting ready to implement our EqualLogic PS6000 series SAN. Just wondering from any users out there what tips, if any, you might have. Did you use NIC bonding to increase bandwidth to the SAN? Any pitfalls with using it with vSphere?


Accepted Solutions
AndreTheGiant
Immortal

Tips and cabling topology are in the EqualLogic documentation.

There is also some switch tuning to do: flow control, RSTP, storm control, and so on.

Do not use NIC teaming (for example EtherChannel), because EqualLogic cannot work optimally that way.

Instead, use multiple VMkernel interfaces for iSCSI (now possible with vSphere), each bound to a different NIC.

See http://www.vmware.com/pdf/vsphere4/r40/vsp_40_iscsi_san_cfg.pdf
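As a rough sketch of that setup on an ESX 4.0 host from the service console (the vSwitch, port group, vmk, vmnic, and vmhba names below are placeholders; check yours with `esxcfg-vmknic -l` and `esxcli swiscsi nic list`):

```shell
# Add two iSCSI port groups to an existing vSwitch (names are examples)
esxcfg-vswitch -A iSCSI1 vSwitch1
esxcfg-vswitch -A iSCSI2 vSwitch1

# Create one VMkernel interface per port group, on the SAN subnet
esxcfg-vmknic -a -i 10.0.0.11 -n 255.255.255.0 iSCSI1
esxcfg-vmknic -a -i 10.0.0.12 -n 255.255.255.0 iSCSI2

# In the vSphere Client, override NIC teaming on each port group so that
# iSCSI1 has only vmnic2 active and iSCSI2 has only vmnic3 active.

# Bind both VMkernel interfaces to the software iSCSI adapter
# (vmhba33 is typical for software iSCSI, but verify on your host)
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33

# Verify the bindings
esxcli swiscsi nic list -d vmhba33
```

After a rescan of the software iSCSI adapter, you should see one path per bound VMkernel interface to each EqualLogic volume.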

Andre

Andrew | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro

2 Replies
sketchy00
Hot Shot

Make sure you have SAN switches that are blessed by EqualLogic, and double-check your configuration of the switches. EqualLogic puts out a nice step-by-step instruction set for configuring Dell PowerConnect iSCSI-optimized switches; follow it and you will be fine. Throughput increases are achieved using MPIO. If you are using ESX 3.5, then in order to use any multipathing you will need to connect LUNs using the guest iSCSI initiator, which requires their HIT Kit installed on the VM. So, in the case of, say, Exchange or SQL servers, you'd have the primary OS partition residing on its normal VMFS volume, but then you'd use the guest VM iSCSI initiator to connect the high-I/O partitions (db and transaction log partitions) and set up multipathing there. It's pretty easy stuff, and it is "application aware", but if you need more specifics, let me know.
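For the guest-initiator side, the basic connection on a Windows VM can be sketched with the built-in `iscsicli` tool; the group IP and target IQN below are placeholders, and the HIT Kit then layers its MPIO support on top:

```shell
:: Point the Microsoft initiator at the EqualLogic group address (placeholder IP)
iscsicli QAddTargetPortal 10.0.0.50

:: Discover the volumes presented to this host
iscsicli ListTargets

:: Log in to the volume for the db/log partition (placeholder IQN)
iscsicli QLoginTarget iqn.2001-05.com.equallogic:example-db-volume
```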

vSphere of course will allow MPIO at the hypervisor layer, but you might want to familiarize yourself with the tradeoffs of having LUNs connected at the guest/VM level versus the hypervisor level. If you are using vSphere, make sure you bump up to U1. You might run into this scenario (http://virtualgeek.typepad.com/virtual_geek/atom.xml) but that's about it.
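At the hypervisor layer on vSphere 4, switching a volume's path selection policy to round robin looks roughly like this (the naa device ID is a placeholder; get the real IDs from the list command first):

```shell
# List devices and their current path selection policy
esxcli nmp device list

# Set round robin on an EqualLogic volume (device ID is a placeholder)
esxcli nmp device setpolicy -d naa.6090a038xxxxxxxxxxxxxxxxxxxxxxxx --psp VMW_PSP_RR
```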

My other advice (I have a PS5000): as nice as thin provisioning is, don't ever use it on customer/employee-facing drives (file shares, etc.). The space will get sucked up right away. Save its use for scenarios like a transaction log drive or a db drive. Or, if you have, say, a 600GB VMFS LUN for VMs to reside on, you could thin provision it so that it initially occupies only 300GB or so.