VMware Cloud Community
arthurvino1
Contributor

Setting up a new VI 3 with Lefthand or Equallogic & Dell M600 blades questions.

Still have a lot of questions that need to be answered, thought maybe someone here already has a similar setup and could provide guidance.

Purchased a Dell M1000e chassis & M600 blades.

Need a SAN or two; a DR SAN with SRM is also in the plans.

The SAN will need to host about 40-55 VMs, including Exchange 2003 (clustered), SQL 2000 and SQL 2005 (both clustered), a bunch of application servers (DB, etc.), and other servers. Citrix is currently on physical servers, but we're thinking of migrating it to these new blades/SAN (about 35 Citrix servers, 500-700 users).

Currently have pass-through Ethernet modules in the M1000e chassis, and need to design a network to connect to the iSCSI SAN, etc.

Q1) Should I purchase internal switches or external switches?

If an external switch is the answer, what switch works best with iSCSI SANs like LeftHand or EqualLogic? Is it the Cisco 3750E? 3750G? What's the difference between those two models?

If going with EqualLogic, the plan is to get one PS5000E unit (SATA) and one PS5000X (10k SAS) or PS5000XV (15k SAS) unit: the high-capacity SATA unit for DNS and other low-I/O servers, and the SAS unit for SQL/Exchange/etc.

Q2) Will that work, or should I be getting the same SAN types for scaling?

Both SANs should be asynchronously copying data to a SAN at the DR site (PS5000E).

Q3) Will that design work?

The LeftHand design would be somewhat similar:

2 mirrored SAS units and 1 SATA unit at the primary site (NSM 2120 modules), with all 3 snapshotting to a SATA unit at the DR site.

Does anyone have a similar design?

Finally, VI 3 calls for 8 NICs for a proper design. My blades only have 6.

Q4) What's the best way to break the networks down? I plan to use:

NICs 1&2 - iSCSI SAN

NICs 3&4 - Production server Network

NICs 5&6 - vMotion, service console, and whatever else I missed?
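For reference, that three-pair layout could be wired up on an ESX 3.x host roughly as below. This is only a sketch: the vSwitch/port-group names, vmnic ordering, and IP addresses are all assumptions, and note that the ESX 3 software iSCSI initiator also needs a service console port on the storage network (as one of the replies points out).

```shell
# NICs 1&2 - iSCSI (VMkernel port; names and IPs are placeholders)
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic0 vSwitch1
esxcfg-vswitch -L vmnic1 vSwitch1
esxcfg-vswitch -A "iSCSI" vSwitch1
esxcfg-vmknic -a -i 10.0.10.11 -n 255.255.255.0 "iSCSI"
# Software iSCSI in ESX 3 also needs a service console port here:
esxcfg-vswitch -A "SC-iSCSI" vSwitch1
esxcfg-vswif -a vswif1 -p "SC-iSCSI" -i 10.0.10.12 -n 255.255.255.0

# NICs 3&4 - production VM network
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic2 vSwitch2
esxcfg-vswitch -L vmnic3 vSwitch2
esxcfg-vswitch -A "Production" vSwitch2

# NICs 5&6 - service console + VMotion sharing a vSwitch
esxcfg-vswitch -a vSwitch3
esxcfg-vswitch -L vmnic4 vSwitch3
esxcfg-vswitch -L vmnic5 vSwitch3
esxcfg-vswitch -A "VMotion" vSwitch3
esxcfg-vmknic -a -i 10.0.30.11 -n 255.255.255.0 "VMotion"
```

With only 6 NICs, pairing the service console with VMotion (and setting active/standby NIC order per port group) is a common compromise.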

Any help is greatly appreciated.

8 Replies
mcowger
Immortal

1) We use external switches so we aren't forced to use the Dell or Cisco blade switches (neither of which we like); we feel we have more options. Personally, I wouldn't use any form of 3750 switch - 100 Mbit is outdated, and for an iSCSI SAN you NEED gigabit.

2) Your choices seem backwards - I've done large DNS implementations and never once seen one that was particularly high I/O. I HAVE seen high-I/O SQL and Exchange, though...

3) Sounds like you might not have enough disks to support 55 VMs...

4) That seems fine to me... I personally make do with 4 interfaces, but we use FC for our storage, not iSCSI.

--Matt

--Matt VCDX #52 blog.cowger.us
christianZ
Champion

I would say the 2 EqualLogic boxes (one with 15k disks) should be fine for 60 VMs - but as mentioned above, the Exchange and SQL servers belong on the 15k disks (definitely).

I'm not sure that putting the Citrix servers on ESX (with 700 users) would be the best choice.

The network design seems fine to me (you forgot the service console port for iSCSI). I would prefer an external Gbit switch for iSCSI (e.g. from HP).

Just my thoughts.

matuscak
Enthusiast

Umm, there are a number of 3750 switch models that are gigabit. The 3750E models are newer and come with 10-gig uplinks.

For what it's worth, when I took the EqualLogic training class (just prior to the Dell acquisition), the instructor told the class that while they were supposed to be neutral on switches, all the engineering and support people viewed the 3750G as the "gold standard" for iSCSI SANs. Interestingly, he also said that the one switch vendor they thought was junk was... Dell. I don't know if that's changed or not, but they probably won't admit to it anymore :)

Rumple
Virtuoso

There is a reason the standard for networking gear really is Cisco: at the switch/router level they are hard to beat for performance and reliability.

The 3750s in a stack are amazing for ESX. The one downfall I would say they have is that jumbo frames need to be enabled at the switch level rather than the port level.

Dell switches are basically bastardized Cisco IOS switches (they typically come from the same plant, but with a custom IOS). Overall a lot of the same, but I always think of it as IOS 5.x vs. Cisco 12.x.
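To illustrate the switch-level limitation mentioned above: on the Catalyst 3750 family, jumbo MTU is a single global setting rather than a per-port one, and it only takes effect after a reload. A sketch (9000 bytes is an assumed value; check what your SAN vendor recommends):

```
! Catalyst 3750: jumbo MTU applies to all gigabit ports at once
Switch# configure terminal
Switch(config)# system mtu jumbo 9000
Switch(config)# exit
! The new MTU only takes effect after a reload
Switch# reload
! Verify after the reload:
Switch# show system mtu
```

This is why you can't mix jumbo and standard MTU on different ports of the same 3750 the way you can on some other switches.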

mcowger
Immortal

Interesting you say that about the Cisco switches - we moved away from them because we weren't getting the performance & reliability we needed.

--Matt

--Matt VCDX #52 blog.cowger.us
Rumple
Virtuoso

I am in a shop right now running Cisco worldwide (including putting gear in the worst field hellholes you've ever seen) and have never had any issues. Any performance issues were usually related to duplex mismatching (like server guys leaving servers on auto at 100 Mbit), or Cisco guys bound and determined that they MUST hardcode the Gbit interfaces (or the "senior" network guy not knowing the difference between LACP and EtherChannel).

Backplane-wise, it's hard to beat a device that can support every port in a 48-port switch being maxed out without suffering performance issues.

chrisfmss
Enthusiast

We have 2 PS5000Es. EqualLogic recommended the Cisco 3750-E or 3120 for the Dell M1000e. For jumbo frames, you activate this at the switch level; if the switch receives standard frames, it will adjust to them. It is the same with the PS series: the port is configured with jumbo frames and you cannot change this setting.

doubleH
Expert

Well, I'll add in my config as well... I have 3 x PS100Es running on HP ProCurve switches and couldn't be happier. I went with HP over Cisco because Cisco was out to lunch with their pricing, and there was a yearly maintenance cost on top of it. With HP you pay for the switches and no yearly maintenance; all firmware upgrades and support are included in the price.

If you found this or any other post helpful please consider the use of the Helpful/Correct buttons to award points
