arthurvino1
Contributor

Help needed with Dell M600/M1000 chassis configuration.

Hi,

Purchasing a new Dell M1000e enclosure with 8 M600 blades.

We currently have a Fibre Channel SAN and are getting an iSCSI SAN (EqualLogic/LeftHand/Falcon) soon. VMware will need to see both.

VMware ESX 3.5 Enterprise for all servers.

VMware requires multiple network ports. The current configuration has 2 redundant Ethernet switches and 2 FC switches, with a dual-port NIC on each blade. Is that enough for a VMware setup with HA/DRS/VMotion, or do I need 6 NICs per blade and a second set of redundant switches for the enclosure?

Thanks

kjb007
Immortal

This will depend on how many VMs, and what kind, you are running. I have a blade environment where I use 2 teamed NICs for all of my network traffic and logically separate the traffic with 802.1Q VLAN trunks. On my regular (non-blade) servers I use 6 pNICs and separate the traffic types physically in pairs: service console/VMotion, VM traffic, and storage. I use trunks there as well, so I can move NICs around if I need to allocate more resources, but I work in pairs for the most part and have never needed more than a pair per traffic type.
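As a rough sketch of that trunked setup from the ESX 3.5 service console (the VLAN IDs and port group names here are examples only, and the matching Cisco ports have to be configured as 802.1Q trunks carrying those VLANs):

  # vSwitch0 already exists after install; team the second uplink onto it
  esxcfg-vswitch -L vmnic1 vSwitch0

  # Tag each traffic type onto its own port group (example VLAN IDs)
  esxcfg-vswitch -v 10 -p "Service Console" vSwitch0
  esxcfg-vswitch -A "VMotion" vSwitch0
  esxcfg-vswitch -v 20 -p "VMotion" vSwitch0
  esxcfg-vswitch -A "VM Network" vSwitch0
  esxcfg-vswitch -v 30 -p "VM Network" vSwitch0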

-KjB

vExpert/VCP/VCAP vmwise.com / @vmwise -KjB
balacs
Hot Shot

or do I need 6 NICs per blade and a second set of redundant switches for the enclosure?

You are probably better off getting an additional dual-port Ethernet controller for each blade, along with a second set of redundant switches. You will need the following (a sketch follows the list):

2 NIC ports for the service console and VMware HA; VMotion and VM traffic can share these

2 NIC ports for iSCSI

2 FC ports for FC SAN
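A minimal sketch of that layout from the ESX 3.5 service console, assuming the 4 Ethernet ports come up as vmnic0-vmnic3 (the IPs and port group names are placeholders; the FC ports appear as HBAs rather than NICs, so they need no vSwitch configuration):

  # vSwitch0: service console + VMotion on the first redundant pair
  esxcfg-vswitch -L vmnic1 vSwitch0        # vmnic0 is linked at install time
  esxcfg-vswitch -A "VMotion" vSwitch0
  esxcfg-vmknic -a -i 192.168.10.11 -n 255.255.255.0 "VMotion"
  # (enable VMotion on that port group in the VI Client)

  # vSwitch1: iSCSI on the second redundant pair
  esxcfg-vswitch -a vSwitch1
  esxcfg-vswitch -L vmnic2 vSwitch1
  esxcfg-vswitch -L vmnic3 vSwitch1
  esxcfg-vswitch -A "iSCSI" vSwitch1
  esxcfg-vmknic -a -i 192.168.20.11 -n 255.255.255.0 "iSCSI"

One ESX 3.5 wrinkle worth remembering: the software iSCSI initiator also needs the service console to reach the iSCSI network, so plan a route (or a second vswif) accordingly.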

Bala

Dell Inc

arthurvino1
Contributor

And how about production traffic?

I think VMware recommends 8 NICs total for an iSCSI SAN configuration:

2 iSCSI NICs

2 Production Network NICs

2 Service Console NICs

2 HA/VMotion/DRS NICs

It looks like the Dell M600 has 2 NICs on the motherboard and 2 mezzanine slots, which can be filled with either two 2-port NICs or one 2-port NIC plus a 2-port FC card, so either 6 NICs total, or 4 NICs plus 2 FC ports.

In my case I have both an FC SAN and an iSCSI SAN, so I would have 4 NICs for iSCSI/production/service console/HA (half of the recommended 8) and 2 ports for FC.

It looks like that's not enough Ethernet ports. I'm thinking about eliminating the FC SAN altogether if I have to.

Is a second set of switches necessary?

Any suggestions appreciated.

kjb007
Immortal

If you're using blades, my experience is that you will exhaust the CPU and memory on them before you saturate 4 GbE interfaces. With only 4 interfaces to work with, you'll have to make some concessions. The service console and HA/VMotion/DRS can share the same pair of NICs; unless you run a huge number of VMs you will not flood that network, since those interfaces carry regular management traffic plus bursts when a VMotion occurs. Separating them is recommended, not required, and they will do fine together. I would put the service console on one pNIC and VMotion on the other, with each configured as standby for the other.

That leaves production and iSCSI, which will generate the bulk of the blade's traffic. I would do the same here as with the other pair: dedicate one pNIC to each traffic type and use the other as its standby. If you do run into traffic problems, move the service console and VMotion onto a single pNIC and make the freed-up standby pNIC active for one of these.
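A sketch of that 4-port compromise (vmnic numbers and port group names are placeholders; esxcfg-vswitch only handles uplinks and port groups, so the per-port-group active/standby failover order is set in the VI Client under each port group's NIC teaming policy):

  # vSwitch0: service console active on vmnic0, VMotion active on vmnic1,
  # each port group using the other pNIC as its standby (set in VI Client)
  esxcfg-vswitch -L vmnic1 vSwitch0
  esxcfg-vswitch -A "VMotion" vSwitch0

  # vSwitch1: VM traffic active on vmnic2, iSCSI active on vmnic3,
  # again crossed over as standby for each other
  esxcfg-vswitch -a vSwitch1
  esxcfg-vswitch -L vmnic2 vSwitch1
  esxcfg-vswitch -L vmnic3 vSwitch1
  esxcfg-vswitch -A "VM Network" vSwitch1
  esxcfg-vswitch -A "iSCSI" vSwitch1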

It's not the prettiest situation. I find 6 to be a good number for spreading my traffic as evenly as possible; 8 would be ideal, but very few blades can drive that level of I/O.

-KjB

vExpert/VCP/VCAP vmwise.com / @vmwise -KjB
arthurvino1
Contributor

I've got the unit in place and I'm lost.

We have a Dell M1000e enclosure with 4 M600 blades inside.

There are 4 Cisco 3130 blade switches and 2 pass-through modules.

Each blade has 6 NICs.

LeftHand iSCSI SAN.

I'm trying to set up the VMware environment, but no matter what I do I can't seem to make the blade ping the VMkernel IP.

Here is our configuration:

A1 - pass-through module
B1 - Cisco 3130 blade switch
C1 - Cisco 3130 blade switch
A2 - pass-through module
B2 - Cisco 3130 blade switch
C2 - Cisco 3130 blade switch

Each Cisco switch has 4 external ports and 16 internal ports.

Does anyone have a similar setup who can give a few pointers?

During the VMware install, the NICs are listed as 4, a, 6, c, e, and 10; once installed, VMware shows them as NICs 0 through 5 (vmnic0-vmnic5).
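For reference, these are roughly the checks I've been running from the service console (the vmkping target is a placeholder for our LeftHand/VMkernel address):

  esxcfg-nics -l          # physical NICs and link state
  esxcfg-vswitch -l       # vSwitches, port groups, and uplinks
  esxcfg-vmknic -l        # VMkernel NICs and their IPs
  vmkping 192.168.20.50   # ping out the VMkernel interface (placeholder IP)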

Thanks
