VMware Cloud Community
FrostyatCBM
Enthusiast

Network design feedback (3 x ESXi host SMB)

Just looking for a little bit of feedback. I'm about to start designing the detail of a new 3 x ESXi 4.1 environment. We've purchased some Dell R710s with 48GB RAM and 4 x 1Gb/sec network ports each. Storage is a Dell MD3200 with an MD1220 JBOD behind it, shared DAS (not iSCSI), so I don't need to provide any iSCSI networking, just LAN, DMZ and management/vMotion.

Initially I'm thinking that I would create distributed switches to ensure a centralised design. I would allocate 1 NIC to vMotion/management traffic, 2 NICs to LAN traffic (teamed), and 1 NIC to DMZ traffic ... and would run each of these (vMotion, LAN and DMZ) through its own physical switch.
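For what it's worth, on a single host that layout could be sketched with the classic esxcfg-* commands. This is only a hedged sketch: the vSwitch names, vmnic numbers, portgroup labels and the vmkernel IP are all placeholders, and with a distributed switch you'd drive the equivalent config from vCenter rather than per host.

```shell
# Sketch only -- all names/addresses are placeholders; on a dvSwitch
# this would be configured centrally from vCenter instead.

# vSwitch0: management + vMotion on vmnic0
esxcfg-vswitch -a vSwitch0
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -A "Management Network" vSwitch0
esxcfg-vswitch -A "vMotion" vSwitch0
esxcfg-vmknic -a -i 10.0.1.11 -n 255.255.255.0 "vMotion"

# vSwitch1: LAN with a 2-NIC team (vmnic1 + vmnic2)
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic1 vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -A "LAN" vSwitch1

# vSwitch2: DMZ on its own NIC and its own physical switch
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic3 vSwitch2
esxcfg-vswitch -A "DMZ" vSwitch2
```

With each traffic type on its own vSwitch like this, the physical separation you describe falls out naturally: each vSwitch only ever uses the uplink(s) cabled to its own unmanaged switch.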

I know there are probably a dozen different ways of setting this up. I prefer to keep my LAN, DMZ and other traffic separated on different physical networks, and I was hoping to use some fairly inexpensive unmanaged Cisco SG100-24 gigabit Ethernet switches to avoid VLAN complexities and routing issues ... unless there's a very good reason to go the VLAN route instead.

What are the drawbacks of this (simple) design? Should I be getting some extra NICs in the servers?

6 Replies
AndreTheGiant
Immortal

A VLAN solution is quite good, and in most cases works for DMZ networks too.

Use a custom teaming policy for each portgroup to dedicate a different NIC to vMotion, Management, LAN and DMZ.
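If all four uplinks end up on one vSwitch, that per-portgroup dedication can be sketched with portgroup uplink overrides. A hedged sketch, assuming a vSwitch0 already carrying vmnic0-3 and portgroups named as below (all placeholder names; in practice the active/standby order is usually set in the vSphere Client's NIC Teaming tab):

```shell
# Sketch: -M gives a portgroup an explicit uplink list that overrides
# the vSwitch default, so each portgroup uses only its dedicated NIC(s).
# All vSwitch, portgroup and vmnic names are placeholders.

esxcfg-vswitch -M vmnic0 -p "vMotion" vSwitch0
esxcfg-vswitch -M vmnic0 -p "Management Network" vSwitch0

# LAN keeps a vmnic1 + vmnic2 team
esxcfg-vswitch -M vmnic1 -p "LAN" vSwitch0
esxcfg-vswitch -M vmnic2 -p "LAN" vSwitch0

esxcfg-vswitch -M vmnic3 -p "DMZ" vSwitch0
```

The nice side effect of overrides is that the unused NICs can still be listed as standby for a portgroup, so a failed dedicated NIC degrades service rather than killing it outright.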

See also: Best practice for NIC cards and vSwitches design

PS: the R710 has 4 slots, so why not plan to add more NICs? If only to improve network performance.

Andre

Andrew | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro
FrostyatCBM
Enthusiast

Thanks Andre,

Yes, I might order some extra NICs, though I thought I would wait first to see how it performs. It's only a relatively small network (<100 users), so usage is not very heavy. We're currently running a 2 x ESX host config with an iSCSI SAN, and whilst the LAN performance is quite good, the SAN performance is not so great, hence the upgrade. I don't really expect the LAN usage profile to change much.

Appreciate the feedback.

AndreTheGiant
Immortal

In this case, 4 NICs could be fine.

Just be sure to put vMotion and FT logging on different physical NICs.

And create at least 2 RAID groups on your MD3200, one owned by each controller.

Andre

Andrew | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro
FrostyatCBM
Enthusiast

We don't run any FT stuff at the moment. Conceivably we might in future, and if we do I will buy more NICs and separate it out (thanks for the tip!). We will be doing vMotion and Storage vMotion though. I had a chat with the other admin here earlier today, and we've decided that we're happy to try things with just a single NIC each for the vMotion network and the DMZ network. If we had a physical NIC failure on either of those, the impact would not be huge. Dual NICs for the LAN network of course, as we'd get our butts kicked if that failed. <lol>

Re: your comment on the MD3200 ... because we're using shared DAS, my understanding is that the dual controllers will only give us failover, not MPIO (a limitation in VMware, not the SAN itself?) ... so I thought that probably meant one controller does all the work, day in, day out, and the other is effectively a 'hot spare' for failover. But I suppose I will find out for sure when I get all the gear set up and start configuring and testing!

FrostyatCBM
Enthusiast

I suppose the other thing that I didn't explicitly mention is that our vCenter server and the management functions will be on the LAN, not on their own network. I believe this is not ideal (from a security p.o.v.?), but it's also the way our current ESX 3.5 setup is configured, and it seems reliable enough.

AndreTheGiant
Immortal

You can have MPIO with a DAS connection too.

But it only works across multiple LUNs (each individual LUN always uses just one active path at a time).
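To make that concrete on a 4.1 host: with a Fixed path policy you can pin each LUN to a preferred path, so half your LUNs run through one controller and half through the other, even though any single LUN only ever uses one path. This is a hedged sketch from memory of the pre-5.0 CLI, and the device and path names are placeholders (verify on the host, since the MD3200's default policy may differ):

```shell
# Placeholder naa IDs and path names. On ESXi 4.x the namespace is
# 'esxcli nmp'; in 5.0+ it became 'esxcli storage nmp'.

# Show each device's current path selection policy and paths
esxcli nmp device list

# Pin one LUN to a path through controller 0 ...
esxcli nmp device setpolicy --device naa.6000...0001 --psp VMW_PSP_FIXED
esxcli nmp fixed setpreferred --device naa.6000...0001 --path vmhba1:C0:T0:L0

# ... and another LUN to a path through controller 1
esxcli nmp device setpolicy --device naa.6000...0002 --psp VMW_PSP_FIXED
esxcli nmp fixed setpreferred --device naa.6000...0002 --path vmhba2:C0:T1:L1
```

Combined with Andre's earlier suggestion of two RAID groups (one owned by each controller), this means neither controller sits completely idle day to day.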

Andre

Andrew | http://about.me/amauro | http://vinfrastructure.it/ | @Andrea_Mauro