nmillerNC
Contributor

VMware Network Design Help

So my brain has become quite scrambled from over-thinking this design, and I am hoping someone here is kind enough to set me straight. My overall goals are to adhere to best practices, eliminate single points of failure, and get the best performance possible. I have attached a Visio of my current physical plan, but it lacks a lot of redundancy, and I am wondering how best to add that with the resources I have.

In brief, I have a VMware ESXi 5.1 cluster of six hosts, each with eight 1Gb ports and four 10Gb ports. I have one SAN with two 10Gb ports on each of the active and passive controllers, along with three 1Gb switches and two 10Gb switches. Any advice would be greatly appreciated.

3 Replies
rickardnobel
Champion

It should be very possible to create a fully redundant solution with your hardware. However, I had a hard time understanding your Visio drawing.

Could you draw or describe how a single ESXi host is networked in your design, with its vSwitches, VMkernel NICs, vmnics (physical network ports), and VM port groups?
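
If it is easier than drawing, you can also dump the current layout straight from the ESXi shell. Something like this (standard esxcli namespaces on 5.1) shows the vSwitches, uplinks, port groups and VMkernel interfaces on one host:

    # Show each vSwitch with its uplinks and port groups
    esxcli network vswitch standard list

    # Show the VMkernel interfaces and their IPv4 settings
    esxcli network ip interface list
    esxcli network ip interface ipv4 get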

My VMware blog: www.rickardnobel.se
nmillerNC
Contributor

Thanks for taking a look. Hopefully this makes sense to people and is a sound setup. Comments and criticism will help me a great deal and are very much appreciated.

BenLoveday
Enthusiast

Hi there,

Based on the gear you have, you could cut this many different ways; it really depends on your requirements. If you wanted to simplify your install and reduce hardware, you could actually get away with just the 10Gb switches, separating the iSCSI network from Mgmt/vMotion/VM traffic using separate pairs of uplinks (see the sketch below). Or, as you have it currently, you can separate management and VM traffic from storage and vMotion traffic. I would say there isn't necessarily a wrong or right way to do this, only the one that suits your needs best.
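
To make the iSCSI side concrete, here is a rough sketch from the ESXi shell. All of the names are assumptions for illustration (vmnic8/vmnic9 as the two 10Gb storage uplinks, the 10.10.10.0/24 subnet, vmhba33 as the software iSCSI adapter); the key point is that each path gets its own port group pinned to a single active uplink, which software iSCSI port binding requires:

    # Dedicated iSCSI vSwitch with the two 10Gb uplinks (names assumed)
    esxcli network vswitch standard add --vswitch-name=vSwitch1
    esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic8
    esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic9

    # One port group per path, each pinned to a single active uplink
    esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI-A
    esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI-B
    esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-A --active-uplinks=vmnic8
    esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-B --active-uplinks=vmnic9

    # A VMkernel interface per path (subnet assumed)
    esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI-A
    esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.10.10.11 --netmask=255.255.255.0 --type=static
    esxcli network ip interface add --interface-name=vmk2 --portgroup-name=iSCSI-B
    esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=10.10.10.12 --netmask=255.255.255.0 --type=static

    # Bind both to the software iSCSI adapter (vmhba number varies per host)
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

With each controller's two 10Gb ports split across the two 10Gb switches, that gives every host a path through either switch and multipathing down to the SAN.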

Another option would be to use the 10Gb switch uplinks for storage and VM traffic, if your VM traffic warranted 10Gb, leaving vMotion/management on the 1Gb links. But this would also depend on the size and number of VMs: with a sizable VM footprint, a vMotion could take a lot longer than it would with 10Gb at its disposal.
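
If you did keep vMotion on a 1Gb pair, the host side is just a port group plus a tagged VMkernel interface. The names and addresses below are again assumptions, and on 5.1 the tag can be set from the shell (or just tick vMotion on the interface in the vSphere Client):

    # vMotion port group and VMkernel interface on the 1Gb vSwitch (names/IP assumed)
    esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name=vMotion
    esxcli network ip interface add --interface-name=vmk3 --portgroup-name=vMotion
    esxcli network ip interface ipv4 set --interface-name=vmk3 --ipv4=10.10.20.11 --netmask=255.255.255.0 --type=static

    # Mark the interface for vMotion traffic
    esxcli network ip interface tag add --interface-name=vmk3 --tagname=VMotion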

Alternatively, you could set two 10Gb ports for storage and two 10Gb ports for vMotion/FT, and carve the 1Gb links up for VM/management traffic (again, VM and management traffic could be VLAN'ed off, either trunked or on completely separate NICs).
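
As a sketch of that VLAN'ed variant, assuming VLAN 10 for management and VLAN 20 for VMs on a trunked 1Gb vSwitch (the VLAN IDs and vmnic numbering are made up, and the physical switch ports would need to trunk those VLANs):

    # Tagged port groups on the 1Gb vSwitch (VLAN IDs assumed)
    esxcli network vswitch standard portgroup add --vswitch-name=vSwitch2 --portgroup-name=Management
    esxcli network vswitch standard portgroup set --portgroup-name=Management --vlan-id=10
    esxcli network vswitch standard portgroup add --vswitch-name=vSwitch2 --portgroup-name=VM-Network
    esxcli network vswitch standard portgroup set --portgroup-name=VM-Network --vlan-id=20

    # Keep the traffic types on separate uplinks day-to-day, but let each fail over to the other pair
    esxcli network vswitch standard portgroup policy failover set --portgroup-name=Management --active-uplinks=vmnic0,vmnic1 --standby-uplinks=vmnic2,vmnic3
    esxcli network vswitch standard portgroup policy failover set --portgroup-name=VM-Network --active-uplinks=vmnic2,vmnic3 --standby-uplinks=vmnic0,vmnic1

That way the traffic stays separated under normal conditions without losing either type when an uplink or a switch dies.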

If you have any specific business or security requirements these would be a big help in deciding the best route.

Cheers,

Ben Loveday
