w00005414
Enthusiast

network design suggestions

Hi all,

We are designing a new VMware environment with three IBM ESX 4 hosts and a physical server acting as the vCenter Server; the VMDK files for the VMs will live on a Dell EqualLogic PS6000E connected via iSCSI. Our IBM consultant said we should have four networks:

Service Console

Vmotion

vmNetwork

iSCSI Storage

The switches we are looking to use are Nortel 4548GT-PWRs. A couple of questions (and bear with me, I am just getting my head wrapped around all this):

1.) Has anyone used these switches? Any pros, cons, or suggestions?

2.) Of the above four networks, which should be redundant (spread across two or more physical switches)? I would think the vmNetwork and the iSCSI storage network are a must.

3.) How many physical switches should we get? 2, 3, maybe 4? I know the EqualLogic takes up 6 ports (3 to each of two switches), and each ESX host will have up to 8 connections, I believe (4 to each of two different switches), so that's 24 host ports plus the array's 6, plus whatever ports our NetBackup Enterprise server and the vCenter Server need to connect (quick tally below).

4.) Which network would be used for backups? We use NetBackup Enterprise 6.5.3. For background, in the past we have tested VCB, and for some VMs we would just install an agent. The new Data Recovery option from VMware looks interesting as a VCB replacement too; I'm not sure which is best.
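
In case it helps with question 3, here is the rough port math I'm working from (the 4 ports for the NetBackup server and vCenter are just my placeholder guess):

# Rough port tally, assuming 2 switches and an even split
hosts = 3
nics_per_host = 8            # 4 to each of two switches
array_ports = 6              # EqualLogic, 3 to each switch
other_ports = 4              # placeholder guess: NetBackup server + vCenter
total = hosts * nics_per_host + array_ports + other_ports
print("total ports:", total)        # 34
print("per switch:", total // 2)    # 17 of 48 on each 4548GT-PWR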

Any suggestions you could offer would be greatly appreciated.

Take care,

Brian

3 Replies
bolsen
Enthusiast

In a perfect world, I would consider the following:

2 pNICs for Service Console and VMotion

2 pNICs for iSCSI (plus a Service Console port on the SAN network)

2 pNICs for vmNetwork

As for the SAN, the more ports you add, the more bandwidth you get. The number of ports depends on your application and utilization levels.
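
If you ever script the host setup, something like that 2+2+2 layout can be pushed through the vSphere API with pyVmomi. This is only a sketch; the hostnames, credentials, and vmnic numbering are placeholders, so adjust to your hardware:

from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret")  # placeholders
content = si.RetrieveContent()
host = content.searchIndex.FindByDnsName(dnsName="esx01.example.com", vmSearch=False)
netsys = host.configManager.networkSystem

# One vSwitch per traffic type, two physical uplinks each
layout = {
    "vSwitch0": (["vmnic0", "vmnic1"], ["Service Console", "VMotion"]),
    "vSwitch1": (["vmnic2", "vmnic3"], ["iSCSI"]),
    "vSwitch2": (["vmnic4", "vmnic5"], ["vmNetwork"]),
}

for name, (nics, portgroups) in layout.items():
    spec = vim.host.VirtualSwitch.Specification()
    spec.numPorts = 64
    spec.bridge = vim.host.VirtualSwitch.BondBridge(nicDevice=nics)
    netsys.AddVirtualSwitch(vswitchName=name, spec=spec)
    for pg_name in portgroups:
        pg = vim.host.PortGroup.Specification()
        pg.name = pg_name
        pg.vswitchName = name
        pg.vlanId = 0
        pg.policy = vim.host.NetworkPolicy()   # default teaming/security policy
        netsys.AddPortGroup(portgrp=pg)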

rayan68
Contributor

As for the Nortel 4548GT-PWR, I have never used them; I prefer Cisco switches. Maybe they are more expensive, but they are still the preferable option. You will need 3 switches, with each NIC in an IBM server connected to a different switch. For example, if NIC1 is assigned to the Service Console, then NIC1 on every server is assigned to the Service Console, and each of those NICs is connected to a different switch.

Of course, if you can assign 2 NICs to each service, that would be good, but if you have only 4 NICs and there is no way to add more, you can do this:

NIC1 ( SC with Vmotion)

NIC2 + NIC3 (vmNetwork)

NIC4 (iSCSI Storage)

But in the real world you should have at least 2 NICs for each service, for a minimum of 6 NICs.
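
To get the switch redundancy right on those pairs, you can also pin an explicit failover order per port group, so the active uplink sits on one physical switch and the standby on the other. A rough pyVmomi sketch (all names are placeholders):

from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret")  # placeholders
host = si.RetrieveContent().searchIndex.FindByDnsName(
    dnsName="esx01.example.com", vmSearch=False)
netsys = host.configManager.networkSystem

# vmnic0 uplinks to switch A (active), vmnic1 to switch B (standby)
policy = vim.host.NetworkPolicy()
policy.nicTeaming = vim.host.NetworkPolicy.NicTeamingPolicy()
policy.nicTeaming.policy = "failover_explicit"
policy.nicTeaming.nicOrder = vim.host.NetworkPolicy.NicOrderPolicy(
    activeNic=["vmnic0"], standbyNic=["vmnic1"])

pg = vim.host.PortGroup.Specification()
pg.name = "Service Console"      # existing port group to update
pg.vswitchName = "vSwitch0"
pg.vlanId = 0
pg.policy = policy
netsys.UpdatePortGroup(pgName="Service Console", portgrp=pg)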

milos77
Contributor

First of all, I suggest you purchase 2 quad-port NIC cards and divide each service across the two cards (this is for fault tolerance).

I think you need:

1 group of 2 interfaces for Service Console and VMotion

1 group of 2 interfaces for iSCSI (but the EqualLogic PS6000 has 4 interfaces per controller, so you could consider going to 4 iSCSI interfaces on each node)

1 group of 2 interfaces for vmNetwork, though this depends on how many virtual machines you run on each node

1 group of 2 interfaces for a second vmNetwork for the DMZ (I suggest you physically separate DMZ and LAN traffic)

Remember that it is important to use interface cards with TOE capabilities.
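
As far as I know the API won't tell you directly whether a card does TOE (check the model against VMware's HCL for that), but you can at least inventory what each host has. A small pyVmomi sketch, connection details again placeholders:

from pyVim.connect import SmartConnect

si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret")  # placeholders
host = si.RetrieveContent().searchIndex.FindByDnsName(
    dnsName="esx01.example.com", vmSearch=False)

# Print each physical NIC with its driver and negotiated link speed
for pnic in host.config.network.pnic:
    speed = pnic.linkSpeed.speedMb if pnic.linkSpeed else "link down"
    print(pnic.device, pnic.driver, speed)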
