VMware Cloud Community
Razorhog
Contributor

VMware Infrastructure - LAN Configuration

Hello all - I am the network admin for a public school. I'm going virtual with my servers and could use some help, especially with the LAN configuration. I am getting 2 servers (Dell R710, dual Xeon 5520s, 48 GB RAM) and an MD3000i SAN (15 x 146 GB 15K RPM SAS drives), with VMware Infrastructure Enterprise edition and VirtualCenter Foundation.

I don't have a gigabit switch, but I plan on getting a gigabit copper module for my HP ProCurve 5308XL. That switch is the core switch for the entire district.

Would putting that module in the 5308XL and using it for iSCSI connections be OK? I know you should have 2 switches for redundancy, but I figure if the core switch goes down, everything goes down regardless. If that would be OK, do all of the iSCSI connections need to be on a separate subnet and have a vSwitch handle the VLANs? Or are the VLANs handled by the 5308XL?

I have a lot of questions and things I need to get straight in my mind. Any help would be greatly appreciated. I'm a bit scatterbrained right now, so please let me know if you need more information about anything. None of this equipment has been ordered, so I can make modifications as needed.

Thanks!

14 Replies
RParker
Immortal

If you can use iSCSI, you can use NFS. NFS is probably a much cleaner protocol and much easier to use than iSCSI.

But that connection is great for either. Just make sure it's segregated, either by a VLAN (on the physical switch) or by a separate IP segment.

JoJoGabor
Expert

I would strongly advise getting your VCP or getting a VMware Authorised Consultancy to help you with the design here. You really need to get the base design correct, otherwise your whole environment will start to give you problems.

In basic terms, you will need network connections for iSCSI, VMotion, VMs, and the Service Console. iSCSI and ideally VMotion should be on a separate VLAN at a minimum, separate switches if possible. You will also want redundancy built into the network, so I would suggest getting a server with at least 4 NICs:

1 for iSCSI (1 Gbps) - but then you will need a 1 Gig port for every host and for the SAN itself

2 for VMs (ideally gigabit), load balanced or failover

1 for VMotion (should be 1 Gig for production; it will work at 100 Meg, but migrations may take longer and/or fail)

This is really a bare minimum.

Razorhog
Contributor

A VMware Authorized Consultant sounds like a good idea. I want to do this the right way the first time.

I kind of understand what you are saying, JoJo - the servers will have 4 gig ports. I wasn't aware that each of those features needed a dedicated NIC. Still trying to understand all of this.

In the meantime, does anyone know of a good resource for learning about VI design/implementation?

runclear
Expert

Uh - so you would only use 1 port for the storage link? I would strongly advise using at least two ports here.

JoJoGabor
Expert

Sorry, I wasn't clear in my post. I agree - I would always use 2 links for the iSCSI or NFS connection, ideally to 2 different switches, but if you only have one gigabit port then we're stuck.

If money is tight you can get cheap gigabit switches, but a lot of the time they aren't great at load balancing and VLANs. For VMware I would strongly recommend putting all ports on gigabit.

Drop me a private message if you need some consultancy help on this and you are in the UK.

Razorhog
Contributor

I plan on putting all ports on gigabit. The servers will both have 4 NIC ports. I am trying to decide whether to connect the servers to the iSCSI SAN through the HP 5308XL or to get a new switch (or switches). Either way I'm going to have to get that gigabit copper module for the 5308XL.

Sorry JoJo, I'm in the USA.

Please keep feedback coming!

JoJoGabor
Expert

If you can afford it, get a separate gigabit switch for the storage. You can also use this for VMotion. You could then use the 1 Gig module for your VMs, but again you really should have redundant links for your VMs; these can load balance as well. It depends how many hosts you have and how many gigabit ports the copper module will give you - I imagine it will only give 2.

Hell, why not go whole hog and get 10 Gig switches!! ;)

Razorhog
Contributor

10 GbE switches, haha, yeah right!

Right now, I have the servers configured with 4 NIC ports. I know it would work, but it looks like 8 would be ideal.

Here is how to set up virtual networking when 8 pNICs are involved:

pNIC0 -> vSwitch0 -> Portgroup0 (service console)

pNIC1 -> vSwitch0 -> Portgroup0 (service console)

pNIC2 -> vSwitch1 -> Portgroup1 (VMotion)

pNIC3 -> vSwitch1 -> Portgroup1 (VMotion)

pNIC4 -> vSwitch2 -> Portgroup2 (Storage Network)

pNIC5 -> vSwitch2 -> Portgroup2 (Storage Network)

pNIC6 -> vSwitch3 -> Portgroup3 (VM Network)

pNIC7 -> vSwitch3 -> Portgroup3 (VM Network)

That is taken from http://www.networkworld.com/community/node/36691

That makes sense to me; using 4 or 6 NIC ports is possible but more confusing.
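
To picture that layout in config terms, here is a rough sketch of how it could be scripted against the vSphere API using the pyVmomi Python bindings (a far newer toolkit than anything from the VI3 era, so treat it purely as an illustration). The vCenter address, credentials, and vmnic numbering are placeholders, and it assumes a single host in the inventory:

# Sketch only: builds the four redundant vSwitch/port-group pairs listed above.
# Hostname, credentials, and vmnic numbers are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def add_switch_with_portgroup(net_sys, vswitch, nics, portgroup):
    """Create a vSwitch uplinked to two pNICs and hang one port group off it."""
    bridge = vim.host.VirtualSwitch.BondBridge(nicDevice=nics)
    net_sys.AddVirtualSwitch(
        vswitchName=vswitch,
        spec=vim.host.VirtualSwitch.Specification(numPorts=128, bridge=bridge))
    net_sys.AddPortGroup(
        portgrp=vim.host.PortGroup.Specification(
            name=portgroup, vlanId=0, vswitchName=vswitch,
            policy=vim.host.NetworkPolicy()))

ctx = ssl._create_unverified_context()              # lab use only
si = SmartConnect(host="vcenter.example.local", user="admin",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]   # assumes one host
net_sys = host.configManager.networkSystem

# One call per redundant pNIC pair from the layout above
add_switch_with_portgroup(net_sys, "vSwitch0", ["vmnic0", "vmnic1"], "Service Console")
add_switch_with_portgroup(net_sys, "vSwitch1", ["vmnic2", "vmnic3"], "VMotion")
add_switch_with_portgroup(net_sys, "vSwitch2", ["vmnic4", "vmnic5"], "Storage Network")
add_switch_with_portgroup(net_sys, "vSwitch3", ["vmnic6", "vmnic7"], "VM Network")

Disconnect(si)

This only lays out the switches and port groups; the Service Console, VMotion, and iSCSI VMkernel interfaces would still have to be created on top of their port groups, and VLAN IDs are left at 0 here so the tagging can be decided later (on the vSwitch or on the 5308XL).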

Razorhog
Contributor

Now that I think about my last post, I don't think that is right for ESXi. ESXi doesn't have a service console, right? So with 4 physical NIC ports, it might look like this:

pNIC0 -> vSwitch0 -> Portgroup0 (VMotion)

pNIC1 -> vSwitch0 -> Portgroup1 (Storage Network)

pNIC2 -> vSwitch1 -> Portgroup2 (VM Network)

pNIC3 -> vSwitch1 -> Portgroup2 (VM Network)

Dave_Mishchenko
Immortal

ESXi doesn't have a Linux service console and thus no service console port type, but for planning purposes you should still create a VMkernel port that will be used for management purposes.
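
For illustration, adding that management VMkernel port through the API might look something like this pyVmomi sketch; the port group name and addressing are placeholders, and net_sys is the same HostNetworkSystem handle as in the earlier sketch:

# Sketch: attach a VMkernel NIC for management traffic on ESXi.
# The "Management Network" port group is assumed to exist already;
# the IP address and netmask are placeholders.
from pyVmomi import vim

# net_sys = host.configManager.networkSystem  (see the earlier sketch)
mgmt_ip = vim.host.IpConfig(dhcp=False,
                            ipAddress="10.0.10.11",
                            subnetMask="255.255.255.0")
vmk = net_sys.AddVirtualNic(
    portgroup="Management Network",
    nic=vim.host.VirtualNic.Specification(ip=mgmt_ip))
print("created VMkernel interface:", vmk)       # e.g. "vmk1"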

Razorhog
Contributor

So for ESXi, 6 NIC ports might make more sense - 2 for VMotion, 2 for the VMkernel (iSCSI), and 2 for the VM Network.

Is this correct: each port group gets a VLAN tag and connects to a vSwitch, and each vSwitch is associated with 1 or more pNIC ports?
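
To picture the VLAN-tag half of that, here is a small pyVmomi sketch of putting a VLAN ID on an existing port group (virtual switch tagging); the names and VLAN number are placeholders, and the physical switch port would have to be an 802.1Q trunk carrying that VLAN:

# Sketch: tag an existing port group with a VLAN ID (virtual switch tagging).
# Port group name, vSwitch name, and VLAN number are placeholders; net_sys
# is obtained as in the earlier sketches.
from pyVmomi import vim

net_sys.UpdatePortGroup(
    pgName="Storage Network",
    portgrp=vim.host.PortGroup.Specification(
        name="Storage Network",
        vlanId=20,                  # tag applied/stripped by the vSwitch
        vswitchName="vSwitch2",
        policy=vim.host.NetworkPolicy()))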

JoJoGabor
Expert

That's the ideal setup, yes. Bear in mind whether you want each pair to be in failover mode or load-balanced. If you want to load-balance (recommended for the VM network), then your switch will need to support link aggregation such as EtherChannel on Cisco, LACP, or 802.3ad.

For VMotion it's good enough to have it in active/standby to reduce the complexity. For iSCSI it depends how many network connections your SAN has; if it's only 1, again active/standby would suffice.
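
As a rough illustration of those two teaming choices in pyVmomi terms (port group names, vmnic numbers, and VLAN IDs are placeholders; net_sys is the same handle as in the earlier sketches):

# Sketch: IP-hash load balancing for the VM network (pairs with a static
# EtherChannel / 802.3ad bundle on the physical switch) and plain
# active/standby failover for VMotion.  All names and numbers are placeholders.
from pyVmomi import vim

vm_team = vim.host.NetworkPolicy.NicTeamingPolicy(
    policy="loadbalance_ip",
    nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(
        activeNic=["vmnic6", "vmnic7"]))
net_sys.UpdatePortGroup(
    pgName="VM Network",
    portgrp=vim.host.PortGroup.Specification(
        name="VM Network", vlanId=0, vswitchName="vSwitch3",
        policy=vim.host.NetworkPolicy(nicTeaming=vm_team)))

vmotion_team = vim.host.NetworkPolicy.NicTeamingPolicy(
    policy="failover_explicit",
    nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(
        activeNic=["vmnic2"], standbyNic=["vmnic3"]))
net_sys.UpdatePortGroup(
    pgName="VMotion",
    portgrp=vim.host.PortGroup.Specification(
        name="VMotion", vlanId=0, vswitchName="vSwitch1",
        policy=vim.host.NetworkPolicy(nicTeaming=vmotion_team)))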

Razorhog
Contributor

With the MD3000i, it looks like I might be able to simply connect the two servers directly.

"You can cable from the Ethernet ports of your host servers directly to your MD3000i RAID controller iSCSI ports. Direct attachments support single path configurations (for up to four servers) and dual path data configurations (for up to two servers) for both single and dual controller modules."

However, I would probably want to go with at least one dedicated gigabit switch, in case I add more ESXi servers in the future. The MD3000i can support up to 16 physical hosts.
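
Whether it goes through a switch or direct-attached, the software iSCSI side on each ESXi host could be wired up roughly like this pyVmomi sketch; the vmhba name and the portal IPs are placeholders, not values from an actual MD3000i setup:

# Sketch: enable the software iSCSI initiator and point it at the array's
# iSCSI portals.  "vmhba33" and the target IPs are placeholders -- check the
# host's storage adapters for the real software iSCSI HBA name.
from pyVmomi import vim

storage = host.configManager.storageSystem      # host object as in the first sketch
storage.UpdateSoftwareInternetScsiEnabled(True)

targets = [vim.host.InternetScsiHba.SendTarget(address=ip, port=3260)
           for ip in ("192.168.130.101", "192.168.131.101")]
storage.AddInternetScsiSendTargets(iScsiHbaDevice="vmhba33", targets=targets)
storage.RescanAllHba()                          # pick up the new LUNs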

Razorhog
Contributor

Would using the dual-path direct connection method for the 2 servers be better than connecting through a gigabit switch?
