EcioBNI
Contributor

Your suggestion for network configuration

Hi all,

We're going to deploy this Virtual Infrastructure in our shop; we will have:

- four ESX 3.5 servers, each with 4 Gigabit pNICs

- three Cisco Catalyst 3750G (stacked)

- a NetApp storage array (4 NICs)

The storage connection will use iSCSI (we have no NFS/FC licenses).

We deployed a similar config one year ago (ESX 3.0.1), and on that occasion we used this approach:

- one pNIC for a dedicated Service Console (on a separate management network, with the switch accessed out-of-band)

- three pNICs configured in an EtherChannel (one link per switch), load balancing based on IP hash, VLAN trunking with separate VLANs for vMotion, a second SC, iSCSI and the VM networks (all on the same vSwitch)

- on the NetApp, 4 NICs teamed into an EtherChannel link, with two virtual interfaces on two separate VLANs on top of it (one for CIFS and management, the other for iSCSI)

That config worked flawlessly, but it probably wasn't perfect in some respects: since the iSCSI initiator and target each have one fixed IP, I think that even though each ESX server has three links, it always uses the same link for iSCSI traffic.
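
Just to make the issue concrete, here is a minimal sketch of the "route based on ip hash" uplink choice as it is commonly described (an XOR of the source and destination IP, modulo the number of uplinks in the team); the addresses are made up for the example and, with both ends on the same /24, only the last octet matters:

# hypothetical addresses: initiator 192.168.10.21, target 192.168.10.50, 3 uplinks in the team
echo $(( (21 ^ 50) % 3 ))   # -> 0: this initiator/target pair always hashes to the same uplink
echo $(( (21 ^ 51) % 3 ))   # -> 2: only a different target (or initiator) IP moves the traffic to another uplink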

In addition, the two Service Consoles did not really back each other up (i.e. the VirtualCenter server was on only one of these networks).

I've read on this forum that some people suggest different configs, for example grouping the NICs two by two and then using other load-balancing techniques (route based on originating virtual port ID).

What network implementation would you suggest for this kind of setup?

I was thinking about these alternatives:

1) all four pNICs in one vSwitch connected via EtherChannel to the three Cisco switches; two SC VLANs, N VM VLANs, one vMotion VLAN and one iSCSI VLAN, all on the same vSwitch

2) something like 1), but maybe with some kind of active/passive NIC assignment per port group, overriding the vSwitch settings

3) create two vSwitches (2 pNICs each) and put SC/vMotion on the first and SC/iSCSI/VM networks on the other (but I think the second vSwitch would be overloaded with iSCSI and the VM networks together; a rough command sketch of this layout follows below)
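
For reference, this is a rough sketch of alternative 3 using the ESX 3.x service console commands; all port group names, VLAN IDs and addresses are placeholders I made up for the example (vSwitch0 and vswif0 usually already exist after installation, so only the extra pieces are shown):

# vSwitch0: 2 uplinks, Service Console + vMotion
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic1 vSwitch0
esxcfg-vswitch -A "vMotion" vSwitch0
esxcfg-vswitch -v 20 -p "vMotion" vSwitch0
esxcfg-vmknic -a "vMotion" -i 10.0.20.11 -n 255.255.255.0

# vSwitch1: 2 uplinks, second SC + iSCSI VMkernel + VM networks
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1
esxcfg-vswitch -A "SC-iSCSI" vSwitch1
esxcfg-vswitch -A "iSCSI" vSwitch1
esxcfg-vswitch -A "VM Network" vSwitch1
esxcfg-vswitch -v 30 -p "SC-iSCSI" vSwitch1
esxcfg-vswitch -v 30 -p "iSCSI" vSwitch1
esxcfg-vswitch -v 40 -p "VM Network" vSwitch1
esxcfg-vswif -a vswif1 -p "SC-iSCSI" -i 10.0.30.11 -n 255.255.255.0
esxcfg-vmknic -a "iSCSI" -i 10.0.30.21 -n 255.255.255.0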

All of these configs share the same problem: iSCSI traffic will always flow over a single uplink. I know that with NFS we could use more links with different aliases and mount points. Some people from NetApp told me that 3.5 has new iSCSI features that could allow the use of more links, but I wasn't quick enough to dig deeper and unfortunately I haven't found any information about it yet.

Does ESX 3.5 (+ NetApp) support some kind of iSCSI multipathing?

Thank you in advance for any advice.

PS: I know it would be ideal to use 6-8 pNICs, but we're thinking about starting with this base config.

bfent
Enthusiast

I would use the following:

2 for SC/VM - separating the two with VLANs for security, and possibly having one vmnic in standby (VMware does not perform 'true' load balancing)

2 for VMkernel/SC/vMotion - physically separated from the other network(s), with one vmnic in standby.

This config will provide redundancy (VERY important) for both the VMs and the VMkernel.

EcioBNI
Contributor

bfent, your config provides redundancy (mine too), but if you keep one pNIC of each pair in standby, it limits you to a 1 Gbit link for SC/VM and 1 Gbit for iSCSI/vMotion...

Texiwill
Leadership

Hello,

With only 4 pNICs I would opt to do the following:

2 pNICs for VMkernel/SC/vMotion

2 pNICs for VM Network

The split will give you redundancy for everything, and load balancing for the VMs as well.

I would not place the VMs on any vSwitch where an SC port lives; this split protects against that. If you have a lot of SC/vMotion traffic, your iSCSI traffic will be impacted. Ideally you want 8 pNICs, and anything less than that is not optimal for redundancy and performance.


Best regards,

Edward L. Haletky

VMware Communities User Moderator

====

Author of the book 'VMWare ESX Server in the Enterprise: Planning and Securing Virtualization Servers', Copyright 2008 Pearson Education. As well as the Virtualization Wiki at http://www.astroarch.com/wiki/index.php/Virtualization

bfent
Enthusiast

That is true. For the VMs, I suggested "possibly" making one a standby. If you want to keep them both active, that is fine. Keep in mind, though, that VMware does not 'load balance' the traffic; it more or less balances VMs across the uplinks (and not live).

As for the VMkernel/vMotion suggestion, I apologize; my suggestion is based on Dell's MD3000/NX1950. If your iSCSI SAN supports true load balancing and/or redundancy, by all means utilize all vmnics in active mode (for the first time, I've maxed out our iSCSI network; of course, it took deploying 5 VMs from templates at the same time).

I am sticking with 2 and 2, though. I would not suggest putting the SC with anything other than the VMs (and only with VLANs). Security should be a big concern, too: you do not want to make your iSCSI network accessible to anything (also a VMware best practice).

Texiwill
Leadership

Hello,

When I deploy iSCSI, the first thing I suggest is iSCSI HBAs, if cost is not really an issue. This way you have 4 pNICs for SC/vMotion/VM and 2 iSCSI HBA ports for iSCSI; granted, your iSCSI server needs to support them, which not all do. :(

The SC must participate in the iSCSI network when using pNICs or iSCSI HBAs, either by being directly on the network or routed through the appropriate device. This requirement comes from the way iSCSI is authenticated (even when you are not using authentication), so please keep that in mind.
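
For what it's worth, a small sketch of that from the service console (the target address and adapter name below are placeholders, not anything from this thread):

esxcfg-firewall -e swISCSIClient    # open the SC firewall for the software iSCSI client
esxcfg-swiscsi -e                   # enable the software iSCSI initiator
ping 10.0.30.50                     # the SC must be able to reach the (placeholder) target address
vmkping 10.0.30.50                  # and so must the VMkernel interface used for iSCSI
esxcfg-rescan vmhba32               # rescan the software iSCSI adapter (its name may differ on your host)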

You really want your storage network to be redundant and as fast as possible. At the very least I would add 2 more pNICs JUST for iSCSI; 4 pNICs just will not cut it in the long run. Consider the case where you have to vMotion 30 VMs off a server so you can do an upgrade: your vMotion traffic on one VLAN will impact your iSCSI traffic on a different VLAN of the same cable. It is really much better to use a pair of pNICs or iSCSI HBAs just for iSCSI to alleviate any concerns.

You can use 4 like you have, but you will be forced to make some choices when things go south, and they will not be great choices either.

BTW, I treat NFS and iSCSI just the same way I treat a SAN. You are forced to use an FC HBA for a SAN, so I look for 1 or 2 pNICs just for iSCSI - 2 for redundancy.


Best regards,

Edward L. Haletky

VMware Communities User Moderator

====

Author of the book 'VMWare ESX Server in the Enterprise: Planning and Securing Virtualization Servers', Copyright 2008 Pearson Education. As well as the Virtualization Wiki at http://www.astroarch.com/wiki/index.php/Virtualization

EcioBNI
Contributor

Hi Tex,

Actually, I have used (and will use) more than one Service Console: given that we want the iSCSI traffic completely isolated (so a dedicated VLAN AND non-routed traffic), and that one SC is needed for iSCSI discovery (HBA rescans and so on), we will have three SCs: two for VC-ESX communication and one for iSCSI duties.

Talking about iSCSI HBAs, I don't like them too much. First of all, NetApp is not a great supporter of them (they usually suggest using NICs); secondly, I haven't had a great experience with them: in my previous shop we bought 4 QLogic 4010C HBAs (if I remember the model correctly) in 2006 for our SQL cluster (2 HBAs per server to provide redundancy), used with NetApp. They worked correctly, but they didn't support jumbo frames (I know QLogic now sells HBAs with jumbo frame support) and QLogic almost dropped development for them, so now my ex-colleagues are at an impasse: they can't install 2003 SP2 because SP2 has a problem with a QLogic port driver that can't be updated because it needs something else, and so on, and that config is not on the NetApp compatibility matrix. So, iSCSI HBAs, no thanks :)

About the 4 pNICs: I don't think we will ever have the 30 VMs of your example on one of those servers (we're still talking about dual quad-core with 24 GB RAM, not 32 cores and 128 GB RAM).

BTW, all your suggestions are much appreciated, thank you!

Texiwill
Leadership

Hello,

I agree, iSCSI HBAs have limited support at the moment; I think there are only a few devices that even support them. Which is too bad, as you do get quite a few nice features over the software iSCSI initiator, but alas, this is the situation as of now. I expect it will change over time. But yes, pNICs are the safe way to go for now. The newer QLogic iSCSI HBAs do support jumbo frames, and the early ones are no longer on the HCL for VI 3.5.

Having more than one SC is a choice you need to make; just be aware that you now have another way to reach the Service Console. ESX does not lock down services by IP: all services are available on all IPs used by the Service Console. So in effect, port 902 for VC is now open and available on the iSCSI-network SC link (actually after a reboot or an hostd restart). Depending on how the network is set up and what is on the iSCSI network, you can open up other attack vectors against the SC. There is now more to consider from a security perspective.
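
A quick way to see this for yourself from the service console (just a sketch, the output will vary):

netstat -tln | grep 902    # port 902 (used by VC) bound to 0.0.0.0, i.e. reachable on every SC address
esxcfg-vswif -l            # list the Service Console interfaces those services are now reachable on
esxcfg-firewall -q         # review which ports the SC firewall currently allows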

There are two schools of thought on this: one is multiple SC ports, the other is a network routed through a firewall/gateway for the specific port, so that the iSCSI server is isolated from even the SC except for the necessary port. Both have risks and, quite frankly, I am not sure which is better. I have used both, but I only go the extra-SC-port route when I have more than 4 pNICs.

On some of the dual quad-core servers out there with less than 64 GB of memory it is possible to run 30 VMs easily; remember, the theoretical limit for these servers is much greater than 30 VMs and depends on the memory footprint of each VM. But even if you limit yourself to only 12 or so VMs, you may have to push all VMs off one system very quickly, trying to get around some critical failure. This action could impact the storage network if it is on the same link as the vMotion network. Just food for thought.


Best regards,

Edward L. Haletky

VMware Communities User Moderator

====

Author of the book 'VMWare ESX Server in the Enterprise: Planning and Securing Virtualization Servers', Copyright 2008 Pearson Education. As well as the Virtualization Wiki at http://www.astroarch.com/wiki/index.php/Virtualization
