VMware Cloud Community
Shook4Brains
Contributor

Networking and vSwitch design question

List,

I'm building a brand-new infrastructure and have questions about the vSwitch configuration. To keep things in perspective, I work for a 50-user shop that is in high-growth mode (hence ESX now). My new infrastructure is:

1. Three Dell PE 1950s with dual quad-core CPUs, 16 GB of RAM, two local SAS drives in RAID 1, and four Gigabit NICs

2. Two Cisco 3750 Gigabit switches

3. EqualLogic PS100 iSCSI SAN

Internal VLANs are:

1. Management

2. Storage/iSCSI traffic

3. Vmotion/HA

4. Production Servers

5. User PCs and Printers

6. Dev and Test domain (we are a software development firm)

7. Voice

8. Internal wireless

9. Public wireless

Current vSwitch idea:

1. Storage/iSCSI vlan has a VMK on a dedicated NIC

2. VMotion/HA/DRS has another VMK on a dedicated NIC

3. Prod Servers, DEV/Test, users and internal wireless are VLAN'ed on a single NIC

4. Service console on a dedicated NIC

Where is the best place to put the management traffic, and is there a better option? I'm new to the ESX game and would like to design this the right way the first time, so any and all counsel is appreciated.

TIA,

Shook

23 Replies
Chris_S_UK
Expert

Don't create port groups with only a single physical NIC each. It would be better to create two vSwitches with "NIC bonds" (i.e., two NICs load-balanced together) and put multiple port groups (VLANs) onto each vSwitch.

Chris

Shook4Brains
Contributor

Since I'm an ESX rookie could you please elaborate?

Shook

BryanMcC
Expert

In ESX you create redundancy on a vSwitch through NIC bonds: the practice of adding (bonding) two or more NICs to a vSwitch. This is most commonly done for virtual machine port groups and VMkernel port groups for storage. To elaborate a bit more: you bond one or more NICs to a vSwitch, then trunk the physical switch ports behind the NICs you have just bonded. Afterwards you create port groups, using VLAN tagging for your specified VLANs (one port group per VLAN on the vSwitch). When you create VMs, you specify which port group to attach them to based on their IP addressing scheme and the VLAN they belong on.

So let's say you have six network adapters. You could do the following (this is a very simple design and should be modified to fit your needs):

1 NIC: vswif0 - service console

2 NICs: vSwitch0 - after creation, you create VM port groups using VLAN tags for your VMs (these NICs will need to be on trunked ports, 802.1Q most likely, on the physical switch)

1 NIC: vSwitch1 - VMkernel port group for VMotion

2 NICs: vSwitch2 - VMkernel port group for storage
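To make that concrete, here is a rough sketch of that six-NIC layout using the classic service-console esxcfg-* tools. The vmnic numbers, IP addresses, netmasks, and VLAN ID below are invented placeholders; substitute your own values:

```shell
# Service console on vSwitch0 via vswif0 (1 NIC)
esxcfg-vswitch -a vSwitch0
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -A "Service Console" vSwitch0
esxcfg-vswif -a vswif0 -p "Service Console" -i 192.168.10.11 -n 255.255.255.0

# VM traffic on vSwitch1 (2-NIC bond; physical ports must be 802.1Q trunks)
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic1 vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -A "Prod Servers" vSwitch1
esxcfg-vswitch -v 4 -p "Prod Servers" vSwitch1   # tag with your Prod VLAN ID

# VMotion VMkernel on vSwitch2 (1 NIC)
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic3 vSwitch2
esxcfg-vswitch -A "VMotion" vSwitch2
esxcfg-vmknic -a -i 192.168.20.11 -n 255.255.255.0 "VMotion"

# iSCSI storage VMkernel on vSwitch3 (2-NIC bond)
esxcfg-vswitch -a vSwitch3
esxcfg-vswitch -L vmnic4 vSwitch3
esxcfg-vswitch -L vmnic5 vSwitch3
esxcfg-vswitch -A "iSCSI" vSwitch3
esxcfg-vmknic -a -i 192.168.30.11 -n 255.255.255.0 "iSCSI"
```

The same result can be reached through the VI Client; the commands just make the mapping of NICs to vSwitches and port groups explicit.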

Hope that helps..

Help me help you by scoring points.
Shook4Brains
Contributor

I've only got four NICs per server; how would you consolidate? I really don't want to ask for more money for additional NICs if I don't have to.

Thanks.

Shook

BryanMcC
Expert

I guess I would have to ask some questions. Are you planning on using iSCSI? Is your dev environment on the production network?

You really want iSCSI dedicated on a vSwitch with its own NIC bond, for redundancy and security.

The service console can be on one NIC as vswif0, or added as a port group on vSwitch0 alongside the VM port groups on two NICs.

Then you could create another vSwitch and assign it one NIC (or two, if you share the VM port groups with the service console) for a VMkernel/VMotion port group.

Or bond two NICs to vSwitch0 and create port groups for VMotion, VMs, and the service console, and two NICs to vSwitch1 with a port group for your iSCSI.

Let me just tell you this to explain how flexible this is. In our current network we do not use iSCSI. We use Dell blades, which are limited to 2 pNICs, so I create one vSwitch, bond the two NICs, and run an 802.1Q trunk to them. I then create port groups using VLAN tagging: a service console port group, VM port groups (one per VLAN), and a VMkernel port group for VMotion.
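On a two-pNIC blade, that single-vSwitch, everything-trunked layout looks roughly like the sketch below (the VLAN IDs and port group names are placeholders, not the actual values from our network):

```shell
# One vSwitch, both pNICs bonded; the physical ports are an 802.1Q trunk
esxcfg-vswitch -a vSwitch0
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic1 vSwitch0

# One port group per role, separated only by VLAN tag
esxcfg-vswitch -A "Service Console" vSwitch0
esxcfg-vswitch -v 10 -p "Service Console" vSwitch0
esxcfg-vswitch -A "VMotion" vSwitch0
esxcfg-vswitch -v 20 -p "VMotion" vSwitch0
esxcfg-vswitch -A "VM VLAN30" vSwitch0          # repeat for each VM VLAN
esxcfg-vswitch -v 30 -p "VM VLAN30" vSwitch0
```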

This is not "best practice," but sometimes "best practice" just doesn't fit.

I hope this isn't too confusing.

BryanMcC
Expert

BTW, I have seen no negative impact on the network side with my current configuration, and we have some pretty I/O-intensive VMs on some hosts. I use HA/DRS, and VMotion manually, quite frequently with zero service impact.

Texiwill
Leadership

Hello,

There are actually several issues to consider: performance, cost, and security.

For Security you really want the following features split out over different redundant pNICs:

Admin Network - Service Console (note the service console must participate in the iSCSI network, or have multiple vswifs that do)

vMotion Network - the memory image is sent over the network in clear text. Very dangerous not to keep this private.

VM Network - can use 802.1q

Storage Network - iSCSI (see Admin network caveat) or NFS. Separate pNICs for pure performance and security.

Ideally, 8 pNICs would be desired for the above. However, that is not always possible, so you need to consider other options. You have 4 pNICs; if you can put in 4 more, I would. This would give you performance and redundancy where you need them.

If you cannot get a 4-port NIC for the systems, then you have to do the work of 8 pNICs within 4 pNICs.

Split them into 2 groups of pNICs. For security reasons you do not want your VMs to access any of the other networks, so:

Set aside 1 group for just VM vSwitch

Put everything else on the other vSwitch

Security issues to be aware of:

  • Remember that if you allow any promiscuous-mode Ethernet devices on the virtual network, all traffic on a vSwitch is sniffable, the ARP cache can be poisoned, and really bad things can happen.

  • It is possible to use an SSL MITM attack on any web-based administration tool and thereby gain access to the ESX servers.

I err on the side of security and use physical separation wherever possible, yet it is possible to have 4 pNICs do the work of 8. Perhaps using a split like:

SC/iSCSI - 1 pNIC

VM Network - 2 pNIC

vMotion - 1 pNIC

or

SC/iSCSI/vMotion - 2 pNIC

VM Network - 2 pNIC

If iSCSI is in use, then performance or redundancy suffers in each case. So if you are starting from scratch, I would up the number of pNICs.

Best regards,

Edward L. Haletky, author of the forthcoming 'VMWare ESX Server in the Enterprise: Planning and Securing Virtualization Servers', publishing January 2008, (c) 2008 Pearson Education. Available on Rough Cuts at http://safari.informit.com/9780132302074


--
Edward L. Haletky
vExpert XIV: 2009-2023,
VMTN Community Moderator
vSphere Upgrade Saga: https://www.astroarch.com/blogs
GitHub Repo: https://github.com/Texiwill
dkfbp
Expert

For security and redundancy I would add 2 NICs to each server, so you have a total of 6.

Then I would configure it like this:

vSwitch0: service console and VMotion on pnic1 & pnic3. You have already VLANed your VMotion traffic, hence this should be a secure solution. Put the service console on pnic1 with pnic3 as the standby adapter, and put the VMkernel for VMotion on pnic3 with pnic1 as standby.

vSwitch1: service console for iSCSI and VMkernel for iSCSI on pnic2 & pnic5.

vSwitch2: virtual machine traffic on pnic4 & pnic6.
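A sketch of that first vSwitch (vmnic names stand in for Frank's pnic1/pnic3). Note that the classic esxcfg-vswitch tool cannot set the per-port-group active/standby failover order; that part is done in the VI Client:

```shell
# vSwitch0: service console and VMotion sharing two uplinks
esxcfg-vswitch -a vSwitch0
esxcfg-vswitch -L vmnic1 vSwitch0
esxcfg-vswitch -L vmnic3 vSwitch0
esxcfg-vswitch -A "Service Console" vSwitch0
esxcfg-vswitch -A "VMotion" vSwitch0
# In the VI Client, per port group (NIC Teaming tab, override failover order):
#   Service Console: vmnic1 active, vmnic3 standby
#   VMotion:         vmnic3 active, vmnic1 standby
```

Each port group then normally runs on its own uplink, but fails over to the other NIC if a link drops.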

Frank

Best regards Frank Brix Pedersen blog: http://www.vfrank.org
Texiwill
Leadership

Hello,

Actually, having just 6 as you stated is a security concern. The SC should not be able to read the memory image of a VM as it is transferred, and since you have placed vMotion and the SC on the same vSwitch, that potential security issue can occur. Granted, it would take a slip-up in configuration, but it is extremely easy to make that slip-up. To alleviate this risk I would put vMotion on its own pNICs and mix the SC/iSCSI networks on the same pair of pNICs. Since the SC must participate in the iSCSI network, the potential security issues there cannot be avoided. Letting vMotion have its own pNICs and moving iSCSI/SC onto their own is the right call from a security perspective; this trades off against performance.

2 - SC/iSCSI

2 - vMotion

2 - VM

The main reason for adding 2 pNICs dedicated to iSCSI is performance (given a total of 8), as during your backups and deployments the SC is used intensively and would impact iSCSI and anything else necessary, like vMotion. Adding the 2 for just iSCSI will increase overall disk I/O performance and provide the necessary security.

Best regards,

Edward L. Haletky
dkfbp
Expert

Texiwill:

I do not agree. As long as you have your service console in one VLAN and your vMotion in another VLAN, it is not a security concern. You simply put two NICs on the same vSwitch and set the physical switch ports to TRUNK. On the service console you then define one VLAN, and on the VMkernel port you define the other. Now you have both security and redundancy with only two NICs.

Frank

Shook4Brains
Contributor

Just to let everyone know, I am ordering an additional quad-port NIC for each server, doubling my port count. I'll redesign from there. I just wanted to thank everyone for posting and guiding me on my inaugural post.

Have a great Thanksgiving,

Shook

Texiwill
Leadership

Hello,

VLAN tagging does not guarantee security on a vSwitch. Any VM in promiscuous mode (easy to do, and maybe necessary for various security auditing tools) can read data from any VLAN on the vSwitch regardless of the tag used. Because VLAN tagging on a vSwitch does not provide security, it is necessary to provide it in some physical manner.

Best regards,

Edward L. Haletky
dkfbp
Expert

Texi:

You have failed to point out why it is so bad to let VMotion and the service console share the same vSwitch when they are both running on different VLANs. You would never enable promiscuous mode on that vSwitch, and furthermore no virtual machines are running on it.

Frank

jliu
Contributor

Never enable promiscuous mode on a vSwitch or port group unless your user has a good reason and change control is followed.

No one has mentioned that your VLANs may come from different pairs of core switches. On my ESX hosts, I basically have dual uplinks to each different switch if I want that VLAN available on the ESX vSwitch. I have 3 pairs of core switches, and I need 6 NICs just for that.

To me, it is not necessary to separate iSCSI traffic from VM traffic as long as you have enough bandwidth to handle both. It is absolutely not a problem if you have, say, a 10 Gb link, and definitely a problem if you only have a 100 Mb link. In the real world, you may want to consider separate links if you are really heavy on iSCSI traffic, but that is purely a bandwidth design decision.

On top of that, I always have dedicated redundant links for the service console; it is scary if you lose the service console and can't get into the server quickly enough.

It is a good idea to use a private IP for VMotion.

Jeff

JDLangdon
Expert

current vSwitch idea

1. Storage/iSCSI vlan has a VMK on a dedicated NIC

2. VMotion/HA/DRS has another VMK on a dedicated NIC

3. Prod Servers, DEV/Test, users and internal wireless are VLAN'ed on a single NIC

4. Service console on a dedicated NIC

You could, and the key word here is COULD, combine the Storage/iSCSI VLAN, VMotion/HA/DRS, and the service console on a two-NIC bond. This would give you two available NICs to configure as a two-NIC bond for your Prod Servers, DEV/Test, users, and internal wireless networks.
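With four NICs, that 2+2 split could be sketched like this (the VLAN IDs are placeholders, and the physical switch ports behind each bond must be 802.1Q trunks for the tags to work):

```shell
# Bond 1: service console + iSCSI + VMotion as VLAN-tagged port groups
esxcfg-vswitch -a vSwitch0
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic1 vSwitch0
esxcfg-vswitch -A "Service Console" vSwitch0
esxcfg-vswitch -v 10 -p "Service Console" vSwitch0
esxcfg-vswitch -A "iSCSI" vSwitch0
esxcfg-vswitch -v 20 -p "iSCSI" vSwitch0
esxcfg-vswitch -A "VMotion" vSwitch0
esxcfg-vswitch -v 30 -p "VMotion" vSwitch0

# Bond 2: all VM networks, one port group per VLAN
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1
esxcfg-vswitch -A "Prod Servers" vSwitch1
esxcfg-vswitch -v 40 -p "Prod Servers" vSwitch1
```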

Jason

aleph0
Hot Shot

Hello Shook,

I think that 4 physical Ethernet ports are too few for a complex environment such as yours.

First: HA does not rely on a vSwitch, and HA does not rely on vMotion to work, so do not design around that.

If you want a completely redundant environment you need:

2 Ethernet ports for the service console: 1 onboard port and 1 add-in NIC port

2 Ethernet ports for vMotion: 1 onboard port and 1 add-in NIC port

2 Ethernet ports, bonded, on different NICs for iSCSI

the other ports for your VMNet(s), VLANned

I've attached an image with 2 onboard ports and an additional three-port NIC; however, you can use configuration 2, with no redundant vMotion and no redundant VMNet.

RED: Service console

BLUE: vMotion

VIOLET: iSCSI

Other color(s): VMNet(s) ---> VLANned

HTH

Aleph

http://virtualaleph.blogspot.com

\aleph0 | http://virtualaleph.blogspot.com/ | If you found this information useful, please consider awarding points for "Correct" or "Helpful". Thanks!
Texiwill
Leadership

Hello,

I agree, on a properly set-up system you would never allow promiscuous mode on the port group or the vSwitch; unfortunately this is not the case in the real world, nor is it possible to lock the system down so that promiscuous mode is simply not allowed. Since there is no software or ESX way to prevent this possibility, it is best not to make the situation too tempting, by using physical adapter separation. The VIC and SC do not have the additional layers of protection necessary to block this from happening.

Looking at this from a purely security perspective: if I can get to your SC, I can put the network in promiscuous mode, and therefore get not only to your VMDKs but also to any memory footprint that comes across the wire from a vMotion; I can just sit and wait for it to happen. This would give me further credentials to log in to other VMs. Granted, with the VMDK all bets are pretty much off, but this gets me the double whammy a hacker would revel in.

Yet let us consider the option where there is a VM on this same vSwitch that suddenly goes into promiscuous mode, just a simple VM for use by an admin. It is hacked, and now I do not even need access to the SC to get access to the VM data; I just put the network in promiscuous mode within the VM and away I go. I could also wait until you deploy the VM (as I am watching), grab the disk, wait for you to vMotion, and grab the memory image. Now I have everything I need to hack the VM. Not to mention the ability to poison the ARP cache and other nasties.

Physical separation would prevent at least one avenue of attack, and hopefully others. Remember, 70% of all attacks come from inside. This is an inexpensive level of protection that is worth having: you get the physical separation in case a mistake is made in configuration, and you gain performance for vMotion and SC actions. You may say your team would never do this and that you have plenty of controls in place, but all it takes is one simple mistake or intentional action and you are in dire straits.

Best regards,

Edward L. Haletky
jliu
Contributor

Quote:

I agree, on a properly set-up system you would never allow promiscuous mode on the port group or the vSwitch; unfortunately this is not the case in the real world, nor is it possible to lock the system down so that promiscuous mode is simply not allowed. Since there is no software or ESX way to prevent this possibility, it is best not to make the situation too tempting, by using physical adapter separation. The VIC and SC do not have the additional layers of protection necessary to block this from happening.

Hi, Texi:

Maybe I am missing something here. My understanding is that a VM will not be able to snoop on the VLAN or the wire if promiscuous mode on ESX (port group and vSwitch) is not enabled, and there is no way to change that except from VirtualCenter. Do you mean that it is hard to prevent someone from enabling promiscuous mode on ESX (with all the consequences), and that this is the security concern?

Thanks,

Jeff

Texiwill
Leadership

Hello Jeff,

If the change is never made, then security is fine, but the change is so easy to make as an administrator that someone could make it intentionally (say, for an IDS) or inadvertently by checking the box. If either happens, it is a trivial drop-down menu action to move any VM to that port group. I make mistakes on drop-down menus all the time; since there is no security at this level, a simple honest mistake could be the leverage a hacker needs. Saying "no, we will not do it" is one thing, but it also boils down to accountability and trust. If you add physical separation as described, you remove one possible vector of attack, because a simple mistake will no longer be the huge problem it could be.

Best regards,

Edward L. Haletky