Contributor

Service Console - DRS - vMotion - VLAN


Hello Community,

Do I run into problems if I separate the service console from the vMotion network (two VLANs) using one NIC port (1 Gbit), with DRS/vMotion for the VMs in a third VLAN (built with three NIC ports)?

Thank you

Marcus

Leadership

Hello,

Do I run into problems if I separate the service console from the vMotion network (two VLANs) using one NIC port (1 Gbit), with DRS/vMotion for the VMs in a third VLAN (built with three NIC ports)?

A few things: it looks like you have 4 pNICs. In this case I would do either:

1 pNIC for SC (no redundancy, but physical security)

1 pNIC for vMotion (no redundancy, but physical security)

2 pNIC for VMs (redundancy in failover or load balancing)

or slightly less secure but more redundancy:

2 pNIC for SC/vMotion (failover redundancy, less physical security for vMotion)

2 pNIC for VMs

Having 3 pNICs for the VM network really does not buy you much other than possible headaches down the line. More than 2 pNICs per NIC team is not recommended, as you either just have more failover paths or problems with load balancing if a pSwitch or link goes away. In some cases with > 2 pNICs it could take 10 minutes for the vNetwork to recover.

You have tradeoffs between redundancy, security, and performance. If you are using iSCSI you should add two more pNICs; if you are using a SAN, 2 FC-HBAs.
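For reference, the second option (SC/vMotion sharing one pNIC pair, VMs on the other) could be laid out from the service console roughly as follows. This is a sketch only, assuming the ESX 3.x esxcfg tools; the vmnic numbers, port group names, VLAN IDs, and addresses are placeholders to adapt to your own hardware and VLAN plan:

```shell
# Sketch only -- vmnic numbers, VLAN IDs, and the vMotion address are
# assumptions. vSwitch0 usually exists after install; skip "-a" if so.

# vSwitch0: SC + vMotion sharing two pNICs (failover redundancy)
esxcfg-vswitch -a vSwitch0
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic1 vSwitch0
esxcfg-vswitch -A "Service Console" vSwitch0
esxcfg-vswitch -v 10 -p "Service Console" vSwitch0   # assumed SC VLAN ID 10
esxcfg-vswitch -A "VMotion" vSwitch0
esxcfg-vswitch -v 20 -p "VMotion" vSwitch0           # assumed vMotion VLAN ID 20
esxcfg-vmknic -a "VMotion" -i 192.168.20.11 -n 255.255.255.0

# vSwitch1: VM traffic on the remaining two pNICs
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1
esxcfg-vswitch -A "VM Network" vSwitch1
esxcfg-vswitch -v 30 -p "VM Network" vSwitch1        # assumed VM VLAN ID 30

esxcfg-vswitch -l   # verify the resulting layout
```

The corresponding pSwitch ports would need to be 802.1Q trunks carrying those VLANs.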

Best regards,

Edward L. Haletky, author of the forthcoming 'VMware ESX Server in the Enterprise: Planning and Securing Virtualization Servers', publishing January 2008, (c) 2008 Pearson Education. Available on Rough Cuts at http://safari.informit.com/9780132302074

--
Edward L. Haletky
vExpert XII: 2009-2020,
VMTN Community Moderator
vSphere Upgrade Saga: https://www.astroarch.com/blogs
GitHub Repo: https://github.com/Texiwill
Hot Shot

Hi Texiwill,

I'm in the process of building a new VMware infrastructure with ESX 3.x. I'm thinking the networking part will be as follows:

H/W:

2 BL460c c-Class blades

Originally 2 pNICs; I will add 6 pNICs via the mezzanine card and create VLANs in the Cisco 3650 switches. It will be as follows:

1. 2 pNICs for VC

2. 2 pNICs for vMotion/DRS

3. 2 pNICs for VMs Production

4. 2 pNICs for DMZ

Am I going down the right path?

Please refer to the attached document and advise.

Thanks in advanced

Best Regards, Hussain Al Sayed

Leadership

Hello,

1. 2 pNICs for VC

This would be for all administration, I would assume, including network-based backup, depending on how you are doing this of course.

2. 2 pNICs for vMotion/DRS

3. 2 pNICs for VMs Production

4. 2 pNICs for DMZ

This looks good to me from a networking perspective. You have a good split on security, redundancy, and performance. The only way to make it more secure is to have DMZ-based hosts and Production-based hosts... As it is now, it will be very simple for a Production machine to suddenly appear in the DMZ unless you are very careful about how you handle the use of the VIC. I would only allow admins to change any network settings of the VMs.

Best regards,

--
Edward L. Haletky

Enthusiast

For me, due to different NIC brands in my HP DL585s, I have two pNICs trunked with both the Service Console and VMkernel VLANs on them, attached to a vSwitch for these two only. These are the two onboard Broadcom NICs.

I also have three dual port Intel NIC cards for six ports in total, from which I use three separate vSwitches. One is for layer three routed guests, one is for some guests that are on separate physical switching infrastructure, and the third is for our layer two network stretched from our Prod to DR sites. I have patched these to our various pSwitches in such a way that I have switch redundancy (all pSwitches are paired) and NIC redundancy.

I have separate HP BL25p's for DMZ guests and I use the built in NIC's again for the SC and VMK, with the other two add-in NIC's for DMZ guests. Unfortunately, 25p's only allow up to four pNIC's, so this is all these are capable of.

I have found no performance issues in relation to having the SC and VMK on the same vSwitch using the same pNIC's and there are no security risks as they are on separate VLAN's.

Cheers, Pete

Hot Shot

Hello Texiwill,

Thanks for your reply. Something came to mind: use only 6 pNICs and control vMotion and HA/DRS via the HBAs. Since the servers will be connected to the MSA 1000 SAN storage, and each server will have 2 HBAs for redundancy, is it possible to control vMotion/HA and DRS via the HBAs?

The infrastructure design has been modified; could you have a look at it?

BR,

Habibalby

Best Regards, Hussain Al Sayed

Hot Shot

Mork,

In my case I will be using HP BL460c G1 blades, the new model. These servers by default come with 2 pNICs and no HBAs. I will buy mezzanine cards to give me 4 more pNICs, plus a mezzanine card for 2 HBAs. The total will be 6 pNICs, where each pair will be teamed for redundancy and HA.

2 pNICs (for Service Console and VMkernel, separate VLAN01)

2 pNICs (for VMs in Production, separate VLAN02)

2 pNICs (for VMs in DMZ, mixed with VMs in Production such as MS ISA Server, separate VLAN03)

For vMotion, HA, and DRS, is it possible to control this via the HBAs?

BR,

Habibalby

Best Regards, Hussain Al Sayed

Leadership

Hello,

HBAs do not give you vMotion, DRS, or really HA; they are just a separate storage path used when these technologies fire. vMotion, HA, and DRS all use the Service Console to initiate the vMotion or failover, while DRS and vMotion then use the vMotion channel to move data. Since HA does not use vMotion, it does all its communication over the SC.

For full redundancy with 6 pNICs, if you do not have DMZ style systems use:

2 pNICS for SC

2 pNICs for vMotion

2 pNICs for VM Network

2 FC-HBAs for storage

You really never want your SC and vMotion network on the same set of pNICs from a security perspective. I know people do this all the time to gain redundancy, using VLANs to protect the vMotion and SC networks from each other; however, VLANs just direct traffic and offer no real protection from a promiscuous-mode Ethernet adapter. This is a tradeoff you may have to make to ensure you have enough pNICs for a DMZ network if you are limited to 6 pNICs. Not one I would like to make.

If you have a DMZ + Production nodes you will want at least 8 pNICs for redundancy, security, and performance. Just add 2 more pNICs for a DMZ network per the above.

Never mix DMZ and Production servers on the same vSwitch; that is a recipe for disaster. You want DMZ traffic 100% separate from internal traffic all the way down to the vSwitch. With only 6 pNICs you will be forced to use something like:

1 pNICs for SC (failover pNIC is the vMotion pNIC)

1 pNIC for vMotion (failover pNIC is the SC pNIC)

2 pNICs for DMZ

2 pNICs for Production

In the above case, only in a failure case will the SC and vMotion live on the same pNIC thereby keeping things separate until there is a failure, which you could then quickly fix. You maintain the best security with 6 pNICs this way.
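A rough service-console sketch of that 6-pNIC layout, assuming the ESX 3.x esxcfg tools. The vmnic numbers, names, and addresses are placeholders, and the per-port-group active/standby order (which keeps the other pNIC as failover-only) is set in the VI Client, since the esxcfg commands do not expose NIC teaming order:

```shell
# Sketch only: vmnic numbers, portgroup names, and the vMotion address
# are assumptions. vSwitch0 may already exist after install; skip "-a".

# vSwitch0: SC and vMotion share two uplinks. In the VI Client, set
# "Service Console" active on vmnic0 / standby on vmnic1, and
# "VMotion" active on vmnic1 / standby on vmnic0, so they only share
# a pNIC during a failure.
esxcfg-vswitch -a vSwitch0
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic1 vSwitch0
esxcfg-vswitch -A "Service Console" vSwitch0
esxcfg-vswitch -A "VMotion" vSwitch0
esxcfg-vmknic -a "VMotion" -i 192.168.20.11 -n 255.255.255.0

# vSwitch1: Production VMs
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1
esxcfg-vswitch -A "Production" vSwitch1

# vSwitch2: DMZ VMs -- its own vSwitch, never shared with Production
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic4 vSwitch2
esxcfg-vswitch -L vmnic5 vSwitch2
esxcfg-vswitch -A "DMZ" vSwitch2
```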

I am writing several blog posts on vNetworking; may I use this as an example (without your document, of course)?

Best regards,

--
Edward L. Haletky
Champion

The HBAs have nothing to do with VMotion, DRS, or HA; all this traffic goes over network cards.

With 6 NICs and a DMZ network needed, this is the configuration I went with.

pNICs 1 & 2: trunk for the Service Console and VMotion networks. These are 2 dedicated VLANs for only the SC and VMotion. The vSwitch has all security options set to "Reject". Follow the other best practices for Service Console security, and I would say this is secure enough for most companies; of course, you need to decide this for yourself. You absolutely want Service Console redundancy for HA to work properly.

One additional thing: pNIC 1 is primary for the SC and standby for VMotion, and pNIC 2 is primary for VMotion and standby for the SC.

pNICs 3 & 4: trunk for the internal network(s)

pNICs 5 & 6: trunk for the DMZ network

Don Pomeroy

VMTN Communities User Moderator

Hot Shot

My 2 cents: cross the ports for added availability.

i.e.: SC = 1 physical port on board + 1 physical port on pNIC1 (bonded), ..., DMZ = 1 physical port on pNIC2 + 1 physical port on pNIC3 (bonded)

\aleph0
http://virtualaleph.blogspot.com/

If you found this information useful, please consider awarding points for "Correct" or "Helpful". Thanks!!!
Enthusiast

To be honest, I'm surprised anyone has issues with sharing the same pNICs for the SC and VMK. As someone mentioned earlier, if your vSwitch security policy is set to reject, there should be no issues, unless of course you don't trust your VMware admins. But to each their own, of course... company policy definitely differs :)

I've just resigned from the company I was working for, but there, every single VLAN is firewalled and yes, this includes all internal VLAN's. So my SC can't talk to my VMK unless it is explicitly allowed by firewall rules. It's very ugly on initial setup, but doesn't cause me grief.

Anyway, best practice would say separate vSwitches for these, but in a lot of cases, when your onboard pNICs are a different brand to your add-in pNICs and you need all the add-in pNICs for VMs, there's not a lot of choice.

Anyway, to address the original question, fibre HBA's only provide paths to the SAN for your hosts, they have no abilities to provide DRS/Vmotion/HA other than ensuring all your hosts see the same Datastores.

And aleph0, yes, that's what we did with the DL585's... one pNIC from one dual port card is teamed with a pNIC from one of the other dual port cards. This way all three VM vSwitches have card, cable, and switch redundancy.

Cheers, Pete

Enthusiast

And Texiwill, I'd definitely be interested in having a look at your new publication :)

Hot Shot

agree with Mork: want to see the publication!!!!

Cheers :D

\aleph0
http://virtualaleph.blogspot.com/

If you found this information useful, please consider awarding points for "Correct" or "Helpful". Thanks!!!
Champion

That was my other consideration: the onboard NICs were different from the additional NICs, and in the past it was not recommended to mix Broadcom- and Intel-based NICs in the same bond. So I chose to use the two onboard NICs for the SC and VMotion.

Don Pomeroy

VMTN Communities User Moderator

Leadership

Hello,

I've just resigned from the company I was working for, but there, every single VLAN is firewalled and yes, this includes all internal VLAN's. So my SC can't talk to my VMK unless it is explicitly allowed by firewall rules. It's very ugly on initial setup, but doesn't cause me grief.

Well, within the vNetwork that firewall has absolutely no effect if:

a: SC and vMotion share the same vSwitch

b: Promiscuous mode is allowed (Reject is the default, but for some reason people change this).

c: They are on the same subnet....

There is no way to place a firewall between the SC and VMkernel devices without some form of physical separation, which forces things onto separate vSwitches. I am actually gathering together all the hacks/attacks that can affect a virtual environment, and so far I have been surprised by the MITM possibilities... even without promiscuous-mode adapters.

I like physical separation because I tend not to trust anyone outside the administrative staff. Considering that 70% of attacks come from inside, can you afford to trust anyone? Even within the administrative staff we restrict rights whenever necessary.

Physical separation of the networks is the only defense against promiscuous-mode Ethernet adapters, and while by default they are not allowed, there are quite a few security products out there that actually require this to be set up. Once it is set up for one port group, it is very easy to drop a system on that network and gain access to everything I should not have. Defense: use physical separation wherever possible and monitor the configuration for such changes. I look over the network configuration pretty regularly.
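One way to automate "monitor the configuration for such changes" is a baseline-and-diff check from the service console. This is a sketch only: the paths and mail recipient are assumptions, it presumes the service console can send mail, and note that `esxcfg-vswitch -l` shows port groups, uplinks, and VLAN IDs but not the security policy itself, which still needs reviewing in the VI Client:

```shell
# Sketch: snapshot the vNetwork layout once, then diff against it
# (e.g. from cron) to spot added port groups or moved uplinks.
# Paths and the mail address are placeholders.

# Run once, while the configuration is known-good:
esxcfg-vswitch -l > /var/tmp/vswitch.baseline

# Later, or from a cron job:
esxcfg-vswitch -l > /var/tmp/vswitch.current
if ! diff -q /var/tmp/vswitch.baseline /var/tmp/vswitch.current > /dev/null; then
    echo "vNetwork configuration changed -- review immediately" | \
        mail -s "ESX vSwitch change" admin@example.com
fi
```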

I still think that load balancing requires like Ethernet adapters, while failover does not... but I'm not sure; I tend to always match them up out of practice.

They say the book is on schedule! I will be writing more blogs on this subject as I find free time.

Best regards,

--
Edward L. Haletky
Enthusiast

I must totally agree with your point from the trust/attack side of things for sure.

The SC and VMK are actually on separate networks, so routed traffic must pass through the firewall and all security policies are set to reject. But, of course, they do share the same vSwitch.

The main reason for joining these was that, AFAIK, teaming different-brand pNICs is unsupported by VMware, but I may have read that wrong and it may just not be recommended.

Of course, if I had an extra pair of pNICs, separating the SC and VMK onto different vSwitches would be ideal :)

Anyway, glad to hear your book is on time!

Hot Shot

Hi Texiwill,

Sorry, it's my misunderstanding about vMotion/HA and DRS. According to the technical sales representative at the company we deal with, we cannot put 8 pNICs in the BL460c G1; the maximum we can go for is 6 pNICs. In this case, is the recommended way what you wrote for me?

1 pNICs for SC (failover pNIC is the vMotion pNIC)

1 pNIC for vMotion (failover pNIC is the SC pNIC)

2 pNICs for DMZ

2 pNICs for Production

Or should I insist on 8 pNICs? According to the HP sales representative, we cannot go with 8 in the BL460c; in that case we would have to go for the BL480 or 680.

What are your recommendations?

BTW, does the ESX server in the DR site have to have the same configuration and networking setup as the primary site? If we want to replicate from the primary site to the DR site, how do we achieve this?

Thanks for your support; I really appreciate it.

BR,

Best Regards, Hussain Al Sayed

Hot Shot

IMHO 6 Gigabit Ethernet ports are more than OK! You'd better use VLAN IDs and trunking...

I've worked on a project at a customer site with ESX installed on BL45p blades --> 4 Ethernet ports (5 VLAN IDs: 1 for the SC, 1 for VMotion, the others for VMs); see the attached image (some details erased for privacy).

pNICs bonded for availability.
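With virtual switch tagging (VST), the pSwitch ports are configured as 802.1Q trunks and each port group carries its own VLAN ID, so one bonded pair of uplinks can serve several VLANs. A hedged sketch, assuming the ESX 3.x esxcfg tools; the VLAN IDs and names are placeholders:

```shell
# Sketch only: VLAN IDs and names are assumptions; the pSwitch ports
# connected to these uplinks must be configured as 802.1Q trunks
# carrying all of the listed VLANs.
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1          # two uplinks, bonded for availability
esxcfg-vswitch -L vmnic3 vSwitch1

esxcfg-vswitch -A "VM VLAN 101" vSwitch1   # one port group per guest VLAN
esxcfg-vswitch -v 101 -p "VM VLAN 101" vSwitch1
esxcfg-vswitch -A "VM VLAN 102" vSwitch1
esxcfg-vswitch -v 102 -p "VM VLAN 102" vSwitch1
```

Each VM then attaches to the port group matching its VLAN, and the vSwitch handles the tagging.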

\aleph0
http://virtualaleph.blogspot.com/

If you found this information useful, please consider awarding points for "Correct" or "Helpful". Thanks!!!
Hot Shot

Hi aleph0,

Thanks for your reply. The picture you posted is a little unclear to me; can you please be more specific? What I have in mind is the following:

  1. 2 pNICs (for VC - teamed together. Create VLAN01 in the Cisco switch and connect them, then make a vSwitch and assign those pNICs to it)

  2. 2 pNICs (for Production - teamed together. Create VLAN02 in the Cisco switch and connect them, then make a vSwitch and assign those pNICs to it)

  3. 2 pNICs (for DMZ - teamed together. Create VLAN03 in the Cisco switch and connect them, then make a vSwitch and assign those pNICs to it)

  4. 2 pNICs (teamed together. Create VLAN04 in the Cisco switch and connect them, then make a vSwitch and assign those pNICs to it)

Am I going down the right track, or should something else be considered?

With 6 pNICs, how will the VLANs be achieved?

Thanks,

Best Regards, Hussain Al Sayed