VMware Cloud Community
JDLangdon
Expert

Hosting perimeter VMs

I'm in the process of opening up my virtual datacenter to our perimeter and would like to bounce a few ideas off anyone who may already be hosting external-facing websites and applications within vSphere 4.1.

As it stands right now, we have a single virtual datacenter consisting of 13 ESX host servers, a single vSwitch with NIC teaming for redundancy, and multiple port groups identified by 802.1Q VLAN tags.  All management and vMotion traffic is located within a dedicated DMZ which is only accessible from within the internal LAN and by those who manage the environment.

All virtual machines, regardless of classification, are currently hosted within the same virtual datacenter and on the same ESX servers.  All security is handled by physical firewalls which are managed by a team of dedicated security personnel.

The environment was designed based on VMware VI 3.5 best practices and passed a VMware Health Check with flying colors.  We have since completed an in-place upgrade to vSphere 4.1 ESX and have implemented the vSphere hardening guide suggestions that are relevant to our environment.

To mitigate the risk of a VM being placed on the wrong VLAN we have involved three separate teams.  A dedicated network team is responsible for physical switch modifications, a dedicated VMware team is responsible for all vSwitch modifications, and dedicated OS teams are responsible for configuring all TCP/IP settings within the guest OS.  There is no overlap in duties between these three teams.

All VMs are currently stored on a shared Fibre Channel SAN which is accessible to both internal and external physical servers.

Our security teams have okayed this design, but I would like to know if anyone else is running a similar environment, or has everyone else divided their hosts into separate datacenters?

Jason

14 Replies
Buck1967
Contributor

Jason,

This is the approach we have taken for deploying images and building out infrastructure. Understand I'm no expert here, but here goes nothing... It starts with our security group classifying the networks. I'm not a security person, but they color-code the networks as red, amber, purple, and green; I'm not sure if that is some type of security standard or not. We have separate clusters created for each security zone, and we never put VMs on the vSwitches that carry our Service Console and VMkernel interfaces. We do keep them all in the same virtual datacenter within vCenter, though. We have a few more ESX hosts than you, and we break them down into additional clusters as needed, based on factors such as special requirements or the number of hosts in a cluster. We try to lay out clusters of around 12 hosts.

I'd be interested in others' approaches to this. We are currently looking at the possibility of going to CNAs and utilizing FCoE, which introduces new design requirements. Maybe someone's approach can enlighten me on the best way to handle this.

Buck

JDLangdon
Expert

Buck1967 wrote:

This is the approach we have taken for deploying images and building out infrastructure. Understand I'm no expert here, but here goes nothing... It starts with our security group classifying the networks. I'm not a security person, but they color-code the networks as red, amber, purple, and green; I'm not sure if that is some type of security standard or not. We have separate clusters created for each security zone.

We are seriously considering putting all VMs in the same cluster.  Unless I'm missing something, I see no reason why hosting both internal and external-facing VMs on the same host would cause any type of security concern.

Jas

TomHowarth
Leadership

There are many reasons why this is not a good idea, and many reasons why it is.

Remember you are dealing with software, and just because there has not been a published exploit for the ESX kernel does not mean that there will not be one tomorrow or the day after.

That said, a lot of the issues regarding security on ESX hosts rest on the misconception among security people that ESX is Red Hat Linux, and that the vulnerabilities that exist in that OS therefore also exist in ESX. This is not the case: the Red Hat kernel is not the ESX kernel.

Personally, I am in the process of deploying a cross-zone vSphere environment for one of my clients; however, I am not just trusting base ESX. I have introduced vShield App and Edge to provide an extra level of protection.

Read Ed's (Texiwill's) book and show the relevant sections to your security team.  He does a very good job of debunking the Red Hat issues.

Tom Howarth VCP / VCAP / vExpert
VMware Communities User Moderator
Blog: http://www.planetvm.net
Contributing author on VMware vSphere and Virtual Infrastructure Security: Securing ESX and the Virtual Environment
Contributing author on VCP VMware Certified Professional on VSphere 4 Study Guide: Exam VCP-410
conyards
Expert

Thumbs up to Tom's comments.

Another factor you may like to consider is whether you or the business is planning to obtain any accreditation for the systems you are hosting.  Whilst there are persuasive arguments for consolidating different network zones onto the same clusters and vSwitches, one of the biggest counter-arguments is accreditation and how auditors view this shared hosting.  It is also worth remembering that many accreditations have not yet been reworked to take into account advancements such as vShield Zones and the like.

Sometimes, if this is in the mix, it is simply easier to separate the workloads out and build a dedicated cluster for them.

Regards

Simon

https://virtual-simon.co.uk/
Dr_Virt
Hot Shot

The problem I have with this is that "defense in depth" in the context of virtualization is driving the security paradigm away from the "transport" and toward the endpoint. The transport is quickly becoming a nebulous menagerie of various protocols, owners, and security postures. Even distinct security zones are crumbling as more and more integration arises and service clientele are both internal and external.

While I agree that most accreditations are behind the times security-wise (based on old methods and technologies) and may force your hand, we need to push the IT community to perform honest evaluations of security implementations and risk management.

We have successfully deployed a mixed VM environment and passed accreditation requirements. It was a struggle, as the initial design made the auditors uncomfortable; but when pressed to define their concerns so that we could address them, most were unsustainable.

I would separate your vSwitches as much as possible to minimize the “shock value” and proceed to maximize your investment.

bulletprooffool
Champion

I have read many whitepapers on this and, without a doubt, it can be managed safely. My biggest concern, though, is that no matter how careful I am in the initial configuration, it is all too easy for someone to make the mistake of dual-homing a VM with one NIC on each side of the firewall, thus bridging all of my security. As such, my approach is always to separate ESX clusters based on network level (though there is no reason to separate ESX datacenters, etc.).

One day I will virtualise myself . . .
JDLangdon
Expert

Alan van Wyk wrote:

I have read many whitepapers on this and, without a doubt, it can be managed safely. My biggest concern, though, is that no matter how careful I am in the initial configuration, it is all too easy for someone to make the mistake of dual-homing a VM with one NIC on each side of the firewall, thus bridging all of my security. As such, my approach is always to separate ESX clusters based on network level (though there is no reason to separate ESX datacenters, etc.).

There are several things I'd have to question with regard to your statement.  The first is why you would have a single VM with multiple vNICs, and the second is why you would not have roles in place to prevent people from modifying the network settings within the VM configuration.

While I do agree that this can be done, there should be enough policies and auditing in place to prevent something like this from happening.
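The auditing half of that backstop can be scripted against a periodic inventory export. Below is a minimal Python sketch of the idea; the zone map, port group names, and `{vm: [port groups]}` inventory layout are all illustrative assumptions (in a real environment the data would come from PowerCLI or the vSphere API):

```python
# Audit sketch: flag VMs whose vNICs span more than one security zone.
# ZONES and the inventory layout are illustrative assumptions, not a
# real vSphere export format.
ZONES = {
    "PG-Internal-VLAN10": "internal",
    "PG-DMZ-VLAN20": "dmz",
    "PG-Backup-VLAN30": "backup",
}

def bridging_vms(inventory):
    """Return sorted names of VMs whose NICs touch more than one zone."""
    flagged = []
    for vm, portgroups in inventory.items():
        zones = {ZONES.get(pg, "unknown") for pg in portgroups}
        if len(zones) > 1:
            flagged.append(vm)
    return sorted(flagged)

inventory = {
    "web01": ["PG-DMZ-VLAN20"],                           # single-homed, fine
    "app01": ["PG-Internal-VLAN10", "PG-Backup-VLAN30"],  # bridges two zones
}
print(bridging_vms(inventory))  # → ['app01']
```

A scheduled check like this does not prevent the mistake, but it shortens the window in which a zone-bridging VM goes unnoticed.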

jd

TomHowarth
Leadership

Although I agree with your comment on the human issue, or the Layer 8 issue as some have come to call it, if there are policies and procedures in place regarding the build of machines, coupled with defined roles and responsibilities, these risks are minimised.

Guest machines with multiple vNICs should be the exception, not the norm; network resiliency is built into the host layer, not the guest layer. Therefore the creation of a multi-homed guest VM should raise alarm bells with the deployment team, and:

  • questions should be raised as to the requirements
  • a recognised deployment procedure should be in place for those exceptions to "Guiding Principles"

NOTE: I do not say Best Practice, as what is considered "best" for one site may not be for another; people have come to rely on the term "Best Practice" as a lazy man's crutch.

Tom Howarth VCP / VCAP / vExpert
VMware Communities User Moderator
Blog: http://www.planetvm.net
Contributing author on VMware vSphere and Virtual Infrastructure Security: Securing ESX and the Virtual Environment
Contributing author on VCP VMware Certified Professional on VSphere 4 Study Guide: Exam VCP-410
bulletprooffool
Champion

The key to the statement was

JDLangdon wrote:

Alan van Wyk wrote:

.......without a doubt, it can be managed safely. My biggest concern, though, is that no matter how careful I am in the initial configuration, it is all too easy for someone to make the mistake .....

In some environments I come in and build a solution as a consultant. Once I leave the site, the keys to the VC are handed to someone else, who may or may not be as aware of the security implications.

For argument's sake, if I were the only VC admin, I would have full access rights to my VC - therefore I, much like anyone, can make mistakes, such as adding a second vNIC to a VM or changing the port group allocated (considering that in a DMZ environment I may be hosting IIS etc., where multiple interfaces are common, or have backup NICs attached to isolate backup traffic onto separate physical networks).

I clearly stated that I agreed it could be done; I am simply not overly keen, as HUMAN error can easily breach network security.

It is far easier to change a port group accidentally than to physically patch in an additional cable.

I have also spent a fair bit of time trying to determine whether there is a way of limiting the number of vNICs that can be assigned to a VM, but found no solution.
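Since there is no built-in setting that caps the vNIC count, a scheduled audit is the usual fallback. A minimal Python sketch of the check, with the `{vm: [nics]}` structure assumed purely for illustration (real data would come from something like PowerCLI's `Get-NetworkAdapter`):

```python
MAX_VNICS = 1  # policy assumption: guests are single-homed unless an exception is approved

def over_limit(vm_nics, limit=MAX_VNICS):
    """Return {vm: nic_count} for every VM exceeding the vNIC limit."""
    return {vm: len(nics) for vm, nics in vm_nics.items() if len(nics) > limit}

report = over_limit({
    "web01": ["nic0"],
    "app01": ["nic0", "nic1"],  # exceeds the single-NIC policy
})
print(report)  # → {'app01': 2}
```

Anything the report surfaces can then be checked against the approved-exceptions list rather than discovered by accident.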

One day I will virtualise myself . . .
bulletprooffool
Champion

Agreed, Tom - I was not saying this is the solution for all, just that where I do not control roles and responsibilities once I leave, I prefer to design solutions that limit the risk.

For the record, I have implemented both DMZ models before (isolated cluster / shared cluster) and we have never seen anyone make this mistake (though we had a close call when someone wrote a PowerCLI script to remap backup port groups en masse).

If you are in an environment where no VM ever gets two NICs, there is (almost) no risk.

One day I will virtualise myself . . .
JDLangdon
Expert

Alan Gerald wrote:

For argument's sake, if I were the only VC admin, I would have full access rights to my VC - therefore I, much like anyone, can make mistakes, such as adding a second vNIC to a VM or changing the port group allocated (considering that in a DMZ environment I may be hosting IIS etc., where multiple interfaces are common, or have backup NICs attached to isolate backup traffic onto separate physical networks).

I understand where you're coming from, and I do appreciate your input in this discussion.  While I agree that one could mistakenly assign a vNIC to the wrong VLAN, given the steps involved in adding a vNIC to a VM, I wouldn't call that a mistake.  Adding a second vNIC would have to be deliberate.

Keep in mind that not only does one have to mistakenly place a VM on the wrong VLAN, one also has to go into the OS and configure the IP settings to match the VLAN in question.  And one has to have the OS configured to route between network cards.

jd

Dr_Virt
Hot Shot

I find it interesting that this discussion has come to focus on trusting the admin and mistakes.

In all roles in IT, there is a certain level of trust that must be treated as accepted risk. Domain admins must be trusted with authentication and identity, messaging admins must be trusted with information flows, and network admins must be trusted with firewall/IDS/port-security implementation.

Why then are we dealing with an issue of trust here? Any of the above roles can "accidentally" misconfigure something and wreak havoc on dependent services, or abuse authority in their given domains. Yet IT as a whole has come to accept these risks as normal activity.

There are plenty of steps, as highlighted above, required for an "accidental" misconfiguration to yield any fruit. Simply moving a VM's vNIC to the DMZ port group would only blackhole the box (assuming separate network scopes).

So again, why are we arguing this? Is this concern coming from competing silos that struggle over the management of the "last mile"? Are we trying to somehow prove we measure up to the security team because of distrust? The virtualization administrator has the responsibility to manage and modify the application of resource silos to provide a required solution. Just as the systems administrator, network administrator, and messaging administrator are trusted to perform their roles with a high degree of professionalism and accuracy, so should we be.

JDLangdon
Expert

Dr.Virt wrote:

I find it interesting that this discussion has come to be focused on trusting the admin and mistakes?

In all roles in IT, there is a certain level of trust that must treated as accepted risk. Domain Admins must be trusted with authentication and identity, messaging admins must be trusted with information flows, network admins must be trusted with firewall/IDS/port security implementation.

Why then are we dealing with an issue of trust here? Any of the above roles can "accidentally" misconfigure something and wreak havoc on the dependent services or abuse authority in their given domains. Yet, IT as a whole has come to accept these risk as normal activity.

I think the problem is that the role of VC admin is so new that most "upper management" do not understand the role or the duties associated with it.  This places a much higher level of trust on the position.

Personally, I don't see this changing anytime soon.  The main turning point will come when VC admins start to move into upper-management roles.

bulletprooffool
Champion

JDLangdon wrote:


Keep in mind that not only does one have to mistakenly place a VM on the wrong VLAN, one also has to go into the OS and configure the IP settings to match the VLAN in question.  And one has to have the OS configured to route between network cards.

jd

True in most cases, JD, though in our instance the backup network is DHCP-enabled.

I am protecting against the worst case, but again stress that this can be implemented safely as long as your admins are careful and diligent.

One day I will virtualise myself . . .