VMware Cloud Community
rightfoot
Enthusiast
Jump to solution

Protecting/Securing ESXi

I use ESXi to host a small number of public web servers. I use a 10.0.0.0/24 net for ESXi management, and the guests are in a 192.168.0.0/16 net.

I constantly read about new ways to attack networks, especially ones which use virtual hosts and guests. I find quite a bit of information, but most of it seems to be enterprise-related. What should I be doing to protect this setup beyond what I have already done and the usual measures, such as a network firewall and other standard server hardening?

In other words, are there some special things I should be doing to protect guests on ESXi or ESXi itself from remote users?

My setup is as follows.

ESXi hosts are blades on a BladeCenter chassis.

Each blade has direct FC access to storage units.

Each host runs directly off of the FC storage.

Thanks.

Reply
0 Kudos
1 Solution

Accepted Solutions
Texiwill
Leadership
Jump to solution

Hello,

You already stated the issue you have: at the chassis/pSwitch level it looks like you bonded your pNICs. You cannot do that here.

To use NIC0 for SC all you need to do within ESX is assign NIC0 to the SC vSwitch.

To use NIC1 for VM Network all you need to do within ESX is create a new vSwitch and assign NIC1 to that vSwitch.

Simple to do.

However, do not bond the ports at the chassis level or physical switch level.

Two vSwitches, one pNIC assigned to each... that is the ESX side of things. If you have other issues with traffic, look at your pSwitch setup; it sounds incorrect if bonding is involved. These ports should NOT be bonded at the pSwitch.

This sounds like an IBM blade, and if so, there are several good write-ups from IBM and others on this problem and how to solve it.

pSwitch<->pNIC0<->vSwitch0 (Service Console/Management Appliance)
pSwitch<->pNIC1<->vSwitch1 (VM Network)
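
If it helps, here is a rough sketch of checking that mapping with the vSphere remote CLI (vicfg-vswitch); the host address and credentials below are placeholders for your actual management IP and account:

# list the vSwitch layout; vSwitch0 should show only vmnic0 as its uplink and vSwitch1 only vmnic1
vicfg-vswitch --server 10.0.0.11 --username root -l

# if a pNIC still shows up as an uplink on the wrong vSwitch, unlink it, for example:
vicfg-vswitch --server 10.0.0.11 --username root -U vmnic1 vSwitch0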


Best regards,
Edward L. Haletky VMware Communities User Moderator, VMware vExpert 2009

Now Available: 'VMware vSphere(TM) and Virtual Infrastructure Security'

Also available 'VMWare ESX Server in the Enterprise'

Blogging: The Virtualization Practice | Blue Gears | TechTarget | Network World

Podcast: Virtualization Security Round Table Podcast | Twitter: Texiwll

--
Edward L. Haletky
vExpert XIV: 2009-2023,
VMTN Community Moderator
vSphere Upgrade Saga: https://www.astroarch.com/blogs
GitHub Repo: https://github.com/Texiwill

View solution in original post

Reply
0 Kudos
27 Replies
Dave_Mishchenko
Immortal
Jump to solution

I've moved your post to the Security and vShield Zones forum.

Separating your networks as you've done is an important first step. Ideally, none of the hosts in your DMZ would have access to the management network.

Are these standalone hosts, or are you using vCenter Server as well? Are you managing the hosts on your own or with others? If with others, ideally you would move away from using root and/or shared accounts. You can also consider syslogging if you need to keep ESXi logs for long-term analysis. Depending on your licensing level, you could also look at vShield to further isolate your VMs. Have you taken a look at the hardening guide for vSphere - http://communities.vmware.com/docs/DOC-12306?
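
If the hosts are shared with other admins, a minimal sketch of adding a named local account with the vSphere CLI instead of handing out root would look something like this (the host address, login, and password are placeholders, and this assumes vicfg-user is available in your CLI version):

# add a dedicated local user on the ESXi host
vicfg-user --server 10.0.0.11 --username root -e user -o add -l dave -p 'S0me-Passw0rd'

# list local users to confirm
vicfg-user --server 10.0.0.11 --username root -e user -o list

You would still assign the appropriate role/permissions to that account through the vSphere Client.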




Dave

VMware Communities User Moderator

Now available - vSphere Quick Start Guide

Do you have a system or PCI card working with VMDirectPath? Submit your specs to the Unofficial VMDirectPath HCL.

Reply
0 Kudos
rightfoot
Enthusiast
Jump to solution

> Separating your networks as you've done is an important first step. Ideally, none of the hosts in your DMZ would have access to the management network.

I've been trying to get the network split into separate nets but have not completed this yet. As such, the 10s and the 192s are on the same LAN at the moment, and they are using the same Ethernet interface on the blade. What I want to get to ASAP is using, say, Ethernet0 on each blade for guests and Ethernet1 on each blade for management. However, what is not clear to me is whether there is any value in that over simply leaving it how I have it right now.

> Are these standalone hosts, or are you using vCenter Server as well? Are you managing the hosts on your own or with others?

I manage each guest manually; no vCenter or any other centralized tool.

> Ideally you would move away from using root and/or shared accounts if that is the case.

I do change the SSH ports on all servers when they come to life. I also send web syslogs to a central logger and use OSSEC with OSSIM, which I've yet to finish setting up.

> You can also consider syslogging if you need to keep ESXi logs for long-term analysis.

I have not done anything with ESXi logs to date; this is why I am posting, to better understand what I should be looking for.

Reply
0 Kudos
Dave_Mishchenko
Immortal
Jump to solution

With the configuration as it is now, if a web server were compromised, an attacker would be able to add an IP address and access the management port for ESXi (and perhaps other management IPs like the BladeCenter MM or your SAN). For that reason alone it would be worthwhile to separate your VMs onto vmnic0 and management onto vmnic1. You would also have to ensure that the rest of your networking infrastructure doesn't connect those subnets without traffic first having to cross a firewall.

So you would end up with two virtual switches, each connected to one of the NICs, with the VMs on a virtual machine port group on one vSwitch and the management IP set up as a VMkernel port on the other vSwitch. Your VM vSwitch would not have a VMkernel port (IP) set up on it.
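
As a rough sketch of that layout with the vSphere remote CLI (the host address, credentials, and names below are placeholders; a default install already has vSwitch0 and the management VMkernel port, so only the missing pieces would apply):

# VM traffic: its own vSwitch with vmnic0 as the only uplink
vicfg-vswitch --server 10.0.0.11 --username root -a vSwitch1
vicfg-vswitch --server 10.0.0.11 --username root -L vmnic0 vSwitch1
vicfg-vswitch --server 10.0.0.11 --username root -A "VM Network" vSwitch1

# management: vSwitch0 keeps only vmnic1 as its uplink
vicfg-vswitch --server 10.0.0.11 --username root -U vmnic0 vSwitch0
vicfg-vswitch --server 10.0.0.11 --username root -L vmnic1 vSwitch0

# the management VMkernel port normally already exists on vSwitch0; adding one by hand would look like:
# vicfg-vmknic --server 10.0.0.11 --username root -a -i 10.0.0.21 -n 255.255.255.0 "Management Network"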

The downside to that setup is that you would lose NIC redundancy. Some might suggest a single vSwitch and using VLANs to separate your VM and management traffic. The hardening guide states that VLANs will provide sufficient isolation as long as you don't misconfigure something.

ESXi logs are stored in a RAM disk, so when you reboot the logs are gone. You can configure syslogging to a central server or store the files on a datastore. See the example here - http://vm-help.com/esx/esx3i/esx_3i_rcli/vicfg-syslog.php. You get all of ESXi's log data, so it may be sufficient to use the datastore option to store the files rather than a central server, unless you have a security requirement to preserve the logs.
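
For example, pointing ESXi at a central syslog server with vicfg-syslog might look like this (the host address, credentials, and syslog server IP/port are placeholders):

# show the current syslog settings
vicfg-syslog --server 10.0.0.11 --username root -i

# forward logs to a central syslog host on UDP 514
vicfg-syslog --server 10.0.0.11 --username root -s 10.0.0.50 -p 514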

Reply
0 Kudos
rightfoot
Enthusiast
Jump to solution

That is a lot of good information which I need to digest.

I use a Juniper multi-port firewall and do have something already set up which might work here. On the Juniper, I have ports which I use for separate LAN segments, one of which is in the 172.16.0.0 range for a server which I wanted separated from the main LAN.

On the BladeCenter chassis, I have two Ethernet modules which break out to individual cables. The blades which are on the 172 segment have their NIC0 connected to a layer-2 switch, which in turn is connected to one of the Juniper ports. Anything that needs to come in or out of the main LAN has to go through the firewall and has policies in place. NIC1 isn't being used on any of those blades.

I could move all of my public servers to that segment, for example. I would still have only one NIC per service, but on the other hand, I run most things on two or more servers, which means I load balance across servers for most services.

What I can't wrap my brain around here is that since the blades are in fact standalone, wouldn't I only need one blade to control the chassis? I use non-intelligent KVM modules which do have their own Ethernet ports, though. Or do I basically need to give up one of the modules for management alone?

I hope this makes some sense, I'll try again if it doesn't.

Reply
0 Kudos
rightfoot
Enthusiast
Jump to solution

I should probably also mention that I don't use any major centralized tools (I'm just playing around with IBM Director, for example); servers are all maintained individually.

Reply
0 Kudos
Dave_Mishchenko
Immortal
Jump to solution

>> What I can't wrap my brain around here is that since the blades are in fact standalone, wouldn't I only need one blade to control the chassis? I use non-intelligent KVM modules which do have their own Ethernet ports, though. Or do I basically need to give up one of the modules for management alone?

I haven't worked on the newer BladeCenters, but the original one required a management module, and IBM Director used it to get data on the chassis and blades.

Dave

VMware Communities User Moderator

Now available - vSphere Quick Start Guide

Do you have a system or PCI card working with VMDirectPath? Submit your specs to the Unofficial VMDirectPath HCL.

Reply
0 Kudos
rightfoot
Enthusiast
Jump to solution

I'm sorry, I confused the thread. I didn't mean the blade chassis; I meant the ESXi server on each blade. Yes, the chassis is controlled via an IP on the management module, so that's not an issue.

The issue is that each blade has two NICs. Once I install ESX, I need to pick a management IP for the blade.

If I pick one of the NICs, then that is using up one NIC for ESX management alone. I'm not sure if that's overkill or simply how it's done.

If I pick a virtual IP over one of the NICs which is on the LAN side, meaning not on the separated LAN which is meant to be used for public traffic, then I'm not sure how a user couldn't still add an IP, if they gained access to the server, in order to get back into my network.

I of course block most incoming services, but there are always trojans, injections, and other things a user could send back into my network over traffic that is allowed, which would then give them remote access to the main LAN.

What a sick Internet this has become, so many hackers wasting such fantastic resources. Anyhow... the above is where I am at.

Reply
0 Kudos
rightfoot
Enthusiast
Jump to solution

A shot in the dark, but... is there any way of using the BladeCenter's MM to reach the ESX management functions on each blade? The management IPs are in the 10.0.0.0 range, and so is the MM's. Or perhaps some other way, but one that would leave the main Ethernet modules for the blades alone.

Reply
0 Kudos
rightfoot
Enthusiast
Jump to solution

Update on what I've done to change this setup.

I created a new LAN on an interface of the Juniper.

I reconfigured an ESX blade that has two public web servers on it so that NIC0 is connected to a switch on the new LAN. NIC1 is connected in VLAN1, and I updated ESX so that its management NIC is NIC1 only. Now all of the public services run over NIC0 for each guest, and the guests don't have a second NIC installed. All traffic flows back and forth through the firewall for ESX management, DNS, MySQL, and a few other basic things. I'll next put a DNS server on that blade so that I can eliminate that extra traffic as well.

How is this as a first step, right direction or missing the point?

Reply
0 Kudos
rightfoot
Enthusiast
Jump to solution

Ok, so things aren't working right after all.

I have the guests using Ethernet0 on module #1 (top), which is connected to the new LAN segment.

I have Ethernet1, on the lower module #2, connected inside the LAN.

Everything, ESX management and guests, was on module #1, so Ethernet0.

I changed the ESX management port to NIC1, and traffic to the guests stops dead.

If I switch it around so that I am using NIC2 instead, then I get ESX management but no guest traffic.

So right now, I have traffic to the guests and only console access to ESX.

What's going on?

Reply
0 Kudos
Dave_Mishchenko
Immortal
Jump to solution

See the Native VLAN issue sections in this document - http://www.vmware.com/pdf/esx3_vlan_wp.pdf.




Dave

VMware Communities User Moderator

Now available - vSphere Quick Start Guide

Do you have a system or PCI card working with VMDirectPath? Submit your specs to the Unofficial VMDirectPath HCL.

Reply
0 Kudos
rightfoot
Enthusiast
Jump to solution

I was thinking about that earlier, but here's what doesn't make sense to me.

I now have the two BladeCenter modules connected to two different switches. One is VLAN1 and the other is a separate LAN on the Juniper.

Why would I need to run a VLAN if I just want to dedicate the second Ethernet port to ESX?

And besides, if I do that, then I get to use both NICs for the guests, but I would then have the same problem of someone being able to gain access to the server and add an IP, no? Not to mention that I would also have to task the Juniper with dealing with a VLAN rather than straightforward LANs on its interfaces.

I guess I'm also a bit confused about the use of the NICs from ESX's point of view. When I pick a NIC as a management link, am I giving up that Ethernet NIC for any other use? Why would one shut off when I use the other as I did earlier?

Reply
0 Kudos
Dave_Mishchenko
Immortal
Jump to solution

>If I pick one of the NICs, then that is using up one NIC for ESX management alone. I'm not sure if that's overkill or simply how it's done.

> If I pick a virtual IP over one of the NICs which is on the LAN side, meaning not on the separated LAN which is meant to be used for public, then, in this way,

In your case the NIC will largely sit idle, as it will only handle vSphere Client traffic (and perhaps backups?). If you need to separate your networks, then it's the way to go. If you could use VLANs, then you could create a single vSwitch with both NICs, but then you'd have to update the config for the rest of your network.

> I'm not sure how a user could not still add an IP if they gained access to the server, in order to get back into my network.

If you used a single vSwitch with no VLANs, then it would be possible (I was wondering about that with the original posts).

> Is there any way of using the BladeCenter's MM in some way to reach the ESX management functions on each blade?

You can use the MM to get to the DCUI for ESXi, but for the vSphere client you need IP connectivity to the ESXi management IP.






Dave

VMware Communities User Moderator

Now available - vSphere Quick Start Guide

Do you have a system or PCI card working with VMDirectPath? Submit your specs to the Unofficial VMDirectPath HCL.

Reply
0 Kudos
rightfoot
Enthusiast
Jump to solution

I had been working towards a VLAN setup, but I've gotten sidetracked each time, so it's ended up on the back burner. For now, I would be happy to use one NIC for guests and one for ESX, but I can't seem to get it to work. I can either run all traffic over one NIC, or get one or the other working, meaning ESX or guests but not both. I'd love to figure out why this isn't working to start with. Getting it to work would give me a quick fix until I get to the VLANs.

Reply
0 Kudos
rightfoot
Enthusiast
Jump to solution

Wait now, maybe I'm not understanding how the Ethernet modules work on the BC. I made the assumption that all of the blades' Ethernet0 NICs go to Ethernet module #1 at the top and that the NIC1s go to Ethernet module #2 at the bottom.

Reply
0 Kudos
Texiwill
Leadership
Jump to solution

Hello,

For now let's ignore the Blade Center...

1) You have 3 basic networks in use; currently 2 are on the same subnet (a BAD IDEA, btw).

2) Those networks are Management, VM, and Storage.

Storage uses FC, so it is pretty much segregated.

But you are mixing 2 subnets on the same network. For example, if someone were to attack your web server, they could THEN attack your management appliance; this attack would most likely succeed, and the attacker would OWN your infrastructure.

Your management appliances are within your DMZ, a juicy attack point. This is a BAD idea; see the previous statement.

There is a solution but this depends on your BC chassis and blades:

Does your BC chassis use pass-through modules to the blades for networking? If so, then the split you want actually happens at each blade.

How many NICs are on your blades? If 2, then you need to use each. Ideally you want 4, 2 more for redundancy.

You absolutely want to firewall your Management Appliances/Network from ANY other network traffic.

I think you have set up the proper configuration, but without a firewall bridge there is no way to talk between the networks, which is exactly what you want. Here is how I would set things up:

Method 1:

internet<-> firewall <-> VM Network <-> Firewall <-> Management Network

Method 2:

internet <-> Firewall <-> VM Network
internal <-> Firewall <-> Management Network

Where "internal" is your internal network with NO VMs on it.

But the first thing you should determine is how the BC presents NICs to the blades and how they work externally. You may need to contact the vendor on that one.


Best regards,
Edward L. Haletky VMware Communities User Moderator, VMware vExpert 2009

Now Available: 'VMware vSphere(TM) and Virtual Infrastructure Security'

Also available 'VMWare ESX Server in the Enterprise'

Blogging: The Virtualization Practice | Blue Gears | TechTarget | Network World

Podcast: Virtualization Security Round Table Podcast | Twitter: Texiwll

--
Edward L. Haletky
vExpert XIV: 2009-2023,
VMTN Community Moderator
vSphere Upgrade Saga: https://www.astroarch.com/blogs
GitHub Repo: https://github.com/Texiwill
Reply
0 Kudos
rightfoot
Enthusiast
Jump to solution

First, thanks very much for the information you've provided. Let me try and add some additional information.

This particular chassis does have pass-through Ethernet, so each blade can be physically connected to something unique rather than all traffic flowing over the same NIC group. In my case, here is what I've got going.

I have blades/servers which have public services on them. Each blade has ESX installed then guests.

The guests use their NIC0, and the blade's NIC0 is physically connected to a switch that handles only 172.16.30.x traffic.

The ESX management IP is 10.0.0.x and is on NIC1, which is physically connected to the LAN-side switch.

The firewall is a multi-interface one, so all traffic is firewalled. For example, 172.16.30.x traffic flows to its switch, and anything which needs access to LAN-side things, such as SQL, mail, DNS services, etc., flows through the firewall like any other public traffic would.

While this isn't the most effective use of the NICs, it's fine for now, and adding NICs isn't possible on these chassis because all of the module bays are in use. Two are used for Ethernet and two are used for Fibre Channel. Though I am thinking that I might be able to swap one of the FC bays for an Ethernet module, which could carry ESX management traffic.

So basically, one NIC is used for the guests' traffic while the other should be for ESX management only. The problem is that I'm new to ESX and haven't had enough time to learn how to deal with the management side of it in order to protect it. I can't get into VLANs just yet, as they would complicate things too much, so I have to work with physical interfaces at the moment.

I tested this setup to see if ESX traffic was flowing over the 172.16.30.x side, and it isn't; it is blocked at the firewall. The problem is that when I reboot the blade, which NIC goes into standby seems to change now and then. When vmnic1 is in standby and vmnic0 is in use, everything works as it should, but sometimes it seems to flip-flop and I end up losing access to ESX, plus the public can't reach the guests, until I fix the problem by manually moving the vmnics up or down.

I wish I could better explain this :).

Reply
0 Kudos
rightfoot
Enthusiast
Jump to solution

BTW, I can live without the NICs being bonded or in failover mode, with each dedicated to just one function: one for ESX, the other for the guests. I don't need the redundancy because I always run multiple servers for the same services behind a front-end load balancer.

Sorry, forgot to mention that. So if there is a way that I could use the NICs separately, that would be the perfect quick fix for my setup.

Reply
0 Kudos
Texiwill
Leadership
Jump to solution

Hello,

You have stated your problem. In this situation you cannot BOND the pNICs. You really want 802.1q, NOT 802.3ad. Each network, NIC0 and NIC1, needs to be active, so a bond will not work. This is not the way to achieve redundancy.

You can use VLANs to achieve this using both pNICs, but you cannot bond the pNICs to achieve this level of redundancy. You can use one vSwitch and 2 port groups, each on its own VLAN/NIC, and still have redundancy as well. Only when there is a pNIC failure would you have both networks on the same wire (which is really why you want VLANs involved and not just subnets).

Or set up 2 vSwitches, each talking to a different NIC, and forgo redundancy. A sketch of the VLAN option follows below.
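
As a rough sketch of the VLAN option from the remote CLI (the host address, credentials, and the VLAN IDs 10/20 are made-up placeholders; the pSwitch ports would have to be plain 802.1q trunks carrying those VLANs, not an EtherChannel/LACP bond):

# one vSwitch with both uplinks and two port groups, each tagged with its own VLAN
vicfg-vswitch --server 10.0.0.11 --username root -L vmnic0 vSwitch0
vicfg-vswitch --server 10.0.0.11 --username root -L vmnic1 vSwitch0
vicfg-vswitch --server 10.0.0.11 --username root -A "VM Network" vSwitch0
vicfg-vswitch --server 10.0.0.11 --username root -v 20 -p "VM Network" vSwitch0
vicfg-vswitch --server 10.0.0.11 --username root -v 10 -p "Management Network" vSwitch0

# the per-port-group active/standby NIC order is set in the vSphere Client (NIC Teaming tab)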

These must be old blades, as 2-port blades are a thing of the past; most modern blades have 4 pNICs.


Best regards,
Edward L. Haletky VMware Communities User Moderator, VMware vExpert 2009

Now Available: 'VMware vSphere(TM) and Virtual Infrastructure Security'

Also available 'VMWare ESX Server in the Enterprise'

Blogging: The Virtualization Practice | Blue Gears | TechTarget | Network World

Podcast: Virtualization Security Round Table Podcast | Twitter: Texiwll

--
Edward L. Haletky
vExpert XIV: 2009-2023,
VMTN Community Moderator
vSphere Upgrade Saga: https://www.astroarch.com/blogs
GitHub Repo: https://github.com/Texiwill
Reply
0 Kudos