VMware Cloud Community
liveammo
Contributor

ESX 3.0.2 Service Console Security Issue

I have found what appears to be a fairly significant security issue, related to virtual switch isolation and multiple service consoles.

From a vanilla ESX 3.0.2 install, I created the following network topology which contains two multihomed VMs, each with vNIC1 and vNIC2:

192.168.0.0/24 -> vSwitch0 -> Service Console 192.168.0.2 -> VMkernel 192.168.0.3 -> Win2003K (vNIC1 192.168.0.10) / WinXP (vNIC1 192.168.0.11)

vSwitch0 is connected to one external pNIC.

172.16.0.0/24 -> vSwitch1 -> Service Console #2 172.16.0.2 -> Win2003K (vNIC2 172.16.0.10) / WinXP (vNIC2 172.16.0.11)

vSwitch1 is an isolated switch with no external connections.

Both VMs have IP forwarding disabled, so nothing should be passed between the vNIC1 and vNIC2 interfaces. Yet from the outside world, by configuring an external station with an IP address within the internal 172.16.0.0/24 segment, Service Console #2 is directly accessible, at least on port 902. I haven't yet done enough testing to determine how traffic is being passed through to vSwitch1; my initial thought is that vswif0/Service Console #1 is somehow forwarding frames through to the internal vSwitch1. Any ideas on this behavior? It looks like there are some sysctl variables that can be set for vswif0, but I haven't done any testing on that yet either.
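For reference, this is roughly what I plan to check from the service console (these are standard Linux sysctl keys rather than anything ESX-specific, so treat it as a sketch):

# Global IP forwarding flag in the service console (0 = disabled)
sysctl net.ipv4.ip_forward

# Per-interface forwarding flags for the vswif devices
sysctl net.ipv4.conf.vswif0.forwarding
sysctl net.ipv4.conf.vswif1.forwarding

# Force forwarding off globally, just in case
sysctl -w net.ipv4.ip_forward=0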

Thanks in advance.

54 Replies
JDLangdon
Expert

My first question would be "Why do you have two service console ports?"

Jason

liveammo
Contributor

Many folks are using more than one service console, for a variety of reasons - as a failsafe in the event the primary service console goes down; to bind specific services to a secondary service console interface for security reasons (to sandbox SNMP traffic for example, or to bind SSH to a specific interface); and for the ability to manage the ESX host from multiple administrative networks.

Obviously bridging Ethernet frames or routing IP datagrams between service consoles entirely breaks the concept of isolated internal switches, and opens up many different attack vectors including but not limited to virtual source routing attacks (if IP datagrams are being routed between service consoles via IP forwarding of some sort), 802.1q VLAN encapsulation attacks (if Ethernet frames are being bridged somehow between service consoles)... just to name a few.

Texiwill
Leadership

Hello,

In your configuration each vSwitch has a service console, or vswif, device connected to it. You therefore have within your system a vswif0 and a vswif1. Each vswif device has a separate IP address and therefore network. All the services within the Service Console allow connections on ANY vswif device. This is not a security issue per se but a misunderstanding of how it all works.

The iptables rules block INPUT and OUTPUT paths based on source and target ports for ALL vswif devices. If there is more than one, the rules apply to all vswifs, not just the first. This is why you can connect to the Service Console services over either IP address.

For example:

One of the rules is to accept all incoming traffic of state NEW on port 902. The vswif device is not mentioned in this rule, so the rule applies to ALL network input into the system. There is no forwarding between vswif0 and vswif1.
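You can verify this yourself from the service console. With a standard iptables listing (output format varies by version), the "in" and "out" columns show '*' when a rule is not bound to any particular interface:

# List the INPUT chain verbosely; '*' in the "in" column means
# the rule matches traffic arriving on ANY interface, vswif1 included
iptables -L INPUT -n -v --line-numbers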

If you want to, as you say, sandbox SNMP traffic to vswif1, you will have to modify the firewall by hand to enforce this. In general, the only use for a 2nd vswif device I have seen so far is for a backup network, not for redundant management networks; redundancy is handled within the vSwitch itself using failover mode settings.

If you only have 2 pNICs, then you will want to use VLANs to segment traffic and ensure no port group allows promiscuous-mode Ethernet devices, as well as make one pNIC the primary and the second the failover within a single vSwitch. I also place my SC on an administrative network that is itself firewalled and to which access is limited.

Best regards,

Edward L. Haletky, author of the forthcoming 'VMWare ESX Server in the Enterprise: Planning and Securing Virtualization Servers', publishing January 2008, (c) 2008 Pearson Education.

liveammo
Contributor

Thank you for your response Edward.

Would it be possible for you to provide the VMware documentation that describes "how it all works", as you say? From the VMware documents and whitepapers I have read that describe virtual switching and segment isolation, there should be no connectivity between service consoles or vSwitches in any fashion. What concerns me about this behavior is that by simply adding a service console to a vSwitch, that vSwitch by default becomes attached to the same Layer-2 broadcast domain as the production (external) network, which raises a multitude of security concerns for organizations that rely on virtual switching to isolate and segment Layer-2 broadcast domains.

Given the many pronouncements within the VMware documentation about the command line interface being deprecated (and even discouraged for general system administration tasks), I don't believe that your suggestion of a customized iptables/netfilter policy to sandbox traffic between service consoles is well taken or within the skillset of most individuals that administer ESX hosts.

More importantly, if each vswif device that is created with each service console is in effect a Layer-2 Ethernet bridge or a Layer-3 routing device, then given the architecture you have described it will be possible to bridge Ethernet frames and/or route IP datagrams between isolated virtual switches (even so-called internal vSwitches) by simply adding a gateway route statement for the internal vSwitch service console IP from any other network that has a service console attached...

Hopefully this isn't the case and I have something configured wrong.

Texiwill
Leadership

Hello,

There is no internal documentation on this; it is the nature of the iptables rules that are in place. Every time you create a service console port group on a vSwitch you are not creating a new service console, but tying into the existing one. The existing service console protects all input and output devices using iptables. A good iptables reference is "Linux iptables Pocket Reference" from O'Reilly and Associates. Also, you can look at www.netfilter.org.

Since there is only one instance of the service console, all you are doing by providing more than one Service Console port group is creating more Ethernet interfaces on that service console; these are the vswif devices.

You have not really found a bug in ESX, but the fact that there are 1001 ways to do networking with Linux. Consider this: would you rather have a specific rule that allows everything on vswif0 while vswif1 is absolutely wide open? If they were to implement interface-level blocking in the default iptables rules, that is generally what results. Or would you like absolutely nothing allowed on vswif1? That is the other option. Note that the VIC and esxcfg-firewall commands do not know anything about the devices in use, only the rules to be used. This is a general setup that meets most people's needs.

So, for iptables you have 3 default chains on the primary (filter) table: INPUT, FORWARD, and OUTPUT. INPUT covers everything coming in, FORWARD is for forwarding between interfaces, and OUTPUT is for anything going out. There are also nat, mangle, and some other tables; none of these are used by ESX, nor should they be. For INPUT, the general rules are written as:

iptables -A INPUT using the state match module (-m state) with a --state argument of NEW, matching only on new connections:

iptables -A INPUT -m state --state NEW -p tcp -m tcp --dport 902 -j ACCEPT

You can also match on a source port, a source address, a destination address, and a host of other elements, including which interface the rule applies to. Interface lockdown is NOT part of the ESX firewall rules, nor should it be, as any vswif should be able to use all its services. If you want to change this, you will need to modify the firewall rules.

iptables is extremely complex and incredibly powerful. To fully understand it you will need to read the manual page and a reference, and then play with it.

In addition, there is only one default gateway; there cannot be more than one. However, it is trivially easy to add a gateway that is specific to a given subnet if you are using more than one vswif device. There is no cross-routing between vswif0 and vswif1: if you look at your network routes (netstat -rn) you will see the default route plus a node-specific route for your vswif1 device. Anything coming in on vswif1 will go back out vswif1.
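For example (the addresses here are hypothetical; substitute your own):

# Show the service console routing table
netstat -rn

# Reach a remote subnet through a gateway on the vswif1 network
route add -net 172.16.1.0 netmask 255.255.255.0 gw 172.16.0.1 dev vswif1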

As for your comment that adding a service console port group ties the vSwitch to the Service Console's broadcast domain, you are absolutely correct. If you are not using VLANs, the traffic is not segmented. If you are allowing promiscuous-mode Ethernet devices on your vSwitch, things are even worse. Virtual networking has some basic rules to follow, and you are crossing some of them.

1) vMotion should be its own private vSwitch or portgroup

2) Service Console should be its own private vSwitch or port group. This includes all possible service console ports, not just the primary (use VLANs to segment networks on the SC vSwitch). This is your administrative network, which should also be firewalled. VC is on this network, as is any management tool, whether running as a VM or otherwise.

3) Virtual Machines should be their own private vSwitch or portgroups

4) Each should have redundancy (2 pNICS per vSwitch)

5) VLANs should be used when dealing with reduced number of ports

6) vMotion should be its own private network, including the routing if you must route it. Only ESX vMotion ports should be allowed on this network.

7) iSCSI requires that a service console vswif participate in its network, yet iSCSI for VMFS should be separated from the VM network.

8) NFS should be separate from the VM Network.

9) Never enable promiscuous mode on your vSwitches, but you can for specific port groups that contain only your IDS; no other VM should be in this port group.

10) Using VLANs will lower the number of pNICs you require.

Ideally I like having a minimum of SIX physical network adapters so that I have everything segmented nicely even if I do use VLANs as well. This way I gain redundancy and functionality.
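As a rough sketch of what this looks like from the ESX command line (the vSwitch names, vmnic numbers, and VLAN IDs below are examples only, not a definitive layout):

# Private vSwitch for vMotion with two uplinks
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic2 vSwitch2
esxcfg-vswitch -L vmnic3 vSwitch2
esxcfg-vswitch -A "VMotion" vSwitch2

# Separate VM vSwitch, with a port group tagged for VLAN 11
esxcfg-vswitch -a vSwitch3
esxcfg-vswitch -L vmnic4 vSwitch3
esxcfg-vswitch -L vmnic5 vSwitch3
esxcfg-vswitch -A "Production" vSwitch3
esxcfg-vswitch -v 11 -p "Production" vSwitch3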

You have not found anything that is a problem with ESX; you have found that there are 1001 ways to do networking within Linux and that some are better than others. You have also found that a generic firewall like the one ESX uses is only useful in generic cases. You have a SPECIFIC need and will have to adjust things to meet that SPECIFIC need.

The iptables firewall is only part of the answer. You can also use /etc/hosts.allow and /etc/hosts.deny to implement tcpwrappers, or implement tcpwrappers in /etc/xinetd.conf; these are just some of the ways to limit access based on IP and/or network, and they are a 2nd line of defense for all Linux machines.

Changing the default ESX security rules is not for the faint of heart; you can really mess things up extremely easily. Nor do I expect most people to do so. However, if they do not understand how the rules work, then they could easily mess up.
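For example, to restrict SSH with tcpwrappers to just your administrative subnet (the subnet shown is an example):

# /etc/hosts.deny -- deny everything wrapped by default
ALL: ALL

# /etc/hosts.allow -- then allow SSH only from the admin network
sshd: 192.168.0.0/255.255.255.0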

One thing you can do is to deny access to anything on vswif1 using the following in /etc/rc.d/rc.local

/sbin/iptables -I INPUT 1 -i vswif1 -j DROP

This inserts the rule at the top of the chain, denying all inbound access on vswif1 from outside. Adding specific rules requires even more tweaking, and you need to really understand firewalls, plus iptables, to do this.

Then, to allow port 902 in on vswif1, plus return traffic for anything you specifically start, you can use:

/sbin/iptables -I INPUT 1 -i vswif1 -m state --state NEW -p tcp -m tcp --dport 902 -j ACCEPT

/sbin/iptables -I INPUT 1 -i vswif1 -m state --state ESTABLISHED,RELATED -j ACCEPT

You must run the three commands in the order given; since each is inserted at the top of the chain, the two ACCEPT rules end up above the DROP, so only port 902 and return traffic for connections you initiate will be accepted on vswif1.

If anyone does not understand these concepts or know how to implement them, then I suggest taking a Linux security class; there is a very good one presented at each LinuxWorld, as well as the one SANS puts on. Also, my upcoming book has a chapter just on security and how to implement some of the concepts I discussed. The last option is to hire a consultant who does understand these concepts.

How ESX 3i changes the above I am unsure, as I have yet to play with ESX 3i. I imagine it will to a certain extent, but hopefully not very much. A firewall for the CLI is extremely important.

Best regards,

Edward L. Haletky, author of the forthcoming 'VMWare ESX Server in the Enterprise: Planning and Securing Virtualization Servers', publishing January 2008, (c) 2008 Pearson Education.

biniam
Contributor

Sorry for jumping the queue. Do you have the online version of your book available?

Could you please explain the following points to me?

> 1) VLANs should be used when dealing with reduced number of ports

> 2) iSCSI requires that a service console vswif participate in its network, yet iSCSI for VMFS should be separated from the VM network.

Does the service console need to be used more than once?

> 3) NFS should be separate from the VM Network.

> 4) Using VLANs will lower the number of pNICs you require.

Could you please explain how?

regards

Ben

Texiwill
Leadership

Hello,

> Sorry for jumping the queue. Do you have the online version of your book available?

Not a problem. See my signature for a link to the online versions.

> 1) VLANs should be used when dealing with reduced number of ports

While it is possible to see all VLAN traffic on a vSwitch if promiscuous mode is enabled (not the default), VLANs without promiscuous mode will split the traffic in a secure manner. Normally, for full redundancy, performance, and security you would want 2 pNICs per network, of which the default ESX server in a cluster has three: Service Console, vMotion, and VM Network. When you add in SAN you use FC-HBAs, which have their own security mechanisms; when you add in iSCSI, that is another network (whether 1 GbE devices or iSCSI-HBAs); add NFS and you have one more network. VM networks can be split a number of ways. Since a 4-pNIC system (a common blade) does not have the 10 pNICs this implies (assuming everything is enabled, including iSCSI/NFS), you have to make some choices about what you will place within a VLAN to segment your traffic.

If you do not segment your traffic, it is possible for vMotion and administration traffic to be seen by the VMs, for example. So the use of VLANs in low-port-density situations affords some level of protection. There are some MITM attacks that are possible even without promiscuous-mode network adapters. This is not necessarily a problem in ESX so much as a common network issue; the attacks work outside ESX as well.

> 2) iSCSI requires that a service console vswif participate in its network, yet iSCSI for VMFS should be separated from the VM network.

> Does the service console need to be used more than once?

Let us look at some networks and netmasks. If the SC has 10.0.0.18/255.255.0.0, it can participate in a Class B network; assume the switching allows this. If I had two iSCSI networks (10.0.1.18/255.255.255.0 and 10.0.2.18/255.255.255.0), then the SC would participate in both with no need for a secondary SC port. However, if the SC did not participate in a full Class B but a Class C, then I would need an SC port for both the 10.0.1.0/24 and 10.0.2.0/24 networks, which could be different from the 10.0.0.0/24 network used for administration. In this last case I would need 3 SC links. So you really have 3 situations:

  • Default SC participates in all iSCSI networks.... use a broad enough netmask.

  • Default SC participates in one iSCSI network, 2nd SC participates in second iSCSI network with restrictive netmasks.

  • Default SC participates in Administrative network, 2nd SC participates in one iSCSI network, 3rd SC participates in second iSCSI network with restrictive netmasks.

This all depends on your network situation. Since the only use of the SC on the iSCSI network is the authentication aspect of the iSCSI protocol, you can possibly limit these links to a single pNIC rather than duals. This link is required whether you use authentication or not, so you cannot get rid of it, but you can reduce the number of ports for it fairly safely (not 100% redundant, but doable).
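A sketch of adding a second SC port for the restrictive-netmask case (the port group name and addresses are examples only):

# Create the port group, then attach a vswif with a /24 netmask
esxcfg-vswitch -A "Service Console 2" vSwitch1
esxcfg-vswif -a vswif1 -p "Service Console 2" -i 10.0.1.18 -n 255.255.255.0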

> 3) NFS should be separate from the VM Network

The VM network is a hostile environment; it should be separate from anything that relates to administration, storage, vMotion, etc., used by the ESX server. If your VMs will use iSCSI initiators or NFS mounts, then I would do that through a totally different network. I would not want the VMs to get hold of the base VMDKs, etc.

> 4) Using VLANs will lower the number of pNICs you require.

> Could you please explain how?

VLANs allow you to run multiple networks over the same wire, using tags in the Ethernet frame headers to say which VLAN/network the frames belong to, ensuring delivery to the properly participating machines. This is the 802.1Q standard (http://en.wikipedia.org/wiki/IEEE_802.1Q). If you want more details, I suggest reviewing the standard, as it explains everything in great detail. Because of this, you can use fewer pNICs in some cases.
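For example, two networks can share one dual-pNIC vSwitch if each port group carries its own tag (the VLAN IDs are examples, and the physical switch ports must be configured as trunks):

# Two tagged port groups on the same vSwitch and pNIC pair
esxcfg-vswitch -A "Production" vSwitch0
esxcfg-vswitch -v 11 -p "Production" vSwitch0
esxcfg-vswitch -A "Backup" vSwitch0
esxcfg-vswitch -v 12 -p "Backup" vSwitch0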

This is all predicated upon how much security you really want, can afford, and must have. I like to set up physical barriers so that I can keep all administrative and ESX-internal networks apart from each other and from the relatively hostile environment of the VMs.

Why do I say this is a hostile environment? Consider that these VMs are in the DMZ or are customer-facing, whether for internal or external customers. You can never know what is happening on these systems, and there are some invisible rootkits for various flavors of operating systems. I just consider all VMs to be hostile so that I secure my systems properly. As a virtualization administrator, I treat anything I do not control directly, or that allows users to connect, as hostile.

May I use your questions in a blog post I am working on? They are good ones.

Best regards,

Edward L. Haletky, author of the forthcoming 'VMWare ESX Server in the Enterprise: Planning and Securing Virtualization Servers', publishing January 2008, (c) 2008 Pearson Education. Available on Rough Cuts at http://safari.informit.com/9780132302074

biniam
Contributor

Many thanks for your comments; they are a great help. And yes, you can use them.

Sorry to nag you on this. We had a very long discussion in our IT dept. with regards to all this. A lot of people are confused about VMware's security and performance issues.

What is the ideal pNIC distribution, and how many do you need? A company proposed the following to us, and I disagree with them.

And another member of staff proposed

2 pNICs for Service Console

2 pNICs for vMotion

2 pNICs for iSCSI + SC

1 pNIC for VMs (Production)

1 pNIC for VMs (DMZ)

2 pNICs for NFS

I proposed to have the following

2 pNICs for Service Console 10.44.0.0/255.255.0.0 VLAN 10

2 pNICs for VMs 10.44.1.0/255.255.255.0 VLAN 11

2 pNICs for vMotion 10.44.2.0/255.255.255.0 VLAN 12

2 pNICs for iSCSI 10.44.3.0/255.255.255.0 VLAN 13

Why would you go with these choices?

Regards

Biniam

JDLangdon
Expert

It all comes down to how comfortable you are with the setup.

In my environment I have:

2 pNICs teamed and utilized by the SC and VMotion, with both on the same VLAN.

2 pNICs teamed and configured as a VLAN trunk hosting all VM VLANs.

1 pNIC for backup network 1

1 pNIC for backup network 2

1 pNIC configured for iSCSI, with the initiator installed within the VMs, not on the SC.

Jason

biniam
Contributor

> 2 pNICs teamed and utilized by the SC and VMotion, with both on the same VLAN.

There has been a lot of discussion on the security issue. Do you see any disadvantage in using both on the same network?

> 1 pNIC for backup network 1

> 1 pNIC for backup network 2

What do you mean by networks 1 and 2?

> 1 pNIC configured for iSCSI, with the initiator installed within the VMs, not on the SC.

I guess you have a hardware initiator or FC to do this?

regards

biniam

JDLangdon
Expert

> 2 pNICs teamed and utilized by the SC and VMotion, with both on the same VLAN.

> There has been a lot of discussion on the security issue. Do you see any disadvantage in using both on the same network?

In our environment I do not see any issues with having both the SC and VMotion on the same network.

> 1 pNIC for backup network 1

> 1 pNIC for backup network 2

> What do you mean by networks 1 and 2?

We are hosting VMs for two different clients, and each client has their own backup environment. VMs belonging to a specific client are connected to their own backup network and to the VM production network. If both backup networks were routable, I would have designed it so that both used the same pNICs but different VLANs.

> 1 pNIC configured for iSCSI, with the initiator installed within the VMs, not on the SC.

> I guess you have a hardware initiator or FC to do this?

There are no hardware initiators. The iSCSI array is connected to a switch, which is connected to each ESX host. The VMs that are attached to the iSCSI each have two vNICs: one attached to the production network and the other attached to the iSCSI network.

Jason

biniam
Contributor

On your last comment: you are getting iSCSI working in the guest, using the Windows initiator to connect to a LUN. But are you using VMware iSCSI for the OS?

Regards

Biniam

JDLangdon
Expert

> On your last comment: you are getting iSCSI working in the guest, using the Windows initiator to connect to a LUN. But are you using VMware iSCSI for the OS?

We are not using iSCSI for the OS; the OS is on fiber storage. We are using the iSCSI as second-tier storage for e-mail archives.

From what I understand by talking to iSCSI vendors, if you are using iSCSI as your primary storage, you should use VMware iSCSI for the OS and then use the Windows initiator to connect to each LUN.

Jason

Texiwill
Leadership

Hello,

A few comments and thoughts.

> 2 pNICs for Service Console

> 2 pNICs for vMotion

> 2 pNICs for iSCSI + SC

> 1 pNIC for VMs (Production)

> 1 pNIC for VMs (DMZ)

> 2 pNICs for NFS

There is no redundancy for the DMZ and Production networks; I would not do this.

> I proposed to have the following:

> 2 pNICs for Service Console 10.44.0.0/255.255.0.0 VLAN 10

> 2 pNICs for VMs 10.44.1.0/255.255.255.0 VLAN 11

> 2 pNICs for vMotion 10.44.2.0/255.255.255.0 VLAN 12

> 2 pNICs for iSCSI 10.44.3.0/255.255.255.0 VLAN 13

Note that your SC and vMotion networks overlap, as does your VM network with the SC. This means that if someone gets onto the SC network they could possibly snag vMotion data, even with VLANs; penetration-testing software can and will ignore VLAN tags, so the whole network is open. I would place vMotion into a 172.16.x.x network and keep it 100% private from anything else, as this network carries the memory image of running VMs. For vMotion I would implement External Switch Tagging, or EST (no VLAN numbers used on the vSwitch in question).

I would also use a second set of pNICs just for your DMZ network; do not mix your Production and DMZ networks on the same set of hardware. I assume there are separate physical switches for DMZ vs. Production, so that in effect you have a completely split network?

If iSCSI initiators are to be used from VMs, that is ANOTHER network, not related to the iSCSI pNICs used by the ESX server. You never want your VMs to have a chance of getting access to the LUNs for the ESX server; you might as well let them into the SC if that is the case. I would also use a different iSCSI server for this if possible, or at least require some form of authentication. So your network looks like this:


SC Portgroup <- vSwitch0 <- pNIC <- Admin pSwitch0 <- pFW <- iSCSI Server
                         <- pNIC <- Admin pSwitch1

iSCSI Portgroup <- vSwitch1 <- pNIC <- iSCSI pSwitch0
                            <- pNIC <- iSCSI pSwitch1

VM vNetwork w/VLANs <- vSwitch2 <- pNIC <- Prod VM pSwitch0 <- Production <- pFW
                                <- pNIC <- Prod VM pSwitch1

vMotion Portgroup <- vSwitch3 <- pNIC <- vMotion pSwitch0 -> Other ESX Servers
                              <- pNIC <- vMotion pSwitch1

pFW -> DMZ pSwitch0 -> pNIC -> vSwitch4 -> DMZ vNetwork w/VLANs
    -> DMZ pSwitch1 -> pNIC ->
vFW -> Internal Network

VM iSCSI Network <- vSwitch5 <- pNIC <- iSCSI pSwitch2 <- 2nd iSCSI Server
                             <- pNIC <- iSCSI pSwitch3

DMZ iSCSI Network <- vSwitch6 <- pNIC <- iSCSI pSwitch4 <- DMZ iSCSI Server
                              <- pNIC <- iSCSI pSwitch5


Note the direction of the arrows: they point from the external (physical) side toward the internal (virtual) side. Granted, while the number of pSwitches required is large, you can combine these using VLANs at the pSwitch layer and only use VLANs for the two networks mentioned. Keeping things separated like this is the most secure.

Note that the VM/DMZ iSCSI networks call for separate iSCSI servers, or at least ones that require authentication at all possible levels; unfortunately, unless you are using IPsec with IPv6 on your iSCSI servers, you do not get encryption. You do not want a DMZ VM to get hold of the ESX server LUN for all VMs. Assume anything in the DMZ will be attacked successfully and plan accordingly. At the pSwitch level, the DMZ pSwitches absolutely need to be separate, IMHO. I do penetration testing and courses related to this; it is incredibly trivial to break into some systems, more difficult for others, but assuming they are in a hostile environment is the best method.

By breaking out the number of pNICs needed, you reduce the overall need for VLANs within the vNetwork and push them onto the pNetwork, where the switches can do the monitoring; vSwitches are not very intelligent. The VM and DMZ networks can have VLANs if necessary. If you are planning for the most security, keeping things separate is best. This also has the advantage of the best performance for iSCSI and vMotion.

Also, is the NFS network to be used by the ESX server or the VMs? If it is used by the ESX server, add another vSwitch just for NFS traffic. Remember this is a clear-text protocol, so allowing VMs access to it is a high-risk item.

Best regards,

Edward L. Haletky, author of the forthcoming 'VMWare ESX Server in the Enterprise: Planning and Securing Virtualization Servers', publishing January 2008, (c) 2008 Pearson Education. Available on Rough Cuts at http://safari.informit.com/9780132302074

biniam
Contributor

Hi Edward,

Could you please check the following figure to see if this will work? The majority of the design was based on your book. I am not sure how I can implement the DMZ iSCSI, and is this going to work with vMotion?

Biniam

Texiwill
Leadership

Hello,

Assuming the DMZ VMs will not use iSCSI initiators internally, you will not need the DMZ iSCSI port groups. They are unnecessary because the storage presentation is to the ESX server, not to the VMs, so you only need one set of iSCSI networks. These are unique, and no VM even knows about the backend storage; it just sees SCSI drives.

The only time those would be needed is if the VMs will run iSCSI initiators. I would not have DMZ VMs do this if you only have one iSCSI server; networking being what it is, I would not let my DMZ VMs access anything but themselves, or I would present RDMs to them for larger storage areas. Granted, some DMZ VMs have to access internal hosts....
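If ESX is to own the storage, enabling the software initiator on the host looks roughly like this (service console commands from memory; the vmhba name varies, so verify on your own host):

# Open the firewall for the software iSCSI client, then enable it
esxcfg-firewall -e swISCSIClient
esxcfg-swiscsi -e

# Rescan the software iSCSI adapter for new LUNs (name may differ)
esxcfg-rescan vmhba40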

General networks with DMZs look like this:

Internet <-> Firewall <-> DMZ <-> Firewall <-> Internal network

The idea is to limit direct access to the internal network. Sometimes a DMZ machine must query an internal DB for some bits of data; not often, but this allows for that in a protected fashion. The external firewall is slightly more open than the internal, perhaps allowing web traffic where the internal firewall does not, etc.

And not:

Internet <-> Firewall <-> DMZ

Internet <-> Firewall <-> Internal Network

This just creates at least two attack points. Force all traffic through the DMZ so that you can monitor everything there, and have a secondary firewall to add more protection. It is sometimes quite hard to bust through firewalls, but not impossible.

Best regards,

Edward L. Haletky, author of the forthcoming 'VMWare ESX Server in the Enterprise: Planning and Securing Virtualization Servers', publishing January 2008, (c) 2008 Pearson Education. Available on Rough Cuts at http://safari.informit.com/9780132302074

biniam
Contributor

Hello,

I have only one iSCSI server, with software initiators. We will have our portal and front-end Exchange server in the DMZ, and I need these two servers to have high availability via vMotion.

Regards

Biniam

Texiwill
Leadership

Hello,

So will your VMs be using Windows iSCSI initiators, or will ESX be using the software initiator? This is the real question. I hope you mean the latter, as the former has several security issues.

Best regards,

Edward L. Haletky, author of the forthcoming 'VMWare ESX Server in the Enterprise: Planning and Securing Virtualization Servers', publishing January 2008, (c) 2008 Pearson Education. Available on Rough Cuts at http://safari.informit.com/9780132302074

biniam
Contributor

Hello,

I am using the ESX software initiator.

Regards

Biniam
