VMware Cloud Community
jfields
Enthusiast

Questions about adding a NAS to an existing environment

Hello,

We have an existing VI deployment with an EMC SAN and three ESX hosts. We wish to add additional storage for the purposes of testing NAS and for ISO storage. To do so, we were going to provision a RHEL 5 server, as others on the forum have done, and use NFS to provide a datastore for ESX. However, I have some questions for which I have only found partial answers on the forum.

1) Is there a performance hit in adding NAS to the IP storage network of the existing VI deployment? Our EMC SAN is currently using iSCSI to deliver storage to the hosts. Is mixing iSCSI and NFS on the IP storage network a bad idea in terms of performance and/or stability? Is anyone else doing this? It seems like a logical thing to do, as many smaller deployments may not have the resources to purchase enterprise equipment for test/dev and ISO storage.

2) What are the security implications of placing a Red Hat-based NFS server on the IP storage network? How does one do this safely? All of the security documentation and forum posts seem to emphasize the need to separate IP storage traffic (which is clear text) from running VMs or physical servers. In order to add NAS to the storage network, we would have to allow the Red Hat server access to the IP storage network.

3) Is there a way to restrict the traffic on the network interface that faces the IP Storage network to NFS only? We would use two interfaces on the Red Hat server to control it, since our IP storage network is not routed back to other networks in any way. It is an isolated network by design, for security reasons. Thus, we would need one interface connected to the IP storage network and one connected to one of our management networks.

Thank you. I appreciate your assistance.

James

3 Replies
Texiwill
Leadership

Hello,

1) Our EMC SAN is currently using iSCSI to deliver storage to the hosts. Is mixing iSCSI and NFS on the IP storage network a bad idea in terms of performance and/or stability?

It can affect performance, as you are now pulling more data over the same link as before. So yes, it could.

Is anyone else doing this?

Yes

It seems like a logical thing to do, as many smaller deployments may not have the resources to purchase enterprise equipment for test/dev and ISO storage.

Yes it is.

2) What are the security implications of placing a Red Hat-based NFS server on the IP storage network? How does one do this safely? All of the security documentation and forum posts seem to emphasize the need to separate IP storage traffic (which is clear text) from running VMs or physical servers. In order to add NAS to the storage network, we would have to allow the Red Hat server access to the IP storage network.

Well, that depends on how you set up your IP storage network. If it were me, and I was concerned about security, I would place the NFS server in its own security zone and create new NFS-only vmkernel portgroups on their own vSwitch (dedicated pNIC) on my ESX hosts. This will lower the overall impact on performance within the existing vmkernel IP Storage network. Also, since NFS does not require participation by the SC (Service Console) as iSCSI does, you do not cross security zones this way.
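For reference, on ESX 3.x the dedicated vSwitch, the NFS-only vmkernel portgroup, and the datastore mount can all be created from the Service Console roughly as follows. This is only a sketch; the NIC name, addresses, export path, and datastore label are hypothetical placeholders, not values from this thread:

    # Create a dedicated vSwitch and link a spare physical NIC to it
    esxcfg-vswitch -a vSwitch2
    esxcfg-vswitch -L vmnic3 vSwitch2

    # Add an NFS-only portgroup and a vmkernel interface on the isolated storage subnet
    esxcfg-vswitch -A NFS vSwitch2
    esxcfg-vmknic -a -i 10.0.50.11 -n 255.255.255.0 NFS

    # Mount the RHEL export as a datastore named 'isostore'
    esxcfg-nas -a isostore -o 10.0.50.10 -s /export/isos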

If your VMs must access this NFS store directly over the network instead of indirectly through the CDROM interface within the VM, I would 100% separate this from your other IP Storage. VMs are hostile critters to the virtual environment.

RHEL can be hardened quite handily using either the CISecurity Benchmark or Bastille-Linux. At the very least I would do this!
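Beyond hardening the OS, the export itself can be restricted to just the vmkernel addresses. A minimal /etc/exports sketch (the path and subnet are assumptions; note that the ESX vmkernel mounts NFS datastores as root, so no_root_squash is generally required):

    # /etc/exports -- allow only the ESX vmkernel storage subnet
    # ESX mounts NFS datastores as root, hence no_root_squash
    /export/isos  10.0.50.0/24(rw,sync,no_root_squash)

Run exportfs -ra after editing to re-export.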

3) Is there a way to restrict the traffic on the network interface that faces the IP Storage network to NFS only? We would use two interfaces on the Red Hat server to control it, since our IP storage network is not routed back to other networks in any way. It is an isolated network by design, for security reasons. Thus, we would need one interface connected to the IP storage network and one connected to one of our management networks.

Yes, that can be done, but unlike a purpose-built NAS/SAN device, a RHEL server, if broken into, can be used as a router to pivot attacks into the IP Storage network. If you are really concerned, place a firewall between the NFS server and the ESX hosts to limit access to just NFS.
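As a sketch of that NFS-only restriction on a RHEL 5 box (the interface name, subnet, and port numbers are assumptions; the normally dynamic NFS daemons must first be pinned to fixed ports in /etc/sysconfig/nfs so the firewall stays predictable):

    # /etc/sysconfig/nfs -- pin the dynamic NFS daemons to fixed ports
    MOUNTD_PORT=892
    STATD_PORT=662
    LOCKD_TCPPORT=32803
    LOCKD_UDPPORT=32769

    # Allow only portmapper (111), mountd (892), and nfsd (2049) in from
    # the storage-facing NIC; everything else on that leg is dropped
    iptables -A INPUT -i eth1 -s 10.0.50.0/24 -p tcp -m multiport --dports 111,892,2049 -j ACCEPT
    iptables -A INPUT -i eth1 -s 10.0.50.0/24 -p udp -m multiport --dports 111,892,2049 -j ACCEPT
    iptables -A INPUT -i eth1 -j DROP

    # Ensure the box can never route between its two legs (the pivot concern above)
    echo 'net.ipv4.ip_forward = 0' >> /etc/sysctl.conf
    sysctl -p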

Look at this via security zones:

IP Storage is one security zone; the SC is usually another.

iSCSI -> requires SC participation, and depending on how you are doing this, you may have created another attack point into the ESX host.

NFS does not require SC participation, so NFS cannot cross security zones.

How do the VMs access the ISOs?

Indirectly -> then use your normal IP Storage network

Directly -> then use a dedicated network

Also note that VCB can cross security zones as well, so it needs to be considered and used very carefully.


Best regards,

Edward L. Haletky

VMware Communities User Moderator

====

Author of the book 'VMware ESX Server in the Enterprise: Planning and Securing Virtualization Servers', Copyright 2008 Pearson Education.

Blue Gears and SearchVMware Pro Blogs: http://www.astroarch.com/wiki/index.php/Blog_Roll

Top Virtualization Security Links: http://www.astroarch.com/wiki/index.php/Top_Virtualization_Security_Links

--
Edward L. Haletky
vExpert XIV: 2009-2023,
VMTN Community Moderator
vSphere Upgrade Saga: https://www.astroarch.com/blogs
GitHub Repo: https://github.com/Texiwill
jfields
Enthusiast

Edward,

Thank you for your quick reply.

It can affect performance, as you are now pulling more data over the same link as before. So yes, it could.

I can understand that from a pure load issue. What I meant is: is there a greater load placed on the servers merely from having to use two different storage protocols (NFS & iSCSI)? Or is it better to use iSCSI on the test/dev storage device so that we are only configuring and using one storage protocol?

Well, that depends on how you set up your IP storage network. If it were me, and I was concerned about security, I would place the NFS server in its own security zone and create new NFS-only vmkernel portgroups on their own vSwitch (dedicated pNIC) on my ESX hosts. This will lower the overall impact on performance within the existing vmkernel IP Storage network. Also, since NFS does not require participation by the SC (Service Console) as iSCSI does, you do not cross security zones this way.

If your VMs must access this NFS store directly over the network instead of indirectly through the CDROM interface within the VM, I would 100% separate this from your other IP Storage. VMs are hostile critters to the virtual environment.

RHEL can be hardened quite handily using either the CISecurity Benchmark or Bastille-Linux. At the very least I would do this!

I think this gets to the heart of the matter. I am not sure using a full OS is the way to go for me. I am really a Windows admin and my knowledge of Red Hat is somewhat lacking. Based on that, it might be best to use a storage OS such as Openfiler or NexentaStor. Have you any opinion on which one is more stable and/or performs better?

IP Storage is one security zone; the SC is usually another.

iSCSI -> requires SC participation, and depending on how you are doing this, you may have created another attack point into the ESX host.

NFS does not require SC participation, so NFS cannot cross security zones.

How do the VMs access the ISOs?

Indirectly -> then use your normal IP Storage network

Directly -> then use a dedicated network

Also note that VCB can cross security zones as well, so it needs to be considered and used very carefully.

The VMs would access the storage indirectly through ESX. This device would be presenting storage to the ESX hosts. I believe I get what you are saying regarding security zones and how iSCSI requires participation by the SC, which is a security issue. However, my concern had more to do with the fact that this secondary storage device will have access to the storage network.

Unfortunately, we are limited in creating security zones and additional networks, because I am in a university environment where individual organizations cannot change the network design or add security zones. We must leave a SC on the IP storage network, because it is not possible to route the IP storage network back to our main network. We must keep our IP storage network and vMotion networks on switches that are isolated from the main university networks in order to remain compliant. Thanks.

Texiwill
Leadership

Hello,

I can understand that from a pure load issue. What I meant is: is there a greater load placed on the servers merely from having to use two different storage protocols (NFS & iSCSI)? Or is it better to use iSCSI on the test/dev storage device so that we are only configuring and using one storage protocol?

There is some overhead within the vmkernel, but that is also to be expected.

I think this gets to the heart of the matter. I am not sure using a full OS is the way to go for me. I am really a Windows admin and my knowledge of Red Hat is somewhat lacking. Based on that, it might be best to use a storage OS such as Openfiler or NexentaStor. Have you any opinion on which one is more stable and/or performs better?

Openfiler is based on Linux or FreeBSD. I have yet to look into either in much detail, but in any case they are both full OSes. Openfiler is much easier to use, however, and can speak both iSCSI and NFS. If it were me and I knew nothing of Linux, I would use Openfiler.

The VMs would access the storage indirectly through ESX.

Great, then it can participate in your existing IP Storage network. Set up the web (management) access for Openfiler on the same network as the management appliance for the existing SAN, and then have your data run on the IP Storage network.
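If you go the Openfiler route, that management/data split can also be enforced on the box itself. A rough sketch, assuming eth0 faces the management network and eth1 the IP Storage network (interface names and subnets are hypothetical; Openfiler's web UI normally listens on https port 446):

    # Management UI (https/446) reachable only from the management subnet
    iptables -A INPUT -i eth0 -s 192.168.10.0/24 -p tcp --dport 446 -j ACCEPT
    # Storage protocols (portmapper/NFS/iSCSI) only from the storage subnet
    iptables -A INPUT -i eth1 -s 10.0.50.0/24 -p tcp -m multiport --dports 111,2049,3260 -j ACCEPT
    iptables -A INPUT -i eth1 -s 10.0.50.0/24 -p udp -m multiport --dports 111,2049 -j ACCEPT
    iptables -A INPUT -i eth1 -j DROP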


Best regards,

Edward L. Haletky

VMware Communities User Moderator

====

Author of the book 'VMware ESX Server in the Enterprise: Planning and Securing Virtualization Servers', Copyright 2008 Pearson Education.

Blue Gears and SearchVMware Pro Blogs: http://www.astroarch.com/wiki/index.php/Blog_Roll

Top Virtualization Security Links: http://www.astroarch.com/wiki/index.php/Top_Virtualization_Security_Links

--
Edward L. Haletky
vExpert XIV: 2009-2023,
VMTN Community Moderator
vSphere Upgrade Saga: https://www.astroarch.com/blogs
GitHub Repo: https://github.com/Texiwill