VMware Cloud Community
WillGreen
Contributor

VI & Network Separation

I'm currently planning a new virtualized infrastructure for some business-critical applications. The current design includes two HP c-Class blade chassis in separate racks for resiliency, plus NFS-based storage (probably NetApp). My question concerns how I should connect the various components together. The HP blades I plan to use (BL495c) include two 10 Gb ports, with the option of additional ports via two mezzanine cards.

I imagine I need separate networks for:

  • Application traffic to corporate LAN

  • NFS storage traffic

  • VM management (vCenter) and VMotion (or should these two be separate?)

My question is basically: how much do I need to separate these? Corporate LAN traffic will definitely go via separate physical network ports, but can I combine the remainder on one redundant gigabit network using VLANs? In such a setup each blade would have four network ports: two for application traffic and two for everything else. I would connect the two storage/VMware switches on one chassis to those on the other to provide a redundant, self-contained network.
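To illustrate what I have in mind, here is a rough sketch of that combined layout scripted against the vSphere API with pyVmomi. This is only a sketch: the host name, vmnic names, VLAN IDs and addresses are placeholders, and the portgroup names are mine, not a recommendation.

# Sketch only: one redundant vSwitch carrying management, VMotion and NFS
# traffic on separate VLANs. All names, VLAN IDs and addresses are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="esx-host.example.com", user="root", pwd="***",
                  sslContext=ssl._create_unverified_context())  # lab use only
content = si.RetrieveContent()
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]
ns = host.configManager.networkSystem

# One vSwitch with two uplinks gives the redundant "everything else" network.
ns.AddVirtualSwitch(
    vswitchName="vSwitch1",
    spec=vim.host.VirtualSwitch.Specification(
        numPorts=64,
        bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic2", "vmnic3"])))

# A VLAN-tagged portgroup per traffic type.
for name, vlan in [("Management", 10), ("VMotion", 20), ("IPStorage", 30)]:
    ns.AddPortGroup(portgrp=vim.host.PortGroup.Specification(
        name=name, vlanId=vlan, vswitchName="vSwitch1",
        policy=vim.host.NetworkPolicy()))

# VMkernel interfaces for the VMotion and NFS traffic.
for pg, ip in [("VMotion", "192.168.20.11"), ("IPStorage", "192.168.30.11")]:
    ns.AddVirtualNic(portgroup=pg, nic=vim.host.VirtualNic.Specification(
        ip=vim.host.IpConfig(dhcp=False, ipAddress=ip, subnetMask="255.255.255.0")))

Disconnect(si)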

The alternative is to use an additional network card with two or four gigabit ports and separate all the traffic; however, this would add significantly to the cost, as it would entail buying extra switch modules.

6 Replies
Texiwill
Leadership

Hello,

It is always best to separate all networks from a security perspective, yet some networks can be overlapped with no issues. You have four basic networks to worry about:

SC/Management Network

VMotion Network

Storage Network

Production Network

Refer to my Network Topology blogs on how best to use 2, 4, and 6 pNIC configurations.

Remember that the use of VLANs does NOT guarantee security, and subnets absolutely do not. The blogs will help you lay out the proper networking.
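As a very rough illustration only, one plausible way to pair pNICs with those four networks on 4 and 6 pNIC hosts is sketched below; the exact pairings recommended in the blogs may differ, and the vSwitch, vmnic and portgroup names are just placeholders.

# Illustrative only -- one plausible pNIC-to-network pairing, not necessarily
# the layout the Network Topology blogs recommend. All names are placeholders.
PNIC_LAYOUTS = {
    4: {  # management, VMotion and IP storage share one redundant pair
        "vSwitch0": {"uplinks": ["vmnic0", "vmnic1"],
                     "portgroups": ["Service Console", "VMotion", "IP Storage"]},
        "vSwitch1": {"uplinks": ["vmnic2", "vmnic3"],
                     "portgroups": ["Production VMs"]},
    },
    6: {  # IP storage gets its own redundant pair
        "vSwitch0": {"uplinks": ["vmnic0", "vmnic1"],
                     "portgroups": ["Service Console", "VMotion"]},
        "vSwitch1": {"uplinks": ["vmnic2", "vmnic3"],
                     "portgroups": ["IP Storage"]},
        "vSwitch2": {"uplinks": ["vmnic4", "vmnic5"],
                     "portgroups": ["Production VMs"]},
    },
}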


Best regards,

Edward L. Haletky

VMware Communities User Moderator

====

Author of the book 'VMware ESX Server in the Enterprise: Planning and Securing Virtualization Servers', Copyright 2008 Pearson Education.

Blue Gears and SearchVMware Pro Blogs: http://www.astroarch.com/wiki/index.php/Blog_Roll

Top Virtualization Security Links: http://www.astroarch.com/wiki/index.php/Top_Virtualization_Security_Links

--
Edward L. Haletky
vExpert XIV: 2009-2023,
VMTN Community Moderator
vSphere Upgrade Saga: https://www.astroarch.com/blogs
GitHub Repo: https://github.com/Texiwill
WillGreen
Contributor

That's very helpful. Having six physical NICs per host does look like the best balance for my project if budget permits (the blades could be configured with 8 NICs, but I don't think the extra separation is justified for this project).

In the four-pNIC configuration: if pNIC1 were to fail, am I right in thinking storage traffic would fail over to pNIC0?

Message was edited by WillGreen to clarify the four-pNIC configuration question.

Texiwill
Leadership

Hello,

That is correct; however, you have to define that failover order when you create the IP storage portgroup.
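As a rough sketch only (the pyVmomi calls and names below are placeholders for whatever tooling you use; the VI Client NIC Teaming settings do the same thing), you give the IP storage portgroup an explicit failover order, with vmnic1 active and vmnic0 standby, so storage traffic moves to vmnic0 when vmnic1 fails:

# Sketch only: explicit failover order for the IP Storage portgroup.
# Assumes a 4-pNIC host where vSwitch0 uplinks are vmnic0/vmnic1; the host
# name, credentials, VLAN ID and portgroup name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="esx-host.example.com", user="root", pwd="***",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
ns = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0].configManager.networkSystem

teaming = vim.host.NetworkPolicy.NicTeamingPolicy(
    policy="failover_explicit",
    nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(
        activeNic=["vmnic1"],    # preferred uplink for storage traffic
        standbyNic=["vmnic0"]))  # takes over if vmnic1 fails

ns.UpdatePortGroup(pgName="IP Storage",
                   portgrp=vim.host.PortGroup.Specification(
                       name="IP Storage", vlanId=30, vswitchName="vSwitch0",
                       policy=vim.host.NetworkPolicy(nicTeaming=teaming)))
Disconnect(si)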


Best regards,

Edward L. Haletky

WillGreen
Contributor

One final point. If I want to be able to mount my NFS storage within the VMs, as well as have it available for ESX to boot VMDKs from, do I want a separate fifth network, and how would this affect the design of a 6 pNIC setup?

Thanks again,

Will

Texiwill
Leadership

Hello,

One final point. If I want to be able to mount my NFS storage within the VMs, as well as have it available for ESX to boot VMDKs from, do I want a separate fifth network, and how would this affect the design of a 6 pNIC setup?

Yes, that would be a separate network. Actually, it is not something I would do: if you needed to mount NFS within your VMs, I would use a different NFS server than the one assigned to ESX. There are several concerns with using the same server for both ESX and the VMs. The first is performance: there will be an impact, though how much I am not sure. The second is security: NFS is not the most secure protocol, so it may be possible for a VM to spoof an IP address and gain access to the file share that hosts the VMDKs, given the default security state of the portgroups and vSwitch. Lastly, you open several more attack vectors into the hypervisor, since each VM could act as an attack vector through the NFS server.

If it were me, I would add another pair of pNICs JUST for this new NFS server and consider it a fifth network. Or perhaps use a private vSwitch with a virtualized NFS server hanging off it, so that you do not require more physical networking, just virtual networking (this would depend on load). If you do this and use VMotion, you need to Allow Private Networks to Participate in VMotion.
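As a sketch of that second option only (all names are placeholders): a private vSwitch is simply a vSwitch created with no uplinks, and the virtualized NFS server and its client VMs then get vNICs on a portgroup attached to it.

# Sketch only: a private (internal-only) vSwitch for an in-guest NFS server.
# With no uplinks attached the traffic never leaves the host. The host name,
# credentials, vSwitch and portgroup names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="esx-host.example.com", user="root", pwd="***",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
ns = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0].configManager.networkSystem

ns.AddVirtualSwitch(vswitchName="vSwitchPrivate")   # no BondBridge -> no pNICs
ns.AddPortGroup(portgrp=vim.host.PortGroup.Specification(
    name="PrivateNFS", vlanId=0, vswitchName="vSwitchPrivate",
    policy=vim.host.NetworkPolicy()))
Disconnect(si)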


Best regards,

Edward L. Haletky

WillGreen
Contributor

It seems like it might be simpler and safer to store everything in VMDKs. I'll discuss this with the storage vendors.

Thanks for all your advice. I'm so glad I found this community while I'm still at the planning stage.
