VMware Cloud Community
RaniBaki
Contributor

Virtual Machine Network behind a firewall

I'm building a set of new ESX 3.5 servers. I need to set up the virtual machine networks behind a firewall due to IP addressing conflicts between two networks. I would like some advice on the network setup proposed below.

The servers are HP 685c's. I have 12 NICs and 6 Ethernet switches to work with. I also have a dual-port HBA with 2 FC switches. I will need to connect to an EMC frame and a NetApp filer using NFS. Will this solution work? Are there any other suggestions? More importantly, will VMotion have any issues?

Network 1 will have 8 NICs connected to it and will also have the VC server and the NetApp Storage.

NIC 1 - Service Console 1

NIC 2 - VMotion 1

NIC 3 - Service Console 2 (backup)

NIC 4 - VMotion 2 (backup)

NIC 5 - 8 - NFS

Network 2 is where the VMs will reside and is behind a NATed firewall.

NICs 9 - 12 - trunked to multiple VLANs

Of course the design will have pairs of ports redundant on multiple switches, etc.

Thanks

11 Replies
weinstein5
Immortal

One thing I notice: I do not think you can have two VMotion VMkernel ports.

If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful
RaniBaki
Contributor

Yes, sorry, meant it as a second interface for vmotion.

But is it an issue to have the vmotion interfaces on an isolated network from the VMs?

ctfoster
Expert

But is it an issue to have the vmotion interfaces on an isolated network from the VMs?

No issues. In fact this is the recommended setup.

kjb007
Immortal

You have NICs 5-8 dedicated for NFS. NFS datastores can't really be load-balanced across multiple pNICs. If you have multiple NFS datastores, then you will have pretty much one datastore per pNIC, but you can't spread one datastore over multiple pNICs. You can, however, use the extra NICs for redundancy in case of NIC failure.

Also, you're teaming 4 NICs together. Are they going to the same physical switch? If they go to the same switch, those switch ports should be configured as a static link aggregate (EtherChannel). If you have some NICs going to one switch and others going to a different switch, then you won't really see the benefits of the aggregation unless you put some pNICs into standby.

Can you clarify how many physical switches the teamed NICs will be going to?

-KjB

vExpert/VCP/VCAP vmwise.com / @vmwise -KjB
RaniBaki
Contributor

I have a total of 6 ethernet switches.

Each set of pNICs are mapped to a set of 2 physical switches for redundancy.

I will have multiple datastores, so do you recommend mapping each pNIC to a datastore or set of datastores? If I do that, how can I have redundancy across pNICs?

Thanks

kjb007
Immortal

Yes, if you have multiple datastores, then you will be able to spread the load over the pNICs, but you will need to have multiple IPs on your NFS server. Meaning, in order to spread the load, you have to have a unique src-dst combination. So, if you have 1 vmkernel port and 1 NFS IP, then you have 1 src-dst combination, and no matter how many NICs you have, there's no way to spread the load. If you have 1 vmkernel port and 2 NFS IPs, then you have 2 src-dst combinations, and you can use at most 2 different NICs. I hope that makes sense.
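As a rough sketch on the ESX 3.5 service console, you could mount one datastore against each NFS server IP to get two src-dst combinations (the IPs, exports, and datastore names here are hypothetical examples):

```shell
# Mount one NFS datastore per NFS server IP so traffic can hash
# to different uplinks (names and addresses are examples only):
esxcfg-nas -a -o 192.168.10.21 -s /vol/vmfs_a nfs_ds_a
esxcfg-nas -a -o 192.168.10.22 -s /vol/vmfs_b nfs_ds_b

# List the mounted NFS datastores to confirm:
esxcfg-nas -l
```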

How many VLANs are you trunking for your VM networks? It may be better to use 1 vSwitch with 1 NIC in 1 physical switch and 1 NIC in another physical switch, rather than 2 in each, to cut down on some of the complexity. If you team 4 NICs together, with 2 NICs per physical switch, then you'll have to configure ether channel.

Since the channel is only good within a switch, you'll have 2 separate channels. In order to use this efficiently, you'll have to put the 2 ether channel NICs into active/active, and the other 2 as standby, either per switch or per portgroup, otherwise, your load balancing may get skewed if you split traffic between the physical switches.

Hope that makes sense.

-KjB

vExpert/VCP/VCAP vmwise.com / @vmwise -KjB
RaniBaki
Contributor

The NFS server is currently configured with 3 trunked interfaces. I'll find out from our storage team if we can split them. But my concern is if I do that, I may not have redundancy in case of a pNIC or pSwitch failure.

I have about 12 vlans to trunk. So I don't think I'll be able to map a single pNIC to each.

kjb007
Immortal

If you have trunked interfaces on your NFS server, I can only assume that you have multiple VLANs. If you have multiple VLANs, then you should have multiple IPs, so that may be ok, but in this case, I would use 2 pNICs instead of the 4.

With 12 VLANs, I would use three virtual switches, with 2 NICs per vSwitch, going to separate physical switches. I would then have both NICs as active/active, and have 4 VLANs per vSwitch. I would also go into the portgroup configuration, override failover order, and spread the primary and secondary NICs over the portgroups. Meaning, have vmnic4/5 for a set of portgroups, and vmnic5/4 for the others on the same vSwitch.
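One of the three vSwitches could be sketched like this from the service console (vSwitch, vmnic, and VLAN numbers are examples; the per-portgroup failover override itself is done in the VI Client):

```shell
# Create a vSwitch and attach two uplinks that go to
# different physical switches (example NIC names):
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic4 vSwitch1
esxcfg-vswitch -L vmnic5 vSwitch1

# Add one port group per VM VLAN and set its 802.1Q VLAN ID:
esxcfg-vswitch -A VM_VLAN101 vSwitch1
esxcfg-vswitch -v 101 -p VM_VLAN101 vSwitch1
esxcfg-vswitch -A VM_VLAN102 vSwitch1
esxcfg-vswitch -v 102 -p VM_VLAN102 vSwitch1
# ...repeat for the remaining VLANs carried on this vSwitch
```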

-KjB

vExpert/VCP/VCAP vmwise.com / @vmwise -KjB
RaniBaki
Contributor

No, on the NFS server, they're trunked as an aggregate to a single VLAN. I am checking with the storage team though.

Although I kind of understand what you're saying about the VM network, I've never set up multiple VLANs, multiple vSwitches, or portgroups in ESX. I would also guess that I would need an IP address on each VLAN to configure. Is that correct? How do I set the NICs as active/active? Is there a good document out there that goes over all this?

Thanks

kjb007
Immortal

Your NICs are uplinks, so they don't require IP addresses. They just have to be on the same VLAN that the VMs need to connect to and have IP addresses in.

Setting NICs active/active is done by editing the vSwitch properties, and modifying the values in the NIC teaming section.
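From the service console you can at least verify which uplinks each vSwitch has before adjusting the policy (the teaming section itself is edited in the VI Client, as described above):

```shell
# Show all vSwitches with their port groups, VLAN IDs,
# and the uplink NICs currently assigned to each:
esxcfg-vswitch -l
```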

On the NFS side, yes, it would benefit the NFS server to have multiple NICs because they have clients that have different IPs. In your case, it is different, because you, as a client, want to load balance as well. Instead of them splitting anything on the server, it would be better to add an additional IP address to the NFS server, which you can then utilize. This may all be a moot point if you're not going to be running a large number of VMs. You may not need the additional throughput afforded by link aggregation, and may be just fine with the standard 1 Gb link to the network.

-KjB

vExpert/VCP/VCAP vmwise.com / @vmwise -KjB
RaniBaki
Contributor

Thanks for all your help.
