VMware Cloud Community
FrostyatCBM
Enthusiast

ESXi 4.1 management + vMotion LAN (best practice?)

I'm just in the process of setting up some new ESXi 4.1 hosts (Dell R710s with 6 NICs each). We are using shared SAS for storage, so I don't need NICs for iSCSI. I was thinking I would set them up like this:

2 x NICs for LAN segment (e.g. 192.168.100.0/24)

2 x NICs for DMZ segment (public IP range)

2 x NICs for vMotion/VMkernel VMLAN segment (e.g. 10.10.10.0/24)

All these physical network segments will be tied together by our Cisco router (we will not be using VLANs). My questions concern firewalling and the IP addresses to give:

(a) to the ESXi hosts; and

(b) to our vCenter server

Am I right in giving both my ESXi hosts and my vCenter server addresses in the 10.10.10 range?

What ports will need to be open between my LAN and the VMLAN so that we can access everything whilst keeping the VM stuff relatively secure?
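
For what it's worth, here is roughly how I picture building that layout from the Tech Support console. The vmnic numbering, the extra vSwitch numbers and the .21 address are just placeholders, and vSwitch0 is the switch the installer creates with the management interface (vmk0) on it:

# placeholders: vmnic0/1 = management + vMotion, vmnic2/3 = LAN, vmnic4/5 = DMZ

esxcfg-vswitch -L vmnic1 vSwitch0                          # second uplink for the existing management vSwitch
esxcfg-vswitch -A VMOTION vSwitch0                         # port group for the vMotion VMkernel interface
esxcfg-vmknic -a -i 10.10.10.21 -n 255.255.255.0 VMOTION   # vMotion address in the 10.10.10.0/24 range

esxcfg-vswitch -a vSwitch1                                 # LAN segment
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1
esxcfg-vswitch -A "LAN VMs" vSwitch1

esxcfg-vswitch -a vSwitch2                                 # DMZ segment
esxcfg-vswitch -L vmnic4 vSwitch2
esxcfg-vswitch -L vmnic5 vSwitch2
esxcfg-vswitch -A "DMZ VMs" vSwitch2

esxcfg-vswitch -l                                          # confirm the layout
esxcfg-vmknic -l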

9 Replies
jpdicicco
Hot Shot

Short answer: yes, this looks like a good arrangement of IP addressing, and having vCenter on the same segment as your host management interfaces is a good idea.

You can find a list of the ports needed for VMware products as they connect to each other here: http://kb.vmware.com/kb/1012382. Take a look at that article and/or the install guide for vCenter to answer questions regarding firewalling and what ports you may need open.
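
As a very rough illustration only (the KB above is authoritative, and your Cisco ACL syntax will differ; these are generic Linux iptables rules), the LAN-to-management traffic you usually need to allow for the vSphere Client and vCenter boils down to a handful of ports:

# LAN (192.168.100.0/24) -> management segment (10.10.10.0/24); check the KB for the full port list
iptables -A FORWARD -s 192.168.100.0/24 -d 10.10.10.0/24 -p tcp --dport 443 -j ACCEPT  # HTTPS: vSphere Client / vCenter / host management
iptables -A FORWARD -s 192.168.100.0/24 -d 10.10.10.0/24 -p tcp --dport 902 -j ACCEPT  # host agent traffic and remote console data
iptables -A FORWARD -s 192.168.100.0/24 -d 10.10.10.0/24 -p tcp --dport 903 -j ACCEPT  # VM console on older clients
iptables -A FORWARD -s 192.168.100.0/24 -d 10.10.10.0/24 -j DROP                       # block everything else by default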

It makes sense to keep your host networking segregated from your guest networks from a security perspective. However, this can get tricky if you have vCenter/VUM installed on a VM for obvious reasons. Other than that, you don't need vCenter or your hosts to talk to your guests over the network.

If you have a high-security environment, you will want to look at the vShield line of products from VMware as well as traditional firewalls.



Happy virtualizing!

JP

Please consider awarding points to helpful or correct replies.

FrostyatCBM
Enthusiast

Assuming that my vCenter server is a VM, this means I would need to permit VM networking on my 10.10.10.0/24 network as well ... I assume this is what you mean by it being tricky ... so ...

Does that in turn mean that I'm actually not achieving much in the way of added security? Would I be better off, from a simplicity point of view, putting all of that on the LAN segment and only having the vMotion stuff on its own segment?

I suppose an alternative is that I could have a small physical server as my vCenter server, but I would rather avoid that if I can. What do most people do? (physical or virtual vCenter server?).

chadwickking
Expert

Most people run vCenter in a VM because of the flexibility and high availability that vSphere gives you.






Regards,

Chad King
VCP4
Twitter: http://twitter.com/cwjking | virtualnoob.wordpress.com

If you find this or any other answer useful, please consider awarding points by marking the answer correct or helpful.
FrostyatCBM
Enthusiast

After being distracted with a whole bunch of other projects, I am back on to this one.  I decided to add some more ports to my servers, so I am hoping to achieve the following configuration, with all network segments being physical (no VLANs):

2 NICs for LAN

2 NICs for DMZ

2 NICs for MANAGEMENT (VMkernel + the vCenter VM)

2 NICs for VMOTION traffic

This is the first time I have worked with ESXi 4.1 and I've had a stab at setting up some vSwitches to test whether this will work or not:

[Attachment: Networking.jpg (screenshot of the vSwitch configuration)]

The setup seemed really straightforward until I got to adding the VMOTION network.  I had intended to give vMotion its own gigabit switch and to physically isolate it from the other network segments (i.e. basically no "gateway" and no uplink that would physically connect it to the other segments).

But when I added vSwitch3 (see above) and configured the IP address and subnet mask, it set the gateway to the same address as the other VMkernel port, even though the subnets are different.  In other words:

VMkernel vmk0 is 10.20.30.21/24 with gateway 10.20.30.1 (for ESXi management traffic)

VMkernel vmk1 is 10.99.99.21/24 with gateway 10.20.30.1 (for ESXi vMotion traffic)

I tried changing the gateway on vmk1 from 10.20.30.1 to 10.99.99.1, but this immediately killed management access to the host, so I had to go to the server room console to change the gateway back again.
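
For reference, the relevant Tech Support console commands are along these lines (ESXi keeps a single VMkernel default gateway, and the vMotion interface itself needs none):

esxcfg-route -l           # show the VMkernel routing table and the single default gateway
esxcfg-route 10.20.30.1   # point the default gateway back at the management router
esxcfg-vmknic -l          # confirm the vmk0 and vmk1 addresses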

I'm wondering what the approved method is for separating management and vMotion traffic with ESXi 4.1 ... am I on the right track, or should I not have physically separated these networks? And secondly, will it work as configured above, or will vMotion fail because it cannot find the gateway?

jpdicicco
Hot Shot

Your original setup was great, actually.  I wouldn't recommend a dedicated vMotion vSwitch with its own NICs unless you intend to do a lot of vMotion on a daily basis.  However, if you intend to use FT, then you will need NICs for that.  But since you don't mention FT, I would move your vMotion back up to vSwitch0.  I am assuming that your "Management VMs" network is for vCenter and is on the same VLAN as your host management network.

In the config for vSwitch0, you should set the vMotion network to use nic1 first.  That way, when both NICs are available, the vMotion traffic has a dedicated NIC while the other port groups use nic0.  This should provide adequate speed without using an additional pair of interfaces.

As for routing in ESXi, there is only one default gateway.  Note that both NICs (management and vMotion) are marked vmk#; they are both VMkernel NICs.  This is because there is no COS in ESXi, which is where the second gateway lived in ESX.  However, you don't need one.  Just put all vMotion interfaces in one VLAN on a subnet that is large enough to accommodate one address for each host, as you planned.  So in a /24 you will have enough for 254 hosts.  You don't need to route traffic on that interface; it merely needs to be able to talk to the vMotion NICs on the other hosts.  You can use a dedicated physical switch if you want the performance benefits, but it is unnecessary just for vMotion.
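
From the Tech Support console that looks roughly like this (the port group name and vmk number are just examples, the exact flag syntax varies a little between builds, and vMotion can equally be enabled on the vmknic through the vSphere Client):

esxcfg-vswitch -A VMOTION vSwitch0                         # vMotion port group on the existing management vSwitch
esxcfg-vmknic -a -i 10.99.99.21 -n 255.255.255.0 VMOTION   # VMkernel interface on the non-routed vMotion subnet, no gateway needed
vim-cmd hostsvc/vmotion/vnic_set vmk1                      # enable vMotion on that vmknic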

Happy virtualizing!

JP

Please consider awarding points to helpful or correct replies.
FrostyatCBM
Enthusiast

Thanks for the extra info.

Yes, in vSwitch0 the MANAGEMENT VMs port group is for the vCenter Server and is VLAN(0), as I am not using VLANs anywhere.  I have now set up vSwitch0 as follows:

MANAGEMENT Network NIC Teaming (vmk0):  vmnic2 Active; vmnic6 Standby

MANAGEMENT VMs (for vCenter) NIC Teaming: vmnic6 Active; vmnic2 Standby

I think I'm going to keep my vMotion stuff fully separated, just because I can ... I have the NICs and a spare physical switch, so there's no real reason not to do it, I guess.  I don't expect a huge volume of vMotion traffic, except in cases where we have a loss of power to one of the ESXi hosts.  I'm using Eaton UPS gear and I believe it has vCenter integration such that I can configure automatic vMotion of VMs off a host in the event that the host falls back to battery power.  I haven't gotten around to that yet.  But it's possible that in the event of a power disruption I could get a bunch of VMs all wanting to move at once, and it might be helpful to isolate the vMotion traffic just in case of that scenario.

Re: FT ... we're not using that right now, but I suppose it's possible we might want to in the future.  Each of my servers has an additional 2-port gigabit card which is currently unused (vmnic8 and vmnic9) ... so if I ever need to do FT I guess I can put those into use.

So it looks like I'm all set for a full test.  Fingers crossed!  Using the Tech Support SSH console, I've turned on jumbo frames for the vmnics/vSwitch on my VMOTION network (vSwitch3).  I'm heading off on leave for 4 weeks, so the testing of all this will have to wait until I get back.
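
For anyone wanting to do the same, the change from the Tech Support console is roughly the following. Treat it as a sketch rather than a transcript: on 4.x the VMkernel interface generally has to be removed and re-added to change its MTU, and the physical switch ports carrying VMOTION need jumbo frames enabled end to end as well.

esxcfg-vswitch -m 9000 vSwitch3                                    # raise the MTU on the vMotion vSwitch
esxcfg-vmknic -d VMOTION                                           # remove the existing vMotion vmknic
esxcfg-vmknic -a -i 10.99.99.21 -n 255.255.255.0 -m 9000 VMOTION   # re-add it with a 9000-byte MTU
esxcfg-vswitch -l                                                  # confirm the new MTU values
esxcfg-vmknic -l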

Thanks again for the input; much appreciated.

jpdicicco
Hot Shot

Glad to help.  As a next step, I recommend you spend some time with a vMA.  You probably want to have it on hand for upgrades and other management tasks, and it's better to have it deployed ahead of time for that purpose.

Good luck!

JP

Happy virtualizing!

JP

Please consider awarding points to helpful or correct replies.
JediMasterMero
Enthusiast

Hi guys,

I am currently going through the same issue. The difference is that I have a 10 GbE CNA which provides me with 2 x 10 GbE NICs, and I would like to use the 10 GbE links for vMotion. However, I had many issues configuring a dedicated VMkernel NIC for vMotion on the same management subnet. One of the problems I faced is that vMotion traffic from one host goes out the 10 GbE interface and lands on the 1 GbE interface of the other host, where the management interface (vmk0) resides; the result was a temporary loss of the management interface, since the 1 GbE link was flooded. I am thinking of having a separate VLAN for vMotion while the gateway remains the same for management.
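
Something like the following is what I have in mind, assuming the vMotion port group sits on vSwitch1 and VLAN 99 is free (both assumptions; the physical switch ports also have to trunk that VLAN):

esxcfg-vswitch -v 99 -p VMOTION vSwitch1                   # tag the vMotion port group with its own VLAN
esxcfg-vmknic -a -i 10.99.99.31 -n 255.255.255.0 VMOTION   # vMotion vmknic in a subnet dedicated to that VLAN
esxcfg-route -l                                            # management keeps the existing default gateway; the vMotion VLAN needs none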

J1mbo
Virtuoso

As this thread demonstrates, it is always best to keep the management LAN clear and dedicated to management traffic: not because it needs any particular bandwidth, but because there needs to be certainty of available bandwidth.
