I was looking for some recommendations on configuring the networking of our new ESX 3.0.1 server; we are running it as a single host. We have not implemented VirtualCenter yet and just have this new standalone box. We want to upgrade our 2.5.3 servers to 3.0.1, and we have a new box.
It has 8 NICs installed, and the server is connected to 2 separate switches. I was wondering if you guys think I should implement a load balancing/failover setup for the virtual switches in VMware?
I know that on our HP servers running Windows we do load balancing/failover using the HP driver, but doing this in VMware is new to me.
I was thinking of creating a new virtual switch in VMware and adding two NICs to it (one connected to each physical switch); then whatever virtual machines I add to that vSwitch should be able to load balance/fail over?
Thanks
Glenn
Glenn, you are along the right lines...
If you wanted to, in this environment you could dedicate 7 NICs to the virtual machines and 1 NIC to the Service Console.
You can do this because, as you state, you have a standalone ESX server, so there is no NIC requirement for VMotion and no need to provide redundancy for the Service Console, as you're not running HA (AAM).
You can configure the vSwitches to use these NICs with esxcfg-vswitch; run it with --help to bring up the options...
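For example, a minimal sketch run on the ESX host's Service Console (the vSwitch name, port group name, and vmnic numbers are assumptions for illustration; check esxcfg-vswitch -l on your box first):

```
# List existing virtual switches, port groups and uplinks
esxcfg-vswitch -l

# Create a vSwitch for VM traffic
esxcfg-vswitch -a vSwitch1

# Link two physical NICs, one cabled to each physical switch
esxcfg-vswitch -L vmnic1 vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1

# Add a port group the virtual machines will connect to
esxcfg-vswitch -A "VM Network" vSwitch1
```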
In short, yes, your idea of creating a vSwitch and adding 2 NICs, one going to each switch, is fine.
I would be more inclined to assign 2 NICs to the Service Console. Yes, as Conyards mentions, you don't run HA (yet?), but it is always beneficial to have SC redundancy anyway, especially with the number of NICs you have.
Also:
1. What switches are you connecting to?
2. What speed will the NICs be configured at?
Answers to these can affect the NIC usage. For example, if you are connecting to Cisco switches at 1 Gbps, it is best practice to set auto/auto at both ends.
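To check (and, if needed, set) the speed/duplex on the ESX side, something like this should work from the Service Console (the vmnic name is an example):

```
# Show link state, speed and duplex for all physical NICs
esxcfg-nics -l

# Set a NIC to auto-negotiate (matches the auto/auto best practice)
esxcfg-nics -a vmnic1
```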
Also, if you connect one vSwitch with two outbound adapters to the same physical switch, you can create a channel on that switch and use the "route based on IP hash" policy. That gives you more efficient load balancing. Your switch must support static IEEE 802.3ad link aggregation; note that a channel cannot normally span two separate physical switches.
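On the Cisco side that would look roughly like this (a sketch; the interface range and channel-group number are assumptions, and "route based on IP hash" needs a static EtherChannel, i.e. channel mode "on", not LACP negotiation):

```
interface range GigabitEthernet1/1 - 2
 switchport
 switchport mode access
 channel-group 1 mode on
```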
How many VMs will you be looking to run, and what network load is expected? The beauty of ESX networking is its configurability. I have 4 dual-port NICs in our DL585s. I bind one port on 3 physical NICs to one vSwitch and the other port on each of those NICs to another vSwitch, so 2 vSwitches with 3 physical ports each. I then keep one NIC port as a spare, ready to be drawn into a bond should a NIC port fail somewhere. By plugging the cables into different physical switches, I can lose 2 out of 3 physical NICs and one physical switch and still have network connectivity on all my VMs (albeit with obviously reduced performance). The level of resilience you get depends on the number of NICs you can assign to the VMs.
Thanks for the input, guys. We are connecting back to a Cisco Catalyst 6509 on each side.
We are going to be moving to VMotion in a year or so, so would it be wise to plan my NIC configuration now around the future needs of the full Virtual Infrastructure?
Should I create a NIC team dedicated to the Service Console that VMotion can also use once implemented?
My other question is: am I better off creating 1 virtual switch with 6 NICs assigned to it and adding ALL virtual machines on that box to it, or creating 3 virtual switches with 2 NICs each, where I would manually have to disperse the virtual machines among them?
-Glenn
Glenn,
How about this (very much a framework, and influenced by VM numbers and network loads):
NIC1 - Service Console - going to physical switch1
NIC2 - Standby Service Console - going to physical switch2
NICs 1 and 2 above can use 100 Mbps switch ports to save premium Gb ports (if applicable in your network). This gives you NIC and switch resiliency.
NIC3 - VMotion - 1 Gbps
If you configure all VMotion ports to go to the same switch you will get optimum performance (traffic stays on the same backplane), though you would end up with a single point of failure (the switch). So it's a balance between performance and resilience.
NIC4 - VM traffic - going to physical switch1
NIC5 - VM traffic - going to physical switch1
NIC6 - VM traffic - going to physical switch2
NIC7 - VM traffic - going to physical switch2
Standby NIC
NIC8 - could be used for VM traffic or perhaps for Vmotion above to mitigate single point of failure
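A sketch of how that layout might be created from the Service Console (all vSwitch names, port group names, vmnic numbers, and the VMotion IP are assumptions for illustration; the active/standby ordering for the SC pair is then set in the VI Client):

```
# Service Console with an active + standby uplink
esxcfg-vswitch -a vSwitch0
esxcfg-vswitch -L vmnic0 vSwitch0        # NIC1 -> physical switch1
esxcfg-vswitch -L vmnic1 vSwitch0        # NIC2 -> physical switch2
esxcfg-vswitch -A "Service Console" vSwitch0

# VMotion on its own vSwitch
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1        # NIC3
esxcfg-vswitch -A "VMotion" vSwitch1
esxcfg-vmknic -a -i 10.0.0.10 -n 255.255.255.0 "VMotion"

# VM traffic across four uplinks split over both physical switches
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic3 vSwitch2        # NIC4 -> switch1
esxcfg-vswitch -L vmnic4 vSwitch2        # NIC5 -> switch1
esxcfg-vswitch -L vmnic5 vSwitch2        # NIC6 -> switch2
esxcfg-vswitch -L vmnic6 vSwitch2        # NIC7 -> switch2
esxcfg-vswitch -A "VM Network" vSwitch2
```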
Bear in mind that for each vSwitch you create this adds a small memory overhead in the Service Console.
So do you think I should add a second NIC to vSwitch1 for the VMotion connection? Granted, we can always steal two NICs from vSwitch2 later and give them back to the VMotion vSwitch1.
Also, I noticed a tab in the configuration that allows you to override the default settings and specify the team as active/active, or have one NIC be standby while the other is active. What is the benefit of doing that instead of just keeping everything active?
As I said, it's a balancing act, so you could add the second NIC to VMotion; personally I would leave 1 NIC for VMotion and deal with the failure of the NIC or switch when it happens.
I only use standby adapters for the Service Console as this only uses one pNIC and if that fails I have a standby adapter. For VM traffic I would have all adapters active to offer best network utilisation for the VMs while also offering resiliency at the same time.
To save on switch management you can set up vSwitch0 as follows:
NIC1 - Service Console - going to physical switch1
NIC3 - Vmotion - 1GBps
Anyone see any issues with that?
I think it's just easier for management.
chicagovm, are you saying you can use the same NIC for the Service Console and VMotion?
Also, what do you guys use for link failure detection? Do you use link status, or beaconing? What are the pros/cons of each?
Nope,
I am saying this to make vSwitch management easier; why create so many different vSwitches?
You can create one vSwitch for the SC and VMotion using different pNICs.
So you would have fewer vSwitches: just create a port group for the SC and one for VMotion in the same vSwitch, each using different NICs.
Hope that makes sense.
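From the CLI that could look something like this (a sketch; names, vmnic numbers, and the VMotion IP are assumptions, and the per-port-group active/standby ordering is set afterwards in the VI Client's NIC Teaming tab):

```
# One vSwitch carrying both management port groups
esxcfg-vswitch -a vSwitch0
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic7 vSwitch0

# Two port groups on the same vSwitch
esxcfg-vswitch -A "Service Console" vSwitch0
esxcfg-vswitch -A "VMotion" vSwitch0
esxcfg-vmknic -a -i 10.0.0.10 -n 255.255.255.0 "VMotion"
```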
How do I create a port group within a virtual switch? When I go to the virtual switch's properties and select "Add" I only have the Connection Types "Virtual Machine", "VMkernel", and "Service Console". Am I looking in the wrong place?
I might have figured out where you were trying to point me... let me know if this is right:
I added two NICs to vSwitch0 (vmnic0 and vmnic7). Within the properties of that switch I have the vSwitch, a Service Console, and a VMkernel configured.
I then went into the Service Console and edited its NIC Teaming tab, setting vmnic0 as the active adapter with vmnic7 as the standby.
I then went into the VMkernel options, edited its NIC Teaming tab, and set vmnic7 as the active and vmnic0 as the standby. Is that what you were recommending?
Exactly!! Nice!!
Yeah, so you should have alternating, opposing vmnics as the standby per port group (SC, VMotion).
The final result is only one vSwitch with 2 port groups. The port groups use different NICs, so there's no contention.
Thanks!
How much memory do vSwitches use in the SC/VMkernel?
Is that a big value?
We are using a DL585 with 20 Gigabit ports and many vSwitches.
Is it better to work with fewer vSwitches and therefore more port groups and VLAN tagging?
The server has 64 GB RAM, so I think it's not such a big problem if the SC uses a little more memory. Or is that the wrong assumption?
Steffen
I haven't found any published figures on the memory overhead, but I have read in one of the VMware PDFs that the more virtual ports a virtual switch has, the more memory is needed to support them. I don't think it's a great overhead, but one to be aware of.
OK. That's nice to hear, because changing the switches into one or two vSwitches with port groups would be a lot of work.
SteffenHKA
If you configure more ports than the default number on a vSwitch and they get utilized (say, with broadcast transmissions on that vSwitch), then the VMkernel will consume more memory, so I wouldn't increase the number of ports without a reason.
Hi all,
Is it possible/recommended to use a physical NIC for each VM?
I have a PowerEdge 2800 with 6 NICs and 3 VMs, connected to a PowerVault 5324 switch. This is a semi-productive server, so I don't need failover NICs. I had in mind configuring a physical NIC for each VM and one NIC for the Service Console; 2 NICs would be spare for forthcoming VMs.
Thanks!
Florian