I have seen this error appear after upgrading to ESX 3.5 / VC 2.5 on HA-enabled systems. Somehow ESX thinks it is not redundant enough for HA, I guess. Does anyone know the exact parameters ESX uses to decide this? My setup has one service console connected to a vSwitch, which has two physical uplinks. Obviously, ESX does not think this is redundant enough. Anyone got more info on how to get rid of this message (e.g. how you should design your network setup in the eyes of ESX HA)?
I think you can move one of the two VM port groups, for instance the one on vSwitch2, to vSwitch0 (the same switch as the service console port), and then move the vmnic from vSwitch2 to vSwitch0. vSwitch0 will then have two vmnics and the message will disappear.
You should consider moving to dot1q trunks (with VLANs). If you only have three NICs available, you could create a single big vSwitch with three uplinks (all dot1q trunks). The vSwitch should have several portgroups. You can also prioritise traffic along certain physical uplinks. Works great, and if an uplink fails it will fail over to another link.
If you do not have the option of VLANs, I would recommend creating one vSwitch with two uplinks, and combining your service console and VMs there. Use the third uplink for VMotion and you'll be fine...
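From the ESX service console, that layout could be sketched with `esxcfg-vswitch`. This is a hedged sketch only: the vmnic numbering and portgroup names are assumptions, and ESX normally creates vSwitch0 with the Service Console portgroup during install.

```sh
# Sketch: assumes vmnic0-vmnic2 and ESX 3.x esxcfg tools.
# vSwitch0: Service Console + VMs, two uplinks for redundancy
esxcfg-vswitch -L vmnic0 vSwitch0        # link first uplink
esxcfg-vswitch -L vmnic1 vSwitch0        # link second uplink
esxcfg-vswitch -A "VM Network" vSwitch0  # portgroup for the VMs

# vSwitch1: VMotion on the remaining uplink
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -A "VMotion" vSwitch1

esxcfg-vswitch -l                        # verify the resulting layout
```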
hey, thanks for your replies
All 4 pNICs are dot1q trunk ports.
I thought it would be a good idea to separate the VMkernel (VMotion) traffic from the VM traffic, and also the backend (backup) traffic from the frontend traffic.
vSwitch0 is for VMotion (1 pNIC)
vSwitch1 is for VM frontend traffic (2 pNICs)
vSwitch2 is for VM backend & backup traffic (1 pNIC)
maybe I have to move the VMkernel and service console ports from vSwitch0 to vSwitch2 and add vmnic0 to that switch.
Then I'd have two vSwitches, each with two pNICs.
Or maybe VMware will correct this with a patch.
It is not something you would like to have patched. Always make sure you have more than one physical connection!
As I understand, you have 4 pNICs. In that scenario, with dot1q, I would recommend one of the following:
1) create a single vSwitch with four uplinks, put all pNICs into one big team, then prioritise your connections (for example, pNIC0=SC, pNIC1=VMotion, pNIC2,3=VMs) and set standby NICs for each in order to have failover
2) create two vSwitches, each with two pNICs. One vSwitch handling VMotion and SC (make each active on one link and standby on the other), and the second vSwitch to handle all VM activity.
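Option 2 could look roughly like this from the service console (again a sketch; NIC numbers and portgroup names are assumptions, and the per-portgroup active/standby order itself is set in the VI Client under NIC Teaming, not on the command line):

```sh
# vSwitch0: Service Console + VMotion, two uplinks
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic1 vSwitch0
esxcfg-vswitch -A "VMotion" vSwitch0

# vSwitch1: all VM traffic, two uplinks
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1
esxcfg-vswitch -A "VM Network" vSwitch1
```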
I am in the same boat, except I only have 2 pNICs per server (I only have 2 available expansion slots, which are filled with HBA cards). Can someone detail how I can set this up with only two NICs? Currently I am set up this way...
vSwitch0 -> VMNetwork + Service Console -> vmnic0
vSwitch1 -> VMKernel Port (iSCSI) -> vmnic1
This is a class C network split in half to isolate iSCSI traffic on the EMC SAN from the VMware ESX (broadcast) traffic. I am hoping we can work around this issue somehow.
There is only one possible setup you can build that has full redundancy, and that is a single vSwitch with both pNICs connected. I would recommend using VLANs and dot1q tagging to isolate the traffic types yet still be able to mix them over single connections. After that you should prioritise your traffic to make sure they don't get in each other's way. If you want to hang on to your current setup but gain redundancy, you could do this:
vSwitch0 --> VMnetwork + Service-console + VMkernel --> vmnic0, vmnic1
VMnetwork portgroup --> vmnic0=active, vmnic1=standby
Service-console portgroup --> vmnic0=active, vmnic1=standby
VMkernel portgroup --> vmnic1=active, vmnic0=standby
Using the setup above, you can see that if both NICs are up, the traffic will be nicely split (the standby NICs stay idle). If either link fails, all traffic that actively went through that NIC will fail over to the standby NIC, effectively keeping all portgroups reachable.
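The shared-vSwitch layout above could be built roughly like this (a sketch under the assumption that "VM Network" and "Service Console" already live on vSwitch0; the active/standby order per portgroup is then set in the VI Client under each portgroup's NIC Teaming tab by overriding the failover order):

```sh
esxcfg-vswitch -L vmnic1 vSwitch0      # add the second uplink to vSwitch0
esxcfg-vswitch -A "VMkernel" vSwitch0  # recreate the iSCSI VMkernel portgroup here
# Then, per portgroup in the VI Client (NIC Teaming > override failover order):
#   VM Network / Service Console: vmnic0 active, vmnic1 standby
#   VMkernel:                     vmnic1 active, vmnic0 standby
esxcfg-vswitch -l                      # verify
```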
do you know if VMware is going to improve this, like adding a checkbox so you have a choice? Customers are usually annoyed even if you explain to them why it happens. There should be some option to disable this monitoring in the management interface.
Not sure if this was answered, as I recently had this error. I currently have 2 clusters in 2 cities. One does have network redundancy (3 NICs) and it resides within my VM network. I still had the error, and did a Reconfigure for HA on each host to remove the alarm. All good there!!! The other cluster continued to have the error because no redundancy existed. After much research, I found the following:
1) Go to the cluster, right-click, then click Edit Settings
2) Click VMware HA, then click Advanced Options at the bottom right
3) Add the option line "das.ignoreRedundantNetWarning" with the value "true". Then of course click OK....
4) If the error is still there, either Reconfigure for HA on each host, or remove HA and then add it back at the cluster level.
The above resolved my error on the cluster without network redundancy. This just removes the alarm... however, if and when you do add redundancy, you can either set the value to false or remove the line altogether.
Did you check to see that both nics had a non-zero linkSpeed? HA only counts those nics that are up.
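You can check the link state and speed of each NIC from the service console with `esxcfg-nics` (a real ESX 3.x command; treat the exact output columns as approximate, this is from memory):

```sh
esxcfg-nics -l
# Lists each vmnic with its driver, link state (Up/Down) and speed;
# a NIC showing Down or 0 Mbps will not be counted by HA as redundant.
```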
Btw, your second SC can be on your VMotion network, even if that is private. This second network is used by HA for redundant network communications, as it relies on the network for host failure detection. For your regular management activities, you won't be using this second network.
It can also be that when the NICs (assuming more than one) were assigned to the vSwitch with the SC (ESX) or Management Network (ESXi), one of them may have been set to Unused. To fix this, from Configuration > Networking, select the vSwitch Properties > Management Network > NIC Teaming, and move the NIC from the "unused" designation up to active or standby. Then on that host do another "Reconfigure for VMware HA".