Recently we upgraded our VirtualCenter from 2.0 Patch 2 to 2.5. After the upgrade, our clusters report the following message for every host: "Host <hostname> could not reach isolation address <service console gateway IP>". We have entered a das.isolationaddress in the Advanced Options of HA for every cluster. Before the upgrade, we didn't get this message.
If I remove the das.isolationaddress value, HA can't be enabled. Therefore I think the das.isolationaddress option does work.
My problem is twofold. On one hand I want to know for sure that each host uses the correct address for isolation detection. On the other hand I don't want the cluster to report that our hosts can't reach the IP of the service console gateway, because then I will not be alerted if another warning or error shows up.
Anyone any ideas?
Exactly the same here. I upgraded two clustered ESX 3.0.1 servers to 3.5 with all the latest patches, and when my default gateway is down, I get an error on one ESX host stating that the isolation address cannot be reached.
This does not show for the first ESX host in the cluster, and I do have a correct das.isolationaddress entry that points to the switch both ESX hosts are connected to.
So where is the second host taking this address from? And, more importantly, how do I change it permanently?
Thanks
Seb
We found the solution:
In the same screen where you enter das.isolationaddress, add the following option:
das.usedefaultisolationaddress
Set its value to "false"
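For clarity, the two Advanced Options entries together look like this (the isolation address below is just an example value; use the IP of a reliable, always-pingable device on your service console network, such as your switch):

```
das.isolationaddress = 192.168.1.254
das.usedefaultisolationaddress = false
```

With das.usedefaultisolationaddress set to false, HA stops testing the service console default gateway and only uses the address(es) you supplied, which is what makes the warning go away.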
See my last posting for the solution
Thanks, it would be nice to get these options into the help file...
Now it is like underground knowledge sharing...
By the sound of things you only have one service console configured on each host?
Yes, single console on each host
Service Console & VMotion on a single vSwitch in Failover & Load Balancing config
Seb
If you create a secondary service console on each of your ESX hosts on a different subnet/VLAN, you should find that you won't get the error. However, you will then need to undo the change mentioned a few posts above that disables the default isolation address.
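From the ESX console, adding that secondary service console looks roughly like this. This is a sketch only: the vSwitch name, port group name, VLAN ID, and IP/netmask are all example values you would replace with your own:

```
# List existing service console interfaces (one vswif = one SC)
esxcfg-vswif -l

# Create a port group for the second SC (example: vSwitch1, VLAN 20)
esxcfg-vswitch -A "Service Console 2" vSwitch1
esxcfg-vswitch -v 20 -p "Service Console 2" vSwitch1

# Add the second service console interface on that port group
esxcfg-vswif -a vswif1 -p "Service Console 2" -i 192.168.20.10 -n 255.255.255.0
```

You can also do the same thing from the VI client under Configuration - Networking by adding a Service Console port group on a vSwitch that reaches the second subnet/VLAN.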
You want service console redundancy in a HA cluster. I have two isolation addresses set: the first is the primary SC gateway, the second is the secondary service console's gateway (in my case, the gateway of the VMotion VLAN).
Why would you want a redundant service console on a single host? Did you make a redundant VMotion LAN as well, I wonder?
Why? Simple: it is redundant, as are ALL parts of my VI3:
Servers (cluster), power supplies, FC cards, FC switches
Still working on getting the SAN mirrored
Service Console/VMotion are redundant/balanced together
Why would you want a redundant service console on a single host?
I never said that.
You want a redundant service console on all hosts within the same HA enabled cluster.
No, I don't have a redundant VMotion LAN.
scerazy - If you run esxcfg-vswif -l on your ESX server, do you have one vswif or two?
ONE, single SC
Seb
I think there is some miscommunication here. We have one service console on each host. These hosts make up the HA cluster. Is this similar to your setup?
Edit: Ah, I now see where it went wrong. What I meant was: why would you want a redundant service console on each host in your HA cluster? Isn't that what HA is for, that when there's a problem with your SC, your VMs are made available on another host? That's why I asked whether you had a redundant VMotion LAN. In my opinion, it's more logical to build a redundant VMotion LAN (to ensure that your hosts can always VMotion to another host) than redundant SCs (in which case VMs will stay on a host with a defective SC...)
Message was edited by: Jeff1981
Exactly same, 2 host in a cluster, each with single SC (teamed with VMotion NIC for failover)
Seb
The reason you want a redundant service console is so that HA never kicks in unnecessarily (if there's a networking issue, for example). The last thing you want is your VMs being powered off, then powered on on another host, because of an issue with the primary service console (as an example). Whereas if you have a secondary service console on a separate VLAN, you'd have to have issues on both VLANs in this scenario before host isolation.
The above is only a networking example. If it's simple hardware failure on your ESX host..... obviously a second SC will do nothing.
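The logic described above, where a host only declares itself isolated when every configured isolation address fails to respond, can be sketched in a few lines. This is an illustrative model of the idea, not VMware's actual implementation; the addresses are made-up examples:

```python
def is_isolated(isolation_addresses, ping):
    """Return True only if *every* isolation address fails to respond.

    `ping` is any callable taking an address and returning True on reply.
    With two addresses on separate VLANs, both networks must be down
    before the host considers itself isolated.
    """
    return all(not ping(addr) for addr in isolation_addresses)

# Example: primary SC gateway down, secondary SC gateway still up
reachable = {"10.0.0.1": False, "10.0.20.1": True}
print(is_isolated(["10.0.0.1", "10.0.20.1"], reachable.get))  # False: not isolated
print(is_isolated(["10.0.0.1"], reachable.get))               # True: would trigger HA
```

This is why a second service console on its own VLAN, each with its own isolation address, makes a false HA trigger much less likely: a single gateway or VLAN outage no longer looks like isolation.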
Log on to the VI client
- Configuration - Software - DNS and Routing - Default Gateways
Make sure that the Service Console and VMkernel have the correct addresses.
That's how I fixed mine
I didn't have to do anything once I installed the patches; I just rebooted the ESX server while in maintenance mode and brought it back up. However, I am using redundant consoles on different virtual switches, but they map to the same VLAN, only with a different IP address.