brucecmc
Contributor

Manual Cluster removal

Hi folks...

Got myself in a pickle here...

I created a cluster to demo the VMotion and HA functionality. I began getting some errors, so I decided to remove the hosts from the cluster, step back, and execute a "redo".

One of the hosts houses my VirtualCenter (VC) VM. In order to remove that host from the cluster, I have to put it in maintenance mode... and of course, when I put it in maintenance mode, I lose connectivity to my VC VM...

So I got the idea of installing VC on another machine and doing it from there... of course that doesn't work, because the database that holds the VC configuration lives on the VC VM and isn't present on the new install... DOH!!!

So I think I either need to move the DB that holds the VC info from the VM to the new VC server, or remove the cluster manually.

I was going to remove the cluster through VC, but got a big warning about removing all the VMs, etc... I didn't really want to do that...

Sooooo...I'm in a pickle....

Any suggestions?

Bruce

11 Replies
letoatrads
Expert

Why not use the VI client to log directly onto your ESX server using the console IP and remove it from maintenance mode?

Once done, you should be able to restart the VC VM and re-connect to it.

Now, once that is said and done...is VMotion working for you? If yes, just VMotion the VC off whichever server you want to pull from the cluster.
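In case anyone wants to script this later, here's a rough sketch of the same idea driven through the API: connect straight to the host, take it out of maintenance mode, and power the VC VM back on. It uses the modern pyVmomi bindings rather than the VI3-era tools, and the host address, credentials, and VM name are placeholders, so treat it as an illustration only.

# Sketch only: connect directly to one ESX host (no VC needed), exit
# maintenance mode, and power the VirtualCenter VM back on.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='esx1.example.com', user='root', pwd='password',   # placeholders
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Connected directly to a host, there is exactly one HostSystem in inventory.
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]

if host.runtime.inMaintenanceMode:
    host.ExitMaintenanceMode_Task(timeout=0)   # timeout=0 means no timeout

# Find the VC VM by name (placeholder) and power it on if it isn't running.
vms = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True).view
vc_vm = next(vm for vm in vms if vm.name == 'VirtualCenter')
if vc_vm.runtime.powerState != vim.VirtualMachine.PowerState.poweredOn:
    vc_vm.PowerOnVM_Task()

Disconnect(si)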

brucecmc
Contributor

Well, actually, I'm trying to place it in maintenance mode to remove it from the cluster...

But once I place it into maintenance mode, I can't power up the VC VM in order to manage the cluster, since the cluster was created in the VC that runs on that VM.

Catch-22...

Where does VC store the information about the clusters? In a directory on the ESX host?

bruce

brucecmc
Contributor

To answer that second part... no, VMotion is not functioning at this time...

I was trying to go back to a clean slate... part of which was dissolving that cluster...

bruce

letoatrads
Expert

Right, but what is keeping you from VMotioning your VC to the host you aren't in the process of removing from the cluster?

ESX host 1 - VC + cluster

ESX host 2 - + cluster

Log in to the host and make sure it is NOT in maintenance mode so VC comes up. Remove host 2 from the cluster. Once host 2 is out of the cluster and out of maintenance mode, VMotion the VC VM over to it.

ESX host 1 - + cluster

ESX host 2 - VC

Now put host 1 in maintenance mode and remove it from the cluster. Hosts don't have to be in a cluster for VMotion to work, just for HA and DRS.
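If you'd rather drive that VMotion through the VC API than click through the client, a minimal sketch looks something like the following. It assumes pyVmomi (which postdates VI3); the VC address, credentials, VM name, and target host name are placeholders.

# Sketch only: hot-migrate (VMotion) the VC VM to another host via VirtualCenter.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host='vc.example.com', user='administrator', pwd='password',  # placeholders
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    return next(obj for obj in view.view if obj.name == name)

vc_vm = find_by_name(vim.VirtualMachine, 'VirtualCenter')   # placeholder name
target = find_by_name(vim.HostSystem, 'esx2.example.com')   # placeholder name

# A live migration leaves the disks on the shared datastore and moves only
# the running state to the target host.
WaitForTask(vc_vm.MigrateVM_Task(pool=target.parent.resourcePool, host=target,
                                 priority=vim.VirtualMachine.MovePriority.defaultPriority))
Disconnect(si)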

letoatrads
Expert

Ahhh, VMotion not working... I assume you have both hosts attached to a SAN, correct?

Same thing, different dance.

ESX host 1 - VC + cluster

ESX host 2 - + cluster

Pull host 2 out of the cluster and bring it out of maintenance mode. Connect to ESX 1 directly, shut down the VC VM, and put ESX 1 in maintenance mode.

Connect to ESX host 2 directly, browse the datastore, add the VC VM to inventory, start it up, and remove ESX host 1 from the cluster.

Poor man's VMotion: a cold migration sans VC.
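For the record, the same "poor man's VMotion" can be scripted against the second host directly. A hedged sketch with pyVmomi is below; the host address, credentials, datastore name, and .vmx path are all placeholders.

# Sketch only: "poor man's VMotion" - connect directly to the second host,
# register the VC VM's existing .vmx from the shared datastore, power it on.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host='esx2.example.com', user='root', pwd='password',   # placeholders
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]
datacenter = content.rootFolder.childEntity[0]   # a standalone host has exactly one
pool = host.parent.resourcePool

# Register the existing .vmx (placeholder path) into this host's inventory.
register = datacenter.vmFolder.RegisterVM_Task(
    path='[shared-vmfs] VirtualCenter/VirtualCenter.vmx',
    asTemplate=False, pool=pool, host=host)
WaitForTask(register)

vc_vm = register.info.result
WaitForTask(vc_vm.PowerOnVM_Task())
Disconnect(si)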

brucecmc
Contributor

Understood...

Tried that... but VMotion wasn't working properly... errors about an inaccessible floppy, NIC, disks, etc...

Not exactly sure what that was... but I thought I could work around it, so I haven't pursued the VMotion problem yet...

Figured I'd get the cluster removed, get myself back to ground zero, and start over...

While I'm working on this... I was under the impression that once I presented the shared storage to the ESX hosts, I would see both ESX datastores in the storage applet of VC, in the main window under Configuration > Storage... I do not... If I go to "Add Storage", the shared storage is there, but of course it gives me a big alert about wiping out the disk config... so I didn't want to do that either...

Oh, and by the way, I'm a newbie... AND all of this is dev stuff, so I'm not concerned with having things shut down or accidentally whacking something...

bruce

letoatrads
Expert

OK, well that shared datastore hitch is going to be at the core of your problems, I think. What type of storage are you using (FC SAN, iSCSI target, etc.)? If we can get that resolved, I think a lot of other pieces will fall into place for you.

brucecmc
Contributor

Yeah... it's an FC SAN... a CLARiiON CX400 with SATA drives...

brucecmc
Contributor

Alright...

Here's what I have.

Storage presented to both ESX hosts (esx1 / esx2).

When I go into the configuration area and rescan the storage on esx1, the datastore from esx2 never shows up in the configuration area. I thought it would.

If I click on the "Add Storage" option, add a disk/LUN, and select esx2's storage, there is a warning:

"All disk layout will be lost. All file systems and data will be lost permanently."

Same situation in reverse (esx2 using esx1's storage)...

Of course, that doesn't sound good...

My VMotion network is set up, the VMkernel is configured, VMotion is enabled, and a crossover cable runs between the GigE NICs on esx1 and esx2.

When I elect to migrate from esx2 to esx1, I get the following error(s):

"Unable to migrate from X to X. Unable to access the virtual machine configuration. Unable to access file [esx2] /name/xxxxx.vswp..."

Obviously this is due to the storage not being present, but I'm not sure what is preventing the storage from being visible?? The hosts know the disks are there, but don't seem to be able to use them to go back and forth??? Have I not turned something on???

bruce

letoatrads
Expert

http://www.vmware.com/pdf/vi3_301_201_san_cfg.pdf

Page 61 in case you haven't seen it.

I apologize, as my FC experience is thin (I've mostly used iSCSI with VI3). I think you probably have a zoning issue, and you need to make sure you have storage groups created that put all the LUNs and servers in one group. Once that's done and the LUNs are attached, you should be able to just rescan, and a VMFS partition formatted on one server will show up as storage on the other.
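Once the zoning and storage groups are sorted out on the array, the rescan itself can also be kicked off through the API. A rough pyVmomi sketch (host address and credentials are placeholders):

# Sketch only: rescan HBAs and VMFS volumes on a host so a shared datastore
# formatted on the other host shows up, then list what the host can see.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='esx1.example.com', user='root', pwd='password',   # placeholders
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]

storage = host.configManager.storageSystem
storage.RescanAllHba()          # pick up newly presented LUNs
storage.RescanVmfs()            # pick up VMFS volumes formatted elsewhere
storage.RefreshStorageSystem()  # refresh the storage views

for ds in host.datastore:
    print(ds.name, ds.summary.capacity)

Disconnect(si)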

brucecmc
Contributor

Dude, thanks for the help...

But what it turned out to be was the LVM.DisallowSnapshotLun setting being enabled rather than turned off...

Once I changed that setting (Configuration > Advanced Settings > LVM), all was good...
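For completeness, that advanced setting can also be flipped through the API instead of the VI client dialog. A hedged sketch with pyVmomi follows; the option key is taken from this thread (LVM.DisallowSnapshotLun, which applies to ESX 3.x), and the host address and credentials are placeholders.

# Sketch only: read and change a host's LVM advanced setting via the API.
# Option key per this thread (ESX 3.x); the value type must match the
# option's declared type on the host.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='esx1.example.com', user='root', pwd='password',   # placeholders
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]

adv = host.configManager.advancedOption

# Show the current value first.
for opt in adv.QueryOptions('LVM.DisallowSnapshotLun'):
    print(opt.key, '=', opt.value)

# 0 = allow access to LUNs the host has flagged as snapshots.
adv.UpdateOptions(changedValue=[
    vim.option.OptionValue(key='LVM.DisallowSnapshotLun', value=0)])

Disconnect(si)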
