tharpy's Posts

I had the exact same problems noted here by coax.....trying to use the Host Update Utility to patch (or update; neither worked)...build 181792....so the answer is "reinstall ESXi"? Can you tell the company's now run by an ex-Microsoft dude?
Standalone ESXi (installable) host (build 181792)....host was shut down hard due to a power failure. It came up with "no operating system", so I performed a repair using a 164009 install disk, then used the Host Update Utility to patch to the current level. On the console, I entered a static IP, DNS, etc. Everything looked good at this point. Connected using the client, entered my license key, registered my VMs....all OK.

Reboot the host, and the system reverts to defaults (DHCP, no DNS, no time, no license, no VMs, etc.). Besides this, the only other symptom is that I cannot see the local VMFS (on the boot drive).

The only thing I can think of is that after the repair, the system mentioned something about being in "audit mode". Could that have something to do with this? Any ideas before I rebuild this from scratch? Anyone else tried a "repair"? Usually we just rebuild a sick host, but I was kinda hoping to recover what I could.... Thanks
Can a host profile created from an ESXi reference host be applied to an ESX (full version) host?
Running release code (VCMS build 162856 and ESX build 164009).... Cluster warning icon: in the cluster Summary tab view, the system reports "configuration issues", saying that all three of our vSphere hosts lack management network redundancy. However, all three hosts have multiple uplinks on the vSwitch that the management network is on. AND if you highlight a host and look at its own Summary tab, no such warning is evident. I removed all the hosts and re-added them, and the problem went away....for a while....now it's back. Sometimes the system reports that only one of the hosts has no redundancy when, in fact, all are configured exactly the same.
Just want to make it clear to our reading audience that that was not my quote. As I found it contrary to the best practices we employ in our service delivery, I was looking for information that would either support or refute what's being told to our customers. Thanks for responding. Michael
The following quote, re: ESX host network configuration, came from one of our IBM technical pre-sales reps in the Vancouver area.... "IBM and VMware have long ago jointly published a best practice of 2 Ethernet interfaces (1Gb/s each) teamed for redundancy and bandwidth - segregated into 3 VLANs - to service the three channels (management, vmotion, and vmnetwork)"... Does anyone have a link to this document? I can't seem to find it, and I'd really like to read it. Thanks.
We had a similar problem last night....three new (identical) HP hosts, installed with 3.5 Update 1 (build 64607a). Using VC 2.5 and Update Manager to patch after the initial build, host #3 remediated fine and is now "compliant". Host #1 went into maintenance mode and appeared to install several patches before hanging at 85%. Had to reboot the host before it would reconnect to VCMS; UM showed 17 patches not compliant....then I removed the host from the cluster, attached the baselines to the individual host, and remediated again. A few more patches installed, then the remediation stalled again....now showing only 7 non-critical patches not compliant. Rebooted the machine and re-inserted it into the cluster. Ran the remediation on the cluster again, and host #2 stopped at 81% with an error message in VCMS: "Unable to access the specified host". In both cases where UM appeared to have stalled, we could still ping the server by name and IP....
Found another solution for this problem. At our most recent deployment, the vMotion VLAN had been created on a Cisco 3750, and although we don't know if the network dude did this knowingly or not, the bottom line was that a Cisco management interface had been created with an IP of X.X.X.1: the same address as the vmkernel address assigned to the first host. Found it by clicking on the bubble next to the virtual switch and noticing that the Cisco management address showed up on the vmkernel switch of all three ESX hosts....that got us looking closer.

We're using ESX 3.5 and VC 2.5; we originally thought patching was to blame, but we were able to duplicate it on a non-patched host. Changed the IP address on host1, and vMotion worked cleanly from then on. It's important to note that removing the vSwitch and reinstalling it seemed to fix the problem in the short term (i.e. vMotion would work)....but if left alone without traffic, the problem would come back. (Thought I'd add this as I read other posts with a similar gist.)

In retrospect, shutting down the bad interface and vmkping'ing it should have uncovered the duplicate IP on the vMotion subnet....credit goes to VMware support for chasing this down. Cheers!
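The retrospective check can be sketched as a couple of service-console commands. This is a sketch from memory of the ESX 3.x toolset; the X.X.X.1 address is a placeholder, so verify against your own build before relying on it.

```shell
# List the vmkernel NICs and their IPs so you know which address
# this host believes it owns on the vMotion subnet:
esxcfg-vmknic -l

# With this host's own vmkernel interface shut down (or its uplink
# pulled), ping the address over the vmkernel stack:
vmkping X.X.X.1

# A reply while this host's interface is down means some OTHER device
# (here, the Cisco management interface) holds the same IP.
```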
...and the answer is.... ESX hosts running in Workstation 6 VMs. The second host was originally created as a linked clone of the first.....after I had modified the vmx file and was still having other problems, I rebuilt the VM...but kept the vmx (cuz I was too lazy to re-do the mods per the Xtravirt doc), keeping the pointer to the disk file (which was based on a linked clone). What got me looking there was that when I ran the vdf -h command on both hosts, both showed the same UID for their local VMFS. Blew ESX2 away, built it from scratch, and all works perfectly ('cept for the multi-monitor glitch in the VI Client, which, by the way, I've since upgraded to the most recent release). Live and learn.
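For anyone hitting the same thing, the duplicate-identity check can be sketched like this (from memory of the ESX 3.x service console; the volume labels are examples, not exact output):

```shell
# /vmfs/volumes holds each volume under its UUID, with the friendly
# datastore labels as symlinks to those UUID directories. Run this on
# each host and compare:
ls -l /vmfs/volumes

# The command from the post: per-volume usage including the UID.
vdf -h

# If "local1" on ESX1 and "local2" on ESX2 resolve to the SAME UUID,
# the two hosts are presenting clones of the same filesystem, which
# is what confuses VirtualCenter.
```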
Get this.....I selected ESX2, Configuration, Storage and renamed local1 back to local2, as it should be. Then, through the SC, I created a text file on that VMFS named ip.txt. Then, back in VCMS, I selected ESX1, Configuration, Storage. The local volume is labeled "local2", and if I browse it in VC, I can see the file I put on ESX2's local disk. Log into the SC on ESX1 and the local volume is, in fact, renamed to local2, but the disk is empty....so it appears that VCMS is only actually seeing the local disk on ESX2, regardless of which host is selected in the inventory view.

(Unrelated side note) VI Client right-click on an object doesn't work when running the VI Client on monitor two of a multi-monitor setup....Vista 64. That one gave me fits for a while until I figured it out.
Rebuilt the database (and VCMS) from scratch. Same thing happened: the second host's local VMFS gets named the same as the first host's local VMFS.
In our lab, we are running ESX 3.0.2, 52542 on two hosts and VCMS 2.0.2, 50618 using MSDE.

I add the first ESX host (ESX1) to cluster1 and it comes in fine, with its local VMFS named "local1" and an iSCSI VMFS named "vmfs1". The view from the VI Client through VCMS shows the names OK. I create a second host (ESX2, not yet connected to VCMS); its local VMFS is named "local2" and it can also see the iSCSI VMFS named "vmfs1". This is verified via the console and a VI Client connected to ESX2 directly.

I add ESX2 to cluster1 in VCMS, and its local VMFS gets renamed to the same name as ESX1's local VMFS, i.e. "local1". A subsequent VI Client direct or console session verifies that the label has indeed been changed on ESX2. VC datastores shows a total of only two volumes: vmfs1 and local1. If I change the name of ESX2's local VMFS while attached to VCMS, the local VMFS on both hosts is changed to the new name.

While I rebuild my VCMS database from scratch, does anyone have any ideas why this is happening? Thanks
Here's an update. The problem is caused by snapshots left running for several weeks. So, we go into the Snapshot Manager within VC for that VM and delete the snapshot. The bummer is, although the Snapshot Manager shows no snapshots left, the delta file does not go away (as it should), and it just keeps growing.... So now I've got the original-state vmdk and the delta.vmdk, and the delta is still growing every day. We ran a command-line hassnapshot command and the ESX host doesn't see that the VM has a snapshot.

Question: will ESX let us cold-clone a VM that has both vmdk and delta.vmdk files, and if it does, will it copy both, leaving us in the same situation? Remember, ESX and VC don't see that there's a snapshot. OR is it that we just didn't wait long enough for the delta to commit? Maybe we'll come to work tomorrow and see it all back together? The file server has total disk approaching 500G.

Current environment: ESX 3.0.1 patched, latest VC w/patches, IBM 3650 and IBM DS-4700 storage. Any insight?
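For the record, the two quick checks can be sketched like this (ESX 3.0.x service console, from memory; the datastore path and VM name are placeholders for your own):

```shell
# Ask the host whether it believes the VM has a snapshot:
vmware-cmd /vmfs/volumes/local1/fileserver/fileserver.vmx hassnapshot

# Then look at the disk files directly. A *-delta.vmdk that is large
# and still growing after Snapshot Manager shows no snapshots means
# the redo log was never committed back into the base disk:
ls -lh /vmfs/volumes/local1/fileserver/*.vmdk
```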
Actually, it wasn't the log files that were the problem...I just got a screenshot of their storage browser...they're running snapshots of their file server and have not been committing the snaps. That'll do it. Thanks
I have a customer who is seeing his VMFS fill up with log and dump files (in the VM's directory on VMFS3). It appears that this goes on until the VM eventually will not start because of the lack of space on the VMFS. A week ago, we cleared out almost 20G of files and moved some VMs to make space....this week, it happened again. The guest is a Windows 2003 Standard VM running as a file server. Just wondering if anyone's seen something like this before.
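For anyone else chasing this, a quick way to see which files are eating the datastore is a size-filtered find from the service console. A minimal sketch: the 100MB threshold and the example path are assumptions, and the name patterns only cover the usual log/core/dump suspects.

```shell
# List files over 100MB matching common VMware log/dump names,
# with sizes, under the given directory (a VM's folder on the VMFS).
find_bloat() {
  find "$1" -type f \
    \( -name '*.log' -o -name '*core*' -o -name '*.dmp' \) \
    -size +100M -exec ls -lh {} \;
}

# Example (hypothetical datastore path):
# find_bloat /vmfs/volumes/vmfs1/fileserver
```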