SomeClown's Posts

One other interesting thing: I lost the ability to manually migrate (High or Low Availability) virtual machines once I did this VC upgrade to 2.5. But... if I unconfigure HA, the errors referenced in this thread go away and I get back my ability to migrate. It seems as if 2.5 is locking up my migration ability when I don't have redundancy in management.
Ahh... makes sense I suppose. I'll have to look at my structure and see how much I want to do that. As it stands right now I have the 1st Service Console on a dedicated NIC which is plugged into a switchport assigned to the appropriate VLAN, and the VMkernel is on a different dedicated NIC which is on a different VLAN. The rest of the NICs are assigned to a group and are connected to switchports in trunking mode so that I can dynamically assign networks. I've attached a screenshot for anyone interested. At least now I know why that message is there... though I wasn't immediately thinking MPIO when I heard redundancy... I just figured it wanted another console on a different NIC. Thanks!
Just upgraded my VC to 2.5 and have this error. Actually, to be fair I had a bunch of DRS/HA errors and a host that wouldn't migrate or do anything. So I unconfigured DRS and HA, then reconfigured, and life is good except for this annoying exclamation mark on my cluster. I read somewhere else that this was due to a lack of secondary Service Consoles, so I added those (on different physical NICs) but still have the error. Thoughts?
I'm currently getting this error after having just upgraded to VC 2.5 and just to get rid of the nag, I added a second Service Console to each box and the error is still there. I even restarted VC and can't make it go away. Thoughts?
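For anyone else chasing the same nag, adding a redundant Service Console from the ESX 3.x command line looks roughly like this. This is a sketch only: the vSwitch name, vswif number, and addresses below are made-up examples, and your NIC/vSwitch layout will differ.

```shell
# Sketch: add a second Service Console port on an existing vSwitch.
# "vSwitch1", "vswif1", and the IP/netmask are hypothetical placeholders.
esxcfg-vswitch -A "Service Console 2" vSwitch1    # create the port group
esxcfg-vswif -a vswif1 -p "Service Console 2" -i 192.168.10.5 -n 255.255.255.0
esxcfg-vswif -l                                   # list console interfaces to verify
```

Note that (as the posts in this thread suggest) the warning may persist until HA is reconfigured on the cluster, since the HA agent only re-evaluates the management network when it is set up.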
Incidentally, I also have my VMkernel on the same VLAN as my iSCSI SAN network. I don't see too much of a problem with this, but I'm floating it out here while I'm at it. The switch fabric is all Cisco 4503s (nine of them) in a fiber full mesh. The console is on the same VLAN as the servers. And, finally, the vSwitch handling all of the VMs is a bundle of five gigabit NICs, all set to 802.1Q trunking so I can use any VLAN I want inside the VMs.
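For reference, the trunked-uplink setup described above can be sketched with the ESX 3.x `esxcfg-vswitch` tool. The vSwitch name, NIC names, and VLAN ID here are illustrative, not taken from the poster's configuration.

```shell
# Sketch: a vSwitch uplinked to trunk-mode switchports, with a tagged port group.
# "vSwitch2", "vmnic2"/"vmnic3", and VLAN 100 are hypothetical.
esxcfg-vswitch -a vSwitch2                    # create the vSwitch
esxcfg-vswitch -L vmnic2 vSwitch2             # attach uplinks (switchports in 802.1Q trunk mode)
esxcfg-vswitch -L vmnic3 vSwitch2
esxcfg-vswitch -A "VLAN100" vSwitch2          # port group for guest traffic
esxcfg-vswitch -v 100 -p "VLAN100" vSwitch2   # tag that port group with VLAN 100
esxcfg-vswitch -l                             # verify the layout
```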
I'm not entirely certain how or why this happened... though I suppose, since one of the ESX servers crashed out with a processor failure, it may just be that something got corrupted somewhere while still showing as configured correctly. What I did was delete all of the VMkernel config, the vSwitch it was on, etc., and recreate it on both servers. Now it all works again.
Two things: (1) In answer to your question, no, I cannot vmkping <ip_of_other_hosts_vmkernel_IP>. I can run vmkping -D and come out all right, however. (2) The very odd thing is this: nothing has changed. And by nothing, I mean we're SAN-booting using QLogic HBAs off of an EqualLogic SAN... so the server image, configuration, etc., came back up EXACTLY as it was configured before the crash.
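Since vmkping to the peer host fails while the local stack tests clean, one way to narrow it down from the Service Console is to walk the VMkernel path end to end. A sketch for ESX 3.x; the peer IP below is a placeholder, not a value from this thread.

```shell
# Sketch: checks to isolate a broken VMkernel path between two ESX hosts.
vmkping 10.0.0.12     # hypothetical peer VMkernel IP; should answer if the L2/L3 path is good
esxcfg-vmknic -l      # confirm the local VMkernel NIC, IP, and netmask are what you expect
esxcfg-vswitch -l     # confirm the port group's VLAN ID and which uplink NIC it uses
esxcfg-route          # VMkernel default gateway, if the hosts sit on different subnets
```

If the local config all checks out (as it eventually did here), the remaining suspects are the physical switchport config and stale state in the VMkernel networking itself, which is consistent with the fix of deleting and recreating the VMkernel vSwitch.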
Greetings, I'm hoping someone here can help me, because I can't seem to figure this one out. I've had VMotion, HA, DRS, etc. all working just beautifully for a while now. On Sunday, I had one host machine go down with a processor failure. The system did its job, migrated the VMs to the other host, and downtime was minimal. Now that I have my second server rebuilt, however, I can't migrate any of the servers back. So, I have all VMs sitting on one server and the other one empty. I get the timeout error at 10% that everyone seems to get when VMotion is set up incorrectly, but, again, this was all working prior to the server crash. I'm looking at shutting some machines down now and bringing them back up on the rebuilt server, just to distribute some load, but that's not my preferred choice since some of these VMs are Oracle database servers. Thoughts?
Hey everyone, I'm hoping someone can help here, because I'm not entirely sure where to go with this. I recently snapshotted all of my VMs on my Infrastructure 3 system (twin ESX hosts, VirtualCenter, VMotion, etc.), and then one of our Oracle DBAs started copying directory structures for some maintenance... when I came in this morning, all of the VMs he was working on were down with error messages like so: "msg.hbacommon.outofspace"

What I did to get around the problem on two of the VMs (one Windows, one Linux) was to add an extent from VirtualCenter (after expanding the storage pool on the SAN), but that only fixed those two machines. The third machine (SLES with Oracle) still had the problem. Finally I just reverted to the last known good snapshot; the reiserfs freaked out but came up after some time, and now I'm whole again.

So, based upon all of this, my questions are below... and I sincerely appreciate anyone who can help!

(1) How do I avoid this problem in the future?

(2) When I spec out a VM on a storage pool, what is the relationship between disk size, memory size, and "extra"? In other words, given a 60GB space on my SAN, should I format the VM for 50GB, leaving 2GB for memory and 8GB for other stuff? How much room does the "other stuff" need?

Thanks in advance! Oh, and I have been poring over all of the PDF files on the VMware site, as well as for my SAN, but haven't come across anything definitive as regards best practices for question 2 above.
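On question (2), a rough back-of-the-envelope: besides the virtual disk itself, the VMFS volume needs room for a swap file roughly equal to the VM's configured RAM (when no memory reservation is set), a small amount for config/log files, and, crucially, headroom for snapshot delta files, which grow as the guest writes while a snapshot is open. A sketch with made-up figures, assuming one VM on a 60GB volume (the snapshot headroom is an arbitrary assumption, not a VMware recommendation):

```shell
# Back-of-the-envelope datastore sizing; all figures are assumptions.
DATASTORE_GB=60
VM_RAM_GB=2          # swap file ~= configured RAM when no memory reservation is set
SNAPSHOT_GB=10       # headroom for snapshot deltas, which grow with guest writes
MISC_GB=1            # .vmx, logs, NVRAM, and other small files
VMDK_GB=$((DATASTORE_GB - VM_RAM_GB - SNAPSHOT_GB - MISC_GB))
echo "Max comfortable VMDK size: ${VMDK_GB} GB"
```

With these arbitrary figures, a 60GB volume would comfortably hold a disk of about 47GB; the real variable is how long snapshots are left open and how write-heavy the guests are while they exist, which is exactly what bit the Oracle VMs above.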