I just installed VMware Workstation 10.0.1 with the intention of standing up a new CentOS 6.5 x64 VM.
Every time, after the install completes and the VM reboots for the first time, it immediately crashes with the error:
"Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)"
It appears to be related to the issue in the thread "VMware Workstation 10.0.1 fails after RedHat-6 software update";
however, the workarounds in that thread don't work on a fresh install.
Any ideas appreciated.
Have you enabled hardware VT in the host BIOS and in the guest settings?
Yes, it is enabled in the BIOS; it wasn't initially, and VMware couldn't create the guest until I enabled it.
I played around in the guest settings (under CPU) to no avail.
My CentOS 6.5 x86_64 guest works very well under WS 10.0.1 on host Windows 7.
Well that sucks... for me
I've tried on a Win 8 and a Win 7 build with the same results, though CentOS 6.4 x64 works fine on both.
I don't know whether it could be relevant that I'm creating the split VMDK files on a 4 TB drive attached via USB.
The split VMDK should be no problem. Independent of the speed of your USB device, the problem seems to be that CentOS 6.5 was released on February 26th, much later than WS 10.0.1 (October 24th, 2013). I think VMware will fix this problem in the next WS release.
I have the same experience, running a clean install of Workstation 10.0.2 on Win7 64-bit. I've tried installing CentOS 6.3 and 6.5 x64, and both behave the same: after the install, the first boot into the OS succeeds; then on reboot, the kernel panic.
Has anyone been able to figure this out?
Same issue on Workstation 10.0.1, 10.0.2, and 10.0.3, and with CentOS 6.2, 6.5, and 7.
Frustrating.
Hi,
The problem that I see is that the kernel in the VM is not seeing the hard disk from which it booted.
It could be for several reasons related to the configured disk controller and/or the VMware Tools AFTER updating the kernel (usually by an automatic update).
ALWAYS try to run "vmware-config.pl --default" after a kernel upgrade. It doesn't hurt, and most of the time it prevents errors like this.
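A minimal sketch of that step, run inside the guest as root (assuming the script is on the PATH; the script name and location can differ between Tools versions):

```shell
# After a kernel upgrade inside the guest, re-run the VMware config
# script (if it exists) so the kernel modules are rebuilt for the
# new kernel before the next reboot.
if command -v vmware-config.pl >/dev/null 2>&1; then
    sudo vmware-config.pl --default
else
    echo "vmware-config.pl not found in PATH" >&2
fi
```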
I think that a failsafe boot is choosing the disk as SCSI with an LSI Logic controller; then it would always boot, because those drivers are always in the kernel.
Could you check in your VM configuration/.vmx file what kind of disk controller the VM is using?
I got this:
scsi0.present = "TRUE"
scsi0.virtualDev = "lsilogic"
scsi0:0.present = "TRUE"
scsi0:0.fileName = "CentOS-01.vmdk"
and I don't remember having any problems installing/updating my VMs.
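To pull the controller lines above out of a .vmx quickly, something like this works (the filename is just an example; substitute your own VM's .vmx):

```shell
# List the SCSI controller and disk entries from a VM's .vmx file.
# "CentOS-01.vmx" is an example name; use your VM's actual file.
grep -i '^scsi' CentOS-01.vmx
```

If the output shows `scsi0.virtualDev = "lsilogic"`, the guest is on the LSI Logic controller described above.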
Regards,
Luis.