VMware Cloud Community
radman
Enthusiast

Converted Solaris image panics "cannot mount root path"

I'm migrating some machines from VMware Server to ESX. I converted a Solaris 10 machine that had an IDE disk, using Converter.

The failsafe kernel boots fine. But, when I try to boot the real kernel, it immediately panics with "cannot mount root path". I took a look (under the failsafe kernel) at the vfstab, and it looked funky (the / device name was something like /dev/dsk/c1td1s0), so I changed it to match the name mounted for root under the failsafe kernel: /dev/dsk/c1t0d0s0, and even rebuilt the boot archive, but the same panic recurs. I also tried booting with -r, but that didn't help either.

There was some kind of message during the conversion saying the image couldn't be customized, I think, but I'm afraid I didn't write it down. The conversion itself reported that it completed successfully.

Anybody have any idea what might be going wrong here? The Hard Disk is "SCSI(0:0) Hard Disk 1" according to Settings. Could the device naming be different between the failsafe and regular kernels?

It's panicking when vfs_mountroot calls rootconf.

Any tips much appreciated!

6 Replies
radman
Enthusiast

I figured this out. Searching locally and across the internet turned up little information, but between what I did find and some experimentation, the following procedure worked for migrating a Solaris 10 Update 3 IDE-based VM to ESX. I'm posting it here so others don't have to go through the same pain I did (maybe a KB article would be appropriate?).

1) Run VMware Converter

2) Boot the Failsafe Kernel under ESX

3) Allow it to mount the root disk (under /a, the default)

4) Run "mount" and note the /devices/... path used for the root disk.

In my case this was /devices/pci@0,0/pci1000,30@10/sd@0,0:a

I don't know if this will vary; it may always be this path after Converter runs.

I'll use XXX to represent your pathname after /devices/ below, just in case (e.g. XXX=/pci@0,0/pci1000,30@10/sd@0,0:a in my case).
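For reference, the relevant line of the mount output on my system looked roughly like this (mount options and timestamp trimmed); you're looking for the device shown for /a:

/a on /devices/pci@0,0/pci1000,30@10/sd@0,0:a read/write/setuid/...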

5) TERM=sun-color; export TERM

This will make using vi easier.

6) vi /a/boot/solaris/bootenv.rc

Update the "bootpath" property to the path noted in step 4:

setprop bootpath XXX

in my case it was:

setprop bootpath /pci@0,0/pci1000,30@10/sd@0,0:a

7) cp /etc/path_to_inst /a/etc/path_to_inst

8) rm /a/etc/devices/*

9) Update the boot archive:

bootadm update-archive -R /a

10) Edit /etc/vfstab

Find the line for your root disk. Replace the /dev/dsk/??? path with /devices/XXX, and replace the /dev/rdsk/??? path with /devices/XXX,raw (e.g. /devices/pci@0,0/pci1000,30@10/sd@0,0:a,raw)

This is necessary because, before device reconfiguration occurs, the kernel reads /etc/vfstab to see whether the / filesystem needs checking, but until reconfiguration happens there is no /dev entry for the / filesystem. So you need to use the absolute /devices path until reconfiguration can be performed and the /dev alias path is created.
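Using my path as an example (substitute your own XXX), the edited root line in the vfstab ends up looking something like:

/devices/pci@0,0/pci1000,30@10/sd@0,0:a  /devices/pci@0,0/pci1000,30@10/sd@0,0:a,raw  /  ufs  1  no  -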

11) Reboot

12) At the GRUB screen where you choose the kernel you want, select the normal kernel, but instead of booting it press 'e' to edit it.

13) Move down to the 2nd (kernel) line, and type 'e' to edit it

Append "-r -s" to force device reconfiguration and boot single-user, then press Return to go back to the previous screen.
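On my VM the edited kernel line ended up looking roughly like this (the exact multiboot line depends on your install; only the appended flags matter):

kernel /platform/i86pc/multiboot -r -s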

14) Type 'b' to boot this temporarily-modified configuration

15) When you enter single-user, do an ls -l /dev/dsk to find the c?txdxsx disk that corresponds to the /devices path you are using. It's probably c2t0d0s0
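For example, the entry on my system looked something like this (owner/size/date columns trimmed; your controller number may differ):

lrwxrwxrwx ... c2t0d0s0 -> ../../devices/pci@0,0/pci1000,30@10/sd@0,0:a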

16) Touch up /etc/vfstab to use the /dev/* paths which correspond to your disk (don't forget to update the swap entry as well, using the same txdxsx with the correct c? value for your disk).
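With the c2t0d0 naming from my system (yours may differ, and I'm assuming swap is on slice 1, so check your original entries), the relevant lines end up something like:

/dev/dsk/c2t0d0s0  /dev/rdsk/c2t0d0s0  /     ufs   1  no  -
/dev/dsk/c2t0d0s1  -                   -     swap  -  no  -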

17) Type ^D to go to multi-user

Done! Be careful typing those long /devices path names!

tom_elder
Contributor

Step 10 should read /a/etc/vfstab.

but thanks for figuring the rest out; very helpful

Tom

publish_or_peri
Contributor

That was an extremely helpful starting point. Though most of it was relevant, the instructions didn't work for me when migrating my Solaris 10 Update 4 (with Trusted Extensions) IDE-based VM from Fusion 1.1.1 to ESX 3.5. It took quite a bit of research and experimentation to make it work. The following is a combination of your recommendations and the changes and clarifications I added to get it to work for my VM migration from Fusion to ESX.

acruizu
Enthusiast

Thank you very much. Your post was very helpful.

MPowerLabs
Contributor

The above posts were very helpful. This is what I ended up doing when I had a similar problem after updating ESX from 3.0 to 3.5:

  1. boot to failsafe

  2. add swap space --- swap -a /dev/dsk/cxtxdxsx

  3. cp /etc/path_to_inst /a/etc/path_to_inst

  4. edit /a/boot/solaris/bootenv.rc with the proper boot device

  5. rm /a/etc/devices/*

  6. rm /a/dev/rdsk/c*

  7. rm /a/dev/dsk/c*

  8. rm /a/dev/cfg/c*

  9. devfsadm -v -r /a

  10. reboot with the -arvs boot flags (see the example after this list)
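For step 10, I believe the boot flags can be passed through the reboot command like this, but double-check the syntax on your release:

reboot -- -arvs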

If the reboot fails, go back and check bootenv.rc. I had a problem with the file reverting back to the original boot device.
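From the failsafe boot you can quickly confirm the setting stuck with something like:

grep bootpath /a/boot/solaris/bootenv.rc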

ebonick
Contributor

I have a similar issue where, after a short while, the VM goes into a reboot loop, but it boots into failsafe fine. I tried all of the above steps, but nothing helped. I do see a 'W' briefly before the VM reboots. I have never used Solaris before and am in charge of keeping this VM running, but the reboot loop is stumping me. Anyone have any suggestions?
