First of all, I'm sorry if I'm not in the right area.
To explain the context: I inherited an old OVH dedicated server running ESXi 5 from a previous tech here.
Everything was fine and I had access to it, until one hard drive crashed. It was the second one (sdb), so the server was still up; it was only missing one datastore.
After I opened a support ticket at OVH, the failed hard drive was replaced, and since then I can't boot into ESXi.
According to support, the message "select a proper boot device" shows at startup, so they keep saying the boot partition must have been on the failed drive. That's wrong, both because the server was still up even with that drive dead, and because parted finds the boot partition on sda when we boot into the rescue system:
Model: ATA Hitachi HDS72302 (scsi)
Disk /dev/sda: 2000GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt

Number  Start   End     Size    File system  Flags
 1      32.8kB  4194kB  4162kB  fat16        boot, esp
 5      4211kB  266MB   262MB   fat16        msftdata
 6      266MB   528MB   262MB   fat16        msftdata
 7      528MB   644MB   115MB
 8      644MB   944MB   300MB   fat16        msftdata
 2      944MB   5238MB  4294MB  fat16        msftdata
 3      5238MB  2000GB  1995GB
But they keep telling me that fdisk does not detect a boot partition... which is... well, OK guys! ^^
I don't have a KVM on that server (and of course no physical access), so I asked them to check the BIOS to see whether it still boots from sda and not sdb, but they keep repeating that the boot partition must have been on the replaced hard drive...
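If the rescue system itself was booted in UEFI mode, you may be able to inspect the firmware boot entries yourself, without waiting on OVH. A rough sketch, assuming the `efibootmgr` tool is available in the rescue image (whether the firmware actually boots via EFI here is an assumption):

```shell
# Only works when the rescue OS itself was booted via UEFI;
# otherwise the EFI variables are not exposed to Linux.
if [ -d /sys/firmware/efi ]; then
    # Print the boot order and every boot entry, with device paths
    efibootmgr -v
else
    echo "Rescue system booted in legacy BIOS mode - EFI variables unavailable"
fi
```

If an entry points at the replaced disk's device path, that could explain the "select a proper boot device" message.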
If you have any idea to fix this, it would be much appreciated !
From a remote SSH console, as root:
To find your disk adapter, run one of the following commands:
# esxcli storage core adapter list
# esxcfg-scsidevs -a
To check your VMware build and version, run:
# vmware -vl
# esxcli system version get
Post your results in the thread.
First of all, thank you for your reply, but unfortunately I can't get the ESXi CLI to work, because I can only boot the server into a Linux rescue OS...
If I let the server boot normally, I have no access at all anymore.
Your story is very strange. If ESXi was running and a disk failure caused the loss of access to a datastore, it is very likely that the datastore was created on that failed disk or on a failed disk array.
To understand what happened, we need to know how many physical disks the server has or had.
Was any kind of array configured with the server's disks?
Does the server have only internal disks, or does it also have one or several LUNs assigned from external storage?
You should also check the server's hardware logs to try to find out what happened.
One last test you could do: boot the server from a VMware vSphere (ESXi) installation ISO, preferably the same version that was installed, and run the installer up to the disk-selection screen. There you can verify whether the disks or volumes shown as available for installation contain VMFS partitions:
disks with VMFS partitions are marked with a *.
It will show something like this; select the disk or volume and press F11 to see the partition details.
If none of the disks or volumes carry the * mark, there are no VMFS partitions, and therefore no VMware data.
Do not continue with the installation; it is only a test.
parted shows somewhat unexpected filesystems.
Can you please use gdisk instead ?
The disk you show seems to be a disk with an esxi-installation plus a 2 TB datastore.
Can you read the datastore with Linux ?
Hello to both of you,
Once again, thanx for your answers.
I can confirm that there are two physical disks, 2TB each: sda, with the system partitions and a 1.8TB datastore, and sdb, which has been replaced and is not yet partitioned or formatted, because I initially planned to do that with vSphere.
Enrique, maybe I can try to boot from an installation disc, but I have nothing ready for that and I'm not even sure I can do it without a KVM...
But I'm sure the partitions on sda are fine, because I already succeeded in mounting the EFI one and it was OK.
So Ulli, the answer is yes, and it's exactly that: the disk I showed (sda) holds the ESXi installation plus a datastore, which I can also mount perfectly well with vmfs-fuse.
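For anyone following along, mounting the datastore read-only from the rescue system looks roughly like this (a sketch; `/dev/sda3` is assumed to be the large third partition from the listings in this thread, and the mount point is arbitrary):

```shell
# vmfs-fuse comes from the vmfs-tools package on most rescue images.
# /dev/sda3 is assumed to be the large VMFS partition; adjust if needed.
MNT=/mnt/datastore1
mkdir -p "$MNT"
vmfs-fuse /dev/sda3 "$MNT"

# The VM directories should now be visible:
ls -l "$MNT"

# Unmount when done:
fusermount -u "$MNT"
```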
Here is gdisk result :
Disk /dev/sda: 3907029168 sectors, 1.8 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): F619A420-B121-4BD8-9041-FE80C22FC855
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 3907029134
Partitions will be aligned on 32-sector boundaries
Total free space is 158 sectors (79.0 KiB)

Number  Start (sector)  End (sector)  Size       Code
   1              64          8191    4.0 MiB    EF00
   2         1843200      10229759    4.0 GiB    0700
   3        10229760    3907029134    1.8 TiB    FB00
   5            8224        520191    250.0 MiB  0700
   6          520224       1032191    250.0 MiB  0700
   7         1032224       1257471    110.0 MiB  FC00
   8         1257504       1843199    286.0 MiB  0700
I remembered an option we used with versions 4 and 5, when ESXi could not boot due to disk problems:
Plug a 4GB or 8GB USB key (preferably a branded one) into the physical server.
Install VMware vSphere (ESXi) from a boot ISO with the version that was installed on the server, or at least the closest one.
Choose the option to install while preserving VMFS, targeting the USB key (verify that you really selected the USB key).
Configure the BIOS so that the server boots from the USB key.
When ESXi starts, it should automatically mount any VMFS volumes it finds; if it does not see them, you can rescan the HBAs and storage. If the format of the volumes is OK, you should have no problems accessing them.
Finally, you can export the VMs.
If you can't get access, simply shut down the server and remove the USB key, returning to the original situation.
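The rescan step can be done from the ESXi shell once the USB-booted host is up; a sketch using standard ESXi 5 commands (they require a running ESXi host, of course):

```shell
# Rescan all HBAs for newly visible devices
esxcli storage core adapter rescan --all

# Rescan the devices for VMFS volumes
vmkfstools -V

# List the filesystems (datastores) the host now sees
esxcli storage filesystem list
```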
I completely agree with Enrique's suggestion. That's the way to go.
Since you already know that the VMFS volume is still readable via vmfs-fuse, the straightforward path is:
- install the same ESXi 5 version to a USB stick, making sure you do not accidentally overwrite partition 3
- once booted from the USB stick, deal with the VMFS volume - chances are good that it is still readable
- if not, extract the VMs via vmfs-fuse to a temporary location
- then format the original disk as a new datastore and upload the VMs again
Unfortunately, as I mentioned previously, I have no physical access to the server, so I guess I'm s****d.
I think I'm going to start over, since I have nothing really important to back up (I can still back up the VMs to FTP from the Linux rescue mode, though), but I wanted to know if there was a way to rebuild the boot from this rescue mode, which seems impossible after reading your replies.
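In case it helps anyone in the same situation, that FTP backup from rescue mode can be done with curl, roughly like this (a sketch; the datastore is assumed to be already mounted via vmfs-fuse, and the FTP host, path and credentials are placeholders):

```shell
# Assumes the VMFS datastore is already mounted (e.g. via vmfs-fuse).
SRC=/mnt/datastore1                        # placeholder mount point
FTP_URL="ftp://backup.example.com/esxi/"   # placeholder FTP server
FTP_USER="user:password"                   # placeholder credentials

cd "$SRC"
# Upload each VM directory as a compressed archive, streamed to FTP
for vmdir in */; do
    name=$(basename "$vmdir")
    tar -czf - "$vmdir" | curl -T - --user "$FTP_USER" "${FTP_URL}${name}.tar.gz"
done
```

Streaming through `tar | curl` avoids needing local scratch space the size of the VMs.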
Anyway thanx for your help and for your time !