VMware Cloud Community
jetaa
Contributor
Contributor
Jump to solution

vSAN: unable to remove a disk from a disk group

Hi all, a question:

Our vSAN capacity tier uses two hard disks, each configured as a RAID0 VD. One disk now shows as Foreign and the other as Failed.

The disk group information displayed in the Web Client is as follows:

[Screenshot: disk group information shown in the Web Client]

However, logging in to the ESXi host and running "esxcli vsan storage list" does not show this VD at all - there is no entry for the Absent vsan disk (vsan uuid: xxxx).

The RAID0 VD cannot be removed from the disk group; the error shown is as follows:

[Screenshot: error message when attempting to remove the disk]

How can I delete this absent vSAN disk entry?

6 Replies
TheBobkin
Champion

Hello jetaa,

Welcome to Communities.

Just a reminder - this is an English-speaking forum, so please write in English or use a translation tool before posting (or post in a native-language sub-forum).

From Google-translate:

"The bosses ask,

       VSAN uses two hard disks as RAID0 VDs as capacity disks. Now a fast hard disk foregin a hard disk faild,

        The disk group information displayed under the web client is as follows.

         However, the esxicli vsan storage list does not display the VSD information of Absent vsan disk (vsan uuid:xxxx).

       This raid0 VD cannot be deleted in the disk group. The error is as follows.

How do you delete this absent vsan Disk information?"

If the device is PDL (hence it is referenced by its UUID rather than by its naa identifier) then removing it can sometimes be problematic.

First check that your data has been rebuilt elsewhere via Cluster > Monitor > vSAN > Health > Data - if this is green then it should be safe to place this node in Maintenance Mode with the 'Ensure Accessibility' option and then reboot it.

If it comes up with the device accessible again then you should be able to remove it (before taking the host out of Maintenance Mode).
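
If you prefer to do the Maintenance Mode step from the ESXi shell, it can be done roughly like this (a sketch only - double-check the option names on your ESXi build with --help before running anything):

#esxcli system maintenanceMode set -e true -m ensureObjectAccessibility
(reboot the host, verify the device, remove it, and only then:)
#esxcli system maintenanceMode set -e false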

Further information regarding the cause of the failure should be noted in vmkernel.log, e.g. 0x3 0x11 sense codes.
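
A quick way to look for those entries from the shell (a rough sketch, assuming the default log location):

#grep -i "sense data" /var/log/vmkernel.log | tail -n 20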

Bob

jetaa
Contributor

Thank you, Bob.

I put the host into Maintenance Mode and rebooted it, but the disk is still identified as "Absent vsan disk (UUID: xxxxx)" and it still cannot be removed from the disk group.

The vmkernel.log shows that the hard disk was not found:

"cpu47:37057 opID=f36cf424)WARNING: PLOG: PLOG_ExecVSIOp:1551: Disk xxxxxx1-05xx-1xxx1-03ac-xxxxxxxxxxx not found in plog list"

TheBobkin
Champion

Hello jetaa,

"but after restarting the host, the disk was still identified as "Absent vsan disk (UUID: XXXXX)", which still failed to remove the disk from the disk group."

Have you attempted removing this disk via the GUI? (Cluster > Manage/Configure > vSAN > Disk Management > Select device > Delete button > No Action (ONLY use this option if all data is healthy))
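
If the GUI keeps failing, the same removal can be attempted from the ESXi shell - a rough sketch, where the UUID is a placeholder for the one shown against the Absent disk (check 'esxcli vsan storage remove --help' for the evacuation-mode options, and only use a no-action style removal if all data is healthy):

#esxcli vsan storage remove -u <absent disk vSAN UUID>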

Are you positive the device hasn't come back referenced by its naa identifier, with this PDL UUID persisting alongside it?

Check in /dev/disks/ how many devices have vSAN partitions on them (there should be 2 partitions on each device) - if the device is PDL then you should see the total number of disks minus one.

#vdq -q    should also indicate whether a device is PDL.
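
For example, roughly (the naa name below is just a placeholder):

#ls /dev/disks/ | grep naa.xxxxxxxxxxxxxxxx
(a healthy claimed device should show the device itself plus its two vSAN partitions, :1 and :2)
#vdq -q
(the State/Reason fields should flag a device that is dead or PDL)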

Do you have physical access to the servers? You could try reseating the device and/or checking for other signs of life e.g. blinky lights on boot etc.

Have you checked if the device is visible in the controller BIOS settings and/or whether the RAID0 VD is intact?

Bob

jetaa
Contributor

In the RAID controller configuration, the RAID0 VD no longer exists.

TheBobkin
Champion

Hello jetaa,

Can you try reseating the device?

Bob

jetaa
Contributor
(Accepted Solution)

Thank you, Bob.

I tried replacing the hard disk and deleting the bad disk, but it didn't work.

After putting the host into Maintenance Mode, I resolved the problem by removing the disk group.
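
(In case it helps others: with the host in Maintenance Mode, the disk group can also be removed from the ESXi shell, roughly as below. The naa name is a placeholder for the cache-tier device; removing the cache device removes the whole disk group, so make sure the data is healthy or evacuated first.)

#esxcli vsan storage remove -s naa.xxxxxxxxxxxxxxxx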
