VMware Cloud Community
b1izzard
Contributor

vSAN adding disk fails

I am trying to add a disk to vSAN and am getting the following error: 

Initialize disks in the vSAN cluster

Status:

A general system error occurred: Failed to reserve disk naa.5000cca03c6757dc with exception: Failed to write vSAN intent log Failed to write partition

On the ESXi host, I also get a status of 'Normal, Degraded' for the disk.  See attachments.

I have tried 2 different disks.  Any ideas?  Thanks

1 Solution

Accepted solution: Lalegre's reply, quoted in full in the thread below.

9 Replies
Lalegre
Virtuoso

Could you please share the details of the disk you are using, and also the vSAN version? Also, how are you presenting this disk: passthrough or RAID 0?

TheBobkin
Champion

Hello b1izzard

Can you test whether you are able to temporarily put a VMFS partition on it, or clear its partition table using dd?

I ask because the error message may indicate the devices are in a read-only state. If that is the case (and they are not broken), a reboot usually resolves it.

What preceded this issue? E.g., are these new devices, or ones with pre-existing partitions?
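If it helps, the dd approach usually looks something like the following on an ESXi host (device name taken from the error in the original post; wiping is irreversible, so double-check the device ID first):

```shell
# Show the current partition layout of the device:
partedUtil getptbl /vmfs/devices/disks/naa.5000cca03c6757dc

# Zero the first 50 MiB to clear the partition table and any leftover
# vSAN metadata. If the device is stuck read-only or failing, this write
# will error out, which is itself a useful data point:
dd if=/dev/zero of=/vmfs/devices/disks/naa.5000cca03c6757dc bs=1M count=50
```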

Bob

lucasbernadsky
Hot Shot

Hi, can you SSH into the ESXi host and show us the output of the esxcli vsan storage list command?

If a disk group has already been created, try removing it with the esxcli vsan storage remove -u <VSAN Disk Group UUID> command. (More info: https://kb.vmware.com/s/article/2150567)

Also, try deleting any vSAN partitions that may have been left over from the previous attempts - http://vpirate.in/2019/01/07/how-to-delete-previous-vsan-partitions-on-the-disk/

Then recreate the disk group from the UI.
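Taken together, the steps above might look like this on the host (the disk-group UUID and partition number below are placeholders; the device name comes from the error in the original post):

```shell
# 1. List any storage vSAN has already claimed (this also shows the
#    vSAN disk-group UUID, if one exists):
esxcli vsan storage list

# 2. If a leftover disk group exists, remove it by that UUID
#    (placeholder UUID shown):
esxcli vsan storage remove -u 52xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx

# 3. Inspect and delete any stale vSAN partitions on the disk
#    (partition number 1 is an example; use whatever getptbl reports):
partedUtil getptbl /vmfs/devices/disks/naa.5000cca03c6757dc
partedUtil delete /vmfs/devices/disks/naa.5000cca03c6757dc 1
```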

b1izzard
Contributor

I think the drive itself is probably the issue. I am learning vSAN, so this is a test-lab setup. I may have made the fatal mistake of assuming I could skate by with these drives; after some research, they appear rather ancient. I am running passthrough. The drive is a Hitachi Ultrastar C10K600 600GB 10K SAS, model HUC106060CSS600.

Lalegre
Virtuoso

Well, the thing is that some disks and I/O devices simply will not work with vSAN: they are not supported, and in some cases not even compatible.

I took a quick look for your disk on the VMware Compatibility Guide (without having all the IDs) and it is not there, not even for the older vSAN versions. Any fix you get with help from this forum will be best effort, as you will never know whether it will actually work.

b1izzard
Contributor

The partition table looks clear. See attachment. These are used eBay drives I am throwing in. I did try rebooting the ESXi host, and it still shows Normal, Degraded.

chrome_glQyffkjvZ.jpg

b1izzard
Contributor

The drive does not appear when running esxcli vsan storage list.

b1izzard
Contributor

I think I will cut my losses with this, as it sounds like it won't work or could be very challenging to make work (plus it's not officially supported), so everyone please don't waste any more of your time on this. I appreciate your input.

To move forward, I checked the VMware HCL for vSAN 7-compatible HP drives, then cross-checked it against the HP DL380 Gen8 options list to find a match: https://support.hpe.com/hpesc/public/docDisplay?docId=emr_na-c03235291 . One option I came up with was 652583-B21. Then I found some cheap ones on eBay... so they have to work! Thank you all for chiming in.

TheBobkin
Champion

Hello All,

"Well the thing is that some disks or I/O devices simply will not work on vSAN as they are not supported but in some cases not even compatible."

Lalegre, while I wouldn't go so far as to say that any modern disk can be used in a homelab, if ESXi can interact with a disk correctly and consume it, then it should work. Whether it will be reliable, suffer poor performance (see "vSAN 6.7 U3 on 3 nodes HPE xl170r Gen 9 - weird write latency" for a detailed example), or burn out after a relatively short space of time (as such disks were not intended for these purposes) is the question, and the main point of why we have the vSAN HCL. It is odd that the disk is seen in some manner but is clearly non-functional (in the current state, anyway).

"These are used eBay drives I am throwing in."

b1izzard, I would be incredibly wary of buying any used components that can easily be worn out in various ways (e.g. any type of disk device, GPUs, PSUs). Are you sure these are in a properly functional state? E.g., test them with a Linux/Windows server OS and run some basic benchmark/error testing on them.

If they are functional then it could be any number of things such as the current configuration of the disk/controller, for example:

https://www.reddit.com/r/homelab/comments/98x1sh/h700_drives_failed_when_creating_a_virtual_disk/

I had a quick Google and it *looks* like this status may appear when not all of the paths to the device are functional (which should be easy to validate from the storage-device section of the UI, or using esxcfg-mpath -l). This could be due to a number of things, but I am unsure what to advise for debugging it further.
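For reference, checking the paths from the CLI might look like this (the device name comes from the original error message, and the exact output format varies by ESXi build):

```shell
# List every path to the device; each should report state "active":
esxcfg-mpath -l -d naa.5000cca03c6757dc

# Equivalent esxcli view of the same path information:
esxcli storage core path list -d naa.5000cca03c6757dc
```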

"so everyone please don't waste anymore of your time on this."

This isn't necessarily a waste of time, this is what we do for fun :smileygrin:

"so they have to work!"

Please reference the part I mentioned above regarding buying used devices: sure, they could have had a careful, low-use owner, or they could have been absolutely melted to breaking point by heavy workloads.

Bob