<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Single node vSAN cluster boot error after upgrade from 6.5 to 6.7 in VMware vSAN Discussions</title>
    <link>https://communities.vmware.com/t5/VMware-vSAN-Discussions/Single-node-vSAN-cluster-boot-error-after-upgrade-from-6-5-to-6/m-p/1383087#M4696</link>
    <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Any resyncing components? My guess is that there may be a rebalance operation going on. &lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
    <pubDate>Tue, 04 Sep 2018 21:17:31 GMT</pubDate>
    <dc:creator>GreatWhiteTec</dc:creator>
    <dc:date>2018-09-04T21:17:31Z</dc:date>
    <item>
      <title>Single node vSAN cluster boot error after upgrade from 6.5 to 6.7</title>
      <link>https://communities.vmware.com/t5/VMware-vSAN-Discussions/Single-node-vSAN-cluster-boot-error-after-upgrade-from-6-5-to-6/m-p/1383086#M4695</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi, &lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I have a single-node vSAN cluster used as a homelab for many nested environments. I upgraded ESXi from 6.5 to 6.7 without any issues. I noticed that during boot there is an error, &lt;STRONG&gt;recovery progress&lt;/STRONG&gt; - see the attached photo.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper" image-alt="2018-09-04 22_07_19-Resolution_1024x768 FPS _10.png"&gt;&lt;img src="https://communities.vmware.com/t5/image/serverpage/image-id/4198i1D4451B8D5F6D292/image-size/large?v=v2&amp;amp;px=999" role="button" title="2018-09-04 22_07_19-Resolution_1024x768 FPS _10.png" alt="2018-09-04 22_07_19-Resolution_1024x768 FPS _10.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;I also found something related in the logs:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;BLOCKQUOTE&gt;&lt;P&gt;2018-09-04T20:07:53.291Z cpu0:2097506)WARNING: Local node faultDomain ID is changed from 00000000-0000-0000-0000-000000000000 to 59b83712-415a-dc44-993d-0025905e041e&lt;/P&gt;&lt;P&gt;2018-09-04T20:07:54.305Z cpu5:2097506)WARNING: LSOMCommon: LSOM_DiskGroupCreate:1481: Disk group already created uuid: 522288b0-d99b-a8f8-6dd5-44b7ef7354e4&lt;/P&gt;&lt;P&gt;2018-09-04T20:07:54.313Z cpu5:2097316)WARNING: NFS: 1227: Invalid volume UUID t10.ATA_____Crucial_CT2050MX300SSD1_________________________1651150F2144:2&lt;/P&gt;&lt;P&gt;2018-09-04T20:07:54.396Z cpu0:2097506)WARNING: NFS: 1227: Invalid volume UUID 59b837bc-f856ecbc-0c43-0025905e041e&lt;/P&gt;&lt;P&gt;2018-09-04T20:08:07.381Z cpu0:2099225)WARNING: NTPClock: 1561: system clock synchronized to upstream time servers&lt;/P&gt;&lt;P&gt;2018-09-04T20:08:08.281Z cpu1:2099253)WARNING: LSOMCommon: LSOM_DiskGroupCreate:1481: Disk group already created uuid: 522288b0-d99b-a8f8-6dd5-44b7ef7354e4&lt;/P&gt;&lt;P&gt;2018-09-04T20:08:08.289Z cpu4:2097312)WARNING: NFS: 1227: Invalid volume UUID 
t10.ATA_____Crucial_CT2050MX300SSD1_________________________1651150F2144:2&lt;/P&gt;&lt;P&gt;2018-09-04T20:08:08.510Z cpu4:2099257)WARNING: LSOMCommon: LSOM_DiskGroupCreate:1481: Disk group already created uuid: 522288b0-d99b-a8f8-6dd5-44b7ef7354e4&lt;/P&gt;&lt;P&gt;2018-09-04T20:08:08.519Z cpu1:2097312)WARNING: NFS: 1227: Invalid volume UUID t10.ATA_____Crucial_CT2050MX300SSD1_________________________1651150F2144:2&lt;/P&gt;&lt;P&gt;2018-09-04T20:08:09.711Z cpu2:2099345)WARNING: LSOMCommon: LSOM_DiskGroupCreate:1481: Disk group already created uuid: 522288b0-d99b-a8f8-6dd5-44b7ef7354e4&lt;/P&gt;&lt;P&gt;2018-09-04T20:08:09.719Z cpu0:2097310)WARNING: NFS: 1227: Invalid volume UUID t10.ATA_____Crucial_CT2050MX300SSD1_________________________1651150F2144:2&lt;/P&gt;&lt;/BLOCKQUOTE&gt;&lt;P&gt;After boot ESXi is ok, vSAN status looks good.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;BLOCKQUOTE&gt;&lt;P&gt;[root@ESXi:/vsantraces] esxcli vsan storage list&lt;/P&gt;&lt;P&gt;t10.NVMe____WDC_WDS256G1X0C2D00ENX0__________________B10A46444A441B00&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Device: t10.NVMe____WDC_WDS256G1X0C2D00ENX0__________________B10A46444A441B00&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Display Name: t10.NVMe____WDC_WDS256G1X0C2D00ENX0__________________B10A46444A441B00&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Is SSD: true&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; VSAN UUID: 522288b0-d99b-a8f8-6dd5-44b7ef7354e4&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; VSAN Disk Group UUID: 522288b0-d99b-a8f8-6dd5-44b7ef7354e4&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; VSAN Disk Group Name: t10.NVMe____WDC_WDS256G1X0C2D00ENX0__________________B10A46444A441B00&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Used by this host: true&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; In CMMDS: true&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; On-disk format version: 5&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Deduplication: false&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Compression: false&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Checksum: 
3722446061210474928&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Checksum OK: true&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Is Capacity Tier: false&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Encryption: false&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; DiskKeyLoaded: false&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Is Mounted: true&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;t10.ATA_____Crucial_CT2050MX300SSD1_________________________1651150F2144&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Device: t10.ATA_____Crucial_CT2050MX300SSD1_________________________1651150F2144&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Display Name: t10.ATA_____Crucial_CT2050MX300SSD1_________________________1651150F2144&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Is SSD: true&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; VSAN UUID: 52c93db2-c879-ce09-ea2a-664a0ae10485&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; VSAN Disk Group UUID: 522288b0-d99b-a8f8-6dd5-44b7ef7354e4&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; VSAN Disk Group Name: t10.NVMe____WDC_WDS256G1X0C2D00ENX0__________________B10A46444A441B00&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Used by this host: true&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; In CMMDS: true&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; On-disk format version: 5&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Deduplication: false&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Compression: false&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Checksum: 12121898985191738183&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Checksum OK: true&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Is Capacity Tier: true&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Encryption: false&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; DiskKeyLoaded: false&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Is Mounted: true&lt;/P&gt;&lt;/BLOCKQUOTE&gt;&lt;BLOCKQUOTE&gt;&lt;P&gt;[root@ESXi:~] esxcli vsan cluster get&lt;/P&gt;&lt;P&gt;Cluster Information&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Enabled: true&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Current Local Time: 
2018-09-04T20:11:10Z&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Local Node UUID: 59b83712-415a-dc44-993d-0025905e041e&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Local Node Type: NORMAL&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Local Node State: MASTER&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Local Node Health State: HEALTHY&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Sub-Cluster Master UUID: 59b83712-415a-dc44-993d-0025905e041e&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Sub-Cluster Backup UUID:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Sub-Cluster UUID: 522288b0-d99b-a8f8-6dd5-44b7ef7354e4&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Sub-Cluster Membership Entry Revision: 0&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Sub-Cluster Member Count: 1&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Sub-Cluster Member UUIDs: 59b83712-415a-dc44-993d-0025905e041e&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Sub-Cluster Membership UUID: 22e58e5b-16de-fb32-0412-0025905e04e4&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Unicast Mode Enabled: true&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Maintenance Mode State: OFF&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; Config Generation: None 0 0.0&lt;/P&gt;&lt;/BLOCKQUOTE&gt;&lt;P&gt;Any idea how to troubleshoot?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Cheers&lt;BR /&gt;Wojciech&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Tue, 04 Sep 2018 20:34:38 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/VMware-vSAN-Discussions/Single-node-vSAN-cluster-boot-error-after-upgrade-from-6-5-to-6/m-p/1383086#M4695</guid>
      <dc:creator>wmarusiak</dc:creator>
      <dc:date>2018-09-04T20:34:38Z</dc:date>
    </item>
    <item>
      <title>Re: Single node vSAN cluster boot error after upgrade from 6.5 to 6.7</title>
      <link>https://communities.vmware.com/t5/VMware-vSAN-Discussions/Single-node-vSAN-cluster-boot-error-after-upgrade-from-6-5-to-6/m-p/1383087#M4696</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Any resyncing components? My guess is that there may be a rebalance operation going on. &lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Tue, 04 Sep 2018 21:17:31 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/VMware-vSAN-Discussions/Single-node-vSAN-cluster-boot-error-after-upgrade-from-6-5-to-6/m-p/1383087#M4696</guid>
      <dc:creator>GreatWhiteTec</dc:creator>
      <dc:date>2018-09-04T21:17:31Z</dc:date>
    </item>
    <item>
      <title>Re: Single node vSAN cluster boot error after upgrade from 6.5 to 6.7</title>
      <link>https://communities.vmware.com/t5/VMware-vSAN-Discussions/Single-node-vSAN-cluster-boot-error-after-upgrade-from-6-5-to-6/m-p/1383088#M4697</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hello Wojciech,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;The display messages for PLOG recovery during boot (e.g. initializing the Disk-Groups) now have a % number added; this doesn't indicate an issue.&lt;/P&gt;&lt;P&gt;You can verify that this process completed by looking in boot.gz or vmkernel.log for PLOG recovery being reported as successful, for example something like &lt;EM&gt;zcat /var/log/boot.gz | grep -i "Recovery complete"&lt;/EM&gt; on the host.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;B&gt;GreatWhiteTec&lt;/B&gt;&amp;nbsp; - how much data do you think could be out of sync and in need of rebalance in a 1-node cluster with a single capacity-tier device?&amp;nbsp; &lt;img id="smileywink" class="emoticon emoticon-smileywink" src="https://communities.vmware.com/i/smilies/16x16_smiley-wink.png" alt="Smiley Wink" title="Smiley Wink" /&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Bob&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Wed, 05 Sep 2018 07:08:22 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/VMware-vSAN-Discussions/Single-node-vSAN-cluster-boot-error-after-upgrade-from-6-5-to-6/m-p/1383088#M4697</guid>
      <dc:creator>TheBobkin</dc:creator>
      <dc:date>2018-09-05T07:08:22Z</dc:date>
    </item>
    <item>
      <title>Re: Single node vSAN cluster boot error after upgrade from 6.5 to 6.7</title>
      <link>https://communities.vmware.com/t5/VMware-vSAN-Discussions/Single-node-vSAN-cluster-boot-error-after-upgrade-from-6-5-to-6/m-p/1383089#M4698</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi, &lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I checked vmkernel.log during boot and it doesn't look good.&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper" image-alt="2018-09-16 21_55_43-Resolution_1024x768 FPS _29.png"&gt;&lt;img src="https://communities.vmware.com/t5/image/serverpage/image-id/4564i4590E07930435BEF/image-size/large?v=v2&amp;amp;px=999" role="button" title="2018-09-16 21_55_43-Resolution_1024x768 FPS _29.png" alt="2018-09-16 21_55_43-Resolution_1024x768 FPS _29.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;Those are the last entries I see in vmkernel.log&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;BLOCKQUOTE&gt;&lt;P&gt;2018-09-16T18:32:18.733Z cpu4:2099026)PLOG: PLOGMapMetadataPartition:2607: SSD acks (2) 1 healthy MDs&lt;/P&gt;&lt;P&gt;2018-09-16T18:32:18.735Z cpu4:2099026)PLOG: PLOGProbeDevice:5728: Probed plog device &amp;lt;t10.ATA_____Crucial_CT2050MX300SSD1_________________________1651150F2144:1&amp;gt; 0x430ab0920d40 exists.. continue with old entry&lt;/P&gt;&lt;P&gt;2018-09-16T18:32:18.735Z cpu4:2099026)WARNING: LSOMCommon: LSOM_DiskGroupCreate:1481: Disk group already created uuid: 522288b0-d99b-a8f8-6dd5-44b7ef7354e4&lt;/P&gt;&lt;P&gt;2018-09-16T18:32:18.735Z cpu4:2099026)LSOMCommon: SSDLOG_AddDisk:877: Existing ssd found t10.NVMe____WDC_WDS256G1X0C2D00ENX0__________________B10A46444A441B00:2&lt;/P&gt;&lt;P&gt;2018-09-16T18:32:18.735Z cpu4:2099026)PLOG: PLOGAnnounceSSD:7268: Successfully added VSAN SSD (t10.NVMe____WDC_WDS256G1X0C2D00ENX0__________________B10A46444A441B00:2) with UUID 522288b0-d99b-a8f8-6dd5-44b7ef7354e4. 
kt 1, en 0, enC 0.&lt;/P&gt;&lt;P&gt;2018-09-16T18:32:18.735Z cpu4:2099026)VSAN: Initializing SSD: 522288b0-d99b-a8f8-6dd5-44b7ef7354e4 Please wait...&lt;/P&gt;&lt;P&gt;2018-09-16T18:32:18.736Z cpu5:2098374)PLOG: PLOGNotifyDisks:4546: MD 0 with UUID 52c93db2-c879-ce09-ea2a-664a0ae10485 with state 0 formatVersion 5 backing SSD 522288b0-d99b-a8f8-6dd5-44b7ef7354e4 notified&lt;/P&gt;&lt;P&gt;2018-09-16T18:32:18.736Z cpu5:2098374)PLOG: PLOG_Recover:884: !!!! SSD 522288b0-d99b-a8f8-6dd5-44b7ef7354e4 already recovered&lt;/P&gt;&lt;P&gt;2018-09-16T18:32:18.736Z cpu4:2099026)VSAN: Successfully Initialized: 522288b0-d99b-a8f8-6dd5-44b7ef7354e4&lt;/P&gt;&lt;P&gt;2018-09-16T18:32:18.736Z cpu4:2099026)PLOG: PLOGOpenDevice:4238: Disk handle open failure for device t10.ATA_____Crucial_CT2050MX300SSD1_________________________1651150F2144:2, status:Busy&lt;/P&gt;&lt;P&gt;2018-09-16T18:32:18.737Z cpu4:2099026)PLOG: PLOGInitAndAnnounceMD:7737: Successfully announced VSAN MD (t10.ATA_____Crucial_CT2050MX300SSD1_________________________1651150F2144:2) with UUID: 52c93db2-c879-ce09-ea2a-664a0ae10485. 
kt 1, en 0, enC 0.&lt;/P&gt;&lt;P&gt;2018-09-16T18:32:18.821Z cpu2:2097312)PLOG: PLOGOpenDevice:4238: Disk handle open failure for device t10.ATA_____Crucial_CT2050MX300SSD1_________________________1651150F2144:2, status:Busy&lt;/P&gt;&lt;P&gt;2018-09-16T18:32:18.821Z cpu2:2097312)PLOG: PLOGOpenDevice:4238: Disk handle open failure for device t10.ATA_____Crucial_CT2050MX300SSD1_________________________1651150F2144:2, status:Busy&lt;/P&gt;&lt;P&gt;2018-09-16T18:32:18.821Z cpu2:2097312)Vol3: 2674: Could not open device 't10.ATA_____Crucial_CT2050MX300SSD1_________________________1651150F2144:2' for probing: Busy&lt;/P&gt;&lt;P&gt;2018-09-16T18:32:18.821Z cpu2:2097312)WARNING: NFS: 1227: Invalid volume UUID t10.ATA_____Crucial_CT2050MX300SSD1_________________________1651150F2144:2&lt;/P&gt;&lt;P&gt;2018-09-16T18:32:18.821Z cpu2:2097312)PLOG: PLOGOpenDevice:4238: Disk handle open failure for device t10.ATA_____Crucial_CT2050MX300SSD1_________________________1651150F2144:2, status:Busy&lt;/P&gt;&lt;P&gt;2018-09-16T18:32:18.822Z cpu2:2097312)PLOG: PLOGOpenDevice:4238: Disk handle open failure for device t10.ATA_____Crucial_CT2050MX300SSD1_________________________1651150F2144:2, status:Busy&lt;/P&gt;&lt;P&gt;2018-09-16T18:32:18.822Z cpu2:2097312)Vol3: 1201: Could not open device 't10.ATA_____Crucial_CT2050MX300SSD1_________________________1651150F2144:2' for volume open: Busy&lt;/P&gt;&lt;P&gt;2018-09-16T18:32:18.822Z cpu2:2097312)PLOG: PLOGOpenDevice:4238: Disk handle open failure for device t10.ATA_____Crucial_CT2050MX300SSD1_________________________1651150F2144:2, status:Busy&lt;/P&gt;&lt;P&gt;2018-09-16T18:32:18.822Z cpu2:2097312)Vol3: 1201: Could not open device 't10.ATA_____Crucial_CT2050MX300SSD1_________________________1651150F2144:2' for volume open: Busy&lt;/P&gt;&lt;P&gt;2018-09-16T18:32:18.823Z cpu2:2097312)PLOG: PLOGOpenDevice:4238: Disk handle open failure for device t10.ATA_____Crucial_CT2050MX300SSD1_________________________1651150F2144:2, 
status:Busy&lt;/P&gt;&lt;P&gt;2018-09-16T18:32:18.823Z cpu2:2097312)Vol3: 1201: Could not open device 't10.ATA_____Crucial_CT2050MX300SSD1_________________________1651150F2144:2' for volume open: Busy&lt;/P&gt;&lt;P&gt;2018-09-16T18:32:18.823Z cpu2:2097312)PLOG: PLOGOpenDevice:4238: Disk handle open failure for device t10.ATA_____Crucial_CT2050MX300SSD1_________________________1651150F2144:2, status:Busy&lt;/P&gt;&lt;P&gt;2018-09-16T18:32:18.823Z cpu2:2097312)Vol3: 1201: Could not open device 't10.ATA_____Crucial_CT2050MX300SSD1_________________________1651150F2144:2' for volume open: Busy&lt;/P&gt;&lt;P&gt;2018-09-16T18:32:18.823Z cpu2:2097312)FSS: 6092: No FS driver claimed device 't10.ATA_____Crucial_CT2050MX300SSD1_________________________1651150F2144:2': No filesystem on the device&lt;/P&gt;&lt;P&gt;2018-09-16T18:32:19.077Z cpu3:2099026)VC: 4616: Device rescan time 102 msec (total number of devices 6)&lt;/P&gt;&lt;P&gt;2018-09-16T18:32:19.077Z cpu3:2099026)VC: 4619: Filesystem probe time 253 msec (devices probed 4 of 6)&lt;/P&gt;&lt;P&gt;2018-09-16T18:32:19.077Z cpu3:2099026)VC: 4621: Refresh open volume time 3 msec&lt;/P&gt;&lt;P&gt;2018-09-16T18:32:19.203Z cpu5:2099027)vmw_ahci[0000001f]: scsiDiscover:channel=0, target=2, lun=0, action=0&lt;/P&gt;&lt;P&gt;2018-09-16T18:32:19.203Z cpu5:2099027)vmw_ahci[0000001f]: scsiDiscover:No media&lt;/P&gt;&lt;P&gt;2018-09-16T18:32:19.203Z cpu5:2099027)vmw_ahci[0000001f]: scsiDiscover:channel=0, target=3, lun=0, action=0&lt;/P&gt;&lt;P&gt;2018-09-16T18:32:19.203Z cpu5:2099027)vmw_ahci[0000001f]: scsiDiscover:No media&lt;/P&gt;&lt;P&gt;2018-09-16T18:32:19.203Z cpu5:2099027)vmw_ahci[0000001f]: scsiDiscover:channel=0, target=4, lun=0, action=0&lt;/P&gt;&lt;P&gt;2018-09-16T18:32:19.203Z cpu5:2099027)vmw_ahci[0000001f]: scsiDiscover:No media&lt;/P&gt;&lt;P&gt;2018-09-16T18:32:19.203Z cpu5:2099027)vmw_ahci[0000001f]: scsiDiscover:channel=0, target=5, lun=0, action=0&lt;/P&gt;&lt;P&gt;2018-09-16T18:32:19.203Z 
cpu5:2099027)vmw_ahci[0000001f]: scsiDiscover:No media&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;/BLOCKQUOTE&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;And some here&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;BLOCKQUOTE&gt;&lt;P&gt;2018-09-16T18:27:34.266Z cpu2:2097506)LSOMCommon: SSDLOGInitDescForIO:803: Recovery complete with Success... device: t10.NVMe____WDC_WDS256G1X0C2D00ENX0__________________B10A46444A441B00:2&lt;/P&gt;&lt;P&gt;2018-09-16T18:27:34.266Z cpu1:2097600)ScsiDeviceIO: 3015: Cmd(0x459a40fae580) 0x85, CmdSN 0x0 from world 2097505 to dev "t10.NVMe____WDC_WDS256G1X0C2D00ENX0__________________B10A46444A441B00" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x20 0x0.&lt;/P&gt;&lt;P&gt;2018-09-16T18:27:34.266Z cpu2:2097506)LSOMCommon: LSOM_RegisterDiskAttrHandle:124: t10.NVMe____WDC_WDS256G1X0C2D00ENX0__________________B10A46444A441B00:2 is a non-SATA disk&lt;/P&gt;&lt;P&gt;2018-09-16T18:27:34.266Z cpu2:2097506)LSOMCommon: LSOM_RegisterDiskAttrHandle:131: DiskAttrHandle:0x4313fcf6c7a8 is added to disk:t10.NVMe____WDC_WDS256G1X0C2D00ENX0__________________B10A46444A441B00:2 by module:lsomcommon&lt;/P&gt;&lt;P&gt;2018-09-16T18:27:34.266Z cpu2:2097506)PLOG: PLOG_DeviceCreateSSDLogHandle:1096: Registered APD callback for 522288b0-d99b-a8f8-6dd5-44b7ef7354e4&lt;/P&gt;&lt;P&gt;2018-09-16T18:27:34.266Z cpu2:2097506)LSOMCommon: IORETRY_Create:2585: An IORETRY queue for diskUUID 522288b0-d99b-a8f8-6dd5-44b7ef7354e4 (0x4313fcf7e850) is NOT encrypted&lt;/P&gt;&lt;P&gt;2018-09-16T18:27:34.266Z cpu2:2097506)LSOMCommon: IORETRY_Create:2624: Queue Depth for device 522288b0-d99b-a8f8-6dd5-44b7ef7354e4 set to 920&lt;/P&gt;&lt;P&gt;2018-09-16T18:27:34.266Z cpu2:2097506)ScsiEvents: 300: EventSubsystem: Device Events, Event Mask: 20, Parameter: 0x4313fcf7e850, Registered!&lt;/P&gt;&lt;P&gt;2018-09-16T18:27:34.267Z cpu2:2097506)Created VSAN Slab PLOGIORetry_slab_0000000000 (objSize=272 align=64 minObj=2500 maxObj=25000 overheadObj=0 minMemUsage=836k 
maxMemUsage=8336k)&lt;/P&gt;&lt;P&gt;2018-09-16T18:27:34.267Z cpu2:2097506)Created VSAN Slab PLOGIORetry_slab_0000000001 (objSize=272 align=64 minObj=2500 maxObj=25000 overheadObj=0 minMemUsage=836k maxMemUsage=8336k)&lt;/P&gt;&lt;P&gt;2018-09-16T18:27:35.267Z cpu2:2097506)PLOG: PLOGAnnounceSSD:7253: Trace task started for device 522288b0-d99b-a8f8-6dd5-44b7ef7354e4&lt;/P&gt;&lt;P&gt;2018-09-16T18:27:35.267Z cpu2:2097506)PLOG: PLOGAnnounceSSD:7268: Successfully added VSAN SSD (t10.NVMe____WDC_WDS256G1X0C2D00ENX0__________________B10A46444A441B00:2) with UUID 522288b0-d99b-a8f8-6dd5-44b7ef7354e4. kt 1, en 0, enC 0.&lt;/P&gt;&lt;P&gt;2018-09-16T18:27:35.267Z cpu2:2097506)VSAN: Initializing SSD: 522288b0-d99b-a8f8-6dd5-44b7ef7354e4 Please wait...&lt;/P&gt;&lt;P&gt;2018-09-16T18:27:35.268Z cpu4:2098374)PLOG: PLOGNotifyDisks:4546: MD 0 with UUID 52c93db2-c879-ce09-ea2a-664a0ae10485 with state 0 formatVersion 5 backing SSD 522288b0-d99b-a8f8-6dd5-44b7ef7354e4 notified&lt;/P&gt;&lt;P&gt;2018-09-16T18:27:35.268Z cpu4:2098374)VSANServer: VSANServer_InstantiateServer:3380: Instantiated VSANServer 0x430ab0922e18&lt;/P&gt;&lt;P&gt;2018-09-16T18:27:35.269Z cpu5:2098134)Created VSAN Slab RcSsdParentsSlab_0x4313fcf83200 (objSize=208 align=64 minObj=2500 maxObj=25000 overheadObj=0 minMemUsage=668k maxMemUsage=6668k)&lt;/P&gt;&lt;P&gt;2018-09-16T18:27:35.270Z cpu5:2098134)Created VSAN Slab RcSsdIoSlab_0x4313fcf83200 (objSize=65536 align=64 minObj=64 maxObj=25000 overheadObj=0 minMemUsage=4352k maxMemUsage=1700000k)&lt;/P&gt;&lt;P&gt;2018-09-16T18:27:35.270Z cpu5:2098134)Created VSAN Slab RcSsdMdBElemSlab_0x4313fcf83200 (objSize=32 align=64 minObj=4 maxObj=4096 overheadObj=0 minMemUsage=4k maxMemUsage=264k)&lt;/P&gt;&lt;P&gt;2018-09-16T18:27:35.270Z cpu5:2098134)Created VSAN Slab RCInvBmapSlab_0x4313fcf83200 (objSize=56 align=64 minObj=1 maxObj=1 overheadObj=14 minMemUsage=4k maxMemUsage=4k)&lt;/P&gt;&lt;P&gt;2018-09-16T18:27:35.270Z cpu2:2098386)Global: Virsto_CreateInstance:163: 
INFO: Create new Virsto instance (heapName: virstoInstance_00000001)&lt;/P&gt;&lt;P&gt;2018-09-16T18:27:35.272Z cpu2:2098386)DOM: DOMDisk_GetServer:259: disk-group w/ SSD 522288b0-d99b-a8f8-6dd5-44b7ef7354e4 on dom/comp server 0&lt;/P&gt;&lt;P&gt;2018-09-16T18:27:35.272Z cpu2:2098386)LSOM: LSOMSendDiskStatusEvent:5424: Throttled: Unable to post disk status event for disk 522288b0-d99b-a8f8-6dd5-44b7ef7354e4: Not found&lt;/P&gt;&lt;P&gt;2018-09-16T18:27:35.447Z cpu0:2098099)LSOMCommon: SSDLOGLogEnumProgress:1406: Recovery progress: 1500 of ~502343 (0%) log blocks. 0s so far, ~58s left. device: t10.NVMe____WDC_WDS256G1X0C2D00ENX0__________________B10A46444A441B00:2&lt;/P&gt;&lt;P&gt;vSAN CacheDisk t10.NVMe____WDC_WDS256G1X0C2D00ENX0__________________B10A46444A441B00:2: Recovery progress: 1500 of ~502343 (0%)2018-09-16T18:27:35.612Z cpu3:2098099)LSOMCommon: SSDLOGLogEnumProgress:1406: Recovery progress: 3000 of ~502343 (0%) log blocks. 0s so far, ~56s left. device: t10.NVMe____WDC_WDS256G1X0C2D00ENX0__________________B10A46444A441B00:2&lt;/P&gt;&lt;P&gt;vSAN CacheDisk t10.NVMe____WDC_WDS256G1X0C2D00ENX0__________________B10A46444A441B00:2: Recovery progress: 3000 of ~502343 (0%)2018-09-16T18:27:35.775Z cpu2:2098099)LSOMCommon: SSDLOGLogEnumProgress:1406: Recovery progress: 4500 of ~502343 (0%) log blocks. 0s so far, ~55s left. device: t10.NVMe____WDC_WDS256G1X0C2D00ENX0__________________B10A46444A441B00:2&lt;/P&gt;&lt;P&gt;vSAN CacheDisk t10.NVMe____WDC_WDS256G1X0C2D00ENX0__________________B10A46444A441B00:2: Recovery progress: 4500 of ~502343 (0%)2018-09-16T18:27:35.937Z cpu2:2098099)LSOMCommon: SSDLOGLogEnumProgress:1406: Recovery progress: 6000 of ~502343 (1%) log blocks. 0s so far, ~55s left. 
device: t10.NVMe____WDC_WDS256G1X0C2D00ENX0__________________B10A46444A441B00:2&lt;/P&gt;&lt;P&gt;vSAN CacheDisk t10.NVMe____WDC_WDS256G1X0C2D00ENX0__________________B10A46444A441B00:2: Recovery progress: 6000 of ~502343 (1%)2018-09-16T18:27:36.102Z cpu3:2098099)LSOMCommon: SSDLOGLogEnumProgress:1406: Recovery progress: 7500 of ~502343 (1%) log blocks. 0s so far, ~54s left. device: t10.NVMe____WDC_WDS256G1X0C2D00ENX0__________________B10A46444A441B00:2&lt;/P&gt;&lt;/BLOCKQUOTE&gt;&lt;P&gt;Any idea what might be going on?&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Sun, 16 Sep 2018 20:17:53 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/VMware-vSAN-Discussions/Single-node-vSAN-cluster-boot-error-after-upgrade-from-6-5-to-6/m-p/1383089#M4698</guid>
      <dc:creator>wmarusiak</dc:creator>
      <dc:date>2018-09-16T20:17:53Z</dc:date>
    </item>
  </channel>
</rss>

