
    ESX 4  -  /vmfs/volumes   RED flashing stores

    Reyth Lurker

      Long story short.



      Running ESX 4



      Prior to a physical migration of my current network, I was required to gather data from all physical HDDs on my network. This resulted in a complete cold shutdown of the network after hours on a Friday night, pulling all hard drives, scanning barcodes, and taking a physical inventory of all HDDs. Once complete, I powered everything back up.


      On server "X" one drive indicated a failure. Assuming it was a seating issue, and after verifying it was not in an automatic rebuild state, I shut down the server and checked and reseated all drives. Despite continuous cleaning and inspection of the servers over the past three years, a chunk of dust was on one of the connections. I cleaned the connector and everything was good (so I thought) and brought the server back up.


      Slot 3 (where the dust was) caused an issue: once powered back up, the Adaptec array was rebuilding. During the rebuild, all VMs operated just fine. Some time overnight the two VMs on server X went into the state (inaccessible), and the datastores were missing.


      Adaptec Storage Manager says "Bad Stripes". When checking the physical server through the Adaptec BIOS, all drives show as optimal, with no issues.
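If the Adaptec CLI happens to be installed in the ESX service console, the controller's own view can be cross-checked against what Storage Manager and the BIOS report. This is a sketch; `arcconf` availability and the controller number `1` are assumptions:

```shell
# Assumption: arcconf is installed and the controller is number 1.
# Show logical device state (look for Degraded/Failed or bad-stripe notes):
arcconf GETCONFIG 1 LD
# Show physical drive state as the controller firmware sees it:
arcconf GETCONFIG 1 PD
```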


      I attempted to add the datastores back using vSphere. No joy: an immediate error stated "during the configuration: unable to create Filesystems, see vmkernel log for details".
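On classic ESX 4 the vmkernel log the error points at is in the service console. A quick way to pull the relevant lines (the log path is standard for ESX 4; the grep pattern is just a suggestion):

```shell
# /var/log/vmkernel is the vmkernel log location on classic ESX 4.
# Show recent VMFS/LVM/volume-related messages around the failed mount:
tail -n 200 /var/log/vmkernel | grep -iE 'vmfs|lvm|vol'
```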


      Going to the console for server X: in the /vmfs/volumes directory the ESX datastore indicated normal (aqua in color), but the one datastore for the VMs and the two datastores for my Backup Exec drives were flashing RED.


      Help!!!! Is my data lost? Should I just rebuild my two VMs....

      I feel (I'm still learning VMware) that there is a mapping/resignature issue. My data is still there, it is just not being pointed to correctly.
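If it really is a snapshot/resignature situation, ESX/ESXi 4.x can list VMFS volumes it has detected as snapshots and either force-mount them (keeping the original UUID) or resignature them. A sketch of the relevant commands; whether your volumes actually appear here, and which option is appropriate, depends on what the vmkernel log shows:

```shell
# List VMFS volumes detected as snapshots/replicas (ESX/ESXi 4.x):
esxcfg-volume -l
# To mount a listed volume persistently WITHOUT resignaturing (keeps UUID):
#   esxcfg-volume -M <VMFS-UUID|label>
# Or to resignature it (assigns a new UUID; VMs must be re-registered):
#   esxcfg-volume -r <VMFS-UUID|label>
# Then rescan/refresh VMFS volumes:
vmkfstools -V
```

Note the flashing red entries in /vmfs/volumes are typically broken symlinks, which is consistent with the volumes no longer being mounted under their old UUIDs.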