I know that it shows degraded in ESXi, but in the RAID BIOS it shows as fine, although I'm running a consistency check now just to make sure. I wasn't sure if that meant there are only 6Gb/s drives connected to the controller rather than 12Gb/s drives. I've read quite a bit about the pros and cons of running RAID 5 with SSDs versus, say, RAID 10, but since this server isn't going to be under heavy I/O loads, I figured the extra space from RAID 5 made the difference, along with the low likelihood of multiple drives failing at once.

I did notice that as I took the VMs down to reboot the server into the RAID BIOS, I got the lost-access message again. I don't know what it is about starting and stopping VMs versus transferring files; the files I transferred off the server were for one of the VMs that's causing this message, so if it were a bad block on a disk or an error in the datastore indexes, I should have run into it while copying the files off. After the consistency check finishes, assuming it finds no problem, I may try VOMA to check the logical consistency of the datastore.
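For anyone following along, this is roughly the VOMA check I'm planning to run from the ESXi shell, with all VMs on that datastore powered off first. The device name below is just a placeholder; the real one comes from the extent list:

    # find the backing device and partition number for the datastore
    esxcli storage vmfs extent list

    # run a read-only VMFS metadata consistency check against that extent
    voma -m vmfs -f check -d /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx:1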
Regarding the Intel server board BIOS, it is the latest release, published within the last 60 days. I also found that the firmware on the RAID controller doesn't exactly match any of the firmware versions listed on the driver page. I'm tempted to just install the latest driver version, assuming I don't find any other issue.
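In case it's useful, this is how I've been pulling the controller firmware and driver versions to compare against the download page. This assumes a Broadcom/LSI-based controller with the storcli VIB installed; the path and module name on your host may differ (e.g. the driver module could be megaraid_sas instead of lsi_mr3):

    # controller summary, including firmware package build and controller BIOS version
    /opt/lsi/storcli/storcli /c0 show

    # driver version ESXi is actually loading for the controller
    esxcli system module get -m lsi_mr3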