So my configuration is this:
Three physical drives: one 1 TB hybrid that houses the OS VMDKs, and two 2 TB HDDs for data storage.
I want to use the two 2 TB drives purely for my Ubuntu Server 14.04.2 install.
So I've got Ubuntu running from the 1 TB drive (SYSTEM) with a 200 GB virtual disk.
Everything is fine.
I hook up fresh virtual disks backed by the two 2 TB drives to Ubuntu.
I partition both of them with fdisk, setting the partition type to fd (Linux RAID autodetect).
Then I create a RAID 1 array with mdadm.
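For reference, a sketch of that partitioning step using sfdisk (fdisk's scriptable sibling), assuming the new disks show up as /dev/sdb and /dev/sdc — check lsblk first. The function only prints the commands it would run, rather than running them:

```shell
# Hypothetical device names -- verify with lsblk before running anything.
DISKS="/dev/sdb /dev/sdc"

partition_cmds() {
    for dev in $DISKS; do
        # ",,fd" = one partition spanning the whole disk,
        # partition type fd = Linux RAID autodetect
        echo "echo ',,fd' | sfdisk $dev"
    done
}
partition_cmds
```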
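The array-creation step would look roughly like this (partition names are assumptions, and again the function just prints the commands). Persisting the array in mdadm.conf matters on Ubuntu 14.04, otherwise it may assemble as /dev/md127 after a reboot:

```shell
# Partition names /dev/sdb1 and /dev/sdc1 are assumptions.
mdadm_cmds() {
    echo "mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1"
    # persist the array so it assembles with the same name on boot
    echo "mdadm --detail --scan >> /etc/mdadm/mdadm.conf"
    echo "update-initramfs -u"
}
mdadm_cmds
```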
I can see in /proc/mdstat that the drives have begun syncing.
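The progress of that initial sync can be watched like this; the sample output in the comments is illustrative, not from this system:

```shell
show_sync_status() {
    # /proc/mdstat exists only on Linux boxes with the md driver loaded
    if [ -r /proc/mdstat ]; then
        cat /proc/mdstat
    else
        echo "no /proc/mdstat on this machine"
    fi
}
show_sync_status
# During the initial sync of a 2 TB mirror you'd see something like:
#   md0 : active raid1 sdc1[1] sdb1[0]
#         1953382336 blocks super 1.2 [2/2] [UU]
#         [>....................]  resync =  2.1% ... finish=250min
```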
I format /dev/md0 as ext4 and mount it at /media/raid1.
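Note that the order matters here: mkfs.ext4 has to run before the filesystem is mounted. A sketch of that step (printing the commands only):

```shell
fs_cmds() {
    echo "mkfs.ext4 /dev/md0"
    echo "mkdir -p /media/raid1"
    echo "mount /dev/md0 /media/raid1"
}
fs_cmds
```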
Everything is working at this point, I can read and write from the raid.
The problem is that after a couple of minutes my Ubuntu VM shuts down, and vSphere reports that there is no more space for its VMDK.
What is going wrong?
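For anyone hitting the same thing, one thing worth checking from the ESXi shell when that error appears: the initial RAID sync writes to every block of the 2 TB virtual disk, so a thin-provisioned VMDK (or a snapshot delta file) can balloon toward full size and overrun the datastore. The datastore and VM directory names below are assumptions — substitute your own:

```shell
datastore_check_cmds() {
    # df -h on ESXi lists datastore capacity and free space
    echo "df -h"
    echo "ls -lhs /vmfs/volumes/datastore1/ubuntu-server/"
    # a growing *-delta.vmdk means a snapshot is absorbing the writes
    echo "ls -lh /vmfs/volumes/datastore1/ubuntu-server/*-delta.vmdk"
}
datastore_check_cmds
```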
When you say 3 physical devices, do you mean you have 3 ESXi hosts?
Are they all using local storage?
You said you hooked up the two 2 TB drives to the Ubuntu server; are these HDDs attached to the physical hosts, with VMDKs created on the datastores and then attached to Ubuntu?
Can you attach screenshots of your setup: the guest configuration, the datastore configuration, and the error message?
> Are they all using local storage?
> You said you hooked up the two 2 TB drives to the Ubuntu server; are these HDDs attached to the physical hosts, with VMDKs created on the datastores and then attached to Ubuntu?
Yes to both of those questions. I don't know if it was related, but I tried the same method again, the only difference being that I set the disks to independent. That seems to have fixed it; it has now been running for some hours. (Presumably because independent disks are excluded from snapshots, so the full-disk RAID sync was no longer inflating a snapshot delta file on the datastore.)