Questions about understanding and expanding a Red Hat disk mount
I'm not a big Linux guy, but we have a Red Hat 7 VM with two disks attached. vSphere shows one as 100GB and the other as 35GB, both thick-provisioned (lazy zeroed). However, on the shared datastore tab in vSphere, the datastore that holds both of this VM's disks shows "Provisioned Space" as 243.08GB and "Used Space" as 148.21GB. A remote application sends files to this server via a scheduled task, but the process fails because the disk holding the directory where those files land reports that it exceeds 90% capacity.
When I run a simple "df -h" command as root on the Red Hat box, I get a long list of filesystems with their sizes, space used, space available, and mount points. The filesystem mounted at /data (where that directory lives) shows a size of 32GB with 78% used (25GB used, 7.1GB available). This mount appears to be on the disk "sdb", which I learned is the 35GB disk by running "lsblk", which lists disk names, partitions, sizes, and associated mount points. There appears to be only one partition on this disk (32GB). The other disk has three partitions: 500M, 79.5GB, and 20GB.
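For anyone who wants to see exactly what I looked at, these are the two commands I ran (read-only, nothing destructive):

```shell
# Inspect filesystem usage and the block-device layout on the RHEL 7 VM
df -h     # per-filesystem size, used/available space, and mount point
lsblk     # disks, partitions, sizes, and where each one is mounted
```

It was matching the /data line in "df -h" against the sdb1 entry in "lsblk" that told me the full /data filesystem lives on the 35GB disk.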
1. Why does vSphere show 243GB provisioned and 148GB used when only two disks (100GB and 35GB) are attached, and none of the mounts shows anything at or near 100% used?
2. How do I extend the 32GB disk/partition, and how do I present/accept the new space (or a new drive) in Red Hat?
2a. What is the best approach to add space to correct this space shortage, preferably with minimal disruption?
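For question 2, the rough sequence I've pieced together from documentation looks like the following. This assumes /data sits directly on partition sdb1 with no LVM in between (which is what my lsblk output suggests, but please correct me if I've misread it), and that the disk has already been grown in vSphere first. Device names are from my system; I have not run any of this yet:

```shell
# SKETCH ONLY -- assumes /data is the sole partition (sdb1) on /dev/sdb,
# already enlarged on the vSphere side, with no LVM layer. Run as root.
echo 1 > /sys/block/sdb/device/rescan   # ask the kernel to re-read the new disk size
growpart /dev/sdb 1                     # grow partition 1 to fill the disk
                                        # (growpart comes from the cloud-utils-growpart package)
# Then grow the filesystem itself, depending on its type:
resize2fs /dev/sdb1                     # if /data is ext4
# xfs_growfs /data                      # if /data is xfs (the RHEL 7 default)
```

Is that the right general approach, and can it be done online with minimal disruption, or would adding a third disk be safer?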