1 Reply Latest reply on May 26, 2016 10:00 AM by mark.j

    Freeing up disk space in VROPS cluster

    kaufmanm Novice

  I have a two-node cluster running vRealize Operations 6.2.1 for testing and am looking for a way to free up space. Each node has 200 GB of disk; node 1 is using 70 GB and node 2 is using 166 GB, which caused the most recent upgrade to fail for lack of disk space, and I had to move some old PAK files around to get it to work. I tried doing a disk rebalance through cluster management, but it had no effect. I've also lowered all the retention periods significantly in the Global Settings, but I don't know whether that will actually slow or reverse the disk space growth. Ideally I'd like to always keep 50 GB free on each node. The data isn't important and I'd be fine discarding most or all of it; I don't want to have to expand the storage for a cluster I only use to test new adapters and such.


      On the master node /storage/db/vcops/data is 27 GB and /storage/db/vcops/rollup is 13 GB.

      On the data node /storage/db/vcops/data is 103 GB and /storage/db/vcops/rollup is 42 GB.
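      To see where the space is actually going on each node before deciding what to age off, a quick check over SSH might look like the sketch below. The paths are the ones from this post; the `FSDB_ROOT` variable and the fallback to `/` are my own conveniences, not a documented vROps tool.

      ```shell
      #!/bin/sh
      # Quick FSDB usage check for a vROps node.
      # FSDB_ROOT is an assumption based on the paths above; adjust if needed.
      FSDB_ROOT="${FSDB_ROOT:-/storage/db/vcops}"

      for d in "$FSDB_ROOT/data" "$FSDB_ROOT/rollup"; do
          if [ -d "$d" ]; then
              du -sh "$d"          # total size of this FSDB directory
          else
              echo "not found: $d"
          fi
      done

      # Free space on the partition holding the FSDB (falls back to / if absent)
      df -h "$FSDB_ROOT" 2>/dev/null || df -h /
      ```

      Running this on both nodes makes it easy to compare the `data` and `rollup` growth against the free space left on the partition.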


      Is there a way to free up space on the nodes in vRealize Operations 6.2.1?

        • 1. Re: Freeing up disk space in VROPS cluster
          mark.j Master
          VMware Employees

          There isn't a way to explicitly manage the performance data (bits) in the FSDB or control which data node it resides on.


          Disk rebalance works when adding new nodes, but it won't ever make the disk distribution perfectly even.


          As for the global retention settings: lowering them will age off old data, but it won't necessarily keep this exact scenario from happening again.


          It would make sense that temp files (PAKs) threw a wrench into the works, as you've got a pretty small disk space footprint right now. The most logical answer would be to add disk space and let the retention settings age off the slices of data.


          vR Ops manages the data on the node itself, and if free space on the FSDB partitions gets too low, it'll start to drop data slices automatically. In your case, though, you simply didn't have enough space (in GB) to accomplish the upgrade, given such a small overall footprint, IMO.


          Also, note that depending on the upgrade path you took, some upgrades require more disk space than others. Upgrading to 6.2.* includes some database upgrades in the mix, so it can require more disk space than usual to succeed.