Hi all, I have a setup that is now composed of 1 master node, 3 data nodes (2 added after the installation), and 2 remote collectors (one of which will soon become a master replica).
When I created the VMs I added a 250 GB disk, so that the total storage for each VM is 500 GB.
At this moment the disk space used in /storage/db differs considerably between the nodes, and the master node is filling up very fast.
Node    Filesystem           Size       Used       Avail      Use%  Mounted on
Master  /dev/mapper/data-db  463642 MB  379658 MB  60436 MB   87%   /storage/db
Data    /dev/mapper/data-db  463642 MB  109511 MB  330583 MB  25%   /storage/db
Data    /dev/mapper/data-db  463642 MB  220762 MB  219329 MB  51%   /storage/db
Data    /dev/mapper/data-db  463642 MB  144517 MB  295573 MB  33%   /storage/db
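For comparison, a quick way to collect these figures from every node in one pass might look like this, a sketch only: the four hostnames are placeholders, not my actual node names.

```shell
#!/bin/sh
# Sketch: report /storage/db usage from each cluster node over SSH.
# The hostnames below are placeholders; substitute your node FQDNs.
for node in vrops-master vrops-data1 vrops-data2 vrops-data3; do
  printf '%s: ' "$node"
  # df -m prints sizes in 1 MB blocks, matching the numbers above;
  # NR==2 picks the data line: used ($3), size ($2), use% ($5)
  ssh "$node" df -m /storage/db \
    | awk 'NR==2 {print $3" MB used of "$2" MB ("$5")"}'
done
```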
I've also moved the logs onto the /storage/db partition because the log partition was getting full, and I searched for heap dumps to delete, but there are none.
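For anyone else hunting for leftover heap dumps, the search I ran was along these lines; a sketch only, since *.hprof is just the conventional Java heap-dump extension and the size threshold is arbitrary.

```shell
# Sketch: look for leftover Java heap dumps large enough to matter.
# *.hprof is the usual heap-dump extension; adjust the size threshold
# and the starting path to taste. -xdev keeps find on one filesystem.
find /storage -xdev -type f -name '*.hprof' -size +100M -exec ls -lh {} +
```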
On the master node this is the space distribution:
2.8G activity
136K alarmxdblog
98M blob
4.1M config_backup
299G data
4.0K heapdump
42G hisxdb
361M hisxdblog
19G rollup
6.8G vpostgres
589M xdb
339M xdblog
Another Data node:
2.8G activity
356K alarmxdb
136K alarmxdblog
4.0K blob
4.1M config_backup
117G data
17M heapdump
8.0G hisxdb
28M hisxdblog
7.2G rollup
5.0G vpostgres
876K xdb
348K xdblog
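Listings like the two above can be reproduced on any node with something along these lines; the path is the default vROps data location, so adjust it if yours differs.

```shell
#!/bin/sh
# Sketch: per-directory usage under the vROps data path, largest first.
# Pass a different path as the first argument to override the default.
DB_DIR="${1:-/storage/db/vcops}"
du -sh "$DB_DIR"/* 2>/dev/null | sort -rh
```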
The /storage/db/vcops/data directory accounts for the real difference in used space between the nodes.
I've tried a disk space rebalance that ran for almost 30 hours, but this is the result I got at the end.
Is this normal, and should I just expand the /storage/db disks on all nodes, or is there something I can do to rebalance the disk space more evenly?
Thanks
Francescoo
Have you checked the sizing documents to make sure you cover the basics?
I've checked some documentation, but the strange thing is that one node is 90% full while the other three are 10 to 40% full.
That seems odd at the very least, and running a "disk rebalance" does not redistribute the used space equally across the 4 nodes.
Did you ever figure out how this works? We have a four-node cluster (a master node and three data nodes, all added before initially turning on the cluster). On ours, 190 GB is used in /storage/db on the master node (21% used), while only 15 GB is used on each of the three data nodes (2% used). Does this indicate a problem, or will the data start being distributed as the cluster fills up?
When nodes are added to your vROps cluster, a rebalance of the cluster must be completed.
Rebalance the vRealize Operations Manager Cluster
For better performance you can rebalance adapter, disk, memory, or network load across the vRealize Operations Manager cluster nodes.
Procedure: (four steps, given in the documentation linked below)
Source: vRealize Operations Manager 6.0.1 Documentation Center