usa_kwilliamson
Enthusiast

VSAN 6.6.1 Disk Balance Issues

We purchased a 4-node all-flash vSAN ready cluster. We have enabled deduplication and compression and are using RAID-1 (no erasure coding). We are running ESXi 6.5 U1 with vSAN 6.6.1. We tested its performance with HCIBench and it seems to perform well, so we moved 50 VMs onto it to see how it would behave; so far so good. We then decided to test its redundancy by putting a host into maintenance mode and migrating the data to the other hosts. That worked OK, though it took a little while to do. When we took the host out of maintenance mode, data started to flow back to the 4th node. We have kicked off multiple rebalance jobs from the GUI and from RVC, but we continue to see the 'vSAN Disk Balance' health warning. I do have an active case with VMware, but I don't see it going anywhere, since all they are doing is kicking off more rebalance jobs. Support wanted to upgrade the cluster (they never gave a reason why), but I can't upgrade to 6.7 since several of our Enhanced Linked Mode clusters are not yet certified to run on 6.7. All hardware passes the HCL test.
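For reference, the rebalance was started from RVC with something like the following (the status output from vsan.proactive_rebalance_info is below):

/localhost/Datacenter/computers/vSAN Cluster> vsan.proactive_rebalance --start .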


/localhost/Datacenter/computers/vSAN Cluster> vsan.proactive_rebalance_info .
2018-07-30 19:31:56 +0000: Retrieving proactive rebalance information from host    servers1.XXXXXX.XXX ...
2018-07-30 19:31:56 +0000: Retrieving proactive rebalance information from host    servers2.XXXXXX.XXX ...
2018-07-30 19:31:56 +0000: Retrieving proactive rebalance information from host    servers4.XXXXXX.XXX ...
2018-07-30 19:31:56 +0000: Retrieving proactive rebalance information from host    servers3.XXXXXX.XXX ...

Proactive rebalance start: 2018-07-30 16:32:11 UTC
Proactive rebalance stop: 2018-07-31 16:32:11 UTC
Max usage difference triggering rebalancing: 25.00%
Average disk usage: 35.00%
Maximum disk usage: 50.00% (34.00% above minimum disk usage)
Imbalance index: 19.00%
Disks to be rebalanced:
+----------------------+-------------------------+----------------------------+--------------+
| DisplayName          | Host                    | Disk usage above threshold | Data to move |
+----------------------+-------------------------+----------------------------+--------------+
| mpx.vmhba0:C0:T66:L0 |    servers2.XXXXXX.XXX  | 9.00%                      | 76.2242 GB   |
| mpx.vmhba0:C0:T75:L0 |    servers2.XXXXXX.XXX  | 9.00%                      | 76.2242 GB   |
| mpx.vmhba0:C0:T77:L0 |    servers2.XXXXXX.XXX  | 9.00%                      | 76.2242 GB   |
| mpx.vmhba0:C0:T78:L0 |    servers2.XXXXXX.XXX  | 9.00%                      | 76.2242 GB   |
| mpx.vmhba0:C0:T73:L0 |    servers2.XXXXXX.XXX  | 9.00%                      | 76.2242 GB   |
| mpx.vmhba0:C0:T74:L0 |    servers2.XXXXXX.XXX  | 9.00%                      | 76.2242 GB   |
| mpx.vmhba0:C0:T67:L0 |    servers2.XXXXXX.XXX  | 9.00%                      | 76.2242 GB   |
| mpx.vmhba0:C0:T79:L0 |    servers2.XXXXXX.XXX  | 9.00%                      | 76.2242 GB   |
| mpx.vmhba0:C0:T72:L0 |    servers2.XXXXXX.XXX  | 9.00%                      | 76.2242 GB   |
| mpx.vmhba0:C0:T76:L0 |    servers2.XXXXXX.XXX  | 9.00%                      | 76.2242 GB   |
+----------------------+-------------------------+----------------------------+--------------+

/localhost/Datacenter/computers/vSAN Cluster> vsan.disks_stats .

+----------------------+-------------------------+-------+------+-----------+---------+----------+------------+----------+----------+------------+---------+----------+---------+
|                      |                         |       | Num  | Capacity  |         |          | Physical   | Physical | Physical | Logical    | Logical | Logical  | Status  |
| DisplayName          | Host                    | isSSD | Comp | Total     | Used    | Reserved | Capacity   | Used     | Reserved | Capacity   | Used    | Reserved | Health  |
+----------------------+-------------------------+-------+------+-----------+---------+----------+------------+----------+----------+------------+---------+----------+---------+
| naa.5000cca0950143c8 |     servers1.XXXXXX.XXX | SSD   | 0    | 372.61 GB | 0.00 %  | 0.00 %   | N/A        | N/A      | N/A      | N/A        | N/A     | N/A      | OK (v5) |
| mpx.vmhba0:C0:T72:L0 |     servers1.XXXXXX.XXX | MD    | 21   | 846.94 GB | 33.01 % | 4.78 %   | 4234.68 GB | 33.01 %  | 7.59 %   | 8942.50 GB | 7.86 %  | 0.45 %   | OK (v5) |
| mpx.vmhba0:C0:T79:L0 |     servers1.XXXXXX.XXX | MD    | 19   | 846.94 GB | 33.01 % | 1.80 %   | 4234.68 GB | 33.01 %  | 7.59 %   | 8942.50 GB | 6.81 %  | 0.17 %   | OK (v5) |
| mpx.vmhba0:C0:T67:L0 |     servers1.XXXXXX.XXX | MD    | 17   | 846.94 GB | 33.01 % | 9.19 %   | 4234.68 GB | 33.01 %  | 7.59 %   | 8942.50 GB | 5.19 %  | 0.87 %   | OK (v5) |
| mpx.vmhba0:C0:T74:L0 |     servers1.XXXXXX.XXX | MD    | 18   | 846.94 GB | 33.01 % | 19.92 %  | 4234.68 GB | 33.01 %  | 7.59 %   | 8942.50 GB | 12.37 % | 1.89 %   | OK (v5) |
| mpx.vmhba0:C0:T76:L0 |     servers1.XXXXXX.XXX | MD    | 17   | 846.94 GB | 33.01 % | 2.27 %   | 4234.68 GB | 33.01 %  | 7.59 %   | 8942.50 GB | 8.29 %  | 0.22 %   | OK (v5) |
+----------------------+-------------------------+-------+------+-----------+---------+----------+------------+----------+----------+------------+---------+----------+---------+
| naa.5000cca09501339c |     servers1.XXXXXX.XXX | SSD   | 0    | 372.61 GB | 0.00 %  | 0.00 %   | N/A        | N/A      | N/A      | N/A        | N/A     | N/A      | OK (v5) |
| mpx.vmhba0:C0:T78:L0 |     servers1.XXXXXX.XXX | MD    | 16   | 846.94 GB | 35.02 % | 2.75 %   | 4234.68 GB | 35.02 %  | 8.76 %   | 8942.50 GB | 7.95 %  | 0.26 %   | OK (v5) |
| mpx.vmhba0:C0:T66:L0 |     servers1.XXXXXX.XXX | MD    | 16   | 846.94 GB | 35.02 % | 23.02 %  | 4234.68 GB | 35.02 %  | 8.76 %   | 8942.50 GB | 4.48 %  | 2.18 %   | OK (v5) |
| mpx.vmhba0:C0:T77:L0 |     servers1.XXXXXX.XXX | MD    | 20   | 846.94 GB | 35.02 % | 8.95 %   | 4234.68 GB | 35.02 %  | 8.76 %   | 8942.50 GB | 13.52 % | 0.85 %   | OK (v5) |
| mpx.vmhba0:C0:T73:L0 |     servers1.XXXXXX.XXX | MD    | 20   | 846.94 GB | 35.02 % | 3.23 %   | 4234.68 GB | 35.02 %  | 8.76 %   | 8942.50 GB | 13.29 % | 0.31 %   | OK (v5) |
| mpx.vmhba0:C0:T75:L0 |     servers1.XXXXXX.XXX | MD    | 21   | 846.94 GB | 35.02 % | 5.85 %   | 4234.68 GB | 35.02 %  | 8.76 %   | 8942.50 GB | 7.20 %  | 0.55 %   | OK (v5) |
+----------------------+-------------------------+-------+------+-----------+---------+----------+------------+----------+----------+------------+---------+----------+---------+
| naa.5000cca095013358 |     servers2.XXXXXX.XXX | SSD   | 0    | 372.61 GB | 0.00 %  | 0.00 %   | N/A        | N/A      | N/A      | N/A        | N/A     | N/A      | OK (v5) |
| mpx.vmhba0:C0:T72:L0 |     servers2.XXXXXX.XXX | MD    | 17   | 846.94 GB | 49.52 % | 4.42 %   | 4234.68 GB | 49.52 %  | 20.40 %  | 8942.50 GB | 4.99 %  | 0.42 %   | OK (v5) |
| mpx.vmhba0:C0:T79:L0 |     servers2.XXXXXX.XXX | MD    | 17   | 846.94 GB | 49.52 % | 31.38 %  | 4234.68 GB | 49.52 %  | 20.40 %  | 8942.50 GB | 11.73 % | 2.97 %   | OK (v5) |
| mpx.vmhba0:C0:T67:L0 |     servers2.XXXXXX.XXX | MD    | 18   | 846.94 GB | 49.52 % | 30.66 %  | 4234.68 GB | 49.52 %  | 20.40 %  | 8942.50 GB | 7.51 %  | 2.90 %   | OK (v5) |
| mpx.vmhba0:C0:T74:L0 |     servers2.XXXXXX.XXX | MD    | 18   | 846.94 GB | 49.52 % | 33.76 %  | 4234.68 GB | 49.52 %  | 20.40 %  | 8942.50 GB | 7.32 %  | 3.20 %   | OK (v5) |
| mpx.vmhba0:C0:T76:L0 |     servers2.XXXXXX.XXX | MD    | 18   | 846.94 GB | 49.52 % | 1.80 %   | 4234.68 GB | 49.52 %  | 20.40 %  | 8942.50 GB | 8.25 %  | 0.17 %   | OK (v5) |
+----------------------+-------------------------+-------+------+-----------+---------+----------+------------+----------+----------+------------+---------+----------+---------+
| naa.5000cca0950138f0 |     servers2.XXXXXX.XXX | SSD   | 0    | 372.61 GB | 0.00 %  | 0.00 %   | N/A        | N/A      | N/A      | N/A        | N/A     | N/A      | OK (v5) |
| mpx.vmhba0:C0:T73:L0 |     servers2.XXXXXX.XXX | MD    | 18   | 846.94 GB | 50.09 % | 49.98 %  | 4234.68 GB | 50.09 %  | 27.44 %  | 8942.50 GB | 11.24 % | 4.73 %   | OK (v5) |
| mpx.vmhba0:C0:T78:L0 |     servers2.XXXXXX.XXX | MD    | 18   | 846.94 GB | 50.09 % | 30.31 %  | 4234.68 GB | 50.09 %  | 27.44 %  | 8942.50 GB | 6.36 %  | 2.87 %   | OK (v5) |
| mpx.vmhba0:C0:T77:L0 |     servers2.XXXXXX.XXX | MD    | 16   | 846.94 GB | 50.09 % | 4.18 %   | 4234.68 GB | 50.09 %  | 27.44 %  | 8942.50 GB | 6.19 %  | 0.40 %   | OK (v5) |
| mpx.vmhba0:C0:T75:L0 |     servers2.XXXXXX.XXX | MD    | 16   | 846.94 GB | 50.09 % | 29.23 %  | 4234.68 GB | 50.09 %  | 27.44 %  | 8942.50 GB | 6.82 %  | 2.77 %   | OK (v5) |
| mpx.vmhba0:C0:T66:L0 |     servers2.XXXXXX.XXX | MD    | 18   | 846.94 GB | 50.09 % | 23.50 %  | 4234.68 GB | 50.09 %  | 27.44 %  | 8942.50 GB | 6.11 %  | 2.23 %   | OK (v5) |
+----------------------+-------------------------+-------+------+-----------+---------+----------+------------+----------+----------+------------+---------+----------+---------+
| naa.5000cca095013548 |     servers3.XXXXXX.XXX | SSD   | 0    | 372.61 GB | 0.00 %  | 0.00 %   | N/A        | N/A      | N/A      | N/A        | N/A     | N/A      | OK (v5) |
| mpx.vmhba0:C0:T74:L0 |     servers3.XXXXXX.XXX | MD    | 19   | 846.94 GB | 37.06 % | 3.70 %   | 4234.68 GB | 37.06 %  | 4.51 %   | 8942.50 GB | 9.38 %  | 0.35 %   | OK (v5) |
| mpx.vmhba0:C0:T76:L0 |     servers3.XXXXXX.XXX | MD    | 18   | 846.94 GB | 37.06 % | 1.53 %   | 4234.68 GB | 37.06 %  | 4.51 %   | 8942.50 GB | 12.12 % | 0.14 %   | OK (v5) |
| mpx.vmhba0:C0:T72:L0 |     servers3.XXXXXX.XXX | MD    | 20   | 846.94 GB | 37.06 % | 12.05 %  | 4234.68 GB | 37.06 %  | 4.51 %   | 8942.50 GB | 8.71 %  | 1.14 %   | OK (v5) |
| mpx.vmhba0:C0:T79:L0 |     servers3.XXXXXX.XXX | MD    | 21   | 846.94 GB | 37.06 % | 3.23 %   | 4234.68 GB | 37.06 %  | 4.51 %   | 8942.50 GB | 14.02 % | 0.31 %   | OK (v5) |
| mpx.vmhba0:C0:T67:L0 |     servers3.XXXXXX.XXX | MD    | 19   | 846.94 GB | 37.06 % | 2.04 %   | 4234.68 GB | 37.06 %  | 4.51 %   | 8942.50 GB | 12.23 % | 0.19 %   | OK (v5) |
+----------------------+-------------------------+-------+------+-----------+---------+----------+------------+----------+----------+------------+---------+----------+---------+
| naa.5000cca095012b34 |     servers3.XXXXXX.XXX | SSD   | 0    | 372.61 GB | 0.00 %  | 0.00 %   | N/A        | N/A      | N/A      | N/A        | N/A     | N/A      | OK (v5) |
| mpx.vmhba0:C0:T78:L0 |     servers3.XXXXXX.XXX | MD    | 16   | 846.94 GB | 34.83 % | 2.99 %   | 4234.68 GB | 34.83 %  | 5.66 %   | 8942.50 GB | 9.79 %  | 0.28 %   | OK (v5) |
| mpx.vmhba0:C0:T77:L0 |     servers3.XXXXXX.XXX | MD    | 19   | 846.94 GB | 34.83 % | 4.18 %   | 4234.68 GB | 34.83 %  | 5.66 %   | 8942.50 GB | 15.02 % | 0.40 %   | OK (v5) |
| mpx.vmhba0:C0:T66:L0 |     servers3.XXXXXX.XXX | MD    | 16   | 846.94 GB | 34.83 % | 4.89 %   | 4234.68 GB | 34.83 %  | 5.66 %   | 8942.50 GB | 10.61 % | 0.46 %   | OK (v5) |
| mpx.vmhba0:C0:T75:L0 |     servers3.XXXXXX.XXX | MD    | 23   | 846.94 GB | 34.83 % | 4.66 %   | 4234.68 GB | 34.83 %  | 5.66 %   | 8942.50 GB | 8.15 %  | 0.44 %   | OK (v5) |
| mpx.vmhba0:C0:T73:L0 |     servers3.XXXXXX.XXX | MD    | 18   | 846.94 GB | 34.83 % | 11.58 %  | 4234.68 GB | 34.83 %  | 5.66 %   | 8942.50 GB | 9.88 %  | 1.10 %   | OK (v5) |
+----------------------+-------------------------+-------+------+-----------+---------+----------+------------+----------+----------+------------+---------+----------+---------+
| naa.5000cca0950142e0 |     servers4.XXXXXX.XXX | SSD   | 0    | 372.61 GB | 0.00 %  | 0.00 %   | N/A        | N/A      | N/A      | N/A        | N/A     | N/A      | OK (v5) |
| mpx.vmhba0:C0:T75:L0 |     servers4.XXXXXX.XXX | MD    | 7    | 846.94 GB | 16.42 % | 9.18 %   | 4234.68 GB | 16.42 %  | 4.03 %   | 8942.50 GB | 3.61 %  | 0.87 %   | OK (v5) |
| mpx.vmhba0:C0:T66:L0 |     servers4.XXXXXX.XXX | MD    | 11   | 846.94 GB | 16.42 % | 2.51 %   | 4234.68 GB | 16.42 %  | 4.03 %   | 8942.50 GB | 2.18 %  | 0.24 %   | OK (v5) |
| mpx.vmhba0:C0:T77:L0 |     servers4.XXXXXX.XXX | MD    | 10   | 846.94 GB | 16.42 % | 5.84 %   | 4234.68 GB | 16.42 %  | 4.03 %   | 8942.50 GB | 2.12 %  | 0.55 %   | OK (v5) |
| mpx.vmhba0:C0:T78:L0 |     servers4.XXXXXX.XXX | MD    | 6    | 846.94 GB | 16.42 % | 1.31 %   | 4234.68 GB | 16.42 %  | 4.03 %   | 8942.50 GB | 2.25 %  | 0.12 %   | OK (v5) |
| mpx.vmhba0:C0:T73:L0 |     servers4.XXXXXX.XXX | MD    | 7    | 846.94 GB | 16.42 % | 1.31 %   | 4234.68 GB | 16.42 %  | 4.03 %   | 8942.50 GB | 3.94 %  | 0.12 %   | OK (v5) |
+----------------------+-------------------------+-------+------+-----------+---------+----------+------------+----------+----------+------------+---------+----------+---------+
| naa.5000cca0950131bc |     servers4.XXXXXX.XXX | SSD   | 0    | 372.61 GB | 0.00 %  | 0.00 %   | N/A        | N/A      | N/A      | N/A        | N/A     | N/A      | OK (v5) |
| mpx.vmhba0:C0:T74:L0 |     servers4.XXXXXX.XXX | MD    | 5    | 846.94 GB | 21.18 % | 1.78 %   | 4234.68 GB | 21.18 %  | 2.97 %   | 8942.50 GB | 5.66 %  | 0.17 %   | OK (v5) |
| mpx.vmhba0:C0:T79:L0 |     servers4.XXXXXX.XXX | MD    | 2    | 846.94 GB | 21.18 % | 1.30 %   | 4234.68 GB | 21.18 %  | 2.97 %   | 8942.50 GB | 2.01 %  | 0.12 %   | OK (v5) |
| mpx.vmhba0:C0:T76:L0 |     servers4.XXXXXX.XXX | MD    | 6    | 846.94 GB | 21.18 % | 5.57 %   | 4234.68 GB | 21.18 %  | 2.97 %   | 8942.50 GB | 1.97 %  | 0.53 %   | OK (v5) |
| mpx.vmhba0:C0:T72:L0 |     servers4.XXXXXX.XXX | MD    | 9    | 846.94 GB | 21.18 % | 1.31 %   | 4234.68 GB | 21.18 %  | 2.97 %   | 8942.50 GB | 7.17 %  | 0.12 %   | OK (v5) |
| mpx.vmhba0:C0:T67:L0 |     servers4.XXXXXX.XXX | MD    | 11   | 846.94 GB | 21.18 % | 4.89 %   | 4234.68 GB | 21.18 %  | 2.97 %   | 8942.50 GB | 2.46 %  | 0.46 %   | OK (v5) |
+----------------------+-------------------------+-------+------+-----------+---------+----------+------------+----------+----------+------------+---------+----------+---------+

Any ideas what I should be looking for in the logs to find the problem?

Thanks

TheBobkin
Champion

Hello usa_kwilliamson,

The reason this will likely never manage to rebalance is the disparity between Reserved and non-Reserved 'used' space on the disks with the highest imbalance index.

You can clearly see that the disks on node 2 with the highest usage also have relatively high (proportionally) Reserved space, e.g. thick-provisioned Objects. You seem to have a high proportion of these in the cluster in general; they will not be deduplicated, so this is not a great idea either way:

| naa.5000cca095013358 |     servers2.XXXXXX.XXX | SSD   | 0    | 372.61 GB | 0.00 %  | 0.00 %   | N/A        | N/A      | N/A      | N/A        | N/A     | N/A      | OK (v5) |
| mpx.vmhba0:C0:T72:L0 |     servers2.XXXXXX.XXX | MD    | 17   | 846.94 GB | 49.52 % | 4.42 %   | 4234.68 GB | 49.52 %  | 20.40 %  | 8942.50 GB | 4.99 %  | 0.42 %   | OK (v5) |
| mpx.vmhba0:C0:T79:L0 |     servers2.XXXXXX.XXX | MD    | 17   | 846.94 GB | 49.52 % | 31.38 %  | 4234.68 GB | 49.52 %  | 20.40 %  | 8942.50 GB | 11.73 % | 2.97 %   | OK (v5) |
| mpx.vmhba0:C0:T67:L0 |     servers2.XXXXXX.XXX | MD    | 18   | 846.94 GB | 49.52 % | 30.66 %  | 4234.68 GB | 49.52 %  | 20.40 %  | 8942.50 GB | 7.51 %  | 2.90 %   | OK (v5) |
| mpx.vmhba0:C0:T74:L0 |     servers2.XXXXXX.XXX | MD    | 18   | 846.94 GB | 49.52 % | 33.76 %  | 4234.68 GB | 49.52 %  | 20.40 %  | 8942.50 GB | 7.32 %  | 3.20 %   | OK (v5) |
| mpx.vmhba0:C0:T76:L0 |     servers2.XXXXXX.XXX | MD    | 18   | 846.94 GB | 49.52 % | 1.80 %   | 4234.68 GB | 49.52 %  | 20.40 %  | 8942.50 GB | 8.25 %  | 0.17 %   | OK (v5) |
+----------------------+-------------------------+-------+------+-----------+---------+----------+------------+----------+----------+------------+---------+----------+---------+
| naa.5000cca0950138f0 |     servers2.XXXXXX.XXX | SSD   | 0    | 372.61 GB | 0.00 %  | 0.00 %   | N/A        | N/A      | N/A      | N/A        | N/A     | N/A      | OK (v5) |
| mpx.vmhba0:C0:T73:L0 |     servers2.XXXXXX.XXX | MD    | 18   | 846.94 GB | 50.09 % | 49.98 %  | 4234.68 GB | 50.09 %  | 27.44 %  | 8942.50 GB | 11.24 % | 4.73 %   | OK (v5) |
| mpx.vmhba0:C0:T78:L0 |     servers2.XXXXXX.XXX | MD    | 18   | 846.94 GB | 50.09 % | 30.31 %  | 4234.68 GB | 50.09 %  | 27.44 %  | 8942.50 GB | 6.36 %  | 2.87 %   | OK (v5) |
| mpx.vmhba0:C0:T77:L0 |     servers2.XXXXXX.XXX | MD    | 16   | 846.94 GB | 50.09 % | 4.18 %   | 4234.68 GB | 50.09 %  | 27.44 %  | 8942.50 GB | 6.19 %  | 0.40 %   | OK (v5) |
| mpx.vmhba0:C0:T75:L0 |     servers2.XXXXXX.XXX | MD    | 16   | 846.94 GB | 50.09 % | 29.23 %  | 4234.68 GB | 50.09 %  | 27.44 %  | 8942.50 GB | 6.82 %  | 2.77 %   | OK (v5) |
| mpx.vmhba0:C0:T66:L0 |     servers2.XXXXXX.XXX | MD    | 18   | 846.94 GB | 50.09 % | 23.50 %  | 4234.68 GB | 50.09 %  | 27.44 %  | 8942.50 GB | 6.11 %  | 2.23 %   | OK (v5) |
+----------------------+-------------------------+-------+------+-----------+---------+----------+------------+----------+----------+------------+---------+----------+---------+

I have often seen the rebalancer be lazy about moving Thick Objects, as the space is not really used (just reserved).

Thus I would advise determining what is Thick and why (and confirming you intended it to be so), thinning what you can, and then trying the rebalance again. By the way, via RVC you can also set a custom rebalance variance threshold and how many MB to move per hour per node; otherwise this process is intentionally performed extremely slowly.
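From RVC at the cluster path it would look something like this (the threshold and rate values here are just illustrative; option names are per the 6.6-era RVC, so check --help on your build):

/localhost/Datacenter/computers/vSAN Cluster> vsan.proactive_rebalance --start --variance-threshold 0.2 --rate-threshold 500 .

--variance-threshold is the fractional usage difference that triggers moves, --rate-threshold caps how many MB are moved per hour per node, and --stop aborts a running rebalance.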

Bob

usa_kwilliamson
Enthusiast

TheBobkin,

Thanks for pointing this out. I went back and started looking at some of the VMs. It turned out there are several appliances on that cluster that were deployed from OVF files and arrived as thick-provisioned objects. I wish the appliances that VMware delivers (such as vCenter, PSC, Log Insight) weren't delivered as thick-provisioned objects and were instead 'vSAN friendly, delivered as thin'. Do you happen to know where a good place to suggest ideas would be? Support changed the proactive rebalance job to a variance of 0.21 and a rate of 500 to see if that will help balance the cluster. But this whole thing about thick provisioning has me thinking about future use of vSAN: I have another cluster with several databases that are thick provisioned; how is it going to stay balanced if I move that to vSAN in a few years?

Thanks.

TheBobkin
Champion

Hello usa_kwilliamson,

"I wished that the appliances that VMware delivers(such as vcenter, psc, loginsight) weren't delivered as thick provisioned objects and were 'vsan friendly delivered as thin'."

Likely what is outlined in the article below is occurring, which can be resolved using PowerCLI:

https://www.virtuallyghetto.com/2016/06/heads-up-ovfova-always-deployed-as-thick-on-vsan-when-using-...
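For finding the offenders, a minimal PowerCLI sketch (the datastore and VM names here are hypothetical; the article above covers the OVF-specific fix itself):

# Hypothetical names: "vsanDatastore" and "SomeAppliance"; adjust to your environment.
# List virtual disks on the vSAN datastore that are not thin-provisioned:
Get-VM -Datastore "vsanDatastore" | Get-HardDisk |
    Where-Object { $_.StorageFormat -ne "Thin" } |
    Select-Object Parent, Name, CapacityGB, StorageFormat

# One generic way to convert: Storage vMotion to another datastore (and back,
# if needed) with an explicit thin target format:
Move-VM -VM "SomeAppliance" -Datastore "otherDatastore" -DiskStorageFormat Thin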

That, or you are using a VxRail, which comes with Thick SPs applied to these appliances.

I can't say whether the same still occurs in the HTML5 client, as I generally work on the fixing-stuff-that-people-broke side of things as opposed to deployment, but I will check.

In my opinion, any vital appliances such as vCenter and PSC should be thick-provisioned, as you do not want these to be impacted in an out-of-space situation (e.g. if someone goofs and manages to use up all the space on the cluster, like changing the only SP in use from R1 to R5 and clicking OK to 'apply now'...). Log Insight does seem to come with excessively large disks, considering some people use it heavily and others not at all, and may or may not want long-term historical logging data.

"Do you happen to know where would be a good place to suggest ideas?"

Here:

https://www.vmware.com/company/contact/contactus.html?department=prod_request

"Support changed proactive rebalance job with a variance of 0.21 and rate 500 to see if that will help balance the cluster. "

Yes, that is worth trying, as I advised previously, but figuring out whether other stuff is Thick (and not just the appliances you mentioned) and thinning those other Objects may also help here. Depending on how the data got onto the cluster, it is possible for data to have a Thin SP but still be Thick; yes, I know that may sound confusing, but generally I am more inclined to trust the RVC and cmmds-tool output for this (e.g. proportionalCapacity = 100).
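For instance, on an ESXi host something along these lines shows the object-level reservation, which is what actually matters here (a rough sketch; the grep is just to narrow the output):

# Dump DOM object metadata from CMMDS as JSON; proportionalCapacity = 100
# means the object is fully reserved (effectively Thick) regardless of
# what the vmdk format claims:
cmmds-tool find -t DOM_OBJECT -f json | grep proportionalCapacity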

"But this whole thing about thick provisioning has me thinking of future uses with vsan.  I have another cluster that has several databases that are thick provisioned but how is it going to stay balanced if I move that to vsan in a few years."

Issues with Disk Balance that aren't resolved by pressing the button in the GUI seem to occur almost solely in small clusters and/or with (relatively) small capacity disks. The maximum component size is 255 GB, so if your VMDKs are Thick and large, and your capacity drives are small enough that one of these components (or sub-components) takes up a third or a quarter of a drive, it is pretty clear why this poses a problem: moving the data elsewhere (unless you have empty disks) will just move the imbalance with it.

This is compounded by the limited options for component placement in smaller clusters while still remaining compliant with the FTT of the SP. For example, with only 3 nodes in a cluster, each set of data components for a single Object is placed on 2 nodes and the Witness on the third. So if you had a 3-node cluster with 5 TB of capacity per node and placed a 4 TB VMDK on it, you would obviously have a massive imbalance if the 4 TB for each R1 data set landed on 2 of the nodes and the 16 MB witness component on the remaining node (in reality it would likely use votes and split one data set 2 TB + 2 TB across 2 nodes with 4 TB on the third, but this would still be imbalanced until more data was put on the cluster, and I am being hyperbolic to make my point).

However, if that 4 TB VMDK would function just fine as 2 x 2 TB VMDKs, then it would of course be more easily distributed and balanced. So really it is just a case of understanding the implications of cluster size, type, and layout, and designing clusters to suit what you are going to store on them.

Bob
