I have just migrated a vSAN cluster from 6.7U3 to 7.0U1.
Apart from a big network issue with the Mellanox ConnectX-4 LX cards, which I'm trying to solve by upgrading the driver/firmware (right now the cards run at 2 Gbit/s and vSAN is super slow), I have noticed that the vCenter appliance (which has some thick disks even on vSAN) has 2 disks of 1.4TB each (99% free but thick), VMDK7 and VMDK13...
Filesystem Size Used Avail Use% Mounted on
devtmpfs 5.9G 0 5.9G 0% /dev
tmpfs 5.9G 932K 5.9G 1% /dev/shm
tmpfs 5.9G 1.2M 5.9G 1% /run
tmpfs 5.9G 0 5.9G 0% /sys/fs/cgroup
/dev/sda3 46G 5.2G 39G 12% /
/dev/sda2 120M 27M 85M 24% /boot
/dev/mapper/lifecycle_vg-lifecycle 98G 3.6G 90G 4% /storage/lifecycle
/dev/mapper/vtsdblog_vg-vtsdblog 15G 73M 14G 1% /storage/vtsdblog
/dev/mapper/core_vg-core 25G 45M 24G 1% /storage/core
/dev/mapper/vtsdb_vg-vtsdb 1.4T 108M 1.4T 1% /storage/vtsdb
/dev/mapper/archive_vg-archive 49G 1.2G 46G 3% /storage/archive
/dev/mapper/db_vg-db 9.8G 232M 9.1G 3% /storage/db
/dev/mapper/updatemgr_vg-updatemgr 98G 908M 93G 1% /storage/updatemgr
/dev/mapper/netdump_vg-netdump 985M 2.5M 915M 1% /storage/netdump
/dev/mapper/imagebuilder_vg-imagebuilder 9.8G 37M 9.3G 1% /storage/imagebuilder
/dev/mapper/autodeploy_vg-autodeploy 9.8G 37M 9.3G 1% /storage/autodeploy
tmpfs 5.9G 7.2M 5.9G 1% /tmp
/dev/mapper/dblog_vg-dblog 15G 105M 14G 1% /storage/dblog
/dev/mapper/seat_vg-seat 1.4T 349M 1.4T 1% /storage/seat
/dev/mapper/log_vg-log 9.8G 1.7G 7.6G 19% /storage/log
tmpfs 1.0M 0 1.0M 0% /var/spool/snmp
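A quick way to spot the offenders in output like this is to filter for any filesystem provisioned at 1 TB or more but under 5% used. This is just a sketch — the sample lines are copied from the listing above; on the VCSA itself you would pipe `df -h` through the same awk filter instead:

```shell
#!/bin/sh
# Flag filesystems that are terabyte-sized ($2 ends in T) but under 5% used.
# Sample df lines are inlined here for illustration; on the appliance, replace
# the here-doc with:  df -h | awk '...'
big=$(awk '$2 ~ /T$/ && $5+0 < 5 { print $6, $2, $5 }' <<'EOF'
/dev/mapper/vtsdb_vg-vtsdb 1.4T 108M 1.4T 1% /storage/vtsdb
/dev/mapper/db_vg-db 9.8G 232M 9.1G 3% /storage/db
/dev/mapper/seat_vg-seat 1.4T 349M 1.4T 1% /storage/seat
EOF
)
echo "$big"
```

On this appliance it flags exactly the two volumes in question, /storage/vtsdb and /storage/seat.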
How did that happen? And, more importantly, how can I fix this safely?
Moderator: Thread moved to the vCenter Server area.
It seems like you have an X-Large appliance deployed.
Storage Requirements for the vCenter Server Appliance
(If you want to confirm: it should have 24 vCPUs and 56GB RAM — see Hardware Requirements for the vCenter Server Appliance.)
That's correct according to what VMware can provide. Now, I'm not sure whether that design is right for your environment, but that should be checked with the team or people who implemented it.
Unfortunately I have found that my colleague used the X-Large storage size with thick disks, but the vCenter is sized for a small environment (5 hosts).
What's the best way to change this configuration? Migration, switching to thin (using Storage vMotion), or something else?
I've seen many ways to increase resources on a VCSA, but not to take resources away from one. You might try it, but I'm not 100% sure it's going to work. (Also, if you are using vSAN there is no thick or thin as such — just storage policies and objects.)
In my humble opinion, what you might do is this:
1. Take a file-based backup of your VCSA.
2. Deploy a new VCSA instance of the SAME version, this time sized Small.
3. Restore the backup files onto the new VCSA (prior to the restore, shut down the original VCSA, taking note of which host the VM is running on in case you need to power it back on).
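For step 1, the file-based backup can also be driven through the appliance REST API instead of the VAMI UI. A sketch, with the hostname, session token, and FTP target all as placeholders you would need to adapt (the request is printed rather than sent here):

```shell
#!/bin/sh
# Sketch: trigger a VCSA file-based backup via the appliance REST API.
# Hostname, session id, and the FTP target are placeholders, not real values.
VCSA="vcsa.example.local"
SESSION="dummy-session-id"   # obtain first via POST /rest/com/vmware/cis/session
BODY='{"piece":{"location_type":"FTP","location":"ftp://backup.example.local/vcsa","location_user":"backup","location_password":"secret"}}'

# Printed rather than executed; drop the leading `echo` to actually send it.
echo curl -k -X POST "https://${VCSA}/rest/appliance/recovery/backup/job" \
  -H "vmware-api-session-id: ${SESSION}" \
  -H "Content-Type: application/json" \
  -d "${BODY}"
```

Either way (API or VAMI), verify the backup completed before you power off the original appliance.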
I know that in vSAN there are only storage policies, but the vCenter was actually deployed with thick disks, and as you can see the two 1.4TB disks are filled to less than 1%, yet on the vsanDatastore both disks occupy the full 1.4TB.
The default storage policy has 0% object space reservation, i.e. thin, but the vCenter's own thick configuration seems to win over the storage policy.
Also, when I use vCenter Converter to convert non-vSAN-native VMs with thick disks, I usually get a vSAN health warning about thick disks with a thin storage policy, but in this vCenter case I don't get that warning.
I will try a Storage vMotion to convert the virtual disks to thin and will update the conversation.
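For context on why this is worth fixing — a rough estimate, assuming the default vSAN policy of FTT=1 with RAID-1 mirroring (two replicas per object):

```shell
#!/bin/sh
# Raw vSAN capacity consumed by the two 1.4TB disks while they stay fully
# reserved (thick) under an assumed FTT=1 / RAID-1 policy (two replicas).
DISK_TB=1.4
COUNT=2       # VMDK7 and VMDK13
REPLICAS=2    # RAID-1 mirror with FTT=1
RAW_TB=$(awk -v d="$DISK_TB" -v c="$COUNT" -v r="$REPLICAS" 'BEGIN { print d*c*r }')
echo "about ${RAW_TB} TB of raw vSAN capacity reserved"
```

So two nearly empty 1.4TB disks can tie up around 5.6TB of raw capacity; converting them to thin (or to a policy with 0% space reservation) releases that reservation.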
Apart from the storage size, is there anything else configured differently when you choose an X-Large deployment? Like log retention or something else?