VMware Cloud Community
ManuelDB
Enthusiast

vCenter 7 occupies more than 3 TB

Hello,

I have just migrated a vSAN cluster from 6.7 U3 to 7.0 U1.

Apart from a big network issue with Mellanox ConnectX-4 LX cards that I'm trying to solve by upgrading the driver/firmware (right now the cards drop to 2 Gbit/s and vSAN is super slow), I have noticed that the vCenter (which has some thick-provisioned disks, even on vSAN) has two disks of 1.4 TB each (99% free but thick), VMDK7 and VMDK13...

Filesystem                                Size  Used Avail Use% Mounted on
devtmpfs                                  5.9G     0  5.9G   0% /dev
tmpfs                                     5.9G  932K  5.9G   1% /dev/shm
tmpfs                                     5.9G  1.2M  5.9G   1% /run
tmpfs                                     5.9G     0  5.9G   0% /sys/fs/cgroup
/dev/sda3                                  46G  5.2G   39G  12% /
/dev/sda2                                 120M   27M   85M  24% /boot
/dev/mapper/lifecycle_vg-lifecycle         98G  3.6G   90G   4% /storage/lifecycle
/dev/mapper/vtsdblog_vg-vtsdblog           15G   73M   14G   1% /storage/vtsdblog
/dev/mapper/core_vg-core                   25G   45M   24G   1% /storage/core
/dev/mapper/vtsdb_vg-vtsdb                1.4T  108M  1.4T   1% /storage/vtsdb
/dev/mapper/archive_vg-archive             49G  1.2G   46G   3% /storage/archive
/dev/mapper/db_vg-db                      9.8G  232M  9.1G   3% /storage/db
/dev/mapper/updatemgr_vg-updatemgr         98G  908M   93G   1% /storage/updatemgr
/dev/mapper/netdump_vg-netdump            985M  2.5M  915M   1% /storage/netdump
/dev/mapper/imagebuilder_vg-imagebuilder  9.8G   37M  9.3G   1% /storage/imagebuilder
/dev/mapper/autodeploy_vg-autodeploy      9.8G   37M  9.3G   1% /storage/autodeploy
tmpfs                                     5.9G  7.2M  5.9G   1% /tmp
/dev/mapper/dblog_vg-dblog                 15G  105M   14G   1% /storage/dblog
/dev/mapper/seat_vg-seat                  1.4T  349M  1.4T   1% /storage/seat
/dev/mapper/log_vg-log                    9.8G  1.7G  7.6G  19% /storage/log
tmpfs                                     1.0M     0  1.0M   0% /var/spool/snmp
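
For what it's worth, a quick PowerCLI check can confirm the provisioned size and format of each VMDK. This is just a sketch; 'VMware vCenter Server' is a placeholder for the appliance's actual VM name:

# Connect first: Connect-VIServer -Server vcsa.example.local
# List every virtual disk on the VCSA with its size and provisioning format.
Get-VM -Name 'VMware vCenter Server' |
    Get-HardDisk |
    Select-Object Name, CapacityGB, StorageFormat, Filename |
    Format-Table -AutoSize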

How did that happen? And, more importantly, how can I fix it safely?

Thanks

Manuel

7 Replies
scott28tt
VMware Employee

Moderator: Thread moved to the vCenter Server area.


-------------------------------------------------------------------------------------------------------------------------------------------------------------

Although I am a VMware employee, I contribute to VMware Communities voluntarily (i.e. not in any official capacity)
VMware Training & Certification blog
nachogonzalez
Commander

It seems like you have an X-Large appliance deployed.
Storage Requirements for the vCenter Server Appliance
(If you want to confirm: it should have 24 vCPU and 56 GB RAM; see Hardware Requirements for the vCenter Server Appliance.)

That sizing is valid as far as what VMware can provide. Whether that design is right for your environment is another matter, and should be checked with the team or the people who implemented it.
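
If you prefer to confirm the sizing from PowerCLI instead of the UI, a minimal sketch (the VM name is a placeholder):

# An X-Large VCSA should show 24 vCPU / 56 GB RAM; a small one 4 vCPU / 19 GB.
Get-VM -Name 'VMware vCenter Server' |
    Select-Object Name, NumCpu, MemoryGB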

Warm regards

ManuelDB
Enthusiast

Unfortunately, I have found that my colleague used the X-Large storage size with thick disks, even though the vCenter is configured for a small environment (5 hosts).

What's the best way to change this configuration? Migration, switching to thin (using Storage vMotion), or something else?

Thanks

Manuel

nachogonzalez
Commander

I've seen many ways to increase resources on a VCSA, but not to shrink them. You might try it, but I'm not 100% sure it's going to work. (Also, if you are using vSAN there is no thick or thin as such, just storage policies and objects.)

In my humble opinion, what you might do is this:
1. Take a file-based backup of your VCSA (this can also be scripted; see the sketch after this list):

File-Based Backup and Restore of vCenter Server Appliance

2. Deploy a new instance of VCSA of the SAME version, this time sized small.
3. Restore the backup files onto the new VCSA. (Prior to the restore, shut down the original VCSA, having taken note of which host the VM is running on in case you need to power it back on.)
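
If you want to script step 1 instead of using the VAMI, the appliance management REST API can start a file-based backup. This is only a sketch against the documented /rest endpoints; the hostname, credentials, and FTP target are all placeholders, so check the API reference for your exact build before relying on it:

# -- Authenticate against the CIS REST endpoint (returns a session token) --
$vcsa  = 'vcsa.example.local'                                 # placeholder
$cred  = Get-Credential                                       # SSO administrator
$pair  = '{0}:{1}' -f $cred.UserName, $cred.GetNetworkCredential().Password
$basic = [Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes($pair))

# On PowerShell 7, add -SkipCertificateCheck if the VCSA uses a self-signed cert.
$session = Invoke-RestMethod -Method Post `
    -Uri "https://$vcsa/rest/com/vmware/cis/session" `
    -Headers @{ Authorization = "Basic $basic" }
$headers = @{ 'vmware-api-session-id' = $session.value }

# -- Start a file-based backup job to an FTP location (all values placeholders) --
$body = @{
    piece = @{
        location_type     = 'FTP'
        location          = 'ftp://backup.example.local/vcsa'
        location_user     = 'ftpuser'
        location_password = 'ftppass'
    }
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Method Post `
    -Uri "https://$vcsa/rest/appliance/recovery/backup/job" `
    -Headers $headers -ContentType 'application/json' -Body $body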

ManuelDB
Enthusiast

I know that on vSAN there are only storage policies, but the vCenter was actually configured as thick, and as you can see the two 1.4 TB disks are less than 1% full, yet on the vsanDatastore they still consume 1.4 TB each.

The default storage policy has 0% space reservation, i.e. thin, but this setting on the vCenter seems to win over the storage policy.

Also, usually when I use vCenter Converter to convert VMs that are not vSAN-native and have thick disks, I get a vSAN health warning about thick disks with a thin storage policy; but in this case, with the vCenter, I don't get that warning.
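
Out of curiosity, the SPBM view of those disks can be checked from PowerCLI; a sketch (the VM name is a placeholder) that shows the assigned policy and compliance status per disk:

# Show the storage policy and compliance status of each VCSA disk.
$vm = Get-VM -Name 'VMware vCenter Server'
Get-SpbmEntityConfiguration -HardDisk (Get-HardDisk -VM $vm) |
    Select-Object Entity, StoragePolicy, ComplianceStatus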

I will try a Storage vMotion to convert the vDisks to thin and will update the thread.
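
In PowerCLI the conversion would be something along these lines; 'OtherDatastore' is a placeholder, and since Storage vMotion typically can't convert in place on the same datastore, the VM has to be moved away (and optionally back):

# Storage vMotion the VCSA to another datastore, converting every disk to thin.
Get-VM -Name 'VMware vCenter Server' |
    Move-VM -Datastore (Get-Datastore -Name 'OtherDatastore') -DiskStorageFormat Thin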

Apart from storage size, is anything else configured differently when you choose an X-Large deployment? Log retention, for example?

ckutest
Contributor

Hi,

I have the same (or a similar) problem, but we deployed a Tiny appliance with 2 vCPUs and 12 GB RAM:

 

Filesystem                                   Size  Used Avail Use% Mounted on
devtmpfs                                     5.9G     0  5.9G   0% /dev
tmpfs                                        5.9G 1008K  5.9G   1% /dev/shm
tmpfs                                        5.9G  1.3M  5.9G   1% /run
tmpfs                                        5.9G     0  5.9G   0% /sys/fs/cgroup
/dev/mapper/vg_root_0-lv_root_0               47G   11G   35G  23% /
tmpfs                                        5.9G  6.1M  5.9G   1% /tmp
/dev/sda3                                    488M   40M  413M   9% /boot
/dev/sda2                                     10M  2.2M  7.9M  22% /boot/efi
/dev/mapper/lifecycle_vg-lifecycle            98G  3.5G   90G   4% /storage/lifecycle
/dev/mapper/imagebuilder_vg-imagebuilder     9.8G   37M  9.3G   1% /storage/imagebuilder
/dev/mapper/autodeploy_vg-autodeploy         9.8G   47M  9.3G   1% /storage/autodeploy
/dev/mapper/netdump_vg-netdump               985M  2.5M  915M   1% /storage/netdump
/dev/mapper/vtsdblog_vg-vtsdblog              15G   57M   14G   1% /storage/vtsdblog
/dev/mapper/core_vg-core                      25G   45M   24G   1% /storage/core
/dev/mapper/vtsdb_vg-vtsdb                   541G  104M  513G   1% /storage/vtsdb
/dev/mapper/vg_lvm_snapshot-lv_lvm_snapshot  492G   73M  467G   1% /storage/lvm_snapshot
/dev/mapper/updatemgr_vg-updatemgr            98G  329M   93G   1% /storage/updatemgr
/dev/mapper/log_vg-log                       9.8G  3.0G  6.3G  33% /storage/log
/dev/mapper/db_vg-db                         9.8G  1.1G  8.3G  11% /storage/db
/dev/mapper/dblog_vg-dblog                    15G   11G  3.7G  75% /storage/dblog
/dev/mapper/archive_vg-archive                49G   29G   18G  63% /storage/archive
/dev/mapper/seat_vg-seat                     541G  407M  513G   1% /storage/seat
tmpfs                                        1.0M     0  1.0M   0% /var/spool/snmp

 

As you can see, we have volumes of e.g. 541 GB and 492 GB at 1% use. During installation we could only choose the Large storage size, and I'm wondering why we couldn't choose the Default storage size (500 GB).

 

Any ideas?

Thank You

 

Ajay1988
Expert

/dev/mapper/seat_vg-seat 541G 407M 513G 1% /storage/seat
With /storage/seat at 541 GB, it's clear that you deployed with the Large storage selection.

On a Tiny deployment with the default storage size, this is what you will see:
/dev/mapper/seat_vg-seat 49G 148M 47G 1% /storage/seat

If you think your queries have been answered,
mark this response as "Correct" or "Helpful".

Regards,
AJ