ManivelR's Posts

  1) As this is a vSAN-enabled cluster with vSAN File Services, we would like to know whether we need to check anything as a prerequisite before the upgrade. Please help.
  Thanks so much Tibmeister and Anagh B for your valuable responses. I'm sorry for the late reply; I thought no one had responded, and only this week did I see both your responses. Hi Tibmeister/Anagh B, this is the VMware engineer's response: on checking, the customer had disabled ballooning in the guest operating system. Conclusion from the VMware engineer: enable VM ballooning to allow the OS to better offload unused memory to the hypervisor for load balancing resources. Thank you, Manivel RR
Thanks so much Tibmeister and everyone for your valuable responses. Hi Tibmeister/All, this is the VMware engineer's response: on checking, the customer had disabled ballooning in the guest operating system. Conclusion from the VMware engineer: enable VM ballooning to allow the OS to better offload unused memory to the hypervisor for load balancing resources. Thank you, Manivel RR
Thanks, both, for your responses. I'm still not clear on this and haven't found a clue yet.
Hi All, a detailed explanation is given here.
1) We are running a 3-node vSAN cluster.
2) Each node's physical memory is 765 GB, so 3 * 765 GB = 2295 GB of memory cluster-wide.
There is only one customer's VMs on the cluster: 18 VMs in total, summarized below:
15 VMs * 128 GB of memory = 1920 GB
2 VMs * 16 GB = 32 GB of memory
1 VM * 32 GB of memory
In total, the configured virtual memory of these 18 VMs is 1984 GB.
Internal management VMs run on the same cluster as well; there are 9 of them (VC, vSAN FS, LB, etc.), with a combined configured virtual memory of 62 GB.
Total: 1984 GB + 62 GB = 2046 GB of memory (customer VMs + internal management VMs).
vSAN itself takes some memory for cache. As per the vSAN monitor stats, each node uses 50 GB of memory, so 150 GB is being used across the 3 nodes.
Total: 1984 GB (configured) + 62 GB (configured) + 150 GB (current vSAN usage) = 2196 GB of memory.
Questions here:
1) Overall, the VMs' active memory usage is around 100 GB out of 2295 GB of cluster memory, which is just about 5% usage. Then why is the cluster usage reporting more than 90%? Note: we are using a resource pool here and there is no reservation set (on each VM's edit settings).
2) The 3 nodes report memory usage of 95%, 93% and 89%. Why are they reporting this?
3) In esxtop, memory ballooning shows as yes and it is in a high state.
4) As the customer VMs are RHEL 8, we have installed open-vm-tools, not VMware Tools.
5) We created a dedicated resource pool for this customer and enabled expandable reservation.
Does this issue happen because of open-vm-tools or the expandable reservation in the resource pool settings? Can someone please share their thoughts? I'm not able to understand it clearly.
Thanks, Manivel
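To sanity-check the accounting above, here is a minimal arithmetic sketch (plain Python, using only the numbers from this post; the last line assumes vCenter's cluster usage tracks consumed/granted memory rather than active memory, which is an assumption on my part):

# Sanity check of the cluster memory accounting described above.
cluster_physical_gb = 3 * 765                  # three nodes at 765 GB each = 2295 GB

customer_vms_gb = 15 * 128 + 2 * 16 + 1 * 32   # = 1984 GB configured
management_vms_gb = 62                         # VC, vSAN FS, LB, etc. (9 VMs)
vsan_cache_gb = 3 * 50                         # ~50 GB per node per the vSAN monitor stats

configured_gb = customer_vms_gb + management_vms_gb   # 2046 GB
with_vsan_gb = configured_gb + vsan_cache_gb          # 2196 GB

print(f"configured + vSAN overhead: {with_vsan_gb} GB of {cluster_physical_gb} GB")
print(f"usage if consumed ~= configured: {with_vsan_gb / cluster_physical_gb:.0%}")  # ~96%

So if the guests have touched most of their configured memory at some point, consumed memory alone would put the cluster in the 90%+ range even while active memory stays near 5%.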
I have a question here. Can anyone help?
Thank you.
Thanks for the response.
Hi Team, I have a 3-node vSAN cluster, and the total memory across all 3 ESXi hosts is around 2300 GB. We have allocated about 2 TB of memory to the 21 VMs running on the cluster. The active memory of all 21 VMs is just around 10%; no VMs are being actively used. Why is my cluster usage more than 90%? How should I interpret this? There is no reservation, no limit, etc. Cluster stats: (screenshot) Thanks, Raj
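If it helps with data gathering, here is a minimal pyVmomi sketch for comparing per-host consumed memory against installed memory (the vCenter address and credentials are placeholders; it assumes the pyvmomi package is installed and a lab vCenter, hence the unverified SSL context):

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder vCenter address and credentials; unverified SSL is for a lab only.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    total_gb = host.summary.hardware.memorySize / 1024**3        # installed RAM (bytes)
    used_gb = host.summary.quickStats.overallMemoryUsage / 1024  # consumed memory (MB)
    print(f"{host.name}: {used_gb:.0f} / {total_gb:.0f} GB ({used_gb / total_gb:.0%})")

Disconnect(si)

Comparing this consumed figure with the guests' active memory should show which of the two the 90% number is actually reporting.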
Thanks for your prompt response. I'm testing VM encryption storage policies in the lab. First steps:
1) Created a default Native Key Provider.
2) While creating the VM encryption policy, all datastores are visible (I have 5 shared datastores and all of them show as compatible there). Out of the 5 datastores, I need to tie this "New-VM-encryption-policy" to "ISCSI-ENCR-DATASTORE" so that the complete datastore will be encrypted. Am I right or incorrect? May I know how to do this task?
Hi All, I have a few questions. Please can someone help me in this regard.
1) Before enabling encryption, we need to set up/integrate the KMS server with the vCenter Server. After enabling vSAN Encryption at Rest or In Transit at the vSAN service level, can we utilize vSAN encryption on the complete vSAN datastore? Like a VM encryption policy, can we apply vSAN encryption only to selected vSAN policies? For example, 5 vSAN policies with encryption and 5 policies without. If that is possible, will there be any impact on the current PROD VMs (after enabling vSAN Encryption at Rest or In Transit at the vSAN service level)?
2) Can we enable vSAN encryption at the vSAN policy level (for example, a "VSAN GOLD POLICY") without enabling vSAN encryption at the core services level? It will not work, right?
3) Is the VM encryption method only for VMs, i.e. can the same VM encryption policy not be used for vSAN File Services?
Hi All, I have a question related to CPU overcommit and NUMA. Three-node cluster. Each node has 2 sockets and 20 cores per socket, so 40 physical cores per node; with hyperthreading the count is 80. Each node's physical memory is 768 GB. I created 6 VMs, each with 48 vCPUs on 1 socket and 128 GB of memory, and we placed 2 VMs on each ESXi node. All of these are high-usage VMs, and I have some confusion:
1) The ESXi hosts are under-utilized; memory utilization on all three nodes is below 20% in vCenter. However, I see ballooning on all six VMs (checked from esxtop) and I don't know why. It's strange.
2) The hosts are under-utilized; CPU utilization on all three nodes is below 20% in vCenter. However, in esxtop the wait time goes above 100%, while the ready value is less than 2%.
3) If a VM's vCPU count exceeds the physical core count (40 in our case), that will create an impact, right?
4) Coming to NUMA: with this setup I expect 2 NUMA nodes per ESXi host, one per socket with 20 cores each. If a VM's vCPUs exceed a NUMA node's size, for example 48 vCPUs, can we expect VM slowness? (See the sizing sketch after this post.)
5) What is the best practice for VM CPU topology? Should cores per socket be 1 by default, or can we go with 2?
Thank you, Manivel RR
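For what it's worth, here is a small back-of-envelope sketch of the vNUMA math for this setup (plain Python; the even split of vCPUs across virtual NUMA nodes is a simplification of what the scheduler actually does):

# Back-of-envelope CPU/NUMA sizing for the setup described above.
sockets_per_host = 2
cores_per_socket = 20
physical_cores = sockets_per_host * cores_per_socket    # 40 per host
logical_cpus = physical_cores * 2                       # 80 with hyperthreading

vcpus_per_vm = 48
vms_per_host = 2

# A 48-vCPU VM cannot fit inside one 20-core NUMA node, so it becomes a "wide"
# VM spanning both nodes: roughly 24 vCPUs per virtual NUMA node vs 20 physical cores.
print(f"vCPUs per vNUMA node: {vcpus_per_vm / sockets_per_host:.0f} "
      f"vs {cores_per_socket} physical cores per node")

# Host-level vCPU:pCore overcommit with two such VMs per host.
print(f"vCPU:pCore ratio: {vms_per_host * vcpus_per_vm / physical_cores:.1f}:1")  # 2.4:1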
Hi All, good morning. I have a question about a custom SSL certificate on vCenter Server 8.0. Please can someone provide me some insights to fix this issue. We are trying to install a custom SSL certificate purchased from GoDaddy (Standard SSL certificate) for our vCenter Server 8.0, using the method below (Snapshot 1). We created the CSR with a key size of 4096 bits (done from vCenter).
Snapshot 1: (screenshot)
Whenever we try to apply it, we get an error like "provide strong signature algorithm certificate" (screenshot).
Thanks, Manivel RR
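Not an answer, but one quick way to confirm which signature algorithm the issued certificate actually carries (a minimal sketch using the third-party cryptography package; cert.pem is a placeholder path for the GoDaddy-issued certificate, and my assumption is that the error points at a weak algorithm, e.g. SHA-1, somewhere in the chain):

# Print the signature algorithm of a PEM certificate.
from cryptography import x509

with open("cert.pem", "rb") as f:          # placeholder path to the issued certificate
    cert = x509.load_pem_x509_certificate(f.read())

print(cert.signature_algorithm_oid)        # full OID, e.g. sha256WithRSAEncryption
print(cert.signature_hash_algorithm.name)  # hash only, e.g. "sha256"

Running the same check on every certificate in the chain (leaf, intermediates, root) would show whether any of them still uses SHA-1.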
Hi All, yes, I also faced the same kind of issue, and I sorted it out in the same way for vSAN, with the command line:
vim-cmd hostsvc/advopt/update ScratchConfig.ConfiguredScratchLocation string /vmfs/volumes/f9d66946-bdd7efac/scratchlogs/ESXi-01
(f9d66946-bdd7efac is my external NFS datastore's ID; I created a folder called "scratchlogs/ESXi-01" on it.) After configuring this and rebooting the ESXi host, the error message was gone from Skyline Health (vSAN > Monitor). Thank you, Manivel RR
Ok thanks Depping.
Hi All, I have a question about NVMe namespaces for vSAN. I haven't seen any official doc from VMware regarding support. Does VMware support NVMe namespaces yet or not? Thanks, Manivel RR
Thanks depping for your feedback.
ok thanks depping.
Yes, Depping. We use Cisco switches with 100 G support.
Hi All, I'm currently evaluating performance with vSAN 8.
All-flash, 3-node cluster. Per node, 12 * 3.64 TB NVMe PCIe disks are available, and all of these disks are read-intensive. That makes 36 NVMe PCIe disks in total across the 3 nodes.
I have tested with 3 disk groups per node using the same 36 NVMe disks. I understand that we shouldn't use the 3.64 TB disks for the cache tier (there will be a performance hit); however, I used them anyway and tested the overall performance, and it's OK for now.
As I said earlier, all 36 NVMe disks are read-intensive, so I'd like to move on to testing with write-intensive disks for the cache tier. What capacity can you recommend for the cache tier: 600 GB or 1.6 TB disks?
Note: we are going to purchase new drives for the cache tier only and would like to keep testing with the same 36 drives as the capacity tier.
From the vSAN documentation:
"vSAN cache tier capacity is capped at 600GB currently. Starting with vSphere 8.0, vSAN supports higher cache tier capacities, up to 1.6TB. However, this is not enabled by default. By default, any new disk groups getting created will still use cache tier capacity of only up to the existing limit of 600GB."
Bobkin's message here is useful: https://communities.vmware.com/t5/VMware-vSAN-Discussions/Questions-about-all-flash-vSAN/m-p/2900819#M14121
1. Yes, it is necessary to use a whole, unpartitioned, All-Flash cache-tier certified device as the cache tier (advisable to validate that devices are on the vSAN HCL and certified for that purpose before purchasing anything). There is no disk group without exactly one cache-tier SSD/NVMe plus 1-7 capacity-tier devices.
2. No, a whole device is needed; using an 8 TB device for this is not a good use of resources (as the write buffer in current versions of vSAN will only actively use a maximum of 600 GB). You are better off using something smaller and faster, e.g. a 600-800 GB write-intensive NVMe over a possibly worse-performing 4-8 TB read-intensive SSD/NVMe (also bear in mind that by "worse performing" I mean like for like, e.g. an 8 TB device using only 600 GB isn't going to get the full device performance).
3. Because this is how the vSAN architecture has been designed: the intention is to use smaller, relatively faster devices as the cache tier and larger, less write-intensive devices as the capacity tier.
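To put the two candidate sizes in perspective, a rough sketch of the numbers (plain Python; assumes the default 600 GB write-buffer cap quoted from the docs above, and the 1.6 TB cap only when the vSphere 8.0 option is enabled):

# How much of each candidate cache device the vSAN write buffer can actually use.
default_cap_gb = 600    # default per-disk-group write-buffer limit
raised_cap_gb = 1600    # optional higher limit starting with vSphere 8.0

for device_gb in (600, 1600):
    pct_default = min(device_gb, default_cap_gb) / device_gb
    pct_raised = min(device_gb, raised_cap_gb) / device_gb
    print(f"{device_gb} GB device: {pct_default:.0%} usable by default, "
          f"{pct_raised:.0%} with the higher cap enabled")

# 600 GB device:  100% usable either way.
# 1600 GB device: ~38% usable at the default cap, 100% with the 8.0 limit enabled.

Which lines up with Bobkin's point: at the default cap, a smaller, faster write-intensive device gets you more of the device's performance than a large read-intensive one.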