Tibmeister's Accepted Solutions

What's worked for me for many moons is the following:

- Start small, 2 vCPU to start with
- Only ever use 1 core per socket; let vNUMA do its thing
- Turn off CPU and Memory Hot Add; it's more trouble than it's worth
- Watch performance over several days and use the 95th percentile to remove spikes
- Consider CPU usage from the hypervisor's perspective, not the guest OS perspective
- Use VMXNET3 and Paravirtual adapters when possible
- Don't be afraid of a VM running between 70% and 80% at all times; a busy VM is an efficient VM
- Watch CO-STOP, RDY, and IO-WAIT and size accordingly
- Most importantly, understand your workload!
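The first three points can be sketched in PowerCLI, in the same style as a ReconfigVM_Task call; this assumes an active Connect-VIServer session, and the VM name is hypothetical:

```powershell
# Start small, 1 core per socket, hot add off (VM must be powered off for these changes)
$vm = Get-VM "SomeVM"
$spec = New-Object -Type VMware.Vim.VirtualMachineConfigSpec -Property @{
    NumCPUs             = 2       # start small
    NumCoresPerSocket   = 1       # let vNUMA do its thing
    CpuHotAddEnabled    = $false  # hot add disabled
    MemoryHotAddEnabled = $false
}
$vm.ExtensionData.ReconfigVM_Task($spec)
```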
If you have the DRS rule set to keep the VMs separate, then when Host-A comes back online one of the VMs will be immediately moved to that host. You can also pin a particular VM to Host-A to further manage it. In this configuration, if Host-A goes down, both VMs will run on Host-B until Host-A comes back, at which time the VMs will be separated again.
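If you haven't built the separation rule yet, it can be sketched in PowerCLI like this (cluster and VM names are made up; assumes a connected session):

```powershell
# Anti-affinity rule: DRS keeps these two VMs on different hosts
$cluster = Get-Cluster "Prod-Cluster"
$vms = Get-VM "App-01", "App-02"
New-DrsRule -Cluster $cluster -Name "Separate-App-VMs" -KeepTogether $false -VM $vms
```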
This is a warning saying, "Hey, we notice you are using a VSS, so we cannot verify that every host in the target cluster has the network named 'VM Network' defined. Please check, and if you are sure it exists, go ahead." It's not an error, just an informational warning.
A content library is only accessible by VMs on the same cluster. What I've done is create my main content library on one cluster, then allow it to be published. All other clusters subscribe to the main content library, and any changes are made only to the main content library. All my clusters are vSAN backed, so plan on space being consumed on every subscribed cluster equal to the main content library's size.
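The publish/subscribe setup above can be sketched with PowerCLI's content-library cmdlets; library names, datastores, and the URL below are all illustrative (copy the real subscription URL from the published library's settings in vCenter):

```powershell
# Published main library on the primary cluster's vSAN datastore
$ds = Get-Datastore "vsanDatastore-Main"
New-ContentLibrary -Name "Main-Library" -Datastore $ds -Published

# On each other cluster, create a subscribed library pointing at the published one
$ds2 = Get-Datastore "vsanDatastore-ClusterB"
New-ContentLibrary -Name "Sub-Library" -Datastore $ds2 `
    -SubscriptionUrl "https://vcsa.example.com/cls/vcsp/lib/<library-id>/lib.json"
```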
So in the case of the disk that's > 255GB, it's getting striped into two components of equal size under a single RAID0 object. Those two components comprise the single object that is the disk, which is why you see two components under the RAID0 object, which is then mirrored under the RAID1 object. Essentially, the mirror happens at the object level, which is the RAID0 component, so both RAID0 components have to be mirrored. It boils down to this: when you have a disk that is > 255GB and your policy is RAID1 FTT=1, that disk becomes a RAID10 object, since that's the only way vSAN can effectively handle the multiple sub-components that make up the disk. Since the other disk is < 255GB, a single component comprises the object that is the disk, so there is no RAID0 component under the RAID1 component.
Every update bumps the version, so in short, there's no way to apply any update without bumping the version. Is there a specific reason you must stay on 7.0 U2a? The only thing I can think of is some third-party vendor talking directly to ESXi with a policy of "we haven't tested it, so it's not supported." Given how far back U2a is at this point, I would weigh that vendor's value against the risk they are putting your org under by blocking the security fixes and bug fixes that keep your production systems safe and secure.
Not as easily as you may think. You would have to do some querying of the performance database and then compile the data. Then you will have a dataset, but what are you wanting to build the chart with? There are modules to do this, but that's a bit of effort. In vCenter, you can create the option set and save it, but yes, you would have to do this for each VM. The easy answer is to use a solution designed for this, such as vROps, which can aggregate the data and show all the pretty charts easily for any VM object. You can also schedule reports and even just give the boss access to view them on demand. Can it be done with a fair amount of work? Sure, but is your time worth that effort, or is the effort better spent implementing something like vROps, or even SolarWinds (trying to be fair), which will not only provide these types of reports but also provide a ton of information and alerting, including proactive analysis?
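If you do want the do-it-yourself route, pulling the raw numbers with PowerCLI's Get-Stat is the usual starting point. A rough sketch (VM name and output path are made up; a connected session is assumed):

```powershell
# Pull a week of 30-minute CPU samples for one VM and dump them to CSV
$vm = Get-VM "SomeVM"
$stats = Get-Stat -Entity $vm -Stat "cpu.usage.average" `
    -Start (Get-Date).AddDays(-7) -IntervalMins 30
$stats | Select-Object Timestamp, Value |
    Export-Csv -Path "C:\temp\SomeVM-cpu.csv" -NoTypeInformation
```

From there you still need something to chart the CSV, which is exactly the effort the paragraph above is warning about.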
Linux, and Windows, will switch out the kernel as needed based on the architecture. While both AMD and Intel are x86, there are CPU features (extensions) that differ, and both OS kernels are intelligent enough to switch in and out. This is also why it's very important to check the compatibility matrix for ESXi version to CPU to guest OS to EVC mode. If you run a new OS on a version of ESXi that is not fully supported, on a brand-new CPU, there could be issues. This is part of the reason VMware began removing support for certain CPU models: to prevent these odd situations from occurring and causing problems. Microsoft even removes support for certain CPUs from time to time for the same reason, to prevent BSODs and other issues.
Like @TheBobkin mentioned, in a 3-node vSAN cluster there is no dedicated witness node like we would see in a 2-node cluster. The witness role "floats" between the hosts as needed, but when you look at the virtual objects you will see the witness piece of your components, which is just a small component and doesn't take a lot of space; it's just metadata.

With your design, two disk groups per host with four 1TB capacity disks each, at FTT=1, you should have something like 12TB total usable for the cluster. Also, I would look at the 960GB high-endurance disks for the cache; only 800GB of cache can be used per disk group, so the extra capacity is for wear of the disk. As SSDs are used and wear, the cells become unusable, which results in a reduction of capacity.

So all 3 hosts will contain data, and with FTT=1 it will be mirrored, so each object will have two copies. With FTT=1 and SW=1, and a disk size of < 255GB, you will have the object that represents your VMDK on a disk group on one node, and a copy of that component on a disk group on a different node. In this scenario, if you lost a node, any of the components would be rebuilt on the remaining two nodes from the mirrored sets. The thing to watch for is that you plan on the ability to lose a node, so plan your storage max as 66% of the total available, and the compute max also as 66% of normal. In your case, I would not provision more than 7TB of data, which is ~58% of your total, as you will need some unused space for rebuilds, snapshots, etc. This will ensure you can do maintenance on the cluster without issue, and that you can lose a host and not be in a degraded state, but continue to operate normally. Once you repair a failed node, or introduce a new node, the storage policy will continue to take effect, and some components will move to the new node as appropriate.

Quick example: a simple VM with a single 100GB VMDK. There are 3 objects that make up the VM: the VMDK itself, the VM Home (config files and such), and the VM swap object. The VM itself is executing on node2. The VMDK object has a RAID-1 tree established between node2 and node1, meaning there's a component on node2 and a copy of that component on node1, and the RAID-1 tree mirrors the writes to both components, over the network, just like normal RAID-1 would do. The witness component (remember, just metadata) is on node3. The VM Home object is the same as the VMDK object in this case. The VM swap object, on the other hand, is slightly different: the RAID-1 tree is between node2 and node3, with the witness component on node1.

So in this example, losing node1, a new component for the VMDK object and VM Home object will have to be created on node3, and the witness components will remain on node3. The witness component for the VM swap object will need to be re-created on either node2 or node3. Once I get node1 back in service, the cluster will rebalance itself to ensure that everything's spread back out again.

If you wanted instead to go with a 2-node cluster using a shared witness, remember that the shared witness CANNOT run on the 2-node cluster it is servicing, and the shared witness is best as a virtual appliance, not an expensive piece of hardware. Also, with a 2-node cluster, the 66% estimates I provided above become 50%, so you will lose 50% of your capacity instead of 33%.
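The capacity math above, as a back-of-the-envelope PowerShell sketch (numbers taken from the design being discussed; treat it as a rough planning aid only):

```powershell
# 3 hosts x 2 disk groups x 4 x 1TB capacity disks
$rawTB    = 3 * 2 * 4 * 1                    # 24 TB raw
$usableTB = $rawTB / 2                       # FTT=1 mirrors every object -> 12 TB usable
$planTB   = [math]::Round($usableTB * 0.58)  # leave slack for rebuilds, snapshots, maintenance
"{0} TB usable; provision no more than ~{1} TB" -f $usableTB, $planTB
```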
I don't have any RDMs to test with, but honestly, this sounds like a good SR for VMware. Make them earn their money.
No, the SD cards are considered non-persistent storage, and in fact, going forward SD cards are not even supported for ESXi installs. The best bet is a datastore, probably NFS, as the scratch location. If you are running blades and only have two drive slots, you will have some hard choices, because ESXi needs to be installed onto those drives, in a mirror, for redundancy, leaving nothing for vSAN.
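Pointing scratch at a datastore can be sketched with the ScratchConfig advanced setting in PowerCLI; the host name and path below are made up, and the host needs a reboot for the change to take effect:

```powershell
# Repoint the scratch location to a per-host directory on a shared datastore
$esx = Get-VMHost "esx01.example.com"
Get-AdvancedSetting -Entity $esx -Name "ScratchConfig.ConfiguredScratchLocation" |
    Set-AdvancedSetting -Value "/vmfs/volumes/nfs-datastore/.locker-esx01" -Confirm:$false
```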
So the recommendation is to use image-based patching for vLCM. When using baselines, the baseline will update at some point with new ESXi versions. Remember, vSAN is part of the vmkernel, but it is not always patched or updated with every ESXi update. Patching really doesn't change in regard to when and what: always, within reason and your own policies, keep your systems up to date. You don't mention whether you have stretched or 2-node clusters. If so, there are some added steps around the witness appliance.
No, you can't decrease the size of a disk's allocation. What you can do is convert Disk1 into a thin disk, which will only consume what it needs on the datastore; free space inside the guest OS will not be consumed on the datastore. You can also expand any other disk as desired. That's the closest you can come to moving size around between disks, but remember that Disk1 will still be able to consume all 500GB if it wants to.
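One common way to do the thin conversion is a storage vMotion with a format change. A sketch in PowerCLI (VM and datastore names are hypothetical; the VM has to move to a different datastore for the conversion to happen):

```powershell
# Storage vMotion the VM and convert its disks to thin in the process
$vm = Get-VM "SomeVM"
Move-VM -VM $vm -Datastore (Get-Datastore "OtherDatastore") -DiskStorageFormat Thin
```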
The intent is to not use SD cards or USB thumb drives as the boot drive. If you have an HDD/SSD that is connected via a USB interface, I believe that is fine; it's just the devices that can't handle the high write rates.
Yes, VMware releases a Tools Offline Bundle that can be imported into VUM and then pushed as a patch. This will update the Tools ISO on the hosts, which can then be pushed to the VMs. I also believe it may be part of the main VUM repository now; I haven't checked in a while, as I've just gotten into the habit of downloading the offline bundle.
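Once the hosts have the new Tools ISO, pushing the upgrade out to the guests can be sketched with PowerCLI's Update-Tools cmdlet (the filter below is illustrative; -NoReboot defers the guest reboot):

```powershell
# Upgrade VMware Tools on every powered-on Windows VM, deferring reboots
Get-VM |
    Where-Object { $_.PowerState -eq "PoweredOn" -and $_.Guest.OSFullName -like "*Windows*" } |
    Update-Tools -NoReboot
```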
So, there's always been discussion around this, and supposedly Server 2016 and 2019 "support" CPU hot remove, but as far as I know that is only on Hyper-V, with its dynamic allocation settings, not ESXi. Honestly, I can't see that leading to anything good, as changing your NUMA node while a kernel is running would throw everything into a hot mess in regard to memory addressing and such. I know Hyper-V does some, as they call it, magic to enable this. What they are doing is making everything one large NUMA node and goofing with the way the CPU addressing is handled between client and host. It really gives no benefit.

I have to ask, why would you want to hot-remove a CPU anyway? What would the practical purpose be? Also, you realize that even having Hot Add enabled messes with vNUMA and causes a performance hit just by being enabled? Todd Muirhead wrote a wonderful article on this and references the proper KBs from VMware (https://blogs.vmware.com/performance/2019/12/cpu-hot-add-performance-vsphere67.html). Take a gander at it. One thing I've also learned over the last two decades of using VMware is that a VM's vCPUs are not tied to a particular pCPU (unless you've pinned it, in which case Dante is waiting), so a VM can run much harder than the equivalent pCPU as long as you don't have the VM oversized and don't have the host overloaded with too many oversized VMs (Co-Stop hell, as I like to call it).

As for the direct answer to your question: they cannot list every feature that is not supported or available. The lack of documentation officially stating that CPU Hot-Remove is supported should be evidence enough that it is not. Now, if you are referring to the CpuHotRemoveEnabled property of VirtualMachineConfigSpec found in the VI SDK: if you try to run the code, it will fail. I'm pretty sure this property got added for some future use that has never come to light.
Funny that it is still in the vSphere Web Services API v7.0 (https://code.vmware.com/apis/968).

$VMSpec = New-Object -Type VMware.Vim.VirtualMachineConfigSpec -Property @{"CpuHotRemoveEnabled" = $true}
$VM = Get-VM SomeVM
$VM.ExtensionData.ReconfigVM_Task($VMSpec)
So you can use Invoke-RestMethod to call the REST API. It returns the response with the JSON already parsed into a PowerShell object (ConvertFrom-Json does the same thing for raw JSON text). On the inverse, you should be able to use ConvertTo-Json to create the body text to POST via Invoke-RestMethod. I personally haven't delved into this area yet and probably won't be able to, but I have used PowerShell a lot to do REST API interaction without difficulty, so this should be no different.
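A minimal sketch of that round trip against the vCenter REST API (the hostname is made up; -SkipCertificateCheck and -Authentication need PowerShell 6+, and the /api/session endpoint shown is the vSphere 7 style):

```powershell
# Authenticate with basic auth and get a session token
$cred = Get-Credential
$token = Invoke-RestMethod -Method Post -Uri "https://vcsa.example.com/api/session" `
    -Credential $cred -Authentication Basic -SkipCertificateCheck
$headers = @{ "vmware-api-session-id" = $token }

# GET returns already-parsed objects; ConvertTo-Json turns them back into text if needed
$vms = Invoke-RestMethod -Method Get -Uri "https://vcsa.example.com/api/vcenter/vm" `
    -Headers $headers -SkipCertificateCheck
$vms | ConvertTo-Json -Depth 5
```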