
So today, my friend and I were working on this interesting case:


There was an alarm for Network Up-link redundancy lost.

Now, the funny thing about this alarm: it is triggered when any vSwitch loses its network redundancy.


Say, for example, vSwitch 1 has two NICs in an active/standby state: no alarm is triggered.

When one of the NICs goes down on this vSwitch, the "Network uplink redundancy lost" alarm is triggered.


To be notified, we configured an email notification for this alarm and set up the mail settings on vCenter as well.

All went well: we received the email stating network redundancy was lost.


Then we added the NIC back to this vSwitch, and the alert cleared. However, this time the network-uplink-restored email was not sent.


We spent a good 30 minutes checking the configuration of the alarm.


We checked the trigger conditions:

1. Lost network redundancy was set to Alert status

2. Restored uplink redundancy to portgroups was set to Normal status


Actions were set to:


1. Send Notification email.

2. Email Settings

3. All the conditions set to send an alert once.


Yet we received an email for redundancy lost, but not for redundancy restored.


So, out of curiosity, my colleague said, "Hey! Why not try setting up a similar alarm and copy these triggers over?"


Why not? Off we went with this!


1. We created a new alarm named "Network NIC redundancy lost"

2. Set it to monitor a Host

3. Set it to monitor for specific events

4. Enabled the alarm


Under Triggers:


1. Lost Network Redundancy - Alert

2. Restored uplink redundancy to portgroups - Normal

3. Lost Network Redundancy on DVPorts - Alert

4. Restored Network Redundancy to DVPorts - Normal


Under Actions


1. Configured Send a notification email and the email address

2. Set all triggers to Once

3. Disabled the pre-defined alarm that was not working
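Since the fix hinged on how triggers map to states and actions, here is a small self-contained Python sketch of the idea (the event names and the class are made up for illustration; this is not vSphere code): an event-based alarm changes state on a matching event and fires its "once" action on each state transition, not on repeats.

```python
# Toy model of an event-based alarm; event names are illustrative only.
TRIGGERS = {
    "LostNetworkRedundancyEvent": "alert",
    "RestoredNetworkRedundancyEvent": "normal",
}

class Alarm:
    def __init__(self):
        self.state = "normal"
        self.sent = []          # emails "sent" so far

    def on_event(self, event):
        new_state = TRIGGERS.get(event)
        if new_state and new_state != self.state:
            self.state = new_state
            # Action frequency "once": fire on every state *transition*,
            # but not again while the state stays the same.
            self.sent.append(f"email: redundancy {new_state}")

alarm = Alarm()
alarm.on_event("LostNetworkRedundancyEvent")      # -> alert email
alarm.on_event("LostNetworkRedundancyEvent")      # duplicate, no email
alarm.on_event("RestoredNetworkRedundancyEvent")  # -> normal email
print(alarm.sent)
```

If both emails show up in this model but only one arrived from the real alarm, the restore trigger of the pre-defined alarm is the suspect, which is exactly what we saw.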



We simulated the issue again.

This time:


We received an email notification when the redundancy was lost

And when the redundancy was restored, an email notification was sent and the alert was cleared automatically.


Something is indeed fishy with the pre-defined alarm here!




How to Perform Selective Snapshot of a VM.

Sometimes we have a VM with multiple VMDKs, and there comes a time when we have to take a snapshot of this VM but only want certain VMDKs included in it.


How do we do this?


> Power off the virtual machine

> Delete any existing snapshots before you change the disk mode (Snapshot Manager > Delete All)

> Right-click the virtual machine and select Edit Settings

> Select the VMDK you want excluded from the snapshot

> Change its mode to Independent Persistent


Here Independent mode is of two types:


Persistent:

When a VMDK is configured in Independent Persistent mode, no delta file is associated with this disk during a snapshot operation. In other words, during a snapshot operation this VMDK continues to behave as if no snapshot were being taken of the virtual machine, and all writes go directly to disk. So no delta file is created when a snapshot of the VM is taken, and all changes to the disk are preserved when the snapshot is deleted.


Non-persistent:

When a VMDK is configured in Independent Non-persistent mode, a redo log is created to capture all subsequent writes to that disk. However, when the snapshot is deleted or the virtual machine is powered off, the changes captured in that redo log are discarded.


Here we want the data on these independent VMDKs to be retained after a power-off, hence we choose Persistent mode.
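Assuming the two modes above, the per-disk snapshot rule can be sketched in a few lines of Python (a toy model, not vSphere code; the disk names and mode strings are made up):

```python
# Toy model: which disks get a delta (redo) file when a snapshot is taken.
def disks_with_delta(disks):
    """Disks in either independent mode are skipped by the snapshot;
    writes to an independent-persistent disk go straight to the base VMDK."""
    return [name for name, mode in disks
            if mode not in ("independent_persistent", "independent_nonpersistent")]

vm_disks = [
    ("os.vmdk", "dependent"),                  # normal disk: snapshotted
    ("data.vmdk", "independent_persistent"),   # excluded from snapshots
]
print(disks_with_delta(vm_disks))   # only os.vmdk gets a delta file
```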



Enjoy Snapshot'ing


Fault Tolerance is crucial in certain business-critical environments.
We do not want the VMs to have a single second of downtime when a host goes down. What do we do? We enable Fault Tolerance on them.


Now, legacy FT provided fault tolerance only for VMs with one vCPU. So if we wanted FT protection for the vCenter VM, for example, it was not possible. Why? Because the vCenter VM requires a minimum of two vCPUs.


Now, FT support and the way FT works have been redesigned in vSphere 6.0.


Here, I will provide a basic understanding of the new features of FT and how it works.



Basically, FT provides zero downtime for a VM in an HA cluster.

The basic working: Host 1 has a VM with FT enabled; this is the primary VM. A secondary VM is created on Host 2, running simultaneously and in sync with the VM on Host 1.

Should Host 1 go down, the VM on Host 2 becomes the primary, another VM is created on a new host in the cluster to become the secondary, and the process continues from there.
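The failover chain above can be sketched in a few lines of Python (a toy model, not vSphere code; host names are made up):

```python
# Toy model of the FT failover chain.
def failover(cluster_hosts, primary, secondary):
    """When the primary host dies, the secondary is promoted and a new
    secondary is spawned on another host in the cluster (if one exists)."""
    new_primary = secondary
    candidates = [h for h in cluster_hosts if h not in (new_primary, primary)]
    new_secondary = candidates[0] if candidates else None
    return new_primary, new_secondary

hosts = ["host1", "host2", "host3"]
print(failover(hosts, "host1", "host2"))   # ('host2', 'host3')
```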


The old FT (legacy FT) used a technology called 'Record and Replay' to keep the primary and the secondary VM in sync with each other. This is no longer used in vSphere 6.0; however, it still exists for VMs that were upgraded to vSphere 6.

Legacy FT supports only one vCPU.

FT in 6.0 adds support for SMP VMs. A Symmetric Multiprocessing (SMP) VM is a virtual machine with more than one vCPU, which usually improves application performance.

Some other features of the new FT: it supports up to 64 GB of memory for the VM and supports vMotion of both the primary and the secondary VM; the two can be vMotioned independently without breaking the FT link.

However, we cannot take a snapshot of an FT-protected VM, either from the vSphere Client or from the command line.


The main feature of FT 6 is that it creates a secondary copy of the VM files on a second datastore. This protects against datastore failures as well. However, sharing of files between the primary and the secondary is not supported.


What the new FT provides that the legacy FT did not:


1. EVC-compatible clusters are supported in FT 6; legacy FT could not be enabled in an EVC-enabled cluster, which meant that to run legacy FT we needed servers with identical CPUs and ESXi versions.

2. Hot enable of FT is now supported.

3. Legacy FT used only thick-provisioned eager-zeroed disks. In the new FT we can use thin disks as well.

4. We have VMDK redundancy in case of datastore failure.

5. The network requirement is 10 Gbps

6. DRS is partially supported in the new FT


FT 6 creates two complete VMs, each with its own .vmx and VMDK files on different datastores. After a failover, a full copy of all the VM files must be made again so the new secondary has an identical set of files. This can be time-consuming.


FT Split Brain. What is it?


Consider there are two hosts: Host 1 (Primary) and Host 2 (Secondary)

Primary has Primary VM and Secondary has secondary VM. And they will have constant communication over the network.

Generally, if the primary goes down, the Secondary will become the primary host and VM accordingly.


Now, let's consider a scenario where the network link between the primary and the secondary itself goes down. In this case the secondary thinks the primary host is down and elects itself as primary. Well, now we have two primary hosts and two primary VMs, and no secondary at all.


This scenario is called the split brain. We want to avoid this situation.


So, we use something called tie-breaker files. Well, well, well, what is this now?


The primary and the secondary both have access to shared storage. On this storage we place some files used to determine whether the primary host is still available. The two files are Shared.vmft and .ftgeneration.


The .ftgeneration file is the one that determines whether there is a split-brain scenario. Using this file, the secondary can communicate with the primary even when the network uplink between the two is down.
We can also share the primary .vmx and the secondary .vmx files.

However, sharing the VMDK files is not recommended, as it would not provide redundancy.
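The tie-breaker idea can be illustrated with a toy Python sketch (this is not the real .ftgeneration protocol, just the concept: when the FT link drops, both sides race to atomically claim a generation marker on shared storage, and only the winner may act as primary):

```python
# Toy tie-breaker on shared storage; file name and scheme are made up.
import os
import tempfile

def try_promote(lockdir, generation):
    """Atomically claim this generation; return True if we won the race."""
    path = os.path.join(lockdir, f"ftgeneration.{generation}")
    try:
        # O_CREAT|O_EXCL makes file creation atomic: only one side succeeds.
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.close(fd)
        return True            # we claimed the marker first: we stay primary
    except FileExistsError:
        return False           # the other side claimed it: remain secondary

shared = tempfile.mkdtemp()               # stands in for the shared datastore
primary_wins = try_promote(shared, 1)     # primary claims generation 1
secondary_wins = try_promote(shared, 1)   # secondary loses the same race
print(primary_wins, secondary_wins)
```

Because both sides go through the same shared file, only one of them can ever believe it is primary, which is exactly what prevents the split brain.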



How do the VMs communicate with each other?

1. Legacy FT


Record/Replay or Virtual Lockstep.


Here the primary VM receives some input. This input is sent to the vCPU of the primary VM, and another copy of the same input is sent to the vCPU of the secondary VM. Both execute the input at the same time and generate the same result. However, this process is much harder to run on SMP VMs, i.e., VMs with more than one vCPU, as it is hard to determine which vCPU should receive which input.
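The key property behind Record and Replay is determinism: feed both single vCPUs the same ordered input stream and they end in the same state. A toy Python illustration (not hypervisor code; the transition function is arbitrary):

```python
# Toy model of deterministic replay: same inputs, same final state.
def run(inputs, state=0):
    for x in inputs:
        state = state * 2 + x    # any deterministic transition works
    return state

stream = [1, 0, 1, 1]
primary_state = run(stream)      # executed on the primary
secondary_state = run(stream)    # replayed on the secondary
print(primary_state == secondary_state)   # True: both reached the same state
```

With multiple vCPUs this guarantee breaks down, because the interleaving of inputs across vCPUs is itself nondeterministic, which is why lockstep replay does not scale to SMP VMs.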


2. New FT - Fast Checkpointing


Here, instead of taking the input on the primary and sending a copy of that input to the secondary, the primary now takes the input, executes it, and the result is sent to the secondary VM over the uplink. These checkpoints are sent periodically between the two VMs.


When FT is first enabled, an exact copy of the primary is created on the secondary host.

This process is similar to a modified version of XVMotion: the memory and disk of the VM are copied between hosts and datastores respectively.


The primary holds each outgoing network packet until it receives an acknowledgment from the secondary VM for the previously sent network packet. This ensures that both VMs are in sync with the latest data.
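This packet-holding behaviour can be sketched as a toy model in Python (illustrative only; the class and method names are made up):

```python
# Toy model of FT output gating: the primary buffers outgoing packets and
# releases one only after the secondary has acknowledged the state behind it.
from collections import deque

class Primary:
    def __init__(self):
        self.pending = deque()   # packets waiting for the secondary's ack
        self.released = []       # packets actually put on the wire

    def send(self, packet):
        self.pending.append(packet)

    def ack_from_secondary(self):
        # The secondary confirmed it is in sync up to the state that
        # produced the oldest pending packet, so it may leave the host.
        if self.pending:
            self.released.append(self.pending.popleft())

p = Primary()
p.send("pkt-1")
p.send("pkt-2")
p.ack_from_secondary()           # only pkt-1 is released
print(p.released, list(p.pending))
```

The point of the gate: if the primary dies, no packet has reached the outside world that the secondary cannot reproduce, so clients never observe a rollback.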


As of now, we can use only one 10-Gig network.


Important notes:
FT can be enabled only within an HA-enabled cluster.

DRS does not load-balance FT machines; however, it will balance the non-FT VMs.



FT is young, FT is promising and more yet to come in the further releases.


As always, stay informed, keep learning.






Logging into vCenter and want to find something unusual? Something welcoming?


Why not tweak a little welcome message?


I discovered this while playing around in my lab environment, and thought, why not share this with you guys. Maybe it's already out there!


Just login to your vCenter.

Click the Administration Tab.

Select Edit Message of the Day. Enter your required message that you want to see when opening the client.


Now, you have a welcoming message during every login.





In my year-plus of experience at VMware, I have been asked a lot by various customers about shrinking a VMDK.

Nearly five cases a month state: I have given 1 TB of hard disk to my virtual machine and now I need to shrink it to 500 GB; how do I do this?


Well, the first thing we require is VMware Converter Standalone.
If you don't have it, well, of course, you need to download it!

VMware vCenter Converter Standalone


1. Install the Converter, either on your vCenter machine or on a remote machine.

2. Open the Converter and it prompts you to log in; if it's on the vCenter Server, you can log in as localhost, and if it's on a remote machine, with the vCenter Server's IP.

3. In the top left, click Convert machine.

4. A window pops up asking for certain details. Here:

  • Select source type: VMware Infrastructure virtual machine, of course.
  • The server IP.
  • Its username and password.

5. This connects the Converter to the vCenter, and you can see your vCenter inventory listed in this window.

6. Select the VM you want to shrink and click View source details; as the name states, the details of the source VM will be populated.

7. Specify a name for the destination virtual machine. This destination machine is the converted/shrunk machine.

8. Select the virtual machine Hardware Version and the Datastore it has to reside on.

9. Now a big window opens up with multiple options.

  • Under Data to copy, click Edit.
  • Here we see two options in the drop-down: Copy all disks creates an exact replica of the source VM, while Select volumes to copy can be used to resize the disks.
  • Select the capacity and the type of provisioning to be done, thick or thin as per your requirement (freedom of choice!).

10. Click Next and review all the changes before proceeding. Let's at least take a few seconds to read this (because we sure do not read the license agreements for anything), and upon verification click Finish.
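One sanity check worth doing before the conversion (my own rule of thumb, not part of the Converter UI): the target size must still hold the data actually used on the volume. A tiny Python sketch:

```python
# Toy pre-check before shrinking a disk; sizes in GB, names made up.
def can_shrink(provisioned_gb, used_gb, target_gb):
    """A volume can only shrink to a size that still holds its data."""
    return used_gb <= target_gb < provisioned_gb

print(can_shrink(1024, 300, 500))   # True: 300 GB of data fits in 500 GB
print(can_shrink(1024, 600, 500))   # False: the data would not fit
```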


There we go. The virtual machine is converted to a new virtual machine, but with a resized disk.


Burn those data calories away and slim down your VMDKs!



Thank you.

Suhas G Savkoor

We don't always remember everything, correct? We have a password for our Facebook, a password for our email IDs, and passwords for our ESXi hosts.

Naturally, we tend to forget the ESXi password.


Now, if we ask VMware how to reset a forgotten ESXi password, the recommended solution is to reinstall. As quoted in the Knowledge Base article, any other means might result in host failure due to the complex nature of the ESXi architecture.


Of course, this is not something for production environments where we have critical VMs running; there, the only supported solution is to reinstall. However, why not play around in our lab? Why perform a reinstall on your test ESXi hosts?


There are multiple ways to reset a forgotten password, but my favorite is using Host Profiles.

Why? It's simple! And I like a GUI.


Let's take a look at the steps to do this:


1. Login to your vCenter and Right Click the host which requires a password reset.

2. Select Host Profile and choose the Create Profile From Host option. This ESXi becomes the reference host for your profile, which can later be applied to multiple hosts (if required). Because you have forgotten the password for all of them!

3. In the Window that pops up, let's give it a name. Say ESXi_Password_Reset and a description if required.

4. Click Next > Review and Finish.


A host profile is now ready. Now, time to configure this.


1. In your vCenter Select Home > Host Profiles

2. Right click the newly created profile and select Edit profile.

3. There's a bunch of options that can be configured here, but in our case we will select the Security Configuration (passwords, duh!) and expand it. Choose the Administrator password option.

4. Under the option, 'What should the administrator password be', select Configure a fixed administrator password.

5. Enter the new password and confirm it and click OK.

6. Right click the Profile again and select the option Enable or Disable profile Configuration.

7. Here out of the big list of settings, let's choose our required option, which is? You are right! Security Configuration and Click OK.


Well, we got the configuration done. Time to apply:


1. Put the host into maintenance mode. This means powering off all the VMs on it, or migrating them to a different host.

2. Go back to the hosts and clusters view and right click the ESXi host > Host Profiles > Manage profile

3. If you have multiple host profiles, carefully select the ESXi_Password_Reset profile and click OK

4. Right Click the host, again! Host Profile > Apply profile.

5. Review the changes and there we go, Click OK and we got ourselves a new password.


Time to verify:


1. SSH to the host.

2. Log in with root and, of course, our new password.


esxcli away.


Thank you.