<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>anthonymaw Tracker</title>
    <link>https://communities.vmware.com/wbsdv95928/tracker</link>
    <description>anthonymaw Tracker</description>
    <pubDate>Wed, 15 Nov 2023 10:43:02 GMT</pubDate>
    <dc:date>2023-11-15T10:43:02Z</dc:date>
    <item>
      <title>Re: Issue with clustered RDM's and storage outages</title>
      <link>https://communities.vmware.com/t5/ESXi-Discussions/Issue-with-clustered-RDM-s-and-storage-outages/m-p/2880802#M279175</link>
      <description>&lt;P&gt;We experienced this problem (NetApp, Cisco UCS, VMware 6.x). As someone else pointed out, the solution is to change the storage multipath setting from "Round Robin" to "Most Recently Used (VMware)".&lt;/P&gt;&lt;P&gt;The issue seems to be that the Windows Failover Cluster Manager service running on each node of the cluster periodically checks disk ownership by sending a SCSI-3 "persistent reservation" command; this is part of how the storage failure detection mechanism works. Normally the owning node gets a SCSI acknowledgement. With Round Robin, however, the reservation set/check command goes out one path and the reply comes back on another, so the owning cluster node never receives a response and assumes the cluster is down. Other nodes in the cluster also send SCSI commands to check whether the LUN's persistent ownership is set and may or may not receive a response, creating a situation where none of the cluster nodes knows whether any particular node has suffered a storage access failure. It is all documented in the Microsoft Failover Clustering storage management information.&lt;/P&gt;&lt;P&gt;This seems to be an issue only in virtualized environments like VMware. In a physical multi-server Windows Failover Cluster, where Windows is installed on real servers with shared RDM disks, one would install a Windows multipath I/O driver provided by the storage vendor to solve the problem of SCSI commands going out one path and replies coming back on another.&lt;/P&gt;</description>
      <pubDate>Tue, 30 Nov 2021 23:06:04 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/ESXi-Discussions/Issue-with-clustered-RDM-s-and-storage-outages/m-p/2880802#M279175</guid>
      <dc:creator>anthonymaw</dc:creator>
      <dc:date>2021-11-30T23:06:04Z</dc:date>
    </item>
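    The multipath change described in the post above can also be applied from the ESXi shell rather than the vSphere client. A minimal sketch of the commands; the device identifier naa.xxxx is a placeholder, not a real ID, and these are host-specific configuration commands, so they must be run on every host that sees the clustered RDM LUN:

    ```shell
    # List devices with their current path selection policy (PSP)
    esxcli storage nmp device list

    # Switch the clustered RDM LUN from Round Robin to Most Recently Used.
    # "naa.xxxx" is a placeholder - substitute the actual device identifier.
    esxcli storage nmp device set --device naa.xxxx --psp VMW_PSP_MRU
    ```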
    <item>
      <title>Re: Using Snapshots on Domain Controllers safe or not - depends on situation??</title>
      <link>https://communities.vmware.com/t5/Backup-Recovery-Discussions/Using-Snapshots-on-Domain-Controllers-safe-or-not-depends-on/m-p/2575400#M16701</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Reverting a Domain Controller snapshot, in a multi-DC environment, to an earlier point in time is no different than if the server had been powered off for a while and booted up again.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;The member DC will contact its peer DCs, see that its USN is lower, and initiate a full replication sync.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Snapshots are best taken when VMware Tools triggers Windows Volume Shadow Copy Services to quiesce Active Directory database write operations.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;However, the AD ESE database engine is very robust, and restoring a "dirty" snapshot taken without VMware Tools/VSS quiescing generally causes no problems either.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;It's just like if the server suffered a power failure and stayed off for a while before being rebooted.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Reverting a snapshot should not be confused with restoring Active Directory from a backup, as you would if you accidentally deleted an object.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;The only issue is restoring a snapshot more than sixty days old, because some previously deleted AD "tombstoned" objects might reappear.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;So it's best not to revert DC snapshots more than 60 days old.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Mon, 06 Apr 2020 21:50:25 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/Backup-Recovery-Discussions/Using-Snapshots-on-Domain-Controllers-safe-or-not-depends-on/m-p/2575400#M16701</guid>
      <dc:creator>anthonymaw</dc:creator>
      <dc:date>2020-04-06T21:50:25Z</dc:date>
    </item>
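    After reverting a DC snapshot as described in the post above, the replication sync can be verified with the built-in Windows tools. A sketch, assuming the AD DS management tools are installed on the reverted DC:

    ```shell
    REM Summary of replication health across all DCs:
    repadmin /replsummary

    REM Per-partition inbound replication status for this DC:
    repadmin /showrepl

    REM Replication-specific diagnostic tests:
    dcdiag /test:replications
    ```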
    <item>
      <title>Re: VDP fix (POWERCLI): Virtual machine disks consolidation is needed - Unable to Access file since it is locked</title>
      <link>https://communities.vmware.com/t5/VMware-PowerCLI-Documents/VDP-fix-POWERCLI-Virtual-machine-disks-consolidation-is-needed/tac-p/2784442#M164</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;I have found that for those VMs that report "Disk consolidation needed" after a vSphere Data Protection appliance backup, if you check the VDP appliance hardware configuration you may see extra VM disk files attached to the appliance.&lt;/P&gt;&lt;P&gt;It looks like the VDP backup appliance takes a snapshot, attaches the target VM disk to itself *temporarily*, and then makes the backup.&lt;/P&gt;&lt;P&gt;After the backup it is supposed to release/dismount the VM disk and consolidate the snapshot, since it was backing up a running VM, but sometimes it fails to release/unmount the VM disk file, &lt;EM&gt;so the VM disk remains locked in snapshot mode&lt;/EM&gt;.&lt;/P&gt;&lt;P&gt;Attempting a manual snapshot consolidation fails because the VDP appliance holds an open file handle on the VMDK - very frustrating!&lt;/P&gt;&lt;P&gt;When I look at my own VDP backup appliance in vCenter, I have sometimes seen up to two extra VM hard disks mounted.&lt;/P&gt;&lt;P&gt;The solution is easy: just click the Remove button to dismount them.&lt;/P&gt;&lt;P&gt;I can then manually run a snapshot consolidation to clear the VM's "Disk consolidation needed" warning; it usually finishes in less than a minute.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 13 Apr 2018 16:33:30 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/VMware-PowerCLI-Documents/VDP-fix-POWERCLI-Virtual-machine-disks-consolidation-is-needed/tac-p/2784442#M164</guid>
      <dc:creator>anthonymaw</dc:creator>
      <dc:date>2018-04-13T16:33:30Z</dc:date>
    </item>
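    Since this thread lives under the PowerCLI documents, the final manual consolidation step can also be scripted. A minimal PowerCLI sketch; vcenter.example.com is a placeholder hostname, and it assumes VMware PowerCLI is installed and that any stale VDP-attached disks have already been removed as described above:

    ```shell
    # PowerCLI - connect to vCenter (placeholder hostname)
    Connect-VIServer vcenter.example.com

    # Find VMs that report "Disk consolidation needed"
    $vms = Get-VM | Where-Object { $_.ExtensionData.Runtime.ConsolidationNeeded }

    # Trigger snapshot consolidation on each flagged VM
    foreach ($vm in $vms) {
        $vm.ExtensionData.ConsolidateVMDisks()
    }
    ```

    Consolidation will still fail while the VDP appliance holds an open handle on the VMDK, so detach the stale disks from the appliance first.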
    <item>
      <title>Re: Need to consolidate disk every day</title>
      <link>https://communities.vmware.com/t5/Backup-Recovery-Discussions/Need-to-consolidate-disk-every-day/m-p/1344230#M8317</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Not sure if this is a permanent solution, but I have found that for those VMs that report "Disk consolidation needed" after a VDP appliance backup, if you check the VDP appliance hardware configuration you may see extra VM disk files attached to the appliance.&lt;/P&gt;&lt;P&gt;It looks like the VDP backup appliance takes a snapshot, attaches the target VM disk to itself temporarily, and then makes the backup.&lt;/P&gt;&lt;P&gt;After the backup it is supposed to release/dismount the VM disk and consolidate the snapshot (since it was backing up a running VM), but sometimes it fails to do so.&lt;/P&gt;&lt;P&gt;The VM disk then remains locked in snapshot mode.&lt;/P&gt;&lt;P&gt;Attempting a manual snapshot consolidation fails because the VDP appliance holds an open file handle on the VMDK.&lt;/P&gt;&lt;P&gt;When I look at my own VDP backup appliance in vCenter, I have sometimes seen up to two extra VM hard disks mounted.&lt;/P&gt;&lt;P&gt;The solution is easy: just click the Remove button to dismount them.&lt;/P&gt;&lt;P&gt;I can then successfully run a snapshot consolidation to clear the warning message.&lt;/P&gt;&lt;P&gt;The problem is intermittent for me, but when it happens the procedure described above to release the disk solves it.&lt;/P&gt;&lt;P&gt;In the attached screenshot I have seen Hard disk 8 and Hard disk 9 for some of the backup target VMs.&lt;/P&gt;&lt;P&gt;Hope this helps.&lt;/P&gt;&lt;P&gt;Anthony Maw, Vancouver, Canada&lt;span class="lia-inline-image-display-wrapper" image-alt="Capture.JPG"&gt;&lt;img src="https://communities.vmware.com/t5/image/serverpage/image-id/81054i85BF7F04FCA4E724/image-size/large?v=v2&amp;amp;px=999" role="button" title="Capture.JPG" alt="Capture.JPG" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 13 Apr 2018 16:25:42 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/Backup-Recovery-Discussions/Need-to-consolidate-disk-every-day/m-p/1344230#M8317</guid>
      <dc:creator>anthonymaw</dc:creator>
      <dc:date>2018-04-13T16:25:42Z</dc:date>
    </item>
    <item>
      <title>Re: How do I change the VDP Backup user?</title>
      <link>https://communities.vmware.com/t5/Backup-Recovery-Discussions/How-do-I-change-the-VDP-Backup-user/m-p/879033#M5061</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;A reminder that the correct port number for the VDP-configure URL is 8443.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;So the full URL is&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;A class="jive-link-external-small" href="https://" rel="nofollow"&gt;https://&lt;/A&gt;&lt;SPAN&gt;&amp;lt;VDP appliance IP address&amp;gt;:8443/vdp-configure/&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;A good choice of user account is VSPHERE.LOCAL\administrator, since it already has all the necessary rights and doesn't expire.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Tue, 29 Dec 2015 21:50:08 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/Backup-Recovery-Discussions/How-do-I-change-the-VDP-Backup-user/m-p/879033#M5061</guid>
      <dc:creator>anthonymaw</dc:creator>
      <dc:date>2015-12-29T21:50:08Z</dc:date>
    </item>
    <item>
      <title>Re: Incompatible device backing specified for device '0'</title>
      <link>https://communities.vmware.com/t5/VI-VMware-ESX-3-5-Discussions/Incompatible-device-backing-specified-for-device-0/m-p/871938#M43092</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Rebooting all the ESX hosts and hitting "Refresh" under Storage Adapters fixed the out-of-sync storage LUN and that maddening "Incompatible device backing..." error message for me.&lt;/P&gt;&lt;P&gt;vCenter just "goes stupid" sometimes: even though all the storage LUNs are presented uniformly to the ESX hosts, it still thinks there is a mismatch.&lt;/P&gt;&lt;P&gt;For some strange reason the individual ESX hosts sometimes get out of sync with their storage configuration.&lt;/P&gt;&lt;P&gt;You should only have to go to one ESX host's Configuration tab and click Rescan All...&lt;/P&gt;&lt;P&gt;All the ESX hosts that are supposed to see the shared storage volumes should automagically update when the first one is done.&lt;/P&gt;&lt;P&gt;But I have seen how setting custom names on one ESX host's configuration (instead of the default naa name) for shared storage doesn't replicate across all ESX hosts simultaneously.&lt;/P&gt;&lt;P&gt;Once you reboot all the ESX nodes they all refresh and synchronize the changes correctly.&lt;/P&gt;&lt;P&gt;Like those Help Desk guys ask: "Did you reboot your computer first?"&lt;/P&gt;&lt;P&gt;LOL&lt;/P&gt;&lt;P&gt;Anthony Maw, Vancouver, Canada&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 17 Dec 2015 23:12:21 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/VI-VMware-ESX-3-5-Discussions/Incompatible-device-backing-specified-for-device-0/m-p/871938#M43092</guid>
      <dc:creator>anthonymaw</dc:creator>
      <dc:date>2015-12-17T23:12:21Z</dc:date>
    </item>
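    The rescan step from the post above can also be pushed to all hosts at once instead of visiting each host's Configuration tab. A PowerCLI sketch, assuming an existing Connect-VIServer session:

    ```shell
    # Rescan all HBAs and VMFS volumes on every host vCenter knows about
    Get-VMHost | Get-VMHostStorage -RescanAllHba -RescanVmfs
    ```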
    <item>
      <title>Re: ESXi boots after install but Drops to BIOS next reboot</title>
      <link>https://communities.vmware.com/t5/ESXi-Discussions/ESXi-boots-after-install-but-Drops-to-BIOS-next-reboot/m-p/2189173#M210039</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Thanks, gentlemen - pressing Shift+O and adding the "formatwithmbr" parameter fixed my PC test machine's no-boot problem too. I'm guessing VMware creates a GUID partition table that is not compatible with BIOS-based boots. If that is the case, can I assume that server motherboards boot with UEFI / GUID partition tables? All the best from Vancouver, Canada!&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Wed, 25 Dec 2013 21:16:12 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/ESXi-Discussions/ESXi-boots-after-install-but-Drops-to-BIOS-next-reboot/m-p/2189173#M210039</guid>
      <dc:creator>anthonymaw</dc:creator>
      <dc:date>2013-12-25T21:16:12Z</dc:date>
    </item>
  </channel>
</rss>

