<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>MJMSRI Tracker</title>
    <link>https://communities.vmware.com/wbsdv95928/tracker</link>
    <description>MJMSRI Tracker</description>
    <pubDate>Sun, 12 Nov 2023 00:07:32 GMT</pubDate>
    <dc:date>2023-11-12T00:07:32Z</dc:date>
    <item>
      <title>VMware Cross vCenter Migration - Do all ESXi Hosts need Ent Plus or just migration hosts?</title>
      <link>https://communities.vmware.com/t5/VMware-vCenter-Discussions/VMware-Cross-vCenter-Migration-Do-all-ESXi-Hosts-need-Ent-Plus/m-p/2994930#M49741</link>
      <description>&lt;P&gt;&lt;STRONG&gt;Hi All &lt;/STRONG&gt;&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;&lt;STRONG&gt;Query – &lt;/STRONG&gt;Licencing for VMware Cross vCenter Migration.&lt;/LI&gt;&lt;LI&gt;&lt;STRONG&gt;Question -&lt;/STRONG&gt;&lt;/LI&gt;&lt;UL&gt;&lt;LI&gt;Do all ESXi hosts in the source vCenter and all ESXi hosts in the destination vCenter require Enterprise Plus licencing to hot-migrate VMs?&lt;/LI&gt;&lt;LI&gt;Or can you just licence the hosts you want to cross-vCenter migrate between, such as 1 host in the source and 1 host in the destination?&lt;/LI&gt;&lt;/UL&gt;&lt;LI&gt;&lt;STRONG&gt;&amp;nbsp;&lt;/STRONG&gt;&lt;STRONG&gt;Reference –&lt;/STRONG&gt;&lt;UL&gt;&lt;LI&gt;To vMotion powered-on virtual machines with the Advanced Cross vCenter vMotion feature, you must have a vSphere Enterprise Plus license on both the source and destination vCenter Server instances.&lt;/LI&gt;&lt;LI&gt;To migrate powered-off virtual machines with the Advanced Cross vCenter vMotion feature, you must have a vSphere Standard license.&lt;/LI&gt;&lt;/UL&gt;&lt;/LI&gt;&lt;UL&gt;&lt;LI&gt;&lt;A href="https://docs.vmware.com/en/VMware-vSphere/8.0/vsphere-vcenter-esxi-management/GUID-DAD0C40A-7F66-44CF-B6E8-43A0153ABE81.html" target="_blank" rel="noopener"&gt;https://docs.vmware.com/en/VMware-vSphere/8.0/vsphere-vcenter-esxi-management/GUID-DAD0C40A-7F66-44CF-B6E8-43A0153ABE81.html&lt;/A&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;/UL&gt;</description>
      <pubDate>Thu, 09 Nov 2023 13:24:16 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/VMware-vCenter-Discussions/VMware-Cross-vCenter-Migration-Do-all-ESXi-Hosts-need-Ent-Plus/m-p/2994930#M49741</guid>
      <dc:creator>MJMSRI</dc:creator>
      <dc:date>2023-11-09T13:24:16Z</dc:date>
    </item>
    <item>
      <title>Re: Reboot Node when rebalancing is in progress</title>
      <link>https://communities.vmware.com/t5/VMware-vSAN-Discussions/Reboot-Node-when-rebalancing-is-in-progress/m-p/2985662#M15566</link>
      <description>&lt;P&gt;Hi, the best plan would be to power off each host, fit the new disks, then repeat for all other hosts. That will result in all new disks being fitted and online in all hosts.&lt;/P&gt;&lt;P&gt;You can then select to create the new disk groups on each host at the same time, and vSAN will then be able to complete just one rebalance task, as all new DGs will be created at the same time and no further reboots are needed.&lt;/P&gt;</description>
      <pubDate>Thu, 07 Sep 2023 06:31:10 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/VMware-vSAN-Discussions/Reboot-Node-when-rebalancing-is-in-progress/m-p/2985662#M15566</guid>
      <dc:creator>MJMSRI</dc:creator>
      <dc:date>2023-09-07T06:31:10Z</dc:date>
    </item>
    <item>
      <title>Re: Updating vSAN stretched cluster vCenter Foundation</title>
      <link>https://communities.vmware.com/t5/VMware-vSAN-Discussions/Updating-VSAN-stretched-cluster-vCenter-Foundation/m-p/2985610#M15563</link>
      <description>&lt;P&gt;&lt;SPAN&gt;This was indeed an issue that has been fixed in vSphere 6.5 U2, and 6.7 U1 (see &lt;/SPAN&gt;&lt;A title="https://docs.vmware.com/en/VMware-vSphere/6.7/rn/vsphere-vcenter-server-671-release-notes.html" href="https://docs.vmware.com/en/VMware-vSphere/6.7/rn/vsphere-vcenter-server-671-release-notes.html" rel="nofollow noopener noreferrer" target="_blank"&gt;VMware vCenter Server 6.7 Update 1 Release Notes&lt;/A&gt;&lt;SPAN&gt; )&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 06 Sep 2023 21:01:47 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/VMware-vSAN-Discussions/Updating-VSAN-stretched-cluster-vCenter-Foundation/m-p/2985610#M15563</guid>
      <dc:creator>MJMSRI</dc:creator>
      <dc:date>2023-09-06T21:01:47Z</dc:date>
    </item>
    <item>
      <title>Re: Extended Switch accross multiple ESX hosts</title>
      <link>https://communities.vmware.com/t5/Networking-Members/Extended-Switch-accross-multiple-ESX-hosts/m-p/2982274#M383</link>
      <description>&lt;P&gt;Yes, all hosts can be connected to a distributed switch for central management and administration. However, you will require the Enterprise Plus licence for all hosts' CPUs to use that distributed switch (or, if you have vSAN, the distributed switch feature is included).&lt;/P&gt;</description>
      <pubDate>Tue, 15 Aug 2023 07:16:06 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/Networking-Members/Extended-Switch-accross-multiple-ESX-hosts/m-p/2982274#M383</guid>
      <dc:creator>MJMSRI</dc:creator>
      <dc:date>2023-08-15T07:16:06Z</dc:date>
    </item>
    <item>
      <title>VDS Load Balancing for LAG / LACP</title>
      <link>https://communities.vmware.com/t5/Networking-Members/VDS-Load-Balancing-for-LAG-LACP/m-p/2980930#M372</link>
      <description>&lt;P&gt;Hi all, I want to check the correct load balancing mode for the setup below.&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;1 x Distributed Switch with 2 x 25Gb uplinks&lt;/LI&gt;&lt;LI&gt;Uplinks connected into 2 Cisco Catalyst switches&lt;/LI&gt;&lt;LI&gt;LACP set up on the Cisco Catalyst switches for both interfaces on each host in Active mode.&lt;/LI&gt;&lt;LI&gt;Distributed switch set up with LAG and both 25Gb uplinks added into the LAG on the VDS&lt;/LI&gt;&lt;LI&gt;LAG set as Active uplink on the VDS Port Groups. The 2 uplinks set as unused.&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;I have found the load balancing currently set as “use explicit failover order”.&lt;/P&gt;&lt;P&gt;I believe the load balancing above is wrong and should be set as “&lt;EM&gt;&lt;STRONG&gt;Route based on&lt;/STRONG&gt;&lt;/EM&gt;&amp;nbsp;&lt;EM&gt;&lt;STRONG&gt;IP Hash”&lt;/STRONG&gt;&lt;/EM&gt;, is this correct?&lt;/P&gt;&lt;P&gt;Looking at this article:&amp;nbsp;&lt;A href="https://kb.vmware.com/s/article/1004048" target="_blank" rel="noopener"&gt;https://kb.vmware.com/s/article/1004048&lt;/A&gt; it simply details: “&lt;SPAN&gt;From the Load Balancing dropdown, &lt;/SPAN&gt;&lt;EM&gt;&lt;STRONG&gt;select the correct load balancing policy. This will be determined by the physical switch. Refer to the physical switch vendor&lt;/STRONG&gt;&lt;/EM&gt;&lt;SPAN&gt; if there are questions on which load balancing algorithm should be used.”&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Sat, 05 Aug 2023 10:29:51 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/Networking-Members/VDS-Load-Balancing-for-LAG-LACP/m-p/2980930#M372</guid>
      <dc:creator>MJMSRI</dc:creator>
      <dc:date>2023-08-05T10:29:51Z</dc:date>
    </item>
    <item>
      <title>Re: VXLAN without NSX</title>
      <link>https://communities.vmware.com/t5/Networking-Members/VXLAN-without-NSX/m-p/2980928#M371</link>
      <description>&lt;P&gt;Ok, thanks. So in summary, if you want to have VXLAN down to the ESXi hosts and VMs, NSX is required? So VXLAN down to hosts and VMs is impossible without NSX?&lt;/P&gt;</description>
      <pubDate>Sat, 05 Aug 2023 10:16:48 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/Networking-Members/VXLAN-without-NSX/m-p/2980928#M371</guid>
      <dc:creator>MJMSRI</dc:creator>
      <dc:date>2023-08-05T10:16:48Z</dc:date>
    </item>
    <item>
      <title>Re: VXLAN without NSX</title>
      <link>https://communities.vmware.com/t5/Networking-Members/VXLAN-without-NSX/m-p/2980295#M363</link>
      <description>&lt;P&gt;Hi, yes the solution in place is as you describe:&amp;nbsp;&lt;SPAN&gt;VM -&amp;gt; ESXi -&amp;gt; TOR switch (here occurs the encapsulation) -&amp;gt; internet -&amp;gt; TOR switch (de-encapsulation occurs) -&amp;gt; ESXi -&amp;gt; VM&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;So the above is the typical VXLAN deployment and straightforward.&lt;/P&gt;&lt;P&gt;If you wanted to extend the above so that VXLAN was also within the DC and down to the hosts, how would that be achieved without NSX?&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 01 Aug 2023 19:45:32 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/Networking-Members/VXLAN-without-NSX/m-p/2980295#M363</guid>
      <dc:creator>MJMSRI</dc:creator>
      <dc:date>2023-08-01T19:45:32Z</dc:date>
    </item>
    <item>
      <title>Re: VXLAN without NSX</title>
      <link>https://communities.vmware.com/t5/Networking-Members/VXLAN-without-NSX/m-p/2980185#M359</link>
      <description>&lt;P&gt;Hi, thanks for the replies.&lt;/P&gt;&lt;P&gt;So what is in place is VXLAN EVPN between 2 DCs.&lt;/P&gt;&lt;P&gt;So to confirm, there are two types of VXLAN implementation.&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;The first is just between DCs as the overlay, and therefore managed at switch level in each DC?&lt;/LI&gt;&lt;LI&gt;Then secondly there is a more advanced option where there is the above VXLAN between DCs and also VXLAN within the DCs?&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;With the first option above, if there was a suspected VXLAN issue, then it would be on the switches, multicast on the switches etc.? As there is nothing on ESXi related to the config of VXLAN apart from the vDS and port groups with VLANs?&lt;/P&gt;&lt;P&gt;If you went for the second option above with VXLAN within the DC, is there a way to do that with VXLAN only, or would you need NSX for this?&lt;/P&gt;&lt;P&gt;Thanks&lt;/P&gt;</description>
      <pubDate>Tue, 01 Aug 2023 08:17:06 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/Networking-Members/VXLAN-without-NSX/m-p/2980185#M359</guid>
      <dc:creator>MJMSRI</dc:creator>
      <dc:date>2023-08-01T08:17:06Z</dc:date>
    </item>
    <item>
      <title>VXLAN without NSX</title>
      <link>https://communities.vmware.com/t5/Networking-Members/VXLAN-without-NSX/m-p/2979745#M353</link>
      <description>&lt;P&gt;Hi All,&lt;/P&gt;&lt;P&gt;My question is about VXLAN in a VMware vMSC and Cisco Catalyst solution across 2 datacenters with an ISL between them.&lt;/P&gt;&lt;P&gt;If you do not have NSX, what configuration is needed on the ESXi hosts? And on the distributed switch?&lt;/P&gt;&lt;P&gt;Is it just that the distributed switch can have IGMP snooping enabled, and then all other VXLAN configuration is ‘Hardware VXLAN’ on the physical switches?&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks&lt;/P&gt;</description>
      <pubDate>Fri, 28 Jul 2023 21:04:54 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/Networking-Members/VXLAN-without-NSX/m-p/2979745#M353</guid>
      <dc:creator>MJMSRI</dc:creator>
      <dc:date>2023-07-28T21:04:54Z</dc:date>
    </item>
    <item>
      <title>vSAN 7.0.3 Stretched Cluster - Simulate entire 1 Data site failure</title>
      <link>https://communities.vmware.com/t5/VMware-vSAN-Discussions/vSAN-7-0-3-Stretched-Cluster-Simulate-entire-1-Data-site-failure/m-p/2934598#M14608</link>
      <description>&lt;P&gt;Hi All,&amp;nbsp;&lt;/P&gt;&lt;P&gt;We have a 3+3+1 vSAN Stretched Cluster, so 2 data sites each with 3 Hosts and a witness site.&amp;nbsp;&lt;/P&gt;&lt;P&gt;Only vSphere Standard is in place, so vSphere HA is enabled. However, there is no DRS, as there is no licence for it.&amp;nbsp;&lt;/P&gt;&lt;P&gt;We want to simulate a data site failure; what would be the best way to do this so we can effectively complete a 'Test DR Site Failover'?&lt;/P&gt;&lt;P&gt;Would it be to log in to the hosts' iLO and simply power off the HPE Hosts? Or is there a built-in script to initiate the vSphere HA workflow so it would gracefully shut down the VMs in the data site, then power them on in the remaining data site?&lt;/P&gt;&lt;P&gt;Thanks,&lt;/P&gt;</description>
      <pubDate>Thu, 20 Oct 2022 15:50:26 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/VMware-vSAN-Discussions/vSAN-7-0-3-Stretched-Cluster-Simulate-entire-1-Data-site-failure/m-p/2934598#M14608</guid>
      <dc:creator>MJMSRI</dc:creator>
      <dc:date>2022-10-20T15:50:26Z</dc:date>
    </item>
    <item>
      <title>ESXi / vSAN Command to show either disk speed such as 12gbps or the disk type such as SATA</title>
      <link>https://communities.vmware.com/t5/VMware-vSAN-Discussions/ESXi-vSAN-Command-to-show-either-disk-speed-such-as-12gbps-or/m-p/2929944#M14553</link>
      <description>&lt;P&gt;Hello, we have a 19-Node vSAN Cluster with 433 Disks.&amp;nbsp;&lt;/P&gt;&lt;P&gt;I want to ensure that all disks are SSD SAS disks running at 12 Gbps and that there are not any SATA disks running at 6 Gbps.&amp;nbsp;&lt;/P&gt;&lt;P&gt;Is there a command to show the disk speed for all of these 433 disks?&lt;/P&gt;&lt;P&gt;Thanks&lt;/P&gt;</description>
      <pubDate>Wed, 21 Sep 2022 16:00:39 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/VMware-vSAN-Discussions/ESXi-vSAN-Command-to-show-either-disk-speed-such-as-12gbps-or/m-p/2929944#M14553</guid>
      <dc:creator>MJMSRI</dc:creator>
      <dc:date>2022-09-21T16:00:39Z</dc:date>
    </item>
    <item>
      <title>Re: Expanding disk group vSAN</title>
      <link>https://communities.vmware.com/t5/VMware-vSAN-Discussions/Expanding-disk-group-vSAN/m-p/2926722#M14513</link>
      <description>&lt;P&gt;Yes, I agree that a second disk group per host would be the best option to increase performance and redundancy. You can only have 1 cache disk per disk group, so your option suggesting 2 cache disks per disk group would not work.&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 31 Aug 2022 16:13:05 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/VMware-vSAN-Discussions/Expanding-disk-group-vSAN/m-p/2926722#M14513</guid>
      <dc:creator>MJMSRI</dc:creator>
      <dc:date>2022-08-31T16:13:05Z</dc:date>
    </item>
    <item>
      <title>Re: Remove server from vSAN</title>
      <link>https://communities.vmware.com/t5/VMware-vSAN-Discussions/Remove-server-from-vSAN/m-p/2926393#M14505</link>
      <description>&lt;P&gt;&lt;U&gt;Hi&amp;nbsp;&lt;a href="https://communities.vmware.com/t5/user/viewprofilepage/user-id/5574258"&gt;@JohnB52&lt;/a&gt;&amp;nbsp; have you tried erasing the disk partitions?&amp;nbsp;&lt;A href="https://williamlam.com/2015/09/erasing-existing-disk-partitions-now-available-in-the-vsphere-web-client-vsphere-6-0-update-1.html" target="_blank"&gt;https://williamlam.com/2015/09/erasing-existing-disk-partitions-now-available-in-the-vsphere-web-client-vsphere-6-0-update-1.html&lt;/A&gt;&amp;nbsp;&lt;/U&gt;&lt;/P&gt;</description>
      <pubDate>Tue, 30 Aug 2022 09:09:24 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/VMware-vSAN-Discussions/Remove-server-from-vSAN/m-p/2926393#M14505</guid>
      <dc:creator>MJMSRI</dc:creator>
      <dc:date>2022-08-30T09:09:24Z</dc:date>
    </item>
    <item>
      <title>vSAN 2-Node Direct Connect move from one vCenter to a new vCenter</title>
      <link>https://communities.vmware.com/t5/VMware-vSAN-Discussions/vSAN-2-Node-Direct-Connect-move-from-one-vCenter-to-a-new/m-p/2922939#M14465</link>
      <description>&lt;P&gt;Hi All,&amp;nbsp;&lt;/P&gt;&lt;P&gt;We have a 2-Node Direct Connect vSAN Cluster managed by a vCenter. We have a new vCenter and want to move this cluster from the old vCenter to the new vCenter. Would this article cover the steps needed to achieve this?&lt;/P&gt;&lt;P&gt;&lt;A href="https://kb.vmware.com/s/article/2151610" target="_blank"&gt;https://kb.vmware.com/s/article/2151610&lt;/A&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The current vCenter has 2 vSAN Clusters&amp;nbsp;&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;vSAN 7.0.2 Cluster 7-Node. This contains the vSAN Witness VM for the 2-Node Cluster below.&lt;/LI&gt;&lt;LI&gt;vSAN 7.0.2 Cluster 2-Node direct connect.&lt;/LI&gt;&lt;/UL&gt;&lt;P&gt;So plan is:&lt;/P&gt;&lt;OL&gt;&lt;LI&gt;Follow the above KB article to move the 7-Node vSAN to the new vCenter that contains the vSAN Witness VM (This cluster uses 2 x vDS)&lt;/LI&gt;&lt;LI&gt;Then move the 2-Node vSAN to the new vCenter.&amp;nbsp; (This cluster uses 2 x vDS)&lt;/LI&gt;&lt;/OL&gt;&lt;P&gt;Thanks,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 09 Aug 2022 11:35:49 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/VMware-vSAN-Discussions/vSAN-2-Node-Direct-Connect-move-from-one-vCenter-to-a-new/m-p/2922939#M14465</guid>
      <dc:creator>MJMSRI</dc:creator>
      <dc:date>2022-08-09T11:35:49Z</dc:date>
    </item>
    <item>
      <title>Re: Is it supported to protect/replicate a VCSA 7.x VM with Veeam Replication?</title>
      <link>https://communities.vmware.com/t5/VMware-vCenter-Discussions/Is-it-supported-to-protect-replicate-a-VCSA-7-x-VM-with-Veeam/m-p/2881167#M44610</link>
      <description>&lt;P&gt;&lt;a href="https://communities.vmware.com/t5/user/viewprofilepage/user-id/24614"&gt;@depping&lt;/a&gt;&amp;nbsp;Ok, so does vSphere Replication support this now?&lt;/P&gt;</description>
      <pubDate>Thu, 02 Dec 2021 14:25:46 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/VMware-vCenter-Discussions/Is-it-supported-to-protect-replicate-a-VCSA-7-x-VM-with-Veeam/m-p/2881167#M44610</guid>
      <dc:creator>MJMSRI</dc:creator>
      <dc:date>2021-12-02T14:25:46Z</dc:date>
    </item>
    <item>
      <title>Is it supported to protect/replicate a VCSA 7.x VM with Veeam Replication?</title>
      <link>https://communities.vmware.com/t5/VMware-vCenter-Discussions/Is-it-supported-to-protect-replicate-a-VCSA-7-x-VM-with-Veeam/m-p/2881122#M44607</link>
      <description>&lt;P&gt;Hi All,&amp;nbsp;&lt;/P&gt;&lt;P&gt;I am aware of VCHA for VCSA; however, in this instance we are looking to utilize Veeam Replication for protection of VMs to a 2nd site and want to know if VMware supports this?&lt;/P&gt;&lt;P&gt;So could the VCSA 7.x with embedded PSC be replicated to a 2nd site with Veeam Replication, and is that supported?&lt;/P&gt;&lt;P&gt;The searches I have seen, such as the below where&amp;nbsp;&lt;a href="https://communities.vmware.com/t5/user/viewprofilepage/user-id/24614"&gt;@depping&lt;/a&gt;&amp;nbsp;covered this years ago, allude to this NOT being supported; however, now that we have the VCSA, maybe this is supported?&lt;/P&gt;&lt;P&gt;&lt;A href="https://www.yellow-bricks.com/2012/09/21/can-i-protect-my-vcenter-server-with-vsphere-replication/" target="_blank"&gt;https://www.yellow-bricks.com/2012/09/21/can-i-protect-my-vcenter-server-with-vsphere-replication/&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 02 Dec 2021 10:29:04 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/VMware-vCenter-Discussions/Is-it-supported-to-protect-replicate-a-VCSA-7-x-VM-with-Veeam/m-p/2881122#M44607</guid>
      <dc:creator>MJMSRI</dc:creator>
      <dc:date>2021-12-02T10:29:04Z</dc:date>
    </item>
    <item>
      <title>Re: No storage replication adapters installed.</title>
      <link>https://communities.vmware.com/t5/Site-Recovery-Manager/No-storage-replication-adapters-installed/m-p/2878339#M13956</link>
      <description>&lt;P&gt;&lt;a href="https://communities.vmware.com/t5/user/viewprofilepage/user-id/3803110"&gt;@JDMils_Interact&lt;/a&gt;&amp;nbsp;the VA IP Address will be the IP Address of the NetApp Virtual Storage Console (VSC). There is a new VSC out now called ONTAP Tools for VMware 9.8, so that will also be the target IP in this command for newer scenarios like this.&amp;nbsp;&lt;/P&gt;&lt;P&gt;You can try to log in to the VSC to check the administrator password is correct by visiting &lt;A href="https://&amp;lt;VSCIP&amp;gt;:9083" target="_blank"&gt;https://&amp;lt;VSCIP&amp;gt;:9083&amp;nbsp;&lt;/A&gt;&lt;/P&gt;&lt;P&gt;If you can't log in, open a VMware console to the VM (VMRC or Web), then log in with the 'maint' user and you can reset the administrator password with option 2 once logged in.&lt;/P&gt;</description>
      <pubDate>Tue, 16 Nov 2021 14:04:55 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/Site-Recovery-Manager/No-storage-replication-adapters-installed/m-p/2878339#M13956</guid>
      <dc:creator>MJMSRI</dc:creator>
      <dc:date>2021-11-16T14:04:55Z</dc:date>
    </item>
    <item>
      <title>vSAN Stretched Cluster expected network partition implication</title>
      <link>https://communities.vmware.com/t5/VMware-vSAN-Discussions/vSAN-Stretched-Cluster-expected-network-partition-implication/m-p/2814796#M12441</link>
      <description>&lt;P&gt;Hi All,&amp;nbsp;&lt;/P&gt;&lt;P&gt;We have a stretched cluster 3+3+w, so two data sites (DC1 &amp;amp; DC2) and a witness site.&amp;nbsp;&lt;/P&gt;&lt;P&gt;- There is a direct link from DC1 to the Witness Site&lt;/P&gt;&lt;P&gt;- There is a direct link from DC2 to the Witness Site&lt;/P&gt;&lt;P&gt;- There is a 10Gb link between DC1 and DC2&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;- The link between DC2 and the Witness Site has failed.&lt;/P&gt;&lt;P&gt;- Now the witness is showing as in the 'Group 2' network partition&lt;/P&gt;&lt;P&gt;- The 6 vSAN Hosts at both DC1 and DC2 are in Group 1&amp;nbsp;&lt;/P&gt;&lt;P&gt;- Ping from the hosts' vSAN VMkernel in DC1 to the Witness is successful, as that link is working ok. Ping from the hosts' vSAN VMkernel in DC2 to the Witness fails, as the link is down.&lt;/P&gt;&lt;P&gt;- VMs all showing as 'non-compliant' in the storage policy.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Question - is the above the expected outcome of that link failure from data site to witness site? I would have thought that, as DC1 could still communicate successfully to and from the Witness site, the witness appliance would still be in Group 1 and VMs compliant, simply showing a warning that one of the data sites has an issue?&lt;/P&gt;</description>
      <pubDate>Fri, 04 Dec 2020 13:41:55 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/VMware-vSAN-Discussions/vSAN-Stretched-Cluster-expected-network-partition-implication/m-p/2814796#M12441</guid>
      <dc:creator>MJMSRI</dc:creator>
      <dc:date>2020-12-04T13:41:55Z</dc:date>
    </item>
    <item>
      <title>Re: 2-Node vSAN ROBO over 3 sites</title>
      <link>https://communities.vmware.com/t5/VMware-vSAN-Discussions/2-Node-vSAN-ROBO-over-3-sites/m-p/2307674#M10153</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi &lt;B&gt;TheBobkin&lt;/B&gt;, thanks for the speedy response. So in terms of licencing for this new setup, we would just need, for example, one of the below options and would not need an ESXi CPU licence and a separate vSAN licence? The below would cover VMware licencing for the 2 new vSAN hosts in the 1+1+1 cluster?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;UL&gt;&lt;LI&gt;VMware HCI Kit ROBO Standard (per-25 VMs) that includes vSphere ROBO Enterprise and vSAN ROBO Standard&lt;/LI&gt;&lt;LI&gt;VMware HCI Kit ROBO Advanced (per-25 VMs) that includes vSphere ROBO Enterprise and vSAN ROBO Advanced&lt;/LI&gt;&lt;/UL&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 18 Sep 2020 15:30:56 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/VMware-vSAN-Discussions/2-Node-vSAN-ROBO-over-3-sites/m-p/2307674#M10153</guid>
      <dc:creator>MJMSRI</dc:creator>
      <dc:date>2020-09-18T15:30:56Z</dc:date>
    </item>
    <item>
      <title>2-Node vSAN ROBO over 3 sites</title>
      <link>https://communities.vmware.com/t5/VMware-vSAN-Discussions/2-Node-vSAN-ROBO-over-3-sites/m-p/2307672#M10151</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi all,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;We currently have 3 sites: 2 separate datacenters (London DC1, Cardiff DC2) and 1 office. Currently there is a vSAN Stretched cluster between the DCs, with 3 Nodes in each DC and the Witness in the office location. The VCSA resides as a VM within the vSAN Cluster.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;We now have a new requirement for a new VDI workload that needs GPUs in the hosts. The current vSAN Cluster has no GPUs and doesn't need them. So one idea is to purchase 2 new hosts, fit one into each DC, then set these up as a ROBO with a new witness in the office location. So this would then be 1+1+1. The most specific point here is that in this new vSAN Cluster there would only be 1 vSAN Host in each DC; would this be supported? Do you have a KB that details this topology, as I cannot find one?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;This would then result in 2 vSAN Clusters across the 3 sites; if supported, we would put the new vSAN Cluster on new, separate VLANs etc. This would then mean the new cluster would have 1 Host in each DC, managed by the vCenter in the other vSAN Cluster, and a new Witness in the office location.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Only 10 VMs are to run in the 2-Node vSAN, so I believe the pack of 25 VM licences could be used instead of CPU licences?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;It seems it is possible, as &lt;B&gt;depping&lt;/B&gt; wrote about this, but I can't see this detailed as supported by VMware? &lt;A href="http://www.yellow-bricks.com/2017/01/24/two-host-stretched-vsan-cluster-with-standard-license/" title="http://www.yellow-bricks.com/2017/01/24/two-host-stretched-vsan-cluster-with-standard-license/"&gt;Two host stretched vSAN cluster with Standard license? | Yellow Bricks&lt;/A&gt; &lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 18 Sep 2020 14:28:16 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/VMware-vSAN-Discussions/2-Node-vSAN-ROBO-over-3-sites/m-p/2307672#M10151</guid>
      <dc:creator>MJMSRI</dc:creator>
      <dc:date>2020-09-18T14:28:16Z</dc:date>
    </item>
  </channel>
</rss>

