<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Any plan regarding how to operate multi-node etcd cluster? in VMware Cloud Director Container Service Extension Discussions</title>
    <link>https://communities.vmware.com/t5/VMware-Cloud-Director-Container/Any-plan-regarding-how-to-operate-multi-node-etcd-cluster/m-p/2928841#M118</link>
    <description>&lt;P&gt;I have run some tests on this topic with a cluster created with three control plane nodes.&lt;BR /&gt;If one control plane node is shut down from vCenter, "get pods -A" continues to work. (As expected)&lt;BR /&gt;If two control plane nodes are shut down, "get pods -A" no longer works. (Expected)&lt;BR /&gt;After restarting one of the control plane nodes, "get pods -A" works again. (Expected)&lt;BR /&gt;So the basic functionality of multiple control plane nodes is working.&lt;BR /&gt;&lt;BR /&gt;One issue is that no errors are reported in the events or in the cluster status from the CSE plugin. (Status is "ready")&lt;BR /&gt;The only things visible are at the load balancer level, which shows that some endpoints are down, and in the vApp, which shows that some VMs are down.&lt;BR /&gt;Would it be possible to add some kind of "health" indicator to the CSE plugin? (e.g. all control plane nodes up and running, worker nodes up and running, load balancer associated with the management IP deployed, etc.)&lt;BR /&gt;&lt;BR /&gt;Second issue: I deleted one of the control plane VMs on purpose.&lt;BR /&gt;As mentioned above, no information is reported by the CSE plugin; it still shows "3 nodes".&lt;BR /&gt;It doesn't recreate the missing node (no "auto-heal", which would be the best option).&lt;BR /&gt;Is there a procedure for replacing a failed node in such a case?&lt;/P&gt;</description>
    <pubDate>Wed, 14 Sep 2022 13:53:24 GMT</pubDate>
    <dc:creator>ccalvetbeta</dc:creator>
    <dc:date>2022-09-14T13:53:24Z</dc:date>
    <item>
      <title>Any plan regarding how to operate multi-node etcd cluster?</title>
      <link>https://communities.vmware.com/t5/VMware-Cloud-Director-Container/Any-plan-regarding-how-to-operate-multi-node-etcd-cluster/m-p/2922735#M67</link>
      <description>&lt;P&gt;The new capability to create multiple nodes for the control plane is good.&lt;BR /&gt;However, how should they be operated?&lt;BR /&gt;&lt;BR /&gt;I am thinking of etcd:&lt;BR /&gt;&lt;A href="https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/" target="_blank"&gt;https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;For example:&lt;BR /&gt;&lt;BR /&gt;What would be the process to replace a failed etcd member?&lt;BR /&gt;Removing a failed member doesn't seem possible if the members are managed by Tanzu.&lt;BR /&gt;I didn't see an option in the GUI to delete a specific node.&lt;BR /&gt;&lt;BR /&gt;Will there be an option to back up etcd directly from the GUI or via the CLI?&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 08 Aug 2022 10:17:21 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/VMware-Cloud-Director-Container/Any-plan-regarding-how-to-operate-multi-node-etcd-cluster/m-p/2922735#M67</guid>
      <dc:creator>ccalvetbeta</dc:creator>
      <dc:date>2022-08-08T10:17:21Z</dc:date>
    </item>
    <item>
      <title>Re: Any plan regarding how to operate multi-node etcd cluster?</title>
      <link>https://communities.vmware.com/t5/VMware-Cloud-Director-Container/Any-plan-regarding-how-to-operate-multi-node-etcd-cluster/m-p/2928841#M118</link>
      <description>&lt;P&gt;I have run some tests on this topic with a cluster created with three control plane nodes.&lt;BR /&gt;If one control plane node is shut down from vCenter, "get pods -A" continues to work. (As expected)&lt;BR /&gt;If two control plane nodes are shut down, "get pods -A" no longer works. (Expected)&lt;BR /&gt;After restarting one of the control plane nodes, "get pods -A" works again. (Expected)&lt;BR /&gt;So the basic functionality of multiple control plane nodes is working.&lt;BR /&gt;&lt;BR /&gt;One issue is that no errors are reported in the events or in the cluster status from the CSE plugin. (Status is "ready")&lt;BR /&gt;The only things visible are at the load balancer level, which shows that some endpoints are down, and in the vApp, which shows that some VMs are down.&lt;BR /&gt;Would it be possible to add some kind of "health" indicator to the CSE plugin? (e.g. all control plane nodes up and running, worker nodes up and running, load balancer associated with the management IP deployed, etc.)&lt;BR /&gt;&lt;BR /&gt;Second issue: I deleted one of the control plane VMs on purpose.&lt;BR /&gt;As mentioned above, no information is reported by the CSE plugin; it still shows "3 nodes".&lt;BR /&gt;It doesn't recreate the missing node (no "auto-heal", which would be the best option).&lt;BR /&gt;Is there a procedure for replacing a failed node in such a case?&lt;/P&gt;</description>
      <pubDate>Wed, 14 Sep 2022 13:53:24 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/VMware-Cloud-Director-Container/Any-plan-regarding-how-to-operate-multi-node-etcd-cluster/m-p/2928841#M118</guid>
      <dc:creator>ccalvetbeta</dc:creator>
      <dc:date>2022-09-14T13:53:24Z</dc:date>
    </item>
  </channel>
</rss>

