<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Different types of vGPUs in One Cluster? in Horizon Desktops and Apps</title>
    <link>https://communities.vmware.com/t5/Horizon-Desktops-and-Apps/Different-types-of-vGPU-s-in-One-Cluster/m-p/1842177#M83393</link>
    <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;We have been using vGPUs on Windows 7 for a long time (K2s, M60s, P4s) and they have worked great. (It sounds like the K2s have now been deprecated.)&lt;/P&gt;&lt;P&gt;We are now using Instant Clones / UEM / App Stacks / Windows 10. We were previously on persistent desktops (i.e. Unidesk), which I sorely miss.&lt;/P&gt;&lt;P&gt;My question: is it supported to put multiple ESXi hosts with different vGPU cards (specifically P4s and M60s) in the same cluster?&lt;/P&gt;&lt;P&gt;For example, Cluster 1: five ESXi hosts (6.7 U3), where two hosts have P4s, two hosts have M60s, and one is a normal host with no GPU.&lt;/P&gt;&lt;P&gt;This worked well with persistent machines all in the same cluster, since you simply pin each desktop to the ESXi host that has the card the user needs.&lt;/P&gt;&lt;P&gt;Instant Clones create a slightly different set of issues, as they recreate themselves at every login.&lt;/P&gt;&lt;P&gt;Is the preferred method to create a separate cluster for each card type, i.e. a P4 cluster and an M60 cluster, each containing only servers with that card? That should allow easy vMotion between servers with identical cards.&lt;/P&gt;&lt;P&gt;I get the impression this would create the fewest headaches.&lt;/P&gt;&lt;P&gt;Has anyone run into this? I'm curious what others are doing.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
    <pubDate>Tue, 29 Oct 2019 16:43:00 GMT</pubDate>
    <dc:creator>Douglas42Adams</dc:creator>
    <dc:date>2019-10-29T16:43:00Z</dc:date>
    <item>
      <title>Different types of vGPUs in One Cluster?</title>
      <link>https://communities.vmware.com/t5/Horizon-Desktops-and-Apps/Different-types-of-vGPU-s-in-One-Cluster/m-p/1842177#M83393</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;We have been using vGPUs on Windows 7 for a long time (K2s, M60s, P4s) and they have worked great. (It sounds like the K2s have now been deprecated.)&lt;/P&gt;&lt;P&gt;We are now using Instant Clones / UEM / App Stacks / Windows 10. We were previously on persistent desktops (i.e. Unidesk), which I sorely miss.&lt;/P&gt;&lt;P&gt;My question: is it supported to put multiple ESXi hosts with different vGPU cards (specifically P4s and M60s) in the same cluster?&lt;/P&gt;&lt;P&gt;For example, Cluster 1: five ESXi hosts (6.7 U3), where two hosts have P4s, two hosts have M60s, and one is a normal host with no GPU.&lt;/P&gt;&lt;P&gt;This worked well with persistent machines all in the same cluster, since you simply pin each desktop to the ESXi host that has the card the user needs.&lt;/P&gt;&lt;P&gt;Instant Clones create a slightly different set of issues, as they recreate themselves at every login.&lt;/P&gt;&lt;P&gt;Is the preferred method to create a separate cluster for each card type, i.e. a P4 cluster and an M60 cluster, each containing only servers with that card? That should allow easy vMotion between servers with identical cards.&lt;/P&gt;&lt;P&gt;I get the impression this would create the fewest headaches.&lt;/P&gt;&lt;P&gt;Has anyone run into this? I'm curious what others are doing.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Tue, 29 Oct 2019 16:43:00 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/Horizon-Desktops-and-Apps/Different-types-of-vGPU-s-in-One-Cluster/m-p/1842177#M83393</guid>
      <dc:creator>Douglas42Adams</dc:creator>
      <dc:date>2019-10-29T16:43:00Z</dc:date>
    </item>
    <item>
      <title>Re: Different types of vGPUs in One Cluster?</title>
      <link>https://communities.vmware.com/t5/Horizon-Desktops-and-Apps/Different-types-of-vGPU-s-in-One-Cluster/m-p/1842178#M83394</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Yes, keep your clusters separated by GPU model. vGPU profiles are specific to a given GPU.&lt;/P&gt;&lt;P&gt;You cannot control which host instant clones get provisioned on, and DRS/vMotion across mixed cards would also cause issues.&lt;/P&gt;&lt;P&gt;Keep in mind that vMotion of vGPU-enabled VMs is not supported until vSphere 6.7 U2; it is a great feature for maintenance purposes.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Wed, 30 Oct 2019 00:02:43 GMT</pubDate>
      <guid>https://communities.vmware.com/t5/Horizon-Desktops-and-Apps/Different-types-of-vGPU-s-in-One-Cluster/m-p/1842178#M83394</guid>
      <dc:creator>nburton935</dc:creator>
      <dc:date>2019-10-30T00:02:43Z</dc:date>
    </item>
  </channel>
</rss>