We have been using vGPUs on Windows 7 for a long time: K2s, M60s, and P4s. They have been working great. (Sounds like the K2s have now been deprecated.)
We are now using Instant Clones / UEM / App Stacks / Windows 10.
We were using persistent desktops (i.e. Unidesk) before that. Oh, how I miss thee.
My question is:
Is it supported to put multiple ESXi hosts with different vGPU cards (specifically P4s and M60s) in the same cluster?
Cluster 1:
- 5 ESXi hosts (6.7 U3)
- 2 hosts have P4s
- 2 hosts have M60s
- 1 is just a normal user host
This worked well with persistent machines (having them all in the same cluster), since you just lock the desktop down to the ESXi host that has the card the user needs.
Instant clones create a slightly different set of issues, as they recreate themselves every login.
Is the preferred method to:
- Create a new cluster for every card type, basically?
So a P4 cluster and an M60 cluster, and only put servers with those cards in them?
That should allow easy vMotions between servers with the same exact cards.
I'm getting the impression this would create the least amount of headaches.
Has anyone had this issue? Just curious what others are doing.
Yes, keep your clusters separated by GPU model. The vGPU profiles you assign are specific to that GPU, so in a mixed cluster a clone can land on a host that cannot satisfy its profile.
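To make that concrete: the profile ends up as a literal type string on the VM's shared PCI device, so a "grid_m60-2q" VM can only power on where an M60 offers that type. A minimal pyVmomi sketch (the helper name and profile strings are just illustrative):

```python
from pyVmomi import vim

def vgpu_device_spec(profile_name):
    """Build a device spec that attaches an NVIDIA vGPU profile to a VM.

    profile_name is the literal vGPU type string, e.g. 'grid_m60-2q' or
    'grid_p4-2q'. Power-on fails on any host that does not advertise that
    type, which is why mixed-card clusters bite instant clones.
    """
    backing = vim.vm.device.VirtualPCIPassthrough.VgpuBackingInfo()
    backing.vgpu = profile_name

    device = vim.vm.device.VirtualPCIPassthrough(backing=backing)

    spec = vim.vm.device.VirtualDeviceSpec()
    spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    spec.device = device
    return spec
```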
You cannot control which host the instant clones get provisioned on, and DRS/vMotion would also cause issues in a mixed cluster.
Keep in mind that vMotion of vGPU-enabled VMs is not supported until vSphere 6.7 U2; it is an awesome feature for maintenance purposes.
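If you want to sanity-check that each cluster only advertises one card family, a quick pyVmomi pass over each host's sharedPassthruGpuTypes will show it. Rough sketch; the vCenter address and credentials are placeholders:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details -- substitute your own vCenter/credentials.
ctx = ssl._create_unverified_context()
si = SmartConnect(host='vcenter.example.com', user='admin', pwd='***',
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    for cluster in view.view:
        print(cluster.name)
        for host in cluster.host:
            # sharedPassthruGpuTypes lists the vGPU profiles the host offers,
            # e.g. ['grid_p4-1q', 'grid_p4-2q', ...]. Every GPU host in a
            # cluster should report the same card family.
            profiles = host.config.sharedPassthruGpuTypes or []
            print(f'  {host.name}: {sorted(profiles)}')
    view.Destroy()
finally:
    Disconnect(si)
```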