VMware Horizon Community
MrBeatnik
Hot Shot

Best way to Rebalance local disks with linked clones?

Hi,

  • Running View 5.2, looking to upgrade soon.
  • We use local storage (FusionIO).
  • Each pool is spread across four servers, each using its own local storage.
  • Our linked clones are non-persistent - users don't get a specific desktop, and desktops are deleted on logout.

So, I'd like to patch the servers.

My thinking on this is:

  1. Edit the pool to remove the local storage on the server to be patched.
  2. Rebalance.
  3. Perform update (maintenance mode, yada yada).

However, the rebalance operation is having limited success in my first test.

I am aware of this from the VMware View 5.2 Documentation Library: "You cannot load-balance virtual machines across a resource pool. For example, you cannot use the View Composer rebalance operation with linked-clones that are stored on local datastores."

So I cannot rebalance. First of all, I'm not sure why: in my case, can't the machines simply be deleted and recreated elsewhere, considering a rebalance also does a refresh anyway?

What is the best approach?

  • I need to ensure that available desktops on that server's local storage are no longer offered to users (once enough desktops are available on another server's storage), so that users don't connect/are not connected when I do maintenance.
  • I need to ensure that connected desktops on the specific server local storage are deleted on logout (as usual) and do not get recreated on that storage again.

Of course, when the server comes back up, I need to reverse the process and get desktops back onto it, and maintain a balance across the other servers.

Any advice?

Thanks in advance.

5 Replies
kgsivan
VMware Employee

Rebalance is not a simple delete-and-recreate action. Even though the desktops are non-persistent, each linked clone still has an internal disk associated with it. This disk carries data such as the machine's domain-membership trust. Without a shared datastore, preserving and migrating that internal disk is impossible, so the operation is prevented.

Now, for your case, since these are floating (non-persistent) desktops:

  1. Edit the pool and deselect the datastore of the host to be patched.
  2. Log off any active sessions on the affected VMs, or put them into maintenance mode.
  3. Delete those specific VMs through the View Administrator pool inventory.
  4. After patching the host, restore the original datastore selection.
  5. Then increase the pool size back to the desired number.
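To make the drain-and-refill flow concrete, here is a toy Python model of the steps above. It calls no real View or vSphere API (View 5.2 exposes none for this), and the class, datastore names, and round-robin placement policy are purely illustrative:

```python
# Toy model of draining a local datastore from a floating pool.
# Purely illustrative: no real View API is called, and simple
# round-robin placement stands in for View's own logic.

class FloatingPool:
    def __init__(self, datastores, desired_size):
        self.datastores = set(datastores)  # datastores selected in the pool
        self.desired_size = desired_size
        self.desktops = {}                 # desktop name -> datastore
        self._next = 0                     # counter for unique VM names

    def provision(self):
        """Step 5: top the pool up on the currently selected datastores."""
        stores = sorted(self.datastores)
        i = 0
        while len(self.desktops) < self.desired_size:
            self.desktops[f"vm-{self._next}"] = stores[i % len(stores)]
            self._next += 1
            i += 1

    def drain(self, ds):
        """Steps 1-3: deselect the datastore, then delete its desktops."""
        self.datastores.discard(ds)                            # step 1
        doomed = [vm for vm, d in self.desktops.items() if d == ds]
        for vm in doomed:                                      # steps 2-3
            del self.desktops[vm]
        return doomed

    def restore(self, ds):
        """Step 4: re-select the datastore once the host is patched."""
        self.datastores.add(ds)


pool = FloatingPool(["ds-host1", "ds-host2", "ds-host3", "ds-host4"], 8)
pool.provision()                  # pool fully built across all four hosts
deleted = pool.drain("ds-host1")  # host 1 drained for patching
pool.provision()                  # replacements land on the other three
pool.restore("ds-host1")          # host patched and eligible again
```

After the second `provision()`, no desktop lives on `ds-host1`, yet the pool is back at its desired size on the remaining datastores.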
MrBeatnik
Hot Shot

kgsivan wrote:

Rebalance is not a simple delete-and-recreate action. Even though the desktops are non-persistent, each linked clone still has an internal disk associated with it. This disk carries data such as the machine's domain-membership trust. Without a shared datastore, preserving and migrating that internal disk is impossible, so the operation is prevented.

Now, for your case, since these are floating (non-persistent) desktops:

  1. Edit the pool and deselect the datastore of the host to be patched.
  2. Log off any active sessions on the affected VMs, or put them into maintenance mode.
  3. Delete those specific VMs through the View Administrator pool inventory.
  4. After patching the host, restore the original datastore selection.
  5. Then increase the pool size back to the desired number.

I understand that the move-and-refresh operation would indeed be quicker, and that it is not possible with local storage.

It would be nice to have the option on rebalance to "delete and recreate" or "move and refresh (default)" - but perhaps I am in a unique position where there is not enough demand for this type of feature.

One thing - after deselecting the datastore, I did notice that choosing rebalance did "do something" to some desktops, though I'm not sure what. There were also desktops that showed "task halted".

Oh well.

Anyway, thanks for your advice. I am doing:

  1. Done (as per your step 1).
  2. Delete any remaining "active/provisioned" desktops on that datastore (through the pool inventory).
  3. Instead of forcing users to log off (as they still have active sessions), we wait for them to log off; on logoff, the desktop is deleted. Once this has happened, there should be no more desktops on this datastore.
  4. Done (as per your step 4).
  5. This one is strange - I didn't decrease the pool size in the steps above (our servers have enough overhead to take the full pool size with one server down). I'm assuming that when desktops on the other datastores are deleted (logged off) one by one, they will be spun up on the restored datastore (one by one)?

Thanks

kgsivan
VMware Employee

> 5. This is a strange one - I didn't decrease the pool size in the steps above (our servers have enough overhead to take the full pool size with one server down). I'm assuming that when desktops on the other datastores are deleted (logged off) one by one, they will be spun up on this restored datastore (one by one)?


Sorry, my bad. You don't need to increase the pool size. Instead, use "Disable Provisioning" before removing the VMs and "Enable Provisioning" after patching and adding the host back.

(In that case, you don't even need to edit the datastore selection.)
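As a toy illustration of the difference between the two routes (plain Python, no View API; the function and numbers are made up): with provisioning disabled the pool simply shrinks during maintenance, while with provisioning enabled the deleted desktops are immediately recreated on whichever datastores are still selected.

```python
def desktops_after_deletion(pool_size, deleted, provisioning_enabled):
    """Toy model only: how many desktops exist right after deleting
    `deleted` VMs, depending on the pool's provisioning setting."""
    remaining = pool_size - deleted
    # With provisioning on, View tops the pool back up elsewhere at once;
    # with it off, the pool stays shrunk until provisioning is re-enabled.
    return pool_size if provisioning_enabled else remaining

# "Disable Provisioning" route: no datastore edit needed, pool shrinks.
print(desktops_after_deletion(32, 8, provisioning_enabled=False))  # 24
# Datastore-deselect route: replacements appear on the other hosts.
print(desktops_after_deletion(32, 8, provisioning_enabled=True))   # 32
```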


MrBeatnik
Hot Shot

kgsivan wrote:

> 5. This is a strange one - I didn't decrease the pool size in the steps above (our servers have enough overhead to take the full pool size with one server down). I'm assuming that when desktops on the other datastores are deleted (logged off) one by one, they will be spun up on this restored datastore (one by one)?


Sorry, my bad. You don't need to increase the pool size. Instead, use "Disable Provisioning" before removing the VMs and "Enable Provisioning" after patching and adding the host back.

(In that case, you don't even need to edit the datastore selection.)


Hmm, I haven't disabled provisioning either - I still want desktops to be created in this case, but I guess others may not if they don't have the resources.

I found that when deleting the desktops from that datastore, the "minimum available" count kicks in and spins up some new desktops on the remaining datastores. All fine with me.

When the datastore comes back, it automatically starts being used again as users log off (and their desktops are deleted) on the other datastores.
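That attrition-based refill can be sketched with a small simulation (plain Python, illustrative only; View's real placement logic isn't documented here, so a least-loaded policy stands in for it):

```python
from collections import Counter

def logoff_cycle(desktops, datastores, logoffs):
    """Toy model: each logged-off desktop is deleted, and the
    minimum-available top-up recreates one on the least-loaded
    selected datastore (illustrative placement policy only)."""
    next_id = len(desktops)
    for vm in logoffs:
        del desktops[vm]                     # deleted on logout
        load = Counter(desktops.values())
        ds = min(datastores, key=lambda d: (load[d], d))
        desktops[f"vm-{next_id}"] = ds       # replacement desktop
        next_id += 1
    return desktops

# Three datastores carried the pool while host 1 was down...
desktops = {f"vm-{i}": f"ds-host{2 + i % 3}" for i in range(9)}
# ...and ds-host1 has just been re-selected after patching.
datastores = ["ds-host1", "ds-host2", "ds-host3", "ds-host4"]
after = logoff_cycle(desktops, datastores, ["vm-0", "vm-1", "vm-2"])
# Each replacement lands on the (initially empty) ds-host1.
```

After three logoffs, the restored datastore holds three desktops again while the others have each shed one, matching the one-by-one rebalancing observed above.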

This seems OK - I have now completed the patch on the first server.

I can see this might be a bit of a nightmare with an extensive deployment.

Thanks for the assistance.

kgsivan
VMware Employee

True. You can disable provisioning for a short period (from the time of deletion until the host is removed).

However, as you pointed out, you can instead let provisioning continue on the other datastores in parallel and prevent it on the particular host by deselecting its datastore.

I agree with you that both approaches are tedious at large scale...

Thanks for the detailed response.
