VMware Cloud Community
alexisdog
Enthusiast

Adding Hosts With Existing VMs to "Greenfield" DRS Enabled Cluster

I am currently involved in an evaluation project for ESXi 5.5 and vCenter. The existing setup is 2 physical servers with redundant everything and 8 SAS hot-swap hard drive bays. Initially, only 4 of the 8 bays were populated with hard drives. The hard drives were striped, ESXi 5.0 was loaded, 4 VMs were created on each server, and all lived happily ever after.

Now I would like to upgrade those servers to 5.5... as follows:

I populated the remaining 4 bays on each server with hard drives and created a second stripe of sufficient capacity (twice the capacity of the original 4 drives). I shut down the servers, switched the boot stripe to the new stripe of drives, and installed ESXi 5.5 on the new stripe. The old stripe also remains intact, so I can boot to ESXi 5.0 if I set the server to boot from the original stripe, or to ESXi 5.5 if I boot from the new stripe (both OSes boot fine, are properly networked, vCenter configured, etc.).

When booting into 5.5, it sees its own new stripe and also finds the original stripe, which is listed as a second attached datastore (I actually want this, to make the eventual VM migration from the old to the new datastore easy). Both are Local LSI Disk, Non-SSD, Type VMFS5.

Panic sets in when I boot both machines into 5.5 and it comes time to add the 5.5 hosts into a cluster (I would also like to test DRS, vStorage, and HA) and I reach the "Choose Resource Pool" step. I am scared to death that selecting the first option, "Put all of this host's virtual machines in the cluster's root resource pool. Resource pools currently present on the host will be deleted.", will mean a reformat not only of the new stripe, which I would like to add to the cluster, but also of the still-attached older stripe, which has data I want to maintain. I do not want to lose the data or the VMs on the original stripe; I want to migrate them into a new cluster of 2 ESXi 5.5 hosts. I was actually hoping to migrate the data to the new stripes on the new hosts and then re-purpose the 2 original arrays (across both machines) as a third vStorage array.

Questions:

1. If I select the option, "Put all of this host's virtual machines in the cluster's root resource pool. Resource pools currently present on the host will be deleted." with all drives attached, will all my data be lost?

2. If I pull the 4 original (5.0) drives and use the option "Put all of this host's virtual machines in the cluster's root resource pool. Resource pools currently present on the host will be deleted." with only the new (5.5) arrays connected, and then reconnect the old arrays after the hosts are added to the cluster, will the re-added arrays still get sucked in and their data deleted?

3. Is picking the second option, "Create a new resource pool for this host's virtual machines and resource pools. This preserves the host's current resource pool hierarchy." a safe option? If this option works, does it matter if I have my original array attached when adding the hosts into the cluster?

Last point: In reading every document I could find, it seems strongly suggested to configure hosts that do not have any VMs deployed, which is why I am going to great lengths to keep the new hosts as empty as possible, with basic single-port networking until the configuration is finished. Does it matter whether I migrate the VMs, or add them as guests to ESXi 5.5, before or after I add the hosts to the cluster?

Any thoughts or help would be greatly appreciated.

markdjones82
Expert

When you add the host to the cluster with that first option, it is just saying to import all the registered machines into the "root" of the cluster instead of using existing resource pools. It should not delete any VMs at all; the only thing removed is any resource pool you had created on the host itself. When you add the host to the cluster, it should just bring everything, including the VMs, into the cluster. See the API excerpts and link below.

ClusterComputeResource.AddHost_Task – Adds a host to a cluster. The host name must be either an IP address, such as 192.168.0.1, or a DNS-resolvable name. If the cluster supports nested resource pools and you specify the optional resourcePool argument, the host's root resource pool becomes the specified resource pool, and that resource pool and the associated hierarchy is added to the cluster.

If a cluster does not support nested resource pools and you add a host to the cluster, the standalone host's resource pool hierarchy is discarded and all virtual machines on the host are added to the cluster's root resource pool.

ClusterComputeResource.MoveHostInto_Task – Moves a host that is in the same datacenter as the cluster into the cluster. If the host is already part of a different cluster, the host must be in maintenance mode.

ClusterComputeResource.MoveInto_Task – Works like MoveHostInto_Task, but supports an array of hosts at a time. When using this method, you cannot preserve the original resource pool hierarchy of the hosts.

See link below:

vSphere Documentation Center
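For what it's worth, the first wizard option corresponds to calling AddHost_Task with no resourcePool argument. If you want to script or test it, here is a minimal pyVmomi (vSphere Python SDK) sketch; vcenter.example.local, NewCluster, esxi55-01.example.local, and the credentials are placeholder assumptions, not values from this thread:

# Minimal pyVmomi sketch: add an ESXi host into a cluster's root resource pool.
# All names and credentials below are placeholders - adjust for your environment.
from pyVim.connect import SmartConnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local",
                  user="administrator", pwd="secret")
content = si.RetrieveContent()

# Find the target cluster by name (assumes a single datacenter).
datacenter = content.rootFolder.childEntity[0]
cluster = next(c for c in datacenter.hostFolder.childEntity
               if isinstance(c, vim.ClusterComputeResource)
               and c.name == "NewCluster")

# Connection details for the ESXi 5.5 host being added.
spec = vim.host.ConnectSpec(hostName="esxi55-01.example.local",
                            userName="root", password="secret")

# Omitting resourcePool is the first wizard option: the host's VMs land in
# the cluster's root resource pool. No datastore is touched or reformatted.
WaitForTask(cluster.AddHost_Task(spec=spec, asConnected=True))

The key point for your question: AddHost_Task only rearranges inventory objects (hosts, VMs, resource pools); it never formats or erases a datastore.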

http://www.twitter.com/markdjones82 | http://nutzandbolts.wordpress.com
alexisdog
Enthusiast

Thanx Mark. I understand the doc that you sent me, which speaks to the last point: I have not registered the existing virtual machines in 5.5 yet, in anticipation of testing the vStorage piece. Another way of explaining my hang-up is that I have found no documentation on what actually happens to the disk drives during these processes of being added to a cluster or added to vStorage, etc. (a day in the life of a hard drive..). I see both of these operations as similar to a software RAID implementation, and I fear that as part of the process the boot sector of the disk may be written to (sometimes called initialized), resulting in an immediate and complete data loss. This is a real possibility, because any multi-disk scheme has to write identifying information to the various drives if it wants to attempt a reliable recovery in the case of controller failure. If the VMs are registered, ESXi may just implement a procedure to move those files before requisitioning the hard drive space (once it is in a pooled resource you really don't know where the data actually is anyway). So I guess my choices are as follows:

A.

1. Register VMs on local host

2. Join hosts to cluster

3. Implement a migration to the new data array

4. Implement vStorage

B.

1. Join hosts to the cluster

2/3. Migrate data and register machines to the cluster, in no order of preference

4. Implement vStorage

C.

1. Join hosts to cluster

2. Implement vStorage

3. Register VMs to the cluster

4. Migrate machines to the preferred datastore

Which looks best?

markdjones82
Expert

I guess I am confused about what you mean by "implement vStorage." Are you talking about reformatting it to a VMFS datastore?

If so, I would take option A and do a cold migration onto the new storage, and then reformat to VMFS. Honestly, you don't even need to add the host into the new cluster; just add it to the datacenter and not the cluster, and once it is in vCenter, right-click and cold migrate the VMs.
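If you want to script that cold migration instead of clicking through it, it is RelocateVM_Task with a target datastore. A rough pyVmomi sketch, reusing the "si" connection from the earlier snippet; the VM and datastore names are made-up examples:

# Rough pyVmomi sketch: cold migrate (VM powered off) to the new datastore.
# "si" comes from the SmartConnect in the earlier snippet; the VM name and
# datastore name below are hypothetical.
from pyVim.task import WaitForTask
from pyVmomi import vim

content = si.RetrieveContent()

def find_by_name(vimtype, name):
    # Walk the whole inventory for the first object of this type and name.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    obj = next(o for o in view.view if o.name == name)
    view.DestroyView()
    return obj

vm = find_by_name(vim.VirtualMachine, "db-server-01")    # hypothetical VM
new_ds = find_by_name(vim.Datastore, "datastore2-55")    # hypothetical name

# Relocation copies the VM's files to the new datastore; the old datastore
# is only read from, never reformatted.
WaitForTask(vm.RelocateVM_Task(spec=vim.vm.RelocateSpec(datastore=new_ds)))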

http://www.twitter.com/markdjones82 | http://nutzandbolts.wordpress.com
alexisdog
Enthusiast

I was referring to this: VMware vSphere Storage Appliance (VSA) for Shared Storage, but I guess I meant to say Virtual SAN - wow, the site is changing quickly these days.

alexisdog
Enthusiast

I am trying to see if there would be any benefit from implementing a 2-host failover/HA cluster, and whether DRS would give any practical benefit over the current hypervisor-only configuration of 2 hosts operating independently of each other. I am currently pushing 2 database servers (one somewhat heavy, with a backup, and one light), a domain controller and a backup DC, and a file server. I am curious to see how the DRS shakes out - though I may not like the results if servers that are supposed to be redundant end up on the same hardware.

markdjones82
Expert

Well, VSAN requires at least 3 hosts, each with at least 1 SSD and 1 HDD. So, I don't think you can use it with your current setup of 2 hosts.

VMware KB: vSphere 5.5 Virtual SAN requirements

On another note, since you are using local disks, a VMware cluster would not be able to leverage HA/DRS. That being said, using vCenter with 5.1/5.5 would at least give you the ability to migrate your guests for host maintenance, depending on your licensing level.

http://www.twitter.com/markdjones82 | http://nutzandbolts.wordpress.com
alexisdog
Enthusiast

Yeah, I was reading that myself. It looks like the requirements have changed with the new product. Before, there were a lot of docs about setting up a 2-node cluster with vStorage acting as the third node. I guess I mis-stated 2 nodes based on their documentation:

http://www.vmware.com/files/pdf/techpaper/vmw-vsphere-high-availability.pdf

VMware KB: FAQ for VMware vSphere Storage Appliance (VSA) 1.0.x

Further looking through the knowledge base yields this:

  • Two ESXi hosts
  • A physical Windows or Linux machine capable of running the VSA Cluster Service
  • vCenter Server and VSA Manager installed and running on a virtual machine inside one of the two ESXi hosts

But, silly enough, the third machine makes it a third node. I get very frustrated with VMware documentation, because no matter how much I read, I very often cannot predict what a button click will do, and the use of terminology is sometimes not standard. Even their best 4- and 5-star docs are a mix of very specific CLI commands with some vague steps thrown in to separate and confuse the reader. I find myself having to personally test each item ahead of implementation on everything, which is very inefficient.

Looking cursorily at the new Virtual SAN requirements, I cannot even understand where this product is going. Before, it seemed like a clever way to implement a light quorum server - maybe re-purpose an old Windows 98SE box, or maybe soon a Windows XP box. If I have to add an SSD on top of each storage array, I might as well just buy a NAS or a SAN at that point. I guess this feature just got tossed off my list of things to test. BTW, I do have a shared-storage NAS (NFS) sitting between the 2 ESXi servers.

So I revise my earlier post as follows:

A.

1. Register VMs on local host

2. Join hosts to cluster

3. Implement a migration to the new data array

B.

1. Join hosts to the cluster

2/3. Migrate data and register machines to the cluster, in no order of preference

C.

1. Join hosts to cluster

2. Register VMs to the cluster

3. Migrate machines to the preferred datastore

Which looks best?

markdjones82
Expert

I would go with option C.

I would agree VSAN has some silly requirements, but what they were aiming for is almost an enterprise-class SAN at a decent price, using SSDs for caching the way arrays do. But like you said, if you do not need high performance, I would continue to go with a NAS/NFS solution.

http://www.twitter.com/markdjones82 | http://nutzandbolts.wordpress.com
alexisdog
Enthusiast

I would do option C as well. Do you know with any measure of certainty that this will not delete the actual data stored on the pre-existing datastore when I add the hosts to the cluster?

alexisdog
Enthusiast

I can confirm that adding hosts to an HA/DRS cluster with unregistered VMs existing on their datastores will not delete the VM files from the datastore, and the machines can be added back by browsing the datastore through the vSphere Client.
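For anyone scripting that re-add step, it is Folder.RegisterVM_Task in the API, the equivalent of browsing the datastore and choosing "Add to Inventory". A hedged pyVmomi sketch, reusing "si" and "cluster" from the earlier snippets; the datastore path is a made-up example:

# pyVmomi sketch: re-register a .vmx left on a datastore after the host
# joined the cluster. "si" and "cluster" come from the earlier snippets;
# the datastore/VM path below is hypothetical.
from pyVim.task import WaitForTask

content = si.RetrieveContent()
datacenter = content.rootFolder.childEntity[0]

# RegisterVM_Task lives on a Folder; use the datacenter's root VM folder.
# Registration only adds an inventory entry - the files stay where they are.
task = datacenter.vmFolder.RegisterVM_Task(
    path="[datastore1-50] dc-backup/dc-backup.vmx",   # hypothetical path
    asTemplate=False,
    pool=cluster.resourcePool,    # the cluster's root resource pool
    host=None)                    # in a DRS cluster, the host can be omitted
WaitForTask(task)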

markdjones82
Expert

Yes, I do this operation all the time. All joining a cluster does is pool compute and memory; it has no effect on any storage operations.

http://www.twitter.com/markdjones82 | http://nutzandbolts.wordpress.com