VMware Cloud Community
rmgbasker
Contributor

P2V for MSCS Cluster Server

Hi All,

I am planning to P2V our MSCS cluster onto ESXi 4.1 using VMware Converter 4.3. It is a two-node cluster connected to HP XP storage, and DR LUN replication is also in progress.

1) C:\ D:\ (Local Disk)

2) Q:\ F:\ (Cluster Data Disks)

3) Two 64 MB disks showing as unreadable (HP CMD disks)

4) The same cluster data disks, also showing as unreadable in the Disk Management console (the DR replication LUNs)

The above configuration is what the passive node sees.

Please let me know the process for doing an MSCS P2V onto ESXi 4.1.

What critical steps need to be carried out before and after the migration?

6 Replies
iainh667
Enthusiast

I needed to P2V an MS cluster a few months ago.  In my case I was using VMware Converter 4.3 to vSphere 4.0, and the customer was happy to accept a single node running cluster services.  You may need additional steps in order to get the second node working...

Here are my notes (if a step starts with a plus (+), it is my "best guess" approach for migrating the passive node):

+1. Migrate the passive node first, using the standard P2V process, and migrate the local storage only.  Switch off the passive physical server after the P2V has finished.

[for the live node]

-------------------------

2. Carry out the normal P2V process on the live node; switch off the source physical server when finished.

3. Power on the VM and install VMware Tools.  Power off the VM.

4. Edit the VM settings and configure all shared drives (all drives except C:) to use the second SCSI channel.  (This will automatically add a second SCSI controller.)

5. Change both SCSI controllers from BusLogic to LSI Logic.  Power on the VM and install the LSI Logic (symmpi) drivers by mounting the ISO image.

6. Power off.  Edit the VM settings and enable "Virtual" SCSI bus sharing on the second SCSI controller.
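
For reference, once steps 4 to 6 are done, the second controller's entries in the VM's .vmx file should look roughly like this sketch (the disk file names here are placeholders for your own):

scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1.sharedBus = "virtual"
scsi1:0.present = "TRUE"
scsi1:0.fileName = "server_disk_1.vmdk"
scsi1:1.present = "TRUE"
scsi1:1.fileName = "server_disk_2.vmdk"

You don't have to edit the .vmx by hand; the VI Client makes these changes for you, this is just a way to double-check the result.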

7. The shared storage must be eager-zeroed in order to power on the virtual machine.  Open PuTTY, connect to the ESX host as root and use the following command:

vmkfstools -k /vmfs/volumes/path_to_vmdk/server_disk_no.vmdk

(This needs to be carried out for every drive except the C: drive.)
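
For example, assuming the VM's files live under /vmfs/volumes/datastore1/myserver and there are two shared disks (both the datastore path and the disk names here are placeholders, so check your own):

vmkfstools -k /vmfs/volumes/datastore1/myserver/myserver_1.vmdk
vmkfstools -k /vmfs/volumes/datastore1/myserver/myserver_2.vmdk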

8. Power on the VM.

9. The hard disk signatures need to be added to the cluster configuration using regedit.

10. Open a command prompt and type "mountvol".  Note down the first 8 hex characters of each drive's volume name (the digits immediately after "Volume{" in the mountvol output).

11. Open regedit and navigate to HKLM\SYSTEM\CurrentControlSet\Services\ClusDisk\Parameters\Signatures.  Rename each key to the values recorded in the previous step.
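
To make steps 10 and 11 concrete, here is a made-up example.  mountvol prints each volume name above its drive letter, something like:

\\?\Volume{a1b2c3d4-0000-0000-0000-100000000000}\
        F:\

The first 8 hex characters after "Volume{" (a1b2c3d4 here) are that disk's signature, so the matching key under HKLM\SYSTEM\CurrentControlSet\Services\ClusDisk\Parameters\Signatures gets renamed to a1b2c3d4.  Repeat for each shared drive.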

[for the passive node]

--------------

+12. Carry out steps 5, 6, 8, 9, 10 and 11 for the passive node's VM.

13. Finally, since snapshots will not work with shared storage, make sure the VM is backed up using a traditional backup method.  (Don't use esXpress/vDR etc.)

rmgbasker
Contributor

After the P2V migration of the passive node I uninstalled all HP-related drivers in the migrated VM. At that point the physical NIC information was erased and replaced by a virtual NIC, so how can I add this passive node back to the existing active node to test failover between the physical cluster and the virtual cluster?

bulletprooffool
Champion

I'd go ahead and migrate the second node of the cluster, then work on a fully virtual cluster.

In order to be able to run a cluster that is part physical / part virtual, you need to either somehow share the storage that you are presenting (possible with passthrough via iSCSI) or somehow replicate the data (a 3rd-party tool is required).

If your target is a fully virtual cluster, I'd say you are wasting time and effort working on a physical/virtual setup.

One day I will virtualise myself . . .
iainh667
Enthusiast

Your (now virtual) passive node won't be attached to the same storage as the live physical node, so you won't be able to switch over to the VM.  You will need to take the cluster offline while you migrate it.

The next step will be to migrate the live node across to virtual and carry out the post-migration steps.

ShadowStar23
Contributor

Thanks for iainh667's post; I reviewed it and tried to use it for myself, but I had to alter the end-game scenario to meet my goal of virtualizing the MSCS cluster.  The post gave me some ideas about how to review the steps and my physical systems in order to get to a virtual goal.

I will post my steps now just as an added offering in case anyone needs to follow similar steps.

1) On the physical passive node(s), go to Cluster Administrator and move the groups over to the other member(s) of the cluster, leaving only the system drive on the node.
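
If you prefer a command prompt, cluster.exe can move the groups as well (the group and node names here are placeholders for your own):

C:\> cluster group "SQL Group" /moveto:NODE2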

2) Install Converter 4.3 in standalone mode.

3) Run Converter -> Convert Machine -> direct it to vSphere Server.

4) With the cluster groups moved, there should be only one drive (maybe two, depending on your config) to virtualize.  Customize the settings before clicking Finish for the conversion process to begin.

5) Once completed, I disabled the NICs on the VM and powered it up in the cloud to test booting, the VMware Tools install, and that the VM was stable.  Once I had checked the VMs, I shut them down.

6) Repeat steps 1 through 5 for each remaining member of the cluster.  (I had 4 and dropped it to 2 for this process; as you will see at the end, I ended up with a single-node cluster, but with the option to bring other nodes online if needed.)

Now that the nodes have been virtualized successfully, independent of the shared disks, it is time to virtualize the cluster groups' physical disks themselves.

7) Run Converter -> Convert Machine -> direct it to vSphere Server.

8) Give each of these conversions a name similar to the host (base) node so that you can keep these disks nearby on the datastore.

9) Customize the conversion so that the C drive (system drive) is not converted.  This ultimately creates a dead VM, but one that has the VMDKs for the shared drives in the cluster.  That is what we are after next.

10) Finish the conversion for each VM holding a cluster group.  (I had all the groups on one physical server for the conversion and had planned on running all the groups on a single VM in the cloud once done.)

11) The conversion could fail at the end, since the VM has no system drive; if it doesn't, then gravy.

12) Next I powered down the physical servers (noting their IP addresses for the LAN and heartbeat and, in my case, noting the shared printer resources in the print group; I actually had to recreate a few printers after the conversion).

13) I chose a primary base node VM from the cluster and added the virtual disks from the step-10 VM to the base node VM's settings.  Be sure to add each disk on an available new SCSI ID so that a new SCSI controller is created.  (You could/should move the VMDKs from the step-10 VM to the base node of your choice for a cleaner config and operation, not to mention snapshots.)

14) Change the SCSI controller for the added VMDKs to LSI Logic SAS, then OK the config.
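
For reference, steps 13 and 14 should leave entries roughly like these in the VM's .vmx file (the disk file name is a placeholder; "lsisas1068" is the value recorded for an LSI Logic SAS controller):

scsi1.present = "TRUE"
scsi1.virtualDev = "lsisas1068"
scsi1:0.present = "TRUE"
scsi1:0.fileName = "clustergroups_1.vmdk"

There is no need to edit the .vmx by hand; this is just a way to sanity-check what Edit Settings did.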

15) Be ready to update the drivers for the SCSI controller in your VM (in my case a W2K3 cluster).  Reboot when asked.

16) Log back into the VM.  Open Computer Management, go to Disk Management, and refresh/rescan the disks.  This adds disk signatures to the new VMDKs.
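
If you prefer a command prompt, the same rescan can be done with diskpart (standard diskpart commands, nothing cluster-specific):

C:\> diskpart
DISKPART> rescan
DISKPART> list disk
DISKPART> exit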

17) Wait about 2 minutes while the SCSI controller scans the disks for their signatures and profiles.  Lo and behold, the cluster group drives will appear with their profiles and data.

18) Next you can start the Cluster Service and check Cluster Administrator, or reboot the server and wait for it to come up.
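
From a command prompt that would be something like the following (ClusSvc is the cluster service's name on W2K3; cluster.exe ships with the cluster tools).  The second command should list your cluster groups and show them coming online:

C:\> net start clussvc
C:\> cluster group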

19) Verify the settings and that everything is operational.

20) Configure the VM's NIC (in my case I added a new VMXNET 2 adapter and set it up).

21) Reboot once more and verify apps, settings, access, printers, etc.

Done.

It wasn't too painful.  I plan to revisit this soon by adding the second node from the cluster into the cloud and making the first node's shared disks accessible as shared storage.  Not sure how that will go for me, but I will try it out.  Hope this helps anyone looking for a series of steps and processes.  Feedback is always welcome.

legionkayra
Contributor

Good afternoon, I have a question. You show the change of the IDs for pointing the cluster at the servers. My question is that I have several nodes, in this case 4, split into 2 servers per cluster with an active node. I am going to replace the current servers with virtual ones. Is this whole cluster process necessary, or is it not?...
