3 Replies Latest reply on Jun 22, 2014 8:18 PM by Julian_Milano

    Looking for the best scenario to do P2V.

    Julian_Milano Novice

      I have the following virtual farm setups:

       

      Melbourne & Sydney:

      - 7x VMware hosts in each site running VMware 4.1.

      - 1x datacentre in each site made up of 7 virtual hosts.

      - 2x SANs in each site used for storage of virtual servers.

      - 1x vSphere server, running as a VM in the Sydney VMware datacentre.

       

      Melbourne & Malaysia:

      - 7x VMware hosts in MAL site running VMware 5.1.

      - 2x VMware hosts in MEL site running VMware 5.1.

      - 1x datacentre in each site.

      - 2x SANs in each site used for storage of virtual servers.

      - 1x vSphere server, running as a VM in the MAL VMware datacentre.

       

      So the point here is that the new VMware farm we are setting up in Melbourne has 2x VMware hosts local to the MEL site, but the vSphere server is located in the MAL site.

       

      I am planning to do P2V conversions of physical servers in the MEL site, with the new VMware 5.1 farm in Melbourne as the target. I'm testing the process and found that a P2V of the server below is taking around 14 hours:

       

      * HP BL685 G1 server in C700 enclosure:

        - C-drive: 136 GB

        - D-drive: 300 GB

        - E-drive: 2150 GB

       

      The conversion time for each drive is:

       

      C-drive:     54 min

      D-drive:     120 min

      E-drive:     18 hrs (it's actually still running as I type, but the ETA is 18 hours all up!?)
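
       

      Doing the maths on those figures gives a rough per-drive throughput (a quick Python sketch; the sizes and times are from this post, and it assumes the reported ETAs are wall-clock conversion times):

      ```python
      # Rough throughput check for each drive's P2V conversion.
      # Sizes in GB, times in hours, taken from the figures above.
      drives = {
          "C": (136, 54 / 60),
          "D": (300, 120 / 60),
          "E": (2150, 18.0),
      }

      for name, (size_gb, hours) in drives.items():
          mb_per_s = size_gb * 1024 / (hours * 3600)
          print(f"{name}-drive: {mb_per_s:.0f} MB/s")
      ```

      All three drives come out at roughly 34-43 MB/s, i.e. a fairly consistent rate, which looks more like a data-path bandwidth limit than anything latency-related. As far as I understand, Converter streams the disk data from the source machine to the destination ESXi host directly, so the 120 ms link to the vSphere server in MAL should mainly affect task-management traffic rather than the bulk copy, but I'd appreciate confirmation.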

       

      So my question is: with the vSphere server being in another country (Malaysia), where ping times from Melbourne are around 120 ms, is that why it is taking so long to convert one machine?