I have a question about file operations using the vSphere API.
Q: How can we use InitiateFileTransferToGuest() to upload a directory?
- Do we have to browse through the directory to get the list of files, then recursively call InitiateFileTransferToGuest() and an HTTP PUT to upload each individual file?
Is there a better way of doing it, or an API available for this?
- How do we handle symbolic links?
Appreciate your help.
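As far as I know there is no single API call that uploads a whole directory, so the enumeration has to happen on your side. Here's a minimal sketch of that step in Python; the function name `collect_guest_uploads` is my own, and the actual per-file transfer (InitiateFileTransferToGuest to get an upload URL, then an HTTP PUT of the file contents) needs a live vCenter session, so it is only described in comments:

```python
import os

def collect_guest_uploads(root):
    """Walk a local directory tree, separating regular files from
    symbolic links (which InitiateFileTransferToGuest cannot transfer)."""
    regular, symlinks = [], []
    for dirpath, dirnames, filenames in os.walk(root, followlinks=False):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.islink(path):
                # Record the link target so the link can be recreated
                # inside the guest later (e.g. by running `ln -s` via
                # the guest-operations process API).
                symlinks.append((path, os.readlink(path)))
            else:
                # For each of these you would call
                # InitiateFileTransferToGuest() to obtain an upload URL,
                # then HTTP PUT the file body to that URL.
                regular.append(path)
        # Symlinked directories need the same treatment; don't descend.
        for name in list(dirnames):
            sub = os.path.join(dirpath, name)
            if os.path.islink(sub):
                symlinks.append((sub, os.readlink(sub)))
                dirnames.remove(name)
    return regular, symlinks
```

Directories themselves would be created in the guest first (e.g. with the guest file manager's MakeDirectory call), then the regular files uploaded one by one.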
Is the Perl script only for single-file upload? What about a directory containing around 100 files and directories? The API is limited to transferring a single file (not a directory or a symbolic link).
The GuestOps API is really useful, particularly since it uses a channel through VMware Tools (you can even connect to a VM that has no network card). But it probably isn't the best approach for copying large files into the guest OS.
If you have a very large number of small files, I'd suggest you zip up the directory structure (or tarball it on *nix). Upload the zip, then use the RunProgramInGuest call to unpack the contents into the local guestOS file system.
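The nice side effect of the archive approach is that it also solves the symlink question: tar stores symbolic links as links rather than following them. A sketch using Python's standard `tarfile` module (the function name `pack_directory` is my own):

```python
import os
import tarfile

def pack_directory(src_dir, archive_path):
    """Create a gzipped tarball of src_dir. tarfile stores symbolic
    links as links (it does not follow them), so the link structure
    survives the round trip into the guest OS."""
    with tarfile.open(archive_path, "w:gz") as tar:
        tar.add(src_dir, arcname=os.path.basename(src_dir.rstrip(os.sep)))
    return archive_path
```

You'd then upload the single archive via InitiateFileTransferToGuest and run something like `tar xzf /tmp/payload.tar.gz -C /target` in the guest through the run-program call.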
If you have large files (gigabytes, for example), you may find the upload process slow (particularly if you need to copy those large files to multiple guest VMs). In that case, I'd suggest converting your files into an ISO image (mkisofs from cdrtools is quite easy to script). Then copy the ISO to a datastore, mount it in the guestOS, and call RunProgramInGuest. All of this can be automated, though some orchestration is needed in your scripting. Using the ISO this way lets you operate on multiple guests concurrently without running concurrent file uploads.
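For the mkisofs step, the flags that matter are Joliet (`-J`) and Rock Ridge (`-R`) extensions, so long file names and symlinks are preserved on the image. A small helper that builds the invocation (the function name and the `PAYLOAD` volume label are my own; on many Linux distributions the binary is called `genisoimage` instead):

```python
def mkisofs_command(src_dir, iso_path, volume_label="PAYLOAD"):
    """Build a mkisofs invocation producing a Joliet + Rock Ridge ISO.
    Run the returned list with subprocess.run(cmd, check=True)."""
    return ["mkisofs", "-J", "-R", "-V", volume_label, "-o", iso_path, src_dir]
```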
Again, the ISO option is really only needed for very large files or for concurrency. For a directory tree that is small in total size but contains many files, I'd go with a simple tarball or zip.