I have been banging my head against our VM server for a while now trying to enable FTP so that we can back up the virtual machine files to another location in case the ESXi server were to meet an untimely end. I do not have the Consolidated Backup option. I have read multiple documents about removing the # comment from the inetd.conf file and then restarting the services or rebooting the server. I have done all of this and still cannot get FTP to work. I have managed to connect to the console using SSH, but no luck with FTP. When I try to FTP it returns the following: -ash: ftp: not found
Anyone have this problem?
Looks like the version of wget that ships with ESXi is a bit neutered, so you can't use the -r option, which would grab the whole folder. ESXi also has ftpput/ftpget, but they only seem to deal with single files as well.
Yep, the wget (implemented in BusyBox) is somewhat simple. Better to download wget 1.10.2 (with super-duper features) for ESXi.
Downloads should be as simple as: /sbin/wget --ftp-user=root --ftp-password=<passwd> *
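For anyone trying the full wget, a recursive pull over FTP might look like the sketch below. This is assumption-heavy: the host name, datastore, and VM folder are made-up placeholders, and the exact option behavior depends on the wget build you drop onto the host.

```shell
# Hedged sketch: fetching a whole VM folder with a full (non-BusyBox) wget.
# "esxi-host", "datastore1", and "myvm" are placeholders for your environment.
/sbin/wget -r --no-parent \
    --ftp-user=root --ftp-password=<passwd> \
    "ftp://esxi-host/vmfs/volumes/datastore1/myvm/"
```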
@Dave: Time for a new oem.tgz.
I just set up an NFS store on Ubuntu Server and attached it to my test host. Worked out nicely. What do you recommend using to back up a VM to that second datastore? I don't want to shut down the VM that is running; is that possible? (I could see how awesome NFS is now.) I am also in the same position as you are: I don't have the big-bucks budget to deploy HA and DRS and all that great expensive stuff. This could definitely work well, I just need to find out if I could take HOT copies of a VM.
Thanks!
A hot backup is simple. Use the VI client to take a snapshot and then copy the flat VMDK to your NFS store. There is a nice script by lamw somewhere in http://communities.vmware.com/message/1029047#1029047 (called "ghetto" something) that can automate the process.
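For a rough idea of what such a script automates, here is a minimal outline of the snapshot-then-copy sequence from the ESXi console. The VM name, datastore paths, and backup mount point are all hypothetical, and the exact vim-cmd/vmkfstools invocations can differ between ESXi builds, so treat this as a sketch rather than a drop-in script.

```shell
#!/bin/sh
# Minimal hot-backup outline (hypothetical names and paths throughout).
VM_NAME="myvm"                                            # hypothetical VM name
SRC="/vmfs/volumes/datastore1/${VM_NAME}"                 # assumed source path
DST="/vmfs/volumes/nfsbackup/${VM_NAME}-$(date +%Y%m%d)"  # assumed NFS mount

# Look up the VM's numeric id from its name.
VMID=$(vim-cmd vmsvc/getallvms | awk -v n="$VM_NAME" '$2 == n {print $1}')

# 1. Take a snapshot: changes now go to a delta file, freeing the base VMDK.
vim-cmd vmsvc/snapshot.create "$VMID" backup "pre-copy snapshot" 0 0

# 2. Copy the base disk (thin clone) and the config file to the NFS store.
mkdir -p "$DST"
vmkfstools -i "${SRC}/${VM_NAME}.vmdk" -d thin "${DST}/${VM_NAME}.vmdk"
cp "${SRC}/${VM_NAME}.vmx" "$DST/"

# 3. Drop the snapshot; ESXi merges the delta back into the base VMDK.
vim-cmd vmsvc/snapshot.removeall "$VMID"
```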
s1xth, I am trying ESXi-to-ESXi FTP using FlashFXP, but when I attempt to transfer I get failed transfers no matter what I try, and that's with AllowForeignAddress set on both servers. I can FTP from an ESXi host to my local machine, just not to another ESXi host. Is there anything else you did to get it working?
[L] TYPE A
227 Entering Passive Mode (172,23,22,246,245,124).
Opening data connection IP: 172.23.22.246 PORT: 62844
150 Opening ASCII mode data connection for file list
List Complete: 498 bytes in 0.30 seconds (1.6 KB/s)
227 Entering Passive Mode (172,23,22,140,227,87).
Opening data connection IP: 172.23.22.140 PORT: 58199
150 Opening ASCII mode data connection for file list
List Complete: 761 bytes in 0.73 seconds (1.0 KB/s)
Transfer queue completed
1 File failed to transfer
FYI: I got FreeNAS up and running in my test environment; going to try to move VMs between ESXi hosts sometime soon. It's late on Friday and I am ready to go home.
Re-visiting this idea: I set up a couple of NFS shares using FreeNAS (another story) and I am curious as to how the 'hot' backups actually work. The only way snapshots work now is if the base image is intact, correct? If you were to lose the base image and restore it from a previous backup, does it matter when the snapshot was taken? I.e., could I restore the original base image, then restore the snapshot on top of it, and be back where I was when the snapshot was taken?
A snapshot is a point-in-time view of a drive. Once the snapshot is taken, all changes to the drive are written to a change file, not to the original VMDK file. You wouldn't normally be able to copy the original VMDK, since it would be an open file ("Access Denied"); taking a snapshot releases the original VMDK file and allows it to be copied. After the copy you can delete the snapshot from Snapshot Manager (the change file) and ESXi will merge the original VMDK and the change file.
Sorry, you didn't need an explanation of snapshots.
If you were to use Converter (Windows only) to make the HOT copy, then Converter can, if you choose that option, make a copy, track the changes as the copy progresses, and when complete shut down the services on the source machine, merge the changes to the destination machine, shut down the source, and power on the destination. Pretty cool deal if you ask me.
I suppose a good question is: do you practice disaster recovery? It would be really useful, so that you get a better idea of where you end up after a recovery. Decide what you need to have at the end (the recovery point) to see what you need to do to ensure you have a business-recoverable situation. I don't think you could rely on delta files for DR.
With this VM install? No, I don't. Originally it was set up as a virtual development environment, one in which the images could be rebuilt at any time; however, it has since morphed into a mesh of images, some that cannot be easily rebuilt and some that can.
At this point it's just 4 x 300GB RAID 1 disks, with no offsite backups. All of it is housed on a BladeCenter S utilizing the internal SAS disks.
I would like to back up the base images and some snapshots at this point, just to be on the safe side. I would need to do it to some external device; I am trying NFS with FreeNAS but having some issues with the NFS shares going 'inactive.'
Does that help?
Update to this: I had to reconfigure my storage into a RAID 5 array, so I had to move the VMs off the SAS disks in the DSMs on the BladeCenter S. I had a Supermicro server sitting around with 2 250GB drives installed. I loaded up FreeNAS and started moving the images; write speed was OK, roughly 80-90Mbps, which I could live with. Now that I have the RAID up and running and am moving the VMs back onto internal storage, the read speed is atrocious, hovering around 30MBps. I didn't expect such a difference, but since I already took this path I have to stick with it. When I am done I will ditch FreeNAS for something else, although I'm not sure what at this point.
So FreeNAS is OK: good write speed but terrible read speed. Roughly 40GB images are taking 4 hours to migrate back to the new volumes.
Use a separate management network specifically for storage. Fast SAS drives help, and more, smaller drives are better than two big ones. You should be able to move at least a GB per minute.
NFS configuration can be very simple. A one-liner is all it takes to configure an NFS share. Once you set up the share there is no need to touch it. A simple Linux, BSD, or Solaris server is all that you need. No GUI, just simple.
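As a concrete illustration of that one-liner, an export on a stock Linux box might look like the fragment below. The path and subnet are assumptions, not anything from this thread's actual setup:

```shell
# /etc/exports on the Linux NFS server -- one line per share.
# no_root_squash is needed because ESXi mounts NFS as root;
# the path and subnet here are placeholders.
/srv/vmbackup  172.23.22.0/24(rw,sync,no_root_squash,no_subtree_check)

# Reload the export table after editing:
#   exportfs -ra
```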
How would I configure that separate network within ESXi? I am confused on that part; the thought crossed my mind, but I'm just not clear on how to implement it with ESXi.
I have separate vSwitches, but VMs are configured on those. Would I have to change the IP of the VMkernel as well? Move that to a separate MGMT network and utilize that network for moving data?
I posted how I use it a few posts back. Each server needs at least two NICs: one NIC is connected to your production network, and one NIC is connected to a separate management network.
I really think that keeping storage/management totally separated is good for security if nothing else.
Alright, so two NICs per VMware image, each on a separate vSwitch. Seems simple enough.
No. One NIC per VM; two NICs on the host. On the management side you need a VMkernel connection and a management connection. The production side only has a VMkernel connection. You can add VMs to production and to management, but they are on two separate networks. They need to be isolated, with a router or a switch with routing ability between them. The ESXi client is only visible on the management network.
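From the ESXi command line, one possible way to lay out that second network is sketched below, using the esxcfg tools of that era; the same thing can be done through the VI client's networking configuration. The vSwitch name, vmnic number, port group name, and IP are all placeholders for your environment:

```shell
# Hedged sketch: adding a separate management/storage network on the
# second physical NIC. All names and addresses are placeholders.
esxcfg-vswitch -a vSwitch1                   # create a second vSwitch
esxcfg-vswitch -L vmnic1 vSwitch1            # uplink it to the second NIC
esxcfg-vswitch -A "Management" vSwitch1      # port group for mgmt/storage
# VMkernel port on the management network (carries the NFS traffic)
esxcfg-vmknic -a -i 172.23.23.10 -n 255.255.255.0 "Management"
```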
I am assuming then that all your VMs are on your production subnet and separate from the MGMT network? In my current environment my VMs are on the same subnet as my MGMT network. I have a couple of other subnets configured as well.
However, for this to work I would need to migrate my VMs onto another vSwitch and off the MGMT vSwitch, correct? Then make the MGMT vSwitch my data/MGMT network.
Migration is instant, and the VM can be running.
It allows you to set up, configure, patch, install, etc. in the management network and then change connections. Clone a running machine to the management network and test patching there. I run all monitoring from inside the management network; that way I can see both production and management.
I'm in a dedicated web server environment where there are two servers, each with a SCSI mirror, a SATA drive for storage of VMs and OS backups, and no extra servers lying around to serve as NFS hosts. Is there a way to simply add NFS to the ESXi host itself? If not, what would you recommend in that situation?
Thanks!