I wanted to switch from MAMP to (virtual) LAMP on my Mac in order to have a development platform that is identical to my production servers.
In production I use Ubuntu 10.04 as my server OS of choice, hosted by Amazon.
I used VMWare before, basically just to test websites in IE. After IE6 got killed (thank God!) I stopped using VMWare.
I thought installing Ubuntu Server would be easy; I couldn't have been more wrong!
I did a couple of installs, which all failed (due to the Fusion lite package and VMware Tools download/install issues).
After some searching I found out that I needed the full version of Fusion (by the way, even with this version I had issues with sharing on an easy install -> empty /mnt/hgfs).
Then I did a manual install and only installed ssh.
In order to install VMware Tools I did:
sudo apt-get install build-essential checkinstall
sudo apt-get install linux-headers-virtual
sudo apt-get install --no-install-recommends open-vm-dkms
sudo apt-get install --no-install-recommends open-vm-tools
(see: Installing VMware tools on linux guest)
Then, via the VMware menu, I selected Virtual Machine -> Install VMware Tools.
In linux:
sudo mount /dev/cdrom /cdrom
tar xzvf /cdrom/VMwareTools-8.8.0-465068.tar.gz -C /tmp/
cd /tmp/vmware-tools-distrib/
sudo ./vmware-install.pl -d
sudo reboot
Via Sharing Options added a folder and it worked! Yey!
Because I need the Apache user (www-data) to be able to write to this share, I needed to change permissions on this folder. The Apache user has uid 33 and gid 33:
In linux:
id www-data
uid=33(www-data) gid=33(www-data) groups=33(www-data)
On Ubuntu the hgfs mount point is defined in /etc/mtab:
.host:/ /mnt/hgfs vmhgfs rw,ttl=1 0 0
Editing this file is useless, so I copied this line to /etc/fstab and added some user parameters to it:
.host:/ /mnt/hgfs vmhgfs rw,ttl=1,uid=33,gid=33 0 0
After a reboot I had some issues mounting /mnt/hgfs and had to skip mounting, so I added 'nobootwait' to the entry in /etc/fstab; the line now reads:
.host:/ /mnt/hgfs vmhgfs rw,ttl=1,uid=33,gid=33,nobootwait 0 0
Booting went just fine now, except that ownership was still uid 501 and gid dialout:
edin@ubuntu:~$ ls -l /mnt/hgfs/
total 1
drwxr-xr-x 1 501 dialout 136 2011-09-16 01:30 Websites
I manually re-mounted:
edin@ubuntu:~# umount /mnt/hgfs
edin@ubuntu:~# mount /mnt/hgfs
edin@ubuntu:~# ls -l /mnt/hgfs/
total 1
drwxr-xr-x 1 www-data www-data 136 2011-09-16 01:30 Websites
Needless to say, I really got pissed off (excuse my French). So I got dirty (I don't recommend these steps, there is a better way to do this, I just wanted it to work):
I created a shell script which does the remounting:
sudo nano /bin/remount_hgfs
In the file I added:
#!/bin/sh -e
umount /mnt/hgfs
mount /mnt/hgfs
Made it executable:
sudo chmod +x /bin/remount_hgfs
I needed this to execute on boot, so in /etc/rc.local I added (just before exit 0):
sh /bin/remount_hgfs
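For reference, this is roughly what /etc/rc.local looks like after the edit (a sketch; any existing commands in the file stay where they are):

```shell
#!/bin/sh -e
#
# rc.local - executed at the end of the multiuser boot sequence
# (sketch of the file after the edit)

# re-mount the hgfs share so the uid/gid options from /etc/fstab take effect
sh /bin/remount_hgfs

exit 0
```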
After a reboot everything finally worked. I can finally get back to work.
Has someone had same issues as me (and is there a better solution)?
NOTE: Running Ubuntu 11.10 on VMWare Fusion 4.0.2 on Mac OS X (10.7.2)
Thanks for your quick fix to solve the user and group issues.
I just got stuck with another problem: when I modify a file on my host, my guest can no longer access it.
Steps to reproduce:
Listing on host:
macbook:myshares user$ ls -l ~/myshares
total 4
-rw-r--r-- 1 dragonbe staff 750 Oct 26 22:18 index.php
-rw-r--r-- 1 dragonbe staff 18 Oct 27 12:18 testfile.txt
-rw-r--r-- 1 dragonbe staff 18 Oct 27 12:29 testfile2.txt
-rw-r--r-- 1 dragonbe staff 18 Oct 27 12:40 testfile3.txt
Listing on my guest:
user@server: ls -l /mnt/hgfs/myshares
ls: cannot access /mnt/hgfs/myshares/testfile.txt: Invalid argument
-rw-r--r-- 1 www-data www-data 750 Oct 26 13:18 index.php
-????????? ? ? ? ? ? testfile.txt
-rw-r--r-- 1 www-data www-data 18 Oct 27 12:29 testfile2.txt
-rw-r--r-- 1 www-data www-data 18 Oct 27 12:40 testfile3.txt
Wonder why this is and more importantly: how can I solve this???
The only way I found that works is to unmount and mount the shares after each change, but this is not really workable.
If someone has a solution for this, it would be very much appreciated.
DragonBe,
To be honest, I ditched VMware (for another virtualisation app).
I just couldn't fight it any more; instead of getting things done I was losing time on getting the VM running.
Besides, the 'solution' I provided isn't really a solution, but a dirty hack.
I will not describe my new VM here, but if you want I can PM you with details.
I am extremely (extremely!) happy with my development setup.
Greetings / Groetjes
Edin
As we're using a VM delivered by a third-party supplier, we're kinda stuck with this solution. Besides, we've paid a good amount of money for this product that used to be a very cool virtualization tool. Wonder what happened there…
Ping me with Google Talk at dragonbe [at] gmail [separator] com, as I'm interested in stress-free transfer of VM images into other virtualization tools.
DragonBe,
I'm sorry to hear that as I cannot give you a solution to your problem.
I know this is not a solution but does this work afterwards?
chown www-data:www-data testfile.txt
Or
chown -R www-data:www-data your-whole-share-folder
If it does, you could run a cron job (as root) every minute that fixes the permissions on the guest. This is a sh*tty solution, but it should work.
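If the chown does work, a root crontab entry along these lines would apply the fix every minute (a sketch; /mnt/hgfs/Websites is an assumed share path, adjust it to yours). Install it with sudo crontab -e:

```shell
# m h dom mon dow  command
# every minute, force ownership of the share back to the Apache user
* * * * * chown -R www-data:www-data /mnt/hgfs/Websites
```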
I agree on the quality of the VMware product though: it seems they are pushing things to work on Windows and forgetting other OSes. Really a shame.
Anyway, I will ping you ASAP.
Good luck
Edin
This issue has not been solved in 4.0.2 yet? I reported it months ago: http://communities.vmware.com/thread/317246
This is the ONLY major show stopper for us.
I really really really would like to upgrade and buy a bunch of 4.0.2 licenses but if this isn't fixed what is the point?
Can vmware at least look into this and propose a workaround?
Why don't you just use standard OS-supported network file sharing (SAMBA/CIFS) instead? It works ALL the time. The VMware Shared Folders feature has had quirks since day one... it's great for quick-and-dirty, short-use sharing; but if you want a robust solution, use the one that's been in place/supported by OS's for the past 20+ years.
hgfs worked perfectly until 3.1.2. The benefit of it was that we could have a single image that accessed a developer's files on their local drive, no matter which host OS (Linux, Windows, OS X) they used. The dev environment files were local, so access to those files was not impacted by a network fs. Developers had native access to files on local drives (in our case it's millions of files, large and small), and it makes a difference.
Having the guest access the files over the network or hgfs was plenty fast, as it didn't need to grep and check millions of files many times an hour; it just needed to display some.
Now you might think that running some kind of samba/nfs server on the host and then having the guest access it would be the same thing, but in practice it is not, as we then have to worry about each host OS and make sure it has all the services running (try to run an NFS server on Windows: pita).
The best and easiest solution was VMware with hgfs. We could ship an image to a new dev, have them share their work dir, and they are up and running no matter what OS they are on. Using the exact same image for Windows, Linux, and OS X hosts was really nice also.
hgfs worked perfectly before, without a single glitch... until this issue that we see on newer releases of Linux.
>( try to run an nfs server on windows. pita)
No need. Every Windows OS since DOS has supported SMB/Windows File Sharing.
So no matter what your host OS is, it's usually quite simple to enable file sharing, and just point the guest to the IP address of the host for the server name part of the UNC path. If you're using NAT or Host-Only for your VM connection, then the host is at x.y.z.1 for that same subnet (In other words, quite simple to configure in the VM as well).
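As a sketch of that approach from an Ubuntu guest (the share name Websites, the NAT host address 192.168.110.1, and the username are assumptions; the uid/gid options map the files to www-data just like the hgfs mount options discussed above):

```shell
# install the CIFS mount helper in the guest
sudo apt-get install cifs-utils

# with NAT networking, the host is reachable at x.y.z.1 of the VM's subnet
sudo mkdir -p /mnt/websites
sudo mount -t cifs //192.168.110.1/Websites /mnt/websites \
    -o username=macuser,uid=33,gid=33
```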
>hgfs worked perfectly until 3.1.2.
I can't comment on that part. All I know is for years, on all VMware products (Workstation and Player -- the Server-class products did not have the feature implemented due to security issues), there have been quirks and issues with the Shared Folders feature - whether Windows or Linux hosts.
Here is what we do, and it just works:
1. download the image
2. share a dir
3. run
Your way:
support users on Linux, Windows, and OS X running samba servers on their machines, worry about network permissions, firewalls, etc. Sure, if you have some IT people on staff to hand-hold people that's great, but our previous setup worked well without any of that.
The real-world example: we can have a remote graphics designer who knows nothing about setting that stuff up on Windows or OS X. It really is a pita to try to walk someone through configuring servers on their host remotely.
Additionally, our image also pulls its network config over hgfs: IP addresses, host names, etc. A user pulls a dev system config from git to the shared hgfs dir, runs the image, and the image is configured in such a way that it looks for its info in the hgfs share BEFORE networking is even up. This is really handy: when our dev env needs, say, a new virtual host with a new IP address, we can just change it in the config and the image will pick it all up. I can copy my dev share, change the IP addresses in the shared hgfs config, start a new VM with the new share, and electromagically I have a new dev env running without touching the image.
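In outline, such an early-boot step could look like the sketch below (the share path, config filename, and variable names are made up for illustration, not our actual scripts):

```shell
#!/bin/sh
# early-boot sketch: pull network settings from the hgfs share
# before networking is configured

mount -t vmhgfs .host:/ /mnt/hgfs

# network.conf defines e.g. IPADDR, NETMASK and GATEWAY
. /mnt/hgfs/config/network.conf

ifconfig eth0 "$IPADDR" netmask "$NETMASK" up
route add default gw "$GATEWAY"
```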
The whole point of this setup was to make it really simple for all users, across various experience levels and all platforms.
The point is that hgfs is a really good feature; it should just work, and it did, and probably will again, but right now it has a hiccup. As I said in the linked post, everything works fine with older Linux releases, but not with newer ones. So the only issue is that VMware Tools needs to play better with the new Linux releases.
OK, after a week of trying out different things I ended up uninstalling Fusion 4 and re-installing Fusion 3. And now all works fine. I can work on files locally while the virtual machine picks the changes up right away. Just the way I expected it to be.
Just feel a bit sad about paying for this upgrade… maybe in the next update it will be all right.
Cheers.
I've been having this issue as well, mostly since VMWare Fusion 4.0.2 was released.
Like others mentioned here I prefer to have a single folder on my Mac for my web application code to read/write and have any VM that's configured for that folder via shared folders to use it. Copying files back and forth to SAMBA shares isn't what I want to do, and I shouldn't have to do with this advertised feature of VMWare Fusion.
The only way I've found to fix this is to restart the virtual machine, which is also a pain. The strange thing is that the file becoming inaccessible doesn't happen all of the time, only sometimes. I never had this problem with Fusion 3.x.x, so I'm probably going to uninstall Fusion 4.0 and go back to Fusion 3.0.
I also have a problem with Ubuntu (at least 11.10) not mounting the VMware Tools disc automatically. I have to explicitly point the virtual CD drive at the linux.iso file found within the VMware Fusion.app package.
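For anyone stuck at the same point, a manual workaround sketch: point the VM's CD/DVD drive at the ISO inside the app bundle, then mount it in the guest (the device node may differ on your setup):

```shell
# on the Mac host, the Tools image lives inside the app bundle:
#   /Applications/VMware Fusion.app/Contents/Library/isoimages/linux.iso
# point the VM's CD/DVD settings at that file, then in the guest:
sudo mount /dev/cdrom /mnt
ls /mnt    # should contain a VMwareTools-*.tar.gz
```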
I've tried to report these two bugs to VMWare over the past two weeks with no response / no success.
As a customer, and a shareholder, I have to say I'm becoming more and more disappointed with the company. First with what I consider price gouging on the vSphere pricing, even after the revision, and then issues like this where you can't report problems or even get a response.
There is actually a much easier fix than all that, and I've created a blog post with the steps.
Cheers
This is how I fix it: Work Stuff - (Viraj Kanwade): VMware Fusion - Permissions on Shared Folders on Ubuntu
PS: @scarnie, after spending 4 hours of my office time coming up with a solution for this, I saw your comment here. Wish I had seen your comment/blog earlier. Anyway, I came up with a more generic solution which VMware should be able to use for a more permanent fix (and they can give me some discounts for future updates )