Can I ask why you want to do this...?
If it's an attempt to make ESX a NAS datastore for other ESX hosts - don't bother... it's not an NFS v3 over TCP implementation - which is one of the requirements for full NAS support...
Actually, I was attempting this so that I could run a fully functional demo ESX on a server that was previously used for ESX 2.5 running IDE hardware. I knew it wouldn't be a simple thing to accomplish, hence the question. However, I have come up with a solution that works. I am in the final stages of testing it and writing up some documentation that I will post here.
I will say that it works exactly like I expected but getting all the ports properly assigned into the ESX firewall was a bit of a pain.
Please note that this is a bit messy and I will write up a nicer set of instructions once I have more time.
A little background: I have a server that was a test/demo machine running ESX 2.5 that only had IDE drives. Because of this, when I did a clean install of ESX 3, I learned that I would not be able to access the remaining 50GB of disk on my test machine. Obviously, this frustrated me. After a little digging, I found that the ESX 3 COS supports NFSD, but only for UDP connections. No good when connecting from the VMkernel, which only uses TCP for NFS mounts. Knowing just a little about Linux as a whole, I knew that I really just needed a TCP-enabled build of the nfsd.o module. The problem was getting one. Below are the steps I used to build the module, as well as the config changes needed to allow inbound NFS connections through the ESX firewall.
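You can see the problem for yourself by listing the registered RPC transports. This is a hedged sketch: the `rpcinfo -p` listing below is invented sample output for illustration, not captured from a real host; on the COS itself, just run `rpcinfo -p` directly.

```shell
# Invented sample of what `rpcinfo -p` shows on a stock ESX 3 COS.
# Columns: program, version, protocol, port, service.
sample='100000  2  tcp  111   portmapper
100000  2  udp  111   portmapper
100003  2  udp  2049  nfs
100003  3  udp  2049  nfs'

# Count the TCP transports registered for nfs -- a count of 0 means
# the stock nfsd is UDP-only, which the VMkernel cannot mount:
echo "$sample" | awk '$3 == "tcp" && $5 == "nfs"' | wc -l
```

On a stock COS the count comes back 0; after the rebuild described below, the same check should report at least one TCP line for nfs.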
Once you have a good nfsd.o file, it can be copied to other machines to bypass the build process. I will post my nfsd.tgz on my company website along with this document when I have time.
----- NOTE: DO NOT RUN THE COMPILE ON A PRODUCTION MACHINE! -----
1) As with any modification to an OS, make a backup of the things you're going to possibly change. In this case, the modules folder in /lib/modules.
cd /lib/modules && cp -r 2.4.21-37.0.2.ELvmnix 2.4.21-37.0.2.ELvmnix.old
Kernel module changes ** Skip to #11 if you have a valid nfsd.o **
3) nano vmnix.config, search for CONFIG_NFSD_TCP and change the commented line to:
CONFIG_NFSD_TCP=y
4) cp vmnix.config kernel-2.4.21-i686-vmnix.config
5) cd ..
6) nano Makefile, change the end of the EXTRAVERSION line from ELcustom to ELvmnix
7) make oldconfig && make dep
When make oldconfig runs, you can skip through the questions by hitting Enter. To be sure that TCP is enabled, look for the NFS daemon section and answer Y there. (If you performed steps 2 & 3 correctly, it already will be.)
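A quick sanity check after `make oldconfig` finishes: grep the generated .config for the option. This is a minimal sketch, assuming you're sitting in /usr/src/linux-2.4 where the .config lives.

```shell
# From /usr/src/linux-2.4, after `make oldconfig` has run:
if grep -q '^CONFIG_NFSD_TCP=y' .config 2>/dev/null; then
    echo "NFSD over TCP: enabled"
else
    echo "NFSD over TCP: NOT enabled -- redo steps 2-4"
fi
```

If it reports not enabled (the line reads `# CONFIG_NFSD_TCP is not set`), fix vmnix.config and re-run `make oldconfig` before going any further; building without it just reproduces the stock UDP-only module.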
8) nano include/linux/version.h and edit the following line (or add it if it doesn't exist):
#define VMNIX_BUILD "27701"
9) make modules
(Building the full kernel may not be required, but I did it to make sure everything links.)
10) If the build completed properly, you can copy the newly created nfsd.o module.
cp ./fs/nfsd/nfsd.o /lib/modules/2.4.21-37.0.2.ELvmnix/kernel/fs/nfsd/
Service and Firewall Configuration
11) nano /etc/init.d/nfs
12) change line 6 to be:
# chkconfig: - 50 20
This causes the nfs daemons to start BEFORE the vmware-late process (which mounts NFS stores) when you run step 13
13) chkconfig --level 345 nfs on
14) chkconfig --level 345 portmap on
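Why the `chkconfig: - 50 20` line matters: init runs the `S`-prefixed symlinks in /etc/rc3.d in lexical order, so the start number controls ordering. A hedged sketch of how that plays out; the `S90vmware-late` and `S13portmap` names are assumed examples for illustration, so run `ls /etc/rc3.d` on your own host to see the real link names and numbers.

```shell
# Init starts the S-links in lexical (i.e. numeric) order. Simulating
# with assumed link names -- check /etc/rc3.d for the real ones:
printf '%s\n' S90vmware-late S50nfs S13portmap | sort
# portmap starts first, then nfs, then vmware-late -- so the NFS
# exports are already up when vmware-late tries to mount them.
```

If nfs ended up with a higher start number than vmware-late, the mount at boot would race a daemon that isn't running yet, which is exactly what the priority change in step 12 prevents.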
15) esxcfg-firewall -o 111,tcp,in,sunrpc
16) esxcfg-firewall -o 111,udp,in,sunrpc
17) esxcfg-firewall -o 369,tcp,in,rpc2portmap
18) esxcfg-firewall -o 369,udp,in,rpc2portmap
19) esxcfg-firewall -o 808,tcp,in,mountd
20) esxcfg-firewall -o 2049,tcp,in,nfs
21) esxcfg-firewall -o 2049,udp,in,nfs
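After steps 15-21, each opened port should show up as an ACCEPT rule in the COS firewall. Here's a hedged helper sketch for checking that: the RULES text below is invented `iptables -L -n` style output, not captured from a real ESX host; on the host itself you'd set `RULES=$(iptables -L -n)` and run the same loop.

```shell
# Invented sample of `iptables -L -n` output. On a real host use:
#   RULES=$(iptables -L -n)
RULES='ACCEPT  tcp  --  0.0.0.0/0  0.0.0.0/0  tcp dpt:111
ACCEPT  udp  --  0.0.0.0/0  0.0.0.0/0  udp dpt:111
ACCEPT  tcp  --  0.0.0.0/0  0.0.0.0/0  tcp dpt:369
ACCEPT  udp  --  0.0.0.0/0  0.0.0.0/0  udp dpt:369
ACCEPT  tcp  --  0.0.0.0/0  0.0.0.0/0  tcp dpt:808
ACCEPT  tcp  --  0.0.0.0/0  0.0.0.0/0  tcp dpt:2049
ACCEPT  udp  --  0.0.0.0/0  0.0.0.0/0  udp dpt:2049'

# Every NFS-related port should be matched by an ACCEPT rule:
for p in 111 369 808 2049; do
    if echo "$RULES" | grep -q "dpt:$p$"; then
        echo "port $p open"
    else
        echo "port $p MISSING"
    fi
done
```

Any port reported MISSING means the corresponding esxcfg-firewall command didn't take, and the mount from the VI Client will hang or fail at that stage.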
22) nano /etc/sysconfig/nfs and add:
MOUNTD_PORT=808
This forces the mountd daemon to listen on port 808; otherwise it picks one at random and inbound connections won't make it through the firewall.
Restart the server and you will now have a working TCP NFS daemon on the ESX 3 COS. From there, you can export your folder and mount it using the VI Client.
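For reference, the export itself lives in /etc/exports. A minimal example entry follows; the /vmstore path and the 192.168.1.0/24 subnet are placeholders, so substitute your own, then run `exportfs -ra` to re-read the file.

```
/vmstore  192.168.1.0/24(rw,no_root_squash,sync)
```

The no_root_squash option matters here because the VMkernel mounts as root; without it the datastore may mount but writing VM files to it will fail with permission errors.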
If the nfsd services fail to load on restart, you may need to copy the System.map (export list) created by the kernel build into the /boot folder. This does not change the actual kernel, but because the nfsd.o module needs some exports that weren't previously used, the dependency maker added them to the System.map.
cp /usr/src/linux-2.4/System.map /boot/System.map-2.4.21-37.0.2.ELvmnix
To confirm everything is functioning, run rpcinfo -p to get the port and transport list of RPC services. The line to look for has "2049 tcp nfs" in it.
** Edited for typos ** 8/8/2006
** Edited with fixes from posts ** 8/29/06
This is really very cool - and I can't wait to try it on my lab server...
Do you think you could look at Enterprise iSCSI and have ESX act as an iSCSI target... that way I could present LUNs and format them as VMFS...
My admin guide has a how to do this with Fedora Core 5 - but I would like to be able to do this with ESX...
Of course, this is totally nuts and shouldn't be done... but I would like to try... I am gonna try your instructions tomorrow...
Point me over to your guide for iSCSI and I'll see what I can come up with. No promises though, as I haven't played w/ iSCSI at all yet.
Unfortunately, I don't think that service will work for us w/o some major help. The kernel the COS is built on is 2.4.21, and the readme says it needs 2.6.14 or newer. I'll check whether the crypto build options it needs are present in this kernel, though, and see what happens.
I've managed to get this to work on a Dell GX270 and I have next to no Linux knowledge.
Wow, I'm impressed... I'm gonna try my laptop next...
I have some good news ... I have a partially working iSCSI driver that runs on the COS! Just have a bit more work to do and I'll be able to write up the instructions.
How is the performance when you do this? Does it affect the performance of the entire ESX host? Before, COS performance affected the entire ESX host.
Sweet! Waiting with bated breath!
Not for production - but for test/dev - and for those people who don't have the hw for dedicated iSCSI system...
Incidentally, NAS/iSCSI wouldn't affect the VMs, as they run under a different kernel. It would most likely affect the responsiveness of the ESX host for management...
Is it bad when the kernel comes back and says, "bad things happened"? (lol) This may be harder than I thought. The iSCSI target daemon seems to have a bit of trouble locking the interrupts, so while it can get the drive geometry and report it back to the VMkernel, it can't read enough of the disk to be useful. Still trying though ... so we'll see what comes out of it.
Would be nice if the VMware devs updated to a newer rev of the Linux kernel.