Let's assume that the server where you want to write your backups via NFS is called 'nfsserver' and that the path to the backup is /storeBackup. You can then use the following entry in /etc/exports on nfsserver (example for GNU/Linux; the syntax can differ on other Unix-like operating systems):

/storeBackup   192.168.1.0/24(rw,no_root_squash,async)

Here 192.168.1.0/24 means that access from any IP address beginning with 192.168.1 is allowed.
You should run
# exportfs -a
to make your entry active. With
# exportfs -v
you can check which directories are currently exported and with which options.
You probably have to adapt the IP address and the netmask to your needs. Using no_root_squash is important so that the root user on the client has root permissions on the mounted file system. Use async to get much better write performance (see man mount for further explanation). Note that if you use async, storeBackup will not be able to detect when the file system's capacity is exceeded.
In /etc/fstab on the NFS client (where you run storeBackup) you should configure a line like
nfsserver:/storeBackup /backup nfs user,exec,async,noatime 1 1
This will mount the file system /storeBackup of nfsserver to /backup on your client. The mount happens at boot time, or when you run
# mount /backup
on the NFS client.
There are many more options for NFS. This short description only aims to give some helpful hints, not to explain NFS.
Read or write access?
You probably want write access for storeBackup.pl but only read access for the users. There are at least two ways to achieve this:
precommand = mount /backup -o remount,rw
postcommand = mount /backup -o remount,ro
This gives storeBackup.pl write access (rw = read-write) during the backup. Naturally, you can also wrap a script around storeBackup.pl that does the same. The disadvantage of this method is that the users also get write access while the backup is running.
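Such a wrapper script could look like the following sketch. The configuration file path /etc/storebackup.conf is an assumption for illustration; adjust it to your setup. Note that this requires root privileges, since remounting is restricted to root.

```shell
#!/bin/sh
# Hypothetical wrapper around storeBackup.pl: remount the backup file
# system read-write, run the backup, then remount it read-only again.
set -e

# give ourselves write access for the duration of the backup
mount /backup -o remount,rw

# run the backup; remember a failure instead of aborting immediately,
# so that the file system is remounted read-only in any case
storeBackup.pl -f /etc/storebackup.conf || status=$?

# revoke write access again
mount /backup -o remount,ro

exit ${status:-0}
```

This has the same disadvantage as the precommand/postcommand approach: while the backup runs, the users also have write access to /backup.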
Heinz-Josef Claes 2014-04-20