Minicluster:NFS

The Network File System (NFS) allows the whole cluster to share part of its file system. In this paradigm, one or more machines keep the files on their physical disks and act as NFS servers, while the other machines mount that file system locally. To the user, it looks as if the files exist on all of the machines at once.

If there is more than one file server (say, MESTRE1 and MESTRE2), they cannot share the same /home directory (but one can share /home/students while the other shares /home/faculty).

NFS Server

Supporting Packages for the NFS Server

Any machines that will act as NFS servers need to

apt-get install nfs-common nfs-kernel-server

Included in the dependencies for these is portmap, which is responsible for mapping RPC services to the ports they listen on and handing incoming connections to the right one.
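As a quick sanity check (my addition, not part of the original steps), rpcinfo asks portmap which RPC services are registered; once the packages are installed and the server is running, portmapper, mountd, and nfs should all show up:

rpcinfo -p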

/etc/exports

Next, /etc/exports needs to be configured. This file, automatically installed, controls which part of the server's file system will be shared with the other machines. Comments can be added with the # sign. The format of the file is

<directory to share> <allowed machines>(options)

(Notice that there is no space between the allowed client specification and the opening parenthesis of the options.)

For sharing users' home directories (more about this with LDAP), it's often wise to export the directory from somewhere other than /home, because all of the Debian machines will already have a /home directory. For instance, I'm using /shared because it doesn't already exist on all of my NFS clients, and there won't be confusion with mounting over files already in that directory.
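The exported directory has to exist on the server before it can be shared. As a sketch (the permissions here are just a reasonable choice, adjust as needed), creating it looks like:

mkdir -p /shared
chmod 755 /shared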

The clients can be specified in a variety of ways: by IP address, domain name, or CIDR mask. In my example below, I'm allowing all machines with an IP in my network range (192.168.1.1 - 192.168.1.254) to mount this directory. Since I'll also be configuring them with DNS to be X.raptor.loc, I could also use *.raptor.loc for my clients. (However, using IP addresses doesn't require DNS to be up and running, which removes one possible point of failure, and so is generally more reliable.) Multiple specifications for a given mount point may be space-separated, with the parentheses immediately following each client specification, like the following:

<directory to share> <allowed machines #1>(options for #1) <allowed machines #2>(options for #2)
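For example (a hypothetical line, not from my configuration), giving one trusted machine write access while the rest of the subnet mounts read-only would look like:

/shared   192.168.1.254(rw,sync) 192.168.1.0/24(ro,sync)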

Options for /etc/exports include

  • rw/ro - rw allows both reads and writes to the filesystem. NFS acts read only (ro) by default.
  • async/sync - The asynchronous option allows the NFS server to respond to requests before committing changes. According to man, this can improve performance but can corrupt data in the event of a crash. Synchronous is the default.
  • wdelay/no_wdelay - Write delay allows the NFS server to put off committing changes to disk if it suspects that another write is coming shortly.
  • subtree_check/no_subtree_check - Subtree checking improves security by checking not just that the client has rights to mount a directory, but all of the directory's subdirectories as well. Subtree checking is enabled by default, but the NFS server will complain if you don't specifically indicate it.
  • root_squash/no_root_squash - Root squashing prevents the root user on a client machine from acting as if it were the root user on the exported filesystem; this is more secure. It is on by default.

My /etc/exports file looks like this:

/shared   192.168.1.0/24(sync,no_wdelay,subtree_check,rw,root_squash)

I am exporting gyrfalcon's /shared directory to any machine within my internal IP range. Sync is enabled, so the NFS server finishes writing changes to disk before responding to new requests. I do not have a write delay, so changes will be written immediately. Any subdirectories will also be checked for proper permissions before being mounted. The directory will be readable and writable, and root on any NFS client will not have the same rights as root on gyrfalcon itself.

Restarting the NFS Server (Learning to Share)

Go ahead and restart the NFS kernel server with

/etc/init.d/nfs-kernel-server restart

or with

exportfs -var

Both methods make the server re-read /etc/exports and apply any changes. exportfs can also be used to export just to a specific client. Exporting only to eyrie, for example, would be

gyrfalcon:~# exportfs 192.168.1.254:/shared

To make sure that the mount is being shared, use showmount -e. You should see the shared directories listed:

gyrfalcon:/user# showmount -e
Export list for gyrfalcon:
/shared 192.168.1.0/24
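showmount can also be run from any machine that has nfs-common installed, pointing it at the server's address (192.168.1.200 in my setup) to confirm that the export is visible over the network:

showmount -e 192.168.1.200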

NFS Client

NFS Client Packages

Any machines that will act as NFS clients and will be using the shared filesystem need to

apt-get install nfs-common

Again, portmap is included with nfs-common, so it doesn't need to be installed separately. Then /etc/fstab needs to be configured.
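Before touching /etc/fstab, it can be worth doing a one-off mount by hand to confirm the client can actually reach the export (here /mnt is just a temporary mount point):

mount -t nfs 192.168.1.200:/shared /mnt
ls /mnt
umount /mnt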

/etc/fstab

/etc/fstab provides the opposite functionality of /etc/exports: rather than saying what to export, this file tells the machine what to mount and where to mount it. In other words, it includes everything the machine needs to mount, even its own hard drives, floppy drives, CD-ROM drives, and so on. The format of this file is as follows:

<source to mount from> <local mount point> <type of filesystem> <options> <dump> <pass>

You'll want to add the NFS line under any existing lines, so that it gets mounted after your local drives. When you specify an NFS mount, use

<NFS server>:<remote location>

You should be able to use the defaults option for the mount options (the restrictions set in /etc/exports on the server still apply), and dump and pass don't need to be enabled, so they can both be 0.
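If defaults turns out not to be enough, NFS-specific mount options can be listed explicitly in the options column instead; a common variant (optional, not what I use below) pins down the behavior with hard,intr and explicit read/write sizes:

192.168.1.200:/shared     /shared   nfs     rw,hard,intr,rsize=8192,wsize=8192  0  0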

My /etc/fstab looks like this.

# /etc/fstab: static file system information.
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
proc            /proc           proc    defaults        0       0
/dev/sda1       /               ext3    defaults,errors=remount-ro 0       1
/dev/sda2       none            swap    sw              0       0
/dev/hdb        /media/cdrom0   udf,iso9660 user,noauto     0       0
/dev/fd0        /media/floppy0  auto    rw,user,noauto  0       0
192.168.1.200:/shared     /shared   nfs     defaults        0       0

After specifying it in /etc/fstab, the share will automatically be mounted when the machine starts up, provided the mount point exists. (If you're using /shared or another directory that isn't created automatically as part of Debian, you'll need to create it.) To mount it without having to reboot, use

mount <mount point>

For instance, mine would be mount /shared. Similarly, you can also do umount <mount point> to unmount a filesystem.
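Putting those two steps together on a client, the whole sequence (assuming the same /shared path as above) is just:

mkdir -p /shared
mount /shared
df -h /shared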

Troubleshooting: NFS Mounts not Loading at Boot

I had a problem with my firewall machine not automatically mounting the NFS filesystems at boot, for whatever reason. I could issue mount -a as root as soon as the system booted up, but the mount wouldn't happen at boot time, despite the /etc/fstab entry. To "hack fix" it, I added my own script at /etc/rcS.d/S46mymount. (46 runs right after S45mountnfs.sh and S46mountnfs-bootclean.sh.) It needs to be executable (chmod +x S46mymount), but the file itself is simply:

#!/bin/bash

mount -a
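Creating and enabling that script by hand (same path and contents as above; the heredoc is just one convenient way to write it) looks like:

cat > /etc/rcS.d/S46mymount <<'EOF'
#!/bin/bash
mount -a
EOF
chmod +x /etc/rcS.d/S46mymount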

If anyone knows of a better fix for this, please contact me at kwanous <at> debianclusters <dot> org.

Troubleshooting

Sometimes an error occurs:

[maquina~] $ mount /shared
mount: RPC: Timed out

Restarting the portmap service worked for the author:

[maquina~] $ /etc/init.d/portmap stop
Stopping portmap daemon....
[maquina~] $ /etc/init.d/portmap start
Starting portmap daemon....
[maquina~] $ mount /shared
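If restarting portmap doesn't help, a reasonable next step (my suggestion, not from the original page) is to check from the client that the server's RPC services are reachable at all:

rpcinfo -p 192.168.1.200
showmount -e 192.168.1.200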
