Network File System (NFS) Server and Client Configuration in Debian

NFS was developed at a time when machines were not generally able to share their drives over the network the way they can today, for example in the Windows environment. It offers the ability to share the hard-disk space of a big server with many smaller clients, in a classic client/server arrangement.

Thin clients have no hard drives and thus need a “virtual” hard disk. They mount their disks from the server over NFS and, while users think they are saving their documents to their local (thin-client) disk, they are in fact saving them to the server. In a thin-client environment, the root, /usr and /home partitions are all offered to the client from the server via NFS.

Some of the most notable benefits that NFS can provide are:

• Local workstations use less disk space because commonly used data can be stored on a single machine and still remain accessible to others over the network.

• There is no need for users to have separate home directories on every network machine. Home directories could be set up on the NFS server and made available throughout the network.

• Storage devices such as floppy disks, CDROM drives, and Zip® drives can be used by other machines on the network. This may reduce the number of removable media drives throughout the network.


Use the nfs-kernel-server package if you have a fairly recent kernel (2.4.27 or later) and you want to use the kernel-mode NFS server. The user-mode NFS server in the nfs-user-server package is slower but more featureful and easier to debug than the kernel-mode server.

Installing NFS in Debian

Making your computer an NFS server or client is very easy. A Debian NFS client needs

# apt-get install nfs-common portmap

while a Debian NFS server needs

# apt-get install nfs-kernel-server nfs-common portmap

NFS Server Configuration

NFS exports from a server are controlled by the file /etc/exports. Each line begins with the absolute path of a directory to be exported, followed by a space-separated list of allowed clients.


A client can be specified either by name or IP address. Wildcards (*) are allowed in names, as are netmasks (e.g. /24) following IP addresses, but should usually be avoided for security reasons.

A client specification may be followed by a set of options, in parentheses. It is important not to leave any space between the last character of the client specification and the opening parenthesis, since spaces are interpreted as client separators.

The options that can be specified in /etc/exports are documented in the exports man page.
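For illustration, a small /etc/exports might look like the following; the paths, network, and hostname here are hypothetical, and the option names are the standard ones from the exports man page:

```
# /etc/exports -- example entries
# Export /home read-write to one local subnet
/home   192.168.1.0/24(rw,sync,no_subtree_check)
# Export /usr read-only to a single named client
/usr    client.example.com(ro,sync,no_subtree_check)
```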

If you make changes to /etc/exports on a running NFS server, you can make these changes effective by issuing the command:

# exportfs -a
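You can then verify what the server is offering. The commands below are standard tools shipped with nfs-kernel-server; the exact output depends on your own exports:

```shell
# Re-read /etc/exports, re-exporting everything
exportfs -ra
# List the currently exported directories with their options
exportfs -v
# Query the export list as a client would see it
showmount -e localhost
```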

NFS Client Configuration

NFS volumes can be mounted by root directly from the command line. For example

# mount server:/home /mnt/nfs

mounts the /home directory from the machine named server as the directory /mnt/nfs on the client. Of course, for this to work, the directory /mnt/nfs must exist on the client and the server must have been configured to allow the client to access the volume.

It is more usual for clients to mount NFS volumes automatically at boot-time. NFS volumes can be specified like any others in /etc/fstab.

server:/home /home nfs rw,rsize=4096,wsize=4096,hard,intr,async,nodev,nosuid 0 0
server:/usr  /usr  nfs ro,rsize=8192,hard,intr,nfsvers=3,tcp,noatime,nodev,async 0 0

There are two kinds of mount options to consider: those specific to NFS and those which apply to all mounts. Consider first those specific to NFS.

The generic options that can be specified in /etc/fstab are documented in the fstab man page.

NFS Performance Tuning

NFS does not need a fast processor or a lot of memory. I/O is the bottleneck, so fast disks and a fast network help. If you use IDE disks, use hdparm to tune them for optimal transfer rates. If you support multiple simultaneous users, consider paying for SCSI disks; SCSI can schedule multiple interleaved requests much more intelligently than IDE can.

On the software side, by far the most effective step you can take is to optimize the NFS block size. NFS transfers data in chunks. If the chunks are too small, your computers spend more time processing chunk headers than moving bits. If the chunks are too large, your computers move more bits than they need to for a given set of data. To optimize the NFS block size, measure the transfer time for various block size values. Here is a measurement of the transfer time for a 256 MB file full of zeros.
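To make the overhead concrete, here is a quick back-of-the-envelope calculation (not from the original measurements) of how many write requests a 256 MB file generates at various block sizes, using awk for the arithmetic:

```shell
# Number of write requests needed to move a 256 MiB file,
# assuming one full-sized write per chunk
for size in 1024 4096 8192 32768; do
    awk -v s="$size" 'BEGIN {
        printf "wsize=%5d -> %6d requests\n", s, 256 * 1024 * 1024 / s
    }'
done
```

The request count drops by a factor of 32 between wsize=1024 and wsize=32768, which is why very small block sizes spend so much time on per-request overhead.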

# mount /mnt -o rw,wsize=1024
# time dd if=/dev/zero of=/mnt/test bs=16k count=16k
16384+0 records in
16384+0 records out

real 0m32.207s
user 0m0.000s
sys 0m0.990s

# umount /mnt

This corresponds to a throughput of 63 Mb/s.
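The quoted figure follows directly from the numbers above: 256 MB transferred in 32.207 seconds. The arithmetic can be checked with awk:

```shell
# 256 MB * 8 bits per byte / 32.207 s, in megabits per second
awk 'BEGIN { printf "%.1f Mb/s\n", 256 * 8 / 32.207 }'
```

That works out to roughly 63.6 Mb/s, or about 8 MB/s.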
Try writing with block sizes of 1024, 2048, 4096, and 8192 bytes (if you use NFS v3, you can try 16384 and 32768, too) and measure the time required for each. To get an idea of the uncertainty in your measurements, repeat each measurement several times. To defeat caching, be sure to unmount and remount between measurements.
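The write measurements can be scripted. The sketch below must run as root on the client and assumes a server named server exporting /export mounted on /mnt; adjust the names and paths to your site:

```shell
#!/bin/sh
# Measure write time for several NFS block sizes.
# "server:/export" and /mnt are placeholder names.
for size in 1024 2048 4096 8192; do
    mount -o rw,wsize=$size server:/export /mnt
    echo "wsize=$size:"
    # Write a 256 MB file of zeros and time it
    time dd if=/dev/zero of=/mnt/test bs=16k count=16k
    # Unmount between runs to defeat caching
    umount /mnt
done
```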

# mount /mnt -o ro,rsize=1024
# time dd if=/mnt/test of=/dev/null bs=16k
16384+0 records in
16384+0 records out

real 0m26.772s
user 0m0.010s
sys 0m0.530s

# umount /mnt

Your optimal block sizes for both reading and writing will almost certainly exceed 1024 bytes. It may happen that, as with my data, the measurements do not indicate a clear optimum but instead approach an asymptote as the block size increases. In that case, pick the lowest block size that gets you close to the asymptote rather than the highest available block size; anecdotal evidence indicates that block sizes that are too large can cause problems.

Once you have decided on an rsize and wsize, be sure to write them into your clients’ /etc/fstab. You might also consider specifying the noatime option.

Important points

Hard or Soft Mounts

Soft mounts can cause data corruption, so I have never used them. When you use hard, though, be sure to also use intr, so that clients can escape from a hung NFS server with a Ctrl-C.
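In /etc/fstab terms, that means including both options in the mount options field; the server name and paths here are placeholders:

```
server:/home /home nfs rw,hard,intr,rsize=8192,wsize=8192 0 0
```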

UDP or TCP Protocol

Most admins end up using UDP because they use Linux servers. But if you have BSD or Solaris servers, by all means use TCP, as long as your tests indicate that it does not have a substantial negative impact on performance.

NFS v2 or NFS v3

NFS v2 and NFS v3 differ only in minor details. While v3 supports a non-blocking write operation which theoretically speeds up NFS, in practice I have not seen any discernible performance advantage of v3 over v2. Still, I use v3 when I can, since it supports files larger than 2 GB and block sizes larger than 8192 bytes.

rsize and wsize options in fstab file

See the section on performance tuning above for advice on choosing rsize and wsize.

NFS security is utterly atrocious. An NFS server trusts an NFS client to enforce file access permissions. It is therefore very important that you trust root on any box you export to, and that you avoid the insecure option, which would allow any user on the client box arbitrary access to all the exported files.


11 thoughts on “Network File System (NFS) Server and Client Configuration in Debian”

  1. Hi,

    As the Debian NFS maintainer, I have to say I’m a bit disappointed at the quality of this article; there are so many things that are just plain wrong. I can’t address them all in too great detail, but a few points are (and I hope the website doesn’t mess up my list 🙂 ):

    * You do not install things with apt-get unless you really know what you’re doing; use aptitude.
    * The kernel-mode server is _more_ featureful than the userspace server (and 2.2.13 isn’t really a “recent” kernel by any standard, the 2.2 series was abandoned like five years ago). For one, it supports NFSv4, which the article for some reason doesn’t cover at all.
    * The default rsize and wsize have not been 1024 in ages; I believe they’re 32768 now for NFSv3, but I haven’t checked it. Better to leave the field alone.
    * -o soft or -o hard has absolutely nothing to do with symlinks vs. hardlinks. Soft mounts do not cause data corruption just by themselves; it has to be combined with some kind of outage (like a network outage).
    * If you have any kind of network where you could expect any sort of packet loss, you most likely want TCP. In fact, NFSv4 only lets you use TCP, which is a good thing.
    * NFS supports Kerberos authentication in addition to trusting IPs; it’s not so horribly insecure as you claim.
    * Lots of memory _is_ helpful on both clients and servers. Both can cache, which helps hit the disks less.

    I appreciate people writing articles on difficult topics, but please, make a better job of checking your facts first instead of copying them blindly from man pages or old versions of package descriptions.

  2. Steinar,

    This is off topic, however I was wondering why you recommend using aptitude over apt-get?

    I’d always thought that aptitude was just a frontend to apt-get and have never used it (when I came to debian years ago from RedHat/Slackware apt-get seemed like the best thing in the world) so I was wondering if I’m missing something important?

    Cheers, GSandie

  3. @Steinar

    Thanks for your valuable information. Maybe I am still at a learning stage; please check my comments as follows:

    * The kernel-mode server is _more_ featureful than the userspace server (and 2.2.13 isn’t really a “recent” kernel by any standard, the 2.2 series was abandoned like five years ago). For one, it supports NFSv4, which the article for some reason doesn’t cover at all.

    I am using the Debian stable version, so I am using the NFS package that comes with it.
    * The default rsize and wsize have not been 1024 in ages; I believe they’re 32768 now for NFSv3, but I haven’t checked it. Better to leave the field alone.

    I have mentioned under performance that NFSv3 users can use 32768. I am not sure everyone is using NFSv3, so this is a general writeup.

  4. Interesting… in an archeological sort of way.
    By the way, even Debian stable provides kernel versions 2.4.27 and 2.6.8. I was surprised to find that 2.2.25 is still provided for Atari, Amiga and a few other obscure architectures. So 2.2.13 is VERY far from recent whatever branch of Debian you are using.

  5. GSandie:

    apt-get is a thin, low-level command-line interface to libapt. aptitude is a user-friendly front-end. You should not normally use apt-get unless you really know what you’re doing. In particular, aptitude has better conflict resolution, knows how to pull in Recommends, and can automatically remove packages that are no longer needed (and were only installed as dependencies of other packages).

    I know the word “apt-get” is deeply ingrained in common knowledge, but really, aptitude is what most people should be using these days.

  6. Hi,
    I found this site very useful as I am a newbie to NFS stuff. I found the following link missing. Please do look into this if it may be useful.

    In the section- NFS Client Configuration

    NFS volumes can be mounted by root directly from the command line. For example

    For each options menctioned in /etc/fstab file check the man pages
    of fstab.”Click here for manpage.” – There is no link provided here.


  7. How do I access the files from the server on a client? Can you send me instructions? This is very urgent.

  8. S.Selvarani: Your question is a bit vague. If you want to access NFS mounted files on a client machine, then you would have to add NFS server capability to the client and modify the exports file on the “client” accordingly.

    Thank you for the article, which I found helpful.

  9. Hi,

    I have an NFS share which is used to store email accounts. So you can imagine the number of read and write operations over NFS: 80% write operations!

    The NFS server runs the following hardware & software:
    Supermicro 2u, RAID-10 (SATA RAID, 3ware controller), 1.5TB NFS share, 8GB RAM DDR2, Pentium D 2.6GHz processor (looks like a desktop processor on a server board to me, perhaps!), baseboard – Supermicro PDSML (no details available)

    OS – CentOS 5.2 64bit, NFSv3 operation (installed from nfs-utils-1.0.9-33), current kernel 2.6.18-194.3.1.el5

    I am facing a big performance issue with NFS. The load average on the NFS server is always between 10 and 30. I had to increase the number of NFS daemons to 12 since the share is used by 7 clients, and most nfsd processes go into the ‘D’ state. If I keep the default, the mail server ends up with a high mail queue and lots of lock failures. We also need to run nfslock since file locking is important to us.

    client side mount options used are:

    server side options are:

    Output of nfstat
    Server nfs v3:
    null getattr setattr lookup access readlink
    140 0% 494082 1% 66987 0% 358388 1% 420319 1% 0 0%
    read write create mkdir symlink mknod
    1805704 6% 24135656 88% 23579 0% 1 0% 0 0% 0 0%
    remove rmdir rename link readdir readdirplus
    21374 0% 0 0% 24297 0% 21362 0% 0 0% 11 0%
    fsstat fsinfo pathconf commit
    221 0% 8 0% 0 0% 0 0%

    Output of mpstat:
    08:15:35 AM CPU %user %nice %sys %iowait %irq %soft %steal %idle intr/s
    08:15:40 AM all 15.02 0.00 9.01 24.42 0.20 0.50 0.00 50.85 1836.20
    08:15:45 AM all 13.89 0.00 7.09 26.47 0.00 0.20 0.00 52.35 1313.20
    Average: all 14.45 0.00 8.05 25.45 0.10 0.35 0.00 51.60 1574.70

    However, I have an NFS server from the same pool that performs remarkably well. It has the same config as the rest. Its NFS write operations are 84% and iowait is 0.05%. The only difference I can see is that the OS was able to detect the family of the processor exactly (i.e. all processor flags; use dmidecode -t processor), while for the rest it couldn’t. The processor is exactly the same, and so is the OS.

    Is there anything I can do to improve NFS performance?

