<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
|
|
<HTML><HEAD><TITLE>Man page of NFS</TITLE>
|
|
</HEAD><BODY>
|
|
<H1>NFS</H1>
|
|
Section: File Formats (5)<BR>Updated: 9 October 2012
|
|
<HR>
|
|
|
|
<A NAME="lbAB"> </A>
|
|
<H2>NAME</H2>
|
|
|
|
nfs - fstab format and options for the
|
|
<B>nfs</B>
|
|
|
|
file systems
|
|
<A NAME="lbAC"> </A>
|
|
<H2>SYNOPSIS</H2>
|
|
|
|
<I>/etc/fstab</I>
|
|
|
|
<A NAME="lbAD"> </A>
|
|
<H2>DESCRIPTION</H2>
|
|
|
|
NFS is an Internet Standard protocol
|
|
created by Sun Microsystems in 1984. NFS was developed
|
|
to allow file sharing between systems residing
|
|
on a local area network.
|
|
The Linux NFS client supports three versions
|
|
of the NFS protocol:
|
|
NFS version 2 [RFC1094],
|
|
NFS version 3 [RFC1813],
|
|
and NFS version 4 [RFC3530].
|
|
<P>
|
|
|
|
The
|
|
<B><A HREF="/cgi-bin/man/man2html?8+mount">mount</A></B>(8)
|
|
|
|
command attaches a file system to the system's
|
|
name space hierarchy at a given mount point.
|
|
The
|
|
<I>/etc/fstab</I>
|
|
|
|
file describes how
|
|
<B><A HREF="/cgi-bin/man/man2html?8+mount">mount</A></B>(8)
|
|
|
|
should assemble a system's file name hierarchy
|
|
from various independent file systems
|
|
(including file systems exported by NFS servers).
|
|
Each line in the
|
|
<I>/etc/fstab</I>
|
|
|
|
file describes a single file system, its mount point,
|
|
and a set of default mount options for that mount point.
|
|
<P>
|
|
|
|
For NFS file system mounts, a line in the
|
|
<I>/etc/fstab</I>
|
|
|
|
file specifies the server name,
|
|
the path name of the exported server directory to mount,
|
|
the local directory that is the mount point,
|
|
the type of file system that is being mounted,
|
|
and a list of mount options that control
|
|
the way the filesystem is mounted and
|
|
how the NFS client behaves when accessing
|
|
files on this mount point.
|
|
The fifth and sixth fields on each line are not used
by NFS, so by convention each contains the digit zero. For example:
|
|
<P>
|
|
|
|
<PRE>
|
|
|
|
server:path /mountpoint fstype option,option,... 0 0
|
|
</PRE>
|
|
|
|
<P>
|
|
|
|
The server's hostname and export pathname
|
|
are separated by a colon, while
|
|
the mount options are separated by commas. The remaining fields
|
|
are separated by blanks or tabs.
|
|
<P>
|
|
|
|
The server's hostname can be an unqualified hostname,
|
|
a fully qualified domain name,
|
|
a dotted quad IPv4 address, or
|
|
an IPv6 address enclosed in square brackets.
|
|
Link-local and site-local IPv6 addresses must be accompanied by an
|
|
interface identifier.
|
|
See
|
|
<B><A HREF="/cgi-bin/man/man2html?7+ipv6">ipv6</A></B>(7)
|
|
|
|
for details on specifying raw IPv6 addresses.
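<P>
To make the accepted forms concrete, the entries below show the same
hypothetical export specified with a fully qualified domain name,
a dotted-quad IPv4 address, and a bracketed IPv6 address.
The names and addresses are documentation placeholders chosen only for
illustration:
<P>
<PRE>
        nfs.example.com:/export   /mnt   nfs   defaults   0 0
        192.0.2.10:/export        /mnt   nfs   defaults   0 0
        [2001:db8::10]:/export    /mnt   nfs   defaults   0 0
</PRE>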
|
|
<P>
|
|
|
|
The
|
|
<I>fstype</I>
|
|
|
|
field contains "nfs". Use of the "nfs4" fstype in
|
|
<I>/etc/fstab</I>
|
|
|
|
is deprecated.
|
|
<A NAME="lbAE"> </A>
|
|
<H2>MOUNT OPTIONS</H2>
|
|
|
|
Refer to
|
|
<B><A HREF="/cgi-bin/man/man2html?8+mount">mount</A></B>(8)
|
|
|
|
for a description of generic mount options
|
|
available for all file systems. If you do not need to
|
|
specify any mount options, use the generic option
|
|
<B>defaults</B>
|
|
|
|
in
|
|
<I>/etc/fstab</I>.
|
|
|
|
|
|
<A NAME="lbAF"> </A>
|
|
<H3>Options supported by all versions</H3>
|
|
|
|
These options are valid to use with any NFS version.
|
|
<DL COMPACT>
|
|
<DT id="1"><B>nfsvers=</B><I>n</I>
|
|
|
|
<DD>
|
|
The NFS protocol version number used to contact the server's NFS service.
|
|
If the server does not support the requested version, the mount request
|
|
fails.
|
|
If this option is not specified, the client negotiates a suitable version
|
|
with
|
|
the server, trying version 4 first, version 3 second, and version 2 last.
|
|
<DT id="2"><B>vers=</B><I>n</I>
|
|
|
|
<DD>
|
|
This option is an alternative to the
|
|
<B>nfsvers</B>
|
|
|
|
option.
|
|
It is included for compatibility with other operating systems.
|
|
<DT id="3"><B>soft</B> / <B>hard</B>
|
|
|
|
<DD>
|
|
Determines the recovery behavior of the NFS client
|
|
after an NFS request times out.
|
|
If neither option is specified (or if the
|
|
<B>hard</B>
|
|
|
|
option is specified), NFS requests are retried indefinitely.
|
|
If the
|
|
<B>soft</B>
|
|
|
|
option is specified, then the NFS client fails an NFS request
|
|
after
|
|
<B>retrans</B>
|
|
|
|
retransmissions have been sent,
|
|
causing the NFS client to return an error
|
|
to the calling application.
|
|
<DT id="4"><DD>
|
|
<I>NB:</I>
|
|
|
|
A so-called "soft" timeout can cause
|
|
silent data corruption in certain cases. As such, use the
|
|
<B>soft</B>
|
|
|
|
option only when client responsiveness
|
|
is more important than data integrity.
|
|
Using NFS over TCP or increasing the value of the
|
|
<B>retrans</B>
|
|
|
|
option may mitigate some of the risks of using the
|
|
<B>soft</B>
|
|
|
|
option.
|
|
<DT id="5"><B>intr</B> / <B>nointr</B>
|
|
|
|
<DD>
|
|
This option is provided for backward compatibility.
|
|
It is ignored after kernel 2.6.25.
|
|
<DT id="6"><B>timeo=</B><I>n</I>
|
|
|
|
<DD>
|
|
The time in deciseconds (tenths of a second) the NFS client waits for a
|
|
response before it retries an NFS request.
|
|
<DT id="7"><DD>
|
|
For NFS over TCP the default
|
|
<B>timeo</B>
|
|
|
|
value is 600 (60 seconds).
|
|
The NFS client performs linear backoff: After each retransmission the
|
|
timeout is increased by
|
|
<B>timeo</B>
|
|
|
|
up to the maximum of 600 seconds.
|
|
<DT id="8"><DD>
|
|
However, for NFS over UDP, the client uses an adaptive
|
|
algorithm to estimate an appropriate timeout value for frequently used
|
|
request types (such as READ and WRITE requests), but uses the
|
|
<B>timeo</B>
|
|
|
|
setting for infrequently used request types (such as FSINFO requests).
|
|
If the
|
|
<B>timeo</B>
|
|
|
|
option is not specified,
|
|
infrequently used request types are retried after 1.1 seconds.
|
|
After each retransmission, the NFS client doubles the timeout for
|
|
that request,
|
|
up to a maximum timeout length of 60 seconds.
|
|
<DT id="9"><B>retrans=</B><I>n</I>
|
|
|
|
<DD>
|
|
The number of times the NFS client retries a request before
|
|
it attempts further recovery action. If the
|
|
<B>retrans</B>
|
|
|
|
option is not specified, the NFS client tries each UDP request
|
|
three times and each TCP request twice.
|
|
<DT id="10"><DD>
|
|
The NFS client generates a "server not responding" message
|
|
after
|
|
<B>retrans</B>
|
|
|
|
retries, then attempts further recovery (depending on whether the
|
|
<B>hard</B>
|
|
|
|
mount option is in effect).
|
|
<DT id="11"><B>rsize=</B><I>n</I>
|
|
|
|
<DD>
|
|
The maximum number of bytes in each network READ request
|
|
that the NFS client can receive when reading data from a file
|
|
on an NFS server.
|
|
The actual data payload size of each NFS READ request is equal to
|
|
or smaller than the
|
|
<B>rsize</B>
|
|
|
|
setting. The largest read payload supported by the Linux NFS client
|
|
is 1,048,576 bytes (one megabyte).
|
|
<DT id="12"><DD>
|
|
The
|
|
<B>rsize</B>
|
|
|
|
value is a positive integral multiple of 1024.
|
|
Specified
|
|
<B>rsize</B>
|
|
|
|
values lower than 1024 are replaced with 4096; values larger than
|
|
1048576 are replaced with 1048576. If a specified value is within the supported
|
|
range but not a multiple of 1024, it is rounded down to the nearest
|
|
multiple of 1024.
|
|
<DT id="13"><DD>
|
|
If an
|
|
<B>rsize</B>
|
|
|
|
value is not specified, or if the specified
|
|
<B>rsize</B>
|
|
|
|
value is larger than the maximum that either client or server can support,
|
|
the client and server negotiate the largest
|
|
<B>rsize</B>
|
|
|
|
value that they can both support.
|
|
<DT id="14"><DD>
|
|
The
|
|
<B>rsize</B>
|
|
|
|
mount option as specified on the
|
|
<B><A HREF="/cgi-bin/man/man2html?8+mount">mount</A></B>(8)
|
|
|
|
command line appears in the
|
|
<I>/etc/mtab</I>
|
|
|
|
file. However, the effective
|
|
<B>rsize</B>
|
|
|
|
value negotiated by the client and server is reported in the
|
|
<I>/proc/mounts</I>
|
|
|
|
file.
|
|
<DT id="15"><B>wsize=</B><I>n</I>
|
|
|
|
<DD>
|
|
The maximum number of bytes per network WRITE request
|
|
that the NFS client can send when writing data to a file
|
|
on an NFS server. The actual data payload size of each
|
|
NFS WRITE request is equal to
|
|
or smaller than the
|
|
<B>wsize</B>
|
|
|
|
setting. The largest write payload supported by the Linux NFS client
|
|
is 1,048,576 bytes (one megabyte).
|
|
<DT id="16"><DD>
|
|
Similar to
|
|
<B>rsize</B>,
the
|
|
<B>wsize</B>
|
|
|
|
value is a positive integral multiple of 1024.
|
|
Specified
|
|
<B>wsize</B>
|
|
|
|
values lower than 1024 are replaced with 4096; values larger than
|
|
1048576 are replaced with 1048576. If a specified value is within the supported
|
|
range but not a multiple of 1024, it is rounded down to the nearest
|
|
multiple of 1024.
|
|
<DT id="17"><DD>
|
|
If a
|
|
<B>wsize</B>
|
|
|
|
value is not specified, or if the specified
|
|
<B>wsize</B>
|
|
|
|
value is larger than the maximum that either client or server can support,
|
|
the client and server negotiate the largest
|
|
<B>wsize</B>
|
|
|
|
value that they can both support.
|
|
<DT id="18"><DD>
|
|
The
|
|
<B>wsize</B>
|
|
|
|
mount option as specified on the
|
|
<B><A HREF="/cgi-bin/man/man2html?8+mount">mount</A></B>(8)
|
|
|
|
command line appears in the
|
|
<I>/etc/mtab</I>
|
|
|
|
file. However, the effective
|
|
<B>wsize</B>
|
|
|
|
value negotiated by the client and server is reported in the
|
|
<I>/proc/mounts</I>
|
|
|
|
file.
|
|
<DT id="19"><B>ac</B> / <B>noac</B>
|
|
|
|
<DD>
|
|
Selects whether the client may cache file attributes. If neither
|
|
option is specified (or if
|
|
<B>ac</B>
|
|
|
|
is specified), the client caches file
|
|
attributes.
|
|
<DT id="20"><DD>
|
|
To improve performance, NFS clients cache file
|
|
attributes. Every few seconds, an NFS client checks the server's version of each
|
|
file's attributes for updates. Changes that occur on the server in
|
|
those small intervals remain undetected until the client checks the
|
|
server again. The
|
|
<B>noac</B>
|
|
|
|
option prevents clients from caching file
|
|
attributes so that applications can more quickly detect file changes
|
|
on the server.
|
|
<DT id="21"><DD>
|
|
In addition to preventing the client from caching file attributes,
|
|
the
|
|
<B>noac</B>
|
|
|
|
option forces application writes to become synchronous so
|
|
that local changes to a file become visible on the server
|
|
immediately. That way, other clients can quickly detect recent
|
|
writes when they check the file's attributes.
|
|
<DT id="22"><DD>
|
|
Using the
|
|
<B>noac</B>
|
|
|
|
option provides greater cache coherence among NFS clients
|
|
accessing the same files,
|
|
but it exacts a significant performance penalty.
|
|
As such, judicious use of file locking is encouraged instead.
|
|
The DATA AND METADATA COHERENCE section contains a detailed discussion
|
|
of these trade-offs.
|
|
<DT id="23"><B>acregmin=</B><I>n</I>
|
|
|
|
<DD>
|
|
The minimum time (in seconds) that the NFS client caches
|
|
attributes of a regular file before it requests
|
|
fresh attribute information from a server.
|
|
If this option is not specified, the NFS client uses
|
|
a 3-second minimum.
|
|
See the DATA AND METADATA COHERENCE section
|
|
for a full discussion of attribute caching.
|
|
<DT id="24"><B>acregmax=</B><I>n</I>
|
|
|
|
<DD>
|
|
The maximum time (in seconds) that the NFS client caches
|
|
attributes of a regular file before it requests
|
|
fresh attribute information from a server.
|
|
If this option is not specified, the NFS client uses
|
|
a 60-second maximum.
|
|
See the DATA AND METADATA COHERENCE section
|
|
for a full discussion of attribute caching.
|
|
<DT id="25"><B>acdirmin=</B><I>n</I>
|
|
|
|
<DD>
|
|
The minimum time (in seconds) that the NFS client caches
|
|
attributes of a directory before it requests
|
|
fresh attribute information from a server.
|
|
If this option is not specified, the NFS client uses
|
|
a 30-second minimum.
|
|
See the DATA AND METADATA COHERENCE section
|
|
for a full discussion of attribute caching.
|
|
<DT id="26"><B>acdirmax=</B><I>n</I>
|
|
|
|
<DD>
|
|
The maximum time (in seconds) that the NFS client caches
|
|
attributes of a directory before it requests
|
|
fresh attribute information from a server.
|
|
If this option is not specified, the NFS client uses
|
|
a 60-second maximum.
|
|
See the DATA AND METADATA COHERENCE section
|
|
for a full discussion of attribute caching.
|
|
<DT id="27"><B>actimeo=</B><I>n</I>
|
|
|
|
<DD>
|
|
Using
|
|
<B>actimeo</B>
|
|
|
|
sets all of
|
|
<B>acregmin</B>,
|
|
|
|
<B>acregmax</B>,
|
|
|
|
<B>acdirmin</B>,
|
|
|
|
and
|
|
<B>acdirmax</B>
|
|
|
|
to the same value.
|
|
If this option is not specified, the NFS client uses
|
|
the defaults for each of these options listed above.
|
|
<DT id="28"><B>bg</B> / <B>fg</B>
|
|
|
|
<DD>
|
|
Determines how the
|
|
<B><A HREF="/cgi-bin/man/man2html?8+mount">mount</A></B>(8)
|
|
|
|
command behaves if an attempt to mount an export fails.
|
|
The
|
|
<B>fg</B>
|
|
|
|
option causes
|
|
<B><A HREF="/cgi-bin/man/man2html?8+mount">mount</A></B>(8)
|
|
|
|
to exit with an error status if any part of the mount request
|
|
times out or fails outright.
|
|
This is called a "foreground" mount,
|
|
and is the default behavior if neither the
|
|
<B>fg</B>
|
|
|
|
nor
|
|
<B>bg</B>
|
|
|
|
mount option is specified.
|
|
<DT id="29"><DD>
|
|
If the
|
|
<B>bg</B>
|
|
|
|
option is specified, a timeout or failure causes the
|
|
<B><A HREF="/cgi-bin/man/man2html?8+mount">mount</A></B>(8)
|
|
|
|
command to fork a child which continues to attempt
|
|
to mount the export.
|
|
The parent immediately returns with a zero exit code.
|
|
This is known as a "background" mount.
|
|
<DT id="30"><DD>
|
|
If the local mount point directory is missing, the
|
|
<B><A HREF="/cgi-bin/man/man2html?8+mount">mount</A></B>(8)
|
|
|
|
command acts as if the mount request timed out.
|
|
This permits nested NFS mounts specified in
|
|
<I>/etc/fstab</I>
|
|
|
|
to proceed in any order during system initialization,
|
|
even if some NFS servers are not yet available.
|
|
Alternatively these issues can be addressed
|
|
using an automounter (refer to
|
|
<B><A HREF="/cgi-bin/man/man2html?8+automount">automount</A></B>(8)
|
|
|
|
for details).
|
|
<DT id="31"><B>rdirplus</B> / <B>nordirplus</B>
|
|
|
|
<DD>
|
|
Selects whether to use NFS v3 or v4 READDIRPLUS requests.
|
|
If this option is not specified, the NFS client uses READDIRPLUS requests
|
|
on NFS v3 or v4 mounts to read small directories.
|
|
Some applications perform better if the client uses only READDIR requests
|
|
for all directories.
|
|
<DT id="32"><B>retry=</B><I>n</I>
|
|
|
|
<DD>
|
|
The number of minutes that the
|
|
<B><A HREF="/cgi-bin/man/man2html?8+mount">mount</A></B>(8)
|
|
|
|
command retries an NFS mount operation
|
|
in the foreground or background before giving up.
|
|
If this option is not specified, the default value for foreground mounts
|
|
is 2 minutes, and the default value for background mounts is 10000 minutes
|
|
(80 minutes shy of one week).
|
|
If a value of zero is specified, the
|
|
<B><A HREF="/cgi-bin/man/man2html?8+mount">mount</A></B>(8)
|
|
|
|
command exits immediately after the first failure.
|
|
<DT id="33"><DD>
|
|
Note that this only affects how many retries are made and doesn't
|
|
affect the delay caused by each retry. For UDP each retry takes the
|
|
time determined by the
|
|
<B>timeo</B>
|
|
|
|
and
|
|
<B>retrans</B>
|
|
|
|
options, which by default will be about 7 seconds. For TCP the
|
|
default is 3 minutes, but system TCP connection timeouts will
|
|
sometimes limit the timeout of each retransmission to around 2 minutes.
|
|
<DT id="34"><B>sec=</B><I>flavors</I>
|
|
|
|
<DD>
|
|
A colon-separated list of one or more security flavors to use for accessing
|
|
files on the mounted export. If the server does not support any of these
|
|
flavors, the mount operation fails.
|
|
If
|
|
<B>sec=</B>
|
|
|
|
is not specified, the client attempts to find
|
|
a security flavor that both the client and the server support.
|
|
Valid
|
|
<I>flavors</I>
|
|
|
|
are
|
|
<B>none</B>,
|
|
|
|
<B>sys</B>,
|
|
|
|
<B>krb5</B>,
|
|
|
|
<B>krb5i</B>,
|
|
|
|
and
|
|
<B>krb5p</B>.
|
|
|
|
Refer to the SECURITY CONSIDERATIONS section for details.
|
|
<DT id="35"><B>sharecache</B> / <B>nosharecache</B>
|
|
|
|
<DD>
|
|
Determines how the client's data cache and attribute cache are shared
|
|
when mounting the same export more than once concurrently. Using the
|
|
same cache reduces memory requirements on the client and presents
|
|
identical file contents to applications when the same remote file is
|
|
accessed via different mount points.
|
|
<DT id="36"><DD>
|
|
If neither option is specified, or if the
|
|
<B>sharecache</B>
|
|
|
|
option is
|
|
specified, then a single cache is used for all mount points that
|
|
access the same export. If the
|
|
<B>nosharecache</B>
|
|
|
|
option is specified,
|
|
then that mount point gets a unique cache. Note that when data and
|
|
attribute caches are shared, the mount options from the first mount
|
|
point take effect for subsequent concurrent mounts of the same export.
|
|
<DT id="37"><DD>
|
|
As of kernel 2.6.18, the behavior specified by
|
|
<B>nosharecache</B>
|
|
|
|
is legacy caching behavior. This
|
|
is considered a data risk since multiple cached copies
|
|
of the same file on the same client can become out of sync
|
|
following a local update of one of the copies.
|
|
<DT id="38"><B>resvport</B> / <B>noresvport</B>
|
|
|
|
<DD>
|
|
Specifies whether the NFS client should use a privileged source port
|
|
when communicating with an NFS server for this mount point.
|
|
If this option is not specified, or the
|
|
<B>resvport</B>
|
|
|
|
option is specified, the NFS client uses a privileged source port.
|
|
If the
|
|
<B>noresvport</B>
|
|
|
|
option is specified, the NFS client uses a non-privileged source port.
|
|
This option is supported in kernels 2.6.28 and later.
|
|
<DT id="39"><DD>
|
|
Using non-privileged source ports helps increase the maximum number of
|
|
NFS mount points allowed on a client, but NFS servers must be configured
|
|
to allow clients to connect via non-privileged source ports.
|
|
<DT id="40"><DD>
|
|
Refer to the SECURITY CONSIDERATIONS section for important details.
|
|
<DT id="41"><B>lookupcache=</B><I>mode</I>
|
|
|
|
<DD>
|
|
Specifies how the kernel manages its cache of directory entries
|
|
for a given mount point.
|
|
<I>mode</I>
|
|
|
|
can be one of
|
|
<B>all</B>,
|
|
|
|
<B>none</B>,
|
|
|
|
<B>pos</B>,
|
|
|
|
or
|
|
<B>positive</B>.
|
|
|
|
This option is supported in kernels 2.6.28 and later.
|
|
<DT id="42"><DD>
|
|
The Linux NFS client caches the result of all NFS LOOKUP requests.
|
|
If the requested directory entry exists on the server,
|
|
the result is referred to as
|
|
<I>positive</I>.
|
|
|
|
If the requested directory entry does not exist on the server,
|
|
the result is referred to as
|
|
<I>negative</I>.
|
|
|
|
<DT id="43"><DD>
|
|
If this option is not specified, or if
|
|
<B>all</B>
|
|
|
|
is specified, the client assumes both types of directory cache entries
|
|
are valid until their parent directory's cached attributes expire.
|
|
<DT id="44"><DD>
|
|
If
|
|
<B>pos</B> or <B>positive</B>
|
|
|
|
is specified, the client assumes positive entries are valid
|
|
until their parent directory's cached attributes expire, but
|
|
always revalidates negative entries before an application
|
|
can use them.
|
|
<DT id="45"><DD>
|
|
If
|
|
<B>none</B>
|
|
|
|
is specified,
|
|
the client revalidates both types of directory cache entries
|
|
before an application can use them.
|
|
This permits quick detection of files that were created or removed
|
|
by other clients, but can impact application and server performance.
|
|
<DT id="46"><DD>
|
|
The DATA AND METADATA COHERENCE section contains a
|
|
detailed discussion of these trade-offs.
|
|
<DT id="47"><B>fsc</B> / <B>nofsc</B>
|
|
|
|
<DD>
|
|
Enables or disables the caching of (read-only) data pages on the local disk
using the FS-Cache facility. See <A HREF="/cgi-bin/man/man2html?8+cachefilesd">cachefilesd</A>(8)
and &lt;kernel_source&gt;/Documentation/filesystems/caching
for details on how to configure the FS-Cache facility.
The default is <B>nofsc</B>.
|
|
</DL>
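<P>
The options described in this subsection can be combined freely.
As an illustrative sketch only, the following
<I>/etc/fstab</I>
entry uses several of them together; the server name and the particular
values are placeholders for this example rather than recommended settings:
<P>
<PRE>
        server:/export  /mnt  nfs  nfsvers=3,hard,timeo=600,retrans=5,rsize=65536,wsize=65536,actimeo=30,bg  0 0
</PRE>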
|
|
<A NAME="lbAG"> </A>
|
|
<H3>Options for NFS versions 2 and 3 only</H3>
|
|
|
|
Use these options, along with the options in the above subsection,
|
|
for NFS versions 2 and 3 only.
|
|
<DL COMPACT>
|
|
<DT id="48"><B>proto=</B><I>netid</I>
|
|
|
|
<DD>
|
|
The
|
|
<I>netid</I>
|
|
|
|
determines the transport that is used to communicate with the NFS
|
|
server. Available options are
|
|
<B>udp</B>, <B>udp6</B>, <B>tcp</B>, <B>tcp6</B>, and <B>rdma</B>.
|
|
|
|
Those which end in
|
|
<B>6</B>
|
|
|
|
use IPv6 addresses and are only available if support for TI-RPC is
|
|
built in. Others use IPv4 addresses.
|
|
<DT id="49"><DD>
|
|
Each transport protocol uses different default
|
|
<B>retrans</B>
|
|
|
|
and
|
|
<B>timeo</B>
|
|
|
|
settings.
|
|
Refer to the description of these two mount options for details.
|
|
<DT id="50"><DD>
|
|
In addition to controlling how the NFS client transmits requests to
|
|
the server, this mount option also controls how the
|
|
<B><A HREF="/cgi-bin/man/man2html?8+mount">mount</A></B>(8)
|
|
|
|
command communicates with the server's rpcbind and mountd services.
|
|
Specifying a netid that uses TCP forces all traffic from the
|
|
<B><A HREF="/cgi-bin/man/man2html?8+mount">mount</A></B>(8)
|
|
|
|
command and the NFS client to use TCP.
|
|
Specifying a netid that uses UDP forces all traffic types to use UDP.
|
|
<DT id="51"><DD>
|
|
<B>Before using NFS over UDP, refer to the TRANSPORT METHODS section.</B>
|
|
|
|
<DT id="52"><DD>
|
|
If the
|
|
<B>proto</B>
|
|
|
|
mount option is not specified, the
|
|
<B><A HREF="/cgi-bin/man/man2html?8+mount">mount</A></B>(8)
|
|
|
|
command discovers which protocols the server supports
|
|
and chooses an appropriate transport for each service.
|
|
Refer to the TRANSPORT METHODS section for more details.
|
|
<DT id="53"><B>udp</B>
|
|
|
|
<DD>
|
|
The
|
|
<B>udp</B>
|
|
|
|
option is an alternative to specifying
|
|
<B>proto=udp.</B>
|
|
|
|
It is included for compatibility with other operating systems.
|
|
<DT id="54"><DD>
|
|
<B>Before using NFS over UDP, refer to the TRANSPORT METHODS section.</B>
|
|
|
|
<DT id="55"><B>tcp</B>
|
|
|
|
<DD>
|
|
The
|
|
<B>tcp</B>
|
|
|
|
option is an alternative to specifying
|
|
<B>proto=tcp.</B>
|
|
|
|
It is included for compatibility with other operating systems.
|
|
<DT id="56"><B>rdma</B>
|
|
|
|
<DD>
|
|
The
|
|
<B>rdma</B>
|
|
|
|
option is an alternative to specifying
|
|
<B>proto=rdma.</B>
|
|
|
|
<DT id="57"><B>port=</B><I>n</I>
|
|
|
|
<DD>
|
|
The numeric value of the server's NFS service port.
|
|
If the server's NFS service is not available on the specified port,
|
|
the mount request fails.
|
|
<DT id="58"><DD>
|
|
If this option is not specified, or if the specified port value is 0,
|
|
then the NFS client uses the NFS service port number
|
|
advertised by the server's rpcbind service.
|
|
The mount request fails if the server's rpcbind service is not available,
|
|
the server's NFS service is not registered with its rpcbind service,
|
|
or the server's NFS service is not available on the advertised port.
|
|
<DT id="59"><B>mountport=</B><I>n</I>
|
|
|
|
<DD>
|
|
The numeric value of the server's mountd port.
|
|
If the server's mountd service is not available on the specified port,
|
|
the mount request fails.
|
|
<DT id="60"><DD>
|
|
If this option is not specified,
|
|
or if the specified port value is 0, then the
|
|
<B><A HREF="/cgi-bin/man/man2html?8+mount">mount</A></B>(8)
|
|
|
|
command uses the mountd service port number
|
|
advertised by the server's rpcbind service.
|
|
The mount request fails if the server's rpcbind service is not available,
|
|
the server's mountd service is not registered with its rpcbind service,
|
|
or the server's mountd service is not available on the advertised port.
|
|
<DT id="61"><DD>
|
|
This option can be used when mounting an NFS server
|
|
through a firewall that blocks the rpcbind protocol.
|
|
<DT id="62"><B>mountproto=</B><I>netid</I>
|
|
|
|
<DD>
|
|
The transport the NFS client uses
|
|
to transmit requests to the NFS server's mountd service when performing
|
|
this mount request, and when later unmounting this mount point.
|
|
<DT id="63"><DD>
|
|
<I>netid</I>
|
|
|
|
may be one of
<B>udp</B> or <B>tcp</B>,
which use IPv4 addresses, or, if TI-RPC is built into the
<B>mount.nfs</B>
command,
<B>udp6</B> or <B>tcp6</B>,
which use IPv6 addresses.
|
|
<DT id="64"><DD>
|
|
This option can be used when mounting an NFS server
|
|
through a firewall that blocks a particular transport.
|
|
When used in combination with the
|
|
<B>proto</B>
|
|
|
|
option, different transports for mountd requests and NFS requests
|
|
can be specified.
|
|
If the server's mountd service is not available via the specified
|
|
transport, the mount request fails.
|
|
<DT id="65"><DD>
|
|
Refer to the TRANSPORT METHODS section for more on how the
|
|
<B>mountproto</B>
|
|
|
|
mount option interacts with the
|
|
<B>proto</B>
|
|
|
|
mount option.
|
|
<DT id="66"><B>mounthost=</B><I>name</I>
|
|
|
|
<DD>
|
|
The hostname of the host running mountd.
|
|
If this option is not specified, the
|
|
<B><A HREF="/cgi-bin/man/man2html?8+mount">mount</A></B>(8)
|
|
|
|
command assumes that the mountd service runs
|
|
on the same host as the NFS service.
|
|
<DT id="67"><B>mountvers=</B><I>n</I>
|
|
|
|
<DD>
|
|
The RPC version number used to contact the server's mountd.
|
|
If this option is not specified, the client uses a version number
|
|
appropriate to the requested NFS version.
|
|
This option is useful when multiple NFS services
|
|
are running on the same remote server host.
|
|
<DT id="68"><B>namlen=</B><I>n</I>
|
|
|
|
<DD>
|
|
The maximum length of a pathname component on this mount.
|
|
If this option is not specified, the maximum length is negotiated
|
|
with the server. In most cases, this maximum length is 255 characters.
|
|
<DT id="69"><DD>
|
|
Some early versions of NFS did not support this negotiation.
|
|
Using this option ensures that
|
|
<B><A HREF="/cgi-bin/man/man2html?3+pathconf">pathconf</A></B>(3)
|
|
|
|
reports the proper maximum component length to applications
|
|
in such cases.
|
|
<DT id="70"><B>lock</B> / <B>nolock</B>
|
|
|
|
<DD>
|
|
Selects whether to use the NLM sideband protocol to lock files on the server.
|
|
If neither option is specified (or if
|
|
<B>lock</B>
|
|
|
|
is specified), NLM locking is used for this mount point.
|
|
When using the
|
|
<B>nolock</B>
|
|
|
|
option, applications can lock files,
|
|
but such locks provide exclusion only against other applications
|
|
running on the same client.
|
|
Remote applications are not affected by these locks.
|
|
<DT id="71"><DD>
|
|
NLM locking must be disabled with the
|
|
<B>nolock</B>
|
|
|
|
option when using NFS to mount
|
|
<I>/var</I>
|
|
|
|
because
|
|
<I>/var</I>
|
|
|
|
contains files used by the NLM implementation on Linux.
|
|
Using the
|
|
<B>nolock</B>
|
|
|
|
option is also required when mounting exports on NFS servers
|
|
that do not support the NLM protocol.
|
|
<DT id="72"><B>cto</B> / <B>nocto</B>
|
|
|
|
<DD>
|
|
Selects whether to use close-to-open cache coherence semantics.
|
|
If neither option is specified (or if
|
|
<B>cto</B>
|
|
|
|
is specified), the client uses close-to-open
|
|
cache coherence semantics. If the
|
|
<B>nocto</B>
|
|
|
|
option is specified, the client uses a non-standard heuristic to determine when
|
|
files on the server have changed.
|
|
<DT id="73"><DD>
|
|
Using the
|
|
<B>nocto</B>
|
|
|
|
option may improve performance for read-only mounts,
|
|
but should be used only if the data on the server changes only occasionally.
|
|
The DATA AND METADATA COHERENCE section discusses the behavior
|
|
of this option in more detail.
|
|
<DT id="74"><B>acl</B> / <B>noacl</B>
|
|
|
|
<DD>
|
|
Selects whether to use the NFSACL sideband protocol on this mount point.
|
|
The NFSACL sideband protocol is a proprietary protocol
|
|
implemented in Solaris that manages Access Control Lists. NFSACL was never
|
|
made a standard part of the NFS protocol specification.
|
|
<DT id="75"><DD>
|
|
If neither
|
|
<B>acl</B>
|
|
|
|
nor
|
|
<B>noacl</B>
|
|
|
|
option is specified,
|
|
the NFS client negotiates with the server
|
|
to see if the NFSACL protocol is supported,
|
|
and uses it if the server supports it.
|
|
Disabling the NFSACL sideband protocol may be necessary
|
|
if the negotiation causes problems on the client or server.
|
|
Refer to the SECURITY CONSIDERATIONS section for more details.
|
|
<DT id="76"><B>local_lock=</B>mechanism
|
|
|
|
<DD>
|
|
Specifies whether to use local locking for any or both of the flock and the
|
|
POSIX locking mechanisms.
|
|
<I>mechanism</I>
|
|
|
|
can be one of
|
|
<B>all</B>,
|
|
|
|
<B>flock</B>,
|
|
|
|
<B>posix</B>,
|
|
|
|
or
|
|
<B>none</B>.
|
|
|
|
This option is supported in kernels 2.6.37 and later.
|
|
<DT id="77"><DD>
|
|
The Linux NFS client provides a way to make locks local. This means that
|
|
applications can lock files, but such locks provide exclusion only against
|
|
other applications running on the same client. Remote applications are not
|
|
affected by these locks.
|
|
<DT id="78"><DD>
|
|
If this option is not specified, or if
|
|
<B>none</B>
|
|
|
|
is specified, the client assumes that the locks are not local.
|
|
<DT id="79"><DD>
|
|
If
|
|
<B>all</B>
|
|
|
|
is specified, the client assumes that both flock and POSIX locks are local.
|
|
<DT id="80"><DD>
|
|
If
|
|
<B>flock</B>
|
|
|
|
is specified, the client assumes that only flock locks are local and uses
|
|
NLM sideband protocol to lock files when POSIX locks are used.
|
|
<DT id="81"><DD>
|
|
If
|
|
<B>posix</B>
|
|
|
|
is specified, the client assumes that POSIX locks are local and uses NLM
|
|
sideband protocol to lock files when flock locks are used.
|
|
<DT id="82"><DD>
|
|
To support legacy flock behavior similar to that of NFS clients older than 2.6.12,
use 'local_lock=flock'. This option is required when exporting NFS mounts via
Samba, as Samba maps Windows share mode locks as flock. Since NFS clients newer
than 2.6.12 implement flock by emulating POSIX locks, this would otherwise
result in conflicting locks.
|
|
<DT id="83"><DD>
|
|
NOTE: When used together, the 'local_lock' mount option will be overridden
|
|
by the 'nolock'/'lock' mount option.
|
|
</DL>
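<P>
As another hedged sketch, the options above can be combined to reach an
NFS version 3 server through a firewall that blocks the rpcbind protocol;
the port numbers shown are placeholders that must match the ports actually
used by the server:
<P>
<PRE>
        server:/export  /mnt  nfs  nfsvers=3,proto=tcp,port=2049,mountport=20048,mountproto=tcp,nolock  0 0
</PRE>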
|
|
<A NAME="lbAH"> </A>
|
|
<H3>Options for NFS version 4 only</H3>
|
|
|
|
Use these options, along with the options in the first subsection above,
|
|
for NFS version 4 and newer.
|
|
<DL COMPACT>
|
|
<DT id="84"><B>proto=</B><I>netid</I>
|
|
|
|
<DD>
|
|
The
|
|
<I>netid</I>
|
|
|
|
determines the transport that is used to communicate with the NFS
|
|
server. Supported options are
|
|
<B>tcp</B>, <B>tcp6</B>, and <B>rdma</B>.
|
|
|
|
<B>tcp6</B>
uses IPv6 addresses and is only available if support for TI-RPC is
built in. The other two use IPv4 addresses.
|
|
<DT id="85"><DD>
|
|
All NFS version 4 servers are required to support TCP,
|
|
so if this mount option is not specified, the NFS version 4 client
|
|
uses the TCP protocol.
|
|
Refer to the TRANSPORT METHODS section for more details.
|
|
<DT id="86"><B>port=</B><I>n</I>
|
|
|
|
<DD>
|
|
The numeric value of the server's NFS service port.
|
|
If the server's NFS service is not available on the specified port,
|
|
the mount request fails.
|
|
<DT id="87"><DD>
|
|
If this mount option is not specified,
|
|
the NFS client uses the standard NFS port number of 2049
|
|
without first checking the server's rpcbind service.
|
|
This allows an NFS version 4 client to contact an NFS version 4
|
|
server through a firewall that may block rpcbind requests.
|
|
<DT id="88"><DD>
|
|
If the specified port value is 0,
|
|
then the NFS client uses the NFS service port number
|
|
advertised by the server's rpcbind service.
|
|
The mount request fails if the server's rpcbind service is not available,
|
|
the server's NFS service is not registered with its rpcbind service,
|
|
or the server's NFS service is not available on the advertised port.
|
|
<DT id="89"><B>cto</B> / <B>nocto</B>
|
|
|
|
<DD>
|
|
Selects whether to use close-to-open cache coherence semantics
|
|
for NFS directories on this mount point.
|
|
If neither
|
|
<B>cto</B>
|
|
|
|
nor
|
|
<B>nocto</B>
|
|
|
|
is specified,
|
|
the default is to use close-to-open cache coherence
|
|
semantics for directories.
|
|
<DT id="90"><DD>
|
|
File data caching behavior is not affected by this option.
|
|
The DATA AND METADATA COHERENCE section discusses
|
|
the behavior of this option in more detail.
|
|
<DT id="91"><B>clientaddr=</B><I>n.n.n.n</I>
|
|
|
|
<DD>
|
|
<DT id="92"><B>clientaddr=</B><I>n:n:</I><B>...</B><I>:n</I>
|
|
|
|
<DD>
|
|
Specifies a single IPv4 address (in dotted-quad form),
|
|
or a non-link-local IPv6 address,
|
|
that the NFS client advertises to allow servers
|
|
to perform NFS version 4 callback requests against
|
|
files on this mount point. If the server is unable to
|
|
establish callback connections to clients, performance
|
|
may degrade, or accesses to files may temporarily hang.
|
|
<DT id="93"><DD>
|
|
If this option is not specified, the
|
|
<B><A HREF="/cgi-bin/man/man2html?8+mount">mount</A></B>(8)
|
|
|
|
command attempts to discover an appropriate callback address automatically.
|
|
The automatic discovery process is not perfect, however.
|
|
In the presence of multiple client network interfaces,
|
|
special routing policies,
|
|
or atypical network topologies,
|
|
the exact address to use for callbacks may be nontrivial to determine.
|
|
<DT id="94"><B>migration</B> / <B>nomigration</B>
|
|
|
|
<DD>
|
|
Selects whether the client uses an identification string that is compatible
|
|
with NFSv4 Transparent State Migration (TSM).
|
|
If the mounted server supports NFSv4 migration with TSM, specify the
|
|
<B>migration</B>
|
|
|
|
option.
|
|
<DT id="95"><DD>
|
|
Some server features misbehave in the face of a migration-compatible
|
|
identification string.
|
|
The
|
|
<B>nomigration</B>
|
|
|
|
option retains the use of a traditional client identification string
|
|
which is compatible with legacy NFS servers.
|
|
This is also the behavior if neither option is specified.
|
|
A client's open and lock state cannot be migrated transparently
|
|
when it identifies itself via a traditional identification string.
|
|
<DT id="96"><DD>
|
|
This mount option has no effect with NFSv4 minor versions newer than zero,
|
|
which always use TSM-compatible client identification strings.
|
|
</DL>
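<P>
A minimal sketch for an NFS version 4 mount on a hypothetical multi-homed
client follows; the callback address is a documentation placeholder and must
be replaced with an address that is actually reachable from the server:
<P>
<PRE>
        server:/export  /mnt  nfs  nfsvers=4,proto=tcp,port=2049,clientaddr=192.0.2.20  0 0
</PRE>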
|
|
<A NAME="lbAI"> </A>
|
|
<H2>nfs4 FILE SYSTEM TYPE</H2>
|
|
|
|
The
|
|
<B>nfs4</B>
|
|
|
|
file system type is an old syntax for specifying NFSv4 usage. It can still
|
|
be used with all NFSv4-specific and common options, except for the
|
|
<B>nfsvers</B>
|
|
|
|
mount option.
|
|
<A NAME="lbAJ"> </A>
|
|
<H2>MOUNT CONFIGURATION FILE</H2>
|
|
|
|
If the mount command is configured to do so, all of the mount options
|
|
described in the previous section can also be configured in the
|
|
<I>/etc/nfsmount.conf</I>
|
|
|
|
file. See
|
|
<B><A HREF="/cgi-bin/man/man2html?5+nfsmount.conf">nfsmount.conf</A>(5)</B>
|
|
|
|
for details.
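<P>
The following is only a rough, hedged sketch of the INI-style layout used by
<I>/etc/nfsmount.conf</I>;
the section headers and keys shown follow the commented example shipped with
nfs-utils as best as it can be summarized here, and
<B><A HREF="/cgi-bin/man/man2html?5+nfsmount.conf">nfsmount.conf</A>(5)</B>
remains the authoritative reference for the exact syntax and key names:
<P>
<PRE>
        [ NFSMount_Global_Options ]
            Defaultvers=4
        [ Server "server.example.com" ]
            rsize=32k
            wsize=32k
</PRE>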
|
|
<A NAME="lbAK"> </A>
|
|
<H2>EXAMPLES</H2>
|
|
|
|
To mount an export using NFS version 2,
|
|
use the
|
|
<B>nfs</B>
|
|
|
|
file system type and specify the
|
|
<B>nfsvers=2</B>
|
|
|
|
mount option.
|
|
To mount using NFS version 3,
|
|
use the
|
|
<B>nfs</B>
|
|
|
|
file system type and specify the
|
|
<B>nfsvers=3</B>
|
|
|
|
mount option.
|
|
To mount using NFS version 4,
|
|
use either the
|
|
<B>nfs</B>
|
|
|
|
file system type, with the
|
|
<B>nfsvers=4</B>
|
|
|
|
mount option, or the
|
|
<B>nfs4</B>
|
|
|
|
file system type.
|
|
<P>
|
|
|
|
The following example from an
|
|
<I>/etc/fstab</I>
|
|
|
|
file causes the mount command to negotiate
|
|
reasonable defaults for NFS behavior.
|
|
<P>
|
|
|
|
<PRE>
|
|
|
|
server:/export /mnt nfs defaults 0 0
|
|
</PRE>
|
|
|
|
<P>
|
|
|
|
Here is an example from an <I>/etc/fstab</I> file for an NFS version 2 mount over UDP.
|
|
<P>
|
|
|
|
<PRE>
|
|
|
|
server:/export /mnt nfs nfsvers=2,proto=udp 0 0
|
|
</PRE>
|
|
|
|
<P>
|
|
|
|
This example shows how to mount using NFS version 4 over TCP
|
|
with Kerberos 5 mutual authentication.
|
|
<P>
|
|
|
|
<PRE>
|
|
|
|
server:/export /mnt nfs4 sec=krb5 0 0
|
|
</PRE>
|
|
|
|
<P>
|
|
|
|
This example shows how to mount using NFS version 4 over TCP
|
|
with Kerberos 5 privacy or data integrity mode.
|
|
<P>
|
|
|
|
<PRE>
|
|
|
|
server:/export /mnt nfs4 sec=krb5p:krb5i 0 0
|
|
</PRE>
|
|
|
|
<P>
|
|
|
|
This example can be used to mount <I>/usr</I> over NFS.
|
|
<P>
|
|
|
|
<PRE>
|
|
|
|
server:/export /usr nfs ro,nolock,nocto,actimeo=3600 0 0
|
|
</PRE>
|
|
|
|
<P>
|
|
|
|
This example shows how to mount an NFS server
|
|
using a raw IPv6 link-local address.
|
|
<P>
|
|
|
|
<PRE>
|
|
|
|
[fe80::215:c5ff:fb3e:e2b1%eth0]:/export /mnt nfs defaults 0 0
|
|
</PRE>
|
|
|
|
<A NAME="lbAL"> </A>
|
|
<H2>TRANSPORT METHODS</H2>
|
|
|
|
NFS clients send requests to NFS servers via
|
|
Remote Procedure Calls, or
|
|
<I>RPCs</I>.
|
|
|
|
The RPC client discovers remote service endpoints automatically,
|
|
handles per-request authentication,
|
|
adjusts request parameters for different byte endianness on client and server,
|
|
and retransmits requests that may have been lost by the network or server.
|
|
RPC requests and replies flow over a network transport.
|
|
<P>
|
|
|
|
In most cases, the
|
|
<B><A HREF="/cgi-bin/man/man2html?8+mount">mount</A></B>(8)
|
|
|
|
command, NFS client, and NFS server
|
|
can automatically negotiate proper transport
|
|
and data transfer size settings for a mount point.
|
|
In some cases, however, it pays to specify
|
|
these settings explicitly using mount options.
|
|
<P>
|
|
|
|
Traditionally, NFS clients used the UDP transport exclusively for
|
|
transmitting requests to servers. Though its implementation is
|
|
simple, NFS over UDP has many limitations that prevent smooth
|
|
operation and good performance in some common deployment
|
|
environments. Even an insignificant packet loss rate results in the
|
|
loss of whole NFS requests; as such, retransmit timeouts are usually
|
|
in the subsecond range to allow clients to recover quickly from
|
|
dropped requests, but this can result in extraneous network traffic
|
|
and server load.
|
|
<P>
|
|
|
|
However, UDP can be quite effective in specialized settings where
|
|
the network's MTU is large relative to NFS's data transfer size (such
|
|
as network environments that enable jumbo Ethernet frames). In such
|
|
environments, trimming the
|
|
<B>rsize</B>
|
|
|
|
and
|
|
<B>wsize</B>
|
|
|
|
settings so that each
|
|
NFS read or write request fits in just a few network frames (or even
|
|
in a single frame) is advised. This reduces the probability that
|
|
the loss of a single MTU-sized network frame results in the loss of
|
|
an entire large read or write request.
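<P>
A minimal sketch of such a configuration follows, assuming a jumbo-frame
(9000-byte MTU) network on which UDP is deliberately chosen; the 8192-byte
transfer sizes keep each request within a single frame:
<P>
<PRE>
        server:/export  /mnt  nfs  proto=udp,rsize=8192,wsize=8192  0 0
</PRE>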
|
|
<P>
|
|
|
|
TCP is the default transport protocol used for all modern NFS
|
|
implementations. It performs well in almost every conceivable
|
|
network environment and provides excellent guarantees against data
|
|
corruption caused by network unreliability. TCP is often a
|
|
requirement for mounting a server through a network firewall.
|
|
<P>
|
|
|
|
Under normal circumstances, networks drop packets much more
|
|
frequently than NFS servers drop requests. As such, an aggressive
|
|
retransmit timeout setting for NFS over TCP is unnecessary. Typical
|
|
timeout settings for NFS over TCP are between one and ten minutes.
|
|
After the client exhausts its retransmits (the value of the
|
|
<B>retrans</B>
|
|
|
|
mount option), it assumes a network partition has occurred,
|
|
and attempts to reconnect to the server on a fresh socket. Since
|
|
TCP itself makes network data transfer reliable,
|
|
<B>rsize</B>
|
|
|
|
and
|
|
<B>wsize</B>
|
|
|
|
can safely be allowed to default to the largest values supported by
|
|
both client and server, independent of the network's MTU size.
|
|
<A NAME="lbAM"> </A>
|
|
<H3>Using the mountproto mount option</H3>
|
|
|
|
This section applies only to NFS version 2 and version 3 mounts
|
|
since NFS version 4 does not use a separate protocol for mount
|
|
requests.
|
|
<P>
|
|
|
|
The Linux NFS client can use a different transport for
|
|
contacting an NFS server's rpcbind service, its mountd service,
|
|
its Network Lock Manager (NLM) service, and its NFS service.
|
|
The exact transports employed by the Linux NFS client for
|
|
each mount point depends on the settings of the transport
|
|
mount options, which include
|
|
<B>proto</B>,
|
|
|
|
<B>mountproto</B>,
|
|
|
|
<B>udp</B>, and <B>tcp</B>.
|
|
|
|
<P>
|
|
|
|
The client sends Network Status Manager (NSM) notifications
|
|
via UDP no matter what transport options are specified, but
|
|
listens for server NSM notifications on both UDP and TCP.
|
|
The NFS Access Control List (NFSACL) protocol shares the same
|
|
transport as the main NFS service.
|
|
<P>
|
|
|
|
If no transport options are specified, the Linux NFS client
|
|
uses UDP to contact the server's mountd service, and TCP to
|
|
contact its NLM and NFS services by default.
|
|
<P>
|
|
|
|
If the server does not support these transports for these services, the
|
|
<B><A HREF="/cgi-bin/man/man2html?8+mount">mount</A></B>(8)
|
|
|
|
command attempts to discover what the server supports, and then retries
|
|
the mount request once using the discovered transports.
|
|
If the server does not advertise any transport supported by the client
|
|
or is misconfigured, the mount request fails.
|
|
If the
|
|
<B>bg</B>
|
|
|
|
option is in effect, the mount command backgrounds itself and continues
|
|
to attempt the specified mount request.
|
|
<P>
|
|
|
|
When the
|
|
<B>proto</B>
|
|
|
|
option, the
|
|
<B>udp</B>
|
|
|
|
option, or the
|
|
<B>tcp</B>
|
|
|
|
option is specified but the
|
|
<B>mountproto</B>
|
|
|
|
option is not, the specified transport is used to contact
|
|
both the server's mountd service and the NLM and NFS services.
|
|
<P>
|
|
|
|
If the
|
|
<B>mountproto</B>
|
|
|
|
option is specified but none of the
|
|
<B>proto</B>, <B>udp</B> or <B>tcp</B>
|
|
|
|
options are specified, then the specified transport is used for the
|
|
initial mountd request, but the mount command attempts to discover
|
|
what the server supports for the NFS protocol, preferring TCP if
|
|
both transports are supported.
|
|
<P>
|
|
|
|
If both the
|
|
<B>mountproto</B> and <B>proto</B>
|
|
|
|
(or
|
|
<B>udp</B> or <B>tcp</B>)
|
|
|
|
options are specified, then the transport specified by the
|
|
<B>mountproto</B>
|
|
|
|
option is used for the initial mountd request, and the transport
|
|
specified by the
|
|
<B>proto</B>
|
|
|
|
option (or the
|
|
<B>udp</B> or <B>tcp</B> options)
|
|
|
|
is used for NFS, no matter what order these options appear.
|
|
No automatic service discovery is performed if these options are
|
|
specified.
|
|
<P>
|
|
|
|
If any of the
|
|
<B>proto</B>, <B>udp</B>, <B>tcp</B>,
|
|
|
|
or
|
|
<B>mountproto</B>
|
|
|
|
options are specified more than once on the same mount command line,
|
|
then the value of the rightmost instance of each of these options
|
|
takes effect.
|
|
<A NAME="lbAN"> </A>
|
|
<H3>Using NFS over UDP on high-speed links</H3>
|
|
|
|
Using NFS over UDP on high-speed links such as Gigabit
|
|
<B>can cause silent data corruption</B>.
|
|
|
|
<P>
|
|
|
|
The problem can be triggered at high loads, and is caused by problems in
|
|
IP fragment reassembly. NFS reads and writes typically transmit UDP packets
of 4 kilobytes or more, which have to be broken up into several fragments
|
|
in order to be sent over the Ethernet link, which limits packets to 1500
|
|
bytes by default. This process happens at the IP network layer and is
|
|
called fragmentation.
|
|
<P>
|
|
|
|
In order to identify fragments that belong together, IP assigns a 16-bit
|
|
<I>IP ID</I>
|
|
|
|
value to each packet; fragments generated from the same UDP packet
|
|
will have the same IP ID. The receiving system will collect these
|
|
fragments and combine them to form the original UDP packet. This process
|
|
is called reassembly. The default timeout for packet reassembly is
|
|
30 seconds; if the network stack does not receive all fragments of
|
|
a given packet within this interval, it assumes the missing fragment(s)
|
|
got lost and discards those it already received.
|
|
<P>
|
|
|
|
The problem this creates over high-speed links is that it is possible
|
|
to send more than 65536 packets within 30 seconds. In fact, with
|
|
heavy NFS traffic one can observe that the IP IDs repeat after about
|
|
5 seconds.
|
|
<P>
|
|
|
|
This has serious effects on reassembly: if one fragment gets lost,
|
|
another fragment
|
|
<I>from a different packet</I>
|
|
|
|
but with the
|
|
<I>same IP ID</I>
|
|
|
|
will arrive within the 30 second timeout, and the network stack will
|
|
combine these fragments to form a new packet. Most of the time, network
|
|
layers above IP will detect this mismatched reassembly - in the case
|
|
of UDP, the UDP checksum, which is a 16 bit checksum over the entire
|
|
packet payload, will usually not match, and UDP will discard the
|
|
bad packet.
|
|
<P>
|
|
|
|
However, the UDP checksum is only 16 bits, so there is a chance of 1 in
|
|
65536 that it will match even if the packet payload is completely
|
|
random (which very often isn't the case). If that is the case,
|
|
silent data corruption will occur.
|
|
<P>
|
|
|
|
This potential should be taken seriously, at least on Gigabit
|
|
Ethernet.
|
|
Network speeds of 100 Mbit/s should be considered less
problematic, because with most traffic patterns IP ID wraparound
|
|
will take much longer than 30 seconds.
|
|
<P>
|
|
|
|
It is therefore strongly recommended to use
|
|
<B>NFS over TCP where possible</B>,
|
|
|
|
since TCP does not perform fragmentation.
|
|
<P>
|
|
|
|
If you absolutely have to use NFS over UDP over Gigabit Ethernet,
|
|
some steps can be taken to mitigate the problem and reduce the
|
|
probability of corruption:
|
|
<DL COMPACT>
|
|
<DT id="97"><I>Jumbo frames:</I>
|
|
|
|
<DD>
|
|
Many Gigabit network cards are capable of transmitting
|
|
frames bigger than the 1500 byte limit of traditional Ethernet, typically
|
|
9000 bytes. Using jumbo frames of 9000 bytes will allow you to run NFS over
|
|
UDP at a page size of 8K without fragmentation. Of course, this is
|
|
only feasible if all involved stations support jumbo frames.
|
|
<DT id="98"><DD>
|
|
To enable a machine to send jumbo frames on cards that support it,
|
|
it is sufficient to configure the interface for a MTU value of 9000.
|
|
<DT id="99"><I>Lower reassembly timeout:</I>
|
|
|
|
<DD>
|
|
By lowering this timeout below the time it takes the IP ID counter
|
|
to wrap around, incorrect reassembly of fragments can be prevented
|
|
as well. To do so, simply write the new timeout value (in seconds)
|
|
to the file
|
|
<B>/proc/sys/net/ipv4/ipfrag_time</B>.
|
|
|
|
<DT id="100"><DD>
|
|
A value of 2 seconds will greatly reduce the probability of IP ID clashes on
|
|
a single Gigabit link, while still allowing for a reasonable timeout
|
|
when receiving fragmented traffic from distant peers.
|
|
</DL>
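<P>
A brief sketch of both mitigations follows, assuming a hypothetical interface
name <B>eth0</B> and the iproute2 tooling; adjust the interface name and the
timeout value to suit the local environment:
<P>
<PRE>
        # Enable 9000-byte jumbo frames on the interface carrying NFS traffic
        ip link set dev eth0 mtu 9000

        # Lower the IP fragment reassembly timeout to 2 seconds
        echo 2 > /proc/sys/net/ipv4/ipfrag_time
</PRE>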
|
|
<A NAME="lbAO"> </A>
|
|
<H2>DATA AND METADATA COHERENCE</H2>
|
|
|
|
Some modern cluster file systems provide
|
|
perfect cache coherence among their clients.
|
|
Perfect cache coherence among disparate NFS clients
|
|
is expensive to achieve, especially on wide area networks.
|
|
As such, NFS settles for weaker cache coherence that
|
|
satisfies the requirements of most file sharing types.
|
|
<A NAME="lbAP"> </A>
|
|
<H3>Close-to-open cache consistency</H3>
|
|
|
|
Typically file sharing is completely sequential.
|
|
First client A opens a file, writes something to it, then closes it.
|
|
Then client B opens the same file, and reads the changes.
|
|
<P>
|
|
|
|
When an application opens a file stored on an NFS version 3 server,
|
|
the NFS client checks that the file exists on the server
|
|
and that the opener is permitted to access it, by sending a GETATTR or ACCESS request.
|
|
The NFS client sends these requests
|
|
regardless of the freshness of the file's cached attributes.
|
|
<P>
|
|
|
|
When the application closes the file,
|
|
the NFS client writes back any pending changes
|
|
to the file so that the next opener can view the changes.
|
|
This also gives the NFS client an opportunity to report
|
|
write errors to the application via the return code from
|
|
<B><A HREF="/cgi-bin/man/man2html?2+close">close</A></B>(2).
|
|
|
|
<P>
|
|
|
|
The behavior of checking at open time and flushing at close time
|
|
is referred to as
|
|
<I>close-to-open cache consistency</I>,
|
|
|
|
or
|
|
<I>CTO</I>.
|
|
|
|
It can be disabled for an entire mount point using the
|
|
<B>nocto</B>
|
|
|
|
mount option.
|
|
<A NAME="lbAQ"> </A>
|
|
<H3>Weak cache consistency</H3>
|
|
|
|
There are still opportunities for a client's data cache
|
|
to contain stale data.
|
|
The NFS version 3 protocol introduced "weak cache consistency"
|
|
(also known as WCC) which provides a way of efficiently checking
|
|
a file's attributes before and after a single request.
|
|
This allows a client to help identify changes
|
|
that could have been made by other clients.
|
|
<P>
|
|
|
|
When a client is using many concurrent operations
|
|
that update the same file at the same time
|
|
(for example, during asynchronous write behind),
|
|
it is still difficult to tell whether it was
|
|
that client's updates or some other client's updates
|
|
that altered the file.
|
|
<A NAME="lbAR"> </A>
|
|
<H3>Attribute caching</H3>
|
|
|
|
Use the
|
|
<B>noac</B>
|
|
|
|
mount option to achieve attribute cache coherence
|
|
among multiple clients.
|
|
Almost every file system operation checks
|
|
file attribute information.
|
|
The client keeps this information cached
|
|
for a period of time to reduce network and server load.
|
|
When
|
|
<B>noac</B>
|
|
|
|
is in effect, a client's file attribute cache is disabled,
|
|
so each operation that needs to check a file's attributes
|
|
is forced to go back to the server.
|
|
This permits a client to see changes to a file very quickly,
|
|
at the cost of many extra network operations.
|
|
<P>
|
|
|
|
Be careful not to confuse the
|
|
<B>noac</B>
|
|
|
|
option with "no data caching."
|
|
The
|
|
<B>noac</B>
|
|
|
|
mount option prevents the client from caching file metadata,
|
|
but there are still races that may result in data cache incoherence
|
|
between client and server.
|
|
<P>
|
|
|
|
The NFS protocol is not designed to support
|
|
true cluster file system cache coherence
|
|
without some type of application serialization.
|
|
If absolute cache coherence among clients is required,
|
|
applications should use file locking. Alternatively, applications
|
|
can also open their files with the O_DIRECT flag
|
|
to disable data caching entirely.
|
|
<A NAME="lbAS"> </A>
|
|
<H3>File timestamp maintenance</H3>
|
|
|
|
NFS servers are responsible for managing file and directory timestamps
|
|
(<B>atime</B>,
|
|
|
|
<B>ctime</B>, and
|
|
|
|
<B>mtime</B>).
|
|
|
|
When a file is accessed or updated on an NFS server,
|
|
the file's timestamps are updated just like they would be on a filesystem
|
|
local to an application.
|
|
<P>
|
|
|
|
NFS clients cache file attributes, including timestamps.
|
|
A file's timestamps are updated on NFS clients when its attributes
|
|
are retrieved from the NFS server.
|
|
Thus there may be some delay before timestamp updates
|
|
on an NFS server appear to applications on NFS clients.
|
|
<P>
|
|
|
|
To comply with the POSIX filesystem standard, the Linux NFS client
|
|
relies on NFS servers to keep a file's
|
|
<B>mtime</B>
|
|
|
|
and
|
|
<B>ctime</B>
|
|
|
|
timestamps properly up to date.
|
|
It does this by flushing local data changes to the server
|
|
before reporting
|
|
<B>mtime</B>
|
|
|
|
to applications via system calls such as
|
|
<B><A HREF="/cgi-bin/man/man2html?2+stat">stat</A></B>(2).
|
|
|
|
<P>
|
|
|
|
The Linux client handles
|
|
<B>atime</B>
|
|
|
|
updates more loosely, however.
|
|
NFS clients maintain good performance by caching data,
|
|
but that means that application reads, which normally update
|
|
<B>atime</B>,
|
|
|
|
are not reflected to the server where a file's
|
|
<B>atime</B>
|
|
|
|
is actually maintained.
|
|
<P>
|
|
|
|
Because of this caching behavior,
|
|
the Linux NFS client does not support generic atime-related mount options.
|
|
See
|
|
<B><A HREF="/cgi-bin/man/man2html?8+mount">mount</A></B>(8)
|
|
|
|
for details on these options.
|
|
<P>
|
|
|
|
In particular, the
|
|
<B>atime</B>/<B>noatime</B>,
|
|
|
|
<B>diratime</B>/<B>nodiratime</B>,
|
|
|
|
<B>relatime</B>/<B>norelatime</B>,
|
|
|
|
and
|
|
<B>strictatime</B>/<B>nostrictatime</B>
|
|
|
|
mount options have no effect on NFS mounts.
|
|
<P>
|
|
|
|
<I>/proc/mounts</I>
|
|
|
|
may report that the
|
|
<B>relatime</B>
|
|
|
|
mount option is set on NFS mounts, but in fact the
|
|
<B>atime</B>
|
|
|
|
semantics are always as described here, and are not like
|
|
<B>relatime</B>
|
|
|
|
semantics.
|
|
<A NAME="lbAT"> </A>
|
|
<H3>Directory entry caching</H3>
|
|
|
|
The Linux NFS client caches the result of all NFS LOOKUP requests.
|
|
If the requested directory entry exists on the server,
|
|
the result is referred to as a
|
|
<I>positive</I> lookup result.
|
|
|
|
If the requested directory entry does not exist on the server
|
|
(that is, the server returned ENOENT),
|
|
the result is referred to as a
|
|
<I>negative</I> lookup result.
|
|
|
|
<P>
|
|
|
|
To detect when directory entries have been added or removed
|
|
on the server,
|
|
the Linux NFS client watches a directory's mtime.
|
|
If the client detects a change in a directory's mtime,
|
|
the client drops all cached LOOKUP results for that directory.
|
|
Since the directory's mtime is a cached attribute, it may
|
|
take some time before a client notices it has changed.
|
|
See the descriptions of the
|
|
<B>acdirmin</B>, <B>acdirmax</B>, and <B>noac</B>
|
|
|
|
mount options for more information about
|
|
how long a directory's mtime is cached.
|
|
<P>
|
|
|
|
Caching directory entries improves the performance of applications that
|
|
do not share files with applications on other clients.
|
|
Using cached information about directories can interfere
|
|
with applications that run concurrently on multiple clients and
|
|
need to detect the creation or removal of files quickly, however.
|
|
The
|
|
<B>lookupcache</B>
|
|
|
|
mount option allows some tuning of directory entry caching behavior.
|
|
<P>
|
|
|
|
Before kernel release 2.6.28,
|
|
the Linux NFS client tracked only positive lookup results.
|
|
This permitted applications to detect new directory entries
|
|
created by other clients quickly while still providing some of the
|
|
performance benefits of caching.
|
|
If an application depends on the previous lookup caching behavior
|
|
of the Linux NFS client, you can use
|
|
<B>lookupcache=positive</B>.
|
|
|
|
<P>
|
|
|
|
If the client ignores its cache and validates every application
|
|
lookup request with the server,
|
|
that client can immediately detect when a new directory
|
|
entry has been either created or removed by another client.
|
|
You can specify this behavior using
|
|
<B>lookupcache=none</B>.
|
|
|
|
The extra NFS requests needed if the client does not
|
|
cache directory entries can exact a performance penalty.
|
|
Disabling lookup caching
|
|
should result in less of a performance penalty than using
|
|
<B>noac</B>,
|
|
|
|
and has no effect on how the NFS client caches the attributes of files.
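<P>
For instance (a hedged illustration rather than a recommendation), a mount
point that must detect files created or removed by other clients immediately
could disable lookup caching while leaving attribute caching in place:
<P>
<PRE>
        server:/export  /mnt  nfs  lookupcache=none  0 0
</PRE>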
|
|
<P>
|
|
|
|
<A NAME="lbAU"> </A>
|
|
<H3>The sync mount option</H3>
|
|
|
|
The NFS client treats the
|
|
<B>sync</B>
|
|
|
|
mount option differently than some other file systems
|
|
(refer to
|
|
<B><A HREF="/cgi-bin/man/man2html?8+mount">mount</A></B>(8)
|
|
|
|
for a description of the generic
|
|
<B>sync</B>
|
|
|
|
and
|
|
<B>async</B>
|
|
|
|
mount options).
|
|
If neither
|
|
<B>sync</B>
|
|
|
|
nor
|
|
<B>async</B>
|
|
|
|
is specified (or if the
|
|
<B>async</B>
|
|
|
|
option is specified),
|
|
the NFS client delays sending application
|
|
writes to the server
|
|
until any of these events occur:
|
|
<DL COMPACT>
|
|
<DT id="101"><DD>
|
|
Memory pressure forces reclamation of system memory resources.
|
|
<DT id="102"><DD>
|
|
An application flushes file data explicitly with
|
|
<B><A HREF="/cgi-bin/man/man2html?2+sync">sync</A></B>(2),
|
|
|
|
<B><A HREF="/cgi-bin/man/man2html?2+msync">msync</A></B>(2),
|
|
|
|
or
|
|
<B><A HREF="/cgi-bin/man/man2html?3+fsync">fsync</A></B>(3).
|
|
|
|
<DT id="103"><DD>
|
|
An application closes a file with
|
|
<B><A HREF="/cgi-bin/man/man2html?2+close">close</A></B>(2).
|
|
|
|
<DT id="104"><DD>
|
|
The file is locked/unlocked via
|
|
<B><A HREF="/cgi-bin/man/man2html?2+fcntl">fcntl</A></B>(2).
|
|
|
|
</DL>
|
|
<P>
|
|
|
|
In other words, under normal circumstances,
|
|
data written by an application may not immediately appear
|
|
on the server that hosts the file.
|
|
<P>
|
|
|
|
If the
|
|
<B>sync</B>
|
|
|
|
option is specified on a mount point,
|
|
any system call that writes data to files on that mount point
|
|
causes that data to be flushed to the server
|
|
before the system call returns control to user space.
|
|
This provides greater data cache coherence among clients,
|
|
but at a significant performance cost.
|
|
<P>
|
|
|
|
Applications can use the O_SYNC open flag to force application
|
|
writes to individual files to go to the server immediately without
|
|
the use of the
|
|
<B>sync</B>
|
|
|
|
mount option.
|
|
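<P>

For example, a mount point whose writes must reach the server before each
write-related system call returns can be listed in
<I>/etc/fstab</I>
with the
<B>sync</B>
option.  The server name and paths below are illustrative placeholders only:
<P>

<PRE>

server:/export/journal  /mnt/journal  nfs  sync  0 0
</PRE>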
<A NAME="lbAV"> </A>
|
|
<H3>Using file locks with NFS</H3>
|
|
|
|
The Network Lock Manager protocol is a separate sideband protocol
|
|
used to manage file locks in NFS version 2 and version 3.
|
|
To support lock recovery after a client or server reboot,
|
|
a second sideband protocol --
|
|
known as the Network Status Manager protocol --
|
|
is also required.
|
|
In NFS version 4,
|
|
file locking is supported directly in the main NFS protocol,
|
|
and the NLM and NSM sideband protocols are not used.
|
|
<P>
|
|
|
|
In most cases, NLM and NSM services are started automatically,
|
|
and no extra configuration is required.
|
|
Configure all NFS clients with fully-qualified domain names
|
|
to ensure that NFS servers can find clients to notify them of server reboots.
|
|
<P>
|
|
|
|
NLM supports advisory file locks only.
|
|
To lock NFS files, use
|
|
<B><A HREF="/cgi-bin/man/man2html?2+fcntl">fcntl</A></B>(2)
|
|
|
|
with the F_GETLK and F_SETLK commands.
|
|
The NFS client converts file locks obtained via
|
|
<B><A HREF="/cgi-bin/man/man2html?2+flock">flock</A></B>(2)
|
|
|
|
to advisory locks.
|
|
<P>
|
|
|
|
When mounting servers that do not support the NLM protocol,
|
|
or when mounting an NFS server through a firewall
|
|
that blocks the NLM service port,
|
|
specify the
|
|
<B>nolock</B>
|
|
|
|
mount option. NLM locking must be disabled with the
|
|
<B>nolock</B>
|
|
|
|
option when using NFS to mount
|
|
<I>/var</I>
|
|
|
|
because
|
|
<I>/var</I>
|
|
|
|
contains files used by the NLM implementation on Linux.
|
|
<P>
|
|
|
|
Specifying the
|
|
<B>nolock</B>
|
|
|
|
option may also be advised to improve the performance
|
|
of a proprietary application which runs on a single client
|
|
and uses file locks extensively.
|
|
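<P>

For example, a client that mounts its
<I>/var</I>
file system from an NFS server would disable NLM locking on that mount.
The server name and export path below are illustrative placeholders only:
<P>

<PRE>

server:/export/var  /var  nfs  nolock  0 0
</PRE>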
<A NAME="lbAW"> </A>
|
|
<H3>NFS version 4 caching features</H3>
|
|
|
|
The data and metadata caching behavior of NFS version 4
|
|
clients is similar to that of earlier versions.
|
|
However, NFS version 4 adds two features that improve
|
|
cache behavior:
|
|
<I>change attributes</I>
|
|
|
|
and
|
|
<I>file delegation</I>.
|
|
|
|
<P>
|
|
|
|
The
|
|
<I>change attribute</I>
|
|
|
|
is a new part of NFS file and directory metadata
|
|
which tracks data changes.
|
|
It replaces the use of a file's modification
|
|
and change time stamps
|
|
as a way for clients to validate the content
|
|
of their caches.
|
|
Change attributes are independent of the time stamp
|
|
resolution on either the server or client, however.
|
|
<P>
|
|
|
|
A
|
|
<I>file delegation</I>
|
|
|
|
is a contract between an NFS version 4 client
|
|
and server that allows the client to treat a file temporarily
|
|
as if no other client is accessing it.
|
|
The server promises to notify the client (via a callback request) if another client
|
|
attempts to access that file.
|
|
Once a file has been delegated to a client, the client can
|
|
cache that file's data and metadata aggressively without
|
|
contacting the server.
|
|
<P>
|
|
|
|
File delegations come in two flavors:
|
|
<I>read</I>
|
|
|
|
and
|
|
<I>write</I>.
|
|
|
|
A
|
|
<I>read</I>
|
|
|
|
delegation means that the server notifies the client
|
|
about any other clients that want to write to the file.
|
|
A
|
|
<I>write</I>
|
|
|
|
delegation means that the client gets notified about
|
|
either read or write accessors.
|
|
<P>
|
|
|
|
Servers grant file delegations when a file is opened,
|
|
and can recall delegations at any time when another
|
|
client wants access to the file that conflicts with
|
|
any delegations already granted.
|
|
Delegations on directories are not supported.
|
|
<P>
|
|
|
|
In order to support delegation callback, the server
|
|
checks the network return path to the client during
|
|
the client's initial contact with the server.
|
|
If contact with the client cannot be established,
|
|
the server simply does not grant any delegations to
|
|
that client.
|
|
<A NAME="lbAX"> </A>
|
|
<H2>SECURITY CONSIDERATIONS</H2>
|
|
|
|
NFS servers control access to file data,
|
|
but they depend on their RPC implementation
|
|
to provide authentication of NFS requests.
|
|
Traditional NFS access control mimics
|
|
the standard mode bit access control provided in local file systems.
|
|
Traditional RPC authentication uses a number
|
|
to represent each user
|
|
(usually the user's own uid),
|
|
a number to represent the user's group (the user's gid),
|
|
and a set of up to 16 auxiliary group numbers
|
|
to represent other groups of which the user may be a member.
|
|
<P>
|
|
|
|
Typically, file data and user ID values appear unencrypted
|
|
(i.e. "in the clear") on the network.
|
|
Moreover, NFS versions 2 and 3 use
|
|
separate sideband protocols for mounting,
|
|
locking and unlocking files,
|
|
and reporting system status of clients and servers.
|
|
These auxiliary protocols use no authentication.
|
|
<P>
|
|
|
|
In addition to combining these sideband protocols with the main NFS protocol,
|
|
NFS version 4 introduces more advanced forms of access control,
|
|
authentication, and in-transit data protection.
|
|
The NFS version 4 specification mandates support for
|
|
strong authentication and security flavors
|
|
that provide per-RPC integrity checking and encryption.
|
|
Because NFS version 4 combines the
|
|
function of the sideband protocols into the main NFS protocol,
|
|
the new security features apply to all NFS version 4 operations
|
|
including mounting, file locking, and so on.
|
|
RPCGSS authentication can also be used with NFS versions 2 and 3,
|
|
but it does not protect their sideband protocols.
|
|
<P>
|
|
|
|
The
|
|
<B>sec</B>
|
|
|
|
mount option specifies the security flavor used for operations
|
|
on behalf of users on that NFS mount point.
|
|
Specifying
|
|
<B>sec=krb5</B>
|
|
|
|
provides cryptographic proof of a user's identity in each RPC request.
|
|
This provides strong verification of the identity of users
|
|
accessing data on the server.
|
|
Note that additional configuration besides adding this mount option
|
|
is required in order to enable Kerberos security.
|
|
Refer to the
|
|
<B><A HREF="/cgi-bin/man/man2html?8+rpc.gssd">rpc.gssd</A></B>(8)
|
|
|
|
man page for details.
|
|
<P>
|
|
|
|
Two additional flavors of Kerberos security are supported:
|
|
<B>krb5i</B>
|
|
|
|
and
|
|
<B>krb5p</B>.
|
|
|
|
The
|
|
<B>krb5i</B>
|
|
|
|
security flavor provides a cryptographically strong guarantee
|
|
that the data in each RPC request has not been tampered with.
|
|
The
|
|
<B>krb5p</B>
|
|
|
|
security flavor encrypts every RPC request
|
|
to prevent data exposure during network transit; however,
|
|
expect some performance impact
|
|
when using integrity checking or encryption.
|
|
Similar support for other forms of cryptographic security
|
|
is also available.
|
|
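<P>

For example, an export carrying sensitive data might be mounted with
Kerberos authentication plus per-request integrity checking
(<B>sec=krb5i</B>)
or full encryption of each request
(<B>sec=krb5p</B>).
The server name and paths below are illustrative placeholders only,
and the Kerberos configuration described in
<B><A HREF="/cgi-bin/man/man2html?8+rpc.gssd">rpc.gssd</A></B>(8)
must already be in place:
<P>

<PRE>

server:/export/secure  /mnt/secure  nfs  sec=krb5i  0 0
</PRE>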
<A NAME="lbAY"> </A>
|
|
<H3>NFS version 4 filesystem crossing</H3>
|
|
|
|
The NFS version 4 protocol allows
|
|
a client to renegotiate the security flavor
|
|
when the client crosses into a new filesystem on the server.
|
|
The newly negotiated flavor effects only accesses of the new filesystem.
|
|
<P>
|
|
|
|
Such negotiation typically occurs when a client crosses
|
|
from a server's pseudo-fs
|
|
into one of the server's exported physical filesystems,
|
|
which often have more restrictive security settings than the pseudo-fs.
|
|
<A NAME="lbAZ"> </A>
|
|
<H3>NFS version 4 Leases</H3>
|
|
|
|
In NFS version 4, a lease is a period of time during which a server
|
|
irrevocably grants a file lock to a client.
|
|
If the lease expires, the server is allowed to revoke that lock.
|
|
Clients periodically renew their leases to prevent lock revocation.
|
|
<P>
|
|
|
|
After an NFS version 4 server reboots, each client tells the
|
|
server about all file open and lock state under its lease
|
|
before operation can continue.
|
|
If the client reboots, the server frees all open and lock state
|
|
associated with that client's lease.
|
|
<P>
|
|
|
|
As part of establishing a lease, therefore,
|
|
a client must identify itself to a server.
|
|
A fixed string is used to distinguish that client from
|
|
others, and a changeable verifier is used to indicate
|
|
when the client has rebooted.
|
|
<P>
|
|
|
|
A client uses a particular security flavor and principal
|
|
when performing the operations to establish a lease.
|
|
If two clients happen to present the same identity string,
|
|
a server can use their principals to detect that they are
|
|
different clients, and prevent one client from interfering
|
|
with the other's lease.
|
|
<P>
|
|
|
|
The Linux NFS client establishes one lease for each server.
|
|
Lease management operations, such as lease renewal, are not
|
|
done on behalf of a particular file, lock, user, or mount
|
|
point, but on behalf of the whole client that owns that lease.
|
|
These operations must use the same security flavor and
|
|
principal that was used when the lease was established,
|
|
even across client reboots.
|
|
<P>
|
|
|
|
When Kerberos is configured on a Linux NFS client
|
|
(i.e., there is a
|
|
<I>/etc/krb5.keytab</I>
|
|
|
|
on that client), the client attempts to use a Kerberos
|
|
security flavor for its lease management operations.
|
|
This provides strong authentication of the client to
|
|
each server it contacts.
|
|
By default, the client uses the
|
|
<I>host/</I>
|
|
|
|
or
|
|
<I>nfs/</I>
|
|
|
|
service principal in its
|
|
<I>/etc/krb5.keytab</I>
|
|
|
|
for this purpose.
|
|
<P>
|
|
|
|
If the client has Kerberos configured, but the server
|
|
does not, or if the client does not have a keytab or
|
|
the requisite service principals, the client uses
|
|
<I>AUTH_SYS</I>
|
|
|
|
and UID 0 for lease management.
|
|
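<P>

On a client where the MIT Kerberos utilities are installed, the keytab can
be inspected to confirm that an
<I>nfs/</I>
or
<I>host/</I>
service principal for the client's hostname is available for lease
management (the keytab path is the default described above):
<P>

<PRE>

klist -k /etc/krb5.keytab
</PRE>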
<A NAME="lbBA"> </A>
|
|
<H3>Using non-privileged source ports</H3>
|
|
|
|
NFS clients usually communicate with NFS servers via network sockets.
|
|
Each end of a socket is assigned a port value, which is simply a number
|
|
between 1 and 65535 that distinguishes socket endpoints at the same
|
|
IP address.
|
|
A socket is uniquely defined by a tuple that includes the transport
|
|
protocol (TCP or UDP) and the port values and IP addresses of both
|
|
endpoints.
|
|
<P>
|
|
|
|
The NFS client can choose any source port value for its sockets,
|
|
but usually chooses a
|
|
<I>privileged</I>
|
|
|
|
port.
|
|
A privileged port is a port value less than 1024.
|
|
Only a process with root privileges may create a socket
|
|
with a privileged source port.
|
|
<P>
|
|
|
|
The exact range of privileged source ports that can be chosen is
|
|
set by a pair of sysctls to avoid choosing a well-known port, such as
|
|
the port used by ssh.
|
|
This means the number of source ports available for the NFS client,
|
|
and therefore the number of socket connections that can be used
|
|
at the same time,
|
|
is practically limited to only a few hundred.
|
|
<P>
|
|
|
|
As described above, the traditional default NFS authentication scheme,
|
|
known as AUTH_SYS, relies on sending local UID and GID numbers to identify
|
|
users making NFS requests.
|
|
An NFS server assumes that if a connection comes from a privileged port,
|
|
the UID and GID numbers in the NFS requests on this connection have been
|
|
verified by the client's kernel or some other local authority.
|
|
This is an easy system to spoof, but on a trusted physical network between
|
|
trusted hosts, it is entirely adequate.
|
|
<P>
|
|
|
|
Roughly speaking, one socket is used for each NFS mount point.
|
|
If a client could use non-privileged source ports as well,
|
|
the number of sockets allowed,
|
|
and thus the maximum number of concurrent mount points,
|
|
would be much larger.
|
|
<P>
|
|
|
|
Using non-privileged source ports may compromise server security somewhat,
|
|
since any user on AUTH_SYS mount points can now pretend to be any other
|
|
when making NFS requests.
|
|
Thus NFS servers do not support this by default.
|
|
They explicitly allow it usually via an export option.
|
|
<P>
|
|
|
|
To retain good security while allowing as many mount points as possible,
|
|
it is best to allow non-privileged client connections only if the server
|
|
and client both require strong authentication, such as Kerberos.
|
|
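<P>

For example, a client can be directed to use a non-privileged source port
with the
<B>noresvport</B>
mount option, provided the server export permits connections from
non-privileged ports (commonly via the
<B>insecure</B>
option described in
<B><A HREF="/cgi-bin/man/man2html?5+exports">exports</A></B>(5)).
The server name and paths below are illustrative placeholders only:
<P>

<PRE>

mount -o noresvport,sec=krb5 server:/export /mnt
</PRE>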
<A NAME="lbBB"> </A>
|
|
<H3>Mounting through a firewall</H3>
|
|
|
|
A firewall may reside between an NFS client and server,
|
|
or the client or server may block some of its own ports via IP
|
|
filter rules.
|
|
It is still possible to mount an NFS server through a firewall,
|
|
though some of the
|
|
<B><A HREF="/cgi-bin/man/man2html?8+mount">mount</A></B>(8)
|
|
|
|
command's automatic service endpoint discovery mechanisms may not work; this
|
|
requires you to provide specific endpoint details via NFS mount options.
|
|
<P>
|
|
|
|
NFS servers normally run a portmapper or rpcbind daemon to advertise
|
|
their service endpoints to clients. Clients use the rpcbind daemon to determine:
|
|
<DL COMPACT>
|
|
<DT id="105"><DD>
|
|
What network port each RPC-based service is using
|
|
<DT id="106"><DD>
|
|
What transport protocols each RPC-based service supports
|
|
</DL>
|
|
<P>
|
|
|
|
The rpcbind daemon uses a well-known port number (111) to help clients find a service endpoint.
|
|
Although NFS often uses a standard port number (2049),
|
|
auxiliary services such as the NLM service can choose
|
|
any unused port number at random.
|
|
<P>
|
|
|
|
Common firewall configurations block the well-known rpcbind port.
|
|
In the absense of an rpcbind service,
|
|
the server administrator fixes the port number
|
|
of NFS-related services so that the firewall
|
|
can allow access to specific NFS service ports.
|
|
Client administrators then specify the port number
|
|
for the mountd service via the
|
|
<B><A HREF="/cgi-bin/man/man2html?8+mount">mount</A></B>(8)
|
|
|
|
command's
|
|
<B>mountport</B>
|
|
|
|
option.
|
|
It may also be necessary to enforce the use of TCP or UDP
|
|
if the firewall blocks one of those transports.
|
|
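<P>

For example, if the server administrator has fixed the NFS and mountd
services at ports that the firewall permits, the client can bypass rpcbind
discovery by naming those ports and the transport explicitly.
The port numbers, server name, and paths below are illustrative
placeholders, not defaults to rely on:
<P>

<PRE>

mount -o port=2049,mountport=20048,proto=tcp server:/export /mnt
</PRE>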
<A NAME="lbBC"> </A>
|
|
<H3>NFS Access Control Lists</H3>
|
|
|
|
Solaris allows NFS version 3 clients direct access
|
|
to POSIX Access Control Lists stored in its local file systems.
|
|
This proprietary sideband protocol, known as NFSACL,
|
|
provides richer access control than mode bits.
|
|
Linux implements this protocol
|
|
for compatibility with the Solaris NFS implementation.
|
|
The NFSACL protocol never became a standard part
|
|
of the NFS version 3 specification, however.
|
|
<P>
|
|
|
|
The NFS version 4 specification mandates a new version
|
|
of Access Control Lists that are semantically richer than POSIX ACLs.
|
|
NFS version 4 ACLs are not fully compatible with POSIX ACLs; as such,
|
|
some translation between the two is required
|
|
in an environment that mixes POSIX ACLs and NFS version 4.
|
|
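<P>

On Linux clients, these ACLs are usually examined and modified with
userspace tools rather than mount options.  Where the corresponding
packages are installed,
<B>getfacl</B>/<B>setfacl</B>
operate on POSIX ACLs over the NFSACL sideband protocol on NFS version 3
mounts, while
<B>nfs4_getfacl</B>/<B>nfs4_setfacl</B>
operate on NFS version 4 ACLs.  The path below is an illustrative
placeholder only:
<P>

<PRE>

getfacl /mnt/export/file
nfs4_getfacl /mnt/export/file
</PRE>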
<A NAME="lbBD"> </A>
|
|
<H2>THE REMOUNT OPTION</H2>
|
|
|
|
Generic mount options such as
|
|
<B>rw</B> and <B>sync</B>
|
|
|
|
can be modified on NFS mount points using the
|
|
<B>remount</B>
|
|
|
|
option.
|
|
See
|
|
<B><A HREF="/cgi-bin/man/man2html?8+mount">mount</A></B>(8)
|
|
|
|
for more information on generic mount options.
|
|
<P>
|
|
|
|
With few exceptions, NFS-specific options
|
|
are not able to be modified during a remount.
|
|
The underlying transport or NFS version
|
|
cannot be changed by a remount, for example.
|
|
<P>
|
|
|
|
Performing a remount on an NFS file system mounted with the
|
|
<B>noac</B>
|
|
|
|
option may have unintended consequences.
|
|
The
|
|
<B>noac</B>
|
|
|
|
option is a combination of the generic option
|
|
<B>sync</B>,
|
|
|
|
and the NFS-specific option
|
|
<B>actimeo=0</B>.
|
|
|
|
<A NAME="lbBE"> </A>
|
|
<H3>Unmounting after a remount</H3>
|
|
|
|
For mount points that use NFS versions 2 or 3, the NFS umount subcommand
|
|
depends on knowing the original set of mount options used to perform the
|
|
MNT operation.
|
|
These options are stored on disk by the NFS mount subcommand,
|
|
and can be erased by a remount.
|
|
<P>
|
|
|
|
To ensure that the saved mount options are not erased during a remount,
|
|
specify either the local mount directory, or the server hostname and
|
|
export pathname, but not both, during a remount. For example,
|
|
<P>
|
|
|
|
<PRE>
|
|
|
|
mount -o remount,ro /mnt
|
|
</PRE>
|
|
|
|
<P>
|
|
|
|
merges the mount option
|
|
<B>ro</B>
|
|
|
|
with the mount options already saved on disk for the NFS server mounted at /mnt.
|
|
<A NAME="lbBF"> </A>
|
|
<H2>FILES</H2>
|
|
|
|
<DL COMPACT>
|
|
<DT id="107"><I>/etc/fstab</I>
|
|
|
|
<DD>
|
|
file system table
|
|
<DT id="108"><I>/etc/nfsmount.conf</I>
|
|
|
|
<DD>
|
|
Configuration file for NFS mounts
|
|
</DL>
|
|
<A NAME="lbBG"> </A>
|
|
<H2>BUGS</H2>
|
|
|
|
Before 2.4.7, the Linux NFS client did not support NFS over TCP.
|
|
<P>
|
|
|
|
Before 2.4.20, the Linux NFS client used a heuristic
|
|
to determine whether cached file data was still valid
|
|
rather than using the standard close-to-open cache coherency method
|
|
described above.
|
|
<P>
|
|
|
|
Starting with 2.4.22, the Linux NFS client employs
|
|
a Van Jacobsen-based RTT estimator to determine retransmit
|
|
timeout values when using NFS over UDP.
|
|
<P>
|
|
|
|
Before 2.6.0, the Linux NFS client did not support NFS version 4.
|
|
<P>
|
|
|
|
Before 2.6.8, the Linux NFS client used only synchronous reads and writes
|
|
when the
|
|
<B>rsize</B> and <B>wsize</B>
|
|
|
|
settings were smaller than the system's page size.
|
|
<P>
|
|
|
|
The Linux NFS client does not yet support
|
|
certain optional features of the NFS version 4 protocol,
|
|
such as security negotiation, server referrals, and named attributes.
|
|
<A NAME="lbBH"> </A>
|
|
<H2>SEE ALSO</H2>
|
|
|
|
<B><A HREF="/cgi-bin/man/man2html?5+fstab">fstab</A></B>(5),
|
|
|
|
<B><A HREF="/cgi-bin/man/man2html?8+mount">mount</A></B>(8),
|
|
|
|
<B><A HREF="/cgi-bin/man/man2html?8+umount">umount</A></B>(8),
|
|
|
|
<B><A HREF="/cgi-bin/man/man2html?5+mount.nfs">mount.nfs</A></B>(5),
|
|
|
|
<B><A HREF="/cgi-bin/man/man2html?5+umount.nfs">umount.nfs</A></B>(5),
|
|
|
|
<B><A HREF="/cgi-bin/man/man2html?5+exports">exports</A></B>(5),
|
|
|
|
<B><A HREF="/cgi-bin/man/man2html?5+nfsmount.conf">nfsmount.conf</A></B>(5),
|
|
|
|
<B><A HREF="/cgi-bin/man/man2html?5+netconfig">netconfig</A></B>(5),
|
|
|
|
<B><A HREF="/cgi-bin/man/man2html?7+ipv6">ipv6</A></B>(7),
|
|
|
|
<B><A HREF="/cgi-bin/man/man2html?8+nfsd">nfsd</A></B>(8),
|
|
|
|
<B><A HREF="/cgi-bin/man/man2html?8+sm-notify">sm-notify</A></B>(8),
|
|
|
|
<B><A HREF="/cgi-bin/man/man2html?8+rpc.statd">rpc.statd</A></B>(8),
|
|
|
|
<B><A HREF="/cgi-bin/man/man2html?8+rpc.idmapd">rpc.idmapd</A></B>(8),
|
|
|
|
<B><A HREF="/cgi-bin/man/man2html?8+rpc.gssd">rpc.gssd</A></B>(8),
|
|
|
|
<B><A HREF="/cgi-bin/man/man2html?8+rpc.svcgssd">rpc.svcgssd</A></B>(8),
|
|
|
|
<B><A HREF="/cgi-bin/man/man2html?1+kerberos">kerberos</A></B>(1)
|
|
|
|
<P>
|
|
RFC 768 for the UDP specification.
|
|
<BR>
|
|
|
|
RFC 793 for the TCP specification.
|
|
<BR>
|
|
|
|
RFC 1094 for the NFS version 2 specification.
|
|
<BR>
|
|
|
|
RFC 1813 for the NFS version 3 specification.
|
|
<BR>
|
|
|
|
RFC 1832 for the XDR specification.
|
|
<BR>
|
|
|
|
RFC 1833 for the RPC bind specification.
|
|
<BR>
|
|
|
|
RFC 2203 for the RPCSEC GSS API protocol specification.
|
|
<BR>
|
|
|
|
RFC 3530 for the NFS version 4 specification.
|
|
<P>
|
|
|
|
<HR>
<A NAME="index"> </A><H2>Index</H2>
<DL>
<DT id="109"><A HREF="#lbAB">NAME</A><DD>
<DT id="110"><A HREF="#lbAC">SYNOPSIS</A><DD>
<DT id="111"><A HREF="#lbAD">DESCRIPTION</A><DD>
<DT id="112"><A HREF="#lbAE">MOUNT OPTIONS</A><DD>
<DL>
<DT id="113"><A HREF="#lbAF">Options supported by all versions</A><DD>
<DT id="114"><A HREF="#lbAG">Options for NFS versions 2 and 3 only</A><DD>
<DT id="115"><A HREF="#lbAH">Options for NFS version 4 only</A><DD>
</DL>
<DT id="116"><A HREF="#lbAI">nfs4 FILE SYSTEM TYPE</A><DD>
<DT id="117"><A HREF="#lbAJ">MOUNT CONFIGURATION FILE</A><DD>
<DT id="118"><A HREF="#lbAK">EXAMPLES</A><DD>
<DT id="119"><A HREF="#lbAL">TRANSPORT METHODS</A><DD>
<DL>
<DT id="120"><A HREF="#lbAM">Using the mountproto mount option</A><DD>
<DT id="121"><A HREF="#lbAN">Using NFS over UDP on high-speed links</A><DD>
</DL>
<DT id="122"><A HREF="#lbAO">DATA AND METADATA COHERENCE</A><DD>
<DL>
<DT id="123"><A HREF="#lbAP">Close-to-open cache consistency</A><DD>
<DT id="124"><A HREF="#lbAQ">Weak cache consistency</A><DD>
<DT id="125"><A HREF="#lbAR">Attribute caching</A><DD>
<DT id="126"><A HREF="#lbAS">File timestamp maintainence</A><DD>
|
|
<DT id="127"><A HREF="#lbAT">Directory entry caching</A><DD>
|
|
<DT id="128"><A HREF="#lbAU">The sync mount option</A><DD>
|
|
<DT id="129"><A HREF="#lbAV">Using file locks with NFS</A><DD>
|
|
<DT id="130"><A HREF="#lbAW">NFS version 4 caching features</A><DD>
|
|
</DL>
|
|
<DT id="131"><A HREF="#lbAX">SECURITY CONSIDERATIONS</A><DD>
|
|
<DL>
|
|
<DT id="132"><A HREF="#lbAY">NFS version 4 filesystem crossing</A><DD>
|
|
<DT id="133"><A HREF="#lbAZ">NFS version 4 Leases</A><DD>
|
|
<DT id="134"><A HREF="#lbBA">Using non-privileged source ports</A><DD>
|
|
<DT id="135"><A HREF="#lbBB">Mounting through a firewall</A><DD>
|
|
<DT id="136"><A HREF="#lbBC">NFS Access Control Lists</A><DD>
|
|
</DL>
|
|
<DT id="137"><A HREF="#lbBD">THE REMOUNT OPTION</A><DD>
|
|
<DL>
|
|
<DT id="138"><A HREF="#lbBE">Unmounting after a remount</A><DD>
|
|
</DL>
|
|
<DT id="139"><A HREF="#lbBF">FILES</A><DD>
|
|
<DT id="140"><A HREF="#lbBG">BUGS</A><DD>
|
|
<DT id="141"><A HREF="#lbBH">SEE ALSO</A><DD>
|
|
</DL>
|
|
<HR>
|
|
This document was created by
|
|
<A HREF="/cgi-bin/man/man2html">man2html</A>,
|
|
using the manual pages.<BR>
|
|
Time: 00:06:04 GMT, March 31, 2021
|
|
</BODY>
|
|
</HTML>
|