Tunable parameters

Filesystem parameters


NOTE: For some filesystems the system allocates inodes dynamically, based on system demand.

Generic filesystem parameters

Parameter           Default (64/256/1024MB)   Min    Max (64/256/1024MB)   Auto
DNLCSIZE            1400/3800/13400           200    40000/160000/640000   KVM
FDFLUSHR            1                         1      1200
FIFOBLKSIZE         16384                     5120   65536
FLCKREC             300                       100    2000
NAUTOUP             60                        0      1200
NC_HASH_SIZE        0                         0      8388608
ROOTFSTYPE          ""
RSTCHOWN            0                         0      1
SENDV_FORCE_RCOPY   1                         0      1


DNLCSIZE
Defines the size of the directory name lookup cache (DNLC). Generally, larger memory systems both support a heavier workload and have more memory to spare, so the DNLCSIZE tunable is automatically adjusted upward when more memory is present. You can override the autotuned value by using idtune(1M), as shown below.
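
For example, the following sketch sets an explicit value and rebuilds the kernel (it assumes the standard /etc/conf/bin locations for idtune(1M) and idbuild(1M); the value 6000 is purely illustrative):

   /etc/conf/bin/idtune DNLCSIZE 6000   # override the autotuned value (illustrative)
   /etc/conf/bin/idbuild -B             # rebuild the kernel
   shutdown -i6 -g0 -y                  # reboot to activate the new value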

FDFLUSHR
Specifies the time interval, in seconds, for checking the need to write the buffer caches and filesystem pages to disk.

FIFOBLKSIZE
Specifies the FIFO block size. The capacity of a pipe is 2*FIFOBLKSIZE, which allows a writer to send two messages before blocking on flow control. With the default of 16384 bytes, for example, a pipe holds up to 32768 bytes.

FLCKREC
This parameter controls the number of record locking structures used by the system.

NAUTOUP
Specifies the time, in seconds, for automatic filesystem updates (for both the buffer cache and filesystem pages). A system buffer is written to the hard disk when it has been memory-resident for the interval specified by the NAUTOUP parameter. Specifying a smaller limit increases system reliability (by writing the buffer caches and filesystem pages to disk more frequently) and decreases system performance. Specifying a larger limit increases system performance at the expense of reliability.

NC_HASH_SIZE
Specifies the number of hash buckets used for lookup in the directory name lookup cache. The default value of 0 sets the number of hash lists to one quarter of the number of DNLC entries; with the default DNLCSIZE of 1400 on a 64MB system, for example, 350 hash lists are used.

ROOTFSTYPE
Specifies the default root filesystem type.

RSTCHOWN
Specifies the restricted file ownership changes flag. Only ``0'' and ``1'' are valid values for RSTCHOWN. A value of ``0'' selects the Release 3 compatibility mode: the owner of a file can change the user ID and group ID of the file to any value, including nonexistent user IDs and group IDs. A value of ``1'' selects the FIPS/BSD compatibility mode, which restricts the ability to change ownership of a file: only a user with appropriate privilege or a root process (one whose UID is ``0'') can change the ownership of a file, and the owner of a file can change only its group ID, and only to a group in which the owner has membership. See getgroups(2). A user with appropriate privilege and root processes can change the group ID of any file to any value.

SENDV_FORCE_RCOPY
When set to ``1'', the default, SENDV_FORCE_RCOPY causes the sendv(2) system call to copy the data from the filesystem page into a STREAMS buffer. When set to ``0'', the filesystem page is locked down and used ``in place'' by the output stream; there is no copy. When the stream is done with the data, the page is unlocked. This mode of operation is an optimization over the forced-copy mode. However, if the stream exhibits a ``memory leak'' in the optimized mode, the file page can stay locked indefinitely, which can cause a process to hang when it accesses the affected file, or possibly even when it accesses the affected filesystem.

Buffer cache parameters

Parameter    Default (64/256/1024MB)   Min   Max (64/256/1024MB)   Auto
BUFHWM       4096/16384/65536          10    16384/65536/262144    KVM
NBUF         256                       20    3000
NHBUF        64                        32    1024
NPBUF        64                        20    100
NPGOUTBUF    16                        1     100


BUFHWM
Specifies the amount of memory, in kilobytes, that can be used for transferring filesystem structure data such as inodes, indirect blocks, and cylinder groups. It does not limit file data, which goes through virtual memory (VM) as pages.


NOTE: In some other UNIX systems such as UNIX System V Release 3.2 and earlier, the buffer cache was used for practically all disk I/O, including the contents of files (that is, reads and writes). The use of buffer cache changed with the introduction of virtual memory (VM) and, therefore, changed the appropriate tuning of BUFHWM.

Setting BUFHWM too low causes excess filesystem activity to flush the buffer before it can be reused. Setting BUFHWM too high reduces the page pool size and can increase paging. In general terms, BUFHWM should grow proportionally with memory size. The value of BUFHWM is autotuned and should not need to be retuned. However, you may want to retune BUFHWM for a specific condition; for example, reducing its value frees more memory for use by the X Window System. Check sar -b before and after the change to ensure that %rcache and %wcache have not changed dramatically.
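
A before-and-after check might follow this sketch (the BUFHWM value of 8192 is illustrative, and the standard /etc/conf/bin tool locations are assumed):

   sar -b 60 5                        # note %rcache and %wcache before the change
   /etc/conf/bin/idtune BUFHWM 8192   # lower the high-water mark (illustrative)
   /etc/conf/bin/idbuild -B           # rebuild the kernel
   shutdown -i6 -g0 -y                # reboot to activate
   sar -b 60 5                        # confirm %rcache and %wcache are still acceptable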


NBUF
Buffer cache I/O uses both buffers and buffer headers. (See BUFHWM.) Whenever a buffer header is needed, but none is available, the system dynamically allocates more buffer headers in chunks of NBUF headers at a time. There is no limit to the number of buffer headers in the system, but the tunable BUFHWM limits the number of kilobytes that can be used by buffers and, therefore, effectively limits the number of buffer headers that are allocated.

Once allocated, buffer header space cannot be freed for other uses. Therefore, take care when you raise the value of NBUF. A higher value of NBUF decreases the number of times the Kernel Memory Allocator must be called to allocate space for buffer headers, but this also can result in the allocation of headers that are not used.


NHBUF
Specifies the size of the hash table used to locate a buffer, given a device number and a block number. The value must be a power of ``2'' from ``32'' to ``1024'', inclusive, and should be about one quarter of the total buffers available; a value between 1/8 and 1/4 of BUFHWM is typically sufficient.
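
As a rough sketch of the sizing rule, the following computes the largest power of two in the valid range that does not exceed one quarter of a given buffer count (the count of 1000 is purely illustrative):

   buffers=1000   # illustrative buffer count
   awk -v b="$buffers" 'BEGIN {
       for (p = 1024; p >= 32; p /= 2)        # powers of 2 from 1024 down to 32
           if (p <= b / 4) { print p; exit }  # largest one <= buffers/4
       print 32                               # floor of the valid range
   }'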

NPBUF
Specifies the number of physical I/O buffers to allocate; one is needed for each active physical read or write. There is no rule for adjusting NPBUF. However, if you expect a lot of I/O and filesystem activity, improvement may be gained by raising NPBUF.

NPGOUTBUF
Specifies the number of buffer headers reserved for the pageout daemon to avoid deadlock.

CDFS filesystem parameters

Parameter    Default (64/256/1024MB)   Min   Max (64/256/1024MB)   Auto
CDFSNINODE   2048                      150   20000                 KVM


CDFSNINODE
Specifies the maximum number of inode entries in the CDFS inode table when a CDFS filesystem is configured in your system.

DOSFS filesystem parameters

Parameter     Default   Min   Max
DOSFSFLUSH    60        1     120
DOSFSNINODE   200       1     400


DOSFSFLUSH
Specifies how often to run the fsflush daemon to write data out to disk on DOSFS filesystems. A smaller value causes the daemon to run more often, making the filesystem ``harder,'' that is, less likely to lose data in a crash, at the cost of some CPU time.

DOSFSNINODE
Specifies the maximum number of inode entries in the DOSFS inode table when a DOSFS filesystem is configured in your system.

MEMFS filesystem parameters

Parameter       Default   Min     Max
MEMFS_MAXKMEM   524288    65536   4096000


MEMFS_MAXKMEM
Specifies the maximum amount of kernel memory, in bytes, that memfs filesystems can allocate.

NAMEFS filesystem parameters

Parameter     Default   Min   Max
NAMEFSFLUSH   60        1     120


NAMEFSFLUSH
Specifies the flush time interval for namefs.

NFS filesystem parameters

Parameter           Default   Min   Max
FIRSTRETRY          1         1     5
KEYNRETRY           6         1     10
KEYTIMEOUT          30        5     50
MAXDUPREQS          400       200   8000
NFS_ASYNC_MAX       8         0     40
NFS_ASYNC_TIMEOUT   4         0     10
NFS_MAXCLIENTS      6         0     20
NFS_MMAP_TIMEOUT    30        30    180
NFS_NRA             1         0     1
NFS_NUM_MOUNTS      256       256   1024
NFS_RETRIES         5         1     10
NFS_TIMEO           10        4     15
NFSD_MAX            8         4     128
NFSD_MIN            2         1     32
NFSD_TIMEOUT        8         3     10
NRNODE              300       100   1300
RECVRETRIES         2         1     10
RTIMETIMEOUT        5         2     15
WORKTIMEOUT         3         3     7


FIRSTRETRY
Specifies the number of times to retry contacting the local lock manager before failing.

KEYNRETRY
Specifies the number of times to retry contacting the keyserver before failing.

KEYTIMEOUT
Specifies the maximum time, in seconds, to wait for the keyserver to reply to a request.

MAXDUPREQS
Specifies the maximum number of cached items in the server side duplicate request cache. Adjust MAXDUPREQS to the service load so that a response entry is likely when the first retransmission is received.

NFS_ASYNC_MAX
Specifies the maximum number of lightweight processes (LWPs) that can be created for asynchronous I/O over NFS.

NFS_ASYNC_TIMEOUT
Specifies, in seconds, the time-to-live of the lightweight processes (LWPs) that do asynchronous I/O over NFS. If an asynchronous request is not generated in the system in NFS_ASYNC_TIMEOUT seconds, the LWPs exit.

NFS_MAXCLIENTS
Specifies the number of active RPC client handles that NFS can cache. Increase this number only when more memory is added to the system.

NFS_MMAP_TIMEOUT
Specifies, in seconds, the interval at which the NFS mmap lightweight process (LWP) wakes up and updates file attributes for all mmapped files. Given the large amount of CPU time the mmap LWP can consume when a significant number of files are mmapped over NFS, this value should not be made too small.

NFS_NRA
Specifies the number of pages to read ahead for each read operation, if possible. Read-ahead over NFS can be turned off by changing this value to ``0''.

NFS_NUM_MOUNTS
Specifies the maximum number of mounts allowed on the client system.

NFS_RETRIES
Specifies the maximum number of NFS retries before failing the NFS operation for a soft mount. Typically, the default value should not be changed.

NFS_TIMEO
Specifies, in seconds, the maximum initial time to wait for an NFS server to respond to a client request. On machines connected to networks where packets tend to get lost, this value should be changed. On some of these networks you will have greater success getting the packet if you retry after a minimum wait; on other networks, you will have greater success if you wait longer.

NFSD_MAX
Specifies the maximum number of lightweight processes (LWPs) that can be created for servicing NFS requests from clients. You may want to increase this value for dedicated NFS servers.

NFSD_MIN
Specifies the minimum number of lightweight processes (LWPs) that should always exist in the system for servicing NFS requests from NFS clients. You may want to increase this value for dedicated NFS servers.

NFSD_TIMEOUT
Specifies, in seconds, the time-to-live of the lightweight processes (LWPs) that service NFS requests from clients. If an LWP is idle for more than NFSD_TIMEOUT seconds and more than NFSD_MIN LWPs exist, the idle LWP exits.

NRNODE
Specifies the maximum number of rnode structures to be allocated for NFS. An rnode is a node specific to the NFS filesystem type.

RECVRETRIES
Specifies the number of times to retry an RPC call for the client side.

RTIMETIMEOUT
Specifies the maximum time, in seconds, to wait for a reply to a request for synchronization with the server.

WORKTIMEOUT
Specifies the maximum time, in seconds, to wait for the local lock manager to reply to a request.

SFS filesystem parameters

Parameter     Default (64/256/1024MB)   Min   Max (64/256/1024MB)   Auto
SFSFLUSH      60                        1     120
SFSINODELWM   50                        1     1000
SFSNINODE     8000/32000/128000         150   40000/160000/640000   KVM
SFSTIMELAG    10000                     0     1000000


SFSFLUSH
Specifies how often the fsflush daemon writes sfs inode data out to disk on sfs filesystems. A smaller value causes the daemon to write more often, making the filesystem ``harder,'' that is, less likely to lose data in a crash, at the cost of some CPU time.

SFSINODELWM
Specifies the minimum number of inodes to keep in the dynamically allocated inode pool. This keeps the system from returning space to the KMA (Kernel Memory Allocator) when it is likely to need it back in the inode pool relatively quickly.

SFSNINODE
Specifies the maximum number of inode entries in the sfs inode table. This parameter is important when an sfs filesystem is configured in your system. An sfs inode is a data structure that typically describes a file, directory, link, or named pipe in an sfs filesystem. Configuring too few in-core inodes degrades filesystem performance.

This tunable is automatically adjusted based on memory size, but you can override the autotuned value by using idtune(1M).


SFSTIMELAG
Specifies a heuristic: where possible, the system does not reuse an inode sooner than the specified number of clock ticks.

S5 filesystem parameters

Parameter    Default (64/256/1024MB)   Min   Max (64/256/1024MB)   Auto
S5FSFLUSH    60                        1     120
S5INODELWM   50                        1     1000
S5NINODE     4000                      150   20000                 KVM


S5FSFLUSH
Specifies how often the fsflush daemon writes s5 inode data out to disk on s5 filesystems. A smaller value causes the daemon to write more often, making the filesystem ``harder,'' that is, less likely to lose data in a crash, at the cost of some CPU time.

S5INODELWM
Specifies the minimum number of inodes to keep in the dynamically allocated inode pool, also known as the low water mark (LWM). This keeps the system from returning space to the KMA (Kernel Memory Allocator) when it is likely to need it back in the inode pool relatively quickly.

S5NINODE
Specifies the maximum number of inode entries in the memory-resident s5 inode table. This parameter is important when an s5 filesystem type is configured into your system. An s5 inode is a data structure that typically describes a file, directory, link, or named pipe in an s5 filesystem. Configuring too few in-core inodes degrades filesystem performance.

This tunable is automatically adjusted based on memory size, but you can override the autotuned value by using idtune.

UFS filesystem parameters

Parameter   Default   Min   Max
NDQUOT      200       100   400


NDQUOT
Specifies the size of the kernel quota table for the ufs filesystem. There is one entry for each user; therefore, NDQUOT should exceed the maximum number of users that can be logged onto the system when a ufs or sfs filesystem type is in use. If quotas are in effect, the table entries limit the amount of disk space a user can use. If there are no available entries, the message
   dquot table full
is displayed on the console. If this occurs, increase the value of NDQUOT.
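
For example, a sketch of the fix (assuming the standard /etc/conf/bin tool locations; 300 is an illustrative value within the allowed 100 to 400 range):

   /etc/conf/bin/idtune NDQUOT 300   # raise the quota table size (illustrative)
   /etc/conf/bin/idbuild -B          # rebuild the kernel
   shutdown -i6 -g0 -y               # reboot to activate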

VXFS filesystem parameters

Parameter    Default (64/256/1024MB)   Min   Max (64/256/1024MB)   Auto
VXFSNINODE   7000/16000/64000          150   28000/48000/128000    KVM


VXFSNINODE
Specifies the maximum number of inode entries in the vxfs inode table. This parameter is important when a vxfs filesystem is configured in your system. A vxfs inode is a data structure that typically describes a file, directory, link, or named pipe in a vxfs filesystem. Configuring too few in-core inodes degrades filesystem performance.

This tunable is automatically adjusted based on memory size, but you can override the autotuned value by using idtune(1M).


NOTE: VXFSNINODE is the only vxfs parameter that can be tuned using idtune; a number of other vxfs I/O parameters can be tuned using the vxtunefs command. See vxtunefs(1M).
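
For example (a sketch; /home stands in for any mounted vxfs filesystem, and read_pref_io is one of the vxtunefs I/O parameters):

   vxtunefs -p /home                      # display the current vxfs I/O tunables
   vxtunefs -o read_pref_io=65536 /home   # set one tunable on the mounted filesystem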


