vxtunefs(1M)


vxtunefs - tunes a VERITAS File System

Synopsis

vxtunefs [options] [{mount_point | block_special}] ...

Description

vxtunefs is used to set or print tunable I/O parameters of mounted file systems. It can be used to set parameters describing the desired I/O properties of the underlying device, parameters to indicate when an I/O should be treated as direct I/O, or parameters to control the extent allocation policy for the file system.

With no options, vxtunefs prints the existing VxFS parameters for the specified file systems.

vxtunefs can work on a list of mount points specified on the command line or it can be used to process all the mounted file systems listed in the tunefstab file. The default tunefstab file is /etc/vx/tunefstab; this can be changed by using the VXTUNEFSTAB environment variable.

The vxtunefs command can be run at any time on a mounted file system and all parameter changes will take immediate effect. Parameters specified on the command line override any parameters listed in the tunefstab file.

If the file /etc/vx/tunefstab exists, then the VxFS-specific mount command will invoke vxtunefs to set any parameters for the device from /etc/vx/tunefstab. If the file system is built on top of the VERITAS Volume Manager, then the VxFS-specific mount command will interface with the Volume Manager to get default values for the tunables, so it is only necessary to specify tunables for Volume Manager devices if the defaults are not acceptable.
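
For example, a tunefstab entry for a hypothetical Volume Manager device might look like the following; the device name and values are illustrative only, and tunefstab(4) describes the exact file format:

/dev/vx/dsk/testdg/testvol  read_pref_io=128k,read_nstream=4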

Options

-f tunefstab
Use tunefstab instead of /etc/vx/tunefstab as the file containing tuning parameters.

-p
Print the tuning parameters used by VxFS for all the file systems specified either on the command line or in tunefstab.

-s
Set the new tuning parameters for the VxFS file systems specified on the command line or in tunefstab.

-o parameter=value
Specify parameters to be set on the file systems listed on the command line.
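
For example, the following commands print the current parameters for a hypothetical mount point /mnt1 and then set one of them; the mount point and value are illustrative only:

vxtunefs -p /mnt1
vxtunefs -s -o read_pref_io=128k /mnt1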

VxFS tuning parameters and guidelines

read_pref_io
This is the preferred read request size. The file system uses this in conjunction with the read_nstream value to determine how much data to read ahead. The default value is 64K and read_pref_io can't be smaller than read_unit_io.

read_nstream
This is the desired number of parallel read requests of size read_pref_io to have outstanding at one time. The file system uses the product of read_nstream and read_pref_io to determine its read-ahead size. The default value for read_nstream is 1.
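
For example, with read_pref_io at its 64K default and read_nstream set to 4, the read-ahead size is 4 x 64K = 256K.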

read_unit_io
This is a less preferred request size. The file system doesn't use this tunable.

write_pref_io
This is the preferred write request size. The file system uses this in conjunction with the write_nstream value to determine how to do flush behind on writes. The default value is 64K and write_pref_io can't be smaller than write_unit_io.

write_nstream
This is the desired number of parallel write requests of size write_pref_io to have outstanding at one time. The file system uses the product of write_nstream and write_pref_io to determine when to do flush behind on writes. The default value for write_nstream is 1.

write_unit_io
This is a less preferred request size. The file system doesn't use this tunable.

pref_strength
This parameter indicates to the file system how large a performance gain might be made by adhering to the preferred I/O sizes. The file system doesn't use this tunable.

buf_breakup_size
This parameter tells the file system how large an I/O it can issue without a driver breaking up the request. The file system doesn't use this tunable.

max_direct_iosz
This is the maximum size of a direct I/O request that will be issued by the file system. If a larger I/O request comes in, then it is broken up into max_direct_iosz chunks. This parameter defines how much memory an I/O request can lock down at once, so it shouldn't be set to more than 20% of memory.

discovered_direct_iosz
Any file I/O requests larger than the discovered_direct_iosz are handled as discovered direct I/O. A discovered direct I/O is unbuffered like direct I/O, but it doesn't require a synchronous commit of the inode when the file is extended or blocks are allocated. For larger I/O requests, the CPU time for copying the data into the page cache and the cost of using memory to buffer the I/O become more expensive than the cost of doing the disk I/O. For these I/O requests, using discovered direct I/O is more efficient than regular I/O. The default value of this parameter is 256K.

default_indir_size
On VxFS, files can have up to 10 variable-size extents stored in the inode. Once these extents are used up, the file must use indirect extents, which are a fixed size that is set when the file first uses indirect extents. These indirect extents are 8K by default. The file system doesn't use larger indirect extents because it must fail a write and return ENOSPC if there aren't any extents available that are the indirect extent size. For file systems with a lot of large files, the 8K indirect extent size is too small. The files that get into indirect extents use many smaller extents instead of a few larger ones. By using this parameter, the default indirect extent size can be increased so that large files in indirects use fewer, larger extents.

Note that this tunable should be used carefully. If it is set too large, then writes will fail when they are unable to allocate extents of the indirect extent size to a file. In general, the fewer and the larger the files on a file system, the larger the default_indir_size can be set. This parameter should generally be set to some multiple of the read_pref_io parameter.

This tunable does not apply to version 3 or version 4 disk layouts.
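
On disk layouts where this tunable applies, a hypothetical tunefstab entry for a file system holding mostly very large files might raise the indirect extent size to a multiple of the 64K default read_pref_io; the device name and value are illustrative only:

/dev/vx/dsk/testdg/bigvol  default_indir_size=512k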

max_diskq
This tunable limits the maximum disk queue generated by a single file. When the file system is flushing data for a file and the number of pages being flushed exceeds max_diskq, processes will block until the amount of data being flushed decreases. Although this doesn't limit the actual disk queue, it prevents flushing processes from making the system unresponsive. The default value is 1 MB.

qio_cache
This tunable enables or disables caching on Quick I/O files. The default behavior is to disable caching. To enable caching, set qio_cache to 1.

On systems with large amounts of memory, the database cannot always use all of the memory as a cache. Enabling file system caching as a second-level cache may improve performance.

If the database performs sequential scans of tables, enabling file system caching may make the scans run faster because the file system will perform aggressive read-ahead on the files.
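
For example, to enable caching on a hypothetical mount point /db01:

vxtunefs -s -o qio_cache=1 /db01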

max_seqio_extent_size
This parameter increases or decreases the maximum size of an extent allocated under the sequential-write allocation policy described below.

When the file system is following its default allocation policy for sequential writes to a file, it allocates an initial extent which is large enough for the first write to the file. When additional extents are allocated, they are progressively larger (the algorithm tries to double the size of the file with each new extent) so each extent can hold several writes' worth of data. This is done to reduce the total number of extents in anticipation of continued sequential writes. When the file stops being written, any unused space is freed for other files to use.

Normally this allocation stops increasing the size of extents at 2048 blocks, which prevents one file from holding too much unused space.

max_seqio_extent_size is measured in file system blocks.

def_init_extent
This parameter can be used to change the default size of the initial extent.

VxFS determines, based on the first write to a new file, the size of the first extent to be allocated to the file. Normally the first extent is the smallest power of 2 that is larger than the size of the first write. If that power of 2 is less than 8KB, the first extent allocated is 8KB. After the initial extent, the file system increases the size of subsequent extents (see max_seqio_extent_size) with each allocation.

Since most applications write to files using a buffer size of 8KB or less, the increasing extents start doubling from a small initial extent. def_init_extent is used to change the default initial extent size to be larger, so the doubling policy will start from a much larger initial size and the file system won't allocate a set of small extents at the start of the file.

This parameter should only be used on file systems that will have a very large average file size. On these file systems it will result in fewer extents per file and less fragmentation.

def_init_extent is measured in file system blocks.
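
For example, on a hypothetical file system with a 1K block size, setting def_init_extent to 8192 sets the default initial extent to 8192 blocks (8 MB), so the doubling policy starts from 8 MB instead of 8K.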

The values for all the above parameters except read_nstream, write_nstream, qio_cache, and pref_strength can be specified in bytes, kilobytes, megabytes, or sectors (512 bytes) by appending k, K, m, M, s, or S. No suffix is needed when the value is specified in bytes.

If the file system is being used with the VERITAS Volume Manager, then it is advisable to let the parameters get set to default values based on the volume geometry.

If the file system is being used with a hardware disk array or another volume manager, then try to align the parameters to match the geometry of the logical disk. For disk striping and RAID-5 configurations, set read_pref_io to the stripe unit size or interleave factor and set read_nstream to be the number of columns. For disk striping configurations, set write_pref_io and write_nstream to the same values as read_pref_io and read_nstream, but for RAID-5 configurations set write_pref_io to the full stripe size and set write_nstream to 1.
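
For example, assuming a hypothetical striped volume with a 64K stripe unit size and 8 columns, a tunefstab entry following these guidelines might be (device names and geometry are illustrative only):

/dev/vx/dsk/testdg/stripevol  read_pref_io=64k,read_nstream=8,write_pref_io=64k,write_nstream=8

For a RAID-5 volume with the same stripe unit size and a full stripe size of 512K, the corresponding entry might be:

/dev/vx/dsk/testdg/raidvol  read_pref_io=64k,read_nstream=8,write_pref_io=512k,write_nstream=1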

An application that wishes to do efficient direct I/O or discovered direct I/O should issue read requests that are equal to the product of read_nstream and read_pref_io. Generally, any multiple or factor of read_nstream multiplied by read_pref_io should be a good size for performance. For writing, the same rule of thumb applies to the write_pref_io and write_nstream parameters. When tuning a file system, the best thing to do is try out the tuning parameters under a real workload.

If an application is doing sequential I/O to large files, it should try to issue requests larger than the discovered_direct_iosz. This causes the I/O requests to be performed as discovered direct I/O requests, which are unbuffered like direct I/O but do not require synchronous inode updates when extending the file. If the file is larger than will fit in the cache, using unbuffered I/O avoids throwing a lot of useful data out of the cache and avoids a lot of CPU overhead.
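
For example, with the default discovered_direct_iosz of 256K, a sequential read issued in 1024K chunks (a multiple of read_nstream times read_pref_io; the file name is hypothetical) is handled as discovered direct I/O:

dd if=/mnt1/bigfile of=/dev/null bs=1024k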

VERITAS Volume Manager maximum I/O size

If the file system is being used in conjunction with the VERITAS Volume Manager, then the Volume Manager will (by default) break up I/O requests larger than 256K. If you are using striping, the file system achieves optimal performance by issuing I/O requests that are full stripes. If the full stripe size is larger than 256K, then those requests will be broken up. To avoid undesirable I/O breakup, increase the vol_maxio parameter. To increase the value of vol_maxio, add an entry to /etc/system and reboot for the change to take effect. As an example, adding this line sets the maximum I/O size to 16M:

set vxio:vol_maxio=32768 

The value of vol_maxio is in sectors and it is stored as a 16-bit number, so it can't be set larger than 65535. This line must be added to /etc/system after the "forceload: drv/vxio" line.

The value of vol_maxio determines the largest amount of memory that an I/O request can lock down, so it should not be set to more than about 20% of memory. For more information on setting Volume Manager parameters, see the VERITAS Volume Manager System Administrator's Guide.
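
For example, if the full stripe size of a hypothetical striped volume is 512K, vol_maxio must be at least 1024 sectors (512K divided by 512 bytes per sector) for full-stripe I/O requests to pass through unbroken; the 32768-sector setting shown above is comfortably larger.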

Files

/etc/vx/tunefstab
VxFS file system tuning parameters table.

References

mkfs(1M), mount(1M), tunefstab(4)



© 1997 The Santa Cruz Operation, Inc. All rights reserved.