Managing NFS and KRPC Kernel Configurations in HP-UX 11i v3 HP Part Number: 762807-001 Published: March 2014 Edition: 1
© Copyright 2009, 2014 Hewlett-Packard Development Company, L.P. Legal Notices Confidential computer software. Valid license required from Hewlett-Packard for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor’s standard commercial license. The information contained herein is subject to change without notice.
Contents

HP secure development lifecycle
1 Introduction
HP secure development lifecycle

Starting with the HP-UX 11i v3 March 2013 update release, the HP secure development lifecycle provides the ability to authenticate HP-UX software. Software delivered through this release has been digitally signed using HP's private key. You can now verify the authenticity of the software before installing the products delivered through this release. To verify the software signatures in a signed depot, the following products must be installed on your system: • • B.11.31.
1 Introduction

NFS is a network-based application that offers transparent file access across a network. The behavior and performance of NFS depend on numerous kernel tunables. Tunables are variables that control the behavior of the HP-UX kernel. To achieve optimal performance, the system administrator can modify the values of the tunables.
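As a sketch of the workflow this guide assumes, tunables are inspected and changed with the kctune(1M) command. The tunable shown is one from Table 2.1-1; the value is illustrative only:

```shell
# Display the current value of an NFS client tunable
kctune nfs3_bsize

# Change the value; nfs3_bsize is dynamic, so no reboot is needed
# (static tunables such as nfs_nacache take effect only after reboot)
kctune nfs3_bsize=32768
```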
2 Managing Kernel Tunables using Kctune

2.1 NFS Client Tunables

Table 2.1-1 lists the NFS client tunables. The last column specifies which ONCplus version first introduced the tunable.

Table 2.1-1 NFS Client Tunables

Kctune Tunable Name      Range        Default Value  Units         ONCplus Version
nfs_async_timeout        0 to MAXINT  6000           Milliseconds  B.11.31_LR
nfs_disable_rddir_cache  0 or 1       0              Boolean       B.11.31_LR
nfs_enable_write_behind  0 or 1       0              Boolean       B.11.31_LR
nfs_nacache              0 to MAXINT  0              Hash Queues   B.11.
Kctune Tunable Name          Range           Default Value  Units               ONCplus Version
nfs3_nra                     0 to MAXINT     4              Requests            B.11.31_LR
nfs3_pathconf_disable_cache  0 or 1          0              Boolean             B.11.31_LR
nfs4_async_clusters          0 to MAXINT     1              Requests            B.11.31_LR
nfs4_bsize                   4096 to MAXINT  32768          Bytes               B.11.31_LR
                             (The value must be a power of 2.)
nfs4_cots_timeo              10 to MAXINT    600            Tenths of a second  B.11.31_LR
nfs4_do_symlink_cache        0 or 1          1              Boolean             B.11.31_LR
nfs4_lookup_neg_cache        0 or 1          1              Boolean             B.11.
The nfs_async_timeout tunable is dynamic. System reboot is not required to activate changes made to this tunable. Changes made to the nfs_async_timeout tunable are applicable to all NFS mounted filesystems. Modifying the Value Modify this tunable only if you can accurately predict the rate of asynchronous I/O. To avoid the overhead of creating and deleting threads, increase the value of this tunable. To free up resources for other subsystems, decrease the value of this tunable.
2.1.3 nfs_enable_write_behind

Description

The nfs_enable_write_behind tunable controls the write behind feature when writing to files over NFS. When the write behind feature is enabled, over-the-wire NFS writes are scheduled by the writer/application thread. While this can result in NFS write data being sent to the server more frequently, the server is not affected by the frequent arrival of writes.
The nfs_nacache tunable is static. System reboot is required to activate changes made to this tunable. Modifying the Value Increase the value of this tunable only in extreme cases where a large number of users are accessing the same NFS file or directory simultaneously. Decreasing the value of this tunable to a value less than nfs_nrnode can result in long hash queues and slower performance. HP does not recommend decreasing the value of this tunable below the value of nfs_nrnode or ncsize. 2.1.
Tested Values

Default: 5 seconds
Min: 0
Max: 360000 seconds (100 hours)

Note: If the tunable is set to a value greater than 360000 seconds, an informational warning is issued. Any value greater than 360000 seconds is outside the tested limit.

Restrictions on Changing

The nfs_write_error_interval tunable is dynamic. System reboot is not required to activate changes made to this tunable.
The client attempts to service these different requests without favoring one type of operation over another. However, some NFSv2 servers can take advantage of clustered requests from NFSv2 clients. For instance, write gathering is a server function that depends on the NFSv2 client sending out multiple WRITE requests in a short time span. If requests are taken out of the queue individually, the client defeats this server functionality designed to enhance performance.
The nfs3_bsize tunable controls the logical block size used by NFSv3 clients. The nfs4_bsize tunable controls the logical block size used by NFSv4 clients. For more information on these tunables, see:
• nfs3_bsize
• nfs4_bsize

Tested Values

Default: 8192
Min: 8192
Max: 65536

Note: If the tunable is set to a value greater than 65536 bytes, an informational warning is issued at runtime. Any value greater than 65536 is outside the tested limits. The value of the tunable must be a power of 2.
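The power-of-2 constraint can be checked before setting a block-size tunable. The following is a POSIX shell sketch; the helper name is ours, not part of HP-UX:

```shell
# A positive value is a power of two when exactly one bit is set,
# i.e. v & (v - 1) == 0.
is_pow2() {
  v=$1
  [ "$v" -gt 0 ] && [ $(( v & (v - 1) )) -eq 0 ]
}

if is_pow2 32768; then
  echo "32768 is a valid block size"
fi
if ! is_pow2 40000; then
  echo "40000 is not a power of 2"
fi
```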
Note: If the tunable is set to a value less than 10 tenths of a second or greater than 36000 tenths of a second, an informational warning is issued at runtime. These values are outside the tested limits. Restrictions on Changing The nfs2_cots_timeo tunable is dynamic. System reboot is not required to activate changes made to this tunable. Any change made to the value of the tunable is effective immediately. However, the timeout duration is set per filesystem at mount time.
2.1.12 nfs2_dynamic Description The nfs2_dynamic tunable controls the dynamic retransmission feature for NFSv2 mounted filesystems. The dynamic retransmission feature is designed to reduce NFS retransmissions by monitoring server response time and adjusting read and write transfer sizes on NFSv2 mounted filesystems using connectionless transports such as UDP. The nfs3_dynamic tunable controls the dynamic retransmission feature for NFSv3 mounted filesystems. For more information, see nfs3_dynamic.
Modifying the Value If filesystems are mounted read-only on the client, and applications running on the client need to immediately see any filesystem changes made on the server, disable this tunable. If you disable this tunable, also consider disabling the nfs_disable_rddir_cache tunable. For more information, see nfs_disable_rddir_cache. 2.1.14 nfs2_max_threads Description The nfs2_max_threads tunable controls the number of kernel threads that perform asynchronous I/O for NFSv2 filesystems.
sequential access to a file is discovered. Read-ahead operations increase concurrency and read throughput. The nfs3_nra tunable controls the number of read-ahead operations queued by NFSv3 clients. The nfs4_nra tunable controls the number of read-ahead operations queued by NFSv4 clients. For more information on these tunables, see:
• nfs3_nra
• nfs4_nra

Tested Values

Default: 4
Min: 0
Max: 16

Note: If the tunable is set to a value greater than 16, an informational warning is issued at runtime.
Enable this tunable to ensure the client does not generate a READDIR request for more than 1024 bytes of directory information. Disable the tunable to allow the client to issue READDIR requests containing up to 8192 bytes of data.

2.1.17 nfs3_async_clusters

Description

The nfs3_async_clusters tunable controls the mix of asynchronous requests that are generated by the NFSv3 client. There are four types of asynchronous requests:
• read-ahead
• putpage
• pageio
• readdir-ahead
Note: If the tunable is set to a value greater than 10 asynchronous requests, an informational warning is issued at runtime. Any value greater than 10 is outside the tested limits. Restrictions on Changing The nfs3_async_clusters tunable is dynamic. System reboot is not required to activate changes made to this tunable. Any change made to the value of the tunable is effective immediately. However, the cluster setting is per filesystem at mount time.
Note: If the tunable is set to a value greater than 1048576 bytes, an informational warning is issued at runtime. Any value greater than 1048576 is outside the tested limits. The value of the tunable must be a power of 2. Restrictions on Changing The nfs3_bsize tunable is dynamic. System reboot is not required to activate changes made to this tunable. Any change made to the value of the tunable is effective immediately. However, the logical block size is set per filesystem at mount time.
Note: The nfs3_bsize tunable affects every NFSv3 filesystem. To control the transfer sizes of specific NFS filesystems, use the rsize and wsize mount options. Refer to the mount_nfs(1M) man page for more information. 2.1.19 nfs3_cots_timeo Description The nfs3_cots_timeo tunable controls the default RPC timeout for NFSv3 mounted filesystems using a connection-oriented transport such as TCP. The nfs2_cots_timeo tunable controls the default RPC timeout for NFSv2 mounted filesystems.
The nfs2_do_symlink_cache tunable caches the contents of symbolic links in NFSv2 mounted filesystems. The nfs4_do_symlink_cache tunable caches the contents of symbolic links in NFSv4 mounted filesystems. For more information on these tunables, see:
• nfs2_do_symlink_cache
• nfs4_do_symlink_cache

Tested Values

Default: 1 (Symbolic link cache is enabled)
Min: 0 (Symbolic link cache is disabled)
Max: 1

Restrictions on Changing

The nfs3_do_symlink_cache tunable is dynamic.
results in increased throughput. However, if the server response is delayed or the network is overloaded, the number of timeouts can increase. HP recommends leaving this tunable enabled because it helps the system minimize NFS packet loss on congested networks. 2.1.
on HP-UX 11i v3). When an NFS client mounts a filesystem with the forcedirectio option, data is transferred directly between the client and server without buffering on the client. By default, direct I/O data transfers are synchronous: the client sends a single request to the server and waits for the server's response before initiating a new request.
Restrictions on Changing The nfs3_jukebox_delay tunable is dynamic. System reboot is not required to activate changes made to this tunable. Only NFSv3 mount points are affected when you change the value of this tunable. Modifying the Value If it takes a considerable amount of time for files to migrate from your HSM storage devices, increase the value of this tunable. However, if you increase the value of the tunable, it can prevent the file from becoming immediately visible when it becomes available.
Restrictions on Changing • The nfs3_max_async_directio_requests tunable is dynamic. System reboot is not required to activate changes made to this tunable. • Only NFSv3 TCP mount points that are mounted with forcedirectio options are affected by changing the value of this tunable.
Note: If the tunable is set to a value greater than 256 threads, an informational warning is issued at runtime. Any value greater than 256 is outside the tested limits. Restrictions on Changing The nfs3_max_threads tunable is dynamic. System reboot is not required to activate changes made to this tunable. However, the number of threads is set per filesystem at mount time. The system administrator must unmount and re-mount each filesystem after changing this tunable.
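Because the thread count is fixed at mount time, changing the tunable is only half the job. A sketch of the full cycle follows; the server and mount-point names are placeholders:

```shell
# Raise the per-filesystem async thread limit (value is illustrative)
kctune nfs3_max_threads=16

# The running mount still uses the old value;
# cycle the mount to pick up the new one
umount /nfs/data
mount -F nfs server1:/export/data /nfs/data
```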
Note: If the tunable is set to a value greater than 1048576, an informational warning is issued at runtime. Any value greater than 1048576 is outside the tested limits. The value of the tunable must be a power of 2. Restrictions on Changing The nfs3_max_transfer_size tunable is dynamic. System reboot is not required to activate changes made to this tunable. However, the transfer size for a filesystem is set when the filesystem is mounted.
Note: The nfs3_max_transfer_size tunable affects every NFSv3 filesystem. To control the transfer sizes of specific NFS filesystems, use the rsize and wsize mount options. Refer to the mount_nfs(1M) man page for more information. 2.1.29 nfs3_max_transfer_size_clts Description The nfs3_max_transfer_size_clts tunable specifies the maximum size of the data portion of NFSv3 READ, WRITE, READDIR, and READDIRPLUS requests.
To decrease the size of NFSv3 UDP requests, decrease the value of the nfs3_max_transfer_size_clts tunable. For example, to decrease the size of I/O requests on all NFSv3 UDP filesystems to 8 KB, set the value of nfs3_max_transfer_size_clts to 8192. Caution: HP strongly discourages increasing nfs3_max_transfer_size_clts above the default value of 32768 as this can cause NFS/UDP requests to fail.
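Rather than lowering the limit for every UDP mount, a single filesystem can be capped with the rsize and wsize options of mount_nfs(1M). The server and mount-point names below are placeholders:

```shell
# Limit read and write requests to 8 KB on this one mount only,
# leaving nfs3_max_transfer_size_clts untouched for other filesystems
mount -F nfs -o rsize=8192,wsize=8192 server1:/export/data /nfs/data
```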
Note: If the tunable is set to a value greater than 1048576, an informational warning is issued at runtime. Any value greater than 1048576 is outside the tested limits. The value of the tunable must be a power of 2. Restrictions on Changing The nfs3_max_transfer_size_cots tunable is dynamic. System reboot is not required to activate changes made to this tunable. However, the transfer size for a filesystem is set when the filesystem is mounted.
2.1.31 nfs3_nra Description The nfs3_nra tunable controls the number of read-ahead operations queued by NFSv3 clients when sequential access to a file is discovered. Read-ahead operations increase concurrency and read throughput. The nfs2_nra tunable controls the number of read-ahead operations queued by NFSv2 clients. The nfs4_nra tunable controls the number of read-ahead operations queued by NFSv4 clients.
The nfs3_pathconf_disable_cache tunable is dynamic. System reboot is not required to activate changes made to this tunable. Only NFSv3 mount points are affected by changing the value of the nfs3_pathconf_disable_cache tunable. Modifying the Value If you have an application that is making pathconf calls and needs real-time information from the backend filesystem, turn off pathconf caching by setting the value of the tunable to 1. 2.1.
Modifying the Value

If server functionality depends upon clusters of operations coming from the client, increase the value of this tunable. However, this increase impacts the operations in other queues if they have to wait until the current queue is empty or the cluster limit is reached.

Note: Setting the value of nfs4_async_clusters to 0 causes all of the queued requests of a particular type to be processed before moving to the next type.

2.1.
Restrictions on Changing

The nfs4_bsize tunable is dynamic. System reboot is not required to activate changes made to this tunable. Any change made to the value of the tunable is effective immediately. However, the logical block size is set per filesystem at mount time. The system administrator must unmount and re-mount the filesystem after changing this tunable. Only NFSv4 mount points are affected by changing the value of the nfs4_bsize tunable.
Tested Values

Default: 600 tenths of a second (1 minute)
Min: 10 tenths of a second (1 second)
Max: 36000 tenths of a second (1 hour)

Note: If the tunable is set to a value less than 10 tenths of a second or greater than 36000 tenths of a second, an informational warning is issued at runtime. These values are outside the tested limits.

Restrictions on Changing

The nfs4_cots_timeo tunable is dynamic. System reboot is not required to activate changes made to this tunable.
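Because the unit is tenths of a second, it is easy to mis-set this tunable by a factor of ten. A small shell sketch of the conversion:

```shell
# nfs4_cots_timeo is expressed in tenths of a second:
# the default of 600 tenths is 60 seconds (1 minute)
tenths=600
seconds=$(( tenths / 10 ))
echo "timeout: $seconds seconds"

# To request a 30-second RPC timeout, the value to set is 300 tenths
want_seconds=30
value=$(( want_seconds * 10 ))
echo "kctune nfs4_cots_timeo=$value"
```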
Modifying the Value

Enable this tunable to cache the contents of symbolic links. Because the client uses the cached version, changes made to the contents of the symbolic link file are not immediately visible to applications running on the client. To make the changes made to the symbolic link file immediately visible to applications on the client, disable this tunable.
Default: 8
Min: 0
Max: 256

Note: If the tunable is set to a value greater than 256 threads, an informational warning is issued at runtime. Any value greater than 256 is outside the tested limits.

Restrictions on Changing

The nfs4_max_threads tunable is dynamic. System reboot is not required to activate changes made to this tunable. However, the number of threads is set per filesystem at mount time. The system administrator must unmount and re-mount the filesystem after changing this tunable.
Tested Values Note: If the tunable is set to a value greater than 1048576, an informational warning is issued at runtime. Any value greater than 1048576 is outside the tested limits. Restrictions on Changing The nfs4_max_transfer_size tunable is dynamic. System reboot is not required to activate changes made to this tunable. However, the transfer size for a filesystem is set when the filesystem is mounted.
server returns as well as the maximum size of the request the client generates over a connection-oriented transport, such as TCP. The nfs4_max_transfer_size_cots tunable works in conjunction with the nfs4_bsize and nfs4_max_transfer_size tunables when determining the maximum size of these I/O requests. For NFSv4 TCP traffic, the transfer size corresponds to the smallest value of nfs4_bsize, nfs4_max_transfer_size, and nfs4_max_transfer_size_cots.
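The "smallest of the three" rule can be sketched in shell. The values below are examples only, not necessarily your system's settings:

```shell
# Effective NFSv4 TCP transfer size is
# min(nfs4_bsize, nfs4_max_transfer_size, nfs4_max_transfer_size_cots)
nfs4_bsize=32768                   # example value
nfs4_max_transfer_size=1048576     # example value
nfs4_max_transfer_size_cots=65536  # example value

xfer=$nfs4_bsize
if [ "$nfs4_max_transfer_size" -lt "$xfer" ]; then
  xfer=$nfs4_max_transfer_size
fi
if [ "$nfs4_max_transfer_size_cots" -lt "$xfer" ]; then
  xfer=$nfs4_max_transfer_size_cots
fi
echo "effective transfer size: $xfer"
```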
administrator must unmount and re-mount the NFS filesystem to use the new value. Note: The nfs4_max_transfer_size_cots tunable affects every NFSv4 filesystem. To control the transfer sizes of specific NFS filesystems, use the rsize and wsize mount options. Refer to the mount_nfs(1M) man page for more information. 2.1.41 nfs4_nra Description The nfs4_nra tunable controls the number of read-ahead operations queued by NFSv4 clients when sequential access to a file is discovered.
2.1.42 nfs4_pathconf_disable_cache Description The nfs4_pathconf_disable_cache tunable controls the caching of pathconf(2) information for NFSv4 mounted filesystems. The nfs3_pathconf_disable_cache tunable controls the caching of pathconf(2) information for NFSv3 mounted filesystems. For more information on the tunable, see nfs3_pathconf_disable_cache.
threshold because of a change in the filecache_max parameter. System reboot is not required to activate a change made to this tunable.

Modifying the Value

Enable this tunable to force the NFS client to use only a limited portion of the UFC file cache, which is controlled by the nfs_ufc_threshold_percentage tunable. HP recommends enabling this tunable if you configure a very low value for UFC filecache_max and also have loopback mount points.

2.1.
2.1.45 nfs_ufc_threshold_percentage

Description

The nfs_ufc_threshold_percentage tunable controls the amount of UFC file cache that the NFS client can consume at any given time. This tunable is effective only when the nfs_enable_ufc_threshold tunable is enabled.
Modifying the Value

If the NFS client sends a very high number of commit calls to the server, it can consume the bandwidth of both the network and the NFS server, and can even reduce NFS write performance. In such environments, enabling this tunable can significantly reduce the commit calls over the wire. Because of the reduction in commit calls and the dedicated set of kernel threads for sending them, it might also improve the overall write performance of the application.

2.1.
Tested Values Default: 0 (Shared file handles search optimization is disabled) Min: 0 Max: 1 (Shared file handles search optimization is enabled) Restrictions on Changing The nfs4_sfh_boost_search tunable is static. System reboot is required to activate changes made to this tunable. Modifying the Value Enable this tunable to optimize the search operation in the shared file handles list. 2.2 NFS Server Tunables Table 2.2-1 lists the NFS server tunables.
Max: 36000 seconds (10 hours) Note: If the tunable is set to a value greater than 36000 seconds, an informational warning is issued at runtime. Any value greater than 36000 is outside the tested limits. Restrictions on Changing The nfs_exi_cache_time tunable is dynamic. System reboot is not required to activate changes made to this tunable. Modifying the Value The size of the NFS authentication cache can be modified by changing the duration of time a cache entry is held before purging.
The nfs3_srv_read_copyavoid tunable controls the server-side read copy avoidance feature for NFSv3 filesystems. For more information on this tunable, see nfs3_srv_read_copyavoid.
Note: Before you enable this tunable, you should install the Virtual Memory fix (SR: 8606472738) included in patch PHKL_36457. Restrictions on Changing The nfs2_srv_read_copyavoid tunable is dynamic. System reboot is not required to activate changes made to this tunable. Only NFSv2 mount points are affected by changing the value of this tunable. Modifying the Value Enable this tunable to potentially improve the performance of applications that issue read requests in NFSv2 filesystems.
Modifying the Value

To disable the READDIRPLUS functionality, disable the tunable. Once disabled, the NFS client reverts to the READDIR operation to retrieve directory contents and the LOOKUP operation to retrieve extended attributes. For example, if the READDIRPLUS functionality is disabled, commands such as ls, which display only directory information, can have a shorter response time. Commands such as find, which display inode numbers and attribute data, can have a longer response time.
HP recommends disabling the nfs3_srv_read_copyavoid tunable until these problems have been fixed. Fixes for these problems are planned for a future ONCplus release.
2.3 KRPC Client Tunables

Table 2.3-1 lists the KRPC client tunables. The last column specifies which ONCplus version first introduced the tunable.

Table 2.3-1 KRPC Client Tunables

Kctune Tunable Name    Range        Default Value  Units         ONCplus Version
rpc_clnt_idle_timeout  0 to 600000  300000         Milliseconds  B.11.31_LR
rpc_clnt_max_conns     1 to 10      1              Connections   B.11.31_LR
rpc_clnt_udpresvports  0 to 256     0              Ports         B.11.31.03

2.3.
In most situations, this single connection paradigm works well. However, in certain cases the client can perform better if more than one TCP connection is used to communicate with the NFS server.

Tested Values

Default: 1
Min: 1
Max: 10

Note: If the tunable is set to a value greater than 10 connections, an informational warning is issued at runtime. Any value greater than 10 is beyond the tested limit.

Restrictions on Changing

The rpc_clnt_max_conns tunable is dynamic.
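Since the tunable is dynamic, enabling an additional connection is a one-line change. The value is illustrative and should be raised only after measuring:

```shell
# Allow the KRPC client to open up to 2 TCP connections per NFS server
kctune rpc_clnt_max_conns=2
```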
of reserved ports is exhausted. In these situations you can try increasing the value of rpc_clnt_udpresvports to stop KRPC from consuming all UDP reserved ports and allow applications a better chance at obtaining a reserved port when they need one.
2.4 KRPC Server Tunables

Table 2.4-1 lists the KRPC server tunables. The last column specifies which ONCplus version first introduced the tunable.

Table 2.4-1 KRPC Server Tunables

Kctune Tunable Name            Range        Default Value  Units          ONCplus Version
rpc_svc_cltsmaxdupreqs         1 to MAXINT  1024           Cache Entries  B.11.31_LR
rpc_svc_cotsmaxdupreqs         1 to MAXINT  1024           Cache Entries  B.11.31_LR
rpc_svc_default_max_same_xprt  1 to MAXINT  8              Requests       B.11.31_LR
rpc_svc_idle_timeout           0 to MAXINT  360000         Milliseconds   B.11.
2.4.2 rpc_svc_cotsmaxdupreqs

Description

The rpc_svc_cotsmaxdupreqs tunable controls the size of the duplicate request cache that detects RPC-level retransmissions on connection-oriented transports such as TCP. This cache avoids processing retransmitted requests that are non-idempotent.

Tested Values

Default: 1024
Min: 1
Max: 2048

Note: If the tunable is set to a value greater than 2048, an informational warning is issued at runtime. Any value greater than 2048 is beyond the tested limit.
Note: If the tunable is set to a value greater than 64 requests, an informational warning is issued at runtime. Any value greater than 64 is beyond the tested limits.

Restrictions on Changing

The rpc_svc_default_max_same_xprt tunable is dynamic. System reboot is not required to activate changes made to this tunable. However, the maximum number of requests that can be processed before switching transport endpoints is set when the transport endpoint is configured into the KRPC subsystem.
2.4.5 rpc_svc_reducepoolcontention

Description

The KRPC server processes incoming requests with a pool of service threads. These threads contend for the pool's resources while servicing incoming requests. The rpc_svc_reducepoolcontention tunable reduces the contention for the pool's resources by optimizing their usage. This optimization results in improved performance for certain workloads.
Note: Once the tunable is enabled, all NFS server threads that service client requests immediately exhibit the CPU binding behavior. If the tunable is disabled, NFS server threads working on existing TCP connections continue to exhibit the binding behavior until those TCP connections are closed or time out.

Modifying the Value

Enable this tunable to force the spawned service thread to bind and run on the CPU where the network packet is processed as part of the network stack.
Note: For overall performance improvement, HP recommends uniformly distributing interrupt lines from all the network interface cards across the available CPUs on the system. For details on how to configure interrupt lines, see intctl(1M).

2.4.7 rpc_svc_preempt_enable

Description

This tunable causes the nfsd thread to voluntarily give up the CPU after serving some requests, allowing other high-priority threads a chance to run.
Kctune Tunable Name  Range   Default Value  Units          ONCplus Version
klm_log_level        0 to 9  0              Logging level  B.11.31.01

2.5.1 klm_log_level

Description

The klm_log_level tunable controls the logging of debug messages by the Kernel Lock Manager. It also controls the level of detail in the debug messages that are logged to the dmesg buffer and the syslog file.
Appendix A. Obsolete tunables

This section lists the tunables that are currently available on HP-UX 11i v2 but are not provided on HP-UX 11i v3. Table A.1 lists the tunable names and the reasons why these tunables have been discontinued.

Table A.1 Obsolete Tunables

Kctune Tunable Name               Reason for Discontinuance
nfs_async_read_avoidance_enabled  This feature is enabled by default on 11i v3 and is not configurable.