OSS Name Server Caching
The OSS name server caches both inodes and name entries and maintains a catalog
for each OSS fileset. With the OSS name server cache, previously found entries
remain in cache; thus no disk access is required for these items. Further, repeated
requests for nonexistent files do not require repeated access to the catalog; the cache
stores in memory that these entries do not exist.
For example, when the same commands are executed repeatedly, the name entries they
reference remain in the cache, so subsequent lookups do not require access to the
catalog, which improves performance.
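For illustration, the following C sketch (the path names are only examples) looks up
one existing and one nonexistent path name in a loop. The first lookup of each name
may require catalog access; subsequent lookups of the same names, including the name
that does not exist, can be satisfied from the name server cache.

    #include <sys/stat.h>

    int main(void)
    {
        struct stat sb;
        int i;

        for (i = 0; i < 1000; i++) {
            /* Existing file: its name entry stays in the cache after
               the first successful lookup. */
            (void) stat("/usr/include/stdio.h", &sb);

            /* Nonexistent file: the cache records that this entry does
               not exist, so repeated failures need not reaccess the
               catalog. */
            (void) stat("/usr/include/no_such_header.h", &sb);
        }
        return 0;
    }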
Data Block Caching
A data block cache consists of recently accessed blocks from files. Data blocks for an
open file can be cached in one or multiple processors. When you must access the
same block of a file many times, performance is not impaired, because the block is
cached in memory; no disk access is involved.
As in the UNIX environment, data blocks for an open file in the OSS environment can
be cached for both read and write operations. As long as all opens of a regular file
are in the same processor, or all opens are readers, data blocks for that file can be
cached in each processor where the file is open. When a file is open in multiple
processors and at least one open is a writer, caching is done by the Disk Process 2
(DP2), which always performs caching in addition to its disk block caching.
Data caching improves performance by communicating with the disk process through
read and write messages that contain multiple data buffers. This practice results in
fewer messages and helps DP2 issue multiblock disk I/Os.
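As a simple illustration, the following C sketch (the file path and block size are
only examples) rereads the same 4 KB region of a regular file. After the first read,
the block can be served from the data block cache rather than from disk.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[4096];
        int fd, i;

        fd = open("/tmp/sample.dat", O_RDONLY);
        if (fd == -1) {
            perror("open");
            return 1;
        }
        for (i = 0; i < 1000; i++) {
            /* Seek back to the start and reread the same block; after
               the first pass the block is already cached in memory. */
            if (lseek(fd, (off_t) 0, SEEK_SET) == -1 ||
                read(fd, buf, sizeof buf) == -1) {
                perror("reread");
                break;
            }
        }
        close(fd);
        return 0;
    }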
File Caching for Regular Disk Files
The OSS environment includes a distributed cache for regular disk files. OSS files in a
disk volume are cached unless caching is disabled for that disk volume. (File caching
can be enabled or disabled through SCF commands.)
Single Processors Versus Multiple Processors for Files
The following list presents the performance considerations for single and multiple
readers and for single or multiple processors (a short code sketch follows the list):
•  If there are multiple readers in multiple processors, caching is done in each
   processor.
•  If there are multiple readers and one writer in multiple processors, caching is
   done in the DP2 process.
•  For multiple readers in multiple processors, there can be a significant
   performance improvement over using one processor, even though there is more
   overhead when reading from multiple processors.
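The all-readers case can be demonstrated with ordinary POSIX calls. The following C
sketch (the file path is only an example) starts two processes that both open the
same regular file read-only; because every open is a reader, data blocks for the file
can be cached in each processor where an open exists. Placing the two processes in
different processors would require NonStop-specific process-creation attributes that
are not shown here; if either process instead opened the file for writing while opens
existed in more than one processor, caching would be done by the DP2 disk process.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static void read_loop(const char *path)
    {
        char buf[4096];
        int fd, i;

        fd = open(path, O_RDONLY);   /* read-only open: a reader */
        if (fd == -1) {
            perror("open");
            return;
        }
        for (i = 0; i < 100; i++) {
            lseek(fd, (off_t) 0, SEEK_SET);
            read(fd, buf, sizeof buf);
        }
        close(fd);
    }

    int main(void)
    {
        const char *path = "/tmp/shared.dat";

        if (fork() == 0) {           /* second reader process */
            read_loop(path);
            _exit(0);
        }
        read_loop(path);             /* first reader process */
        wait(NULL);
        return 0;
    }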