Open System Services Porting Guide (G06.29+, H06.06+, J06.03+)

OSS Name Server Caching
The OSS name server caches both inodes and name entries and maintains a catalog for each OSS
fileset. Previously found entries remain in the cache, so no disk access is required to look them
up again. Repeated requests for nonexistent files also avoid repeated catalog access, because
the cache records in memory that those entries do not exist. For example, when the same commands
are executed repeatedly, the name entries they reference stay in the cache, so subsequent lookups
do not require catalog access, which improves performance.
Data Block Caching
A data block cache holds recently accessed blocks from files. Data blocks for an open file can be
cached in one or more processors. Repeatedly accessing the same block of a file does not impair
performance, because the block is served from memory; no disk access is involved.
As in the UNIX environment, data blocks for an open file in the OSS environment can be cached
when read and write operations are involved. As long as all opens of a regular file are in the same
processor, or all opens are readers, caching of data blocks for that file can occur in each processor
where there are opens. When a file is open in multiple processors and at least one open is for
writing, caching is done only by Disk Process 2 (DP2), which always performs its own disk block
caching.
Data caching improves performance by communicating with the disk process through read and
write messages that contain multiple data buffers. This practice results in fewer messages and
helps DP2 issue multiblock disk I/Os.
File Caching for Regular Disk Files
The OSS environment includes a distributed cache for regular disk files. OSS files in a disk volume
are cached unless caching is disabled for that disk volume. (File caching can be enabled or disabled
through SCF commands.)
Single Processors Versus Multiple Processors for Files
The following list presents the performance considerations when dealing with single and multiple
readers and with single or multiple processors:
• If there are multiple readers in multiple processors, caching is done in each processor.
• If there are multiple readers and one writer in multiple processors, caching is done in the DP2
  process.
• For multiple readers in multiple processors, there can be a significant performance improvement
  over using one processor, even though there is more overhead when reading from multiple
  processors.
• For multiple readers and one writer in multiple processors, there can still be a significant
  performance improvement over using one processor, depending on the mix of read versus
  write traffic.
When an application program calls one of the tdm_spawn or tdm_execve set of functions and
specifies a processor different from the caller’s processor, this activity is called open migration. In
this case, some or all of the files that are open in the caller must be opened by the new process
created in the other processor. After these opens complete, the calling process terminates, which
closes all of its files. Creating processes in another processor therefore involves more file opens
and the management of more copies of data than file operations within a single processor.
Use multiple processors for OSS tasks when your workload requires them, but be aware of the
system costs and design trade-offs involved for your operations, and take appropriate design steps
to make your code run efficiently.