Open System Services Porting Guide (G06.24+, H06.03+)
Table Of Contents
- What’s New in This Manual
- About This Manual
- 1 Introduction to Porting
- 2 The Development Environment
- 3 Useful Porting Tools
- 4 Interoperating Between User Environments
- Purpose of Interoperability
- The OSS User Environment
- OSS Commands for the Guardian User
- Guardian Commands for the UNIX User
- OSS Pathname and Guardian Filename Conversions
- Running the OSS Shell and Commands From TACL
- Running Guardian Commands From the OSS Shell
- Running OSS Processes With Guardian Attributes
- Using OSS Commands to Manage Guardian Objects
- 5 Interoperating Between Programming Environments
- 6 OSS Porting Considerations
- 7 Porting UNIX Applications to the OSS Environment
- 8 Migrating Guardian Applications to the OSS Environment
- General Migration Guidelines
- C Compiler Issues for Guardian Programs
- Using New and Extended Guardian Procedures
- Using OSS Functions in a Guardian Program
- Interoperating With OSS Programs
- Starting an OSS Program From the Guardian Environment
- C Compiler Considerations for OSS Programs
- Porting a Guardian Program to the OSS Environment
- How Arguments Are Passed to the C or C++ Program
- Differences in the Two Run-Time Environments
- Which Run-Time Routines Are Available
- Use of Common Run-Time Environment (CRE) Functions
- Replacing Guardian Procedure Calls With Equivalent OSS Functions
- Which IPC Mechanisms Can Be Used
- Interactions Between Guardian and OSS Functions
- 9 Porting From Specific UNIX Systems
- 10 Native Migration Overview
- 11 Porting or Migrating Sockets Applications
- 12 Porting Threaded Applications
- A Equivalent OSS and UNIX Commands for Guardian Users
- B Equivalent Guardian Commands for OSS and UNIX Users
- C Equivalent Inspect Debugging Commands for dbx Commands
- D Equivalent Native Inspect Debugging Commands for dbx Commands
- E Standard POSIX Threads Functions: Differences Between the Previous and Current Standards
- Glossary
- Index
Open System Services Porting Guide—520573-006
OSS Name Server Caching
The OSS name server caches both inodes and name entries and maintains a catalog
for each OSS fileset. Previously found entries remain in the cache, so no disk access
is required to look them up again. Further, repeated requests for nonexistent files do
not require repeated access to the catalog; the cache records in memory that these
entries do not exist.
For example, when a series of commands repeatedly references the same entries, the
cache remembers those entries, so subsequent access to the catalog is not required
and performance improves.
Data Block Caching
A data block cache holds recently accessed blocks from files. Data blocks for an
open file can be cached in one or more processors. Accessing the same block of a
file many times therefore does not impair performance: the block is cached in
memory, so no disk access is involved.
As in the UNIX environment, data blocks for an open file in the OSS environment can
be cached during read and write operations. As long as all opens of a regular file are
in the same processor, or all opens are readers, data blocks for that file can be
cached in each processor where there are opens. When there are opens in multiple
processors and at least one open is a writer, caching is done by Disk Process 2
(DP2), which always performs caching in addition to its disk block caching.
Data caching improves performance because communication with the disk process
uses read and write messages that contain multiple data buffers. This results in
fewer messages and helps DP2 issue multiblock disk I/Os.
File Caching for Regular Disk Files
The OSS environment includes a distributed cache for regular disk files. OSS files in a
disk volume are cached unless caching is disabled for that disk volume. (File caching
can be enabled or disabled through SCF commands.)
Single Processors Versus Multiple Processors for Files
The following list presents the performance considerations for single or multiple
readers and for single or multiple processors:
• If there are multiple readers in multiple processors, caching is done in each
processor.
• If there are multiple readers and one writer in multiple processors, caching is done
in the DP2 process.
• For multiple readers in multiple processors, there can be a significant performance
improvement over using one processor, even though there is more overhead when
reading from multiple processors.