Open System Services Porting Guide (G06.24+, H06.03+)
Table Of Contents
- What’s New in This Manual
- About This Manual
- 1 Introduction to Porting
- 2 The Development Environment
- 3 Useful Porting Tools
- 4 Interoperating Between User Environments
- Purpose of Interoperability
- The OSS User Environment
- OSS Commands for the Guardian User
- Guardian Commands for the UNIX User
- OSS Pathname and Guardian Filename Conversions
- Running the OSS Shell and Commands From TACL
- Running Guardian Commands From the OSS Shell
- Running OSS Processes With Guardian Attributes
- Using OSS Commands to Manage Guardian Objects
- 5 Interoperating Between Programming Environments
- 6 OSS Porting Considerations
- 7 Porting UNIX Applications to the OSS Environment
- 8 Migrating Guardian Applications to the OSS Environment
- General Migration Guidelines
- C Compiler Issues for Guardian Programs
- Using New and Extended Guardian Procedures
- Using OSS Functions in a Guardian Program
- Interoperating With OSS Programs
- Starting an OSS Program From the Guardian Environment
- C Compiler Considerations for OSS Programs
- Porting a Guardian Program to the OSS Environment
- How Arguments Are Passed to the C or C++ Program
- Differences in the Two Run-Time Environments
- Which Run-Time Routines Are Available
- Use of Common Run-Time Environment (CRE) Functions
- Replacing Guardian Procedure Calls With Equivalent OSS Functions
- Which IPC Mechanisms Can Be Used
- Interactions Between Guardian and OSS Functions
- 9 Porting From Specific UNIX Systems
- 10 Native Migration Overview
- 11 Porting or Migrating Sockets Applications
- 12 Porting Threaded Applications
- A Equivalent OSS and UNIX Commands for Guardian Users
- B Equivalent Guardian Commands for OSS and UNIX Users
- C Equivalent Inspect Debugging Commands for dbx Commands
- D Equivalent Native Inspect Debugging Commands for dbx Commands
- E Standard POSIX Threads Functions: Differences Between the Previous and Current Standards
- Glossary
- Index
OSS Porting Considerations
Open System Services Porting Guide—520573-006
Using Pipes and FIFO Files
Using a Pipe Across Processors
In the OSS environment, there is one OSS pipe server per processor. Each OSS pipe
server process is started at reload time, handles all pipe-creation tasks for its
processor, and is the destination for all cross-processor pipe and FIFO I/O traffic.
When you create a pipe across processors, you must communicate with the OSS pipe
server in the processor that owns the pipe. This interaction involves copying
information between pipe buffers across processors and therefore uses more system
resources.
If the child process is created in another processor, all pipe I/O by the child process
is handled by the processor in which the parent process is running. This strategy
requires interprocessor traffic and incurs more system overhead than when the parent
and child processes run in the same processor.
Despite the steps involved, using multiple processors has performance advantages
such as load balancing. If your code uses many pipes between processors, be aware
of the additional system overhead.
Opening and Reading a FIFO File
Reading a FIFO file in the OSS environment also consumes more system resources
when done across processors. Reading a FIFO is the same operation as reading a
pipe; the only difference is that a FIFO has a name in the OSS file system.
When you open a FIFO file, the OSS name server resolves the pathname in your
OSS file open request. When you open and read a FIFO across processors, the
interaction between OSS pipe servers adds further communication overhead.
The following are the steps for opening a FIFO:
1. An application (client) uses a file open call for a FIFO called afifo. The open()
function sends a file open request to the OSS name server.
2. The OSS name server resolves the pathname and determines that it refers to a
FIFO that is not currently open.
3. The OSS name server picks an OSS pipe server and stores the processor number
of the OSS pipe server for the local opener.
4. The OSS name server sends a FIFO open request to the OSS pipe server.
5. The OSS pipe server creates a FIFO.
6. The OSS pipe server returns the FIFO’s ID to the OSS name server.
7. The OSS name server replies to the application with the FIFO’s ID. (The returned
FIFO ID has encoded within it the processor number of the OSS pipe server
handling this FIFO—see Step 3.)
Despite the overhead associated with dealing with pipes and FIFOs across processors,
there are often good reasons to distribute the processes across processors to improve
the overall performance and throughput of the application.