• One server for similar business functions: Similar business transactions (for instance, enter
orders less than $100 in value or delete orders) are handled by a single server.
• One server for similar business functions and all database functions: A single server services
similar business transactions and handles both business decisions and database navigation.
• Update or read-only server: A single server exclusively handles either update transactions or
inquiry transactions.
One approach to packaging server functions is to first group server functions based on management
considerations (for example, all servers within a server class must freeze and stop as a unit) and
security considerations (for example, the server class must execute under one user ID). Then, partition
server functions based on the database files that are most frequently accessed.
To ensure acceptable response times for users and allow you to tune your application for
performance, it is very important to partition server functions based on service times. If the same
set of servers handles short and long transactions, some requests for short transactions will be
queued behind long transactions, resulting in poor response times for the short requests. If the short
and long transactions perform different functions, put those functions in separate server programs.
If the short and long transactions perform essentially the same work—for example, a simple database
lookup—but some requests could be for multiple lookups, you can configure two or more server
classes for requests of different lengths, all using the same program code.
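For example, two server classes that run the same server program but handle requests of different expected lengths might be configured in PATHCOM along the following lines. This is a sketch only: the server class names, the program file name, and the process counts are illustrative, and the full set of SERVER attributes you need is described in the PATHCOM reference.

RESET SERVER
SET SERVER PROGRAM $DATA.APP.LOOKUP
SET SERVER NUMSTATIC 4
SET SERVER MAXSERVERS 8
ADD SERVER LOOKUP-SHORT

RESET SERVER
SET SERVER PROGRAM $DATA.APP.LOOKUP
SET SERVER NUMSTATIC 1
SET SERVER MAXSERVERS 2
ADD SERVER LOOKUP-LONG

Requesters then direct single-lookup requests to LOOKUP-SHORT and multiple-lookup requests to LOOKUP-LONG, so short requests are not queued behind long ones even though both server classes run the same program code.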
Nested Servers
A server written in C, C++, COBOL85, Pascal, TAL, or pTAL can use the Pathsend procedures to
send a request message to a server in another server class and receive a reply. In such a case,
the server is acting as a requester. Servers communicating with each other in this manner are called
nested servers.
For example, consider a situation where a requester on one node requires the services of two
server classes on another node. Instead of sending to server class A, waiting for a reply, and then
sending to server class B, the requester could send to server class A, and server class A could send
to server class B, get the response, and then reply to the requester. This use of nested servers
reduces the number of messages sent across data communications lines and enables application
logic to be distributed near the resources it manages.
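For example, a server process in server class A written in C could forward part of its work to server class B with a waited SERVERCLASS_SEND_ call and then reply to its own requester, as in the following sketch. The PATHMON process name ($PM), server class name (SERVER-B), buffer sizes, and minimal parameter lists are assumptions for illustration; check the exact SERVERCLASS_SEND_ syntax and error handling (including SERVERCLASS_SEND_INFO_) against the Pathsend procedure descriptions in this manual. Handling of system messages such as open and close is omitted.

#include <string.h>
#include <cextdecs(FILE_OPEN_, READUPDATEX, REPLYX, SERVERCLASS_SEND_)>

#define REQ_MAX   2048
#define REPLY_MAX 2048

int main(void)
{
   short recv_fnum, error;
   unsigned short count_read, reply_len;
   char  req_buf[REQ_MAX];       /* request from this server's requester */
   char  fwd_buf[REPLY_MAX];     /* request/reply exchanged with class B */

   /* Open $RECEIVE waited, receive depth 1. */
   error = FILE_OPEN_("$RECEIVE", 8, &recv_fnum, 0, 0, 0, 1);

   for (;;)
   {
      /* Wait for a request from a requester (or another server). */
      READUPDATEX(recv_fnum, req_buf, REQ_MAX, &count_read);

      /* Build the request for server class B (application-specific). */
      memcpy(fwd_buf, req_buf, count_read);

      /* Waited nested send to server class B under PATHMON $PM.
         This process is idle until B replies; see the considerations
         below on deadlock and server utilization. */
      error = SERVERCLASS_SEND_("$PM", 3, "SERVER-B", 8,
                                fwd_buf, count_read, REPLY_MAX,
                                &reply_len);
      if (error != 0)
      {
         /* Call SERVERCLASS_SEND_INFO_ for the Pathsend and file-system
            error codes and build an error reply instead. */
      }

      /* Reply to this server's own requester. */
      REPLYX(fwd_buf, reply_len);
   }
}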
Consider the following points when deciding whether to use nested server programs:
• Single-threaded servers that send to other server classes can cause process deadlocks. A
process deadlock is a situation in which two processes cannot proceed because each is waiting
for a reply from the other.
For example, if a process in server class A sends a request to server class B and the process
in server class B then sends a request to server class A, a deadlock might occur. Even if there
is more than one process in server class A, there is no guarantee that the second request would
not be sent to the same process that sent the original request.
To avoid this problem, the server program for server class A should keep a read operation
posted on $RECEIVE and wait for completion of either the send operation or the read operation.
Although this multithreading increases the complexity of the program, it is necessary to prevent
deadlock; a sketch of this pattern appears after this list.
• Single-threaded servers that send to other server classes can cause low server utilization in
the same way that any single-threaded process that calls another process can: the server
process sending the request is idle until it receives a reply from the server to which it sent the
request.
• Single-threaded servers that send to other server classes can, therefore, result in longer queues
for a server class, and these longer queues can affect application performance.
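The following sketch shows the shape of the deadlock-avoiding pattern described in the first consideration above: the server keeps a read posted on $RECEIVE, issues its send to the other server class nowait, and uses AWAITIOX to take whichever operation completes first, with tags distinguishing the two completions. It is illustrative only. The nowait flags value, the timeout value, the receive depth needed to accept a second request before the first is replied to, and the message-tag bookkeeping for REPLYX (normally obtained with FILE_GETRECEIVEINFO_ and reduced here to a hypothetical get_message_tag stub) are all simplifications that should be checked against the nowait Pathsend discussion and the procedure syntax in this manual.

#include <string.h>
#include <cextdecs(FILE_OPEN_, READUPDATEX, REPLYX, AWAITIOX, SERVERCLASS_SEND_)>

#define BUF_MAX      2048
#define RECEIVE_TAG  1L     /* identifies the posted $RECEIVE read      */
#define SEND_TAG     2L     /* identifies the nowait server-class send  */
#define SC_NOWAIT    1      /* assumed nowait flags value; verify       */

/* Hypothetical helper: would wrap FILE_GETRECEIVEINFO_ to return the
   message tag of the request just read from $RECEIVE. */
static short get_message_tag(void) { return 0; }

int main(void)
{
   short recv_fnum, fnum, error, op_num, msg_tag = 0;
   unsigned short count, reply_len, count_written;
   long  tag, buf_addr;
   char  recv_buf[BUF_MAX];     /* requests read from $RECEIVE          */
   char  send_buf[BUF_MAX];     /* request/reply exchanged with class B */

   /* $RECEIVE opened nowait; a receive depth of 2 lets this process
      accept a second request (for example, a nested request coming
      back from server class B) while the first is still unreplied. */
   error = FILE_OPEN_("$RECEIVE", 8, &recv_fnum, 0, 0, 1, 2);

   /* Keep a read posted on $RECEIVE at all times. */
   READUPDATEX(recv_fnum, recv_buf, BUF_MAX, &count, RECEIVE_TAG);

   for (;;)
   {
      fnum = -1;                             /* wait on any completion  */
      AWAITIOX(&fnum, &buf_addr, &count, &tag, -1L);

      if (tag == RECEIVE_TAG)                /* a request arrived       */
      {
         msg_tag = get_message_tag();        /* needed by REPLYX later  */

         /* Forward the work to server class B with a nowait send so
            this process remains free to service $RECEIVE. */
         memcpy(send_buf, recv_buf, count);
         error = SERVERCLASS_SEND_("$PM", 3, "SERVER-B", 8,
                                   send_buf, count, BUF_MAX,
                                   &reply_len,    /* actual reply len   */
                                   -1L,           /* no timeout         */
                                   SC_NOWAIT,     /* nowait send        */
                                   &op_num,       /* scsend-op-num      */
                                   SEND_TAG);

         /* Repost the read so the next request is not blocked behind
            the outstanding send. */
         READUPDATEX(recv_fnum, recv_buf, BUF_MAX, &count, RECEIVE_TAG);
      }
      else if (tag == SEND_TAG)              /* class B replied         */
      {
         /* The count from AWAITIOX is taken as the reply length; on
            error, SERVERCLASS_SEND_INFO_ gives the Pathsend and
            file-system error codes. */
         REPLYX(send_buf, count, &count_written, msg_tag);
      }
   }
}

With this structure, if a process in server class B sends a nested request back to this server while the send to B is still outstanding, the posted read completes and the new request can be serviced, so neither process waits on the other indefinitely.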