
sends on the same socket). AWAITIOX returns a tag and a socket ID so the application can identify
which operation just completed. The application then issues a FILE_GETINFO_ call with that file
number to retrieve the completion status of the operation it just performed (and, depending on the
operation, other fields such as the return length).
Considerations for Using socket_nw
If your server cannot afford to wait, use socket_nw rather than the socket call. Similarly, use
send_nw rather than send.
Concurrency and Considerations for Blocking and Nonblocking
Asynchrony is a way for an application to achieve concurrency between its own execution and the
execution of the TCP/IP protocol. By using asynchronous operations, you ensure that your program
executes concurrently with the work done by the TCP/IP protocol stack.
In OSS, the mechanisms for asynchrony are similar to, but distinct from, the Guardian mechanisms.
The OSS mechanism is derived from the UNIX world, where instead of waited and nowaited
operations you have the notion of blocking and nonblocking operations. Blocking operations are
similar to Guardian waited operations: control does not return to your program until the operation
has completed.
A nonblocking operation returns control to the application immediately; the application retrieves
the completion of the operation later, so the operation proceeds concurrently with the application's
own processing. (See Nowait I/O (page 32) for a more in-depth comparison of waited and nowaited
operations with blocking and nonblocking operations.)
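The following minimal sketch shows one way to put a socket into nonblocking mode in the OSS
environment, using the standard fcntl() interface with O_NONBLOCK. The use of a TCP socket and
the error handling shown are illustrative assumptions, not a required pattern.

/* Minimal sketch: putting a socket into nonblocking mode (OSS/POSIX style). */
#include <fcntl.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

static int make_nonblocking(int sock)
{
    int flags = fcntl(sock, F_GETFL, 0);           /* read current file status flags */
    if (flags == -1)
        return -1;
    return fcntl(sock, F_SETFL, flags | O_NONBLOCK); /* add the nonblocking flag */
}

int main(void)
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);    /* illustrative TCP socket */
    if (sock == -1) {
        perror("socket");
        return 1;
    }
    if (make_nonblocking(sock) == -1) {
        perror("fcntl");
        return 1;
    }
    /* From here on, operations such as connect(), recv(), and send() return
       immediately; a result of -1 with errno set to EWOULDBLOCK or EINPROGRESS
       means the operation is still in progress and its completion must be
       checked later (for example, with select()). */
    close(sock);
    return 0;
}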
NOTE: A receive must be posted on a socket for the data to be acted on. Your application should
post the receive before the send is issued so there is no time lag.
Considerations for a Server Posting Receives
From a system standpoint, a server should post the largest receives it can, consistent with the
maximum amount the other side can send. The larger the receive the server can post, the better.
If the other side controls how much it sends, the more it sends per operation, the better. A server
should have at least one receive pending on every socket on which it can simultaneously receive
data. Because TCP is a streaming protocol, you might want more than one receive pending on a
socket, because data can arrive a little at a time. More importantly, ensure that the receive space
is large enough by setting the SO_RCVBUF socket option.
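The following minimal sketch shows one way to request a larger receive space with the SO_RCVBUF
socket option. The 64 KB size is an assumed, illustrative value; the system may round, adjust, or
cap whatever is requested, which is why the sketch reads the value back with getsockopt().

/* Minimal sketch: enlarging a socket's receive buffer with SO_RCVBUF. */
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock == -1) {
        perror("socket");
        return 1;
    }

    int requested = 64 * 1024;             /* illustrative size; tune per application */
    if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF,
                   &requested, sizeof requested) == -1) {
        perror("setsockopt(SO_RCVBUF)");
        return 1;
    }

    /* Read the value back to see what the system actually granted. */
    int granted = 0;
    socklen_t len = sizeof granted;
    if (getsockopt(sock, SOL_SOCKET, SO_RCVBUF, &granted, &len) == 0)
        printf("receive buffer: requested %d, granted %d\n", requested, granted);

    close(sock);
    return 0;
}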
Basic Steps for Programs
This subsection summarizes the basic steps performed by a client and server program for the
NonStop TCP/IP, Parallel Library TCP/IP, and NonStop TCP/IPv6 subsystems.
NonStop TCP/IP, Parallel Library TCP/IP, and NonStop TCP/IPv6 Basic Steps
The basic steps performed by a client or server program are the same whether your program uses
TCP sockets, UDP sockets, or RAW sockets. This subsection summarizes these steps for each type
of program. Important considerations for each type of program are presented later in this section.
Client Program
The basic steps performed by a client program are as follows (a code sketch appears after the list):
1. Designate the NonStop TCP/IP, Parallel Library TCP/IP, or TCP6SAM process name (optional).
2. Create a socket.
3. Bind the socket to any port (optional; not done for RAW).
4. Connect the socket (required for TCP; optional for UDP and RAW).
5. Start data transfer.
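The following minimal sketch illustrates these client steps for a TCP socket. The server address
(127.0.0.1), port (7), and message are placeholder assumptions, and step 1 is only indicated in a
comment because the way a transport-provider process is designated is described elsewhere in this
manual.

/* Minimal sketch of the client steps above, using a TCP socket. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    /* Step 1 (optional, platform-specific): designate the NonStop TCP/IP,
       Parallel Library TCP/IP, or TCP6SAM process to be used; this step is
       omitted here. */

    /* Step 2: create a socket. */
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock == -1) {
        perror("socket");
        return 1;
    }

    /* Step 3 is optional for a client: without an explicit bind(), the system
       assigns a local port when the socket is connected. */

    /* Step 4: connect the socket (required for TCP). */
    struct sockaddr_in server;
    memset(&server, 0, sizeof server);
    server.sin_family = AF_INET;
    server.sin_port = htons(7);                       /* placeholder port */
    server.sin_addr.s_addr = inet_addr("127.0.0.1");  /* placeholder address */
    if (connect(sock, (struct sockaddr *)&server, sizeof server) == -1) {
        perror("connect");
        return 1;
    }

    /* Step 5: start data transfer. */
    const char *msg = "hello";
    if (send(sock, msg, strlen(msg), 0) == -1)
        perror("send");

    char reply[128];
    ssize_t n = recv(sock, reply, sizeof reply, 0);
    if (n > 0)
        printf("received %zd bytes\n", n);

    close(sock);
    return 0;
}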