3 Configuring Parallel Library TCP/IP for Complex and Heavy-Use Environments
This section shows you how to configure your listening applications to take advantage
of the architectural features of Parallel Library TCP/IP. The first part of the section
describes different listening-application models and how each model can be configured
to take advantage of Parallel Library TCP/IP. The second part provides configuration
examples for each of the listening-application models. The final part provides a
configuration example that emphasizes the networking aspects of configuring Parallel
Library TCP/IP.
As of the G06.14 RVU, complex, heavy-use SWAN configurations can also benefit from
Parallel Library TCP/IP. The advantages of Parallel Library TCP/IP for SWAN
environments are described later in this section (see Parallel Library TCP/IP for
Complex, Heavy-Use WAN Environments on page 3-29).
Introduction and Definitions
In this discussion, scalable, parallel, and load-balancing mean:
Scalable — the capacity of an architecture to grow to meet increasing computing
demands. A scalable architecture allows you to add processing power as your
computing needs grow.
Parallel — the division of work among different processes and/or processors.
Load-balancing — the use of algorithms that distribute the workload among processes
and/or processors.
Scalable and parallel are closely related. An architecture is scalable if you can add
parallel processing to it. By dividing the workload among multiple processes and/or
processors, you can scale your applications to meet increasing demand. However,
parallel processing does not by itself provide scalability; you also need load-balancing
algorithms and/or an architecture that avoids bottlenecks as your computing needs grow.
With conventional TCP/IP, only one socket can be bound exclusively to a given
incoming port number. With Parallel Library TCP/IP, multiple server-process instances
on a system can all share the same incoming TCP (or UDP) port number if round-robin
filtering is enabled. Round-robin filtering allows you to scale your system by increasing
the number of listening processes and by taking better advantage of the load-balancing
applications available on NonStop S-series systems.