Expand Configuration and Management Manual (G06.24+)

Subsystem Description
Expand Configuration and Management Manual 523347-008, page 18-5
Expand Line-Handler Processes
The SNAX/APN subsystem consists of a service-manager process and one or more
SNAX/APN line-handler processes. Each Expand-over-SNA line-handler process is
configured to use a particular SNAX/APN line and logical unit (LU). At least one
SNAX/APN line and one Expand line must be configured and started at each end of
the SNA network through which the Expand-over-SNA line-handler processes will
communicate.
Expand-Over-IP Line-Handler Process
The Expand-over-IP line-handler process uses the NonStop TCP/IP subsystem to
provide connectivity to an Internet Protocol (IP) network.
The Expand-over-IP line-handler process is a client of a NonStop TCP/IP process. The
Expand-over-IP line-handler process communicates with the NonStop TCP/IP process
through the shared memory of the QIO subsystem.
The NonStop TCP/IP process provides a Guardian file-system interface to the
Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP), as well
as raw (direct) access to IP. The Expand-over-IP line-handler process uses the UDP
services provided by the TCP/IP process to transmit data across an IP network.
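Because the line-handler uses the UDP datagram service as its transport, the exchange can be sketched at the socket level. The following is a generic Python illustration of a UDP datagram round trip on the loopback interface; it is not the Guardian file-system interface or the QIO path the actual processes use, and the payload is invented:

```python
import socket

# A receiver standing in for the remote Expand-over-IP line-handler process.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # let the OS pick a free UDP port
port = receiver.getsockname()[1]

# A sender standing in for the local line-handler process. UDP is
# connectionless, so a single sendto() carries one self-contained datagram.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"expand frame (illustrative payload)", ("127.0.0.1", port))

data, addr = receiver.recvfrom(2048)
print(data.decode())                     # expand frame (illustrative payload)

sender.close()
receiver.close()
```

Note that UDP itself provides no delivery guarantees; the Expand subsystem layers its own protocol on top of these datagrams.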
Expand-Over-ServerNet Line-Handler Process
The Expand-over-ServerNet line-handler process uses a pair of NonStop cluster
switches, Modular ServerNet expansion boards (MSEBs), plug-in cards (PICs), fiber-
optic cables, and the ServerNet monitor process ($ZZSCL) to connect to a ServerNet
cluster.
Each ServerNet cluster uses at least two NonStop cluster switches for routing: one for
the X-fabric and one for the Y-fabric. In the star topology, introduced with the G06.09
RVU, a single switch per fabric supports up to eight nodes. In the split-star topology,
introduced with the G06.12 RVU, two switches per fabric support up to 16 nodes (eight
nodes per switch). In the tri-star topology, introduced with the G06.14 RVU, three
switches per fabric support up to 24 nodes (eight nodes per switch). For more
information about the cluster switches, refer to the ServerNet Cluster Manual.
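The per-topology limits above all derive from the same eight-nodes-per-switch rule. A small Python helper (hypothetical names, for illustration only) makes the arithmetic explicit:

```python
NODES_PER_SWITCH = 8  # limit stated for all three topologies

# Switches per fabric for each topology. The X and Y fabrics are mirrored,
# so cluster capacity is determined by one fabric's switch count.
TOPOLOGIES = {"star": 1, "split-star": 2, "tri-star": 3}

def max_nodes(topology: str) -> int:
    """Maximum ServerNet cluster nodes for the given topology."""
    return TOPOLOGIES[topology] * NODES_PER_SWITCH

for name in TOPOLOGIES:
    print(f"{name}: up to {max_nodes(name)} nodes")
```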
Each switch connects to two MSEBs per node. At least two plug-in cards are required
for ServerNet connections between system enclosures in each node. Two fiber-optic
cables are required for each node, for attachment to the X and Y cluster switches.

Note. For more detailed information about the Expand-to-IP interface, refer to Expand-to-IP
Interface on page 18-54.

The