Disk Load Balancing, Fault Tolerance, and Configuration Limits for NonStop Systems

Summary
A frequently asked question about disk load balancing is "how can I balance the load of FCSA-
attached disk I/O across the X and Y ServerNet fabrics?" The short answer is that it is done
automatically by SCS (T8456), which is used for all I/O to FCSA-attached devices. SCS also
automatically balances the ServerNet I/O to CLIM-attached devices.
The remainder of this document provides a more complete discussion of several topics related to load
balancing, including:
- Processor fault tolerance and load balancing
- ServerNet fault tolerance and load balancing
- Adapter, SAC, and CLIM fault tolerance and load balancing
- I/O bus fault tolerance and load balancing
- Maximum device configuration limits
Introduction
The underlying principles of fault tolerance and load balancing are always the same:
- Don't allow any single failure in any hardware or software component to make data inaccessible.
- Spread the workload evenly across all available system components to get the most capacity and
  responsiveness at the lowest cost.
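These two principles work together: a path selector must both rotate I/O across all available paths and skip any path that has failed. The following toy sketch illustrates the idea with a round-robin selector over two redundant fabrics named "X" and "Y". It is an illustrative model only, not the actual SCS algorithm; the class and method names are hypothetical.

```python
# Toy model of fault-tolerant load balancing across redundant paths.
# NOT the actual SCS (T8456) implementation; names are hypothetical.
from itertools import cycle

class PathSelector:
    def __init__(self, fabrics=("X", "Y")):
        self.healthy = {f: True for f in fabrics}  # fabric -> up/down state
        self._rr = cycle(fabrics)                  # round-robin rotation

    def mark_down(self, fabric):
        self.healthy[fabric] = False

    def mark_up(self, fabric):
        self.healthy[fabric] = True

    def next_path(self):
        # Walk the rotation at most once around; skipping unhealthy
        # fabrics means a single fabric failure never blocks I/O.
        for _ in range(len(self.healthy)):
            fabric = next(self._rr)
            if self.healthy[fabric]:
                return fabric
        raise RuntimeError("no healthy fabric available")

sel = PathSelector()
print([sel.next_path() for _ in range(4)])  # alternates: X, Y, X, Y
sel.mark_down("Y")
print([sel.next_path() for _ in range(2)])  # all I/O fails over to X
```

With both fabrics healthy, I/O alternates evenly (the load-balancing principle); when one fabric is marked down, every request still completes on the survivor (the fault-tolerance principle).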
Applying these principles to different generations of NonStop systems can result in different
advice, because each system generation packages the hardware components differently, resulting in
different potential points of failure and different potential performance bottlenecks. There are three
significantly different generations of NonStop systems in common use today:
- S-series systems, characterized by S-series processor enclosures and running G-series RVUs. The
  ServerNet topology is Tetra-8 or Tetra-16.
  - S-series I/O consists of S-series I/O enclosures containing IOMF adapters and SNDA adapters
    with S-PIC (SCSI) SACs. The I/O bus is SCSI, and the disks are internal SCSI disks in the
    S-series enclosures.
  - S-series systems support a backward-compatible I/O generation with SNDA adapters with F-PIC
    (fiber) SACs connected to 45xx disk modules.
  - S-series systems support a forward-compatible I/O generation by replacing some S-series I/O
    enclosures with IOAME enclosures.
- NS-series systems, characterized by a P-switch ServerNet topology and running H-series RVUs.
  Note: Neoview systems running N-series RVUs are architecturally similar to NS-series P-switch
  systems, but Neoview system configuration is tightly constrained and optimized for the application
  suite that runs on Neoview systems, so this document does not discuss Neoview systems.
  - NS-series I/O consists of IOAME enclosures containing FCSA adapters connected to ESS (HP XP
    Enterprise Storage Systems) and FCDM disk enclosures. The I/O bus is Fibre Channel. VIO
    systems integrate parts of an NS-series system and IOAME together into shared packaging.
  - NS-series systems support a backward-compatible I/O generation by replacing some IOAME
    enclosures with S-series I/O enclosures.
  - NS-series systems support a forward-compatible I/O generation by replacing some IOAME
    enclosures with CLIMs.