Using import to Append Data to Tables or Partitions
Appending Data to Multiple Partitions in Parallel
Use import to append data to partitioned tables in parallel. This strategy can improve
append performance if table partitions are distributed across disks, CPUs, and I/O
channels. To append data to partitions in parallel:
1. Start an import process for each partition. One way is to start each import
process in the CPU associated with the target partition. Another way is to start
import processes in the CPUs associated with the data sources for the import
command.
2. Issue one import request for each table partition.
3. Supply each import command with the specific range of input data for its target
partition. Possible strategies are:
• Arrange the input data so that it is divided into separate files, each containing
the input for one target partition, and use these files as input to the import
commands (see the first sketch after this list).
• Run one instance of the import command, using a single input file, to load
data into the partitioned table.
• Use DataLoader/MX to read the input data. Start each DataLoader/MX process
as a named process before entering the import command, and use the process
name as the input file for that command (see the sketch after the next
paragraph). The DataLoader/MX process must wait for requests on its $RECEIVE
file and then supply data by replying to those requests. When using this
approach, be sure to balance the processing load for optimal performance.
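For example, the separate-input-file strategy might look like the following sketch,
assuming a target table named sales.hist.trans and Guardian input files referenced
through OSS /G pathnames. All of these names are placeholders, and only the -I
input-file option is shown; confirm the exact option syntax against the import
utility description for your release. Each command is entered from an OSS shell
running in the CPU associated with its target partition, and the trailing & lets the
imports run in parallel:

  import sales.hist.trans -I /G/vol1/week/data &
  import sales.hist.trans -I /G/vol2/week/data &
  # ...one import command for each remaining partition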
You can use the DataLoader/MX product to help implement the preceding tasks. For
example, DataLoader/MX can deliver the input data to the correct target partitions.
For more information about the DataLoader/MX product, see the DataLoader/MX
Reference Manual.
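When a DataLoader/MX process supplies the input rows, the import command names
that process in place of a disk file. A minimal sketch, assuming a DataLoader/MX
server already started as the named Guardian process $DL01 (the process and table
names are placeholders, and the process name is written here in OSS /G pathname
form):

  import sales.hist.trans -I /G/dl01

The $DL01 process must already be waiting on its $RECEIVE file when import opens
it; import then obtains its input from the data that the process supplies in its
replies.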
Example of Using import to Append Data
Suppose that you have a history table containing 80 weeks of data. One approach is
to partition the table so that each week of data resides in its own partition. This
scheme, however, might concentrate requests on certain partitions, particularly
those containing the most recent weeks of data. Instead, the table is partitioned by
a hash value so that each week of data is striped across all 16 partitions. To
maximize parallel execution, each partition is associated with a different CPU.
The next example uses the import command each week to add the most recent week
of data to the 16 partitions. It starts one import process for each target partition,
in the CPU associated with that partition; you can enter each successive import
command from another OSS shell running in another CPU.
The example divides the input data into 16 files, and each file's data is appended to
the corresponding target partition. The input files reside on volumes $VOL1 through
$VOL16.
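A command sequence like the following sketch could drive the weekly append. The
table name and file names are placeholders: the 16 input files are written as OSS /G
pathnames for Guardian files on $VOL1 through $VOL16, and only the -I input-file
option is shown. The loop starts all 16 imports in the background from a single
shell; to place each import process in the CPU that owns its target partition, enter
the individual commands from OSS shells running in those CPUs instead:

  i=1
  while [ $i -le 16 ]
  do
      # Append this week's input file for stripe $i; import places each
      # row in the correct hash partition of the target table.
      import sales.hist.trans -I /G/vol$i/week/newdata &
      i=$((i+1))
  done
  wait    # block until all 16 import processes complete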