Command lines 6 through 9 start the import processes that read the converted and blocked records and perform the load. Effectively, each import process loads a single partition of the table:
6$ import cat.sch.mytab -ip \$dbl1 -u mytab.fmt &
7$ import cat.sch.mytab -ip \$dbl2 -u mytab.fmt &
8$ import cat.sch.mytab -ip \$dbl3 -u mytab.fmt &
...
9$ import cat.sch.mytab -ip \$dbln -u mytab.fmt &
HP recommends that you specify a format file when you import from a DataLoader/MX process. For more information about import format files, see the SQL/MX Reference Manual.
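If the number of downstream DataLoader/MX processes is large or varies between runs, you can start the same import processes from a short OSS shell loop instead of entering each command line separately. The following lines are only a sketch: they assume four downstream processes named $dbl1 through $dbl4 and the mytab.fmt format file shown above; substitute your own process names and count:

i=1
while [ $i -le 4 ]                  # one import per downstream DataLoader/MX process
do
  import cat.sch.mytab -ip \$dbl$i -u mytab.fmt &
  i=`expr $i + 1`
done
wait                                # wait for all import processes to finish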
Single Source Parallel Maintenance
This highly parallel and self-balancing maintenance scenario is implemented by writing
only three user exits. In this case, the downstream DataLoader/MX processes have
user exits that can insert and update records.
Figure 6-2 shows a single input stream. Suppose that you are inserting into or updating a database as appropriate; that is, you are not performing a load, but applying changed data.
Again, you must have one initial DataLoader/MX process for your single data source. Determine the best number of additional DataLoader/MX processes by using the DataLoader/MX statistics report.
The steps are:
1. The initial DataLoader/MX process reads a block of records from the input stream
and sends it to whichever downstream DataLoader/MX process next requests a
block.
2. DataLoader/MX process 1 reads a block of records from the initial DataLoader/MX
process, does any needed data conversions, and performs an insert or update
operation.
3. At the same time, DataLoader/MX process 2 reads a block of records from the initial DataLoader/MX process, does any needed data conversions, and performs an insert or update operation. Run the optimum number of DataLoader/MX processes in parallel in this way; the sketch at the end of this section illustrates the insert-or-update logic that each user exit performs.
Figure 6-2 illustrates this process.
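The user exit procedures themselves are written in C. The following program is only a sketch of the insert-or-update decision that a downstream user exit makes for each record in a block; it is not the DataLoader/MX user exit interface. The names record_t, try_insert, update_row, and process_record are hypothetical stand-ins, and in a real user exit try_insert and update_row would issue the corresponding SQL/MX INSERT and UPDATE statements rather than printing messages.

/* Hedged sketch only: illustrates the insert-or-update decision a downstream
 * user exit makes for each record in a block.  All names here are
 * hypothetical; the real user exit would issue SQL/MX statements instead. */
#include <stdio.h>

typedef struct {                     /* hypothetical converted record layout */
    long key;
    char data[32];
} record_t;

/* Stand-in for an SQL/MX INSERT; returns 0 on success, -1 on a duplicate key. */
static int try_insert(const record_t *rec)
{
    static long seen[16];
    static int  nseen = 0;
    int i;
    for (i = 0; i < nseen; i++)
        if (seen[i] == rec->key)
            return -1;               /* key already present: caller updates  */
    if (nseen < 16)
        seen[nseen++] = rec->key;
    printf("INSERT key %ld (%s)\n", rec->key, rec->data);
    return 0;
}

/* Stand-in for an SQL/MX UPDATE of the existing row. */
static void update_row(const record_t *rec)
{
    printf("UPDATE key %ld (%s)\n", rec->key, rec->data);
}

/* Hypothetical per-record user exit body: insert first, update on duplicate. */
static void process_record(const record_t *rec)
{
    if (try_insert(rec) != 0)
        update_row(rec);
}

int main(void)
{
    record_t block[] = {             /* one block of converted records       */
        { 1, "new row" },
        { 2, "new row" },
        { 1, "changed row" }         /* key 1 already present: update        */
    };
    int i;
    for (i = 0; i < (int)(sizeof block / sizeof block[0]); i++)
        process_record(&block[i]);
    return 0;
}

Attempting the insert first and falling back to an update on a duplicate-key condition keeps the user exit simple: the database, rather than the user exit, decides whether each incoming record is new or changed.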