table’s clustering key. Index maintenance degrades import performance. For more information,
see “Managing Indexes to Improve import Performance” (page 200).
•   RI constraints require a lookup in the referenced table for each row being inserted, to confirm
    that the referenced row exists, which imposes an extra I/O per inserted row. This extra I/O can
    impose a significant performance penalty when you run import on a table with one or more RI
    constraints. If necessary, drop the RI constraints before the import operation and add them
    back after it completes, as sketched below.
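For example, the following MXCI statements sketch this pattern. The table, column, and constraint names are placeholders rather than objects from this manual; check the exact ALTER TABLE syntax, including whether the constraint name must be fully qualified, in the SQL/MX Reference Manual.

    ALTER TABLE cat.sch.orders
       DROP CONSTRAINT orders_custfk;               -- remove the RI constraint before loading
    -- ... run the import operations on cat.sch.orders ...
    ALTER TABLE cat.sch.orders
       ADD CONSTRAINT orders_custfk FOREIGN KEY (custnum)
       REFERENCES cat.sch.customer (custnum);       -- restore the RI constraint afterward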
Managing Indexes to Improve import Performance
A table with one or more indexes takes much longer to load with import than the same table
without indexes.
Before you perform an import operation on an SQL/MX table with indexes, drop all the indexes,
and then re-create them after the operation completes:
1. Perform a SHOWDDL command on the table and save the SHOWDDL output.
2. Drop all indexes on the table.
3. Perform all required import operations on the table.
4. After the import operations complete, locate the CREATE INDEX statements in the saved
    SHOWDDL output and add the NO POPULATE option to each statement.
5. Re-create the indexes.
6. Run POPULATE INDEX to load the data into the indexes.
For more information, see the SQL/MX Reference Manual.
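For example, the following MXCI statements sketch this sequence for a table with a single index. The table, index, and column names are placeholders; see the SQL/MX Reference Manual for the complete syntax of each statement.

    SHOWDDL cat.sch.orders;                        -- step 1: save this output
    DROP INDEX cat.sch.orders_ix1;                 -- step 2
    -- step 3: run the required import operations on cat.sch.orders
    CREATE INDEX orders_ix1 ON cat.sch.orders (custnum)
       NO POPULATE;                                -- steps 4 and 5: re-create without populating
    POPULATE INDEX orders_ix1 ON cat.sch.orders;   -- step 6: load the index data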
Managing Triggers to Improve import Performance
Triggers degrade import performance by causing triggered actions to be performed during the
operation. When you perform an import operation on a table with triggers, use the import
-D option to disable all triggers. After the data has been inserted, import automatically enables
the disabled triggers.
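As a minimal command-line sketch (run from an OSS shell), the following disables the triggers on a hypothetical table for the duration of the load. The table name, input-file name, and the -I option used to name the input file are assumptions for illustration; only the -D behavior is described here, so verify the full import syntax in the SQL/MX Reference Manual.

    import cat.sch.orders -I /usr/mydata/orders.dat -D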
Managing Data Types to Improve import Performance
import requires additional processing for all datetime (DATE, TIME, and TIMESTAMP), numeric,
and decimal data types. import reads the format file, extracts the values from the input file, and
converts them to a form that NonStop SQL/MX recognizes. For numeric and decimal data types,
import converts each value to the scale and precision required by NonStop SQL/MX.
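As an illustration only, consider the hypothetical target table below: each NUMERIC, DECIMAL, DATE, and TIMESTAMP column requires the per-value conversion described above, whereas the CHAR column needs no such conversion.

    CREATE TABLE cat.sch.sales
    ( sale_id    NUMERIC (10) NOT NULL NOT DROPPABLE  -- converted to the required precision and scale
    , sale_date  DATE                                 -- datetime value converted from the input format
    , sale_ts    TIMESTAMP                            -- datetime value converted from the input format
    , amount     DECIMAL (11,2)                       -- converted to the required precision and scale
    , descr      CHAR (40)                            -- no datetime or numeric conversion
    , PRIMARY KEY (sale_id) NOT DROPPABLE
    );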
Using import to Load Partitions
• For range-partitioned tables, you can load the data from a single input file or several input
files into a table’s range partitions. The data in the input file or files must first be ordered by
the table’s range partitioning key.
    To perform a parallel import from a single input file, or a serial import from multiple input
    files, use the -F option to specify the number of rows to skip at the beginning of the input
    file and the -C option to specify the number of rows to import after the skip. Together, the
    -F and -C options direct import to load a particular range partition from the specified rows;
    in effect, they mark the start and end positions of that partition in terms of the key-column
    data in the input file. You cannot, however, use these options to specify an actual key value
    at the command line. (See the sketch after this list.)
• For hash-partitioned tables, import loads all hash partitions in the destination table in a
single operation.
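A minimal sketch of a parallel import from a single ordered input file follows. It assumes a hypothetical table whose three range partitions correspond to rows 1-100,000, 100,001-250,000, and 250,001-400,000 of the file; the table name, file name, and the -I option are assumptions for illustration, so verify the full import syntax in the SQL/MX Reference Manual. Each command is started in the background from an OSS shell so that the three loads run in parallel.

    import cat.sch.sales -I /usr/mydata/sales.dat -C 100000 &             # rows 1-100000 (first partition)
    import cat.sch.sales -I /usr/mydata/sales.dat -F 100000 -C 150000 &   # rows 100001-250000 (second partition)
    import cat.sch.sales -I /usr/mydata/sales.dat -F 250000 -C 150000 &   # rows 250001-400000 (third partition)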