data being imported and the number of partitions being loaded to determine where the data
should go.
• Whether you are inserting the data into a single partition or multiple partitions. The method
for inserting data into a single partition is different from the method for inserting data into
multiple partitions.
Managing Partitions to Improve import Performance
Importing Data Into a Nonpartitioned Table
When you import data into a single range partition, sort the data in the input file according to the
clustering key order to avoid extra sorting. Make sure that the attribute UPD_ORDERED is set to
OFF.
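For example, before loading a single range partition you might presort the delimited input file and turn the attribute off for the load. In the sketch below, the file names, the pipe delimiter, the key column positions, and the use of a CONTROL QUERY DEFAULT statement to set UPD_ORDERED are illustrative assumptions; the attribute can also be set system-wide, so verify the appropriate method for your installation.

From the OSS shell, sort the input file on the clustering key columns (columns 1 and 2 in this example):

   sort -t '|' -k1,1 -k2,2 /usr/data/orders.dat > /usr/data/orders.sorted.dat

Before running the load, set the attribute off (session-level illustration only):

   CONTROL QUERY DEFAULT UPD_ORDERED 'OFF';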
Importing Data Into Multiple Range Partitions
Before you import data into a table with multiple range partitions, first identify all affected partitions
and where they are located in the table.
When you use import to load multiple partitions, running multiple instances of import is usually
faster than running a single instance.
If you are running NonStop SQL/MX on a single node, you have these options for using import
with range-partitioned tables:
• Put the input data into several files, each of which is destined for a different partition. Run an
import instance for each partition (see the sketch after this list).
• Put the input data into several files, each of which is destined for all the partitions that reside
on a particular disk. Run an import instance for each disk.
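A minimal single-node sketch of the first option follows. The table name, the input and format file names, and the -I and -F flag names are placeholders rather than exact import syntax; consult the import reference pages for the options supported on your release. Each input file is assumed to contain only rows whose clustering-key values fall within one partition's key range, so the instances do not contend for the same partition.

   # Placeholder flags and file names: one import instance per partition's
   # input file, started in the background so the partitions load in parallel.
   import cat.sch.orders -I /usr/data/orders.part1.dat -F /usr/data/orders.fmt &
   import cat.sch.orders -I /usr/data/orders.part2.dat -F /usr/data/orders.fmt &
   import cat.sch.orders -I /usr/data/orders.part3.dat -F /usr/data/orders.fmt &
   wait

Starting the instances in the background from a single OSS shell is only one way to run them in parallel; the same effect can be achieved by starting each instance from its own session.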
If you are running import in a distributed SQL/MX environment:
• For optimum speed, always run import on the node where the table partitions to be loaded
are located. Avoid running import instances on a remote node to load partitions that are local
to another node.
• The greater the number of nodes running import, the slower its overall performance.
As is the case with loading single partitions, import runs faster when input data for multiple range
partitions is sorted according to clustering key order.
Importing Data Into Multiple Hash Partitions
Because imported data cannot be presorted for hash-partitioned tables, using a single import
instance for a hash-partitioned table is generally more efficient than running multiple instances in
parallel. If you do run parallel instances, keep the transaction size to a minimum to avoid lock
escalation to the table level.
After running import operations on a hash-partitioned table, perform a FUP RELOAD on the
affected partitions of the table to make the table file’s b-tree structure more efficient and improve
DML performance.
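For example, after a parallel load you might reload the two partitions that received data. The Guardian file names below are placeholders; obtain the actual file names of the affected partitions from the SQL/MX metadata (or a utility such as mxtool info), and choose a RATE value appropriate for the concurrent workload on those disks.

   FUP RELOAD $DATA01.ZSDPLH01.ORDR0100, RATE 50
   FUP RELOAD $DATA02.ZSDPLH01.ORDR0200, RATE 50

RATE 50 limits the reload to roughly half of the disk process capacity, so concurrent DML against the partitions is less affected while the B-tree is reorganized.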
Managing Constraints to Improve import Performance
Consider these guidelines for improving import performance with constraints:
• Each check constraint, including NOT NULL constraints, causes an extra check of the data
before the import operation completes. Because no additional I/O is needed to process the
data, check constraint checking should be relatively efficient. However, it does increase the
time required to process each table row.
• Droppable primary key constraints, unique constraints, and RI constraints usually require an
index to handle validation. RI constraints do not create an index if the foreign key is on the