SQL/MX 3.2 Management Manual (H06.25+, J06.14+)
Table Of Contents
- HP NonStop SQL/MX Release 3.2 Management Guide
- Contents
- About This Manual
- 1 Introduction to SQL/MX Database Management
- 2 Understanding and Planning SQL/MX Tables
- Planning Object Naming
- Understanding SQL/MX Table and File Structures
- Using Keys in SQL/MX Tables and Indexes
- The Key-Sequenced File Structure
- Planning Table and Index Partitioning
- Determining a Database Layout
- Using SQL/MX Tables
- Using Views
- Determining When to Use Indexes
- 3 Planning Database Security and Recovery
- Planning Database Security
- Planning Database Recovery
- Using TMF for Transaction Recovery, Database Consistency, and Database Recovery
- Using RDF to Set Up a Backup System
- Using Backup and Restore 2 for SQL/MX Database Recovery
- Frequency of Backups
- Backup Objects Hierarchy
- SQL/MX Objects Backed Up Explicitly
- SQL/MX Objects Backed Up Implicitly
- SQL/MX Objects Not Restored by BRCOM
- Other Restrictions on Backup and Restore 2 Operations
- Backing Up and Restoring OSS Program Files and SPJs
- Using the RESTORE SHOWDDL ON Option to Re-Create Objects
- 4 Reviewing and Setting System Defaults
- 5 Creating an SQL/MX Database
- Preparing to Create Your SQL/MX Database
- SQL/MX Subvolume and File Naming Guidelines
- Creating SQL/MX Metadata
- Creating Catalogs
- Creating Schemas
- Access Privileges for SQL/MX Database Objects
- Creating SQL/MX Tables
- Security Rules for Creating and Altering SQL/MX Tables
- Creating and Using Keys
- Creating and Using a Primary Key
- Example for Creating an SQL/MX Table With a User-Defined Primary Key
- Creating and Using a Clustering Key
- Example for Creating an SQL/MX Table Using the STORE BY Clause
- Using the SYSKEY
- Creating and Using a Partitioning Key
- Partially Decoupling the Clustering Key and the Partitioning Key
- Creating and Managing Partitions for SQL/MX Tables
- Performance Benefits of Partitioning
- Restrictions on Creating and Placing Partitions
- Naming Partitions
- Recommended Maximum Number of Partitions
- Prohibition Against Using Floating-Point Columns in Partitioning Keys
- Using HASHPARTFUNC to Plan and Build Hash-Partitioned Tables
- Using DDL_DEFAULT_LOCATIONS to Distribute Primary Range Partitions
- Special Considerations for Decision Support Systems (DSS) Applications
- Examples for Creating Partitioned SQL/MX Tables
- Additional Guidelines for Creating Tables
- Managing Table Data
- Creating Indexes for SQL/MX Tables
- Creating Constraints on SQL/MX Tables
- Creating Views of SQL/MX Tables
- Creating Triggers
- Creating Stored Procedures in Java
- Database Design Guidelines for Improving OLTP Performance
- Accessing an SQL/MP Database From NonStop SQL/MX
- 6 Querying SQL/MX Metadata
- Commands for Displaying Information
- SQL/MX Metadata Tables
- Understanding ANSI External and Internal Names
- Displaying System Schema Information
- Displaying Catalog Information
- Displaying Schema Information
- Displaying Users With Catalog and Schema Creation Rights
- Displaying Table Information
- Displaying Information About Views
- Displaying Information About SQL/MP Aliases
- Displaying Information About SPJs
- Displaying Index Information
- Displaying Partition Information
- Displaying Constraint Information
- Displaying Column Information
- Displaying all Columns in a Table
- Displaying all Columns in a View
- Displaying all Columns in an Index
- Displaying all Columns in a Primary Key or Unique Constraint
- Displaying all Columns in a NOT NULL Constraint
- Displaying all Columns in a Referential Integrity Constraint
- Displaying the Attributes of a Column
- Displaying all Tables Containing a Selected Column
- Displaying Information About Privileges
- Displaying Object Integrity and Consistency
- Displaying Version Numbers
- 7 Adding, Altering, and Dropping SQL/MX Database Objects
- Planning Operations on SQL/MX Objects
- Adding Objects to an SQL/MX Database
- Authorization Requirements for Adding Database Objects
- Adding Catalogs
- Adding Columns to an SQL/MX Table
- Adding Constraints
- Adding Indexes to SQL/MX Tables
- Adding Partitions to SQL/MX Tables and Indexes
- Adding Schemas
- Adding SQL/MP Aliases
- Adding Stored Procedures in Java (SPJs)
- Adding SQL/MX Tables
- Adding Triggers
- Adding Views
- Altering Objects in an SQL/MX Database
- Authorization Requirements for Altering Database Objects
- Altering SQL/MX Indexes
- Altering Partitions for SQL/MX Tables and Indexes
- Altering SQL/MP Aliases
- Altering SPJs
- Altering System Defaults
- Altering SQL/MX Tables
- Altering Triggers
- Altering Views
- Dropping Objects From an SQL/MX Database
- Authorization Requirements for Dropping Database Objects
- Dropping Catalogs
- Dropping Columns from an SQL/MX Table
- Dropping Constraints
- Dropping SQL/MX Indexes
- Dropping Partitions for SQL/MX Tables and Indexes
- Dropping Schemas
- Dropping SQL/MP Aliases
- Dropping SPJs
- Dropping SQL/MX Tables
- Dropping Triggers
- Dropping Views
- 8 Reorganizing SQL/MX Tables and Maintaining Data
- Purging Dropped Tables From the DDL Directory
- Using FUP RELOAD to Reorganize Tables
- DDL Lock Considerations for MODIFY, import, POPULATE INDEX, DUP, FASTCOPY, and PURGEDATA
- Using MODIFY to Manage Table and Index Partitions
- MODIFY and EMS Messages
- Online and Offline Partition Management
- MODIFY and TMF
- MODIFY and Table Reloading
- Recovering a Failed MODIFY Request and Resetting Flags
- Using MODIFY to Manage Range-Partitioned Tables and Indexes
- Examples of Using MODIFY With Range-Partitioned Tables
- Example of Adding a New Range Partition
- Example of Dropping an Existing Empty Range Partition
- Example of Moving an Existing Partition to a New Location
- Example of Splitting an Existing Range Partition and Merging the First or Last Part to an Existing Partition
- Example of Merging Two Adjacent Range Partitions Into One Range Partition
- Example of Reusing an Existing Range Partition by Setting the FIRST KEY Values to New Values
- Examples of Using MODIFY With Range-Partitioned Indexes
- Using MODIFY to Manage Hash-Partitioned Tables and Indexes
- Managing System-Clustered Tables and Indexes
- Using import to Load SQL/MX Tables
- Guidelines for Using import
- Summary of import Options
- Other import Features
- Recommended Practices for Improving import Performance
- Running import on Empty Tables
- Running import on Populated Tables
- Managing Partitions to Improve import Performance
- Managing Constraints to Improve import Performance
- Managing Indexes to Improve import Performance
- Managing Triggers to Improve import Performance
- Managing Data Types to Improve import Performance
- Using import to Load Partitions
- Support for restarting import
- Examples of Using import to Load an SQL/MX Table
- Using import to Append Data to Tables or Partitions
- Using DUP to Copy Tables Into Tables
- Using FASTCOPY to Copy Tables Into Tables
- Using PURGEDATA to Delete Data From Tables
- 9 Managing Database Applications
- Writing SQL/MX Applications to Recover From Temporary Network or Hardware Service Interruptions
- Moving Programs From Development to Production
- Distributing Programs Across Nodes
- Ensuring Proper Name Resolution
- Assigning Permissions for Running Database Applications
- Maintaining Query Execution Plan Validity
- Managing Modules
- Producing Locally Placed Modules or Globally Placed Modules
- Securing User Modules
- Checking Module Dependencies with DISPLAY USE OF
- Displaying all Modules and Dependent Objects
- Display all Modules and Their Corresponding Source SQL File Names
- Displaying Dependent Objects for One or More Specified Modules
- Display Source SQL File Name for a Given Module
- Display All Modules Dependent on a Specific Object
- Display all Modules and Their Corresponding Source SQL File Names Associated With an Object
- Display all the Invalid Modules and Their Corresponding Source SQL File Names Associated With an Object
- Display Source SQL File for a Given Module in a Given OSS Directory
- Display all Modules and Their Corresponding Source SQL Files in a Given OSS Directory
- Display Invalid Modules from a Given Directory (Along With Their Corresponding Source SQL Files) for a Given Object
- Display all Modules From a Given directory (Along With Their Corresponding Source SQL Files) for a Given Object
- Displaying the Modules in a Local Directory
- Grouping Applications and Modules to Run Multiple DISPLAY USE OF Operations
- Removing Modules
- Converting Globally Placed Modules to Locally Placed Modules
- Managing Module Files and Their Applications During Fallback From SQL/MX Release 3.2
- Backing Up and Restoring Programs
- 10 Performing Recovery Operations
- Recovering Databases After Disk or Node Failures
- Recovering Disk Volumes
- Recovering Files
- Recovering Metadata
- Recovering Database Objects
- Restoring Objects With BRCOM RESTORE
- Repairing Damaged SQL/MX Metadata and Objects
- Using GOAWAY to Delete Damaged Objects
- Recovering From SCF Commands
- 11 Managing an SQL/MX Distributed Database
- Managing a Locally Distributed SQL/MX Database
- Managing a Network-Distributed SQL/MX Database
- Creating a Distributed SQL/MX Database
- Altering Distributed Objects
- Dropping Distributed Objects
- Enhancing the Performance of a Distributed Database
- Changing Network Environments
- Managing Mixed Versions of NonStop SQL/MX
- 12 Measuring Performance
- 13 Enhancing SQL/MX Database Performance
- Using Queries in an SQL/MX Database
- Understanding the Implications of Concurrency
- Keeping Statistics Current
- Managing SQL/MX Buffer Space
- How DP2 Manages and Reuses Query Plan Fragments
- Causes and Symptoms of Query Plan Fragment Reuse Failures
- Reduction of Plan Fragment Size for Unique Queries
- Using SCF STATS DISK to Monitor SQL/MX Statistics and Reuse Failures
- Using SCF ALTER DISK to Resize the SQLMXBUFFER Attribute
- Strategies for Reducing Reuse Failures
- Managing DP2 Data Cache Memory Size
- Maximizing Disk Process Prefetch Capabilities
- Optimizing SQL/MX Memory Management
- Optimizing Index Use
- Performing FUP RELOADs to Generate More Accurate Query Plans
- Checking Data Integrity
- Creating Logical Views of Data
- Adding and Dropping Partitions
- Avoiding Automatic Recompilations
- Matching Block Split Operation to Table Usage
- A Using Guardian Names with TMF, RDF, and Measure
- Using the MXGNAMES Utility
- MXGNAMES Input Files
- MXGNAMES Output Files
- MXGNAMES Examples
- TMF Example 1: Input Is a List of SQL/MX Table Names
- TMF Example 2: Input Is SHOWDDL Text
- TMF Example 3: Input Is an SQL/MX Table Name
- TMF Example 4: Input Is a List of SQL Names with File Length Specified
- Restore Example 1: Input Is a List of SQL/MX Table Names
- Restore Example 2: Input Is SHOWDDL Text
- Restore Example 3: Input Is an SQL/MX Table Name
- Index
is striped (partitioned) across 16 partitions. To maximize parallel execution, each partition is
associated with a different CPU.
The next example uses the import command each week to add the most recent week of data to the
appropriate partitions. It starts an import process for each of the 16 target partitions, each
in the CPU associated with that partition. You can enter each subsequent import command from
another OSS shell that is running on another CPU.
The example divides the input data into 16 files. Each file’s data is appended to the corresponding
target partition. The input files reside on volumes $VOL1 through $VOL4. The target partitions
reside on volumes $VOL33 through $VOL48. The record layout corresponds exactly to the target
table layout.
The -L option ensures that the import operation proceeds even if an input record contains data
that cannot be parsed for loading into the target table.
/user/bin> import cat1.schema1.t1 -I /G/VOL33/TARGET/HIST01 -L 1000
/user/bin> import cat1.schema1.t1 -I /G/VOL33/TARGET/HIST02 -L 1000 T 1000
/user/bin> import cat1.schema1.t1 -I /G/VOL33/TARGET/HIST03 -L 1000 T 1000
.
.
.
/user/bin> import cat1.schema1.t1 -I /G/VOL33/TARGET/HIST16 -L 1000 T 1000
This example issues 16 similar import commands, one for each target partition. The append
operation executes in parallel against the 16 partitions.
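Because the 16 commands differ only in the target partition name, you can generate them with a small OSS shell loop instead of typing each one. The following sketch is illustrative only: it reuses the placeholder table, partition, and option values from the example above and writes the commands to a file so that you can review them and then start each one from the OSS shell (and CPU) of your choice, as recommended earlier.

# Illustrative sketch (OSS shell). Generates the 16 import commands shown
# above into a file for review; names and option values are placeholders.
i=1
while [ $i -le 16 ]
do
  part=$(printf "HIST%02d" $i)
  echo "import cat1.schema1.t1 -I /G/VOL33/TARGET/$part -L 1000" >> import_cmds
  i=$((i+1))
done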
Using DUP to Copy Tables Into Tables
DUP is a syntax-based utility that you can execute from MXCI to copy a source table, and optionally
its indexes and constraints, to a target table.
For more information, see the SQL/MX Reference Manual.
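As an illustration only, a simple DUP command entered at the MXCI prompt might look like the following sketch. The catalog, schema, and table names are placeholders, and the exact syntax and available options are those documented in the SQL/MX Reference Manual.

>>-- Illustrative only; the object names are placeholders.
>>DUP cat1.schema1.t1, cat1.schema1.t1copy;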
Guidelines for Using DUP
• You must have SELECT privileges for the source table and must own the schema where the
target table will reside. An error is returned if an access violation occurs.
• Referential integrity constraints and triggers are ignored.
• The catalog that defines the source table or tables can exist on a remote node and be
referenced by the current DUP operation if the remote node is visible to the local node. The
target table can also exist in a catalog and schema that reside on a visible remote node.
Finally, the Guardian files associated with the source and target tables can exist on remote
nodes if those nodes are visible from the local node. For more information, see “Managing
an SQL/MX Distributed Database” (page 263).
• DUP does not check disk space before running the request, so you must confirm that enough
disk space is available before you run it. Check the EOF field in FILE INFO or FUP INFO output
to determine the size of each partition and whether there is sufficient disk space for the
partitions.
• DUP displays errors if the source table, or the target table and its dependent indexes, cannot
be accessed, or if the load fails because of a resource or file system problem.
If the DUP operation fails because the DUP process stops unexpectedly, use the RECOVER
command to complete or roll back the operation. If the DUP operation fails after all the data
has been successfully copied to the target objects, specify RECOVER with the RESUME option
to complete the DUP operation. If the DUP operation fails before the data is successfully copied,
specify RECOVER with the CANCEL option to roll back the DUP operation. You can access
this status information by reading the DDL_LOCK metadata table. If you specify the wrong
RECOVER operation, RECOVER reports this so that you can retry with the correct operation,
possibly RECOVER ... CANCEL, as shown in the sketch after this list. For more information
about the RECOVER command, see the SQL/MX Reference Manual.
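As an illustration only, and assuming that a DUP of cat1.schema1.t1 failed before any data was copied to the target, a recovery attempt from MXCI might look like the following sketch. The object name is a placeholder, and the full RECOVER syntax is described in the SQL/MX Reference Manual.

>>-- Illustrative only; the object name is a placeholder.
>>RECOVER cat1.schema1.t1, CANCEL;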