HP NonStop SQL/MX Release 3.2.1 Management Guide
HP Part Number: 691120-002
Published: February 2013
Edition: J06.14 and subsequent J-series RVUs; H06.
© Copyright 2013 Hewlett-Packard Development Company, L.P. Legal Notice Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice.
About This Manual This manual describes how to manage an SQL/MX database and discusses related issues for an SQL/MX system. In this manual, references to SQL/MX Release 2.x indicate SQL/MX Release 2.3.1 and subsequent releases, and references to SQL/MX Release 3.x indicate SQL/MX Release 3.0 and subsequent releases, until indicated in a replacement publication. Supported Release Version Updates (RVUs) This publication supports J06.14 and all subsequent J-series RVUs and H06.
Specialized Guides
• SQL/MX Installation and Upgrade Guide: Describes how to plan for, install, create, and upgrade a SQL/MX database.
• SQL/MX Management Manual
• SQL/MX Query Guide: Describes how to understand query execution plans and write optimal queries for an SQL/MX database.
• SQL/MX Data Mining Guide: Describes the SQL/MX data structures and operations to carry out the knowledge-discovery process.
Related SQL/MP Manuals
• SQL/MP Reference Manual: Describes the SQL/MP language elements, expressions, predicates, functions, and statements.
• SQL/MP Installation and Management Guide: Describes how to plan, install, create, and manage an SQL/MP database. Describes installation and management commands and SQL/MP catalogs and files.
• “Adding, Altering, and Dropping SQL/MX Database Objects” (page 150): Describes how to add, alter, and drop objects from an SQL/MX database.
• “Reorganizing SQL/MX Tables and Maintaining Data” (page 178): Describes how to reorganize SQL/MX tables and maintain the data.
• “Managing Database Applications” (page 212): Describes how to manage SQL/MX applications.
• “Performing Recovery Operations” (page 235): Describes how to perform recovery operations on an SQL/MX database.
aligned braces on each side of the list, or horizontally, enclosed in a pair of braces and separated by vertical lines. For example:
LISTOPENS PROCESS { $appl-mgr-name }
                  { $process-name }
ALLOWSU { ON | OFF }
| Vertical Line. A vertical line separates alternatives in a horizontal list that is enclosed in brackets or braces. For example:
INSPECT { OFF | ON | SAVEABEND }
… Ellipsis.
HP Encourages Your Comments HP encourages your comments concerning this document. We are committed to providing documentation that meets your needs. Send any errors found, suggestions for improvement, or compliments to docsfeedback@hp.com. Include the document title, part number, and any comment, error found, or suggestion for improvement concerning this document.
1 Introduction to SQL/MX Database Management This section provides an introduction to managing an SQL/MX database. For more information about managing an SQL/MP database, see the SQL/MP Installation and Management Guide. NOTE: The HP NonStop SQL/MX Release 3.1 Installation and Management Guide and the HP NonStop SQL/MX Release 3.1 Migration Manual are consolidated and split into two manuals: • HP NonStop SQL/MX Release 3.2 Installation and Upgrade Guide—Starting with SQL/MX Release 3.
For more information, see “Planning Database Security and Recovery” (page 42).
multiple sites into one central site. For more information, see the RDF/IMP, IMPX, and ZLT System Management Manual. • The Guardian Backup and Restore 2 utility, which you can use to perform tape backups and restore SQL/MX objects. You can also save database creation and loading scripts in OBEY command files or write an automated program to handle TMF and backup and restore operations.
INSERT, UPDATE, or DELETE operation to change a row in a table. If the change would violate the constraint, NonStop SQL/MX prohibits it. Concurrency control for database access provides different degrees of database consistency to meet different needs. For any DML statement, you can specify access options by using the FOR option ACCESS clause and, for a SELECT statement, by using this same clause, you can also specify access options for individual tables referenced in the FROM clause.
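For example, the statement-level form of the access clause looks like the following sketch, which uses the persnl.employee sample table referenced elsewhere in this manual; substitute your own table and column names:
SELECT first_name, last_name
FROM persnl.employee
FOR READ UNCOMMITTED ACCESS;
The READ UNCOMMITTED option does not wait for locks and can return uncommitted changes, which is useful for browsing; READ COMMITTED returns only committed data.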
Parallel index maintenance reduces the effect of multiple indexes on performance. Each index on a table is automatically updated whenever a row is inserted into the table and whenever a value is updated in any key column of the index. Multiple indexes on a table can be updated in parallel by different disk processes or by the same disk process servicing multiple requests. Indexes are loaded in parallel.
Guardian Physical Names The underlying Guardian physical names of SQL/MX objects and SQL/MP objects are qualified with the system node, volume, and subvolume names. For example, SQL/MP tables and views are created with Guardian physical names of the form: [\node.] [[$volume.]subvol.
secured for PUBLIC SELECT access so that you can query them. Table 1 (page 24) describes these metadata tables and the schemas in which they are stored. Table 1 SQL/MX Metadata Schemas Schema Name Schema Contents SYSTEM_SCHEMA One set of five system schema tables resides in the system catalog under this schema. Use these system metadata tables, which record the existence of catalogs and schemas, to find the correct user catalog on which to perform data lookup.
Table 2 SQL/MX Object Types (continued) SQL/MX Object Type Description particular record. Every table must have one or more columns, but the number of rows can be zero. There is no inherent order of rows within a table. An SQL/MX table is described in an SQL/MX schema and its data is stored in one or more physical files in the Guardian environment. An SQL/MX table name must be a fully qualified ANSI name of the form: catalog-name.schema-name.table-name.
The SQL/MX Environment The SQL/MX query compiler and executor operate on SQL/MX database objects and SQL/MP database objects. From the SQL/MX conversational interface, MXCI, you can perform DDL and DML operations and most SQL/MX commands on SQL/MX objects. From MXCI, you can also perform DML operations—but not DDL operations—on SQL/MP tables. MXCI commands are SQL/MX extensions that typically affect the attributes of an MXCI session.
Database Management Tasks Managing an SQL/MX database involves the tasks required to create the database, ensure its availability to users, and perform required changes. Because the database is an integral part of the application, measuring application performance and tuning the database configuration to enhance performance are also database management tasks.
NonStop SQL/MX imposes these restrictions on the types of operations you can run on SQL/MP tables: • DDL and utility operations on SQL/MP tables must be run from NonStop SQL/MP, either from SQLCI or an SQL/MP application. You cannot invoke DDL or utility operations on SQL/MP tables from NonStop SQL/MX. • DML operations on SQL/MP tables can be run from NonStop SQL/MX. Through its Guardian environment, the NonStop OS provides additional utilities and subsystems for managing an SQL/MX database.
2 Understanding and Planning SQL/MX Tables Understanding how SQL/MX tables are organized and how their key-sequenced file structure works is essential for using the database effectively.
Understand the Schema Ownership Rule Starting with the SQL/MX Release 3.1, new security features are introduced. For more information on these features, see the SQL/MX Reference Manual. Avoid Using Duplicate Catalogs on Multiple Distributed Nodes If you create catalogs with the same name on two or more nodes, each node can access only the objects in the local catalog. You cannot join the tables or indexes in the same-named catalogs on different nodes.
Using Keys in SQL/MX Tables and Indexes At the physical level, a key is a field or group of fields that the system can use to order records and to identify records for processing. SQL/MX tables and indexes use the keys described in this subsection. Primary Key A primary key is the column or set of columns that define a unique key for an SQL/MX table. The primary key is a constraint and must conform to the ANSI rules for constraints. All columns defined for the primary key must be defined as NOT NULL.
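For example, the following sketch (hypothetical table and column names, in the style of the other examples in this manual) defines a two-column primary key whose columns are declared NOT NULL NOT DROPPABLE:
CREATE TABLE mycat.myschema.odetail
( ordernum NUMERIC (6) UNSIGNED NO DEFAULT NOT NULL NOT DROPPABLE
, partnum NUMERIC (4) UNSIGNED NO DEFAULT NOT NULL NOT DROPPABLE
, qty_ordered NUMERIC (5) UNSIGNED DEFAULT NULL
, PRIMARY KEY (ordernum, partnum) NOT DROPPABLE
);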
The Key-Sequenced File Structure All SQL/MX tables are key-sequenced files, which store rows (records) that contain a clustering key. New rows are stored in sequence by clustering key value. A user performing update operations can update or delete rows and lengthen or shorten values in a varying-length column (VARCHAR, NCHAR VARYING) when the column is not part of the clustering key.
Figure 1 Key-Sequenced B-Tree Structure Each data block contains a header plus one or more data records, depending on the record size and data-block size. For each data block, an entry in an index block contains the value of the key field for the first record in the data block and the address of that data block. The position of a new record inserted into a key-sequenced file is determined by the value of its clustering key field.
Planning Table and Index Partitioning With medium to very large databases, it is often necessary to use partitioned tables and indexes to break down the data into smaller and more manageable units. Each partition can be managed individually and can function independently of the other partitions. Partitioning can provide significant benefits to a wide variety of applications by improving manageability, performance, and availability.
tunable method for data placement because you can influence availability and performance by spreading these evenly sized partitions across I/O devices (striping). Hash partitioning achieves many of the scan reduction benefits of range partitioning without requiring you to know the distribution of the partitioning key in advance. In addition, use hash partitioning in a “decoupled” fashion to cluster data on a key other than the partitioning key.
INSERT, and ALL privileges provide the user with access to all columns in the table. To protect your tables from access by unauthorized users, grant privileges on views of the tables rather than on the tables themselves. • Alterations to tables generally have a greater impact on application code that uses the tables directly. For example, an application should not use SELECT * to retrieve values from a table, because adding a column to the table would mean that the application must change.
• Ad hoc queries are limited to data returned by views. Restricting users to using only views can prevent them from accessing sensitive data. • Views provide the same performance as the tables. • Views can help provide data integrity. Disadvantages include: • You cannot update views that are defined on more than one table. • Managing the database is more difficult because of the greater number of objects.
A second use of an index is to eliminate a run-time sort of data columns by providing an access path to the data in the order of the desired selection or by reverse order:
SELECT A,B FROM ATABLE ORDER BY A, B;
A third use of an index is to avoid a full table scan:
SELECT ITEM_NAME, RETAIL_PRICE FROM INVNTRY WHERE RETAIL_PRICE = 100;
Without an index on RETAIL_PRICE and assuming RETAIL_PRICE is not the leading column on the table’s clustering key, NonStop SQL/MX must scan the table and evaluate the predicate for every row.
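A minimal sketch of such an index follows; the index name is hypothetical:
CREATE INDEX XPRICE ON INVNTRY (RETAIL_PRICE);
With this index in place, the optimizer can locate the qualifying rows through the index instead of scanning the entire table, subject to the selectivity considerations described next.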
The use of an alternate index does not ensure that the optimizer will choose the index as the access path. Index use depends highly on selectivity, described in the SQL/MX Query Guide. In general, these guidelines apply: • If the query can be satisfied with an index-only access, the optimizer normally uses the index. • Low selectivity of a predicate means that many rows are selected. For base table access through an index, the optimizer performs a random access read against the table.
key of the underlying table are not included in the clustering key of the index, but are physically included in the index file. The clustering key for an index can contain columns with null values. However, two null values in the same column are treated as equal values for purposes of unique constraint violation checking. In calculating the length of an index key, consider the extra bytes added to the length of any column that is not specified as NOT NULL NOT DROPPABLE.
If the INVNTRY table is large, the cost of sorting the table might be very high. An index on the columns QTY_ORDERED and RETAIL_PRICE, defined as follows, might mean that no sort is required to satisfy the ORDER BY clause: CREATE INDEX RPRICE ON INVNTRY (QTY_ORDERED, RETAIL_PRICE) ; To avoid a sort, define an index on the same columns that appear in the ORDER BY clause. The sequence of these columns in the ORDER BY clause should then match the sequence of columns in the index.
3 Planning Database Security and Recovery Database security and recovery are essential topics to consider before creating an SQL/MX database. Planning for security is the primary protection against unauthorized user intervention. Security, however, cannot eliminate errors by authorized users. Planning for recovery is essential for protecting your database.
Database Access and Security Access to SQL/MX database objects is governed by access privileges. Access privileges for SQL/MX tables, views, and indexes are provided through the GRANT and REVOKE statements. For more information about assigning and removing access privileges for SQL/MX objects, see “Access Privileges for SQL/MX Database Objects” (page 79) and the SQL/MX Reference Manual. For more information about Guardian security, see the Security Management Guide.
File and Directory Commands Only the file owner or the super ID can alter a file’s permission bits and thereby control access to that file. For detailed descriptions of the commands to alter these bits, see the Open System Services User’s Guide. Like the FILEINFO and FUP INFO commands in the Guardian environment, the ls command in the OSS environment allows users to display information, including the permission codes, for their files and directories.
• Safeguard volume protection records can control who is authorized to create disk files on specific disk volumes. • Safeguard process-protection records can control who is authorized to use specific process names. Safeguard access control lists cannot be used to protect OSS files. Access to OSS files is controlled by OSS file-permission bits.
Fault-tolerant hardware and software strategies provide maximum protection against most equipment failures, power failures, and some catastrophic failures. This protection, however, does not eliminate your need to plan carefully to protect database and application software. After formulating a comprehensive recovery strategy, practice carrying out the plan on a regular, consistent basis.
Recovering Dropped Database Objects The method for recovering an accidentally dropped SQL/MX object depends on whether that object is a catalog, table, index, or view. Recovery of a single view or index is usually a straightforward operation. Recovery of a table or catalog, however, can be complex and difficult if there are multiple dependent objects.
is that it requires a lot of work: you need to continually keep track of all interdependent objects, and process many tapes during recovery. ◦ If you perform collective dumps of the SQL/MX catalog and all its objects, this command might enable faster recovery, but you must continually update the SQL OBEY (script) files that you use to rebuild SQL/MX objects. NOTE: You can use OBEY command files containing TMFCOM command scripts for TMF tasks that you perform repeatedly.
Audit Trails If a system, disk, or program fails during a transaction, TMF uses audit trails to restore the files to their original state before the start of the transaction. Each audit trail is a series of files in which TMF records information about transaction changes to a database. The information includes: • Before-images, which are copies of data records before a transaction changes them. • After-images, which are copies of the changed records.
Volume Recovery Volume recovery recovers the database in the event of a disk crash or system failure. When TMF is restarted after a failure, volume recovery is initiated automatically for each accessible data volume on the system (except for volumes explicitly disabled in TMF). To recover the files, the volume recovery process reapplies committed transactions to ensure they are reflected correctly in the database, and then backs out all transactions that were incomplete at the time of the interruption.
Size Considerations When determining the size requirements of TMF for NonStop SQL/MX, consider these guidelines: • DDL statements execute within system-defined TMF transactions, generating audit-trail entries. DDL statements that refer to large tables can generate a large volume of audit-trail entries. Transaction volume includes database use by both application programs and MXCI interactive capabilities. The interactive volume might be minimal or might generate many audit-trail entries.
is necessary, your online dumps already reflect the batch updates. TMF would need to apply only those database changes that occurred after online dumps were taken. • When you create a new table and want to provide file recovery protection for it, make an initial online dump of the table after creation. • Certain DDL statements and utility commands invalidate previous online dumps.
Table 4 SQL/MX Operations That Impact TMF Online Dumps (continued) Operation Option Effect Recovery Strategy ensure TMF file recovery protection. (This includes dumps of the table’s partitions, plus dumps of partitions associated with dependent indexes.) Fastcopy FIXUP utility Invalidates online dumps for the target tables and target indexes. Make online dumps of the target object after the fastcopy operation to ensure TMF file recovery protection.
For example, if an SQL/MX table to be dumped has the definition: CREATE TABLE CAT.SCH.T056T11 ( WREN INT NO DEFAULT -- NOT NULL NOT DROPPABLE , JUNCO INT NO DEFAULT -- NOT NULL NOT DROPPABLE , BLACKBIRD INT DEFAULT NULL , CONSTRAINT CAT.SCH.T056T11_104569472_0002 PRIMARY KEY (WREN ASC, JUNCO ASC) NOT DROPPABLE , CONSTRAINT CAT.SCH.T056T11_104569472_0001 CHECK (CAT.SCH.T056T11.WREN IS NOT NULL AND CAT.SCH.T056T11.JUNCO IS NOT NULL) NOT DROPPABLE ) LOCATION \NSK.$DATA1.ZSDADLON.
Note how individual resource fork names are accounted for. Alternately, the same list could have been specified: DUMP FILES & ( & $DATA1.ZSDADLON.NSPD2Q0?,& $DATA2.ZSDADLON.N0FK2Q0? & $DATA3.ZSDADLON.N75R2Q0?,& $DATA4.ZSDADLON.OFVY2Q0?,& $DATA5.ZSDADLON.ONL42Q0?,& $DATA1.ZSDADLON.R70U5Q0?,& $DATA2.ZSDADLON.SFQ15Q0?,& $DATA3.ZSDADLON.SNG85Q0? & ) In this case, the final question mark (.SNG85Q0?) includes both the data fork and the resource fork.
others, are described next. For more information about volume recovery and file recovery, see the TMF Operations and Recovery Guide. Using TMF to Recover Dropped SQL/MX Objects TMF is your primary means of recovering dropped SQL/MX objects.
NOTE: The format of the TMF audit records for SQL/MX objects changed in SQL/MX Release 3.0. As a result, R2.3.x systems cannot apply the new format (R3.x) audit information, whereas R3.x systems can apply R2.3.x audit information. R2.3.x primary systems can replicate audit trail content to R3.x disaster recovery systems. This allows you to upgrade the backup system to R3.x and test the upgrade prior to production. When the R3.x system becomes the active node, the constraints are: • R2.3.
When you create the schema on the RDF backup node, create it with the same schema subvolume name as that on the primary node. Using Backup and Restore 2 to Create Offline RDF Backup Databases Use Backup and Restore 2 to specify: • An RDF backup catalog name that is different from the primary catalog, using the TGT CATALOG syntax. • The physical location of SQL/MX objects as they are restored, using the LOCATION option to restore an SQL/MX table file to a different location.
Source files not explicitly mapped with the LOCATION clause will have their destination location generated by the system, using the default volumes and internally generated file names, as for any newly created table. Creating an RDF Backup Database With Identical File Names Suppose that an SQL/MX table has these source locations: \NODE1.$DATA01.ZSDABCDEF.FILEX100 \NODE1.$DATA02.ZSDABCDEF.FILEX100 \NODE1.$DATA03.ZSDABCDEF.FILEX100 Suppose that the desired RDF primary-to-backup volume mappings are: \NODE1.
Synchronizing an RDF Backup Database Whether created and populated online or offline, an RDF backup database must be synchronized with the primary database before use. This process is described in the RDF/IMP, IMPX, and ZLT System Management Manual. Using RDF to Recover Dropped Tables NonStop SQL/MX can automatically save the DDL for any individually dropped table, which can be retrieved later if the dropped table needs to be re-created.
In each case, the quotes delimiting the identifier are removed. For example, if the table CAT.”S&C%H”.”T*A*B?01” is dropped at 12:57:15 am on April 24, 2011, the saved DDL file would be: /usr/tandem/sqlmx/ddl/CAT.S_C_H.T_A_B_01-20110424-215715.ddl If the three-part ANSI name exceeds the maximum OSS file name length of 248, it is truncated to 248 characters. Files with similar names can be distinguished by: • The trailing timestamp portion of the name.
Table 5 Backup and Restore of SQL/MX and OSS Objects
• Catalog. BACKUP Command: Explicitly backs up the named catalog and all subordinate objects. See “Backing Up Catalogs” (page 64). RESTORE Command: Explicitly restores the named catalog and all subordinate objects.
• Schema. BACKUP Command: Explicitly backs up the named schema and all subordinate objects. Unless specifically excluded, a schema is automatically backed up when its associated catalog is backed up. See “Backing Up Schemas” (page 64).
Table 5 Backup and Restore of SQL/MX and OSS Objects (continued)
• OSS program executable file or user module. BACKUP Command: Does not automatically back up program executables or user modules. You back up program files or executables by specifying OSS backup objects. RESTORE Command: Use OSS backup objects to restore program executables or user modules in one or more sets of OSS directories and subdirectories.
BACKUP provides a mechanism to perform a partial backup automatically on only those files that have changed since the last backup date. You can filter certain objects, such as files with a specified user ID, files created or modified within a certain timestamp expression, or files with a certain file code. By using the WHERE expression, you can back up only SQL/MX files that have been modified from a certain date.
You can back up one or more table partitions, either implicitly, by backing up associated tables, or explicitly, by using the PARTITION backup object specification: BR> BACKUP $tape-drive, MX (TPART cat1.sch1.table1 PARTITION (part1)), PARTONLY ON; BR> BACKUP $tape-drive, MX (TPART cat1.sch1.table1 PARTITION (part1, part2)) BR+>, PARTONLY ON; You can use the BACKUP job option PARTONLY only when a database has partitioned tables and indexes.
SQL/MX Objects Not Restored by BRCOM You cannot use Backup and Restore 2 to restore these SQL/MX objects: • GRANT and REVOKE privileges • Referential integrity constraints • SPJs • Triggers • Views However, except for GRANT and REVOKE privileges, you can capture the DDL for the CREATE statements executed for these objects and use this information to manually re-create these objects after a Restore operation.
Backing Up and Restoring SPJ Metadata Information Information about an SPJ is stored in the SQL/MX metadata tables. When you back up a catalog or schema, Backup and Restore 2 automatically backs up the SQL/MX metadata for all SPJs in the catalog or schema. Backup and Restore 2 does not automatically restore SPJ metadata information. To restore it, run the OBEY command file for the SPJ generated by the RESTORE SHOWDDL ON command.
4 Reviewing and Setting System Defaults NonStop SQL/MX uses system-defined default settings for attributes associated with compiling and executing queries. The system-defined default settings, which are hard-coded settings, are optimal in most cases. Under some circumstances, however, you might want to override a specific system-defined default setting.
COMPILERCONTROLS is a table-valued stored procedure implemented in the compiler. For more information, see the SQL/MX Reference Manual. Consequences of Inserting Rows Into the SYSTEM_DEFAULTS Table Changes you make through the SYSTEM_DEFAULTS table are permanent until changed by a DELETE statement or another UPDATE statement. However, the values you insert into the SYSTEM_DEFAULTS table do not affect your current session immediately.
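The following sketch shows the pattern of inserting and later removing an override; the attribute chosen here is only an illustration, and the SYSTEM_DEFAULTS column names (ATTRIBUTE and ATTR_VALUE) and the node-specific catalog name should be confirmed against your installation:
INSERT INTO nonstop_sqlmx_nodename.system_defaults_schema.system_defaults
  (attribute, attr_value)
VALUES ('ISOLATION_LEVEL', 'READ COMMITTED');

DELETE FROM nonstop_sqlmx_nodename.system_defaults_schema.system_defaults
WHERE attribute = 'ISOLATION_LEVEL';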
Table 6 Settings for Selected System Defaults (page 1 of 5) (continued) System Default Default Value Alternative Setting NATIONAL_CHARSET The default value is UCS2. The value ISO88591 is appropriate if ISO88591 is the preferred national character set. A value for NATIONAL_CHARSET is set when you run the InstallSqlmx script during installation of NonStop SQL/MX. This value can be changed only by reinstalling NonStop SQL/MX. For more information, see SQL/MX Installation and Upgrade Guide.
Table 6 Settings for Selected System Defaults (page 1 of 5) (continued) System Default Default Value Alternative Setting behaves like RESTRICT; ON means that NO ACTION behaves like RESTRICT, without warning or errors. SAVE_DROPPED_TABLE_DDL The default value is ON. DDL is saved only if SAVE_DROPPED_TABLE_DDL is set to the default ON setting. HP recommends that you do not change this default value in production environments.
Settings Required to Achieve ANSI Compliance Table 7 (page 72) lists the system default settings required to achieve ANSI compliance.
5 Creating an SQL/MX Database When you install and initialize NonStop SQL/MX on a system node, NonStop SQL/MX creates the system catalog and several system schema. After the installation is complete, you can create your own user catalogs and schemas. Creating a catalog updates the system schema. The definition schema for a user catalog is created when you create the first schema in that catalog. For more information , see the “Creating SQL/MX Metadata” (page 76).
NOTE: NonStop SQL/MX additionally restricts system-generated names by excluding vowels to prevent names containing actual words that could be unexpected or misleading. SQL/MX Subvolume Naming Guidelines • The subvolume name must begin with the letters ZSD, followed by a letter, not a digit (for example, ZSDa, not ZSD2). • The subvolume name must be exactly eight characters long.
Resource forks contain similar information to Guardian file labels, but have the capacity to store a much greater variety and quantity of information. Examples of data fork/resource fork pairs are: • $DATA.ZSDADMM8.WDE6TI00 / $DATA.ZSDADMM8.WDE6TI01 • $DATA.ZSDADMM8.HEPQTI00 / $DATA.ZSDADMM8.HEPQTI01 • $DATA.ZSDADMM8.SS8BSI00 / $DATA.ZSDADMM8.SS8BSI01 Data files are sometimes referred to as data forks to distinguish them from resource forks.
where schema-clause is one of: • schema • schema AUTHORIZATION auth-id • schema AUTHORIZATION auth-id location-clause • schema location-clause location-clause is: LOCATION subvolume [reuse-clause] reuse-clause is: REPEAT USE ALLOWED schema-element is one of: • table-definition • view-definition • grant-statement • index-definition One use of the schema subvolume is to identify all Guardian files in an schema for use with Guardian-based commands for TMF, RDF, and other subsystems.
When NonStop SQL/MX is installed and initialized on a node, the system catalog, NONSTOP_SQLMX_nodename, and its schemas are automatically created. These metadata schemas are: • SYSTEM_SCHEMA schema, which contains the five system schema tables used for catalog reference and recording catalog and schema information. • SYSTEM_DEFAULTS_SCHEMA schema, which contains the default settings for options and other attributes used with MXCI commands and SQL/MX statements.
• A nonempty catalog cannot be dropped. • In a distributed database environment, only the super ID user and the person who creates a catalog can register and unregister it. For more information, see the “Managing an SQL/MX Distributed Database” (page 266).
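A minimal sketch of creating a user catalog follows; the catalog name and volume are hypothetical, and the optional LOCATION clause names a Guardian volume for the catalog:
CREATE CATALOG mycat LOCATION $DATA05;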
CREATE SCHEMA mycat.myschema; This example creates a schema that is owned by the user sql.gdavis and located on the volume ZSDA142A: CREATE SCHEMA mycat.myschema AUTHORIZATION "sql.gdavis" LOCATION ZSDA142A; The authorization ID must be the current authorization ID unless it is a super ID user. A super ID user can specify any currently valid authorization ID as the owner of the schema. For more information, see the “Access Privileges for SQL/MX Database Objects” (page 79) and the SQL/MX Reference Manual.
For more information, see the SQL/MX Reference Manual. Examples for Granting Privileges This example grants SELECT privilege on a table, but not the privilege of granting SELECT privileges to others: GRANT SELECT ON TABLE persnl.employee TO "sql.user1"; This example grants SELECT and DELETE privileges on a table, including the privilege of granting SELECT and DELETE privileges to others: GRANT SELECT, DELETE ON TABLE sales.odetail TO "sql.user1", "sql.
The rules for creating and altering tables are: • To create an SQL/MX table or other object (including index, view, SPJ, or SQL/MP alias), you must own its schema or be a super ID user acting on behalf of the owner. • To alter a table, you must own its schema or be a super ID user acting on behalf of the owner. For more information, see the “Access Privileges for SQL/MX Database Objects” (page 79). Creating and Using Keys Create and use these keys for SQL/MX tables and their indexes: • Primary key.
Create a clustering key by using one of these methods: • Use STORE BY key-column-list in CREATE TABLE to specify the columns that compose the clustering key. The key columns must be specified as NOT NULL NOT DROPPABLE and cannot have a combined length of more than 2048 bytes. • Use STORE BY PRIMARY KEY to base the clustering key on the primary key columns. This STORE BY option requires that a primary key first be defined in the PRIMARY KEY constraint for CREATE TABLE and that it is NOT DROPPABLE.
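Both forms are shown in the following sketch; the table and column names are hypothetical:
CREATE TABLE mycat.myschema.orders
( ordernum INT UNSIGNED NO DEFAULT NOT NULL NOT DROPPABLE
, order_date DATE DEFAULT NULL
, PRIMARY KEY (ordernum) NOT DROPPABLE
)
STORE BY PRIMARY KEY;

CREATE TABLE mycat.myschema.order_history
( ordernum INT UNSIGNED NO DEFAULT NOT NULL NOT DROPPABLE
, order_date DATE NO DEFAULT NOT NULL NOT DROPPABLE
, status CHAR (2) DEFAULT NULL
)
STORE BY (order_date, ordernum);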
Creating and Using a Partitioning Key The partitioning key consists of the columns you specify in the PARTITION BY clause of CREATE TABLE and CREATE INDEX statements. The partitioning key determines how data is distributed into partitions that are associated with table or index objects. For a range-partitioned table or index, the partitioning key uses the FIRST KEY option to specify the beginning of the range for the table or index partition.
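A hedged sketch of a range-partitioned table follows; the names, key values, and volumes are hypothetical, and the complete partition-clause syntax is in the SQL/MX Reference Manual:
CREATE TABLE mycat.myschema.orderdtl
( ordernum NUMERIC (6) UNSIGNED NO DEFAULT NOT NULL NOT DROPPABLE
, partnum NUMERIC (4) UNSIGNED NO DEFAULT NOT NULL NOT DROPPABLE
, qty_ordered NUMERIC (5) UNSIGNED DEFAULT NULL
, PRIMARY KEY (ordernum, partnum) NOT DROPPABLE
)
LOCATION $DATA01
RANGE PARTITION
( ADD FIRST KEY (200000) LOCATION $DATA02
, ADD FIRST KEY (400000) LOCATION $DATA03
);
Rows whose clustering-key values fall below the first FIRST KEY value go to the primary partition on $DATA01; each FIRST KEY value begins the range stored in the corresponding secondary partition.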
To promote parallel processing of queries and parallel index maintenance, you should partition data across available disk volumes. For a very large table or a table used at different geographical sites, use partitions to make the data more accessible and reduce the time required for table scans by a factor almost equal to the number of partitions. NOTE: SQL/MX tables must have a user-defined clustering key to have partitions.
Therefore, NonStop SQL/MX does not support the use of floating-point data type columns in partitioning keys. Attempts to create SQL/MX tables or indexes with floating-point partitioning key columns fail with error message 1124. Attempts to perform DML from NonStop SQL/MX on SQL/MP tables with floating-point partitioning key columns fail with error message 1120. For more information, see the SQL/MX Messages Manual.
data skew is proportionately greater in a table with a unique entry count of 100 and two partitions than in a table with a unique entry count of 1000 and 20 partitions. One solution is to use a partitioning key that produces a higher unique entry count. Using DDL_DEFAULT_LOCATIONS to Distribute Primary Range Partitions You can use the DDL_DEFAULT_LOCATIONS system default to specify one or more default volumes for the primary range partition in a CREATE statement that does not include a LOCATION clause.
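For the current session only, this attribute can typically be overridden with a CONTROL QUERY DEFAULT statement; the volume shown is hypothetical, and the format for listing several volumes is described in the SQL/MX Reference Manual:
CONTROL QUERY DEFAULT DDL_DEFAULT_LOCATIONS '$DATA02';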
Example for Creating an SQL/MX Table With a Single Hash Partition CREATE TABLE cat1.sch1.ordersh (location char(16) not null not droppable, ordernumber integer unsigned not null not droppable, ordertime timestamp, primary key (location, ordernumber) not droppable) location $DATA01 hash partition; Example for Creating an SQL/MX Table With Multiple Hash Partitions CREATE TABLE cat1.sch1.
• Use the LIKE option in the CREATE TABLE statement to create a table like the existing table, source-table. This approach omits constraints (with the exception of the NOT NULL and PRIMARY KEY NOT DROPPABLE constraints), headings, and partitions unless include-option clauses are specified. Optionally, create the new table with the same constraints and headings as the source table. Alternately, override these attributes and create the new table with different constraints and headings.
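A sketch of the LIKE form with hypothetical names follows; the optional include-option clauses that carry over constraints, headings, or partitions are described in the SQL/MX Reference Manual:
CREATE TABLE mycat.myschema.orders_copy LIKE mycat.myschema.orders;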
When determining the data type and attributes for a column, consider these guidelines: • NonStop SQL/MX supports the ISO88591 character set and several other character sets for character data. For more information, see the “Defining Character Data” (page 90). • Specify a column default value for each column. For more information, see the “Using Default and Null Values” (page 94). • Specify the HEADING and UPSHIFT attributes for your application’s use, if applicable.
Defining Character Data NonStop SQL/MX includes both fixed-length character data and variable-length character data. The data types for character data are: Fixed-length characters CHAR PIC NCHAR, NATIONAL CHAR, or NATIONAL CHARACTER Variable-length characters CHAR VARYING VARCHAR NCHAR VARYING or NATIONAL CHAR VARYING For NonStop SQL/MX, the maximum row size is approximately 32 KB, provided the Control Query Default DEFAULT_BLOCKSIZE is set to 32768.
ISO88591 fields and where they are found:
• SQL/MX names: Names of catalogs, schemas, tables, views, indexes, constraints, and columns
• View text: CREATE VIEW
• $volume specification: CREATE CATALOG, CREATE TABLE, CREATE INDEX, CREATE VIEW, DUP, PURGEDATA, MODIFY, RESTORE
In addition, user data fields must use either ISO88591 or UCS2. Defining Numeric Data Numeric data types are either exact or approximate. Exact numeric data types can represent a value exactly. Approximate numeric data types do not.
the data type, its meaning, and the range of data allowed in fields of a value of this type. The fields in these data types are not equivalent to columns. A column value of type DATE is made up of these contiguous fields: YEAR Year 0001 to 9999 MONTH Month of year 01 to 12 DAY Day of Month 01 to 31 The maximum value of DAY depends on the length of the month.
Guidelines for Date and Time Interval Data When you define a column to hold date and time, date, time, or time interval values, use these general guidelines: • A column of the DATE, TIME, or TIMESTAMP type holds a value that represents a date or an instant in time, and a column of the INTERVAL type holds a value that represents a time interval, or duration.
Using Default and Null Values In NonStop SQL/MX, a null value is a marker that indicates that a column in a specified row has no value. The null value is not treated as a normal data value. It serves only as a placeholder necessary for certain relational operations. To an application interacting with a database, the null value indicates “unknown” or “do not know.” The DEFAULT and NULL clauses determine the value used when a column value is not supplied for a row during data entry.
• Use the DEFAULT NULL clause to specify that the column takes on a null value if no value is supplied for it. • Use the NOT NULL clause to specify that the column cannot contain null values. If you omit NOT NULL, nulls are allowed in the column. If you specify both NOT NULL and NO DEFAULT, each row inserted in the table must include a value for the column. ◦ Use of the DROPPABLE clause with NOT NULL means that you can later drop the NOT NULL constraint by using the ALTER TABLE statement.
to 128 bytes in length if it is displayed with a SELECT statement in MXCI. The heading name is an SQL identifier that can be delimited and its characters must be from the ISO88591 character set. If you specify a heading that is identical to the column name, INVOKE and SHOWDDL do not display that heading. If you specify NO HEADING for the column, the application uses the column name as the default.
Creating an Index To create an index, use the CREATE INDEX statement. If the underlying table contains data, the creation process automatically loads the index unless you specify NO POPULATE. When you define an alternate index, first consider the column-related guidelines described under “Managing Table Data” (page 88).
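For a large table, you might create the index empty and load it later with the POPULATE INDEX utility; a sketch with hypothetical names:
CREATE INDEX xcustname ON mycat.myschema.customer (custname) NO POPULATE;
POPULATE INDEX xcustname ON mycat.myschema.customer;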
• If an index is added or removed, query execution plans can be rendered inoperable. The SQL/MX executor will perform a similarity check on such a plan and, if necessary, automatically recompile it. To avoid this, you can explicitly SQL compile the program. For specific guidelines, see the information on similarity checks and automatic recompilation in the SQL/MX Programming Manual for C and COBOL. • To influence the optimizer’s choice of index, use the CONTROL QUERY SHAPE statement.
CREATE UNIQUE INDEX XEMP ON persnl.employee (LAST_NAME, EMPNUM) LOCATION $data1 ATTRIBUTE NO AUDITCOMPRESS HASH PARTITION (ADD LOCATION $data2, ADD LOCATION $data3, ADD LOCATION $data4, ADD LOCATION $data5); Creating Constraints on SQL/MX Tables Constraints are rules that protect the integrity of data in a table by restricting the values in a particular column or set of columns to those that meet the conditions of the constraints.
Naming Constraints When you create a constraint, either specify a name for it or let NonStop SQL/MX name it. Constraint names are three-part logical ANSI names. Constraints have their own namespace within a schema, so a constraint can have the same name as a table, index, or view in the schema. However, two constraints in a schema cannot have the same name.
Examples for Creating Constraints for SQL/MX Tables To add a UNIQUE table constraint to an existing table: ALTER TABLE persnl.project ADD CONSTRAINT projtimestamp_uc UNIQUE (projcode, ship_timestamp); To add a FOREIGN KEY constraint to an existing table: ALTER TABLE persnl.project ADD CONSTRAINT projlead_fk FOREIGN KEY (projlead) REFERENCES persnl.employee; Creating Views of SQL/MX Tables SQL/MX views comply with the SQL:1999 standard.
This example creates a view from two tables by using an INNER JOIN: CREATE VIEW MYVIEW4 (v_ordernum, v_partnum) AS SELECT od.ordernum, p.partnum FROM SALES.ODETAIL OD INNER JOIN SALES.PARTS P ON od.partnum = p.partnum; Creating Sequence Generators Use the CREATE SEQUENCE statement to create a sequence generator within a given schema. To create a sequence generator, you must be the schema owner or the super ID.
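A hedged sketch of a sequence generator definition follows; the name and option values are hypothetical, and the complete option list is in the SQL/MX Reference Manual:
CREATE SEQUENCE mycat.myschema.order_seq
  START WITH 1
  INCREMENT BY 1;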
Database Design Guidelines for Improving OLTP Performance To improve database performance in an online transaction processing (OLTP) environment, observe these guidelines when you design an SQL/MX database: • Make columns NOT NULL NOT DROPPABLE unless null values are needed. Null processing imposes disk space and performance overhead that should be avoided whenever possible.
• • Use datetime fields carefully: ◦ Datetime operations are expensive. ◦ Use datetime fields if datetime arithmetic is needed in an SQL query (for example, where date-col + interval '1' day > date '2001-09- 11'). ◦ Use LARGEINT or CHAR if datetime is needed for storage and retrieval only. Where possible, avoid adding columns to a table: ◦ Added columns turn off internal expression optimizations. ◦ If you add a column, reload the table.
6 Querying SQL/MX Metadata Use the commands described in this section to obtain the information you need to manage a database.
Commands for Displaying Information Topics in this subsection: • “DISPLAY USE OF Command” (page 106) • “mxtool INFO Command” (page 106) • “SHOWLABEL Command” (page 106) • “mxtool VERIFY command” (page 106) • “SHOWDDL Command” (page 107) DISPLAY USE OF Command Use the MXCI DISPLAY USE OF command to display a list of modules and, for each module, to display a list of dependent objects (tables and indexes) that are used by the module.
SHOWDDL Command Use the MXCI SHOWDDL command to display the DDL syntax equivalent to the syntax that was used to create an SQL/MX table, view, or SPJ and its dependent objects. This command also supports SQL/MP alias names. You can choose to display the DDL for the underlying SQL/MP object and its dependent objects either in SQL/MX or in SQL/MP DDL syntax. The command is especially useful for displaying a thorough description of an object as it exists in metadata.
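For example, using the sample database table referenced elsewhere in this manual:
SHOWDDL persnl.employee;
The output is the CREATE TABLE statement, along with the DDL for dependent constraints and other objects, as recorded in the metadata.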
Figure 2 SQL/MX Metadata Table Structure • SQL/MX metadata tables are quite different from SQL/MP catalogs. • Each node contains a single NONSTOP_SQLMX_nodename catalog. This catalog contains metadata pertinent to the node as a whole. Figure 2 (page 108) shows the metadata schemas and catalogs in the NONSTOP_SQLMX_nodename catalog. • Typically, a node has additional user catalogs; Figure 2 (page 108) does not show these user catalogs.
Figure 3 SQL/MX Metadata Tables
Figure 4 SQL/MX Metadata Tables
Starting with SQL/MX Release 3.1, a new schema called SYSTEM_SECURITY_SCHEMA is introduced. Figure 5 (page 111) displays the tables in the new schema.
Figure 5 SQL/MX System Security Schema Understanding ANSI External and Internal Names ANSI SQL objects have both external and internal names. When objects are created in NonStop SQL/MX, they are given external names. However, when the same objects are stored in the metadata, they are stored by their internal name. You need to be aware of external and internal naming when performing metadata queries. When you use an object name in a search condition, such as a comparison predicate, use the internal name.
(select schema_uid from nonstop_sqlmx_figaro.system_schema.schemata where schema_name = 'SALES' and cat_uid = (select cat_uid from nonstop_sqlmx_figaro.system_schema.catsys where cat_name = 'my " catalog' ) ) for read uncommitted access order by object_name; Note that "my "" catalog" is in external format, and 'SALES'and 'my " catalog'are in internal format. The query does not work correctly if the internal and external names are incorrect.
2> volume $data02.zsd0 $DATA02 ZSD0 3> files $DATA02.ZSD0 CATREF00 CATREF01 CATSYS00 CATSYS01 SCHEMA00 SCHEMA01 SCHREP00 SCHREP01 4> fup info catref00, detail $DATA02.ZSD0.CATREF00 27 Jan 2011, 19:05 SQL METADATA TABLE ANSI NAME NONSTOP_SQLMX_DMR15.SYSTEM_SCHEMA.CAT_REFERENCES RESOURCE FORK \DMR15.$DATA02.ZSD0.CATREF01 SYSTEM METADATA \DMR15.$DATA02.
ASSOC200 ASSOC201 DATASO00 DATASO01 ENVIRO00 ENVIRO01 NAME2I00 NAME2I01 RESOUR00 RESOUR01 $DATA02 ZSD1 12> fup info assoc200, detail $DATA02.ZSD1.ASSOC200 27 Jan 2011, 14:09 SQL ANSI TABLE ANSI NAME NONSTOP_SQLMX_DMR15.MXCS_SCHEMA.ASSOC2DS RESOURCE FORK \DMR15.$DATA02.ZSD1.ASSOC201 SYSTEM METADATA \DMR15.$DATA02.
Use the SHOWLABEL command to locate the SYSTEM_DEFAULTS table. • In this example, for a node called \DMR15, the physical Guardian name of the SYSTEM_DEFAULTS table is $ DATA02.ZSD5VXWZ.M26FB700: showlabel nonstop_sqlmx_dmr15.system_defaults_schema.system_defaults; ============================================================= GuardianName: \DMR15.$DATA02.ZSD5VXWZ.M26FB700 AnsiName: NONSTOP_SQLMX_DMR15.SYSTEM_DEFAULTS_SCHEMA.
--- SQL operation complete. >>select substring(object_name, 1, 23) as table_name, substring(p.system_name, 1, 15) as system, p.data_source, p.file_suffix from partitions p, objects o where o.object_type = 'BT' and o.object_name = 'SYSTEM_DEFAULTS' and o.object_uid = p.object_uid and o.schema_uid = (select schema_uid from nonstop_sqlmx_dmr15.system_schema.schemata where schema_name = 'SYSTEM_DEFAULTS_SCHEMA' and cat_uid = -- get catalog uid for CAT (select cat_uid from nonstop_sqlmx_dmr15.system_schema.
Owner: SUPER.
FORMAT 2 CODE 563 EXT ( 128 PAGES, 256 PAGES, MAXEXTENTS 512 ) PACKED REC 48 BLOCK 4096 KEY ( COLUMN 0, ASC , COLUMN 1, ASC ) AUDIT BUFFERED AUDITCOMPRESS OWNER -1 SECURITY (RWEP): *SQL DATA MODIF: 19 Jan 2011, 17:24, OPEN CREATION DATE: 27 Nov 2009, 19:40 REDEFINITION DATE: 27 Nov 2009, 19:40 LAST OPEN: 28 Jan 2011, 3:01 EOF: 12288 (0.
PSDB SAMDBCAT 6742156871006226409 6742156872955712923 \FIGARO \FIGARO Displaying all Catalogs Visible on a Node The CATSYS table contains one (and only one) row for each catalog that is visible on that node. This example displays a list of all the visible catalogs on a local node (where node-name is the name of the local node): >> select cat_name from nonstop_sqlmx _node-name.system_schema.
order by schema_name for read uncommitted access; SCHEMA_NAME ------------------------------ SCHEMA_UID -------------------- DEFINITION_SCHEMA_VERSION_3000 INVENT PERSNL SALES 1950548695223685006 1950548695240365422 1950548695235454934 1950548695238563835 Displaying the Guardian Location of the Metadata Tables for a Catalog • This example displays the Guardian subvolumes where the system and user tables are located for schemas in the catalog SAMDBCAT: >> set schema nonstop_sqlmx_.
when 'A' then 'AUTOMATIC' when 'M' then 'MANUAL' else 'UNKNOWN' end as rep_rule, c.local_smd_volume as smd_vol, c.local_user_schema_count as schema_cnt, c.cat_owner as owner from catsys c where c.
and c.cat_uid=cr.cat_uid and cr.
and s.schema_uid = o.schema_uid and s.
s.schema_name = 'SALES' and cat_uid = (select cat_uid from catsys where cat_name='SAMDBCAT') for read uncommitted access; NAME --------------- OWNER ----------- VERSION ----------- SUBVOL -------- CUR_OP ----------- SALES 65535 3000 ZSDS53DQ NONE Displaying the Owner of a Schema This example queries the SCHEMATA system schema table and returns the owner ID: >> set schema nonstop_sqlmx_.system_schema; --- SQL operation complete.
c, NONSTOP_SQLMX_COBOLT.SYSTEM_SECURITY_SCHEMA.MGM_PRIVILEGES m where c.cat_uid = m.target_uid; GRANTEE ----------- PRIVILEGE_TYPE -------------- CAT_NAME ---------- 44802 CS CAT1 --- 1 row(s) selected.
order by schema_name, table_name for read uncommitted access; 126 TABLE_NAME ------------------------- SCHEMA_NAME ------------------------------ ACCESS_PATHS ACCESS_PATH_COLS CK_COL_USAGE CK_TBL_USAGE COLS COL_PRIVILEGES DDL_LOCKS DDL_PARTITION_LOCKS EXCEPTION_USAGE KEY_COL_USAGE MP_PARTITIONS MVGROUPS MVS MVS_COLS MVS_JOIN_COLS MVS_TABLE_INFO MVS_USED OBJECTS PARTITIONS REF_CONSTRAINTS REPLICAS RI_UNIQUE_USAGE ROUTINES SCH_PRIVILEGES SEQUENCE_GENERATORS SG_USAGE SYNONYM_USAGE TBL_CONSTRAINTS TBL_PRIVI
ORDERS PARTS SALES SALES Displaying all Tables in a Schema This example displays all table objects in the schema SALES and catalog SAMDBCAT. Table objects include base tables, views, MP aliases, and SPJs: >> set schema samdbcat.definition_schema_version_; --- SQL operation complete.
cat_uid = (select cat_uid from nonstop_sqlmx_.system_schema.catsys where cat_name = 'SAMDBCAT' ) ) for read uncommitted access order by l.object_uid; LOCK_NAME UTIL ------------------------- ---- STATUS ----------- (EXPR) ---------------------- CUSTOMER_LOCK 4 \DMR15.$:2:619:1565916 IM Displaying the Attributes of a Table This example displays some of the attributes of the base tables defined in the SAMDBCAT.SALES schema: >> set schema samdbcat.
Displaying all Tables on a System Displaying all tables on a node (system) is a two-step process. To display all tables on a node, you must: 1. Run a query that displays all the catalogs on a node (see “Displaying all Catalogs Visible on a Node” (page 119)). 2. Run a query for each catalog that displays all the tables for that catalog (see “Displaying all Tables in a Catalog” (page 125)).
check_option, is_updatable, is_insertable from nonstop_sqlmx_.system_schema.catsys c, nonstop_sqlmx_.system_schema.schemata s, objects o, vws v where c.cat_uid = s.cat_uid and c.cat_name = 'SAMDBCAT' and s.schema_uid = o.schema_uid and s.schema_name = 'SALES' and o.object_type = 'VI' and v.object_uid = o.
nonstop_sqlmx_.system_schema.schemata s, objects o, mp_partitions m where c.cat_uid=s.cat_uid and c.cat_name='SAMDBCAT' and s.schema_uid=o.schema_uid and s.schema_name='SALES' and o.object_type='MP' and m.object_uid = o.object_uid for read uncommitted access; SQLMP_ALIAS ------------------------- MPPARTITION_NAME ------------------------------------ OLDCUSTS \DMR15.$DATA04.SQLMP.
>> select count(*) as num_indexes from samdbcat.definition_schema_version_.access_paths a, samdbcat.definition_schema_version_.objects o1 where a.access_path_uid = o1.object_uid and a.access_path_uid <> a.table_uid and a.table_uid = (select object_uid from samdbcat.definition_schema_version_.objects o where o.object_name = 'CUSTOMER' and o.schema_uid = (select schema_uid from nonstop_sqlmx_.system_schema.schemata s where s.
INDEX_NAME ---------------------------------------XCUSTNAM Displaying all DDL Locks on an Index This example (when run on the node on which the metadata is located) displays the DDL locks on the index SAMDBCAT.SALES.XCUSTNAM: >> set schema samdbcat.definition_schema_version_; --- SQL operation complete. >> select substring (o1.object_name from 1 for 25) as lock_name, l.operation as util, l.status, substring (l.process_id from 1 for 25) from objects o, objects o1, ddl_locks l where o.
o1.schema_uid = (select schema_uid from nonstop_sqlmx_.system_schema.schemata s where s.schema_name = 'SALES' and s.cat_uid = (select cat_uid from nonstop_sqlmx_.system_schema.catsys c where c.
Displaying Partition Information Topics in this subsection: • “Displaying all Partitions for a Table or Index” (page 135) • “Displaying Attributes of a Partition” (page 135) The PARTITIONS table, located in the schema DEFINITION_SCHEMA_VERSION_version-number of each catalog, describes all partitions in a catalog. For more information about all metadata tables, see Figure 3 (page 109) in this manual or the SQL/MX Reference Manual.
when 'AV' then 'AVAILABLE' when 'UO' then 'OFFLINE' when 'UC' then 'CORRUPT' when 'UD' then 'DROPPED' when 'UR' then 'RECREATED' end as status from partitions p, objects o where o.object_name_space = 'TA' and o.object_name = 'CUSTOMER' and o.object_uid = p.object_uid and o.schema_uid = (select schema_uid from nonstop_sqlmx_.system_schema.schemata where schema_name = 'SALES' and cat_uid = (select cat_uid from nonstop_sqlmx_.system_schema.
>> select count(*) as num_constraints from tbl_constraints t, objects o where t.table_uid = o.object_uid and o.object_name = 'CUSTOMER' and o.schema_uid = (select schema_uid from nonstop_sqlmx_.system_schema.schemata where schema_name = 'SALES' and cat_uid = (select cat_uid from nonstop_sqlmx_.system_schema.
Displaying the Attributes of a Constraint This example displays some of the attributes of the constraints in the table SAMDBCAT.SALES.CUSTOMER. To display additional fields, select the fields from the TBL_CONSTRAINTS table: >> set schema samdbcat.definition_schema_version_; --- SQL operation complete. >> select t.constraint_uid, t.table_uid, case t.constraint_type when 'C' then 'CHECK' when 'F' then 'FOREIGN' when 'P' then 'PRIMARY KEY' when 'U' then 'UNIQUE' end as type, t.
All queries for information about columns must access information from the CATSYS, SCHEMATA, and OBJECTS tables, and from other appropriate tables, such as the COLS table. You can also use the MXCI SHOWDDL command to display the approximate CREATE TABLE statement that might have created the columns in that table. For examples of the SHOWDDL command, see the SQL/MX Reference Manual. Displaying all Columns in a Table This example displays all columns in the table SAMDBCAT.PERSNL.
from nonstop_sqlmx_.system_schema.catsys where cat_name = 'SAMDBCAT' ) ) order by column_number for read uncommitted access; COLUMN_NAME ------------------------- COLUMN_NUMBER ------------- CUSTNUM CUSTNAME STREET CITY STATE POSTCODE 0 1 2 3 4 5 Displaying all Columns in an Index This example displays all columns in the index SAMDBCAT.SALES.XCUSTNAM in the schema SAMDBCAT.SALES: >> set schema samdbcat.definition_schema_version_; --- SQL operation complete.
when 'F' then 'FOREIGN' when 'P' then 'PRIMARY KEY' when 'U' then 'UNIQUE' else 'UNKNOWN' end as type, substring(o2.object_name , 1, 24) as constraint_name from cols c, objects o1, objects o2, key_col_usage k, tbl_constraints t where c.object_uid = t.table_uid and k.constraint_uid = t.constraint_uid and k.column_number = c.column_number and t.table_uid = o1.object_uid and o1.object_name = 'ODETAIL' and o2.object_uid = t.constraint_uid and o1.schema_uid = (select schema_uid from nonstop_sqlmx_.
and o2.object_uid = t.constraint_uid and o1.schema_uid = (select schema_uid from nonstop_sqlmx_.system_schema.schemata s where s.schema_name = 'SALES' and s.cat_uid = (select cat_uid from nonstop_sqlmx_.system_schema.catsys c where c.
) for read uncommitted access;

COLUMN_NAME    COL_NUM    UPDATE_RULE CONSTRAINT_NAME
-------------- ---------- ----------- ----------------------
PARTNUM        1          NA          P1

Displaying the Attributes of a Column
This example displays some of the attributes of the columns in the table SAMDBCAT.PERSNL.EMPLOYEE. To display additional attributes, select different fields from the COLS table:
>> set schema samdbcat.definition_schema_version_;
--- SQL operation complete.
>> select substring(c.
c.object_uid = o.object_uid and column_name = 'ORDERNUM' and o.object_type = 'BT' and o.schema_uid = (select schema_uid from nonstop_sqlmx_.system_schema.schemata where schema_name = 'SALES' and cat_uid = (select cat_uid from nonstop_sqlmx_.system_schema.
when 'D' then 'Delete' when 'U' then 'Update' when 'R' then 'Refer' else 'Unknown' end as prvtype, is_grantable as grantable from tbl_privileges t, objects o where t.table_uid = o.object_uid and o.object_name = 'ORDERS' and o.object_type = 'BT' and o.schema_uid = (select schema_uid from nonstop_sqlmx_.system_schema.schemata where schema_name = 'SALES' and cat_uid = (select cat_uid from nonstop_sqlmx_.system_schema.
(select schema_uid from nonstop_sqlmx_.system_schema.schemata where schema_name = 'SALES' and cat_uid = (select cat_uid from nonstop_sqlmx_.system_schema.
where cat_name = 'SAMDBCAT' ) ) for read uncommitted access;

GRANTOR_ID GRANTOR_T GRANTEE_ID GRANTEE_T PRVTYPE GRANTABLE
---------- --------- ---------- --------- ------- ---------
-2         System    65535      Unknown   Refer   Y
-2         System    65535      Unknown   Update  Y

Displaying Information About Sequence Generators
The SEQUENCE_GENERATORS metadata table is located in the schema DEFINITION_SCHEMA_VERSION_version-number of each catalog. It stores information about all the sequence generators in the catalogs.
• “Displaying the Object Schema Version (OSV)” (page 148) • “Displaying the Object Feature Version (OFV)” (page 148) Displaying the NonStop SQL/MX Release Identifier To obtain the release number of the SQL/MX software running on a specified node, see the banner generated by mxci. For SQL/MX Release 3.2, mxci displays the release identifier 3.2. For example, the mxci banner displays the following information for SQL/MX Release 3.2: Hewlett-Packard NonStop(TM) SQL/MX Conversational Interface 3.
and o.object_name = 'object-name' and o.
7 Adding, Altering, and Dropping SQL/MX Database Objects After you create an SQL/MX database, you can assume the database is consistent and the application data is valid. Database management operations that add, alter, or drop database objects must maintain this same level of data consistency and validity, and must be planned carefully.
Table 8 Adding Objects to an SQL/MX Database (continued)
Object                            Operation  Statement, Command or Utility
Schema                            Add        CREATE SCHEMA statement
Sequence Generator                Add        CREATE SEQUENCE statement
SQL/MP alias                      Add        CREATE SQLMP ALIAS statement
Stored procedures in Java (SPJs)  Add        CREATE PROCEDURE statement
Table                             Add        CREATE TABLE statement
Trigger                           Add        CREATE TRIGGER statement
View                              Add        CREATE VIEW statement
For more information on creating (adding) objects, see:
• “Understanding and Planning SQ
Table 9 Authorization Requirements for Adding Database Objects (continued) Operation Authorization Requirements “Adding Triggers” (page 161) Adding a trigger can affect up to three schemas, each with its own authorization requirements: • The schema where the trigger is created. You must either own this schema or be the super ID. • The schema where the subject table exists. You must either own this schema or be the super ID. • The schema where the referenced table exists.
NOTE: Where possible, avoid adding columns to a table. When you add a column, internal expression optimizations are turned off, and the executor incurs a performance penalty whenever a column is added. The extent of the impact on performance depends in large part on data types used in added columns. For more information, see the “Database Design Guidelines for Improving OLTP Performance” (page 103).
To integrate the new column into the existing database or application programs: • If you want to create a new index using the new column, follow the steps for “Adding Indexes to SQL/MX Tables” (page 155). • If you want to add the new column to an existing index, first follow the steps for “Dropping SQL/MX Indexes” (page 172), then follow the steps for “Adding Indexes to SQL/MX Tables” (page 155) to add the new column definition. You cannot alter an index or view to add a column.
5. Use the DISPLAY USE OF command to identify which user modules are associated with this object. See the similarity check criteria in the SQL/MX Programming Manual for C and COBOL to determine if your changes are likely to cause the similarity check to fail and force automatic recompilation. If they will, you should SQL compile these modules after making the changes to avoid expensive automatic recompilations at run time.
For additional guidelines related to index creation, including performance-related considerations, see the SQL/MX Reference Manual, “Determining When to Use Indexes” (page 37), “Defining an Index” (page 39), and “Creating Indexes for SQL/MX Tables” (page 96). Evaluating the Benefits of a New Index Use indexes to improve the performance of your database and to implement constraints. Indexes can improve performance by eliminating the need for the disk process to access the underlying table.
to avoid expensive automatic recompilations at run time. SQL applications that are running while you make these changes will still undergo automatic recompilation. For more information about explicit and automatic recompilation, see the SQL/MX Programming Manual for C and COBOL. For more information about using DISPLAY USE OF, see “Checking Module Dependencies with DISPLAY USE OF” (page 226) and the SQL/MX Reference Manual. 6. Enter the CREATE INDEX statement.
• A certain subset of a remote table’s data is accessed more frequently at the local node than from a remote node. In this case, partitioning the table so that the frequently accessed portion of the data resides on the local node can increase local performance.
• Queries are processed in parallel. In this case, partitioning a table or index is often required. Partitioning is necessary, for example, for parallel execution of a SELECT statement on a single table.
>> MODIFY TABLE cat3.sch3.ordersh add partition +> location $DATA04; --- SQL operation complete. In response to this command, a portion of the data in each of the original three partitions is moved automatically to the fourth partition, rebalancing the data evenly across the four partitions. For more detailed examples of adding, modifying, and deleting range-partitioned and hash-partitioned tables and indexes, see “Using MODIFY to Manage Table and Index Partitions” (page 184).
3. Determine a name for the sequence generator you want to add.
4. Query the system metadata to ensure the new sequence generator name is unique among existing sequence generator, table, view, SQL/MP alias, and SPJ names in the schema.
5. Enter the CREATE SEQUENCE statement.
6. Use the GRANT statement to assign the USAGE privilege on the new sequence generator to authorized users.
7. Revise the application source code as needed to reflect your changes to the database.
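For illustration, the CREATE SEQUENCE statement in step 5 might look like the following sketch; the sequence generator name and option values are assumed examples, and the complete option list is described in the SQL/MX Reference Manual:
>> CREATE SEQUENCE samdbcat.sales.order_seq
+> START WITH 1 INCREMENT BY 1;
--- SQL operation complete.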
Steps for Adding an SQL/MX Table
1. Plan the column definitions, checking that the data type of any column that might be necessary for join or predicate search operations matches the joined column.
2. Start an MXCI session. Enter a LOG command to initiate a log file for statements and commands entered in this session. Keep the log for your records.
3. Make sure your new table name is unique among existing table, view, SQL/MP alias, and SPJ names in the schema.
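The CREATE TABLE statement entered later in this procedure could, as a rough sketch, look like the following; the table name, columns, and location shown here are illustrative only:
>> CREATE TABLE samdbcat.sales.neworders
+> ( ordernum NUMERIC (6) UNSIGNED NO DEFAULT NOT NULL
+> , custnum NUMERIC (4) UNSIGNED NO DEFAULT NOT NULL
+> , order_date DATE NO DEFAULT NOT NULL
+> , PRIMARY KEY (ordernum)
+> )
+> LOCATION $DATA05;
--- SQL operation complete.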
Steps for Adding a View
1. Start an MXCI session. Enter a LOG command to initiate a log file for statements and commands entered in this session. Keep the log for your records.
2. Determine the names of any tables whose columns you want to include in the view, and a name for the view you want to add.
3. Query the system metadata to ensure the new view name is unique among existing view, table, SQL/MP alias and SPJ names in the schema.
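As a sketch, the CREATE VIEW statement entered later in this procedure might resemble the following, which uses columns of the sample CUSTOMER table; the view name and predicate are assumed:
>> CREATE VIEW samdbcat.sales.custview
+> AS SELECT custnum, custname, city
+> FROM samdbcat.sales.customer
+> WHERE state = 'CA';
--- SQL operation complete.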
Table 10 Altering Objects in an SQL/MX Database (continued)
Object   Operation          Statement, Command or Utility
         Revoke privileges  REVOKE statement
Trigger  Enable trigger     ALTER TRIGGER statement
         Disable trigger    ALTER TRIGGER statement
View     Grant privilege    GRANT statement
         Revoke privilege   REVOKE statement
For more information on altering SQL/MX database objects, see:
• “Understanding and Planning SQL/MX Tables” (page 29)
• “Reviewing and Setting System Defaults” (page 68)
• “Creating
Table 11 Authorization Requirements for Altering Database Objects (continued) Operation Authorization Requirements “Altering Triggers” (page 168) To alter a trigger with the ALTER TRIGGER statement, you must own its schema or be the super ID. Only the super ID can use ALTER TRIGGER DISABLE ALL or ALTER TRIGGER ENABLE ALL. “Altering Views” (page 168) To alter a view with GRANT and REVOKE statements, you must have both that privilege and the right to grant or revoke that privilege.
3. Use the DISPLAY USE OF command to identify which user modules are associated with this object. See the similarity check criteria in the SQL/MX Programming Manual for C and COBOL to determine if your changes are likely to cause the similarity check to fail and force automatic recompilation. If they will, you should SQL compile these modules after making the changes to avoid expensive automatic recompilations at run time.
Altering SQL/MX Tables You can use various statements and utilities to alter SQL/MX tables in many ways. For more information about these statements and utilities, see the SQL/MX Reference Manual. Altering SQL/MX Table Columns You cannot alter table columns directly. You can alter them indirectly, however, by creating a new table with the different column sizes or data type definitions: 1. Start an MXCI session. Enter a LOG command to initiate a log file for statements and commands entered in this session.
Altering a table's file attributes neither invalidates any programs nor affects dependencies of the table. To alter security attributes of SQL/MX tables, however, use the GRANT and REVOKE statements. (See “Altering Table Privileges” (page 167).)
Steps for Altering Table File Attributes
1. Start an MXCI session. Enter a LOG command to initiate a log file for statements and commands entered in this session. Keep the log for your records.
2. Determine the table attribute you want to alter.
Steps for Altering a Sequence Generator
1. Start an MXCI session. Enter a LOG command to initiate a log file for statements and commands entered in this session. Keep the log for your records.
2. Determine the name of the sequence generator you want to alter.
3. Enter the ALTER SEQUENCE statement.
For more information, see the SQL/MX Reference Manual.
Altering Sequence Generator Privileges
To alter sequence generator privileges, use the GRANT and REVOKE statements with the USAGE privilege.
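For example, assuming a sequence generator named SAMDBCAT.SALES.ORDER_SEQ and a grantee user ID of SALES.CLERK1 (both illustrative; the ON SEQUENCE form shown here is an assumption and should be checked against the GRANT and REVOKE syntax in the SQL/MX Reference Manual):
GRANT USAGE ON SEQUENCE samdbcat.sales.order_seq TO "sales.clerk1";
REVOKE USAGE ON SEQUENCE samdbcat.sales.order_seq FROM "sales.clerk1";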
Table 12 Dropping Objects From an SQL/MX Database
Object              Operation                       Statement, Command or Utility
Catalog             Drop                            DROP CATALOG statement
Constraint          Drop                            ALTER TABLE DROP CONSTRAINT statement
Index               Drop data, file and definition  DROP INDEX statement
Partition           Drop data, file and definition  MODIFY utility
                    Drop data only                  PURGEDATA utility
Schema              Drop schema                     DROP SCHEMA statement
Sequence Generator  Drop Sequence                   DROP SEQUENCE statement
SQL/MP alias        Drop the map                    DROP SQLMP ALIAS statement
S
Table 13 Authorization Requirements for Dropping Database Objects (continued) Operation Authorization Requirements “Dropping Schemas” (page 173) To drop a schema with the DROP SCHEMA statement, you must own the schema or be the super ID. In a distributed environment, you must have remote password set up for all nodes where the schema's catalog is visible.
Steps for Dropping a Column From an SQL/MX Table
1. Start an MXCI session. Enter a LOG command to initiate a log file for statements and commands entered in this session. Keep the log for your records.
2. Determine a temporary name for the new table.
3. Query the system metadata to ensure the temporary table name is unique among existing table, view, SQL/MP alias and SPJ names in the schema.
4. Use the DISPLAY USE OF command to identify which user modules are associated with this object.
4. Enter the ALTER TABLE DROP CONSTRAINT statement.
5. Revise the application source code as needed to reflect your changes to the database. Process and compile the updated source file. For more information, see the SQL/MX Programming Manual for C and COBOL.
For more information, see the SQL/MX Reference Manual.
Dropping SQL/MX Indexes
To drop an index, use the DROP INDEX statement. This statement purges the physical files that contain the index and eliminates the access path to the underlying table.
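For example, to drop the index used in the earlier metadata examples:
>> DROP INDEX samdbcat.sales.xcustnam;
--- SQL operation complete.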
Guidelines for Dropping Partitions You can drop table and index partitions within these guidelines: • The partition for a range-partitioned object must be empty. • All partitions must be available when you use the MODIFY utility on an SQL/MX table or index. • If you plan to use the TMF subsystem for recovering a table or index, see “Recovering Database Objects” (page 239) before proceeding. For more information, see the SQL/MX Reference Manual.
If the schema is fairly large and contains many rows, DROP SCHEMA with the CASCADE option might fail with file system error 35, “Unable to obtain an I/O process control block, or the transaction or open lock unit limit has been reached.” In this case, too many locks were requested. When this occurs, you need to update MAXLOCKSPERTCB to 10000 or more. For more information, see the SQL/MX Reference Manual.
Steps for Dropping a Schema
1. Start an MXCI session.
Dropping SPJs For more information about and examples of dropping SPJs, see the SQL/MX Guide to Stored Procedures in Java. Dropping SQL/MX Tables To remove a table and its dependent objects from the database, use the DROP TABLE statement. Use the PURGEDATA utility to remove only the data from a table and its dependent objects and leave the objects intact. For more information, see the SQL/MX Reference Manual. Dropping SQL/MX Tables and Their Data To drop a table and its data, use the DROP TABLE statement.
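For example (the table name is assumed; CASCADE also drops dependent objects such as views, so include it only when that is the intent):
DROP TABLE samdbcat.sales.oldorders CASCADE;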
For more information, see the SQL/MX Reference Manual.
Recovering SQL/MX Tables Dropped Accidentally
If you drop a table accidentally, you can recover the table using the DDL file generated automatically by NonStop SQL/MX. For more information, see “Using TMF to Recover Dropped SQL/MX Objects” (page 56) and “Recovering Tables” (page 240).
Dropping SQL/MX Table Data Only
To drop data from a table while leaving the table intact, use the PURGEDATA utility.
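For example (table name assumed), this command removes the data but leaves the table definition and its dependent objects in place:
PURGEDATA samdbcat.sales.oldorders;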
4. Enter the DROP TRIGGER statement.
5. Revise the application source code as needed to reflect your changes to the database. Process and compile the updated source file. For more information, see the SQL/MX Programming Manual for C and COBOL.
For more information, see the SQL/MX Reference Manual.
Dropping Views
To drop a view of SQL/MX data, use the DROP VIEW statement. To drop a view, you must be the view owner, own its schema, or be the super ID.
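For example (view name assumed):
>> DROP VIEW samdbcat.sales.custview;
--- SQL operation complete.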
8 Reorganizing SQL/MX Tables and Maintaining Data Sometimes you might need to restructure the data in an SQL/MX table by reloading or reorganizing the table. NonStop SQL/MX offers several tools to aid the restructuring task: • Purge DDL files for dropped tables from the directory at /usr/tandem/sqlmx/ddl to provide more space for DROP operations. See “Purging Dropped Tables From the DDL Directory” (page 178). • Choose a method for reorganizing the data.
SQL/MX tables can be reorganized online. Online reorganization involves: • “Reorganizing Table Files With FUP RELOAD” (page 179) • “Determining the Status of a Reorganization With FUP STATUS” (page 180) • “Suspending a Reorganization Operation” (page 180) After you use FUP RELOAD to defragment a table, update the physical statistics using UPDATE STATISTICS to update the metadata information. This allows the optimizer to generate better query plans.
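For example, after reloading the CUSTOMER table, you might refresh its statistics with a statement such as the following; ON EVERY COLUMN is one common choice, and the column list can be tailored to your workload:
>> UPDATE STATISTICS FOR TABLE samdbcat.sales.customer
+> ON EVERY COLUMN;
--- SQL operation complete.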
• The FUP RELOAD operation might take a long time, depending upon the size of the file and the rate specified for the command. • You can suspend the FUP RELOAD operation or request a status report about the progress of the operation, as explained in “Determining the Status of a Reorganization With FUP STATUS” (page 180). This command initiates a FUP RELOAD operation for the table named CUSTOMER. The SLACK option sets a minimum amount of free space in the blocks.
FUP - RELOAD $DATA08.ZSDAWVHH.J8MGRZ00, RATE 20, SLACK 50 If you want to keep the same RATE and SLACK values when you restart the reorganization process, enter the FUP RELOAD command without the RATE and SLACK parameters: FUP - RELOAD $DATA08.ZSDAWVHH.J8MGRZ00 After a FUP RELOAD operation has been suspended and you want to start this operation completely over again, enter the FUP RELOAD command with the NEW option: FUP - RELOAD $DATA08.ZSDAWVHH.
must perform a RECOVER operation on the affected SQL/MX object to restore the utility operation to a known good state. Consequences of a Failed Utility Operation If a failure, such as an unavailable resource or TMF failure, prevents a utility operation from completing successfully, the utility responds by attempting to perform its own internal recovery procedure for the purpose of completing the request or, if necessary, rolling back the operation to the known good state from which it began.
You can also use SHOWDDL to display information about DDL locks. An example SHOWDDL output would include: -- WARNING: A DDL lock exists on the following object, SHOWDDL may be inaccurate -CREATE TABLE CAT.SCH.CUSTOMER For more information, see the SQL/MX Reference Manual. Performing Recovery on Failed Utility Operations If the utility request fails to complete successfully, the request must be recovered to a known good state (resumed or cancelled), which removes the DDL lock and resets all flags.
Operations are allowed to continue for views because creating and dropping views do not change the structure of the dependent tables or the dependent table’s indexes. This contrasts with NonStop SQL/MP, which stores structure information about views in the dependent table’s label. Structure-changing operations are sometimes allowed to run and succeed when inactive DDL locks are present.
partitions that are not being used by other applications or that are being used with READ access. WRITE access is prohibited. MODIFY’s with-shared-access option supports online partition management of range-partitioned tables and indexes in which the partitioning key is a prefix of the clustering key. This feature provides full read and write access to a range-partitioned table or index for the duration of most operations.
MODIFY and Table Reloading Some MODIFY command options start the online reload process ORSERV, which runs in the background. Until the ORSERV process completes, you cannot perform DDL operations and utility operations—including another MODIFY command—on the affected table or index. ORSERV keeps the file open and does not terminate for five minutes after the reload operation completes.
successfully or, if necessary, rolling it back to the known good state that existed at the start of the operation. If a flag remains set after a MODIFY request completes, this signifies that the table or index the MODIFY request is acting upon contains unusable space. When the MODIFY request completes, it starts an ORSERV process that reclaims the unused file space, reloads the file, and calls on DP2 to reset any UNRECLAIMEDSPACE or INCOMPLETE SQLDDL OPERATION flag that remains set.
a Flag Is Set in a Partition” (page 187)) and perform a FUP STATUS on the partition, using its Guardian name, to determine whether or not the reload process is running:
◦ If the reload operation is still running, wait until it completes.
◦ If the reload operation is not running, perform a FUP RELOAD on the partition that has the UNRECLAIMEDSPACE or INCOMPLETE SQLDDL OPERATION flag set.
add first key ('Chicago') location $DATA02, add first key ('New York') location $DATA03 ); Also suppose that the table ORDERSR contains orders with these locations: • Atlanta • Chicago • Cincinnati • New York Example of Adding a New Range Partition You expect several thousand orders for ‘Seattle’ to arrive shortly. As the table ORDERSR is currently defined, you would insert these orders into the partition on $DATA03.
Note that the DROP PARTITION syntax is not used in this example because the partition $DATA03, which contains ‘New York,’ is not yet empty.
Example of Reusing an Existing Range Partition by Setting the FIRST KEY Values to New Values
You have reorganized your business so that all orders that originate in ‘Chicago’ and all locations with values greater than ‘Chicago’ are processed through an agent in ‘Boston.
Example of Dropping an Existing Empty Range Partition Suppose that your company has archived all the orders from 2003 and earlier and no longer needs to keep them in the ORDERSR table. You have deleted all records with ORDERTIMEs in 2003 and earlier from the table. As a result, the index partition $DATA01 is empty. Use this MODIFY command to drop this empty index partition: MODIFY INDEX cat2.sch2.
Example of Dropping an Existing Hash Partition and Rebalancing Data Orders suddenly decline, and you decide to free up some of the disk resources used by the table ORDERSH. Use this MODIFY TABLE command to drop the last partition and redistribute its data to the remaining partitions: MODIFY TABLE cat3.sch3.
By using the syntax last partition in the example, you avoid the need to specify the last partition by its location.
Example of Moving an Existing Hash Partition to a New Location
You need to use $DATA03 for other purposes. Use the MODIFY command to move the index partition on $DATA03 to another volume:
MODIFY INDEX cat2.sch2.
ordernumber integer unsigned not null not droppable, ordertime timestamp) location $DATA09; Example of Moving an Entire System-Clustered Partition to a New Location You need to use $DATA09 for other purposes. You decide to move the system-clustered table ORDERSSC to a different volume. Use this MODIFY TABLE command to move the table to the volume $DATA03: MODIFY TABLE cat4.sch4.
Table 14 import Command Options (continued)
import Option  Description
errors. The name of the error message file must be specified in OSS format (for example, /usr/jdoe/errfile). It may be the same as the file specified with the -E option, but must not be the same as any other output file.
Format File Considerations for import For more information, see the SQL/MX Reference Manual. Format File for a DELIM Input File If the input file type is DELIM and you want to use a format file, you must include the [COLUMN FORMAT] section. The other sections are optional. Format File for a FIXED Input File If the input file type is FIXED, you must specify a format file that includes the [COLUMN FORMAT] and [FIXED WIDTH FORMAT] sections.
3. Any string that includes the field delimiter, the row delimiter, or the field-qualifier character may appear differently. For example, if the field qualifier is a double quote character ("):
• Use a double-quote(""") may come out as "Use a double-quote(")"
4. If the format file specifies that a column (for example, column 1) of each row of the original input file is to be skipped, that data will be missing completely from the execution error log file.
• “Managing Triggers to Improve import Performance” (page 200) • “Managing Data Types to Improve import Performance” (page 200) Running import on Empty Tables You can import data into a table most quickly by using the fast loading technique, which import automatically uses when the table meets all these conditions and no transaction size is specified on the import command line: • It is empty. • It has no indexes. • It has no droppable primary key, unique key, or foreign key constraints.
data being imported and the number of partitions being loaded to determine where the data should go. • Whether you are inserting the data into a single partition or multiple partitions. The method for inserting data into a single partition is different from the method for inserting data into multiple partitions.
table’s clustering key. Index maintenance degrades import performance. For more information, see the “Managing Indexes to Improve import Performance” (page 200). • RI constraints require a lookup of each row of the referenced table to confirm that the row exists, which imposes an extra I/O for each row being inserted. The extra I/O can impose a significant performance penalty when running import on a table with one or more RI constraints.
Using import to Perform Serial Partition Loads To perform a serial load on the partitions of an SQL/MX table, use one instance of import with the data in the input file sorted first by partition number and, for performance reasons, sorted secondarily by the clustering key. Using import to Perform Parallel Partition Loads Use multiple instances of import to perform a parallel load when the destination table is range-partitioned and it has no indexes.
Examples of Using import to Load an SQL/MX Table Example 1: Importing Data From a Delimited File This example is of an import of data from a delimited file. It shows the structure of the table to be loaded, the data to be loaded (the input data file), the format file describing the data, and the import command used to load it.
Data to Input The input file, COINPUT_FX, contains records like this: 00000000001,"Test String 3456","111-222-3333","444-555-6666" 00000000002,"ibm ","408-111-2222","408-222-3333" 00000000003,"apple ","408-222-1111","408-333-2222" 00000000004,"tandem ","408-285-5000","408-285-2227" 00000000005,"diyatech ","510-111-2222","510-222-3333" Format File Describing the Data Create a format file, FORMFILE2, which consists of [DATE FORMAT], [COLUMN FORMAT], and [FIXED WIDTH FORMAT] sections: [DATE FORMAT] DateOrder=
Note that there are three hidden spaces and a newline character at the end of each line.
55 12345678
--- 4 row(s) selected.
The X.perrs file will contain:
A8,22345678
14,423456789
17,52345678,9
BZ,XYZ
The X.perrmsg file will contain:
*** ERROR[20081] Row number 2 and column number 1 could not be processed. Column Data: A8
*** ERROR[20291] The data specified in row number 4 column number 2 is longer than the actual column size definition. Column Data:123456789.
*** ERROR[20070] Columns in the datafile are not correct.
DataLoader/MX can be customized to perform data-format transformation while using the import utility for append operations. For more information, see the DataLoader/MX Reference Manual. Guidelines for Appending Data to Tables • Use the import command to add data to a table or partition without purging existing data. • Use import to add data to SQL/MX tables. You cannot use import to add data to ASCII files or any other files other than SQL/MX tables.
associated CPU). You can enter the next import command from another OSS shell that is running on another CPU. The example divides the input data into 16 files. Each file’s data is appended to the corresponding target partition. The input files reside on volumes $VOL1 through $VOL4. The target partitions reside on volumes $VOL33 through $VOL48. The record layout corresponds exactly to the target table layout.
has proceeded and how much longer it needs to run. Utilities provide reports that indicate what step is in progress. Utility operations periodically place operation progress reports in the metadata tables through the DDL lock mechanism. You can examine metadata to get the latest information. The DUP operation has the option to log these progress reports to an OSS text file. See “Querying SQL/MX Metadata” (page 105) for instructions on how to access information from the DDL_LOCKS table.
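As a rough sketch, a query such as the following lists the operation, status, and process information recorded for current DDL locks; the schema version suffix shown (3000) is an assumption and should be replaced with the version in use on your system:
>> set schema samdbcat.definition_schema_version_3000;
--- SQL operation complete.
>> select operation, status, substring(process_id from 1 for 25)
+> from ddl_locks
+> for read uncommitted access;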
command) followed by the required number of additional FASTCOPY INDEX commands and one FASTCOPY TABLE...INDEXES EXPLICIT command, in any order. • There is an administrative overhead associated with using the FASTCOPY command as compared to the INSERT...SELECT command. For large tables, the overhead is negligible. However, for tables containing less data, the overhead might be significant.
ERROR[8580] No partitions of table could be accessed
DUP versus FASTCOPY
Table 15 (page 210) lists the differences between the DUP and FASTCOPY commands.
Table 15 Difference Between DUP and FASTCOPY Commands
DUP: Copies tables and indexes in a single command.
FASTCOPY: Copies tables and indexes conveniently using a single command. Also, it might copy tables and indexes using separate commands.
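As an illustrative sketch only (the object names are assumed, and the exact FASTCOPY syntax and options should be confirmed in the SQL/MX Reference Manual), copying a table together with its indexes might look like:
FASTCOPY TABLE samdbcat.sales.orders TO samdbcat.sales.orders_copy;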
STATISTICS statement to record current statistics for the table in the catalog. If the statistics are incorrect, the SQL compiler might not select the best access path for performance.
• You cannot use the PURGEDATA command on a view or in an SQL program.
PURGEDATA returns errors if:
• You specify a list of partitions for a hash-partitioned table. For hash-partitioned objects, you must delete the entire table.
9 Managing Database Applications Managing an SQL/MX database includes supporting the operating requirements and access requirements of application programs and maintaining valid application programs. Providing this support and maintenance can include both performance-related and operation tasks.
COMMIT WORK or ROLLBACK WORK. The result is the same since the transaction is already aborted. Some errors that might cause a TMF transaction abort might not be retryable, such as errors caused by conditions that must be resolved with some corrective action by the user or application. For example, depending on the application, it might not be useful to retry a duplicate key error returned by an INSERT statement using ROWSETS.
NonStop SQL/MX and TMF absolutely guarantee the integrity of the database following a CPU failure, application failure, or other component failure. Moving Programs From Development to Production The structure of the database may differ on development and production systems. The execution plans generated on the development system may be inefficient or invalid on the production system.
The Similarity Check determines if the query execution plan of a statement is still operable. If the Similarity Check fails (or is disabled), the SQL/MX executor, by default, invokes the SQL/MX compiler to automatically recompile the query execution plan. The automatically compiled plan is not saved for subsequent executions of the same program or for multiple concurrent executions of the same program. Therefore, automatic recompilation is expensive in terms of performance.
Name Resolution The SQL statements in the program must refer to database object names that exist on the production system or that enable compile-time name resolution.
Assumptions For the purpose of the examples, suppose that: • Two NonStop server systems, \DEV and \PROD, are connected by an Expand network. For more information, see the Introduction to Networking for HP NonStop NS-Series Servers. • The development node catalogs and schemas are not visible from the production node and vice versa. To ensure this, do not use the REGISTER CATALOG statement from either node to create an empty catalog reference on the other node.
The Similarity Check determines if the query execution plan of a statement is still valid. During the Similarity Check process, the redefinition timestamps and Guardian names of runtime objects are obtained and used during statement processing. This information is not saved for subsequent executions or for multiple concurrent executions of the same program. Whenever a new process starts, the Similarity Check is triggered when the statement is executed. Starting with SQL/MX Release 3.
Using REGISTER CATALOG to Access Objects on Remote Nodes Statements in an embedded SQL program can refer to SQL/MX objects on a remote node without changes to the database object names in the source code. However, the catalog that contains the objects must be visible from the local node before you can process and run the program. Use the REGISTER CATALOG statement to create an empty catalog reference on the local node that enables local programs to query the catalog’s objects on a remote node or nodes.
Ensuring Proper Name Resolution SQL/MX statements in programs must refer to database objects that exist on the production system or are visible on a remote node. If the database object names in programs refer to DEFINEs, these DEFINEs should be set in the environment where the programs will run. If the database object names in programs refer to SQL/MP aliases, the SQL/MP aliases must be created on the production system.
registering catalogs to make them visible in an SQL/MX distributed database environment, see “Using Catalog References in an SQL/MX Distributed Database” (page 271). You can use OBEY command files or scripts to automate the process of creating large numbers of SQL/MP aliases for applications that you will run in the SQL/MX production environment. Create the SQL/MP aliases before you start processing or running applications in the production environment.
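For example, an OBEY command file might contain a series of statements such as the following, where the alias names, Guardian names, and file path are placeholders:
CREATE SQLMP ALIAS samdbcat.sales.parts $DATA01.SALES.PARTS;
CREATE SQLMP ALIAS samdbcat.sales.odetail $DATA01.SALES.ODETAIL;
CREATE SQLMP ALIAS samdbcat.sales.orders $DATA01.SALES.ORDERS;
You could then run the file from MXCI with a command such as:
>> obey /usr/appmgr/mpaliases.obey;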
Maintaining Query Execution Plan Validity A query execution plan defines the semantics and execution characteristics for a single compiled SQL statement. Statically compiled applications typically have many query execution plans. The query execution plans of an application are stored in an SQL/MX user module. Globally placed modules are stored in the /usr/tandem/sqlmx/USERMODULES directory. Locally placed modules are stored in the same directory as the C, C++, or COBOL executables that use them.
For more information, see the SQL/MX Query Guide. For more information about upgrading SQL/MX databases, see the SQL/MX Installation and Upgrade Guide. Displaying and Analyzing Query Execution Plans Use the EXPLAIN function to display the query execution plans of a module. For more information, see the SQL/MX Programming Manual for C and COBOL and the SQL/MX Reference Manual. For details on how to analyze query execution plans and determine an optimal plan, see the SQL/MX Query Guide.
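For example, a query of this general form displays the plan for one statement in a statically compiled module; the module name and statement name used here are placeholders, and the available EXPLAIN result columns are listed in the SQL/MX Reference Manual:
>> SELECT * FROM TABLE (EXPLAIN ('SAMDBCAT.SALES.MYAPPMOD', 'S1'));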
Securing Globally Placed Modules You secure globally placed modules from the /usr/tandem/sqlmx/USERMODULES directory. The objective should be to assign the responsibility and accountability for safeguarding the integrity of each globally placed module to its own creator and owner. CAUTION: Only trusted accounts should be given write access to the directory /usr/tandem/sqlmx/USERMODULES.
4. Alternatively, the SUPER.SUPER user can make sure that all modules can be written to only by their respective owners by entering:
chmod 0644 /usr/tandem/sqlmx/USERMODULES/*
5. Make sure that all future modules that you create are writable only by you or the USERMODULES directory owner by entering:
umask 022
Regardless of the current umask settings, this command makes module files (and OSS files in general) that the user creates readable by all but writable only by the module owner and creator.
6.
7. Alternatively, the SUPER.SUPER user can make sure that all modules in the local directory can be written to only by their respective owners by entering:
chmod 0644 /usr/local-directory/*
8. Make sure that all future modules that you create are writable only by you or the owner of the local directory by entering:
umask 022
This command makes module files (and OSS files in general) that the user creates readable by all but writable only by the module owner and creator.
9.
Object: CAT.SCH.T28
Module: CAT.SCH.CURSOMEM
Object: \NODE1.DATA10.ORDERS.T01
Object: CAT.SCH.T02
Object: CAT.SCH.T03
Object: CAT.SCH.T04
Module: CAT.SCH.TESTMEM
Object: CAT.SCH.T15
Object: CAT.SCH.T16
Object: CAT.SCH.T17
Object: CAT.SCH.T18
Display all Modules and Their Corresponding Source SQL File Names
>>DISPLAY USE OF SOURCE;
Module: CAT.SCH.CONSTRAINTM
Source Name: /E/NODE1/usr/user1/file1.sql
Module: CAT.SCH.CURSOMEM
Source Name: /E/NODE2/usr/user1/file2.sql
Module: CAT.SCH.
Display all the Invalid Modules and Their Corresponding Source SQL File Names Associated With an Object >>DISPLAY USE OF INVALID MODULES FOR ‘CAT.SCH.TABLE1’; Object: CAT.SCH.TABLE1 Module: CAT.SCH.MODTABLE1 Source Name: /E/NODE11/modulelist/module1.sql Module: CAT.SCH.MODTABLE2 Source Name: /E/NODE11/modulelist/module2.sql Display Source SQL File for a Given Module in a Given OSS Directory >>DISPLAY USE OF MODULE_DIR '/usr/user1/all/modulestorage' SOURCE 'CAT.ALL.MODULE1'; Module: CAT.ALL.
CPUs to simultaneously access each of the subsets in far less time than it would take to run a single DISPLAY USE OF operation on the modules. This subsection provides instructions and examples for precompiling embedded applications to generate different modules using the module grouping precompiler option.
1 modules found, 1 modules extracted.
1 mxcmp invocations: 1 succeeded, 0 failed.
5. Perform SQL compilation on the other executables:
% mxCompileUserModule t02.exe
% mxCompileUserModule t03.exe
% mxCompileUserModule t11.exe
% mxCompileUserModule t12.exe
6. Check the SQL compiled module files using INVENTORYAPP:
% ls /usr/tandem/sqlmx/USERMODULES/*INVENTORYAPP^*^^
/usr/tandem/sqlmx/USERMODULES/T01CAT.T01SCH.INVENTORYAPP^T01MOD^^
/usr/tandem/sqlmx/USERMODULES/T01CAT.T01SCH.
/user/smith/tmp/tmp/DUO: mxCompileUserModule -g moduleLocal t02.exe
/user/smith/tmp/tmp/DUO: mxCompileUserModule -g moduleLocal t03.exe
/user/smith/tmp/tmp/DUO: mxCompileUserModule -g moduleLocal t11.exe
/user/smith/tmp/tmp/DUO: mxCompileUserModule -g moduleLocal t12.exe
11. Verify how the local modules are generated:
/user/smith/tmp/tmp/DUO: ls -l *.*.INVENTORYAPP^*^^
-rw-rw---- 1 JAVAQA.MODMAN JAVAQA 48293 Nov 29 14:00 T01CAT.T01SCH.INVENTORYAPP^T01MOD^^
-rw-rw---- 1 JAVAQA.
To delete the locally placed module file cat.sch.locmod1 from its local directory, enter: rm /usr/local01/CAT.SCH.LOCMOD1 In some cases, you might want to remove all modules of an application. For specific details, see the grouping technique of module management naming in the SQL/MX Programming Manual for C and COBOL. To delete all the module files of a group, enter: rm /usr/tandem/sqlmx/USERMODULES/*.*.
Converting Applications in a Production Environment On a production system, convert applications that use globally placed modules to those that use locally placed modules: 1. If you know which modules are used by programs that make up an application, skip this step. Otherwise, use the mxCompileUserModule utility to determine which modules are used by the programs that make up an application. a. Copy the executable to an empty temporary directory. b.
For more information about the strategies and procedures for backing up and restoring database files, see “Using Backup and Restore 2 for SQL/MX Database Recovery” (page 61).
10 Performing Recovery Operations The success of recovery operations depends on the effectiveness and consistency of the plan you develop for handling recovery situations. (See “Planning Database Recovery” (page 45)) Before beginning a recovery operation, you should thoroughly evaluate the tools available (RDF, TMF online dumps, backup tapes, and so forth) and appropriate for the type of failure.
and UNDONEEDED flags. For views, tables, indexes, and ENSCRIBE files, the information appears after the modification timestamp of the table. Normally, volume recovery recovers such files when the volume is started for transaction processing. If, however, the volume is already started and the file is still marked with REDONEEDED or UNDONEEDED, you must recover the file by using file recovery. Recovering Files File recovery is usually the method to use if other recovery methods fail.
Using file recovery with the TIME option can be difficult, however, because this method requires you to coordinate the recovery of interrelated objects, such as tables and their indexes. CAUTION: TMF carries no information about the relationships between file labels and catalogs. If a table is dropped, for example, file recovery cannot restore the catalog entries for the table. If the file recovery operation starts at a time just before a table was dropped, you might lose subsequent DDL changes.
creating the target object, the restore operation fails with Error 9037 as shown in this example, and the object is not recovered: NonStop TMF on \PLUTO *0202* RECOVER FILES [58] OnLineRestore Process #1 OnlineDumpMgmt: *ERROR* TMF-9037: $DATA17.ZSDT5356.J24Z5J00: File System error 11 occurred attempting to retrieve the SQL file label from disk. If, for any other reason, the target object is inaccessible during the restore process, this process also fails with Error 9037.
to obtain a copy of your source objects. Transactions can be active against the source objects at the time of the recovery. CAUTION: If you use the MAP NAMES option of the RECOVER FILES command to recover files to a new location, you must immediately make new online dumps of the target data files recovered. Without these new dumps, you will not have file-recovery protection for those files, and subsequent file recovery operations can fail.
Using VERIFY and FIXUP After using TMF RECOVER to recover partitions of an SQL/MX table, you should perform a consistency check on all recovered files by using mxtool VERIFY. The VERIFY command confirms that all data forks and resource forks exist and contain consistent information. When using mxtool VERIFY, specify the ANSI name of the table. For example, using the same table name from the previous example: mxtool VERIFY CAT.SCH.T039UC3 VERIFY performs these steps: 1.
find /usr/tandem/sqlmx/ddl -mtime +7 -print | grep "/ddl/" | sed "s/./rm &/" | sh
Recovering Table Privileges
NonStop SQL/MX does not save the DDL for GRANT and REVOKE statements, so if you accidentally drop a table that includes security information created by these statements, you must restore the table’s security attributes in the table’s metadata before you can recover the table data, labels, and resource forks from a TMF online dump.
NOTE: If you have granted or revoked table privileges, see “Example: Recovering Table Privileges” (page 242) for guidelines on how to recover the privileges. For complete information on saving DDL and using it to re-create SQL/MX objects, see these discussions: • “Retrieving Saved DDL” (page 60). • In the SQL/MX Reference Manual, the information about the DROP TABLE statement and table management for metadata tables.
When the table is dropped, NonStop SQL/MX produces a file containing the DDL for all the statements in this example.
of indexes is maintained in the file label in the disk directory. When you use MXCI to create just the table and not the indexes, and later recover the table, additional mismatches will occur between the SQL/MX catalog or schema and the file label in the directory. This can make the recovery process even more difficult.
'Dept/Num' -- NOT NULL NOT DROPPABLE ,JOBCODE NUMERIC(4, 0) UNSIGNED DEFAULT NULL HEADING 'Job/Code' ,SALARY NUMERIC(8, 2) UNSIGNED DEFAULT NULL ,CONSTRAINT HRDATA.PERSNL.EMPLOYEE_428832188_2424 PRIMARY KEY (EMPNUM ASC) NOT DROPPABLE ,CONSTRAINT HRDATA.PERSNL.EMPLOYEE_133512188_2424 CHECK (HRDATA.PERSNL.EMPLOYEE.EMPNUM IS NOT NULL AND HRDATA.PERSNL.EMPLOYEE.FIRST_NAME IS NOT NULL AND HRDATA.PERSNL.EMPLOYEE.LAST_NAME IS NOT NULL AND HRDATA.PERSNL.EMPLOYEE.
refer to the DDL you saved earlier—either when the index was created or before the index was dropped— to retrieve the Guardian names. The saved DDL for a dropped table is stored in a file named catalog.schema.table-yyyymmdd-hhmmss.ddl, where yyyymmdd-hhmmss is the time when the table was dropped. In this case, the file is named HRDATA.PERSNL.EMPLOYEE-20031114-160629.ddl.
>>OBEY /USR/TANDEM/SQLMX/DDL/HRDATA.PERSNL.EMPLOYEE +>-20031114-160629.DDL; As MXCI executes the commands, they appear on your screen, along with certain related messages: >>CREATE TABLE HRDATA.PERSNL.
NOTE: GRANT and REVOKE statements are not saved as part of DDL text. However, the DDL text includes DELETE statements to remove the default security information from the metadata tables TBL_PRIVILEGES and COL_PRIVILEGES and INSERT statements to record the correct security information. When recreating a dropped table, use a licensed copy of MXCI to execute these DELETE and INSERT statements after you create the table, to restore the security to the same state as when the table was dropped.
To conduct the recovery, you use the TMFCOM RECOVER FILES command. You can use the MXGNAMES output as input to the RECOVER FILES command, as was done in Step 3:
$SYSTEM ZMXTOOLS 29> MXGNAMES HRDATA.PERSNL.EMPLOYEE -TMF
Hewlett-Packard NonStop (TM) SQL/MX Guardian Name Listing Utility
(c) Copyright 2003 Hewlett-Packard Development Company, LP.
(-- Table HRDATA.PERSNL.EMPLOYEE -- &
$DATA10.ZSDT5356.NG59FJ*,&
-- Index XEMPNAME ON HRDATA.PERSNL.EMPLOYEE -- &
$DATA10.ZSDT5356.
Inconsistency found: *** ERROR[20760] The redefinition timestamp (211935615168385004) defined on the label does not match the time (211935867783457892) defined in the metadata for Guardian file (\KRYPTON.$DATA10.ZSDT5356.NPFH8J00). Verifying resource fork for partition: \KRYPTON.$DATA10.ZSDT5356.NPFH8J00 Verifying Partition Map: no partition map exists Verifying index: HRDATA.PERSNL.XEMPNAME Verifying label for partition: \KRYPTON.$DATA10.ZSDT5356.
Verifying Partition Map: no partition map exists Verifying index: HRDATA.PERSNL.XEMPNAME Verifying label for partition: \KRYPTON.$DATA10.ZSDT5356.JN9V5J00 Verifying resource fork for partition: \KRYPTON.$DATA10.ZSDT5356.JN9V5J00 Verifying Partition Map: no partition map exists Object verification complete for: HRDATA.PERSNL.EMPLOYEE 13. If the CASCADE option was specified in the original DROP TABLE command, the table’s views would also have been dropped along with the table and its indexes.
You can also specify every catalog that was backed up: BR> RESTORE $tape-drive, MX CAT *; For more information, see the Backup and Restore 2 Manual. Restoring Schemas When you restore a schema, all its dependent objects are restored as well. Restore one or more schemas either implicitly, by restoring the associated catalogs, or explicitly, by using the SCHEMA restore object: BR> RESTORE $tape-drive, MX SCHEMA CAT3.SCH; BR> RESTORE $tape-drive, MX SCHEMA CAT2.SCH, SCHEMA CAT3.
To restore one or more index partitions, restore the associated indices implicitly or use the PARTITION restore object explicitly: BR> RESTORE $tape-drive, MX (IPART cat1.sch1.index1 PARTITION(part1)), PARTONLY ON; BR> RESTORE $tape-drive, MX (IPART cat1.sch1.index1 PARTITION(part1, part2)) BR+>, PARTONLY ON; For more information, see the Backup and Restore 2 Manual.
Strategies for Repairing Damaged Metadata and Objects Using the mxtool Utility The mxtool utility provides an OSS command-line interface to various utility operations, including FIXUP, GOAWAY, INFO, and VERIFY. Use mxtool to repair damaged SQL/MX objects and metadata.
Correcting Problems Manually You can correct some problems in metadata directly by manually executing inserts, updates, selects, and deletes. Any SQL/MX user can read metadata, but only DDL and utility operations can update it. Therefore, NonStop SQL/MX allows the super ID to fix incorrect metadata through standard INSERT, UPDATE, and DELETE statements using a licensed MXCI process. However, use this method only as a last resort to correct a problem.
Table 17 Label Data Associated With SQL/MX Objects (continued) Label Field Contents Catalog operation timestamp The time and date when the last operation was performed against the SQL/MX catalog. Last open timestamp The time and date when the file was last opened. CRVSN Contains the creation version sequence number associated with the SQL/MX format object stored in the file label by the DP2 for all disk files. Primary extents The number of primary extents allocated for the file.
Table 18 Run-time metadata Stored in the Resource Fork (continued) Run-time metadata Contents Index map array Contains the list of all indexes on the SQL/MX format object (if the object is a table), along with index key information and a list of partitions associated with each index. Constraint information Contains entries for droppable check constraints. This information is needed for similarity-checking to determine whether plans need to be recompiled because of a new or deleted check constraint.
VERIFY performs these actions: • Checks for inconsistencies between an SQL/MX table and its structural representation in the resource fork. This feature compares information saved in the resource fork with similar information stored in metadata and returns inconsistencies. This information is compared: • ◦ ANSI name. The ANSI name stored in the resource fork must match that stored in metadata. ◦ ANSI namespace. The ANSI namespace stored in the resource fork must match that stored in metadata.
• VERIFY obtains read-only locks on metadata while verifying an object. Other operations that read metadata can run concurrently. Operations that change metadata or labels such as DDL, partition management, PURGEDATA, and UPDATE STATISTICS statements cannot run concurrently. • To verify some objects, NonStop SQL/MX might need to access remote systems. In this case, the remote systems must be available, and you must have privileges to view information on them.
interrupted database transactions. In other cases, files marked as questionable are actually consistent. These files, although open at the time of the failure, were not actively taking part in database transactions. In many cases, you know which files are actually corrupt and which are consistent. Normally, it is better to allow the TMF file recovery method to recover all the files and determine which are corrupt and which are not. See “Recovering Files” (page 236).
For example, PURGEDATA sets the corrupt bit before deleting rows. If PURGEDATA fails before deleting all rows, the corrupt bit remains set. If you determine that even though PURGEDATA failed it is acceptable to use the table, you can reset the corrupt bit. Changing the Redefinition Timestamp The redefinition timestamp changes whenever a DDL change made on the object results in a structure change.
CREATE TABLE cat.sch.table1 (Col1 int not null primary key, Col2 int not null, Col3 timestamp not null no default, Col4 char (20)) Partition (Add first key (1000) location \local.$data01 name part2, Add first key (2000) location \remote.$data01 name part3) Location \local.$data02 name part1; 2. Create three partitions with these names: • \LOCAL.$DATA03.ZSDWUEIR.SUDIFO00 • \LOCAL.$DATA02.ZSDWUEIR.SUDIFP00 • \REMOTE.$DATA02.ZSDWUEIR.SUDIFQ00 CREATE INDEX index1 (col2) ON table1 Location \local.
Using GOAWAY to Delete Damaged Objects SQL/MX files consist of two physical Guardian files: data fork and resource fork. Normally, when the data fork is dropped, DP2 automatically drops the corresponding resource fork. In some cases, either an orphaned resource fork or data fork can exist. mxtool GOAWAY is an OSS command-line utility run from mxtool that removes SQL/MX format file labels for both the resource and the data forks associated with a Guardian file.
ALTER DISK, ALTNAME and ALTER DISK, VOLNAME CAUTION: ALTER DISK, ALTNAME and ALTER DISK, VOLUME are extremely dangerous commands because they can corrupt a database. Before using an ALTER DISK command, perform a STOPOPENS DISK and a CONTROL DISK, REFRESH on the volume to ensure valid file labels, and then perform a STOP DISK to make the volume inactive. The ALTER DISK operation renames the files on a volume, but SQL/MX metadata and file labels still contain the old names.
RECOVER FILES $VOL.*.*, FROMARCHIVE 2. 3. 4. Check that dependent objects residing on other volumes have also been recovered and re-create objects as necessary. Verify the database by using the mxtool VERIFY utility. For more information, see the SQL/MX Reference Manual. Make new TMF online dumps of SQL/MX objects that reside on the volume. If the entire object was re-created, take dumps of the entire object even if one or more partitions reside on different volumes.
11 Managing an SQL/MX Distributed Database SQL/MX databases can be distributed over disk volumes on a single system (node) or in a network of nodes. Likewise, application programs that access NonStop SQL/MX can be distributed across CPUs in a single node or in a network. When managing a database distributed across volumes or nodes, you can usually use the same SQL statements you would use with a nondistributed database.
Managing a Network-Distributed SQL/MX Database You can distribute an SQL/MX database—and the applications that access it—across nodes in an Expand network. The database should be distributed in a way that provides a high degree of transparency to applications and users while maintaining high performance and availability. NOTE: For more information about managing a network-distributed SQL/MP database, see the SQL/MP Installation and Management Guide.
Versioning Capabilities The nodes in a distributed SQL/MX database can run different releases of SQL/MX software. Each release of SQL/MX software can identify the other releases that it is compatible and incompatible with. The SQL/MX versioning design ensures that: • Nodes with compatible software versions can interoperate. • Access to or from nodes with incompatible software versions fail predictably. SQL/MX Release 3.x does not support interoperability with SQL/MX Release 2.
when accessed by their ANSI names on remote nodes. Similarly, the metadata definitions for a database object, identified by its ANSI name, can be located and retrieved regardless of where that database object's metadata definitions are located in the network. For location-independent ANSI names, ANSI name lookup always starts with the CATSYS and SCHEMATA system schema tables on the local node. Next, the node that contains the metadata definitions for the ANSI-named object is identified.
Local Autonomy If a query requires data only from the node on which it executes, the query succeeds regardless of the availability of remote nodes on which the query’s accessed objects have partitions. For SQL/MX Release 3.x, query compilation and late name resolution of ANSI names require access to the nodes on which the accessed objects’ definition schema exist. For more information, see “Maintaining Local Autonomy in a Network for SQL/MX Release 3.x” (page 274).
names used follow the preceding rules. If you do not specify the Guardian file and subvolume names, NonStop SQL/MX automatically generates those names. To avoid incidentally meaningful but undesirable names (for example, ZSDHELL0.SMARTY00), vowels and the digit zero are never used in generated portions of names. While user-supplied names must conform to the other Guardian naming rules, they can contain vowels and the digit zero.
• The node with an automatic catalog reference for a catalog has one CAT_REFERENCES row for each node that has a reference for that catalog. • A node with an empty manual reference for a catalog has one CAT_REFERENCES row for the automatic catalog reference node and one CAT_REFERENCES row for itself. Catalog Reference Examples To better illustrate how user catalogs and their schemas are referenced in the system schema tables for SQL/MX Release 3.x, consider this next example scenario.
CAT_UID NODE_NAME REPLICATION_RULE (for manual replication) 12345678912345678 \N3 M CAT_UID contains the 17-digit UID for each catalog. • The SCHEMA_REPLICAS table on \N1 contains two rows for CAT1, one for each of its two schemas. This includes these entries in the SCHEMA_UID and NODE_NAME columns: SCHEMA_UID NODE_NAME 12345678912345678 \N1 23456789123456789 \N1 SCHEMA_UID contains the 17-digit UID for each schema.
Table 20 Statements and Commands for Managing Catalog References Command Description CREATE CATALOG DDL statement that creates a new catalog at the specified location on the local node. REGISTER CATALOG MXCI command that registers a catalog on a remote node. The remote node then has an empty catalog reference to the catalog. UNREGISTER CATALOG MXCI command that unregisters an empty catalog reference from a remote node. For more information, see the SQL/MX Reference Manual.
In a distributed database, statically compiled queries can execute even if some of the nodes in the database are not available, provided these two requirements are met:
• Either the node that stores the query objects’ metadata is available, or the query does not use late name resolution of ANSI names or automatic recompilation.
• The user data that the query attempts to access is stored on available disks on the local node or some other available nodes.
Table 21 (page 276) describes the status of each node in the network with respect to user data, object metadata, applications for CAT_1 and CAT_2, and the effects of the loss of each node on the other two nodes in the network:
• The column “CAT_1 and CAT_2 User Data Present?” identifies whether user data for CAT_1 and CAT_2 objects is present on a given node in the network.
• The column “CAT_1 and CAT_2 Metadata Present?” identifies whether object metadata for CAT_1 and CAT_2 is present on a given node.
Table 22 Consequences of Network Node Loss, Reconfiguration 1

Node  CAT_1 and CAT_2     CAT_1 and CAT_2     CAT_1 and CAT_2         Consequences
      User Data Present?  Metadata Present?   Applications Present?

\A    Yes                 No                  No                      The same consequences occur as were identified in the node \A row in Table 21 (page 276).

\B    Yes                 Yes                 Yes                     The same consequences occur as were identified in the node \B row in Table 21 (page 276). In addition, none of the applications that contain queries to CAT_1 or CAT_2 can be executed.
If CAT_1 and CAT_2 are tightly interrelated and if most application programs and queries access objects in both catalogs, the configuration described in Table 23 (page 277) would instead look like the configuration shown in Table 24 (page 278).
Example—Unregistering a Catalog From a Node

The UNREGISTER CATALOG command causes an empty catalog reference to be removed from the specified node. As a result, the catalog is no longer visible on that node.

UNREGISTER CATALOG samdbcat FROM \nodey.$data47;

Distributing SQL/MX Database Objects

SQL/MX objects can be distributed individually or distributed as partitions of tables or indexes. Objects can be distributed at creation time by the CREATE statement or at a later time by using the MODIFY utility.
QTY_ORDERED NUMERIC (5) UNSIGNED NO DEFAULT NOT NULL)
LOCATION \B.$DATA02;
3. From the local node \A, create a local index for the table TABREM on the remote node \B:
CREATE INDEX CAT1.SCH1.INDEXLOC ON TABREM (ORDERITEM, ORDERNUM)
LOCATION \A.$DATA02;

Example—Creating a Table With Partitions on a Local and Remote Node

To create a table with one partition on the local node \A and one partition on the remote node \B:
1.
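A minimal sketch of such a CREATE TABLE statement, assuming hypothetical column names and a hypothetical first-key value for the partition boundary:

CREATE TABLE CAT1.SCH1.TABLOC
  (ORDERNUM NUMERIC (6) UNSIGNED NOT NULL,
   ORDERITEM CHAR (10) NOT NULL,
   PRIMARY KEY (ORDERNUM, ORDERITEM))
  LOCATION \A.$DATA02
  PARTITION (ADD FIRST KEY (500000) LOCATION \B.$DATA02);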
Using Co-located Indexes in a Distributed Database

Co-locating indexes with their parent table is an effective way of using NonStop SQL/MX to improve performance and allow for better data distribution, both in a standalone and a distributed environment. For example, assume the EMPLOYEES table contains EMPLNAME and EMPLCITY columns, and each employee resides in San Francisco, Chicago, or New York City.
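As a sketch of what co-location might look like in this example, the following hypothetical CREATE INDEX statement partitions an index on EMPLCITY so that each index partition can be placed on the volume that is assumed to hold the corresponding EMPLOYEES partition; the catalog, schema, volume names, and key values are illustrative only:

CREATE INDEX EMPLCITY_IX ON CAT1.SCH1.EMPLOYEES (EMPLCITY)
  LOCATION $DATA01
  PARTITION (ADD FIRST KEY ('NEW YORK CITY') LOCATION $DATA02,
             ADD FIRST KEY ('SAN FRANCISCO') LOCATION $DATA03);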
• The operating system at a node is updated. Usually nodes run compatible but different operating systems. Consult the current software release documents for compatibility issues between operating system releases. In some cases, upgrading the operating system release imposes compatibility issues on the database. For more information, see “The SQL/MX Environment” (page 26).
• Communication to a node is lost. Nodes can become unavailable for a variety of reasons.
a node that runs either SQL/MX Release 2.x, SQL/MX Release 3.0, or SQL/MX Release 3.1, and vice versa. The following are the other restrictions:
• Running different versions of SQL/MX software simultaneously on separate nodes of an interrelated SQL/MX database application is not allowed between SQL/MX Release 3.x nodes and SQL/MX Release 2.x nodes.
12 Measuring Performance

During the life of an SQL application, you might need to measure the performance of all, or part, of the application. Several NonStop software products can provide statistical information about performance. Collecting these statistics requires an in-depth understanding of the system, the layout of the database tables, and the use of the application programs. You usually gather statistics when you are:
• Running a performance benchmark.
ObjectSchemaVersion: 3000
ObjectFeatureVersion: 1200
Owner: SUPER.
Attr[ 0/  1] dataType: 131 offset:  4 len:  2 align: 2
Attr[ 1/  2] dataType:   0 offset:  6 len: 15 align: 1
Attr[ 2/  3] dataType:   0 offset: 21 len: 20 align: 1
Attr[ 3/  4] dataType: 131 offset: 41 len:  2 align: 2
Attr[ 4/  5] dataType: 131 offset: 45 len:  2 align: 2 nullOff: 43 nullBitIdx: -1
Attr[ 5/ -1] dataType: 133 offset: 49 len:  4 align: 4 nullOff: 47 nullBitIdx: -1
==============================================================
For more information, see “Querying SQL/MX Metadata” (page 105) and the SQL/MX
With the statistics generated by SET STATISTICS ON or by DISPLAY STATISTICS, you can quickly monitor the performance of a specific statement on specific objects. The information can help you:
• Determine the comparative performance of similar objects. For instance, you can determine the effect of a new index on a table compared to the performance without the index, or you can determine the performance after an UPDATE STATISTICS statement.
• Display the statistics of various queries or DML statements.
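For example, from MXCI you can turn per-statement statistics on for a session, or display the statistics for the most recently executed statement. The table name in this sketch is hypothetical:

SET STATISTICS ON;
SELECT COUNT(*) FROM CAT1.SCH1.ORDERS;
SET STATISTICS OFF;
DISPLAY STATISTICS;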
Figure 6 Measure Entities and Program Structures

To reduce the overhead of the Measure interface to NonStop SQL/MX, the Measure product updates more than one counter per call. The overall cost of using the Measure product depends on the frequency of the intervals for measurements and on the number of active SQL statements. As the number of active SQL statements in a program increases, the cost of performing measurements increases.
• Disk reads
• Attempts at escalation of locks (successful or not)
• Lock waits
• Message activity
• Records accessed
• Records used
Stream access statements do not collect SQLSTMT statistics, but FILE statistics are collected.

FILE Statistics

Monitor database files with the FILE entity. The FILE report provides information on all partitions of a specific file and record use by a user process.
The DISKFILE entity gathers activity statistics against all partitions of a file object. Use DISKFILE to determine if data is being evenly distributed across all the partitions in a database and how data is distributed among all partitions of a FILE. The DISCOPEN entity gathers statistics for partition opens by process. Use DISCOPEN to determine if the partitions in a database are being accessed evenly by the processes in an application.
• DISK-READS stores the number of physical disk I/O operations performed for this statement.
• RECOMPILES stores the number of times the statement was recompiled. For valid statements, this number should be zero. If the statement has been recompiled, the counter for each session would be one because an invalid statement is usually recompiled only once in a session. If this number is one or greater, you should consider explicitly SQL compiling the program.
• WRITES stores the number of inserts.
• UPDATES stores the number of updates.
• DELETES stores the number of deletes.
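The following is a hedged sketch of a Measure session that could collect SQLSTMT and FILE counters like those described above. The measurement data file, program file, and table file names are hypothetical, and the exact entity specifications should be confirmed in the Measure Reference Manual:

ADD SQLSTMT $DATA01.APPOBJ.ORDPROG
ADD FILE $DATA02.ZSDAB123.TBL00100
START MEASUREMENT $DATA03.MEAS.ORDMEAS
STOP MEASUREMENT $DATA03.MEAS.ORDMEAS
ADD MEASUREMENT $DATA03.MEAS.ORDMEAS
LIST SQLSTMT $DATA01.APPOBJ.ORDPROG
LIST FILE $DATA02.ZSDAB123.TBL00100

The application workload runs between the START MEASUREMENT and STOP MEASUREMENT commands; ADD MEASUREMENT then opens the data file so that the LIST commands can report the collected counters.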
13 Enhancing SQL/MX Database Performance

To achieve maximum performance, you must provide sufficient hardware to handle the throughput and size of the application database. In addition to hardware, many factors affect the performance of a database and application. Some factors are system dependent; others are application dependent. The factors discussed in this section are specific performance issues that can arise in an installed and operating SQL/MX environment.
NOTE: The WITH SHARED ACCESS option of MODIFY is supported for both range-partitioned and hash-partitioned objects, but only when the partitioning key is a prefix of the clustering key. For more information, see the SQL/MX Reference Manual.
Minimizing Contention

When managing partitions, you can minimize contention by using the WITH SHARED ACCESS option of the MODIFY TABLE or MODIFY INDEX statements. For example, this MODIFY TABLE statement uses the WITH SHARED ACCESS option to provide online execution on range-partitioned objects:

MODIFY TABLE cat1.sch1.table1 ADD WHERE KEY = value ('San Jose')
  TO LOCATION $DATA12 WITH SHARED ACCESS COMMIT;

When you specify the WITH SHARED ACCESS option with MODIFY, these actions occur:
1.
• Operator intervention might be necessary in these situations:
  ◦ To restore backed-up audit trails if the audit fix-up process does not find an audit trail online. If the requested audit trail does not exist, the request fails.
  ◦ To restore the online dump if an online dump on tape is needed for a newly created partition or index.
  ◦ In response to EMS event messages indicating when online dumps can be taken, operator intervention is required to start the online dumps.
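For example, an operator could start the requested online dumps with a TMFCOM DUMP FILES command; the file set shown here is hypothetical:

DUMP FILES $DATA01.ZSDAB123.*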
Because the locks used in Step 1 of the operation are exclusive, they have no special priority over other locks that can also be issued on the objects. Therefore, to enable the exclusive locks required by these functions, you might need to manage the application activity as follows:
1. During Step 1, do not compile programs that require access to the metadata involved for update or that refer to affected objects.
2. During Step 2, you can resume application transaction activity.
3.
• “Using SCF ALTER DISK to Resize the SQLMXBUFFER Attribute” (page 302)
• “Strategies for Reducing Reuse Failures” (page 302)

For every disk volume running NonStop SQL/MX on your system, approximately 1.1 GB of virtual space must be shared between the DP2 data cache, SQL/MX buffer, and lock tables. The DP2 data cache provides temporary storage for disk blocks containing table and index data that are referenced frequently.
Causes and Symptoms of Query Plan Fragment Reuse Failures

Query plan reuse failures occur when DP2 is forced to quickly drop inactive plan fragments, even those that are repeatedly used. The most likely causes for these reuse failures are:
• The SQL/MX buffer is too small to store all the plan fragments repeatedly being used to support application queries.
STORAGE - Stats DISK \DMR11.$SYSTEM

SQL/MX Statistics:
Session Data bytes.......  81920 KB    Max Data bytes...  131072 KB
Total Sessions...........      0       Active Sessions..       0
- 4KB Blocks - - - - - - - - - - -     - Reuse - - - - - - - - -
Max......................  32640       Attempts.........  499463
Number...................  20352       OK...............  499460
In Use...................      0       Failed FST.......       0
                                       Failed ID........       3

• Max Data bytes identifies the configured size of the SQL/MX buffer for the disk.
The following are the Publish/Subscribe counters:

Publish/Subscribe bytes..  54765 KB
Max Subscribers..........    512
Keys In Use..............      0
Subscribers Active.......      0
Resyncs..................      0
Maximum Tables...........    512
Tables Active............      0

• Publish/Subscribe bytes denotes the number of kilobytes allocated for publish/subscribe events.
• Max Subscribers denotes the maximum number of subscribers allowed per disk volume (512).
• Subscribers Active denotes the number of active subscribers.
• Memory Pressure denotes the number of failures caused because memory pressure was detected.
• No Memory denotes the number of failures caused by the unavailability of physical memory for the allocation.

Using SCF ALTER DISK to Resize the SQLMXBUFFER Attribute

Use the SCF ALTER DISK command to change the size of the SQL/MX buffer on a disk volume.
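A minimal sketch of this SCF sequence, assuming a hypothetical volume name and buffer value; the valid range and units for the SQLMXBUFFER attribute, and whether the change takes effect online, are described in the SCF Reference Manual for the Storage Subsystem:

INFO DISK $DATA01, DETAIL
ALTER DISK $DATA01, SQLMXBUFFER 256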
and backup CPUs for the data volume. While both are contained within virtual space, and clean pages can be stolen by the memory manager, over-configuration can lead to excessive page swapping and performance loss. In the worst-case scenario, an over-configured data cache can cause insufficient physical memory to meet the demand for pages, which can contribute to a variety of failures.

NOTE: The backup DP2 CPU for a given disk does not use the SQL/MX buffer space unless it becomes the primary DP2.
CACHE FAULTS to CACHE CALLS is even one or two percent, consider reducing the cache size, adding more physical memory to the CPU, or moving processing to other CPUs. DP2 data cache size is controlled through the Subsystem Control Facility (SCF). For more information on using SCF to set cache size, see the SCF Reference Manual for the Storage Subsystem.
amount of swap file space is not available, an SQL/MX compilation might fail, or an executing statement might return an “insufficient memory” error. See the discussion on managing kernel-managed swap files in the Kernel-Managed Swap Facility (KMSF) Manual for more information on how to use NSKCOM to monitor kernel-managed swap file use, how to monitor swap file usage by process, and how to change your KMSF configuration.
Another disadvantage of co-located indexes is that if a table partition is moved to another disk, you must remember to move its corresponding index partition.

Performing FUP RELOADs to Generate More Accurate Query Plans

As tables and indexes are subjected to repeated INSERT and DELETE operations, they become more fragmented and less efficiently organized. Over a period of weeks, it can become impossible for MXCMP to accurately estimate the costs for different query plans for those tables.
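For example, an online FUP RELOAD of a partition's underlying Guardian file can be run at a controlled rate; the file name here is hypothetical, and the full option list is in the File Utility Program (FUP) Reference Manual:

FUP RELOAD $DATA01.ZSDAB123.TBL00100, RATE 40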
◦ Maintaining programs, requesters, or servers to programmatically check data input can require additional programming time. In addition, your site must have methods or programs to verify that the existing tables conform to the new data checks.
◦ Programs with hard-coded validity checking cannot move as easily from one set of users to another as programs without hard-coded values.
Matching Block Split Operation to Table Usage

In a table with key-sequenced organization, when an INSERT operation causes a data block to overflow, the DP2 process must make room for the new row by splitting the block in the middle and transferring some of its contents to a newly allocated block. Because NonStop SQL/MX does not support the SEQUENTIAL BLOCKSPLIT option for the CONTROL TABLE statement, you cannot split the block at the insertion point when rows are being inserted in sequence.
A Using Guardian Names with TMF, RDF, and Measure

The TMF, RDF, and Measure subsystems provide important infrastructure and management support to NonStop SQL/MX. Currently, however, the user interfaces to these subsystems require Guardian physical file names, whereas SQL/MX tables can be referenced by their ANSI logical names:
• Guardian physical file name format: [\node.][[$volume.]subvolume.]filename
• ANSI logical file name format: [[catalog.]schema.]object
NOTE: Because of the 239-character limit for EDIT files, MXGNAMES cannot support table names in the input file whose overall length is greater than 239 characters, including the dots separating the catalog, schema, and table name portions. Such names have to be specified individually on the MXGNAMES command line.

SHOWDDL File

SHOWDDL output is commonly saved to an OSS format text file, which has to be converted using CTOEDIT before being used as input to MXGNAMES.
create table cat.sch.t126a
  (c1 int not null
  , c2 timestamp default current_timestamp not null
  , c3 char(4) default 'abcd'
  , c4 smallint not null
  , primary key (c1,c2))
  location $vol1.zsd0126a.bxnl1r00
  partition
  (add first key (1r00) location $vol2.zsd0126a.bxnl2r00
  , add first key (2r00) location $vol3.zsd0126a.bxnl3r00
  , add first key (3r00) location $vol4.zsd0126a.bxnl4r00
  , add first key (4r00) location $vol5.zsd0126a.bxnl5r00
  , add first key (5r00) location $vol6.zsd0126a.
& -- Table CAT.SCH.T126B--&
$vol1.zsd0126a.bxnw1r*, &
$vol2.zsd0126a.bxnw2r*, &
$vol3.zsd0126a.bxnw3r*, &
$vol4.zsd0126a.bxnw4r*, &
&
-- index T126B_NDX1 on CAT.SCH.T126B-- &
$vol1.zsd0126a.qdx1g1*,&
$vol2.zsd0126a.qdx1g2*,&
$vol3.zsd0126a.qdx1g3*,&
$vol4.zsd0126a.qdx1g4*,&
$vol5.zsd0126a.qdx1g5* &
-- end of Table CAT.SCH.T126B--)

TMF Example 2: Input Is SHOWDDL Text

MXGNAMES -Showddl=$VOL1.SQLSTUFF.SHOWD123 -output=NAMELST -TMF

Given the file SHOWD123:

CREATE TABLE CAT.SCH.
ADD FIRST KEY (100) LOCATION \NSK.$VOL2.ZSD0126A.QDXWG200 NAME PART_S_T_U ,
ADD FIRST KEY (500) LOCATION \NSK.$VOL3.ZSD0126A.QDXWG300 NAME PART_Y_Z_1 ,
ADD FIRST KEY (700) LOCATION \NSK.$VOL4.ZSD0126A.QDXWG400 NAME PART_A_Z_1 ) ;

The resulting contents of file NAMELST would be:

(-- Table CAT.SCH.T126A -- &
$vol1.zsd0126a.bxnl1r*,&
$vol2.zsd0126a.bxnl2r*,&
$vol3.zsd0126a.bxnl3r*,&
$vol4.zsd0126a.bxnl4r*,&
$vol5.zsd0126a.bxnl5r*,&
$vol6.zsd0126a.bxnl6r*,&
&
-- index T126A_NDX1 on CAT.SCH.T126A -- &
$vol1.
LOCATION ( \pnode.$vol1.zsd0126a.bxnl1r00 TO \bnode.$vol1.zsd0126a.bxnl1r00,
           \pnode.$vol2.zsd0126a.bxnl2r00 TO \bnode.$vol2.zsd0126a.bxnl2r00,
           \pnode.$vol3.zsd0126a.bxnl3r00 TO \bnode.$vol3.zsd0126a.bxnl3r00,
           \pnode.$vol4.zsd0126a.bxnl4r00 TO \bnode.$vol4.zsd0126a.bxnl4r00,
           \pnode.$vol5.zsd0126a.bxnl5r00 TO \bnode.$vol5.zsd0126a.bxnl5r00,
           \pnode.$vol6.zsd0126a.bxnl6r00 TO \bnode.$vol6.zsd0126a.bxnl6r00,
           \pnode.$vol1.zsd0126a.qdxwg100 TO \bnode.$vol1.zsd0126a.qdxwg100,
           \pnode.$vol2.zsd0126a.
\pnode.$vol3.zsd0126a.qdx1g300 TO \bnode.$vol3.zsd0126a.qdx1g300,
\pnode.$vol4.zsd0126a.qdx1g400 TO \bnode.$vol4.zsd0126a.qdx1g400)

Restore Example 3: Input Is an SQL/MX Table Name

MXGNAMES CAT.SCH.