HP NonStop SQL/MX Release 3.2 Management Guide HP Part Number: 691120-001 Published: August 2012 Edition: J06.14 and subsequent J-series RVUs; H06.
© Copyright 2012 Hewlett-Packard Development Company, L.P. Legal Notice Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice.
Contents About This Manual......................................................................................12 Supported Release Version Updates (RVUs)................................................................................12 Audience...............................................................................................................................12 Related Documentation............................................................................................................
Types of Key-Sequenced File Access.....................................................................................31 Key-Sequenced Tree Structure..............................................................................................31 Planning Table and Index Partitioning........................................................................................33 Range Partitioning and Hash Partitioning..............................................................................
SQL/MX Subvolume and File Naming Guidelines.......................................................................72 SQL/MX Subvolume Naming Guidelines..............................................................................73 SQL/MX Data File Naming Guidelines.................................................................................73 Resource Forks...................................................................................................................
SHOWLABEL Command...................................................................................................105 mxtool VERIFY command..................................................................................................105 SHOWDDL Command.....................................................................................................106 SQL/MX Metadata Tables.....................................................................................................
Displaying all Columns in a View.......................................................................................139 Displaying all Columns in an Index....................................................................................139 Displaying all Columns in a Primary Key or Unique Constraint...............................................140 Displaying all Columns in a NOT NULL Constraint...............................................................
Dropping Triggers............................................................................................................174 Dropping Views...............................................................................................................174 8 Reorganizing SQL/MX Tables and Maintaining Data..................................176 Purging Dropped Tables From the DDL Directory.......................................................................176 Using FUP RELOAD to Reorganize Tables.........
Using REGISTER CATALOG to Access Objects on Remote Nodes...........................................216 Maintaining Local Autonomy for Programmatic Queries........................................................216 Ensuring Proper Name Resolution...........................................................................................217 Setting DEFINEs...............................................................................................................217 Setting SQL/MP Aliases....................
Managing a Network-Distributed SQL/MX Database................................................................264 SQL/MX Distributed Database Features..............................................................................264 Naming Network Nodes..................................................................................................267 Naming SQL/MX Database Objects..................................................................................
MXGNAMES Output Files.................................................................................................307 MXGNAMES Examples....................................................................................................307 Index.......................................................................................................
About This Manual This manual discusses issues related to managing an SQL/MX system. In this manual, references to SQL/MX Release 2.x indicate SQL/MX Release 2.3.1 and subsequent releases, and references to SQL/MX Release 3.x indicate SQL/MX Release 3.0 and subsequent releases, until indicated otherwise in a replacement publication. Supported Release Version Updates (RVUs) This publication supports J06.14 and all subsequent J-series RVUs and H06.
Specialized Guides SQL/MX Installation and Upgrade Guide Describes how to plan for, install, create, and upgrade an SQL/MX database. SQL/MX Management Manual SQL/MX Query Guide Describes how to understand query execution plans and write optimal queries for an SQL/MX database. SQL/MX Data Mining Guide Describes the SQL/MX data structures and operations to carry out the knowledge-discovery process.
Related SQL/MP Manuals SQL/MP Reference Manual Describes the SQL/MP language elements, expressions, predicates, functions, and statements. SQL/MP Installation and Management Guide Describes how to plan, install, create, and manage an SQL/MP database. Describes installation and management commands and SQL/MP catalogs and files. New and Changed Information The HP NonStop SQL/MX Release 3.1 Installation and Management Guide is consolidated with HP NonStop SQL/MX Release 3.
“Enhancing SQL/MX Database Performance” (page 290) Describes how to enhance database performance. “Using Guardian Names with TMF, RDF, and Measure” (page 306) Describes how to use MXTOOL to obtain Guardian names for the NonStop Transaction Management Facility (TMF), the Remote Database Facility (RDF), and Measure. Notation Conventions General syntax notation This list summarizes the notation conventions for syntax presentation in this manual. UPPERCASE LETTERS.
… Ellipsis. An ellipsis immediately following a pair of brackets or braces indicates that you can repeat the enclosed sequence of syntax items any number of times. For example: M address [ , new-value ]… [ - ] {0|1|2|3|4|5|6|7|8|9}… An ellipsis immediately following a single syntax item indicates that you can repeat that syntax item any number of times. For example: "s-char…" Punctuation. Parentheses, commas, semicolons, and other symbols not previously described must be typed as shown.
1 Introduction to SQL/MX Database Management This section provides an introduction to managing an SQL/MX database. For more information about managing an SQL/MP database, see the SQL/MP Installation and Management Guide. Managing an SQL/MX relational database typically involves managing sets of continuously active programs, as well as the database itself.
• “Multiple Character Sets” (page 20) • “Database Security” (page 20) • “Parallel Processing” (page 20) • “High Availability” (page 21) Distributed Databases An SQL/MX database distributed across multiple nodes in an Expand network provides a high level of transparency, location independence, read and update capability, and data integrity. Objects in the same database can reside on different compatible nodes. The partitions of a table or index can also be spread across compatible nodes.
To back up and restore SQL/MP tables or other SQL/MP objects, you must use the T9074 BACKUP and RESTORE utilities. For more information, see SQL/MP Installation and Management Guide, SQL/MP Reference Manual, and the Guardian Disk and Tape Utilities Reference Manual. Utilities provide backup functionality for SQL executables and program modules stored in OSS files. For more information, see the SQL/MX Reference Manual.
Multiple Character Sets NonStop SQL/MX allows you to associate one of these character sets with a literal or host variable: ISO88591 Default single-byte 8-bit character set for character data types. It supports English and other Western European languages. UCS2 Double-byte Unicode character set in UTF16 big-endian encoding. All Basic Multilingual Plane (BMP) characters are included. Surrogate characters are treated as two double-byte characters.
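For example, a column can be declared with the UCS2 character set and populated with a UCS2 string literal. This is a minimal sketch only; the catalog, schema, table, and column names are hypothetical:
CREATE TABLE mycat.myschema.translations
( msg_id INT NOT NULL
, msg_text CHAR(40) CHARACTER SET UCS2 DEFAULT NULL
, PRIMARY KEY (msg_id) );
INSERT INTO mycat.myschema.translations VALUES (1, _UCS2'Beispieltext');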
High Availability These SQL/MX features help to ensure high availability for databases: • Online dumps using the TMFCOM DUMP FILES command, with complete support for TMF file recovery to recover a database in case of a disaster. • Online database reorganization capabilities such as online partition moving, partition splitting, and table and index row redistribution with concurrent read and update capability. For more information, see “Reorganizing SQL/MX Tables and Maintaining Data” (page 176).
For more information about using Guardian physical names for SQL/MP objects, see the SQL/MP Reference Manual and the SQL/MP Installation and Management Guide. DEFINE Names Use class MAP DEFINE names as logical names for SQL/MP tables or views in DML statements. When NonStop SQL/MX compiles such statements, it replaces the DEFINE name in the statement with the associated Guardian physical name. You can create DEFINE names within MXCI or names can be inherited from the TACL process or the OSS shell.
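For example, you might add a class MAP DEFINE from TACL before starting MXCI and then use the DEFINE name in place of the SQL/MP table's Guardian physical name in a DML statement. This is a sketch only; the node, volume, subvolume, and file names are hypothetical:
ADD DEFINE =EMPDEF, CLASS MAP, FILE \NODE1.$DATA01.PERSNL.EMPLOYEE
Then, within MXCI:
SELECT first_name, last_name FROM =EMPDEF;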
Table 1 SQL/MX Metadata Schemas (continued) Schema Name Schema Contents example, a list of Guardian locations for a table’s partitions). SYSTEM_DEFAULTS_SCHEMA The SYSTEM_DEFAULTS table resides in the system catalog under this schema. This user metadata table contains the default settings for options and other attributes used with MXCI commands and SQL/MX statements. MXCS_SCHEMA Five NonStop SQL/MX Connectivity Service (MXCS) tables reside in the system catalog under this schema.
Table 2 SQL/MX Object Types (continued) SQL/MX Object Type Description the view contents from other tables or views. SQL/MX views conform to the ANSI/ISO/IEC 9075:1999 SQL standard, more commonly referred to as SQL:1999, for their definition and security. Index An alternate access path to a table (alternate key) that differs from the primary access path (clustering key) defined when the table was created. An SQL/MX index includes columns for the alternate key and the underlying table’s clustering key.
OSS is an open computing interface to the HP NonStop system. The OSS environment coexists with the Guardian environment and provides a fully independent alternative to it. OSS interoperates with the HP NonStop operating system and makes use of many of its unique features, such as fault tolerance and parallel processing. However, OSS is not an operating system, and it does not replace portions of the operating system. The OSS environment is case-sensitive.
• Altering databases • Managing databases and programs • Reorganizing and moving databases • Managing database applications • Performing recovery operations • Managing distributed databases • Assembling and optimizing queries • Measuring and enhancing performance SQL/MX Database Management Tools NonStop SQL/MX supports these types of SQL/MX statements: DDL Creates, deletes, or modifies the definition of a catalog, schema, or object, or the authorization to use an object.
Table 3 NonStop Tools for SQL/MX Database Management (continued) Program Description DSAP Disk Space Analysis Program; analyzes use of space on disk volumes, reporting on factors such as free space availability and extent allocation. FCHECK Checks internal consistency of structured files and reports consistency errors. FCHECK verifies the integrity of individual SQL/MX data rows, checking row length and field offsets to make sure they are within acceptable limits.
2 Understanding and Planning SQL/MX Tables An understanding of how SQL/MX tables are organized and of their key-sequenced file structure is essential for effective use and functioning of the database.
Understand the Schema Ownership Rule Starting with SQL/MX Release 3.1, new security features are introduced. For more information on these features, see the SQL/MX Reference Manual. Avoid Using Duplicate Catalogs on Multiple Distributed Nodes If you create catalogs with the same name on two or more nodes, each node can access only the objects in the local catalog. You cannot join the tables or indexes in the same-named catalogs on different nodes.
Using Keys in SQL/MX Tables and Indexes At the physical level, a key is a field or group of fields that the system can use to order records and to identify records for processing. SQL/MX tables and indexes use the keys described in this subsection. Primary Key A primary key is the column or set of columns that define a unique key for an SQL/MX table. The primary key is a constraint and must conform to the ANSI rules for constraints. All columns defined for the primary key must be defined as NOT NULL.
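For example, this statement defines a single-column primary key; the table and column names are illustrative only:
CREATE TABLE mycat.myschema.orders_demo
( ordernum INT NOT NULL
, order_date DATE NOT NULL
, PRIMARY KEY (ordernum) );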
The Key-Sequenced File Structure All SQL/MX tables are key-sequenced files, which store rows (records) that contain a clustering key. New rows are stored in sequence by clustering key value. A user performing update operations can update or delete rows and lengthen or shorten values in a varying-length column (VARCHAR, NCHAR VARYING) when the column is not part of the clustering key.
Figure 1 Key-Sequenced B-Tree Structure Each data block contains a header plus one or more data records, depending on the record size and data-block size. For each data block, an entry in an index block contains the value of the key field for the first record in the data block and the address of that data block. The position of a new record inserted into a key-sequenced file is determined by the value of its clustering key field.
Planning Table and Index Partitioning With medium to very large databases, it is often necessary to use partitioned tables and indexes to break down the data into smaller and more manageable units. Each partition can be managed individually and can function independently of the other partitions. Partitioning can provide significant benefits to a wide variety of applications by improving manageability, performance, and availability.
tunable method for data placement because you can influence availability and performance by spreading these evenly sized partitions across I/O devices (striping). Hash partitioning achieves many of the scan reduction benefits of range partitioning without requiring you to know the distribution of the partitioning key in advance. In addition, use hash partitioning in a “decoupled” fashion to cluster data on a key other than the partitioning key.
INSERT, and ALL privileges provide the user with access to all columns in the table. To protect your tables from access by unauthorized users, grant privileges on views of the tables rather than on the tables themselves. • Alterations to tables generally have a greater impact on application code that uses the tables directly. For example, an application should not use SELECT * to retrieve values from a table, because adding a column to the table would mean that the application must change.
• Ad hoc queries are limited to data returned by views. Restricting users to using only views can prevent them from accessing sensitive data. • Views provide the same performance as the tables. • Views can help provide data integrity. Disadvantages include: • You cannot update views that are defined on more than one table. • Managing the database is more difficult because of the greater number of objects.
A second use of an index is to eliminate a run-time sort of data columns by providing an access path to the data in the order of the desired selection or by reverse order: SELECT A,B FROM ATABLE ORDER BY A, B; A third use of an index is to avoid a full table scan: SELECT ITEM_NAME, RETAIL_PRICE FROM INVNTRY WHERE RETAIL_PRICE = 100; Without an index on RETAIL_PRICE and assuming RETAIL_PRICE is not the leading column on the table’s clustering key, NonStop SQL/MX must scan the table and evaluate the predicate
The use of an alternate index does not ensure that the optimizer will choose the index as the access path. Index use depends highly on selectivity, described in the SQL/MX Query Guide. In general, these guidelines apply: • If the query can be satisfied with an index-only access, the optimizer normally uses the index. • Low selectivity of a predicate means that many rows are selected. For base table access through an index, the optimizer performs a random access read against the table.
key of the underlying table are not included in the clustering key of the index, but are physically included in the index file. The clustering key for an index can contain columns with null values. However, two null values in the same column are treated as equal values for purposes of unique constraint violation checking. In calculating the length of an index key, consider the extra bytes added to the length of any column that is not specified as NOT NULL NOT DROPPABLE.
If the INVNTRY table is large, the cost of sorting the table might be very high. An index on the columns QTY_ORDERED and RETAIL_PRICE, defined as follows, might mean that no sort is required to satisfy the ORDER BY clause: CREATE INDEX RPRICE ON INVNTRY (QTY_ORDERED, RETAIL_PRICE) ; To avoid a sort, define an index on the same columns that appear in the ORDER BY clause. The sequence of these columns in the ORDER BY clause should then match the sequence of columns in the index.
3 Planning Database Security and Recovery Database security and recovery are essential topics to consider before creating an SQL/MX database. Planning for security is the primary protection against unauthorized user intervention. Security, however, cannot eliminate errors by authorized users. Planning for recovery is essential for protecting your database.
Database Access and Security Access to SQL/MX database objects is governed by access privileges. Access privileges for SQL/MX tables, views, and indexes are provided through the GRANT and REVOKE statements. For more information about assigning and removing access privileges for SQL/MX objects, see “Access Privileges for SQL/MX Database Objects” (page 78) and the SQL/MX Reference Manual. For more information about Guardian security, see the Security Management Guide.
File and Directory Commands Only the file owner or the super ID can alter a file’s permission bits and thereby control access to that file. For detailed descriptions of the commands to alter these bits, see the Open System Services User’s Guide. Like the FILEINFO and FUP INFO commands in the Guardian environment, the ls command in the OSS environment allows users to display information, including the permission codes, for their files and directories.
• Safeguard volume protection records can control who is authorized to create disk files on specific disk volumes. • Safeguard process-protection records can control who is authorized to use specific process names. Safeguard access control lists cannot be used to protect OSS files. Access to OSS files is controlled by OSS file-permission bits.
Fault-tolerant hardware and software strategies provide maximum protection against most equipment failures, power failures, and some catastrophic failures. This protection, however, does not eliminate your need to plan carefully to protect database and application software. After formulating a comprehensive recovery strategy, practice carrying out the plan on a regular, consistent basis.
Recovering Dropped Database Objects The method for recovering an accidentally dropped SQL/MX object depends on whether that object is a catalog, table, index, or view. Recovery of a single view or index is usually a straightforward operation. Recovery of a table or catalog, however, can be complex and difficult if there are multiple dependent objects.
is that it requires a lot of work: you need to continually keep track of all interdependent objects, and process many tapes during recovery. ◦ If you perform collective dumps of the SQL/MX catalog and all its objects, this command might enable faster recovery, but you must continually update the SQL OBEY (script) files that you use to rebuild SQL/MX objects. NOTE: You can use OBEY command files containing TMFCOM command scripts for TMF tasks that you perform repeatedly.
Audit Trails If a system, disk, or program fails during a transaction, TMF uses audit trails to restore the files to their original state before the start of the transaction. Each audit trail is a series of files in which TMF records information about transaction changes to a database. The information includes: • Before-images, which are copies of data records before a transaction changes them. • After-images, which are copies of the changed records.
Volume Recovery Volume recovery recovers the database in the event of a disk crash or system failure. When TMF is restarted after a failure, volume recovery is initiated automatically for each accessible data volume on the system (except for volumes explicitly disabled in TMF). To recover the files, the volume recovery process reapplies committed transactions to ensure they are reflected correctly in the database, and then backs out all transactions that were incomplete at the time of the interruption.
Size Considerations When determining the size requirements of TMF for NonStop SQL/MX, consider these guidelines: • DDL statements execute within system-defined TMF transactions, generating audit-trail entries. DDL statements that refer to large tables can generate a large volume of audit-trail entries. Transaction volume includes database use by both application programs and MXCI interactive capabilities. The interactive volume might be minimal or might generate many audit-trail entries.
is necessary, your online dumps already reflect the batch updates. TMF would need to apply only those database changes that occurred after online dumps were taken. • When you create a new table and want to provide file recovery protection for it, make an initial online dump of the table after creation. • Certain DDL statements and utility commands invalidate previous online dumps.
Table 4 SQL/MX Operations That Impact TMF Online Dumps (continued) Operation Option Effect Recovery Strategy ensure TMF file recovery protection. (This includes dumps of the table’s partitions, plus dumps of partitions associated with dependent indexes.) Fastcopy FIXUP utility Invalidates online dumps for the target tables and target indexes. Make online dumps of the target object after the fastcopy operation to ensure TMF file recovery protection.
For example, if an SQL/MX table to be dumped has the definition: CREATE TABLE CAT.SCH.T056T11 ( WREN INT NO DEFAULT -- NOT NULL NOT DROPPABLE , JUNCO INT NO DEFAULT -- NOT NULL NOT DROPPABLE , BLACKBIRD INT DEFAULT NULL , CONSTRAINT CAT.SCH.T056T11_104569472_0002 PRIMARY KEY (WREN ASC, JUNCO ASC) NOT DROPPABLE , CONSTRAINT CAT.SCH.T056T11_104569472_0001 CHECK (CAT.SCH.T056T11.WREN IS NOT NULL AND CAT.SCH.T056T11.JUNCO IS NOT NULL) NOT DROPPABLE ) LOCATION \NSK.$DATA1.ZSDADLON.
Note how individual resource fork names are accounted for. Alternately, the same list could have been specified: DUMP FILES & ( & $DATA1.ZSDADLON.NSPD2Q0?,& $DATA2.ZSDADLON.N0FK2Q0? & $DATA3.ZSDADLON.N75R2Q0?,& $DATA4.ZSDADLON.OFVY2Q0?,& $DATA5.ZSDADLON.ONL42Q0?,& $DATA1.ZSDADLON.R70U5Q0?,& $DATA2.ZSDADLON.SFQ15Q0?,& $DATA3.ZSDADLON.SNG85Q0? & ) In this case, the final question mark (.SNG85Q0?) includes both the data fork and the resource fork.
others, are described next. For more information about volume recovery and file recovery, see the TMF Operations and Recovery Guide. Using TMF to Recover Dropped SQL/MX Objects TMF is your primary means of recovering dropped SQL/MX objects.
NOTE: The format of the TMF audit records for SQL/MX objects changed in SQL/MX Release 3.0. As a result, R2.3.x systems cannot apply the new-format (R3.x) audit information, whereas R3.x systems can apply R2.3.x audit information. R2.3.x primary systems can replicate audit-trail content to R3.x disaster recovery systems. This allows you to upgrade the backup system to R3.x and test the upgrade prior to production. When the R3.x system becomes the active node, the constraints are: • R2.3.
When you create the schema on the RDF backup node, create it with the same schema subvolume name as that on the primary node. Using Backup and Restore 2 to Create Offline RDF Backup Databases Use Backup and Restore 2 to specify: • An RDF backup catalog name that is different from the primary catalog, using the TGT CATALOG syntax. • The physical location of SQL/MX objects as they are restored, using the LOCATION option to restore an SQL/MX table file to a different location.
Source files not explicitly mapped with the LOCATION clause will have their destination location generated by the system, using the default volumes and internally generated file names, as for any newly created table. Creating an RDF Backup Database With Identical File Names Suppose that an SQL/MX table has these source locations: \NODE1.$DATA01.ZSDABCDEF.FILEX100 \NODE1.$DATA02.ZSDABCDEF.FILEX100 \NODE1.$DATA03.ZSDABCDEF.FILEX100 Suppose that the desired RDF primary-to-backup volume mappings are: \NODE1.
Synchronizing an RDF Backup Database Whether created and populated online or offline, an RDF backup database must be synchronized with the primary database before use. This process is described in the RDF/IMP, IMPX, and ZLT System Management Manual. Using RDF to Recover Dropped Tables NonStop SQL/MX can automatically save the DDL for any individually dropped table, which can be retrieved later if the dropped table needs to be re-created.
In each case, the quotes delimiting the identifier are removed. For example, if the table CAT.”S&C%H”.”T*A*B?01” is dropped at 12:57:15 am on April 24, 2011, the saved DDL file would be: /usr/tandem/sqlmx/ddl/CAT.S_C_H.T_A_B_01-20110424-215715.ddl If the three-part ANSI name exceeds the maximum OSS file name length of 248, it is truncated to 248 characters. Files with similar names can be distinguished by: • The trailing timestamp portion of the name.
Table 5 Backup and Restore of SQL/MX and OSS Objects SQL/MX or OSS Object BACKUP Command RESTORE Command Catalog Explicitly backs up the named catalog and all Explicitly restores the named catalog and all subordinate objects. See “Backing Up Catalogs” subordinate objects. (page 63). Schema Explicitly backs up the named schema and all subordinate objects. Unless specifically excluded, a schema is automatically backed up when its associated catalog is backed up. See “Backing Up Schemas” (page 63).
Table 5 Backup and Restore of SQL/MX and OSS Objects (continued) SQL/MX or OSS Object BACKUP Command RESTORE Command OSS program executable file or user module Does not automatically back up program Use OSS backup objects to restore program executables or user modules. You back up executables or user modules in one or more sets program files or executables by specifying OSS of OSS directories and subdirectories. backup objects.
code. By using the WHERE expression, you can back up only SQL/MX files that have been modified since a certain date. If you perform partial backups, you must perform a full backup periodically to ensure that all files have been saved.
BR> BACKUP $tape-drive, MX (TPART cat1.sch1.table1 PARTITION (part1, part2)) BR+>, PARTONLY ON; You can use the BACKUP job option PARTONLY only when a database has partitioned tables and indexes. PARTONLY enables you to back up and restore individual components of a partitioned database. (For more information, see the Backup and Restore 2 Manual.) You cannot use the PARTONLY option with the INDEXES INCLUDED option.
However, except for GRANT and REVOKE privileges, you can capture the DDL for the CREATE statements executed for these objects and use this information to manually re-create these objects after a Restore operation. For more information, see the “Using the RESTORE SHOWDDL ON Option to Re-Create Objects” (page 66). For more information, see the “Example: Recovering Table Privileges” (page 239).
To restore class or JAR files, use OSS restore objects to specify the original OSS directory locations and optional target locations under which the original directories are restored: BR> RESTORE $tape-drive, OSS ((/usr/jarfiles, TARGET /newdir), (/usr/spjclasses, TGT /newdir)); For more information about backing up and restoring OSS objects, see the Backup and Restore 2 Manual.
4 Reviewing and Setting System Defaults NonStop SQL/MX uses system-defined default settings for attributes associated with compiling and executing queries. The system-defined default settings, which are hard-coded settings, are optimal in most cases. Under some circumstances, however, you might want to override a specific system-defined default setting.
COMPILERCONTROLS is a table-valued stored procedure that is implemented in the compiler. For more information, see the SQL/MX Reference Manual. Consequences of Inserting Rows Into the SYSTEM_DEFAULTS Table Changes you make through the SYSTEM_DEFAULTS table are permanent until changed by a DELETE statement or another UPDATE statement. However, the values you insert into the SYSTEM_DEFAULTS table do not affect your current session immediately.
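For example, to override a system default for subsequent sessions, you might insert a row into the SYSTEM_DEFAULTS table and later delete it to restore the system-defined value. This sketch assumes the ATTRIBUTE and ATTR_VALUE columns described in the SQL/MX Reference Manual and a hypothetical node named \MYNODE:
INSERT INTO NONSTOP_SQLMX_MYNODE.SYSTEM_DEFAULTS_SCHEMA.SYSTEM_DEFAULTS
  (ATTRIBUTE, ATTR_VALUE)
VALUES ('ISOLATION_LEVEL', 'READ COMMITTED');
DELETE FROM NONSTOP_SQLMX_MYNODE.SYSTEM_DEFAULTS_SCHEMA.SYSTEM_DEFAULTS
WHERE ATTRIBUTE = 'ISOLATION_LEVEL';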
Table 6 Settings for Selected System Defaults (page 1 of 5) (continued) System Default Default Value Alternative Setting NATIONAL_CHARSET The default value is UCS2. The value ISO88591 is appropriate if ISO88591 is the preferred national character set. A value for NATIONAL_CHARSET is set when you run the InstallSqlmx script during installation of NonStop SQL/MX. This value can be changed only by reinstalling NonStop SQL/MX. For more information, see SQL/MX Installation and Upgrade Guide.
Table 6 Settings for Selected System Defaults (page 1 of 5) (continued) System Default Default Value Alternative Setting behaves like RESTRICT; ON means that NO ACTION behaves like RESTRICT, without warning or errors. 70 SAVE_DROPPED_TABLE_DDL The default value is ON. DDL is saved only if SAVE_DROPPED-TABLE_DDL is set to the default ON setting. HP recommends that you do not change this default value in production environments.
Settings Required to Achieve ANSI Compliance Table 7 (page 71) lists the system default settings required to achieve ANSI compliance.
5 Creating an SQL/MX Database When you install and initialize NonStop SQL/MX on a system node, NonStop SQL/MX creates the system catalog and several system schemas. After the installation is complete, you can create your own user catalogs and schemas. Creating a catalog updates the system schema. The definition schema for a user catalog is created when you create the first schema in that catalog. For more information, see “Creating SQL/MX Metadata” (page 75).
NOTE: NonStop SQL/MX additionally restricts system-generated names by excluding vowels to prevent names containing actual words that could be unexpected or misleading. SQL/MX Subvolume Naming Guidelines • The subvolume name must begin with the letters ZSD, followed by a letter, not a digit (for example, ZSDa, not ZSD2). • The subvolume name must be exactly eight characters long.
Resource forks contain similar information to Guardian file labels, but have the capacity to store a much greater variety and quantity of information. Examples of data fork/resource fork pairs are: • $DATA.ZSDADMM8.WDE6TI00 / $DATA.ZSDADMM8.WDE6TI01 • $DATA.ZSDADMM8.HEPQTI00 / $DATA.ZSDADMM8.HEPQTI01 • $DATA.ZSDADMM8.SS8BSI00 / $DATA.ZSDADMM8.SS8BSI01 Data files are sometimes referred to as data forks to distinguish them from resource forks.
where schema-clause is one of: • schema • schema AUTHORIZATION auth-id • schema AUTHORIZATION auth-id location-clause • schema location-clause location-clause is: LOCATION subvolume [reuse-clause] reuse-clause is: REPEAT USE ALLOWED schema-element is one of: • table-definition • view-definition • grant-statement • index-definition One use of the schema subvolume is to identify all Guardian files in a schema for use with Guardian-based commands for TMF, RDF, and other subsystems.
When NonStop SQL/MX is installed and initialized on a node, the system catalog, NONSTOP_SQLMX_nodename, and its schemas are automatically created. These metadata schemas are: • SYSTEM_SCHEMA schema, which contains the five system schema tables used for catalog reference and recording catalog and schema information. • SYSTEM_DEFAULTS_SCHEMA schema, which contains the default settings for options and other attributes used with MXCI commands and SQL/MX statements.
• A nonempty catalog cannot be dropped. • In a distributed database environment, only the super ID user and the person who creates a catalog can register and unregister it. For more information, see the “Managing an SQL/MX Distributed Database” (page 263).
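For example, you might first create a user catalog (the catalog name is hypothetical):
CREATE CATALOG mycat;
The CREATE SCHEMA examples that follow create schemas in this catalog.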
CREATE SCHEMA mycat.myschema; This example creates a schema that is owned by the user sql.gdavis and located on the volume ZSDA142A: CREATE SCHEMA mycat.myschema AUTHORIZATION "sql.gdavis" LOCATION ZSDA142A; The authorization ID must be the current authorization ID unless it is a super ID user. A super ID user can specify any currently valid authorization ID as the owner of the schema. For more information, see the “Access Privileges for SQL/MX Database Objects” (page 78) and the SQL/MX Reference Manual.
Examples for Granting Privileges This example grants SELECT privileges on a table, but not the privilege of granting SELECT privileges to others: GRANT SELECT ON TABLE persnl.employee TO "sql.user1"; This example grants SELECT and DELETE privileges on a table, including the privilege of granting SELECT and DELETE privileges to others: GRANT SELECT, DELETE ON TABLE sales.odetail TO "sql.user1", "sql.
Creating and Using Keys Create and use these keys for SQL/MX tables and their indexes: • Primary key. See “Creating and Using a Primary Key” (page 80). • Clustering key. See “Creating and Using a Clustering Key” (page 80). • SYSKEY. See “Using the SYSKEY” (page 81). • Partially decoupling the clustering and partitioning keys. See “Partially Decoupling the Clustering Key and the Partitioning Key” (page 82). • Partitioning key.
you cannot use it as the clustering key. If you attempt to use it, NonStop SQL/MX returns an error. • If you omit the STORE BY clause and do not specify a PRIMARY KEY that has the NOT DROPPABLE option, the clustering key storage order is determined by the SYSKEY only. Before deciding which columns to use in your clustering key, carefully consider these guidelines: • Each SQL/MX table and index must have just one clustering key.
specifies the lowest values in the partition for columns stored in ascending order and the highest values in the partition for columns stored in descending order. These column values constitute the partitioning key. You specify the first value allowed in the associated partition for that column of the partitioning key as a literal.
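For example, this statement creates a range-partitioned table whose partitioning key is the primary key, with FIRST KEY values that divide the rows across three volumes. This is a sketch only; the table name, volumes, and key values are hypothetical:
CREATE TABLE mycat.myschema.orders_rp
( ordernum INT UNSIGNED NOT NULL NOT DROPPABLE
, order_date DATE NOT NULL
, PRIMARY KEY (ordernum) NOT DROPPABLE )
LOCATION $DATA01
RANGE PARTITION
( ADD FIRST KEY (100000) LOCATION $DATA02
, ADD FIRST KEY (200000) LOCATION $DATA03 );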
Performance Benefits of Partitioning • Partitions are independent of one another and only the accessed partition needs to be available. The query plan stores the primary partition. If the primary partition cannot be found, the alternate partitions are checked, starting with local partitions. • Partitions improve transaction throughput by allowing simultaneous disk access to different partitions of the same table. • Partitions require no special access procedures.
Using HASHPARTFUNC to Plan and Build Hash-Partitioned Tables NonStop SQL/MX uses the HASHPARTFUNC function to hash partition data. NonStop SQL/MX passes column values and the number of partitions, n, to HASHPARTFUNC. The function then returns a hash value in the range 0 through n-1, where n is the number of partitions. This hash value determines the partition where a row with the given column value(s) will be stored.
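For example, to preview how existing rows would be distributed across four hash partitions on a prospective partitioning column, you might run a query such as the following sketch (the table and column names are from the sample database and are illustrative only):
SELECT part_num, COUNT(*) AS row_count
FROM (SELECT HASHPARTFUNC (custnum FOR 4) AS part_num
      FROM samdbcat.sales.customer) AS t
GROUP BY part_num;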
Using DDL_DEFAULT_LOCATIONS to Distribute Primary Range Partitions You can use the DDL_DEFAULT_LOCATIONS system default to specify one or more default volumes for the primary range partition in a CREATE statement that does not include a LOCATION clause. If DDL_DEFAULT_LOCATIONS specifies more than one default volume, it chooses one at random for the location specification.
location $DATA01 hash partition; Example for Creating an SQL/MX Table With Multiple Hash Partitions CREATE TABLE cat1.sch1.
PRIMARY KEY NOT DROPPABLE constraints), headings, and partitions unless include-option clauses are specified. Optionally, create the new table with the same constraints and headings as the source table. Alternatively, override these attributes and create the new table with different constraints and headings. • When you use the CREATE TABLE statement for SQL/MX tables, note that it does not provide the SIMILARITY CHECK clause used with SQL/MP, because similarity checking is always on for NonStop SQL/MX.
• Specify the HEADING and UPSHIFT attributes for your application’s use, if applicable. For more information, see “Specifying Column Attributes” (page 94). • Collation of numeric values occurs with negative numbers preceding positive numbers. • For the purpose of sorting, the null value is considered to be greater than all other values. • For compatibility of SQL/MX data types, two data types are comparable if a value of one data type can be compared to a value of another data type.
CHAR, or NATIONAL CHARACTER. Variable-length characters: CHAR VARYING, VARCHAR, NCHAR VARYING, or NATIONAL CHAR VARYING. For NonStop SQL/MX, the maximum row size is approximately 32 KB, provided the Control Query Default DEFAULT_BLOCKSIZE is set to 32768. However, the actual row size is less because of bytes used by the header, null indicator, column length indicator, and other system features. For more information, see the SQL/MX Reference Manual.
Defining Numeric Data Numeric data types are either exact or approximate. Exact numeric data types can represent a value exactly; approximate numeric data types do not. NonStop SQL/MX supports these exact and approximate numeric data types: the exact numeric data types are NUMERIC, SMALLINT, INTEGER, LARGEINT, DECIMAL, and PICTURE; the approximate numeric data types are FLOAT, REAL, and DOUBLE PRECISION. FLOAT is compatible with other numeric data types.
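For example, a table might combine exact and approximate numeric columns as follows (a sketch; the table, column names, and precisions are illustrative only):
CREATE TABLE mycat.myschema.invntry_demo
( partnum NUMERIC(6) UNSIGNED NOT NULL
, qty_on_hand INTEGER UNSIGNED DEFAULT 0
, retail_price NUMERIC(8,2) DEFAULT NULL
, avg_daily_demand REAL DEFAULT NULL
, PRIMARY KEY (partnum) );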
An item of type TIME indicates a time of day based on a 24-hour clock and is made up of these contiguous fields: HOUR (hour of day, 00 to 23), MINUTE (minute of hour, 00 to 59), and SECOND (second of minute, 00 to 59). A column value of type TIMESTAMP is made up of these fields: YEAR (0001 to 9999), MONTH (month of year, 01 to 12), DAY (day of month, 01 to 31), HOUR (hour of day, 00 to 23), MINUTE (minute of hour, 00 to 59), and SECOND* (second of minute, 00 to 59). *The SECONDS field can have an optional fractional
Guidelines for Date and Time Interval Data When you define a column to hold date and time, date, time, or time interval values, use these general guidelines: • A column of the DATE, TIME, or TIMESTAMP type holds a value that represents a date or an instant in time, and a column of the INTERVAL type holds a value that represents a time interval, or duration.
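For example, date-time and interval columns might be declared as follows (a sketch; the table and column names are hypothetical):
CREATE TABLE mycat.myschema.shipments_demo
( shipment_id INT NOT NULL
, ship_date DATE NO DEFAULT NOT NULL
, ship_time TIME DEFAULT NULL
, recorded_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
, transit_time INTERVAL DAY TO HOUR DEFAULT NULL
, PRIMARY KEY (shipment_id) );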
Using Default and Null Values In NonStop SQL/MX, a null value is a marker that indicates that a column in a specified row has no value. The null value is not treated as a normal data value. It serves only as a placeholder necessary for certain relational operations. To an application interacting with a database, the null value indicates “unknown” or “do not know.” The DEFAULT and NULL clauses determine the value used when a column value is not supplied for a row during data entry.
• Use the DEFAULT NULL clause to specify that the column takes on a null value if no value is supplied for it. • Use the NOT NULL clause to specify that the column cannot contain null values. If you omit NOT NULL, nulls are allowed in the column. If you specify both NOT NULL and NO DEFAULT, each row inserted in the table must include a value for the column. ◦ Use of the DROPPABLE clause with NOT NULL means that you can later drop the NOT NULL constraint by using the ALTER TABLE statement.
to 128 bytes in length if it is displayed with a SELECT statement in MXCI. The heading name is an SQL identifier that can be delimited and its characters must be from the ISO88591 character set. If you specify a heading that is identical to the column name, INVOKE and SHOWDDL do not display that heading. If you specify NO HEADING for the column, the application uses the column name as the default.
Creating an Index To create an index, use the CREATE INDEX statement. If the underlying table contains data, the creation process automatically loads the index unless you specify NO POPULATE. When you define an alternate index, first consider the column-related guidelines described under “Managing Table Data” (page 87).
• If an index is added or removed, query execution plans can be rendered inoperable. The SQL/MX executor will perform a similarity check on such a plan and, if necessary, automatically recompile it. To avoid this, you can explicitly SQL compile the program. For specific guidelines, see the information on similarity checks and automatic recompilation in the SQL/MX Programming Manual for C and COBOL. • To influence the optimizer’s choice of index, use the CONTROL QUERY SHAPE statement.
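When you create an index on a large table that already contains data, you can defer the load by specifying NO POPULATE and then load the index later with the POPULATE INDEX utility. This is a sketch only; the index, table, and column names are hypothetical:
CREATE INDEX xorddate ON sales.orders (order_date) NO POPULATE;
POPULATE INDEX xorddate ON sales.orders;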
CREATE UNIQUE INDEX XEMP ON persnl.employee (LAST_NAME, EMPNUM) LOCATION $data1 ATTRIBUTE NO AUDITCOMPRESS HASH PARTITION (ADD LOCATION $data2, ADD LOCATION $data3, ADD LOCATION $data4, ADD LOCATION $data5); Creating Constraints on SQL/MX Tables Constraints are rules that protect the integrity of data in a table by restricting the values in a particular column or set of columns to those that meet the conditions of the constraints.
Naming Constraints When you create a constraint, either specify a name for it or let NonStop SQL/MX name it. Constraint names are three-part logical ANSI names. Constraints have their own namespace within a schema, so a constraint can have the same name as a table, index, or view in the schema. However, two constraints in a schema cannot have the same name.
Examples for Creating Constraints for SQL/MX Tables To add a UNIQUE table constraint to an existing table: ALTER TABLE persnl.project ADD CONSTRAINT projtimestamp_uc UNIQUE (projcode, ship_timestamp); To add a FOREIGN KEY constraint to an existing table: ALTER TABLE persnl.project ADD CONSTRAINT projlead_fk FOREIGN KEY (projlead) REFERENCES persnl.employee; Creating Views of SQL/MX Tables SQL/MX views comply with the SQL:1999 standard.
This example creates a view from two tables by using an INNER JOIN: CREATE VIEW MYVIEW4 (v_ordernum, v_partnum) AS SELECT od.ordernum, p.partnum FROM SALES.ODETAIL OD INNER JOIN SALES.PARTS P ON od.partnum = p.partnum; Creating Triggers Use the CREATE TRIGGER statement to create triggers on SQL/MX tables. A trigger is a mechanism that sets up the database to perform certain actions automatically in response to the occurrence of specified events.
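For example, an AFTER INSERT trigger might record each new order row in a separate log table. This is a sketch only; the trigger name and the order_log table are hypothetical:
CREATE TRIGGER sales.log_new_order
AFTER INSERT ON sales.orders
REFERENCING NEW AS newrow
FOR EACH ROW
INSERT INTO sales.order_log VALUES (newrow.ordernum, CURRENT_TIMESTAMP);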
◦ Use fillers as required to attain proper alignment: – Recommended: CREATE TABLE t1, where column a is SMALLINT NOT NULL, c is SMALLINT NOT NULL, and b is INT NOT NULL. – Not recommended: CREATE TABLE t1, where column a is SMALLINT NOT NULL, b is INT NOT NULL, and c is SMALLINT NOT NULL. Unaligned fields cannot choose bulk move. • Avoid using signed numeric data types. Signed numbers are expensive to process and expensive to encode for key access. Unsigned numbers do not need encoding.
• You can mix SQL/MP and SQL/MX tables in the same DML statement. You can join an SQL/MP table and an SQL/MX table, create a union of them, and insert data from an SQL/MP table into an SQL/MX table and vice versa. • You cannot create a view that references both an SQL/MP table and an SQL/MX table. For more information, see the SQL/MX Comparison Guide for SQL/MP Users.
6 Querying SQL/MX Metadata Use the commands described in this section to obtain the information you need to manage a database.
Commands for Displaying Information Topics in this subsection: • “DISPLAY USE OF Command” (page 105) • “mxtool INFO Command” (page 105) • “SHOWLABEL Command” (page 105) • “mxtool VERIFY command” (page 105) • “SHOWDDL Command” (page 106) DISPLAY USE OF Command Use the MXCI DISPLAY USE OF command to display a list of modules and, for each module, to display a list of dependent objects (tables and indexes) that are used by the module.
SHOWDDL Command Use the MXCI SHOWDDL command to display the DDL syntax equivalent to the syntax that was used to create an SQL/MX table, view, or SPJ and its dependent objects. This command also supports SQL/MP alias names. You can choose to display the DDL for the underlying SQL/MP object and its dependent objects either in SQL/MX or in SQL/MP DDL syntax. The command is especially useful for displaying a thorough description of an object as it exists in metadata.
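For example, to capture the DDL for a table and its dependent objects, you might enter (the table name is from the sample database and is illustrative):
SHOWDDL samdbcat.sales.customer;
You can log the output to a file with the MXCI LOG command and replay it later to re-create the object.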
along with the tables shown in Figure 2 (page 106) for that schema. User catalogs do not contain the other metadata schemas shown in Figure 2 (page 106). • There is one set of statistics tables per schema. These are not shown in Figure 2 (page 106). Figure 3 (page 108) lists the SQL/MX metadata tables located in the system catalog, including the primary keys for each table. Use this table as a reference for the examples later in this section.
Figure 3 SQL/MX Metadata Tables
Starting with SQL/MX Release 3.1, a new schema, called SYSTEM_SECURITY_SCHEMA, is introduced. Figure 4 (page 110) displays the tables in the new schema.
Understanding ANSI External and Internal Names ANSI SQL objects have both external and internal names. When objects are created in NonStop SQL/MX, they are given external names. However, when the same objects are stored in the metadata, they are stored by their internal name. You need to be aware of external and internal naming when performing metadata queries. When you use an object name in a search condition, such as a comparison predicate, use the internal name.
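For example, a table created with the regular identifier odetail is stored in the metadata under the upshifted internal name ODETAIL, while a table created with the delimited identifier "Order Detail" is stored as Order Detail, without the surrounding quotes. A sketch of a lookup that uses the internal name (the catalog name and schema-version suffix are illustrative):
select object_name, object_uid
from samdbcat.definition_schema_version_3000.objects
where object_name = 'ODETAIL'
for read uncommitted access;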
Displaying System Schema Information Topics in this subsection: • “Locating System Schema Tables” (page 112) • “Locating System Defaults Schema Tables” (page 114) • “Locating MXCS Schema Tables” (page 117) This table shows the locations of system schema information: Schema Table Type Schema Location on Each Node System schema tables NONSTOP_SQLMX_nodename.SYSTEM_SCHEMA System defaults schema tables NONSTOP_SQLMX_nodename.SYSTEM_DEFAULTS_SCHEMA MXCS (ODBC) schema tables NONSTOP_SQLMX_nodename.
ANSI NAME NONSTOP_SQLMX_DMR15.SYSTEM_SCHEMA.CAT_REFERENCES RESOURCE FORK \DMR15.$DATA02.ZSD0.CATREF01 SYSTEM METADATA \DMR15.$DATA02.ZSD0 VERSION 1200 TYPE K FORMAT 2 CODE 564 EXT ( 128 PAGES, 256 PAGES, MAXEXTENTS 512 ) PACKED REC 30 BLOCK 32768 KEY ( COLUMN 0, ASC , COLUMN 1, ASC ) AUDIT BUFFERED AUDITCOMPRESS OWNER -1 SECURITY (RWEP): *SQL DATA MODIF: 27 Jan 2011, 16:42, OPEN CREATION DATE: 27 Nov 2009, 19:40 REDEFINITION DATE: 27 Nov 2009, 19:40 LAST OPEN: 27 Jan 2011, 17:14 EOF: 98304 (0.
TYPE K FORMAT 2 CODE 563 EXT ( 128 PAGES, 256 PAGES, MAXEXTENTS 512 ) PACKED REC 48 BLOCK 4096 KEY ( COLUMN 0, ASC , COLUMN 1, ASC ) AUDIT BUFFERED AUDITCOMPRESS OWNER -1 SECURITY (RWEP): *SQL DATA MODIF: 19 Jan 2011, 17:24, OPEN CREATION DATE: 27 Nov 2009, 19:40 REDEFINITION DATE: 27 Nov 2009, 19:40 LAST OPEN: 26 Jan 2011, 0:37 EOF: 12288 (0.
Owner: SUPER.
schema_name = 'SYSTEM_DEFAULTS_SCHEMA' and cat_uid = -- get catalog uid for CAT (select cat_uid from nonstop_sqlmx_dmr15.system_schema.catsys where cat_name = 'NONSTOP_SQLMX_DMR15' ) ) for read uncommitted access; TABLE_NAME ------------- SYSTEM -------- DATA_SOURCE ----------- FILE_SUFFIX ------------------ SYSTEM_DEFAULTS \DMR15 $DATA08 ZSDA984F.M26FB700 --- 1 row(s) selected.
Audited: T (If F, a Utility operation is in progress or has failed) Broken: F Buffered: T ClearOnPurge: F Corrupt: F (If T, a Utility operation is in progress or has failed) CrashLabel: F CrashOpen: F IndexCompress: F IncompletePartBoundChg: F (If T, a Utility operation is in progress or has failed) RedoNeeded: F RollfwdNeeded: F UndoNeeded: F UnreclaimedSpace: F (If T, a Utility operation is in progress or has failed) Primary Extent Size: 128 Pages Secondary Extent Size: 256 Pages Max Extents: 512 Extents
LAST OPEN: 28 Jan 2011, 3:01 EOF: 12288 (0.
CAT DISTRIB_CAT NONSTOP_SQLMX_AZTEC Determining Whether a Catalog is Local or Remote The CAT_REFERENCES table contains at least one row for each catalog that is visible on the local node. It can contain more than one row for each such catalog, but only the row that has the value 'A' in the REPLICATION_RULE column contains the name of the node where the metadata for the catalog resides.
where s.schema_name = 'DEFINITION_SCHEMA_VERSION_' and c.cat_name = 'SAMDBCAT' and s.cat_uid = c.cat_uid order by schema_name for read uncommitted access; SCHEMA_NAME LOCAL_SMD_VOLUME ------------------------------ ---------------- SCHEMA_SUBVOLUME ---------------- DEFINITION_SCHEMA_VERSION_3000 $SYSTEM ZSD8F9M1 --- 1 row(s) selected. • This example uses the FILES command from the TACL prompt to display the list of files in the definition schema. The table names are hard-coded names.
Displaying Schema Information This subsection discusses: • “Displaying a Schema UID” (page 121) • “Displaying all Schemas Visible on a Node” (page 121) • “Displaying all Objects in a Schema” (page 122) • “Displaying the Attributes of a Schema” (page 123) • “Displaying the Owner of a Schema” (page 123) Related topics: • “Locating System Schema Tables” (page 112) • “Locating System Defaults Schema Tables” (page 114) • “Displaying all Tables in a Schema” (page 126) • “Displaying all Views in a
>>select substring(schema_name,1,30) as schema_name, cat_uid, schema_uid from nonstop_sqlmx_.system_schema.
HISTOGRAM_INTERVALS HISTOGRAM_INTERVALS_62116 HISTOGRAM_INTERVALS_83646 MVS_TABLE_INFO_UMD MVS_TABLE_INFO_UMD_231635 MVS_TABLE_INFO_UMD_782835 MVS_UMD MVS_UMD_326725152_5188 MVS_UMD_462235152_5188 MVS_USED_UMD MVS_USED_UMD_251445152_51 MVS_USED_UMD_343845152_51 ODETAIL ODETAIL_545675672_5188 ODETAIL_793765672_5188 ORDERS ORDERS_347711272_5188 ORDERS_917521272_5188 ORDREP PARTS PARTS_386351772_5188 PARTS_836161772_5188 XCUSTNAM XORDCUS XORDREP XPARTDES TA CN CN TA CN CN TA CN CN TA CN CN TA CN CN TA CN CN T
from schemata s, catsys c where cat_name = 'SAMDBCAT' and s.cat_uid = c.
• “Displaying the Attributes of a Table” (page 127) • “Displaying all Tables on a System” (page 128) Related topics: • “Locating System Schema Tables” (page 112) • “Locating System Defaults Schema Tables” (page 114) • “Determining Whether a Table has Indexes” (page 131) • “Displaying all Indexes for a Table” (page 132) • “Displaying the State of Indexes for a Table” (page 134) • “Displaying all Partitions for a Table or Index” (page 134) • “Determining Whether a Table Has Constraints” (page
MVS_USED OBJECTS PARTITIONS REF_CONSTRAINTS REPLICAS RI_UNIQUE_USAGE ROUTINES SCH_PRIVILEGES SEQUENCE_GENERATORS SG_USAGE SYNONYM_USAGE TBL_CONSTRAINTS TBL_PRIVILEGES TEXT TRIGGERS TRIGGERS_CAT_USAGE TRIGGER_USED VWS VW_COL_TBL_COLS VW_COL_USAGE VW_TBL_USAGE HISTOGRAMS HISTOGRAMS_FREQ_VALS HISTOGRAM_INTERVALS MVS_TABLE_INFO_UMD MVS_UMD MVS_USED_UMD PARTLOC PARTSUPP SUPPLIER T1 DEPT EMPLOYEE HISTOGRAMS HISTOGRAMS_FREQ_VALS HISTOGRAM_INTERVALS JOB MVS_TABLE_INFO_UMD MVS_UMD MVS_USED_UMD PROJECT CUSTOMER HISTO
nonstop_sqlmx_.system_schema.schemata s, objects o where c.cat_uid = s.cat_uid and c.cat_name = 'SAMDBCAT' and s.schema_uid = o.schema_uid and s.schema_name = 'SALES' and o.
--- SQL operation complete. >> select substring(object_name, 1, 15) as TABLENAME, case object_security_class when 'UM' then 'USER METADATA TABLE' when 'UT' then 'USER-DEFINED TABLE' when 'SM' then 'SYSTEM METADATA TABLE' when 'MU' then 'OTHER' end as SECURITY_CLASS, create_time, object_owner as owner from objects where object_type = 'BT' and schema_uid = ( select schema_uid from nonstop_sqlmx_.system_schema.
The VWS table is the definition schema table that lists all views in a catalog. Text for views is stored separately in the TEXT table. For more information about all metadata tables, see Figure 3 (page 108) in this manual or the SQL/MX Reference Manual. All queries for information about a specific view must access information from the CATSYS, SCHEMATA, and OBJECTS tables, and from the VWS table.
Displaying Information About SQL/MP Aliases Topics in this subsection: • “Displaying all SQL/MP Alias Names in a Schema” (page 130) • “Displaying all Attributes of an SQL/MP Alias Name” (page 130) The MP_PARTITIONS table contains the partition names of SQL/MP tables that have SQL/MP aliases. For more information about all metadata tables, see Figure 3 (page 108) in this manual or the SQL/MX Reference Manual.
Displaying Information About SPJs The ROUTINES table contains information about stored procedures in Java (SPJs). All queries for information about a specific SPJ must access information from the CATSYS, SCHEMATA, and OBJECTS tables, and from the ROUTINES table. For more information about all metadata tables, see Figure 3 (page 108) in this manual or the SQL/MX Reference Manual.
s.cat_uid = (select cat_uid from nonstop_sqlmx_.system_schema.catsys c where c.cat_name = 'SAMDBCAT' and schema_version = ) ) ) for read uncommitted access; NUM_INDEXES -------------------1 Displaying all Indexes for a Table This example (when run on the node on which the metadata is located) displays all indexes for a given table: >> set schema samdbcat.definition_schema_version_; --- SQL operation complete. >> select substring (o1.
where o.object_name = 'XCUSTNAM' and o.object_uid = l.base_object_uid and l.object_uid = o1.object_uid and o.schema_uid = (select schema_uid from nonstop_sqlmx_.system_schema.schemata where schema_name = 'SALES' and cat_uid = (select cat_uid from nonstop_sqlmx_.system_schema.catsys where cat_name = 'SAMDBCAT' ) ) for read uncommitted access order by l.object_uid; LOCK_NAME UTIL ------------------------- ---- STATUS ----------- (EXPR) --------------------- XCUSTNAM_LOCK 3 \DMR15.
Displaying the State of Indexes for a Table This example displays the state of the indexes for the table SAMDBCAT.SALES.CUSTOMER: >> set schema samdbcat.definition_schema_version_; --- SQL operation complete. >> select substring (o1.object_name, 1, 40) as index_name, case a.valid_data when 'Y' then 'populated' else 'unpopulated' end as index_status from samdbcat.definition_schema_version_.access_paths a, samdbcat.definition_schema_version_.objects o1 where a.
substring(p.system_name, 1, 15) as system, p.data_source, p.file_suffix from partitions p, objects o where o.object_name_space = 'TA' and o.object_name = 'CUSTOMER' and o.object_uid = p.object_uid and o.schema_uid = (select schema_uid from nonstop_sqlmx_.system_schema.schemata where schema_name = 'SALES' and cat_uid = (select cat_uid from nonstop_sqlmx_.system_schema.
) for read uncommitted access; DATA_SOURCE MAX_EXT ----------------------- -------$SYSTEM FIRST_KEY ----------- 160 STATUS -----------------AVAILABLE Displaying Constraint Information Topics in this subsection: • “Determining Whether a Table Has Constraints” (page 136) • “Displaying all Constraints on a Table” (page 137) • “Displaying the Attributes of a Constraint” (page 137) Related topics: • “Displaying all Columns in a Primary Key or Unique Constraint” (page 140) • “Displaying all Columns i
-------------------2 Displaying all Constraints on a Table This example displays all constraints for a given table. >> set schema samdbcat.definition_schema_version_; --- SQL operation complete. >> select substring (o1.object_name, 1, 30) as constraint_name, case (t.constraint_type) when 'C' then 'CHECK' when 'F' then 'FOREIGN' when 'P' then 'PRIMARY' when 'U' then 'UNIQUE' else 'UNKNOWN' end as constraint_type from objects o, tbl_constraints t, objects o1 where o.
and o.object_name = 'CUSTOMER' and o.schema_uid = (select schema_uid from nonstop_sqlmx_.system_schema.schemata where schema_name = 'SALES' and cat_uid = (select cat_uid from nonstop_sqlmx_.system_schema.
o.schema_uid = (select schema_uid from nonstop_sqlmx_<sys name>.system_schema.schemata where schema_name = 'PERSNL' and cat_uid = (select cat_uid from nonstop_sqlmx_.system_schema.catsys where cat_name = 'SAMDBCAT' ) ) order by column_number for read uncommitted access; COLUMN_NAME ------------------------- COLUMN_NUMBER ------------- EMPNUM FIRST_NAME LAST_NAME DEPTNUM JOBCODE SALARY 0 1 2 3 4 5 Displaying all Columns in a View This example displays all columns in the view SAMDBCAT.SALES.
>> set schema samdbcat.definition_schema_version_; --- SQL operation complete. >> select substring(c.column_name, 1, 25) as column_name, c.column_number as base_table_column, ac.system_added_column from access_paths ap, access_path_cols ac, cols c, objects o where ap.access_path_uid = o.object_uid and ap.access_path_uid <> ap.table_uid and c.object_uid = ap.table_uid and o.object_name = 'XCUSTNAM' and ac.column_number = c.column_number and ac.access_path_uid = ap.access_path_uid and o.
s.cat_uid = (select cat_uid from nonstop_sqlmx_.system_schema.catsys c
             where c.cat_name = 'SAMDBCAT' and schema_version = 3000 ) )
for read uncommitted access;

COLUMN_NAME    COL_NUM     TYPE         CONSTRAINT_NAME
-------------  ----------  -----------  -----------------------
ORDERNUM                0  PRIMARY KEY  ODETAIL_545675672_5188
PARTNUM                 1  PRIMARY KEY  ODETAIL_545675672_5188

Displaying all Columns in a NOT NULL Constraint
This example displays all columns with NOT NULL constraints for the table SAMDBCAT.SALES.
Displaying all Columns in a Referential Integrity Constraint This example displays all columns with referential integrity constraints for the table SAMDBCAT.SALES.ODETAIL: >> alter table samdbcat.sales.odetail add constraint p1 foreign key (partnum) references samdbcat.sales.parts(partnum); >> set schema samdbcat.definition_schema_version_; >> select substring(c.column_name, 1, 15) as column_name, c.column_number as col_num, r.update_rule, substring(o2.
where c.object_uid = o.object_uid and o.object_name = 'EMPLOYEE' and o.object_type = 'BT' and o.schema_uid = (select schema_uid from nonstop_sqlmx_.system_schema.schemata where schema_name = 'PERSNL' and cat_uid = (select cat_uid from nonstop_sqlmx_.system_schema.
Displaying Information About Privileges Topics in this subsection: • “Displaying all Privileges for a Table” (page 144) • “Displaying all Privileges for a View” (page 145) • “Displaying all Privileges for a Column” (page 146) The TBL_PRIVILEGES table, located in the schema DEFINITION_SCHEMA_VERSION_version-number of each catalog, describes the privileges for all tables and views in that catalog.
) for read uncommitted access;

GRANTOR_ID  GRANTOR_T  GRANTEE_ID  GRANTEE_T  PRVTYPE  GRANTABLE
----------  ---------  ----------  ---------  -------  ---------
        -2  System          65535  User       Delete   Y
        -2  System          65535  User       Insert   Y
        -2  System          65535  User       Refer    Y
        -2  System          65535  User       Select   Y
        -2  System          65535  User       Update   Y

Displaying all Privileges for a View
This example displays all privileges for the view SAMDBCAT.SALES.CUSTLIS:
>>set schema samdbcat.
Displaying all Privileges for a Column
This example displays all privileges for the column EMPNUM in the table SAMDBCAT.PERSNL.EMPLOYEE:
>>set schema samdbcat.definition_schema_version_<schema version>;
--- SQL operation complete.
inconsistent through database changes that are not applied consistently throughout related objects. SQL can prevent many of the operations that cause inconsistency, but it does not detect all operational errors. For example, when you drop a table, SQL attempts to drop all dependencies. However, a user might later restore objects that are not consistent with the related objects. System operational problems or system failures can also cause inconsistencies in the data dictionary.
>> select o.object_feature_version from nonstop_sqlmx_node.system_schema.schemata s, nonstop_sqlmx_node.system_schema.catsys c, catalog-name.definition_schema_version_schema-version.objects o where c.cat_name = 'catalog-name' and c.cat_uid = s.cat_uid and s.schema_name = 'schema-name' and s.schema_uid = o.schema_uid and o.object_name = 'object-name'; This example is the same as the previous but limits the query to a specified name space: >> select o.object_feature_version from nonstop_sqlmx_node.
7 Adding, Altering, and Dropping SQL/MX Database Objects After you create an SQL/MX database, you can assume the database is consistent and the application data is valid. Database management operations that add, alter, or drop database objects must maintain this same level of data consistency and validity, and must be planned carefully.
Table 8 Adding Objects to an SQL/MX Database (continued)
Object                             Operation   Statement, Command or Utility
Schema                             Add         CREATE SCHEMA statement
SQL/MP alias                       Add         CREATE SQLMP ALIAS statement
Stored procedures in Java (SPJs)   Add         CREATE PROCEDURE statement
Table                              Add         CREATE TABLE statement
Trigger                            Add         CREATE TRIGGER statement
View                               Add         CREATE VIEW statement

For more information on creating (adding) objects, see:
• "Understanding and Planning SQL/MX Tables" (page 28)
• "Creating an SQL/MX Datab
Table 9 Authorization Requirements for Adding Database Objects (continued) Operation Authorization Requirements “Adding Triggers” (page 159) Adding a trigger can affect up to three schemas, each with its own authorization requirements: • The schema where the trigger is created. You must either own this schema or be the super ID. • The schema where the subject table exists. You must either own this schema or be the super ID. • The schema where the referenced table exists.
NOTE: Where possible, avoid adding columns to a table. When you add a column, internal expression optimizations are turned off, and the executor incurs a performance penalty. The extent of the impact on performance depends largely on the data types of the added columns. For more information, see "Database Design Guidelines for Improving OLTP Performance" (page 101).
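For example, a minimal sketch of adding a column with the ALTER TABLE statement, assuming the sample SAMDBCAT.SALES.CUSTOMER table used elsewhere in this manual (the column name and data type are illustrative only):
>> ALTER TABLE samdbcat.sales.customer
+> ADD COLUMN loyalty_level CHAR(10) DEFAULT NULL;
--- SQL operation complete.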
To integrate the new column into the existing database or application programs: • If you want to create a new index using the new column, follow the steps for “Adding Indexes to SQL/MX Tables” (page 154). • If you want to add the new column to an existing index, first follow the steps for “Dropping SQL/MX Indexes” (page 169), then follow the steps for “Adding Indexes to SQL/MX Tables” (page 154) to add the new column definition. You cannot alter an index or view to add a column.
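As a sketch of that drop-and-re-create sequence, assuming a hypothetical index XCUSTLOYAL on the sample CUSTOMER table that should now include the newly added column (the index and column names are illustrative only):
>> DROP INDEX samdbcat.sales.xcustloyal;
--- SQL operation complete.
>> CREATE INDEX xcustloyal ON samdbcat.sales.customer
+> (custname, loyalty_level);
--- SQL operation complete.
After re-creating the index, explicitly SQL compile the dependent modules as described in the surrounding steps.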
5. Use the DISPLAY USE OF command to identify which user modules are associated with this object. See the similarity check criteria in the SQL/MX Programming Manual for C and COBOL to determine whether your changes are likely to cause the similarity check to fail and force automatic recompilation. If they will, you should SQL compile these modules after making the changes to avoid expensive automatic recompilations at run time.
For additional guidelines related to index creation, including performance-related considerations, see the SQL/MX Reference Manual, “Determining When to Use Indexes” (page 36), “Defining an Index” (page 38), and “Creating Indexes for SQL/MX Tables” (page 95). Evaluating the Benefits of a New Index Use indexes to improve the performance of your database and to implement constraints. Indexes can improve performance by eliminating the need for the disk process to access the underlying table.
to avoid expensive automatic recompilations at run time. SQL applications that are running while you make these changes will still undergo automatic recompilation. For more information about explicit and automatic recompilation, see the SQL/MX Programming Manual for C and COBOL. For more information about using DISPLAY USE OF, see “Checking Module Dependencies with DISPLAY USE OF” (page 223) and the SQL/MX Reference Manual. 6. Enter the CREATE INDEX statement.
• If a certain subset of a remote table’s data is accessed more frequently at the local node than at a remote node, partitioning the table so that the frequently accessed portion of the data resides on the local node can increase local performance.
• If queries are processed in parallel, partitioning a table or index is often required. Partitioning is necessary, for example, for parallel execution of a SELECT statement on a single table.
>> MODIFY TABLE cat3.sch3.ordersh add partition +> location $DATA04; --- SQL operation complete. In response to this command, a portion of the data in each of the original three partitions is moved automatically to the fourth partition, rebalancing the data evenly across the four partitions. For more detailed examples of adding, modifying, and deleting range-partitioned and hash-partitioned tables and indexes, see “Using MODIFY to Manage Table and Index Partitions” (page 182).
Adding Stored Procedures in Java (SPJs) Use the CREATE PROCEDURE statement to add an SPJ to your SQL/MX database. For more information, see the SQL/MX Guide to Stored Procedures in Java. Adding SQL/MX Tables Use the CREATE TABLE statement to add a table to your SQL/MX database. The name for the new table must be unique among names of tables, views, SQL/MP aliases, and procedures within its schema.
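For example, a minimal sketch of a CREATE TABLE statement that also range partitions the new table, patterned on the partitioning syntax shown later in this manual; the table, column, key value, and volume names are illustrative only:
>> CREATE TABLE samdbcat.sales.orders_hist
+> (ordernum NUMERIC (6) UNSIGNED NO DEFAULT NOT NULL
+> ,order_date DATE NO DEFAULT NOT NULL
+> ,PRIMARY KEY (ordernum))
+> PARTITION (ADD FIRST KEY (500000) LOCATION $DATA02)
+> LOCATION $DATA01;
--- SQL operation complete.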
Steps for Adding a Trigger
1. Start an MXCI session.
2. Enter a LOG command to initiate a log file for statements and commands entered in this session. Keep the log for your records.
3. Determine the name of the table to which you want to add the trigger.
4. Query the system metadata to ensure the new trigger name is unique among existing trigger names defined in the schema.
5. Use the DISPLAY USE OF command to identify which user modules are associated with this object.
Table 10 Altering Objects in an SQL/MX Database (continued)
Object                           Operation                     Statement, Command or Utility
                                 Reuse partition               MODIFY utility
                                 Move partition                MODIFY utility
Stored procedure in Java (SPJ)   Grant privilege               GRANT statement
                                                               GRANT EXECUTE statement
                                 Revoke privilege              REVOKE statement
                                                               REVOKE EXECUTE statement
SQL/MP alias                     Remap                         ALTER SQLMP ALIAS statement
System Default                   Changing attribute settings   See "Reviewing and Setting System Defaults" (page 67)
Table                            Change attribute
Table 11 Authorization Requirements for Altering Database Objects (continued) Operation Authorization Requirements “Altering SQL/MX Tables” (page 164) To alter a table with the ALTER TABLE statement, you must own its schema or be the super ID. To alter a table with the GRANT and REVOKE statements, you must have both that privilege and the right to grant or revoke that privilege.
Altering Partitions for SQL/MX Tables and Indexes
To alter a table or index partition, use the MODIFY utility. For MODIFY utility syntax, see the SQL/MX Reference Manual.
Steps for Altering a Partition
1. Start an MXCI session.
2. Enter a LOG command to initiate a log file for statements and commands entered in this session. Keep the log for your records.
3. Determine the name of the partition you want to alter.
4. Use the DISPLAY USE OF command to identify which user modules are associated with this object.
Altering SPJs For more information, see the SQL/MX Guide to Stored Procedures in Java. Altering System Defaults For more information, see “Reviewing and Setting System Defaults” (page 67). Altering SQL/MX Tables You can use various statements and utilities to alter SQL/MX tables in many ways. For more information about these statements and utilities, see the SQL/MX Reference Manual. Altering SQL/MX Table Columns You cannot alter table columns directly.
Altering Table File Attributes To alter these physical file attributes of SQL/MX tables, use the ALTER TABLE statement: • ALLOCATE/DEALLOCATE • AUDITCOMPRESS • CLEARONPURGE • MAXEXTENTS Altering a table's file attributes neither invalidates any programs nor affects dependencies of the table. To alter security attributes of SQL/MX tables, however, use the GRANT and REVOKE statements. (See “Altering Table Privileges” (page 165).) Steps for Altering Table File Attributes 1. 2. 3.
For more information, see the SQL/MX Reference Manual.
Altering Triggers
To alter a trigger, use the ALTER TRIGGER statement, which lets you either enable or disable the trigger. For more information, see the SQL/MX Reference Manual.
Steps for Altering a Trigger
1. Start an MXCI session.
2. Enter a LOG command to initiate a log file for statements and commands entered in this session. Keep the log for your records.
For more information, see the SQL/MX Reference Manual. For more information about application programs, see “Managing Database Applications” (page 209). Authorization Requirements for Dropping Database Objects Table 13 (page 167) describes the authorization requirement for each drop operation. Table 13 Authorization Requirements for Dropping Database Objects Operation Authorization Requirements “Dropping Catalogs” (page 167) Any user can drop any empty catalog visible on the local node.
For more information, see the SQL/MX Reference Manual.
Steps for Dropping a Catalog
1. Start an MXCI session.
2. Enter a LOG command to initiate a log file for statements and commands entered in this session. Keep the log for your records.
3. Determine the name of the catalog you wish to drop.
4. If necessary, use the DROP SCHEMA statement to drop any schemas from the catalog. (See "Dropping Schemas" (page 171).)
statement drops only the constraint definition from the table and does not affect the data in the table. Dropping a constraint on a table invalidates the programs that depend on the table. You should include steps to explicitly SQL compile the dependent programs to avoid automatic recompilation and to return the application to a valid state. For more information, see the SQL/MX Reference Manual. Steps for Dropping a Constraint 1. 2. 3. Start an MXCI session.
4. Enter the DROP INDEX statement.
5. Revise the application source code as needed to reflect your changes to the database. Process and compile the updated source file. For more information, see the SQL/MX Programming Manual for C and COBOL.
If you plan to use the TMF subsystem for recovering an index, see "Recovering Views and Indexes" (page 248).
For more information, see the SQL/MX Reference Manual.
NOTE: Unlike the DROP statement, PURGEDATA does not remove the file label. For more information, see "Using PURGEDATA to Delete Data From Tables" (page 207).
Dropping Schemas
Use the DROP SCHEMA statement to drop a schema. To drop a schema successfully:
• It must be empty unless you use the CASCADE option.
• You must own the schema or be the super ID.
• You must have remote passwords for any nodes to which the catalog of the schema has been registered.
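For example, a minimal sketch, assuming a schema that still contains objects and that you own (the schema name is illustrative only):
>> DROP SCHEMA samdbcat.salestest CASCADE;
--- SQL operation complete.
Without the CASCADE option, the DROP SCHEMA statement fails unless the schema is already empty.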
Steps for Dropping an SQL/MP Alias
1. Start an MXCI session.
2. Enter a LOG command to initiate a log file for statements and commands entered in this session. Keep the log for your records.
3. Determine the name of the SQL/MP alias that you wish to drop.
4. Use the DISPLAY USE OF command to identify which user modules are associated with this object.
necessary and recompile them explicitly, as in “Steps for Dropping an SQL/MX Table and its Data” (page 173). • A table that has an active DDL lock (one for which the process that created it still exists) cannot be dropped even if you specify CASCADE. An active DDL lock will be released when the utility locking the file completes.
Steps for Dropping a Table’s Data Only
1. Start an MXCI session.
2. Enter a LOG command to initiate a log file for statements and commands entered in this session. Keep the log for your records.
3. Determine the name of the table containing the data you wish to drop.
4. Use the PURGEDATA utility to drop the data.
5. Make new TMF online dumps of all affected partitions.
For more information, see the SQL/MX Reference Manual.
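For example, a minimal sketch of the PURGEDATA step, assuming the sample CUSTOMER table:
>> PURGEDATA samdbcat.sales.customer;
--- SQL operation complete.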
Steps for Dropping a View
1. Start an MXCI session.
2. Enter a LOG command to initiate a log file for statements and commands entered in this session. Keep the log for your records.
3. Determine the name of the view you wish to drop.
4. Use the DISPLAY USE OF command to identify which user modules are associated with this object.
8 Reorganizing SQL/MX Tables and Maintaining Data Sometimes you might need to restructure the data in an SQL/MX table by reloading or reorganizing the table. NonStop SQL/MX offers several tools to aid the restructuring task: • Purge DDL files for dropped tables from the directory at /usr/tandem/sqlmx/ddl to provide more space for DROP operations. See “Purging Dropped Tables From the DDL Directory” (page 176). • Choose a method for reorganizing the data.
SQL/MX tables can be reorganized online. Online reorganization involves: • “Reorganizing Table Files With FUP RELOAD” (page 177) • “Determining the Status of a Reorganization With FUP STATUS” (page 178) • “Suspending a Reorganization Operation” (page 178) After you use FUP RELOAD to defragment a table, update the physical statistics using UPDATE STATISTICS to update the metadata information. This allows the optimizer to generate better query plans.
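For example, a minimal sketch of refreshing statistics after a reload, assuming the sample CUSTOMER table (the column list you choose may differ):
>> UPDATE STATISTICS FOR TABLE samdbcat.sales.customer ON EVERY COLUMN;
--- SQL operation complete.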
• The FUP RELOAD operation might take a long time, depending upon the size of the file and the rate specified for the command. • You can suspend the FUP RELOAD operation or request a status report about the progress of the operation, as explained in “Determining the Status of a Reorganization With FUP STATUS” (page 178). This command initiates a FUP RELOAD operation for the table named CUSTOMER. The SLACK option sets a minimum amount of free space in the blocks.
FUP - RELOAD $DATA08.ZSDAWVHH.J8MGRZ00, RATE 20, SLACK 50 If you want to keep the same RATE and SLACK values when you restart the reorganization process, enter the FUP RELOAD command without the RATE and SLACK parameters: FUP - RELOAD $DATA08.ZSDAWVHH.J8MGRZ00 After a FUP RELOAD operation has been suspended and you want to start this operation completely over again, enter the FUP RELOAD command with the NEW option: FUP - RELOAD $DATA08.ZSDAWVHH.
must perform a RECOVER operation on the affected SQL/MX object to restore the utility operation to a known good state. Consequences of a Failed Utility Operation If a failure, such as an unavailable resource or TMF failure, prevents a utility operation from completing successfully, the utility responds by attempting to perform its own internal recovery procedure for the purpose of completing the request or, if necessary, rolling back the operation to the known good state from which it began.
You can also use SHOWDDL to display information about DDL locks. An example SHOWDDL output would include: -- WARNING: A DDL lock exists on the following object, SHOWDDL may be inaccurate -CREATE TABLE CAT.SCH.CUSTOMER For more information, see the SQL/MX Reference Manual. Performing Recovery on Failed Utility Operations If the utility request fails to complete successfully, the request must be recovered to a known good state (resumed or cancelled), which removes the DDL lock and resets all flags.
Operations are allowed to continue for views because creating and dropping views do not change the structure of the dependent tables or the dependent table’s indexes. This contrasts with NonStop SQL/MP, for which structure information is stored about views in the dependent table’s label. Structure changing operations are sometimes allowed to run and succeed when inactive DDL locks are present.
partitions that are not being used by other applications or that are being used with READ access. WRITE access is prohibited. MODIFY’s with-shared-access option supports online partition management of range-partitioned tables and indexes in which the partitioning key is a prefix of the clustering key. This feature provides full read and write access to a range-partitioned table or index for the duration of most operations.
MODIFY and Table Reloading Some MODIFY command options start the online reload process ORSERV, which runs in the background. Until the ORSERV process completes, you cannot perform DDL operations and utility operations—including another MODIFY command—on the affected table or index. ORSERV keeps the file open and does not terminate for five minutes after the reload operation completes.
successfully or, if necessary, rolling it back to the known good state that existed at the start of the operation. If a flag remains set after a MODIFY request completes, this signifies that the table or index the MODIFY request is acting upon contains unusable space. When the MODIFY request completes, it starts an ORSERV process that reclaims the unused file space, reloads the file, and calls on DP2 to reset any UNRECLAIMEDSPACE or INCOMPLETE SQLDDL OPERATION flag that remains set.
a Flag Is Set in a Partition” (page 185)) and perform a FUP STATUS on the partition, using its Guardian name, to determine whether or not the reload process is running: • ◦ If the reload operation is still running, wait until it completes. ◦ If the reload operation is not running, perform a FUP RELOAD on the partition that has the UNRECLAIMEDSPACE or INCOMPLETE SQLDDL OPERATION flag set.
add first key ('Chicago') location $DATA02, add first key ('New York') location $DATA03 ); Also suppose that the table ORDERSR contains orders with these locations: • Atlanta • Chicago • Cincinnati • New York Example of Adding a New Range Partition You expect several thousand orders for ‘Seattle’ to arrive shortly. As the table ORDERSR is currently defined, you would insert these orders into the partition on $DATA03.
Note that the DROP PARTITION syntax is not used in this example because the partition $DATA03, which contains ‘New York,’ is not yet empty.
Example of Reusing an Existing Range Partition by Setting the FIRST KEY Values to New Values
You have reorganized your business so that all orders that originate in ‘Chicago’ and all locations with values greater than ‘Chicago’ are processed through an agent in ‘Boston.
Example of Dropping an Existing Empty Range Partition Suppose that your company has archived all the orders from 2003 and earlier and no longer needs to keep them in the ORDERSR table. You have deleted all records with ORDERTIMEs in 2003 and earlier from the table. As a result, the index partition $DATA01 is empty. Use this MODIFY command to drop this empty index partition: MODIFY INDEX cat2.sch2.
Example of Dropping an Existing Hash Partition and Rebalancing Data Orders suddenly decline, and you decide to free up some of the disk resources used by the table ORDERSH. Use this MODIFY TABLE command to drop the last partition and redistribute its data to the remaining partitions: MODIFY TABLE cat3.sch3.
By using the syntax last partition in the example, you avoid the need to specify the last partition by its location. Example of Moving an Existing Hash Partition to a New Location You need to use $DATA03 for other purposes. Use the MODIFY command to move the index partition there to another volume: MODIFY INDEX cat2.sch2.
• import automatically performs constraint validations while loading table rows. Performance considerations for constraint validation are described in "Managing Constraints to Improve import Performance" (page 197).
• import automatically performs triggered actions while loading table rows. Performance considerations for performing triggered actions are described in "Managing Triggers to Improve import Performance" (page 197).
Table 14 import Command Options (continued) import Option Description execution errors, the resultant error file can be used as an input file by a subsequent import execution. The name of the error file must be specified in the OSS format, for example /usr/jdoe/errfile, and must not be the same file as the one specified with the -E error-filename option if the -E option is also specified.
For more information about other input file considerations, including using a field qualifier, field delimiter, row delimiter, or record delimiter, see the SQL/MX Reference Manual. Output File Considerations for import Rows in the parsing error log file must look exactly like the corresponding row in the input file. However, rows in the execution error log file may not look like the corresponding row in the input file. Four notable cases are: 1. The format of numbers may be changed. Example: 123456.
Recommended Practices for Improving import Performance This subsection describes recommended practices for improving the performance of import operations on SQL/MX tables: • “Running import on Empty Tables” (page 195) • “Running import on Populated Tables” (page 196) • “Managing Partitions to Improve import Performance” (page 196) • “Managing Constraints to Improve import Performance” (page 197) • “Managing Indexes to Improve import Performance” (page 197) • “Managing Triggers to Improve import P
Running import on Populated Tables To speed up the performance of import on populated tables, you must properly manage SQL/MX objects that are known to slow down performance, including constraints, indexes, triggers, and certain data types. These factors determine the specific method to use for importing data into a populated table: • The amount of data being imported.
Managing Constraints to Improve import Performance Consider these guidelines for improving import performance with constraints: • Each check constraint, including NOT NULL constraints, causes an extra check of the data before the import operation completes. Because no additional I/O is needed to process the data, check constraint checking should be relatively efficient. However, it does increase the time required to process each table row.
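If a droppable constraint is the bottleneck for a large import, one option (a sketch, not a requirement of this manual) is to drop the constraint, run the import, and then add the constraint back so that the data is validated in a single pass; the constraint name and check condition here are hypothetical:
>> ALTER TABLE samdbcat.sales.odetail DROP CONSTRAINT qty_ck;
--- SQL operation complete.
-- run the import operation here
>> ALTER TABLE samdbcat.sales.odetail
+> ADD CONSTRAINT qty_ck CHECK (qty_ordered > 0);
--- SQL operation complete.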
load a particular range partition. You can also specify the start and end positions of the range partition based on the key column data in the input file. However, you cannot use these options to specify an actual key value at the command line. • For hash-partitioned tables, import loads all hash partitions in the destination table in a single operation.
Support for restarting import For more information, see the SQL/MX Reference Manual. Examples of Using import to Load an SQL/MX Table Example 1: Importing Data From a Delimited File This example is of an import of data from a delimited file. It shows the structure of the table to be loaded, the data to be loaded (the input data file), the format file describing the data, and the import command used to load it.
Example 2: Importing Data From a Fixed Width File This example of an import from a fixed width file shows the data to be loaded (the input data file), the format file describing the data, and the import command used to load it.
Data to Input The input file, COINPUT_FX2, contains records like this: 0123456789012345 ABCDEFGHIJKLMNPQ 0123456789012345 Note that there are three hidden spaces and a newline character at the end of each line.
select * from cat.sch.Xample;

C1           C2
-----------  --------
         11  12345678
         15  62345678
         27  72345678
         55  12345678

--- 4 row(s) selected.

The X.perrs file will contain:
A8,22345678
14,423456789
17,52345678,9
BZ,XYZ

The X.perrmsg file will contain:
*** ERROR[20081] Row number 2 and column number 1 could not be processed. Column Data: A8
*** ERROR[20291] The data specified in row number 4 column number 2 is longer than the actual column size definition. Column Data:123456789.
When you use the import command in a DSS environment, you can keep a data warehouse up to date by performing periodic (for example, daily, weekly, or monthly) updates to the database. Use the import command to append data to multiple partitions of a table. The DataLoader/MX product can help you streamline this task. DataLoader/MX is a nonprivileged batch program that provides a library of command routines for loading and maintaining SQL/MX tables.
is striped (partitioned) across 16 partitions. To maximize parallel execution, each partition is associated with a different CPU. The next example uses the import command each week to add the most recent week of data to the appropriate 16 partitions. It starts an import process for each target partition (in each associated CPU). You can enter the next import command from another OSS shell that is running on another CPU. The example divides the input data into 16 files.
• During the DUP operation, the target table is marked as corrupt to prevent another process from viewing the data until the operation completes successfully. • All utility operations have the potential to run for hours, especially those that involve a great deal of data movement. To manage systems effectively, you need to know how far the operation has proceeded and how much longer it needs to run. Utilities provide reports that indicate what step is in progress.
operations. The latter approach is only viable when the source table contents are not updated in between operations. Note that if you want to copy the indexes explicitly, you can also start the explicit fastcopy operation with the FASTCOPY INDEX command (not necessarily with the FASTCOPY TABLE command) followed by the required number of additional FASTCOPY INDEX commands and one FASTCOPY TABLE...INDEXES EXPLICIT command, in any order.
FASTCOPY INDEX SI1 to TI1; The base table TT can be accessed only after completing the fastcopy operation (copying of indexes SI1 to TI1). If you try to access the table without completing the fastcopy operation (by copying the index explicitly), it returns the following error: ERROR[8580] No partitions of table could be accessed DUP versus FASTCOPY Table 15 (page 207) lists the differences between the DUP and FASTCOPY commands.
• The PURGEDATA operation does not automatically alter the table’s statistics. After purging the data and after you (or any programs) have added data to the table, run an UPDATE STATISTICS statement to record current statistics for the table in the catalog. If the statistics are incorrect, the SQL compiler might not select the best access path for performance. • You cannot use the PURGEDATA command on a view or in an SQL program.
9 Managing Database Applications Managing an SQL/MX database includes supporting the operating requirements and access requirements of application programs and maintaining valid application programs. Providing this support and maintenance can include both performance-related and operation tasks.
COMMIT WORK or ROLLBACK WORK. The result is the same since the transaction is already aborted. Some errors that might cause a TMF transaction abort might not be retryable, such as errors caused by conditions that must be resolved with some corrective action by the user or application. For example, depending on the application, it might not be useful to retry a duplicate key error returned by an INSERT statement using ROWSETS.
NonStop SQL/MX and TMF absolutely guarantee the integrity of the database following a CPU failure, application failure, or other component failure. Moving Programs From Development to Production The structure of the database may differ on development and production systems. The execution plans generated on the development system may be inefficient or invalid on the production system.
compiler to automatically recompile the query execution plan. The automatically compiled plan is not saved for subsequent executions of the same program or for multiple concurrent executions of the same program. Therefore, automatic recompilation is expensive in terms of performance. You can overcome this problem by persisting the results of Similarity Check and Late name resolution in the module files using the mxrpm tool.
modules to the production system and SQL compile there, follow these guidelines when referring to tables or views in embedded SQL statements: • If you expect the database object names to differ on development and production systems, use class MAP DEFINEs (for SQL/MP objects only) or PROTOTYPE host variables (for embedded SQL programs in C, C++, and COBOL only) in the SQL statements of the program.
Assumptions For the purpose of the examples, suppose that: • Two NonStop server systems, \DEV and \PROD, are connected by an Expand network. For more information, see the Introduction to Networking for HP NonStop NS-Series Servers. • The development node catalogs and schemas are not visible from the production node and vice versa. To ensure this, do not use the REGISTER CATALOG statement from either node to create an empty catalog reference on the other node.
The Similarity Check determines if the query execution plan of a statement is still valid. During the Similarity Check process, the redefinition timestamps and Guardian names of runtime objects are obtained and used during statement processing. This information is not saved for subsequent executions or for multiple concurrent executions of the same program. Whenever a new process starts, the Similarity Check is triggered when the statement is executed. Starting with SQL/MX Release 3.
Using REGISTER CATALOG to Access Objects on Remote Nodes Statements in an embedded SQL program can refer to SQL/MX objects on a remote node without changes to the database object names in the source code. However, the catalog that contains the objects must be visible from the local node before you can process and run the program. Use the REGISTER CATALOG statement to create an empty catalog reference on the local node that enables local programs to query the catalog’s objects on a remote node or nodes.
Ensuring Proper Name Resolution SQL/MX statements in programs must refer to database objects that exist on the production system or are visible on a remote node. If the database object names in programs refer to DEFINEs, these DEFINEs should be set in the environment where the programs will run. If the database object names in programs refer to SQL/MP aliases, the SQL/MP aliases must be created on the production system.
registering catalogs to make them visible in an SQL/MX distributed database environment, see “Using Catalog References in an SQL/MX Distributed Database” (page 268). You can use OBEY command files or scripts to automate the process of creating large numbers of SQL/MP aliases for applications that you will run in the SQL/MX production environment. Create the SQL/MP aliases before you start processing or running applications in the production environment.
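For example, a minimal sketch of such an OBEY file; the alias names, node name, and Guardian file names are illustrative only:
-- file: aliases.obey
CREATE SQLMP ALIAS samdbcat.sales.parts_mp \PROD.$DATA01.SALES.PARTS;
CREATE SQLMP ALIAS samdbcat.sales.odetail_mp \PROD.$DATA01.SALES.ODETAIL;
From MXCI on the production system, run the file with:
>> OBEY aliases.obey;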
Maintaining Query Execution Plan Validity A query execution plan defines the semantics and execution characteristics for a single compiled SQL statement. Statically compiled applications typically have many query execution plans. The query execution plans of an application are stored in an SQL/MX user module. Globally placed modules are stored in the /usr/tandem/sqlmx/USERMODULES directory. Locally placed modules are stored in the same directory as the C, C++, or COBOL executables that use them.
For more information, see the SQL/MX Query Guide. For more information about upgrading SQL/MX databases, see the SQL/MX Installation and Upgrade Guide. Displaying and Analyzing Query Execution Plans Use the EXPLAIN function to display the query execution plans of a module. For more information, see the SQL/MX Programming Manual for C and COBOL and the SQL/MX Reference Manual. For details on how to analyze query execution plans and determine an optimal plan, see the SQL/MX Query Guide.
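For example, a minimal sketch of querying the EXPLAIN function from MXCI for one statement of a module; the module and statement names are illustrative only, and the available result columns and pattern syntax are described in the SQL/MX Reference Manual:
>> SELECT * FROM TABLE (EXPLAIN ('CAT.SCH.MYMOD', 'S1'));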
Securing Globally Placed Modules You secure globally placed modules from the /usr/tandem/sqlmx/USERMODULES directory. The objective should be to assign the responsibility and accountability for safeguarding the integrity of each globally placed module to its own creator and owner. CAUTION: Only trusted accounts should be given write access to the directory /usr/tandem/sqlmx/USERMODULES.
4. Alternatively, the SUPER.SUPER user can make sure that all modules can be written to only by their respective owners by entering: chmod 0644 /usr/tandem/sqlmx/USERMODULES/* 5. Make sure that all future modules that you create are writable only by you or the USERMODULES directory owner by entering: umask 022 Regardless of the current umask settings, this command makes module files (and OSS files in general) that the user creates readable by all but writable only by the module owner and creator. 6.
7. Alternatively, the SUPER.SUPER user can make sure that all modules in the local directory can be written to only by their respective owners by entering: chmod 0644 /usr/local-directory/* 8. Make sure that all future modules that you create are writable only by you or the owner of the local directory owner by entering: umask 022 This command makes module files (and OSS files in general) that the user creates readable by all but writable only by the module owner and creator. 9.
Object: CAT.SCH.T28

Module: CAT.SCH.CURSOMEM
Object: \NODE1.DATA10.ORDERS.T01
Object: CAT.SCH.T02
Object: CAT.SCH.T03
Object: CAT.SCH.T04

Module: CAT.SCH.TESTMEM
Object: CAT.SCH.T15
Object: CAT.SCH.T16
Object: CAT.SCH.T17
Object: CAT.SCH.T18

Display all Modules and Their Corresponding Source SQL File Names
>>DISPLAY USE OF SOURCE;
Module: CAT.SCH.CONSTRAINTM
Source Name: /E/NODE1/usr/user1/file1.sql
Module: CAT.SCH.CURSOMEM
Source Name: /E/NODE2/usr/user1/file2.sql
Module: CAT.SCH.
Display all the Invalid Modules and Their Corresponding Source SQL File Names Associated With an Object
>>DISPLAY USE OF INVALID MODULES FOR ‘CAT.SCH.TABLE1’;
Object: CAT.SCH.TABLE1
Module: CAT.SCH.MODTABLE1
Source Name: /E/NODE11/modulelist/module1.sql
Module: CAT.SCH.MODTABLE2
Source Name: /E/NODE11/modulelist/module2.sql

Display Source SQL File for a Given Module in a Given OSS Directory
>>DISPLAY USE OF MODULE_DIR '/usr/user1/all/modulestorage' SOURCE 'CAT.ALL.MODULE1';
Module: CAT.ALL.
CPUs to simultaneously access each of the subsets in far less time than it would take to run a single DISPLAY USE OF operation on the modules. This subsection provides instructions and examples for precompiling embedded applications to generate different modules using the module grouping precompiler option.
1 modules found, 1 modules extracted.
1 mxcmp invocations: 1 succeeded, 0 failed.
5. Perform SQL compilation on the other executables:
% mxCompileUserModule t02.exe
% mxCompileUserModule t03.exe
% mxCompileUserModule t11.exe
% mxCompileUserModule t12.exe
6. Check the SQL compiled module files using INVENTORYAPP:
% ls /usr/tandem/sqlmx/USERMODULES/*INVENTORYAPP^*^^
/usr/tandem/sqlmx/USERMODULES/T01CAT.T01SCH.INVENTORYAPP^T01MOD^^
/usr/tandem/sqlmx/USERMODULES/T01CAT.T01SCH.
/user/smith/tmp/tmp/DUO: mxCompileUserModule -g moduleLocal t02.exe
/user/smith/tmp/tmp/DUO: mxCompileUserModule -g moduleLocal t03.exe
/user/smith/tmp/tmp/DUO: mxCompileUserModule -g moduleLocal t11.exe
/user/smith/tmp/tmp/DUO: mxCompileUserModule -g moduleLocal t12.exe
11. Verify how the local modules are generated:
/user/smith/tmp/tmp/DUO: ls -l *.*.INVENTORYAPP^*^^
-rw-rw---- 1 JAVAQA.MODMAN JAVAQA 48293 Nov 29 14:00 T01CAT.T01SCH.INVENTORYAPP^T01MOD^^
-rw-rw---- 1 JAVAQA.
To delete the locally placed module file cat.sch.locmod1 from its local directory, enter: rm /usr/local01/CAT.SCH.LOCMOD1 In some cases, you might want to remove all modules of an application. For specific details, see the grouping technique of module management naming in the SQL/MX Programming Manual for C and COBOL. To delete all the module files of a group, enter: rm /usr/tandem/sqlmx/USERMODULES/*.*.
Converting Applications in a Production Environment On a production system, convert applications that use globally placed modules to those that use locally placed modules: 1. If you know which modules are used by programs that make up an application, skip this step. Otherwise, use the mxCompileUserModule utility to determine which modules are used by the programs that make up an application. a. Copy the executable to an empty temporary directory. b.
For more information about the strategies and procedures for backing up and restoring database files, see “Using Backup and Restore 2 for SQL/MX Database Recovery” (page 60).
10 Performing Recovery Operations The success of recovery operations depends on the effectiveness and consistency of the plan you develop for handling recovery situations. (See “Planning Database Recovery” (page 44)) Before beginning a recovery operation, you should thoroughly evaluate the tools available (RDF, TMF online dumps, backup tapes, and so forth) and appropriate for the type of failure.
and UNDONEEDED flags. For views, tables, indexes, and ENSCRIBE files, the information appears after the modification timestamp of the table. Normally, volume recovery recovers such files when the volume is started for transaction processing. If, however, the volume is already started and the file is still marked with REDONEEDED or UNDONEEDED, you must recover the file by using file recovery. Recovering Files File recovery is usually the method to use if other recovery methods fail.
Using file recovery with the TIME option can be difficult, however, because this method requires you to coordinate the recovery of interrelated objects, such as tables and their indexes. CAUTION: TMF carries no information about the relationships between file labels and catalogs. If a table is dropped, for example, file recovery cannot restore the catalog entries for the table. If the file recovery operation starts at a time just before a table was dropped, you might lose subsequent DDL changes.
creating the target object, the restore operation fails with Error 9037 as shown in this example, and the object is not recovered: NonStop TMF on \PLUTO *0202* RECOVER FILES [58] OnLineRestore Process #1 OnlineDumpMgmt: *ERROR* TMF-9037: $DATA17.ZSDT5356.J24Z5J00: File System error 11 occurred attempting to retrieve the SQL file label from disk. If, for any other reason, the target object is inaccessible during the restore process, this process also fails with Error 9037.
to obtain a copy of your source objects. Transactions can be active against the source objects at the time of the recovery. CAUTION: If you use the MAP NAMES option of the RECOVER FILES command to recover files to a new location, you must immediately make new online dumps of the target data files recovered. Without these new dumps, you will not have file-recovery protection for those files, and subsequent file recovery operations can fail.
Using VERIFY and FIXUP After using TMF RECOVER to recover partitions of an SQL/MX table, you should perform a consistency check on all recovered files by using mxtool VERIFY. The VERIFY command confirms that all data forks and resource forks exist and contain consistent information. When using mxtool VERIFY, specify the ANSI name of the table. For example, using the same table name from the previous example: mxtool VERIFY CAT.SCH.T039UC3 VERIFY performs these steps: 1.
find /usr/tandem/sqlmx/ddl -mtime +7 -print | grep "/ddl/" | sed "s/./rm &/" | sh
Recovering Table Privileges
NonStop SQL/MX does not save the DDL for GRANT and REVOKE statements, so if you accidentally drop a table that includes security information created by these statements, you must restore the table’s security attributes in the table’s metadata before you can recover the table data, labels, and resource forks from a TMF online dump.
NOTE: If you have granted or revoked table privileges, see “Example: Recovering Table Privileges” (page 239) for guidelines on how to recover the privileges. For complete information on saving DDL and using it to re-create SQL/MX objects, see these discussions: • “Retrieving Saved DDL” (page 59). • In the SQL/MX Reference Manual, the information about the DROP TABLE statement and table management for metadata tables.
When the table is dropped, NonStop SQL/MX produces a file containing the DDL for all the statements in this example.
of indexes is maintained in the file label in the disk directory. When you use MXCI to create just the table and not the indexes, and later recover the table, additional mismatches will occur between the SQL/MX catalog or schema and the file label in the directory. This can make the recovery process even more difficult.
'Dept/Num' -- NOT NULL NOT DROPPABLE ,JOBCODE NUMERIC(4, 0) UNSIGNED DEFAULT NULL HEADING 'Job/Code' ,SALARY NUMERIC(8, 2) UNSIGNED DEFAULT NULL ,CONSTRAINT HRDATA.PERSNL.EMPLOYEE_428832188_2424 PRIMARY KEY (EMPNUM ASC) NOT DROPPABLE ,CONSTRAINT HRDATA.PERSNL.EMPLOYEE_133512188_2424 CHECK (HRDATA.PERSNL.EMPLOYEE.EMPNUM IS NOT NULL AND HRDATA.PERSNL.EMPLOYEE.FIRST_NAME IS NOT NULL AND HRDATA.PERSNL.EMPLOYEE.LAST_NAME IS NOT NULL AND HRDATA.PERSNL.EMPLOYEE.
refer to the DDL you saved earlier—either when the index was created or before the index was dropped— to retrieve the Guardian names. The saved DDL for a dropped table is stored in a file named catalog.schema.table-yyyymmdd-hhmmss.ddl, where yyyymmdd-hhmmss is the time when the table was dropped. In this case, the file is named HRDATA.PERSNL.EMPLOYEE-20031114-160629.ddl.
>>OBEY /USR/TANDEM/SQLMX/DDL/HRDATA.PERSNL.EMPLOYEE +>-20031114-160629.DDL; As MXCI executes the commands, they appear on your screen, along with certain related messages: >>CREATE TABLE HRDATA.PERSNL.
NOTE: GRANT and REVOKE statements are not saved as part of DDL text. However, the DDL text includes DELETE statements to remove the default security information from the metadata tables TBL_PRIVILEGES and COL_PRIVILEGES and INSERT statements to record the correct security information. When recreating a dropped table, use a licensed copy of MXCI to execute these DELETE and INSERT statements after you create the table, to restore the security to the same state as when the table was dropped.
To conduct the recovery, you use the TMFCOM RECOVER FILES command. You can use the MXGNAMES output as input to the RECOVER FILES command, as was done in Step 3: $SYSTEM ZMXTOOLS 29> MXGNAMES HRDATA.PERSNL.EMPLOYEE -TMF Hewlett-Packard NonStop (TM) SQL/MX Guardian Name Listing Utility (c) Copyright 2003 Hewlett-Packard Development Company, LP. (-- Table HRDATA.PERSNL.EMPLOYEE -- & $DATA10,ZSDT5356.NG59FJ*,& -- Index XEMPNAME ON HRDATA.PERSNL.EMPLOYEE -- & $DATA10.ZSDT5356.
Inconsistency found: *** ERROR[20760] The redefinition timestamp (211935615168385004) defined on the label does not match the time (211935867783457892) defined in the metadata for Guardian file (\KRYPTON.$DATA10.ZSDT5356.NPFH8J00). Verifying resource fork for partition: \KRYPTON.$DATA10.ZSDT5356.NPFH8J00 Verifying Partition Map: no partition map exists Verifying index: HRDATA.PERSNL.XEMPNAME Verifying label for partition: \KRYPTON.$DATA10.ZSDT5356.
Verifying Partition Map: no partition map exists Verifying index: HRDATA.PERSNL.XEMPNAME Verifying label for partition: \KRYPTON.$DATA10.ZSDT5356.JN9V5J00 Verifying resource fork for partition: \KRYPTON.$DATA10.ZSDT5356.JN9V5J00 Verifying Partition Map: no partition map exists Object verification complete for: HRDATA.PERSNL.EMPLOYEE 13. If the CASCADE option was specified in the original DROP TABLE command, the table’s views would also have been dropped along with the table and its indexes.
You can also specify every catalog that was backed up: BR> RESTORE $tape-drive, MX CAT *; For more information, see the Backup and Restore 2 Manual. Restoring Schemas When you restore a schema, all its dependent objects are restored as well. Restore one or more schemas either implicitly, by restoring the associated catalogs, or explicitly, by using the SCHEMA restore object: BR> RESTORE $tape-drive, MX SCHEMA CAT3.SCH; BR> RESTORE $tape-drive, MX SCHEMA CAT2.SCH, SCHEMA CAT3.
To restore one or more index partitions, restore the associated indices implicitly or use the PARTITION restore object explicitly: BR> RESTORE $tape-drive, MX (IPART cat1.sch1.index1 PARTITION(part1)), PARTONLY ON; BR> RESTORE $tape-drive, MX (IPART cat1.sch1.index1 PARTITION(part1, part2)) BR+>, PARTONLY ON; For more information, see the Backup and Restore 2 Manual.
Strategies for Repairing Damaged Metadata and Objects Using the mxtool Utility The mxtool utility provides an OSS command-line interface to various utility operations, including FIXUP, GOAWAY, INFO, and VERIFY. Use mxtool to repair damaged SQL/MX objects and metadata.
Correcting Problems Manually You can correct some problems in metadata directly by manually executing inserts, updates, selects, and deletes. Any SQL/MX user can read metadata, but only DDL and utility operations can update it. Therefore, NonStop SQL/MX allows the super ID to fix incorrect metadata through standard INSERT, UPDATE, and DELETE statements using a licensed MXCI process. However, use this method only as a last resort to correct a problem.
Table 17 Label Data Associated With SQL/MX Objects (continued) Label Field Contents Catalog operation timestamp The time and date when the last operation was performed against the SQL/MX catalog. Last open timestamp The time and date when the file was last opened. CRVSN Contains the creation version sequence number associated with the SQL/MX format object stored in the file label by the DP2 for all disk files. Primary extents The number of primary extents allocated for the file.
Table 18 Run-time metadata Stored in the Resource Fork (continued) Run-time metadata Contents Index map array Contains the list of all indexes on the SQL/MX format object (if the object is a table), along with index key information and a list of partitions associated with each index. Constraint information Contains entries for droppable check constraints. This information is needed for similarity-checking to determine whether plans need to be recompiled because of a new or deleted check constraint.
VERIFY performs these actions: • Checks for inconsistencies between an SQL/MX table and its structural representation in the resource fork. This feature compares information saved in the resource fork with similar information stored in metadata and returns inconsistencies. This information is compared: • ◦ ANSI name. The ANSI name stored in the resource fork must match that stored in metadata. ◦ ANSI namespace. The ANSI namespace stored in the resource fork must match that stored in metadata.
• VERIFY obtains read-only locks on metadata while verifying an object. Other operations that read metadata can run concurrently. Operations that change metadata or labels such as DDL, partition management, PURGEDATA, and UPDATE STATISTICS statements cannot run concurrently. • To verify some objects, NonStop SQL/MX might need to access remote systems. In this case, the remote systems must be available, and you must have privileges to view information on them.
interrupted database transactions. In other cases, files marked as questionable are actually consistent. These files, although open at the time of the failure, were not actively taking part in database transactions. In many cases, you know which files are actually corrupt and which are consistent. Normally, it is better to allow the TMF file recovery method to recover all the files and determine which are corrupt and which are not. See “Recovering Files” (page 233).
For example, PURGEDATA sets the corrupt bit before deleting rows. If PURGEDATA fails before deleting all rows, the corrupt bit remains set. If you determine that even though PURGEDATA failed it is acceptable to use the table, you can reset the corrupt bit. Changing the Redefinition Timestamp The redefinition timestamp changes whenever a DDL change made on the object results in a structure change.
CREATE TABLE cat.sch.table1 (Col1 int not null primary key, Col2 int not null, Col3 timestamp not null no default, Col4 char (20)) Partition (Add first key (1000) location \local.$data01 name part2, Add first key (2000) location \remote.$data01 name part3) Location \local.$data02 name part1; 2. Create three partitions with these names: • \LOCAL.$DATA03.ZSDWUEIR.SUDIFO00 • \LOCAL.$DATA02.ZSDWUEIR.SUDIFP00 • \REMOTE.$DATA02.ZSDWUEIR.SUDIFQ00 CREATE INDEX index1 (col2) ON table1 Location \local.
Using GOAWAY to Delete Damaged Objects SQL/MX files consist of two physical Guardian files: data fork and resource fork. Normally, when the data fork is dropped, DP2 automatically drops the corresponding resource fork. In some cases, either an orphaned resource fork or data fork can exist. mxtool GOAWAY is an OSS command-line utility run from mxtool that removes SQL/MX format file labels for both the resource and the data forks associated with a Guardian file.
ALTER DISK, ALTNAME and ALTER DISK, VOLNAME
CAUTION: ALTER DISK, ALTNAME and ALTER DISK, VOLNAME are extremely dangerous commands because they can corrupt a database. Before using an ALTER DISK command, perform a STOPOPENS DISK and a CONTROL DISK, REFRESH on the volume to ensure valid file labels, and then perform a STOP DISK to make the volume inactive. The ALTER DISK operation renames the files on a volume, but SQL/MX metadata and file labels still contain the old names.
RECOVER FILES $VOL.*.*, FROMARCHIVE 2. 3. 4. Check that dependent objects residing on other volumes have also been recovered and re-create objects as necessary. Verify the database by using the mxtool VERIFY utility. For more information, see the SQL/MX Reference Manual. Make new TMF online dumps of SQL/MX objects that reside on the volume. If the entire object was re-created, take dumps of the entire object even if one or more partitions reside on different volumes.
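For example, a minimal sketch of taking the new online dumps from TMFCOM, using a Guardian file-set pattern such as the one MXGNAMES produces (the volume and subvolume names are illustrative only):
DUMP FILES $DATA10.ZSDT5356.*
For the DUMP FILES syntax and options, see the TMF documentation.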
11 Managing an SQL/MX Distributed Database SQL/MX databases can be distributed over disk volumes on a single system (node) or in a network of nodes. Likewise, application programs that access NonStop SQL/MX can be distributed across CPUs in a single node or in a network. When managing a database distributed across volumes or nodes, you can usually use the same SQL statements you would use with a nondistributed database.
Managing a Network-Distributed SQL/MX Database You can distribute an SQL/MX database—and the applications that access it—across nodes in an Expand network. The database should be distributed in a way that provides a high degree of transparency to applications and users while maintaining high performance and availability. NOTE: For more information about managing a network-distributed SQL/MP database, see the SQL/MP Installation and Management Guide.
Versioning Capabilities The nodes in a distributed SQL/MX database can run different releases of SQL/MX software. Each release of SQL/MX software can identify the other releases that it is compatible and incompatible with. The SQL/MX versioning design ensures that: • Nodes with compatible software versions can interoperate. • Access to or from nodes with incompatible software versions fail predictably. SQL/MX Release 3.x does not support interoperability with SQL/MX Release 2.
when accessed by their ANSI names on remote nodes. Similarly, the metadata definitions for a database object, identified by its ANSI name, can be located and retrieved regardless of where that database object's metadata definitions are located in the network. For location-independent ANSI names, ANSI name lookup always starts with the CATSYS and SCHEMATA system schema tables on the local node. Next, the node that contains the metadata definitions for the ANSI-named object is identified.
Local Autonomy If a query requires data only from the node on which it executes, the query succeeds regardless of the availability of remote nodes on which the query’s accessed objects have partitions. For SQL/MX Release 3.x, query compilation and late name resolution of ANSI names require access to the nodes on which the accessed objects’ definition schema exist. For more information, see “Maintaining Local Autonomy in a Network for SQL/MX Release 3.x” (page 271).
names used follow the preceding rules. If you do not specify the Guardian file and subvolume names, NonStop SQL/MX automatically generates those names. To avoid incidentally meaningful but undesirable names (for example, ZSDHELL0.SMARTY00), vowels and the digit zero are never used in generated portions of names. While user-supplied names must conform to the other Guardian naming rules, they can contain vowels and the digit zero.
• The node with an automatic catalog reference for a catalog has one CAT_REFERENCES row for each node that has a reference for that catalog. • A node with an empty manual reference for a catalog has one CAT_REFERENCES row for the automatic catalog reference node and one CAT_REFERENCES row for itself. Catalog Reference Examples To better illustrate how user catalogs and their schemas are referenced in the system schema tables for SQL/MX Release 3.x, consider this next example scenario.
CAT_UID             NODE_NAME   REPLICATION_RULE (for manual replication)
12345678912345678   \N3         M

CAT_UID contains the 17-digit UID for each catalog.
• The SCHEMA_REPLICAS table on \N1 contains two rows for CAT1, one for each of its two schemas. This includes these entries in the SCHEMA_UID and NODE_NAME columns:

SCHEMA_UID          NODE_NAME
12345678912345678   \N1
23456789123456789   \N1

SCHEMA_UID contains the 17-digit UID for each schema.
Table 20 Statements and Commands for Managing Catalog References
Command              Description
CREATE CATALOG       DDL statement that creates a new catalog at the specified location on the local node.
REGISTER CATALOG     MXCI command that registers a catalog on a remote node. The remote node then has an empty catalog reference to the catalog.
UNREGISTER CATALOG   MXCI command that unregisters an empty catalog reference from a remote node.

For more information, see the SQL/MX Reference Manual.
In a distributed database, statically-compiled queries can execute even if some of the nodes in the database are not available, provided these two requirements are met: • Either the node that stores the query objects’ metadata is available, or the query does not use late name resolution of ANSI names or automatic recompilation. • The user data that the query attempts to access is stored on available disks on the local node or some other available nodes.
Table 21 (page 273) describes the status of each node in the network with respect to user data, object metadata, and applications for CAT_1 and CAT_2, and the effects of the loss of each node on the other two nodes in the network:
• The column “CAT_1 and CAT_2 User Data Present?” identifies whether user data for CAT_1 and CAT_2 objects is present on a given node in the network.
• The column “CAT_1 and CAT_2 Metadata Present?” identifies whether object metadata for CAT_1 and CAT_2 is present on a given node.
Table 22 Consequences of Network Node Loss, Reconfiguration 1

Node   CAT_1 and CAT_2       CAT_1 and CAT_2      CAT_1 and CAT_2
       User Data Present?    Metadata Present?    Applications Present?
\A     Yes                   No                   No
       The same consequences occur as were identified in the node \A row in Table 21 (page 273).
\B     Yes                   Yes                  Yes
       The same consequences occur as were identified in the node \B row in Table 21 (page 273).
       In addition, none of the applications that contain queries to CAT_1 or CAT_2 can be executed.
If CAT_1 and CAT_2 are tightly interrelated and if most application programs and queries access objects in both catalogs, the configuration described in Table 23 (page 274) would instead look like the configuration shown in Table 24 (page 275).
Example—Unregistering a Catalog From a Node
The UNREGISTER CATALOG command causes an empty catalog reference to be removed from the specified node. As a result, the catalog is no longer visible on that node.

UNREGISTER CATALOG samdbcat FROM \nodey.$data47;

Distributing SQL/MX Database Objects
SQL/MX objects can be distributed individually or distributed as partitions of tables or indexes. Objects can be distributed at creation time by the CREATE statement or at a later time by using the MODIFY utility.
   QTY_ORDERED NUMERIC (5) UNSIGNED NO DEFAULT NOT NULL)
   LOCATION \B.$DATA02;
3. From the local node \A, create an index local to \A for the table TABREM, which resides on the remote node \B:
   CREATE INDEX CAT1.SCH1.INDEXLOC ON TABREM (ORDERITEM, ORDERNUM)
   LOCATION \A.$DATA02;

Example—Creating a Table With Partitions on a Local and Remote Node
To create a table with one partition on the local node \A and one partition on the remote node \B:
1.
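A minimal sketch of the kind of statement this example builds toward, modeled on the PARTITION (ADD FIRST KEY ... LOCATION ...) syntax shown in the MXGNAMES examples later in this manual; the column definitions, first-key value, and volume names are illustrative:

CREATE TABLE CAT1.SCH1.TABPART
  ( ORDERNUM NUMERIC (6) UNSIGNED NOT NULL
  , ORDERITEM NUMERIC (4) UNSIGNED NOT NULL
  , QTY_ORDERED NUMERIC (5) UNSIGNED NO DEFAULT NOT NULL
  , PRIMARY KEY (ORDERNUM, ORDERITEM) )
  LOCATION \A.$DATA02                                       -- first (local) partition on node \A
  PARTITION (ADD FIRST KEY (500000) LOCATION \B.$DATA02);   -- second partition on remote node \B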
Using Co-located Indexes in a Distributed Database
Co-locating indexes with their parent table is an effective way of using NonStop SQL/MX to improve performance and allow for better data distribution, in both standalone and distributed environments. For example, assume the EMPLOYEES table contains an EMPLNAME and EMPLCITY column, and each employee resides in San Francisco, Chicago, or New York City.
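Continuing that example, the following is a hedged sketch of a co-located index: it is partitioned on the same EMPLCITY boundaries as the table, and each index partition is placed on the volume assumed to hold the matching table partition. The index name, key columns, first-key values, and volume names are all illustrative:

CREATE INDEX CAT1.SCH1.EMPL_CITY_NAME_IX
  ON EMPLOYEES (EMPLCITY, EMPLNAME)
  LOCATION $DATA01                                               -- assumed volume of the Chicago table partition
  PARTITION (ADD FIRST KEY ('New York City') LOCATION $DATA02    -- assumed volume of the New York City partition
           , ADD FIRST KEY ('San Francisco') LOCATION $DATA03);  -- assumed volume of the San Francisco partition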
• The operating system at a node is updated. Nodes usually run different but compatible operating system releases. Consult the current software release documents for compatibility issues between operating system releases. In some cases, upgrading the operating system release imposes compatibility issues on the database. For more information, see “The SQL/MX Environment” (page 24).
• Communication to a node is lost. Nodes can become unavailable for a variety of reasons.
a node that runs SQL/MX Release 2.x, SQL/MX Release 3.0, or SQL/MX Release 3.1, and vice versa. The following are the other restrictions:
• You cannot run SQL/MX Release 3.x and SQL/MX Release 2.x simultaneously on separate nodes of an interrelated SQL/MX database application.
12 Measuring Performance
During the life of an SQL application, you might need to measure the performance of all or part of the application. Several NonStop software products can provide statistical information about performance. Collecting these statistics requires an in-depth understanding of the system, the layout of the database tables, and the use of the application programs. You usually gather statistics when you are:
• Running a performance benchmark.
ObjectSchemaVersion: 3000
ObjectFeatureVersion: 1200
Owner: SUPER.
Attr[ 0/ 1]  dataType: 131  offset: 4   len: 2   align: 2
Attr[ 1/ 2]  dataType: 0    offset: 6   len: 15  align: 1
Attr[ 2/ 3]  dataType: 0    offset: 21  len: 20  align: 1
Attr[ 3/ 4]  dataType: 131  offset: 41  len: 2   align: 2
Attr[ 4/ 5]  dataType: 131  offset: 45  len: 2   align: 2  nullOff: 43  nullBitIdx: -1
Attr[ 5/ -1] dataType: 133  offset: 49  len: 4   align: 4  nullOff: 47  nullBitIdx: -1
==============================================================
For more information, see “Querying SQL/MX Metadata” (page 104) and the SQL/MX
With the statistics that SET STATISTICS ON and DISPLAY STATISTICS provide, you can quickly monitor the performance of a specific statement on specific objects (see the example after this list). The information can help you:
• Determine the comparative performance of similar objects. For instance, you can determine the effect of a new index on a table compared to the performance without the index, or you can determine the performance after an UPDATE STATISTICS statement.
• Display the statistics of various queries or DML statements.
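The following is a minimal MXCI sketch of that workflow; the table name is illustrative:

SET STATISTICS ON;
SELECT COUNT(*) FROM CAT1.SCH1.EMPLOYEES;
SET STATISTICS OFF;
DISPLAY STATISTICS;

With SET STATISTICS ON in effect, MXCI displays statistics after each statement executes. DISPLAY STATISTICS redisplays the statistics for the most recently executed statement.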
Figure 5 Measure Entities and Program Structures
To reduce the overhead of the Measure interface to NonStop SQL/MX, the Measure product updates more than one counter per call. The overall cost of using the Measure product depends on the frequency of the intervals for measurements and on the number of active SQL statements. As the number of active SQL statements in a program increases, the cost of performing measurements increases.
• Disk reads • Attempts at escalation of locks (successful or not) • Lock waits • Message activity • Records accessed • Records used Stream access statements do not collect SQLSTMT statistics, but FILE statistics are collected. FILE Statistics Monitor database files with the FILE entity. The FILE report provides information on all partitions of a specific file and record use by a user process.
The DISKFILE entity gathers activity statistics against all partitions of a file object. Use DISKFILE to determine if data is being evenly distributed across all the partitions in a database; you must use DISKFILE, rather than FILE, to determine how data is distributed among all partitions of a file. The DISCOPEN entity gathers statistics for partition opens by process. Use DISCOPEN to determine if the partitions in a database are being accessed evenly by the processes in an application.
• DISK-READS stores the number of physical disk I/O operations performed for this statement.
• RECOMPILES stores the number of times the statement was recompiled. For valid statements, this number should be zero. If the statement has been recompiled, the counter for each session is usually one, because an invalid statement is typically recompiled only once in a session. If this number is one or greater, consider explicitly SQL compiling the program.
• WRITES stores the number of inserts. • UPDATES stores the number of updates. • DELETES stores the number of deletes.
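The following is a hedged sketch of collecting these entities with MEASCOM, the Measure command interface; the entity specifications and file names are illustrative, and the exact command forms should be confirmed in the Measure Reference Manual:

MEASCOM
ADD FILE $DATA01.ZSDAB123.*
ADD DISKFILE $DATA01.ZSDAB123.*
ADD DISCOPEN $DATA01.ZSDAB123.*
START MEASUREMENT $DATA01.MEAS.RUN1

After the application workload has run, STOP MEASUREMENT ends the collection, and commands such as LIST DISKFILE report the per-partition counters from the measurement data file.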
13 Enhancing SQL/MX Database Performance
To achieve maximum performance, you must provide sufficient hardware to handle the throughput and size of the application database. In addition to hardware, many factors affect the performance of a database and application. Some factors are system dependent; others are application dependent. The factors discussed in this section are specific performance issues that can arise in an installed and operating SQL/MX environment.
NOTE: The WITH SHARED ACCESS option of MODIFY is supported for both range-partitioned and hash-partitioned objects, but only when the partitioning key is a prefix of the clustering key. For more information, see the SQL/MX Reference Manual.
Minimizing Contention
When managing partitions, you can minimize contention by using the WITH SHARED ACCESS option of the MODIFY TABLE or MODIFY INDEX statements. For example, this MODIFY TABLE statement uses the WITH SHARED ACCESS option to provide online execution on range-partitioned objects:

MODIFY TABLE cat1.sch1.table1
  ADD WHERE KEY = value ('San Jose')
  TO LOCATION $DATA12
  WITH SHARED ACCESS COMMIT;

When you specify the WITH SHARED ACCESS option with MODIFY, these actions occur:
1.
• Operator intervention might be necessary in these situations: ◦ To restore backed-up audit trails if the audit fix-up process does not find an audit trail online. If the requested audit trail does not exist, the request fails. ◦ To restore the online dump if an online dump on tape is needed for a newly created partition or index. ◦ In response to EMS event messages indicating when online dumps can be taken, operator intervention is required to start the online dumps.
Because the locks used in Step 1 of the operation are exclusive, they have no special priority over other locks that can also be issued on the objects. Therefore, to enable the exclusive locks required by these functions, you might need to manage the application activity as follows: 1. During Step 1, do not compile programs that require access to the metadata involved for update or that refer to affected objects. 2. During Step 2, you can resume application transaction activity. 3.
• “Using SCF ALTER DISK to Resize the SQLMXBUFFER Attribute” (page 299) • “Strategies for Reducing Reuse Failures” (page 299) For every disk volume running NonStop SQL/MX on your system, approximately 1.1 GB of virtual space must be shared between the DP2 data cache, SQL/MX buffer, and lock tables. The DP2 data cache provides temporary storage for disk blocks containing table and index data that are referenced frequently.
Causes and Symptoms of Query Plan Fragment Reuse Failures Query plan reuse failures occur when DP2 is forced to quickly drop inactive plan fragments, even those that are repeatedly used. The most likely causes for these reuse failures are: • The SQL/MX buffer is too small to store all the plan fragments repeatedly being used to support application queries.
STORAGE - Stats DISK \DMR11.$SYSTEM

SQL/MX Statistics:
Session Data bytes....... 81920 KB      Max Data bytes... 131072 KB
Total Sessions........... 0             Active Sessions.. 0

- 4KB Blocks - - - - - - - - - -        - - - Reuse - - - - - - - - -
Max...................... 32640         Attempts......... 499463
Number................... 20352         OK............... 499460
In Use................... 0             Failed FST....... 0
                                        Failed ID........ 3

• Max Data bytes identifies the configured size of the SQL/MX buffer for the disk.
The following are the Publish/Subscribe counters:

Publish/Subscribe bytes.. 54765 KB      Max Subscribers.......... 512
Keys In Use.............. 0             Subscribers Active....... 0
Resyncs.................. 0             Maximum Tables........... 512
Tables Active............ 0

• Publish/Subscribe bytes denotes the number of kilobytes allocated for publish/subscribe events.
• Max Subscribers denotes the maximum number of subscribers allowed per disk volume (512).
• Subscribers Active denotes the number of active subscribers.
• Memory Pressure denotes the number of failures that occurred because memory pressure was detected.
• No Memory denotes the number of failures that occurred because physical memory was not available for the allocation.

Using SCF ALTER DISK to Resize the SQLMXBUFFER Attribute
Use the SCF ALTER DISK command to change the size of the SQL/MX buffer on a disk volume.
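The following is a hedged sketch of such a command, issued from an SCF session; the volume name is illustrative, and the attribute value and its units are assumptions, so confirm the supported range and units for SQLMXBUFFER in the SCF Reference Manual for the Storage Subsystem:

ALTER DISK $DATA12, SQLMXBUFFER 128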
and backup CPUs for the data volume. While both are contained within virtual space, and clean pages can be stolen by the memory manager, over-configuration can lead to excessive page swapping and performance loss. In the worst-case scenario, an over-configured data cache can cause insufficient physical memory to meet the demand for pages, which can contribute to a variety of failures.
NOTE: The backup DP2 CPU for a given disk does not use the SQL/MX buffer space unless it becomes the primary DP2.
CACHE FAULTS to CACHE CALLS is even one or two percent, consider reducing the cache size, adding more physical memory to the CPU, or moving processing to other CPUs. DP2 data cache size is controlled through the Subsystem Control Facility (SCF). For more information on using SCF to set cache size, see the SCF Reference Manual for the Storage Subsystem.
amount of swap file space is not available, an SQL/MX compilation might fail, or an executing statement might return an “insufficient memory” error. See the discussion on managing kernel-managed swap files in the Kernel-Managed Swap Facility (KMSF) Manual for more information on how to use NSKCOM to monitor kernel-managed swap file use, how to monitor swap file usage by process, and how to change your KMSF configuration.
Another disadvantage of co-located indexes is that if a table partition is moved to another disk, you must remember to move its corresponding index partition. Performing FUP RELOADs to Generate More Accurate Query Plans As tables and indexes are subjected to repeated INSERT and DELETE operations, they become more fragmented and less efficiently organized. Over a period of weeks, it can become impossible for MXCMP to accurately estimate the costs for different query plans for those tables.
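The following is a hedged sketch of reorganizing one partition with FUP RELOAD; the Guardian file name is illustrative, and RATE (which limits the percentage of disk-process time the reload consumes) is one of several options described in the File Utility Program (FUP) Reference Manual:

FUP RELOAD $DATA01.ZSDAB123.TBL00100, RATE 40

FUP RELOAD runs online, so the table remains available to applications while the reorganization proceeds.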
◦ Maintaining programs, requesters, or servers to programmatically check data input can require additional programming time. In addition, your site must have methods or programs to verify that the existing tables conform to the new data checks.
◦ Programs with hard-coded validity checking cannot move as easily from one set of users to another as programs without hard-coded values.
Matching Block Split Operation to Table Usage In a table with key-sequenced organization, when an INSERT operation causes a data block to overflow, the DP2 process must make room for the new row by splitting the block in the middle and transferring some of its contents to a newly allocated block. Because NonStop SQL/MX does not support the SEQUENTIAL BLOCKSPLIT option for the CONTROL TABLE statement, you cannot split the block at the insertion point when rows are being inserted in sequence.
A Using Guardian Names with TMF, RDF, and Measure
The TMF, RDF, and Measure subsystems provide important infrastructure and management support to NonStop SQL/MX. Currently, however, the user interfaces to these subsystems require Guardian physical file names, whereas SQL/MX tables can be referenced by their ANSI logical names:
• Guardian physical file name format: [\node.][[$volume.]subvolume.]filename
• ANSI logical file name format: [[catalog.]schema.]object
NOTE: Because of the 239-character limit for EDIT files, MXGNAMES cannot support table names in the input file whose overall length is greater than 239 characters, including the dots separating the catalog, schema, and table name portions. Such names have to be specified individually on the MXGNAMES command line. SHOWDDL File SHOWDDL output is commonly saved to an OSS format text file, which has to be converted using CTOEDIT before being used as input to MXGNAMES.
create table cat.sch.t126a
  (c1 int not null
  , c2 timestamp default current_timestamp not null
  , c3 char(4) default 'abcd'
  , c4 smallint not null
  , primary key (c1,c2))
  location $vol1.zsd0126a.bxnl1r00
  partition
  (add first key (1r00) location $vol2.zsd0126a.bxnl2r00
  , add first key (2r00) location $vol3.zsd0126a.bxnl3r00
  , add first key (3r00) location $vol4.zsd0126a.bxnl4r00
  , add first key (4r00) location $vol5.zsd0126a.bxnl5r00
  , add first key (5r00) location $vol6.zsd0126a.
& -- Table CAT.SCH.T126B --&
$vol1.zsd0126a.bxnw1r*, &
$vol2.zsd0126a.bxnw2r*, &
$vol3.zsd0126a.bxnw3r*, &
$vol4.zsd0126a.bxnw4r*, &
&
-- index T126B_NDX1 on CAT.SCH.T126B -- &
$vol1.zsd0126a.qdx1g1*,&
$vol2.zsd0126a.qdx1g2*,&
$vol3.zsd0126a.qdx1g3*,&
$vol4.zsd0126a.qdx1g4*,&
$vol5.zsd0126a.qdx1g5* &
-- end of Table CAT.SCH.T126B--)

TMF Example 2: Input Is SHOWDDL Text

MXGNAMES -Showddl=$VOL1.SQLSTUFF.SHOWD123 -output=NAMELST -TMF

Given the file SHOWD123:

CREATE TABLE CAT.SCH.
ADD FIRST KEY (100) LOCATION \NSK.$VOL2.ZSD0126A.QDXWG200 NAME PART_S_T_U ,
ADD FIRST KEY (500) LOCATION \NSK.$VOL3.ZSD0126A.QDXWG300 NAME PART_Y_Z_1 ,
ADD FIRST KEY (700) LOCATION \NSK.$VOL4.ZSD0126A.QDXWG400 NAME PART_A_Z_1 ) ;

The resulting contents of file NAMELST would be:

(-- Table CAT.SCH.T126A -- &
$vol1.zsd0126a.bxnl1r*,&
$vol2.zsd0126a.bxnl2r*,&
$vol3.zsd0126a.bxnl3r*,&
$vol4.zsd0126a.bxnl4r*,&
$vol5.zsd0126a.bxnl5r*,&
$vol6.zsd0126a.bxnl6r*,&
&
-- index T126A_NDX1 on CAT.SCH.T126A -- &
$vol1.
LOCATION ( \pnode.$vol1.zsd0126a.bxnl1r00 TO \bnode.$vol1.zsd0126a.bxnl1r00,
           \pnode.$vol2.zsd0126a.bxnl2r00 TO \bnode.$vol2.zsd0126a.bxnl2r00,
           \pnode.$vol3.zsd0126a.bxnl3r00 TO \bnode.$vol3.zsd0126a.bxnl3r00,
           \pnode.$vol4.zsd0126a.bxnl4r00 TO \bnode.$vol4.zsd0126a.bxnl4r00,
           \pnode.$vol5.zsd0126a.bxnl5r00 TO \bnode.$vol5.zsd0126a.bxnl5r00,
           \pnode.$vol6.zsd0126a.bxnl6r00 TO \bnode.$vol6.zsd0126a.bxnl6r00,
           \pnode.$vol1.zsd0126a.qdxwg100 TO \bnode.$vol1.zsd0126a.qdxwg100,
           \pnode.$vol2.zsd0126a.
\pnode.$vol3.zsd0126a.qdx1g300 TO \bnode.$vol3.zsd0126a.qdx1g300,
\pnode.$vol4.zsd0126a.qdx1g400 TO \bnode.$vol4.zsd0126a.qdx1g400)

Restore Example 3: Input Is an SQL/MX Table Name

MXGNAMES CAT.SCH.