HP NonStop SQL/MP Installation and Management Guide

Abstract

This manual explains how to install HP NonStop™ SQL/MP, the HP relational database management system, and how to plan, create, and manage SQL/MP databases and applications.

Product Version

NonStop SQL/MP G06

Supported Release Version Updates (RVUs)

This publication supports G06.22 and all subsequent G-series RVUs until otherwise indicated by its replacement publication.
Document History

Part Number  Product Version     Published
520680-001   NonStop SQL/MP G06  November 2001
523353-001   NonStop SQL/MP G06  February 2002
523353-002   NonStop SQL/MP G06  December 2003
523353-003   NonStop SQL/MP G06  March 2004
523353-004   NonStop SQL/MP G06  December 2004
Contents

What's New in This Manual xiii
  Manual Information xiii
  New and Changed Information xiii
About This Manual xv
  Audience and Task Analysis xv
  Prerequisites xv
  NonStop SQL/MP Library xvi
  Related Manuals xvii
  Notation Conventions xviii
5. Creating a Database (continued)
  Defining Unique Indexes 5-47
  Creating Index Partitions 5-48
  Specifying Parallel Loading of Index Partitions 5-49
  Creating Constraints on Data 5-51
  Using the CREATE CONSTRAINT Statement 5-52
  Examples of Creating Constraints 5-54
  Creating Collations 5-55
  Creating Collation Source Files 5-56
  Creating Collation Objects 5-59
  Securing Collations 5-59
C. Format 2 Partitions
  Planning for SQL Format 2-Enabled Tables and Format 2 Partitions C-1
  General Planning Considerations C-5
  Operational Considerations for SQL Format 2-Enabled Table Use C-7
  Fallback Considerations C-8
    Fallback Scenario 1 C-8
    Fallback Scenario 2 C-8
    Fallback Scenario 3 C-9
  Interoperability Considerations C-12
  Third-Party Provider Considerations C-13
What's New in This Manual

Manual Information
New and Changed Information
About This Manual

The SQL/MP relational database management system (RDBMS) uses the Structured Query Language (SQL) to create databases and to describe and manipulate data. SQL/MP is compatible with the American National Standards Institute (ANSI) and International Organization for Standardization (ISO) standards for SQL. SQL/MP provides high performance for production online transaction processing (OLTP) applications over a full range of centralized or distributed systems.
queries, and the SQL/MP Version Management Guide, which explains how to work with different versions of SQL/MP software, objects, and programs.

NonStop SQL/MP Library

This manual is part of the SQL/MP library of manuals. The library also includes these manuals:

• Introduction to NonStop SQL/MP provides an overview of the SQL/MP relational database management system.
Related Manuals

This figure illustrates the relationships between the SQL/MP Installation and Management Guide and other manuals in the SQL/MP library.
• Measure Reference Manual describes the use of the Measure product to collect statistical information on database objects and processes, and to generate reports for performance analysis.
• Open System Services Shell and Utilities Reference Manual describes how to use OSS utilities; this information is useful for OSS SQL programs.
General Syntax Notation

UPPERCASE LETTERS. Uppercase letters indicate keywords and reserved words. Type these items exactly as shown. Items not enclosed in brackets are required. For example:

  MAXATTACH

lowercase italic letters. Lowercase italic letters indicate variable items that you supply. Items not enclosed in brackets are required. For example:

  file-name

computer type. Computer type letters within text indicate C and Open System Services (OSS) keywords and reserved words.
… Ellipsis. An ellipsis immediately following a pair of brackets or braces indicates that you can repeat the enclosed sequence of syntax items any number of times. For example:

  M address [ , new-value ]…
  [ - ] {0|1|2|3|4|5|6|7|8|9}…

An ellipsis immediately following a single syntax item indicates that you can repeat that syntax item any number of times. For example:

  "s-char…"

Punctuation.
!i:i. In procedure calls, the !i:i notation follows an input string parameter that has a corresponding parameter specifying the length of the string in bytes. For example:

  error := FILENAME_COMPARE_ ( filename1:length     ! i:i
                             , filename2:length ) ; ! i:i

!o:i. In procedure calls, the !o:i notation follows an output buffer parameter that has a corresponding input parameter specifying the maximum length of the output buffer in bytes.
either vertically, with aligned braces on each side of the list, or horizontally, enclosed in a pair of braces and separated by vertical lines. For example:

  obj-type obj-name state changed to state, caused by
  { Object | Operator | Service }
  process-name State changed from old-objstate to objstate
  { Operator Request. }
  { Unknown.          }

| Vertical Line.
Change Bar Notation

Change bars are used to indicate substantive differences between this manual and its preceding version. Change bars are vertical rules placed in the right margin of changed portions of text, figures, tables, examples, and so on. Change bars highlight new or revised information. For example:

  The message types specified in the REPORT clause are different in the COBOL environment and the Common Run-Time Environment (CRE).
1 The SQL/MP Database Management Environment

Managing an SQL/MP relational database typically involves managing sets of continuously active programs in addition to the database itself. In this environment, you must add new applications and disk volumes, and repair or change existing applications, all without affecting other applications currently running on your system.
be part of more than one database. Each node (system) can have more than one SQL database, and a database can span more than one node.

You use SQL/MP through SQL statements, SQLCI commands, and SQL compiler directives.
Views

A view is a table that has a logical definition and a file label, but contains no data. A protection view is derived from a single table by selecting a subset of the table columns and rows. A shorthand view is derived by selecting columns and rows from one or more tables and views.
Figure 1-1 shows typical use of a data dictionary. DDL statements and the SQL compiler access both the catalogs and the file labels, but SQL compiled programs access only the file labels of objects during program execution. An executing program requires catalog access only when the program requires automatic recompilation.
If you do not recompile a program explicitly, an SQL recompilation occurs automatically before each execution of the program to ensure it uses the current definition of the table.
• File recovery, which reconstructs specific audited files when the current copies on the data volume are not usable; for example, if a system or media failure jeopardizes the consistency of one or more audited files

The BACKUP utility provides volume-mode and file-mode tape backups for database objects and for SQL programs stored in Guardian files, which you can recover with RESTORE.
For each table, you can define one or more integrity constraints that the disk process checks before inserting a row into the table. Each constraint is an SQL object. If a row does not satisfy a condition specified in a constraint, the disk process does not insert the row. Likewise, update values for existing rows must satisfy all constraints on the table.
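As an illustration only (the table, column, and constraint names here are hypothetical; see the SQL/MP Reference Manual for the complete CREATE CONSTRAINT syntax), a constraint that rejects nonpositive salary values might be defined like this:

  >> CREATE CONSTRAINT PERSNL.SALCHECK
  +> ON PERSNL.EMPLOYEE
  +> CHECK SALARY > 0;

Once the constraint exists, the disk process rejects any insert or update that would leave a row with a SALARY value that fails the check.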
Parallel Processing

The following types of parallel processing are available in SQL/MP:

• Parallel query processing
• Parallel join operations
• Parallel index maintenance
• Parallel index loading
• Parallel sorting
• Parallel input-output operations

Parallel query processing can provide speed-up and scale-up. Speed-up is the reduction of response time, which you can accomplish by spreading the database over multiple partitions.
and row redistribution with concurrent read and update capability available for the table or index

• Parallel table loads (using multiple SQLCI LOAD PARTONLY operations) and index loads (using CREATE INDEX or the LOAD command with PARALLEL EXECUTION ON) to reduce the time required to load the object
• Automatic recompilation or partial recompilation, which eliminates the need to terminate program execution when changes in the database occur
• When referring to SQL objects from embedded SQL statements within an SQL program compiled for the OSS environment, use Guardian names or DEFINEs.

Database Management Tasks

To manage an SQL/MP database, you must perform all the tasks required to create the database, ensure its availability to users, and perform required changes.
• Data Status Language (DSL) statements that retrieve status information about catalogs and about versions of objects and software components.

In addition, SQL/MP supplies installation commands to install SQL/MP and a set of database management utilities. DDL, DML, DCL, and DSL statements, Guardian utilities, and SQL utilities are the basic tools of the SQL/MP database administrator.
Table 1-1. SQL/MP Statements and Commands for Database Management (page 2 of 3)

CONVERT   Converts an Enscribe file definition to an equivalent SQL CREATE TABLE statement and, optionally, generates SQLCI LOAD commands for loading data from the Enscribe file into an equivalent SQL table.
COPY      Copies data to and from Enscribe files and SQL tables, adding to existing data.
Table 1-1. SQL/MP Statements and Commands for Database Management (page 3 of 3)

CREATE SYSTEM CATALOG   Creates the system catalog, including the CATALOGS table.
DROP SYSTEM CATALOG     Deletes the system catalog, including the CATALOGS table, and deletes the SQLCI2 program.
INITIALIZE SQL          Prepares a NonStop system to run SQL/MP.
DELETE                  Deletes rows from a table or view.
Table 1-2. NonStop Tools for Database Management

Program         Description
BACKUP          Copies Guardian files from disk to magnetic tape.
DataLoader/MP   Loads and maintains SQL/MP databases (designed for large decision support system (DSS) tables).
DSAP            Disk Space Analysis Program: analyzes use of space on disk volumes, reporting on factors such as free space availability and extent allocation.
2 Installing SQL/MP

After ensuring that your node meets the hardware and software requirements for SQL/MP, you can install the SQL/MP relational database management system (RDBMS). After installing SQL/MP, you can install the sample application distributed with the SQL/MP software, as explained in the SQL/MP Reference Manual. Use this application to demonstrate embedding SQL statements in host language programs, querying the catalogs, and querying and updating sample database tables.
rename the appropriate LIBOBJnn library object file to LIBOBJ before using SYSGEN to generate the node. SYSGEN accepts only one LIBOBJ file.

• For a list of supported character sets, see Defining Columns on page 5-19.

Recommendations for a node running SQL/MP are:

• Mirrored volumes are recommended, but not required, for volumes containing SQL objects.
• A minimum of 16 megabytes of memory is suggested for each processor in a node running SQL/MP.
audit trails and the dump process. For information about TMF auditing requirements, configuration guidelines, and considerations for SQL/MP, see The TMF Subsystem on page 4-10 and the TMF Planning and Configuration Guide.

Your version of TMF must be compatible with your version of SQL/MP software as noted under Hardware and Software Requirements on page 2-1.
For configuration information, see Guidelines for Configuring TMF on page 4-13.

Initializing SQL/MP

Before you can use SQL/MP for the first time, or when reinstalling the product, you must request initialization of SQL/MP by using the CREATE SYSTEM CATALOG and INITIALIZE SQL commands. The SQL/MP Reference Manual describes the syntax of these SQLCI commands.
By default, the system catalog resides on the subvolume $SYSTEM.SQL; however, you can specify an alternative volume and subvolume. The system catalog contains a table called CATALOGS, which is the system directory of catalogs. If you put the system catalog on a volume other than $SYSTEM, SQL/MP puts the CATALOGS table on a subvolume named SQL on the same volume as the rest of the system catalog.
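For example, an SQLCI command like the following (the volume name $DATA01 is a hypothetical illustration) places the system catalog on a volume other than $SYSTEM; see the SQL/MP Reference Manual for the exact CREATE SYSTEM CATALOG syntax and options:

  >> CREATE SYSTEM CATALOG $DATA01.SQL;

With a command like this, the CATALOGS table also resides on the SQL subvolume of $DATA01, as described above.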
6. Initialize SQL/MP by entering this command at the SQLCI prompt:

  >> INITIALIZE SQL;

During the installation of the SQL/MP system software, the INSTALL program places the new SQLCI2 program on the $SYSTEM.SYSTEM subvolume in the file named ZZSQLCI2. SQLCI2 is the process through which the SQL/MP conversational interface (SQLCI) communicates with the SQL/MP executor to request various functions.
can use ALTER TABLE statements to resecure the CATALOGS, USAGES, TRANSIDS, and PROGRAMS tables separately.

  >> ALTER CATALOG system-catalog SECURE "NG--";
  >> ALTER TABLE system-catalog.CATALOGS SECURE "NA--";

For more information, see Securing the System Catalog on page 5-10.

10. Verify that these programs are secured for execute access on your node:

  $SYSTEM.SYSTEM.SQLCI
  $SYSTEM.SYSTEM.SQLCI2
  $SYSTEM.SYSTEM.SQLCAT
  $SYSTEM.SYSTEM.AUDSERV
  $SYSTEM.SYSTEM.SQLCOMP
  $SYSTEM.
You can use the FUP SECURE command to alter the security of these programs and the FUP LICENSE command to license the programs, if necessary. To alter the security of the SQL sensitive program SQLCI2, you can either use the SQL ALTER PROGRAM statement or the FUP SECURE command.

Note. The SQLCI2 program is not licensed under normal circumstances. Only the super ID can license the SQLCI2 program. For more information, see Appendix A, Licensed SQLCI2 Process.
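For example, FUP commands like these (the security string and the choice of AUDSERV here are illustrative assumptions, not requirements of the installation procedure) change the Guardian security of a program file and license it:

  -FUP SECURE $SYSTEM.SYSTEM.AUDSERV, "NUNU"
  -FUP LICENSE $SYSTEM.SYSTEM.AUDSERV

The FUP LICENSE command must be issued by the super ID.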
4. If you use COBOL with SQL/MP, replace the COBOLEXT file on your $SYSTEM.SYSTEM subvolume with the correct COBOL extension file. The $SYSTEM.SYSTEM.COBOLEXT extension file installed by the INSTALL program does not contain the complete COBOL extension libraries for SQL/MP, which are stored under the name $SYSTEM.SYSTEM.COBOLEX0. To replace the existing $SYSTEM.SYSTEM.
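One way to perform the replacement (shown here as an assumed sketch; verify the procedure for your RVU) is to duplicate COBOLEX0 over COBOLEXT with a FUP command:

  -FUP DUP $SYSTEM.SYSTEM.COBOLEX0, $SYSTEM.SYSTEM.COBOLEXT, PURGE

The PURGE option replaces the existing COBOLEXT file with the complete extension library.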
Additional Installation Considerations

Installation of SQL/MP might require installation of a specific version of related software. For versioning requirements of products associated with SQL/MP, check the documentation supplied with your software.

If you install an SPR to the T6570 - National Language Character Processing product, you must SQL compile the collation compiler manually. For more information, see Step 8 on page 2-4.
• Test the features available with the newer version of the software. Create test catalogs and objects for testing purposes, rather than altering existing objects. For example, collations require a version 300 or newer catalog to register the collation. Before adding a collation, create a test catalog to associate with the collation.
If you are migrating from version 1 to version 300 or newer, you can use these version 2 features without upgrading any version 1 catalogs: parallel execution, local autonomy, parallel index maintenance, and virtual sequential block buffering.

You can have a combination of versions of catalogs on a node.
Mixed-Version Network Considerations

SQL/MP processes run at a high PIN by default. However, a process that must access objects on C-series nodes must not run at a high PIN.
For parallel index creation or parallel index load operations that access base table partitions on nodes running older versions of SQL/MP, specify the LOCALONLY option at the start of the CREATE INDEX configuration file to force RECGEN processes to run on the local node. Alternatively, start the CREATE INDEX operation from the node running the older version of software.
no user or program can access any catalogs or objects on the node. For this reason, you might want to upgrade catalogs individually. This example upgrades three user catalogs, one catalog at a time:

  >> UPGRADE CATALOG \SYS1.$VOL1.SALES TO 310;
  --- SQL operation complete.
  >> UPGRADE CATALOG \SYS1.$VOL1.INVENT TO 310;
  --- SQL operation complete.
  >> UPGRADE CATALOG \SYS1.$VOL1.PERSNL TO 310;
  --- SQL operation complete.
from the newer-version tables to the older-version tables or use SQLCI for this purpose. You must perform these tasks before installing the older-version software.

Caution. If any tables have the UNRECLAIMED FREESPACE (F) or INCOMPLETE SQLDDL OPERATION (D) attribute set, remedy the situation before downgrading the version. Otherwise, the table might be corrupt after the downgrade is completed.
• Drop any program whose program catalog version would be newer than the version of the catalog after the catalog is downgraded. For example, drop version 310 and newer programs before reverting to version 2.

You cannot downgrade a catalog to version 1, but version 2 catalogs are compatible with version 1 SQL/MP software if you access only version 1 objects. For information about catalog, object, and program versions, see the SQL/MP Version Management Guide.
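For example, a registered program can be dropped with an SQLCI statement like this (the program file name is hypothetical):

  >> DROP PROGRAM $APPL.OBJ.ORDERSVR;

Dropping the program removes its registration from the catalog; the program can be SQL compiled and registered again after the downgrade if needed.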
3. Make a list of all programs whose program format version or program catalog version is 300 or newer. You can generate a list by entering a SELECT statement like the following for each catalog on the node:

  >> SELECT PROGRAMNAME, PROGRAMFORMATVERSION,
  +> PROGRAMCATALOGVERSION
  +> FROM $CATVOL.SYS.PROGRAMS    <-- PROGRAMS table in system catalog
  +> WHERE PROGRAMCATALOGVERSION >= 300
  +> OR PROGRAMFORMATVERSION >= 300 ;
  --- SQL operation complete.
3 Understanding and Planning Database Tables

An understanding of the types of table organization and their corresponding file structures is essential for effective use and functioning of the database. This section presents conceptual information about tables and how to plan table organization. Subsequent sections describe how to create tables and associated database objects.
The following text describes primary keys and then discusses the structures of the three types of files. An understanding of these structures can help you plan for the best use of disk storage space when sizing your database, implementing economical table access methods, and analyzing various performance trade-offs. This understanding is also essential for anyone using the FILCHECK or TANDUMP utility to operate on physical file structures.
a table, a column cannot contain the null value; the column is implicitly defined as NOT NULL. User-defined primary keys facilitate generic locking as explained under Using Generic Locks on page 14-21.

System-Defined Primary Key

A system-defined primary key consists of a column named SYSKEY, generated and maintained internally by SQL. For each row (record), the SYSKEY column contains a unique system-generated value.
nonunique clustering key. Using these nonunique key values, a user can scan the file more efficiently and can also create partitions. The combined length of the clustering key, not including the appended SYSKEY column, cannot exceed 247 bytes. Columns in the clustering key definition cannot be updated.

Clustering keys have the same performance implications as primary keys, as described next under Key Levels.
For example, keys defined with these columns are level 2 (SIGNED is the default):

  KEY-1   INTEGER
  KEY-2   CHARACTER(30) DESCENDING
  KEY-3   CHARACTER(2), NUMERIC(8), DECIMAL(4), VARCHAR(20)
  KEY-4   INTEGER, DATETIME, INTERVAL

• Level 3 keys include other possible keys not included in levels 1 or 2.
key-sequenced table in which the index columns make up the primary key. By using index keys to support a unique index, you can improve performance for queries. For key level consideration, indexes always have contiguous columns because the order of the columns in the CREATE INDEX statement defining the index applies rather than the position of the columns in the underlying table.
SYSKEY appended to a clustering key. This rule also applies to entry-sequenced tables but not to relative tables.

• If you create a table with a SYSKEY column, consistency problems can result if the values of SYSKEY are used in a column of a user-defined table as a foreign key in that table. If the original table is reloaded, the SYSKEY values change because the column is system defined.
If you do not use an index, access occurs by primary or clustering key. Access can begin with the first record in the file or can be requested for only a specified range of records in the file.

Key-Sequenced Tree Structure

Key-sequenced files are physically organized as one or more bit-map blocks and a B-tree structure of index blocks and data blocks. Bit-map blocks within a structured file organize the file’s free space.
Each data block contains a header, plus one or more data records, depending on the record size and data-block size. For each data block, an entry in an index block contains the value of the key field for the first record in the data block and the address of that data block. The position of a new record inserted into a key-sequenced file is determined by the value of its primary-key field.
Figure 3-2 on page 3-10 illustrates the structure of an entry-sequenced file.
position in the file. The first record is identified by record number zero. Succeeding records are identified by ascending record numbers in increments of one. The value of the primary key is either a user-specified or system-generated relative record number. Each new row is stored at the relative record number specified by the primary key.

Figure 3-3 illustrates the structure of a relative file.
characteristic represents a factor that could influence both space and time performance of a relative file. A user performing update operations on a table stored in a relative file can update or delete rows and can lengthen or shorten values in varying-length columns up to the actual defined record length.
Determining a Database Layout

In your database scheme, users and applications can access the database with these:

• Base tables only
• Views only
• A combination of base tables and views

In addition, indexes can be an efficient underlying mechanism for data access.

Using Base Tables

You can use base tables to externalize all the data to the user.
• Reducing the overhead of returning unnecessary rows or columns, depending on the selection criteria
• Allowing the apparent structure of retrieved data to be different from the actual stored data
• Giving individual windows on the data to many users
• Allowing logical renaming of columns
• Redefining headings for columns
• Redefining help text for columns
• Presenting only the columns and rows a user must work with, instead of all the columns and rows
• Limitations on access to the data

  The SECURE clause can assign a security string to a protection view to limit access to those users who have authority to read, write to, and purge the view.

• Protection against inserting or updating rows outside the definition of the view

  A protection view defined with the WITH CHECK option specifies that only rows that satisfy the view’s definition can be inserted by users.
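As a sketch (the view, table, and column names are hypothetical; the exact clause forms are in the SQL/MP Reference Manual), a protection view that enforces its definition on inserts might look like this:

  >> CREATE VIEW PERSNL.WESTREP
  +> AS SELECT EMPNUM, LAST_NAME, REGION
  +> FROM PERSNL.EMPLOYEE
  +> WHERE REGION = "WEST"
  +> WITH CHECK OPTION;

With this definition, an insert through the view of a row whose REGION value is not "WEST" is rejected.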
Using Only Views to Externalize Data

Experienced database administrators have observed that having applications use only views as the external interface for the database scheme has certain advantages for both programs and ad hoc queries. Advantages to using only views as the external database scheme include:

• The physical database structure of the base tables is not externalized to programmers or other users.
Each index is assigned a name and is physically stored in a separate key-sequenced file that has the same file name as the index. Index files are not tables, and they cannot be queried directly through SQL; they are only a tool for providing faster access to tables.

Performance Benefits of Indexes

Indexes can improve performance by eliminating the need for the disk process to access the underlying table.
Evaluating the Benefit of a New Index

Creating alternate indexes can help the performance of some, but not all, queries. In some applications, determining when an index could be efficient might be easier than in others. Examine the WHERE clauses and ORDER BY clauses of the SELECT statements in your application. You should consider creating indexes on only the most frequently used columns.
Defining an Index

When you define an index, consider these guidelines:

• The maximum length for the rows of a nonunique index is 253 bytes. The row length includes the sum of the lengths of the columns declared for the index plus the sum of the lengths of the columns of the primary key of the underlying table.
• The maximum length for the rows of a unique index is 508 bytes.
the table. If SQL generates KEYTAG values, it sequentially numbers all indexes on a table, beginning with 1. KEYTAG 0 is the primary key.

• Indexed columns—the columns located in the column list in the CREATE INDEX statement.
• For nonunique indexes—columns of the primary key of the underlying table. These columns are required to identify rows uniquely in the index.
For another example, suppose that applications frequently access the EMPLOYEE table by employee name (and the primary key is EMPNUM).
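In that case, an index on the name columns can avoid full-table scans. A sketch (the index and column names here are hypothetical):

  >> CREATE INDEX XEMPNAME
  +> ON EMPLOYEE (LAST_NAME, FIRST_NAME);

Queries that select or order by employee name can then be satisfied through XEMPNAME, while lookups by EMPNUM continue to use the primary key.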
Creating Indexes for MIN and MAX Functions

Indexes can also improve the processing of the MIN and MAX functions. For example, consider these two queries. Suppose that an index exists on the RETAIL_PRICE column.
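The two queries might look like this (the PARTS table name is hypothetical); with an index on RETAIL_PRICE, SQL can evaluate each one by reading a single entry at one end of the index instead of scanning the table:

  >> SELECT MIN(RETAIL_PRICE) FROM PARTS;
  >> SELECT MAX(RETAIL_PRICE) FROM PARTS;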
4 Planning Database Security and Recovery

Database security and recovery are two important topics to consider before creating a database:

• Planning for security and implementing an authorization scheme is the primary protection against unauthorized user intervention. Security, however, cannot eliminate errors by authorized users.
• Planning for recovery is essential for protecting your database.
Security Guidelines

Authorization to operate on SQL tables, views, indexes, collations, and SQL programs that run in the Guardian environment is maintained by the Guardian security subsystem. Authorization for SQL programs that run in the OSS environment is maintained by the OSS security subsystem.
° The tables, views, and indexes are secured for access by servers; all application use is through programs initiated by the application environment.
° Queries on tables or views are limited to the database administrator and the super ID user.
authority to purge an object can drop that object, an authorization scheme should limit the authority for purging.

For an authorization scheme, you should establish catalog boundaries along the lines of application and user access requirements. Associate catalogs with sets of tables logically associated or used together. With this scheme, security follows the catalogs you choose.
Network databases require remote passwords (at the network level) and network security strings for both catalogs and objects to allow remote access.

• When an SQL object is created, the ownership defaults to the owner of the session or program.
Table 4-1. Authorization Requirements (page 1 of 3)
Table 4-1. Authorization Requirements (page 2 of 3)
CREATE INDEX   Local owner of the underlying table, local super ID, local group manager, or remote owner with purge authority for the table; read and write authority for the underlying table; and write authority for the USAGES table of the catalog that describes the underlying table.
CREATE TABLE   Read and write authority for all affected catalogs.
Table 4-1. Authorization Requirements (page 3 of 3)
DUP    Read authority for objects and files being duplicated; read authority for the catalogs containing the object descriptions; same authority as for CREATE statements for the types of objects being duplicated; and purge authority for target files and objects if purging is necessary.
EDIT   Read and write authority for the file to be edited.
Planning Database Security and Recovery Safeguard Security Product specifies the users and user groups who can or cannot access the object. If an access control list does not specify access permission for a particular user, the Safeguard product rejects that user’s access attempt.
The TMF Subsystem
The TMF subsystem provides transaction protection, database consistency, and database recovery.

TMF Concepts
Use of the TMF subsystem requires an understanding of these TMF elements:
• Transactions
• Audit trails
• Audit files
• Audit dumps
• Online dumps
These paragraphs give a brief overview of these elements; for more information, see the TMF Introduction.
Planning Database Security and Recovery Levels of Database Recovery Audited Files Files or tables protected by the TMF subsystem are called audited files. Only audited files have changes logged to audit trails. Files not protected by the TMF subsystem are nonaudited files and do not have changes logged. You can choose which files are to be audited on a file-by-file basis, depending on application requirements. Only files on a TMF-configured data volume can be audited.
Planning Database Security and Recovery SQL Requirements for TMF The TMF subsystem handles backout operations without operator intervention by using the audit trails automatically cycled by the TMF subsystem. The TMF backout process uses before-images in the audit trails to undo the effects of an aborted transaction. Volume Recovery Volume recovery recovers the database in the event of a disk crash or system failure.
Planning Database Security and Recovery Guidelines for Configuring TMF audited. Consider using the TMF subsystem to audit all tables, views, and indexes to ensure both the integrity of the database and a timely recovery from media failures or incomplete transactions. As a general rule, the TMF subsystem must be available when users are running SQL application programs or using SQLCI.
Planning Database Security and Recovery Guidelines for Configuring TMF backs out event records, thereby eliminating valuable historical information about events such as failures. A database with a combination of audited and nonaudited tables can be left in an inconsistent state after a failure. If a failure occurs, audited tables are recovered to the original state, but updates to nonaudited tables are left in an unknown state.
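Transaction backout, as described above, undoes an aborted transaction by applying before-images from the audit trail. A conceptual sketch in Python (this is an illustration only, not TMF's actual implementation; the audit-record layout is hypothetical):

```python
# Conceptual sketch (not TMF's actual implementation): backing
# out an aborted transaction by applying before-images from the
# audit trail in reverse order. Record layout is hypothetical.
def backout(table, audit_records):
    for rec in reversed(audit_records):
        key, before = rec["key"], rec["before"]
        if before is None:
            table.pop(key, None)    # row did not exist before: undo the insert
        else:
            table[key] = before     # restore the before-image
    return table

# The aborted transaction updated row 1 and inserted row 2:
table = {1: "new value", 2: "inserted value"}
audit = [{"key": 1, "before": "old value"},
         {"key": 2, "before": None}]
print(backout(table, audit))  # {1: 'old value'}
```

Nonaudited tables have no such before-images logged, which is why they can be left in an unknown state after a failure.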
Specifying TMF Attributes
Use these guidelines when specifying TMF attributes:
• You can define a separate audit trail for each volume or audit more than one volume in the same audit trail.
• The volume or volumes containing the audit trails must have sufficient free space to accommodate the extents required for the number of audit-trail files.
TMF Considerations in Using SQLCI
objects. Thus, if a file recovery is necessary, your online dumps already reflect the batch updates. TMF would need to apply only those database changes that occurred after the online dumps were taken.
• When you create a new table and you want to provide file recovery protection for the table, you should make an initial online dump of the file after creating it.
• Certain DDL statements invalidate previous online dumps.
For DDL statements, the catalog manager generates the appropriate number of TMF transactions for the operations, reducing the overhead associated with TMF audit trails and ensuring that the necessary locks are acquired for the operations. These system-generated transactions occur regardless of the AUTOWORK setting.
Backup Strategies
• Backing up catalogs
• Backing up partitions
• Backing up indexes
• Backing up views
• Backing up collations
• Backing up by volume or by file
• Using OBEY command files for backup operations
For information about backing up OSS files, see the discussion of the pax utility in the Open System Services Shell and Utilities Reference Manual.
Note. The BACKUP and RESTORE recovery method differs from recovery methods provided by TMF.
Daily Backups
BACKUP has two formats for files: ARCHIVEFORMAT and DP2FORMAT. ARCHIVEFORMAT is the default used by BACKUP when a system includes SQL/MP. If you need to back up non-SQL files with DP2FORMAT, you must specify a file set list that does not contain any SQL objects and specify DP2FORMAT in the BACKUP command.
Planning Database Security and Recovery Periodic Full Backups You might not need to back up subvolumes that the SIT SYSGEN tape can recover or the audit trails dumped to tape by TMF procedures. This example performs a full backup on all files except $SYSTEM.SYSTEM.*, $SYSTEM.SYSnn.*, and $AUDIT.TRAILS.*: BACKUP $TAPE, *.*.* EXCLUDE ($SYSTEM.SYSTEM.*, $SYSTEM.SYSnn.*, $AUDIT.TRAILS.
Planning Database Security and Recovery Backing Up Partitions partitions of tables and indexes; the catalog tables themselves are not backed up unless you also specify the SQLCATALOGS ON option. By using the FROM CATALOG option for an SQL database, you can achieve a backup of a catalog or list of catalogs. This clause can be useful in maintaining backups of logical groupings of SQL objects as they are grouped in the catalogs. You can use a DEFINE name for the catalog name.
Planning Database Security and Recovery Backing Up Indexes Backing Up Indexes The INDEXES option controls whether indexes are backed up automatically when the underlying table is backed up. If you specify INDEXES IMPLICIT or use this option by default, index definitions are backed up automatically with the underlying table, regardless of whether it is explicitly named in the file set list.
Planning Database Security and Recovery Backing Up Collations This example backs up the table EMPLOYEE, which has one protection view, EMPLIST, and two shorthand views, EMPSHV1 and EMPSHV2. The example shows the BACKUP command that backs up both protection and shorthand views: BACKUP $TAPE,($VOL1.SVOL.EMPLOYEE,$VOL1.SVOL.EMPSHV1, $VOL2.SVOL.EMPSHV2), LISTALL, OPEN Backing Up Collations To back up collations, specify them in the file set list in the BACKUP command, as you do tables and indexes.
Using Volume-Mode or File-Mode Backup
Choose a volume-mode or file-mode backup, depending on the results you want to achieve. For a volume-mode backup, consider these characteristics of the operation:
• BACKUP makes a physical copy of a volume on tape. The copy includes the disk data structures that are usually not apparent to the user.
Planning Database Security and Recovery Using OBEY Command Files for Recovery Another recovery method is to maintain EDIT files that contain database creation statements and commands for re-creating your database. When you first create the database, you should record the statements and commands in EDIT files. These EDIT files are called OBEY command files because you can use the OBEY command through SQLCI to run part or all the statements and commands in the files.
5 Creating a Database The task of creating an SQL/MP database consists of creating catalogs, tables, views, indexes, collations, and constraints. Before you attempt these operations, however, you should understand planning schemes and define the database layout, as described in Section 3, Understanding and Planning Database Tables. You should have already planned your security, TMF requirements, and recovery mechanisms as described in Section 4, Planning Database Security and Recovery.
A catalog has two types of components, described in these subsections:
• Catalog tables
• Indexes on catalog tables

Catalog Tables
Each SQL/MP catalog table describes either a particular type of object or some aspect of an object. For instance, the BASETABS catalog table describes base tables. Each catalog table has a name assigned by the system; for example, BASETABS.
Table Name   Table Function (page 2 of 2)
USAGES       Describes dependencies among SQL objects.
VERSIONS     Contains version information about the catalog. This is a backup copy of the version information in the CATALOGS table of the system catalog.
VIEWS        Describes views defined on base tables.
Figure 5-1 illustrates relations between catalog tables.
Figure 5-1.
Table Name   Index Name   Index Columns (page 2 of 2)
PROGRAMS     IXPROG01     GROUPID, USERID
TABLES       IXTABL01     GROUPID, USERID
USAGES       IXUSAG01     USINGOBJNAME, USINGOBJTYPE
Only IXINDE01 is a unique index. The SQL/MP Reference Manual presents a detailed description of each catalog table.

Requirements for Catalogs
A system that includes SQL/MP can have one or more catalogs to describe an SQL database.
Design Considerations
Because only a few rules apply to catalog location, many possible schemes exist for creating catalogs. Before deciding how to create your catalog structure, consider these performance issues for catalogs:
• Dependent relationships in the USAGES tables of catalogs should be easily accessible.
• Dispersing tables, views, and indexes among catalogs does not improve throughput.
Creating a Database • Performance Considerations Administrative control is easier, and perhaps safer, with fewer catalogs. If restricting the authority for users to create SQL objects is important, controlling fewer security strings is easier. If you are using the $SYSTEM default location for the system catalog, you might want to limit use of the catalog to minimize SQL/MP disk accesses on $SYSTEM during SQL compilations.
Securing Catalog Tables Creating a Database not explicitly specify a catalog in the CREATE CATALOG statement, SQL/MP creates the catalog on the current default volume and subvolume. This example specifies the location of the catalog. The catalog name is the subvolume name. >> CREATE CATALOG \SYS1.$VOL1.MFG; --- SQL operation complete. You can use a DEFINE name to specify a catalog name. In this example, the catalog is created on \SYS1.$VOL1.MFG as defined by the DEFINE =MFG.
Creating a Database Securing Catalog Tables If you specify a security string in the CREATE CATALOG statement, you must specify the catalog name. This example specifies the location of a new catalog and the security: >> CREATE CATALOG \SYS1.$VOL1.MFGCAT SECURE "GGNO"; --- SQL operation complete. When you create a catalog or alter the security string for a catalog, the catalog security applies to all the catalog tables.
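A Guardian security string such as "GGNO" gives four authorities in read, write, execute, purge order, one letter per operation. As a rough illustration, here is a Python sketch of reading such a string; the decoder itself is hypothetical, and the letter meanings shown are the commonly documented Guardian ones:

```python
# Hypothetical decoder for a four-character Guardian security
# string. Positions are read, write, execute, purge; each letter
# names who holds that authority (common Guardian meanings).
WHO = {
    "O": "owner (local)",
    "G": "group member (local)",
    "A": "any local user",
    "U": "owner (network)",
    "C": "group member (network)",
    "N": "any network user",
}

def decode_security(security):
    ops = ("read", "write", "execute", "purge")
    return {op: WHO[ch] for op, ch in zip(ops, security.upper())}

print(decode_security("GGNO"))
# {'read': 'group member (local)', 'write': 'group member (local)',
#  'execute': 'any network user', 'purge': 'owner (local)'}
```

Reading the string this way makes it easy to check, for example, that purge authority is held only by the owner before granting broader read access.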
Securing Catalog Tables Creating a Database Examples These examples show access to the catalog tables. Actual access for certain statements can depend on the security of a table, view, or index. The first example shows security that enables any network user to read or write to the catalogs in which objects are registered. Any network user can compile and register programs in this catalog and can create tables, views, and indexes. Only the super ID user in the DBA user group can drop the catalog. $VOL1.
Securing the System Catalog Creating a Database Securing the System Catalog The system catalog maintains the directory of catalogs on each system in the CATALOGS table. Except for this table, the system catalog is like any other catalog on the system. You can use the system catalog as a catalog directory only or as a general-purpose catalog. The security of the system catalog should protect the catalog from removal.
Creating a Database Determining the Organization of the Physical File For information about loading base tables, see Guidelines for Loading Tables on page 8-8. Determining the Organization of the Physical File When you create a table, you can use the ORGANIZATION clause in the CREATE TABLE statement to organize the physical file. The ORGANIZATION clause is optional. The file organization can be key-sequenced, entry-sequenced, or relative.
Creating a Database +> +> Determining the Organization of the Physical File BLOCKSIZE 4096 MAXEXTENTS 24 +> ORGANIZATION KEY SEQUENCED +> SECURE "GGOO"; --- SQL operation complete. Creating a Key-Sequenced Table With a Clustering Key If you want the rows in a key-sequenced table ordered by a column or combination of columns whose values do not uniquely identify rows, you can specify these columns as a clustering key in the CLUSTERING KEY clause of the CREATE TABLE statement.
Creating a Database Determining the Organization of the Physical File Example 5-1. Creating a Table and Dependent Objects (page 1 of 2) --- DEFINEs were previously added during this SQLCI --- session or inherited from the command interpreter. >> INFO DEFINE =MCAT; DEFINE NAME =MCAT CLASS CATALOG SUBVOL \SYS1.$VOL1.MFG >> INFO DEFINE =ORDERS; DEFINE NAME =ORDERS CLASS MAP FILE $VOL1.MFG.ORDERS >> INFO DEFINE =REPORDS; DEFINE NAME =REPORDS CLASS MAP FILE $VOL1.MFG.
Creating a Database Determining the Organization of the Physical File Example 5-1. Creating a Table and Dependent Objects (page 2 of 2) >> CREATE VIEW =REPORDS +> AS SELECT SALESREP,ORDERNUM,DELIV_DATE,ORDER_DATE +> FROM =ORDERS +> CATALOG =MCAT +> FOR PROTECTION; --- SQL operation complete. >> CREATE INDEX =XORDCUS +> ON =ORDERS (CUSTNUM) +> KEYTAG “OC” +> EXTENT (100,50) +> BLOCKSIZE 2048 +> MAXEXTENTS 24 +> ICOMPRESS +> CATALOG =MCAT; --- SQL operation complete.
Creating a Database Determining the Number of Records per Block as a department number or zip code in an employee table. This reference is made through the WHERE clause in a DELETE, SELECT, or UPDATE statement. A user performing update operations can update or delete rows and can lengthen or shorten values in varying-length columns. If the logical length of the record varies, however, the physical space consumed in a relative table is always the same.
Creating a Database Additional Guidelines for Creating Tables Thus, the actual number of rows stored in a block might be greater than your calculated value of N if VARCHAR columns use fewer bytes than their maximum byte length. Although the size of a VARCHAR column can change, the total row length cannot exceed the specified maximum record size for the table.
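The records-per-block arithmetic can be sketched as follows. The block-overhead figure used here is an illustrative assumption, not SQL/MP's exact accounting:

```python
# Hedged sketch: estimating rows per block. The 32-byte block
# overhead is an illustrative assumption, not SQL/MP's exact
# figure; consult the manual's sizing formulas for real values.
def rows_per_block(block_size, max_row_len, block_overhead=32):
    usable = block_size - block_overhead
    return usable // max_row_len

# A 4096-byte block with 200-byte maximum-length rows:
print(rows_per_block(4096, 200))  # 20
```

As the text above notes, when VARCHAR columns use fewer bytes than their maximum length, the actual number of rows stored in a block can exceed an estimate like this one, which is based on maximum row length.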
° Use AUDIT to protect the table with TMF auditing. If neither AUDIT nor NO AUDIT is specified, AUDIT is assigned by default.
° Use AUDITCOMPRESS to minimize the amount of audit-trail resources required. Use NO AUDITCOMPRESS if you need to read the complete before-images and after-images directly from the audit trails.
° Use BUFFERED, SERIALWRITES, and VERIFIEDWRITES to control the disk processing of the table.
Creating a Database Creating Tables on a System That Uses SMF Creating Tables on a System That Uses SMF If you are running SQL/MP on a system using the SMF product, you can specify a virtual volume for a table. The virtual volume is associated with a storage pool; SMF places the table on a physical volume in that storage pool. SMF chooses a physical location based on its size estimate of the file and on the available space in the pool.
Defining Columns Creating a Database Defining Columns To ensure the validity of your database, you must first define columns correctly for the use of the data and assign data types that provide the best design for your application. It is the database administrator’s task to consider how the data is used and to assign appropriate data types and constraints. To define columns for a table, specify the column definitions in the CREATE TABLE statement or in ALTER TABLE statements with the ADD COLUMN clause.
When determining the data type and attributes for a column, consider these guidelines:
• SQL/MP supports the ASCII character set and several other character sets for character data. For more information, see Defining Character Data on page 5-21.
• Specify a column default value for each column. This default value must be DEFAULT, DEFAULT SYSTEM, NOT NULL, NULL, NO DEFAULT, DEFAULT NULL, or LITERAL literal, or a valid combination of these values.
conversion in programs. Data conversions in programs can decrease application performance.
• Define varying-length columns (VARCHAR, NCHAR VARYING) as the last columns of the table. For all the other data types, the column structure within the table does not affect the performance of queries or updates. For the most efficient use of varying-length columns, however, these columns should be trailing.
• ISO 8859/1 through ISO 8859/9: 8-bit character sets, of which ASCII (a 7-bit set) is a subset
• HP Kanji: the HP representation of the character set defined in the JIS X0208 standard and commonly used in Japan
• HP KSC5601: a double-byte character set that is the Korean Industrial Standard character set
SQL/MP also supports the UNKNOWN character set for character columns that do not have a specified character set. Note.
Defining Date-Time and Time Interval Data
The data types for date-time data are:
DATETIME    Date and time data, optionally including specific time periods
DATE        Date only data
TIME        Time only data
TIMESTAMP   Date and time data
The data type for time interval data is as follows:
INTERVAL    Duration of time
Columns of the DATETIME, DATE, TIME, TIMESTAMP, and INTERVAL data types contain information about dates, times, and time intervals.
An item of type TIME indicates a time of day based on a 24-hour clock. An item of type TIME is made up of these contiguous fields:
HOUR     Hour of day        0 to 23
MINUTE   Minute of hour     0 to 59
SECOND   Second of minute   0 to 59
The TIME data type is equivalent to DATETIME HOUR TO SECOND.
• A column defined as DATETIME can have a range of DATETIME fields to limit the set of values stored: COLUMN_1 COLUMN_2 COLUMN_3
• A date-time column stores values in local civil time (LCT). The LCT is determined by the node where the SQL executor is running, based upon the TIME ZONE OFFSET and DAYLIGHT SAVINGS parameters established for the system during system generation.
• A date-time value cannot be multiplied by -1, but an INTERVAL value can be. Negative date-time values are not valid.
These guidelines apply to arithmetic operations on date-time and INTERVAL data types:
• The result of subtracting two date-time values is an INTERVAL value.
• The result of adding or subtracting INTERVAL values is an INTERVAL value.
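These rules have a close analogy in Python's datetime module, shown here only as an illustration (SQL/MP's DATETIME and INTERVAL semantics are richer than Python's):

```python
from datetime import datetime, timedelta

# datetime - datetime yields an interval (timedelta), just as
# subtracting two date-time values yields an INTERVAL value.
order_date = datetime(2004, 3, 1)
deliv_date = datetime(2004, 3, 15)
gap = deliv_date - order_date
print(type(gap).__name__, gap.days)  # timedelta 14

# interval +/- interval yields an interval.
total = gap + timedelta(days=2)
print(total.days)  # 16
```

Note that, as in the SQL/MP rules above, adding two date-time values is not meaningful in this model either; only interval arithmetic and date-time-plus-interval combinations are.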
° Declaring NO DEFAULT NOT NULL for a column requires the application to supply a value. The supplied value cannot be a null value.
• Use the DEFAULT literal clause in a column definition when a literal default value for the column is acceptable. The data type of the literal must match the data type of the column, as follows:
° Declaring a column with DEFAULT literal specifies using the default value if no entry is made for the column.
• A column used in nonunique index keys can contain null values because duplicate values are allowed in such columns.
• A column used as a unique key for a single-column index can contain null values, but only one row with a null value is allowed. Therefore, you might want to define a column used in this manner as NOT NULL. Unique multicolumn indexes can contain columns with null values.
Defining Columns Creating a Database Defining Corresponding Columns in Different Tables Data type cross-matching between tables is the responsibility of the database administrator. You must ensure that corresponding columns used in different tables are defined with the same characteristics: data type, size, default values, and constraints. For example, a part-number column defined in several tables should have the same definition. Do not define one column as PIC X(4) and another as PIC 9(4).
Table 5-1. Summary of Corresponding Data Types (page 2 of 3)
SQL/MP Column Name and Data Type   COBOL                  C                         Pascal                   TAL
E NUMERIC(4, 0)                    PIC S9(4) COMP.        short c;                  int16;                   int c;
F NUMERIC(4, 0) UNSIGNED           PIC 9(4) COMP.         unsigned short d;         cardinal;                int d /SMALLINT UNSIGNED/;
G NUMERIC(9, 2)                    PIC S9(7)V9(2) COMP.   long e; /* scale is 2 */  int32; {* scale is 2 *}  int(32) e; ! scale is 2
H NUMERIC(9, 2) UNSIGNED           PIC 9(7)V9(2) COMP.
Defining Columns Creating a Database Table 5-1.
Creating a Database Creating Table Partitions Creating Table Partitions To promote parallel processing of queries and parallel index maintenance, you should partition data across available disk volumes. For a very large table or a table used at different geographical sites, partitions can make the data more accessible and can reduce the time required for table scans by a factor almost equal to the number of partitions. Partitions are allowed for tables of all three organization types.
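For a key-sequenced table, each partition is delimited by a first-key value: conceptually, a row belongs to the last partition whose first key is less than or equal to the row's key. A hedged Python sketch of that routing (volume names and key values are hypothetical, and this is a conceptual model rather than the disk process's actual algorithm):

```python
import bisect

# Hypothetical first-key values (ascending) and the partition
# volumes they delimit.
first_keys = ["A000", "G000", "N000", "T000"]
partitions = ["$VOL1", "$VOL2", "$VOL3", "$VOL4"]

def partition_for(key):
    # Last partition whose first key is <= the row's key.
    i = bisect.bisect_right(first_keys, key) - 1
    return partitions[max(i, 0)]

print(partition_for("H123"))  # $VOL2
```

Because each partition covers a disjoint key range, a scan of the whole table can be split into one scan per partition and run in parallel, which is the source of the near-linear scan-time reduction noted above.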
Creating a Database Creating Table Partitions examples suggest possible configurations of storage pools: • You can configure a pool containing physical volumes primaried to a particular processor or set of processors. By partitioning a table across virtual volumes associated with that pool, you ensure that disk access to the table will be managed by the specified processors. SQL/MP provides a complementary mechanism for controlling (limiting) which processors run parallel queries.
Creating Table Partitions Creating a Database The trade-off of using PHYSVOL is that when you assign a partition to a particular physical volume, SMF might not be able to relocate the file (partition) automatically. For the syntax and definition of the PHYSVOL option, see the CREATE and ALTER entries in the SQL/MP Reference Manual.
Creating a Database Creating Table Partitions Note. SQL tables and indexes with many partitions (typically around 400) might cause SQLCAT, SQLUTIL, or AUDSERV processes to incur file-system error 31 or 34 because of insufficient memory in the process file segment (PFS). To increase the PFS size for any of these SQL processes, use the BIND statement CHANGE PFS command. For programs run from TACL, you can specify the PFS size in the TACL RUN command. Save the original copy of any program you modify.
Creating Table Partitions Creating a Database Sample Table Definition This example shows the definition for the PARTLOC table. This table contains data maintained at three different sites: New York, Los Angeles, and San Francisco. The table is partitioned by its primary key (which consists of LOC_CODE and PARTNUM) into three partitions, one for each site: CREATE TABLE \NY.$WHS1.INVENT.
+> CATALOG =MCAT
+> SECURE "NNOO";
--- SQL operation complete.
For more information, see the SQL/MP Reference Manual or the SQLCI HELP entry for the CREATE TABLE statement.

Securing a Base Table
Base tables are the foundation of the database, and base table security ultimately defines much of the security for views, indexes, and DML statements.
Creating Views of Base Tables Creating a Database This example shows security that enables any network user to read or write to a table. Only the development manager (user 100, 255) has the authority to perform the listed DDL operations. Although the security gives read, write, and purge access to additional users, the DDL statements, except DROP TABLE, require ownership of the table. Any user in group 100 can drop the table. $VOL2.APPLTAB.
Creating a Database Creating a Protection View ° The purge authority for the protection view must include all users who have purge authority for the underlying table. Normally, the purge authority for a protection view is the same as the purge authority for the underlying table. ° The security string at creation time must meet security dependency requirements. If the creator’s default security string violates the rules, the creation attempt fails.
Creating a Shorthand View
A shorthand view is a view derived from one or more tables and other views and defined without the PROTECTION attribute. A shorthand view can only be queried, and it can be secured only for purge access. Any user who has authority to read the underlying table or tables has authority to read the shorthand view.
Creating a Database View Security and Underlying Table Security Example 5-2. A Shorthand View of Four Joined Tables -- DEFINEs were previously added during this SQLCI -- session or inherited from the command interpreter. >> INFO DEFINE =ICAT; DEFINE NAME =ICAT CLASS CATALOG SUBVOL \SYS1.$VOL3.INVT >> INFO DEFINE =INVENTORY_VIEW1; DEFINE NAME =INVENTORY_VIEW1 CLASS MAP FILE $VOL1.MFG.IVIEW1 >> INFO DEFINE =PARTS; DEFINE NAME =PARTS CLASS MAP FILE $VOL1.MFG.
Creating Indexes on Base Tables Creating a Database This example shows a protection view on the table $VOL4.APPLTAB.EMPLOYEE. The owner of the underlying table is the same as the owner of this view. The security of the view enables any network user to read the view and any local group 200 user to update or insert rows into the view. $VOL3.APPLTAB.PREMPV2 Owner = 200,255 Security = "NG-O" This example shows a shorthand view on the table $VOL4.APPLTAB.EMPLOYEE and other tables.
Guidelines
When defining indexes, consider these general guidelines:
• When you create an index, you specify the column or columns that make up the key. The key is classified into one of three levels, depending on the data types of the columns that make up the key. The key level affects performance as described under Key Levels on page 3-4.
• When you create an index, the index inherits the type of partition array associated with the base table.
Figure 5-2. Ordering Columns Within an Index
1. Columns with equality predicates, in order of descending selectivity, from frequently used or otherwise important queries
2. Column with range (inequality) predicate with lowest selectivity
3. Nonpositioning key columns
4. Columns for index-only access
5. Primary key columns (implicitly added and maintained by NonStop SQL/MP)
Creating a Database • Creating an Index Index creation can be a long operation, depending on the size of the table and the load on the system. Therefore, two locking strategies are available: ° Default locking requires a shared table lock on the underlying table. The shared table lock ensures that no users can modify rows during the creation of the index. This lock can prohibit access to the table by other users.
Creating a Database Creating an Index a physical location based on its size estimate of the index file and on the available space in the pool. In exceptional cases you might want a file to reside on a particular physical volume. To accomplish this, you can specify a direct volume that is not in any storage pool, or you can use the PHYSVOL option to specify a particular physical volume in a pool.
Creating a Database Defining Unique Indexes Defining Unique Indexes You can use the technique of defining index keys as UNIQUE to enhance the performance of a SELECT statement that returns only one row. A unique index requires that the value of the columns that make up the index key is unique in the table. The index value is the value of columns together in the index and not the individual values of the column.
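Because uniqueness applies to the combined key value rather than to each column separately, two rows may repeat a value in one key column as long as the full combination differs. A small Python sketch of that check (the data is hypothetical):

```python
# Hedged sketch: a UNIQUE multicolumn index rejects only
# duplicate combinations of its key columns, not duplicate
# values within a single column.
def has_duplicate_keys(rows):
    seen = set()
    for key in rows:          # each key is the full column tuple
        if key in seen:
            return True
        seen.add(key)
    return False

rows = [("SMITH", 10), ("SMITH", 20), ("JONES", 10)]
print(has_duplicate_keys(rows))                    # False
print(has_duplicate_keys(rows + [("SMITH", 10)]))  # True
```

Here "SMITH" and 10 each repeat across rows, but no full (name, number) pair repeats, so a unique multicolumn index over both columns would accept the first three rows and reject the fourth.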
Creating a Database Creating Index Partitions This example creates a unique index on the EMPLOYEE table. Each row must contain a unique value (a social security number, in this case) because the index is unique. >> CREATE UNIQUE INDEX IEMPSS ON $VOL1.PERSNL.EMPLOYEE +> (EMP_SOCIAL_SEC_NUMBER ASC) +> CATALOG \SYS1.$VOL1.ADMIN; --- SQL operation complete.
Creating a Database Specifying Parallel Loading of Index Partitions Specifying Parallel Loading of Index Partitions To load the partitions of a partitioned index in parallel, use the PARALLEL EXECUTION ON option of the CREATE INDEX statement or the LOAD command. The parallel feature loads all partitions of an index at the same time by using multiple processes in parallel. The parallel feature does not load more than one index at the same time.
Creating a Database Specifying Parallel Loading of Index Partitions Using a Configuration File You can specify the processors and other configuration options for both record generators and sort processes in a configuration file. In a LOAD command, you can specify a different configuration file for each partitioned index.
Example 5-3. Sample Configuration File
== Sample configuration file for loading index
== partitions in parallel.
== Creates index AGEINDEX on table CUST, which is
== partitioned as follows:
==    $DATA1.SALES.CUST
==    $DATA2.SALES.CUST
==    $DATA3.SALES.CUST
==    \NEWYORK.$DATA1.SALES.CUST
== AGEINDEX is partitioned as follows:
==    $DATA4.SALES.AGEINDEX
==    $DATA5.SALES.AGEINDEX
==    \NEWYORK.$DATA2.SALES.AGEINDEX
==    \NEWYORK.$DATA3.SALES.

Creating Constraints on Data
Creating a Database Using the CREATE CONSTRAINT Statement inserted. You can drop or add constraints at any time, as validity requirements for the data change, without affecting the application programs. Constraints created on a table ensure that any data entered into the table satisfies the rules imposed by the constraints. To create a constraint on a table, use the CREATE CONSTRAINT statement.
Additional Guidelines
When defining constraints, also consider these guidelines:
• You can use collations in the search condition defining a constraint.
• Aggregate functions and subqueries are not allowed in the search condition defining a constraint.
• For any given row of a table, the constraint must be able to be resolved by checking only that row.
• Constraint names should be as descriptive as possible.
Creating a Database Examples of Creating Constraints ensure that the rows satisfy the constraint. With this testing technique and the automatic transactions, the operation minimizes the TMF overhead of a potentially very long transaction and reduces output to the audit trails. Examples of Creating Constraints This example creates a constraint that checks for a valid employee number in the range 0 through 9999. The INFO DEFINE command displays DEFINEs used to identify the table for the constraint.
Creating a Database Creating Collations +> EMPNUM < 9999; --- SQL operation complete. For the constraint VALID_EMPLOYEE_NUMBER, you might create this comment: >> COMMENT ON CONSTRAINT VALID_EMPLOYEE_NUMBER +> ON $VOL1.PERSNL.EMPLOYEE +> IS "VALID EMPLOYEE NUMBERS ARE 1 to 9998"; --- SQL operation complete. This example creates a constraint to ensure that the delivery date is greater than or equal to the order date: >> CREATE CONSTRAINT VALID_DELIV_DATE +> ON $VOL.SALES.
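The per-row nature of a constraint (each row must be checkable in isolation, as the guidelines above require) can be sketched in Python. The predicate mirrors the employee-number range in the example above; the row layout is hypothetical:

```python
# Hedged sketch: a table constraint acts as a per-row predicate.
# An insert or update is accepted only if the row satisfies
# every constraint on the table; no other rows are consulted.
def valid_employee_number(row):
    return 0 < row["empnum"] < 9999   # valid numbers are 1 to 9998

print(valid_employee_number({"empnum": 42}))     # True
print(valid_employee_number({"empnum": 10000}))  # False
```

This is also why aggregate functions and subqueries are disallowed in a constraint's search condition: they would require examining rows other than the one being checked.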
Creating Collation Source Files

• An SQL expression in a SELECT statement to specify different collating sequences (including upshifting) for character strings
• The WHERE clause in an UPDATE or DELETE statement to specify a different collating sequence for a column
• The CREATE INDEX statement to specify an order that differs from the base table
• The CREATE CONSTRAINT statement to specify a collating sequence for a column in the constraint

For the syntax of these statements, see th
Example 5-4. Collation Source File (pages 2 and 3 of 3)

[Pages 2 and 3 of the collation source file assign collating weights to the accented characters (codes \d192 through \d255), to punctuation and special characters (\d033 ... \d047, \d058 ... \d064, \d091 ... \d096, \d123 ... \d126, \d127), pairing each uppercase code with its lowercase equivalent; the multicolumn listing is not reproduced here.]
Creating a Database Creating Collation Objects Creating Collation Objects To create SQL collation objects, you must have the collation compiler compiled as an SQL program and registered in a catalog, as explained under Installing SQL/MP on page 2-2. When you issue the CREATE COLLATION statement, SQL invokes the collation compiler to compile the source file you specify. Note. Consider the locations for your collations carefully. Moving a collation after you create dependent objects is difficult.
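A CREATE COLLATION request might look like this sketch; the collation, source-file, and catalog names are hypothetical, and the exact syntax is in the SQL/MP Reference Manual:

>> CREATE COLLATION $VOL1.SALES.SPANISH
+> FROM $VOL1.SALES.SPANSRC
+> CATALOG $VOL1.SALES;
--- SQL operation complete.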
Securing Collations

HP NonStop SQL/MP Installation and Management Guide —523353-004
6 Querying SQL/MP Catalogs SQL/MP catalogs contain information you can use to manage the database. Obtaining information from the catalogs with SQL queries can help you determine the current status of the database. You can also use the FILEINFO and VERIFY utilities to show the status of objects and the DISPLAY USE OF utility to show usage relationships. Determining Object and Program Dependencies The USAGES catalog table stores information about dependencies between objects.
Using the DISPLAY USE OF Command

[Figure: Two catalogs, \SYS1.$V.CAT1 and \SYS2.$V.CAT2, each containing a USAGES table. In the \SYS1 USAGES table, \SYS1.$V.SV.TABLEA has dependent object \SYS2.$V.P.PROG2; in the \SYS2 USAGES table, \SYS2.$V.P.PROG2 uses \SYS1.$V.SV.TABLEA.]

You can display dependent objects by using the DISPLAY USE OF utility or by directly querying individual USAGES tables. The DISPLAY USE OF utility is convenient to use because it searches multiple USAGES tables for dependencies.
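For example, to find the objects that depend on a single table, a DISPLAY USE OF request might look like this (the table name is hypothetical):

>> DISPLAY USE OF \SYS1.$V.SV.TABLEA;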
Displaying Information About Usages by Querying the Catalog

This example shows the brief format:

Object Name                              Type  S
---------------------------------------  ----  --
\SYS1.$VOL.INVENT.SUPPLIER               TA    1
\SYS1.$VOL.INVENT.XSUPPNAM               IN    2
\SYS1.$VOL.AUSERSV.OPROG2                PG    1
\SYS1.$VOL.AUSERSV.
Querying SQL/MP Catalogs Displaying Information About Catalogs Displaying Information About Catalogs Information about all the catalogs on the node appears in the system directory of catalogs, which is the CATALOGS table in the system catalog. The CATALOGS table must reside on the subvolume SQL on the same volume as the system catalog. The default location is $SYSTEM.SQL. This query displays all SQL catalogs on the node \SYS1: >> SELECT * FROM \SYS1.$SYSTEM.SQL.
Displaying Information About Views

This example displays tables and views described in TABLES for a given owner ID. For this example, the group ID is 240, and the user ID is 100.

>> VOLUME \SYS1.$VOL1.SALES;
>> SELECT TABLENAME, TABLETYPE
+> FROM TABLES
+> WHERE GROUPID = 240 AND USERID = 100;

This example displays selected columns from a join of the TABLES and FILES catalog tables. The search condition for the join operation is WHERE TABLENAME = FILENAME.
Querying SQL/MP Catalogs Displaying Information About Constraints This example displays selected columns from a join of the TABLES and VIEWS catalog tables for group ID 240: >> VOLUME \SYS1.$VOL1.
This example displays CPRLSRCE information about a collation:

>> VOLUME \SYS1.$VOL1.SALES;
>> SELECT * FROM CPRLSRCE
+> WHERE CPRULESNAME LIKE "%FRENCH%";

Displaying Information About Columns

Information about columns appears in the COLUMNS catalog table and includes the data type definitions. You can query the COLUMNS table for information about particular column definitions, or you can obtain a list of columns for a specific table.
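As a sketch, a list of the columns of one table might be obtained like this; the table name is hypothetical, and the query assumes the COLNAME and COLNUMBER columns of the COLUMNS table:

>> VOLUME \SYS1.$VOL1.SALES;
>> SELECT COLNAME, COLNUMBER
+> FROM COLUMNS
+> WHERE TABLENAME = "\SYS1.$VOL1.SALES.ORDERS"
+> ORDER BY COLNUMBER;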
This example selects the help text for the column LAST_NAME from the table EMPLOYEE:

>> VOLUME \SYS1.$VOL1.PERSNL;
>> SELECT * FROM COMMENTS
+> WHERE OBJNAME = "\SYS1.$VOL1.PERSNL.EMPLOYEE"
+> AND OBJSUBNAME = "LAST_NAME"
+> AND OBJTYPE = "HC";

Displaying Information About Indexes

Three catalog tables contain information about keys and indexes:
• KEYS describes the columns of each primary key and index.
Displaying Information About Partitions

This TACL macro displays the keys and indexes of a table:

?TACL MACRO
#FRAME
#PUSH #INLINEECHO, #INLINEPREFIX
#SET #INLINEECHO -1
#SET #INLINEPREFIX //
#PUSH tname
#SET tname [#SHIFTSTRING /UP/ %1%]
$SYSTEM.SYSTEM.SQLCI /INLINE/
// VOLUME %2%;
//
// -- Report on the different keys and indexes of a table:
//
// SELECT I.TABLENAME, KEYTAG, KEYSEQNUMBER, COLNAME, ORDERING
//   FROM COLUMNS C, KEYS K, INDEXES I
//   WHERE C.TABLENAME = I.
Joining Catalog Tables With UNION

This example displays all the partition information for the PARTLOC table. The query uses the LIKE predicate with the wild-card character % to indicate that a string of zero or more characters is acceptable in the wild-card position of a qualifying name.

>> VOLUME \SYS1.$VOL1.
Displaying File and Security Attributes

This command obtains a detailed format listing, including the physical file name, for the SQL table whose logical name is $VIR1.SALES.ORDERS (only part of the listing is shown):

>> FILEINFO $VIR1.SALES.ORDERS, DETAIL;

$VIR1.SALES.ORDERS          20 Aug 1996, 11:41
    SQL BASE TABLE
    CATALOG $VIR1.SALES
    PHYSICAL NAME: $PVOL11.ZYS00025.
Determining Object Integrity and Consistency

• The name of a program stored in an OSS file is displayed as its Guardian file name equivalent and then in its path name format. If there is more than one path name linked to the program, only one path name is displayed (the first path name available to the current user).
• Several informational items are not displayed because they do not apply to OSS files.
Querying SQL/MP Catalogs Using VERIFY to Check Definitional Integrity catalog SALES and that have a FILECODE value less than 572. This report does not verify catalog tables because their file codes are not less than 572. >> VERIFY *.*.* FROM CATALOG SALES WHERE FILECODE < 572; ----- Verifying $VOL1.SALES.ATABLE1 $VOL1.SALES.ATABLE1 verified. ----- Verifying $VOL1.SALES.AVIEW1 $VOL1.SALES.AVIEW1 verified. ----- Verifying $VOL1.SALES.AINDEX1 $VOL1.SALES.AINDEX1 verified. --- SQL operation complete.
Querying SQL/MP Catalogs Using FILCHECK to Check Structural Consistency Using FILCHECK to Check Structural Consistency The FILCHECK utility checks the physical structure of a DP2 structured object and reports any errors.
Displaying Catalog, Object, and Program Versions Querying SQL/MP Catalogs Displaying Catalog, Object, and Program Versions The different product version updates (PVUs) of SQL/MP produce and support different versions of SQL catalogs, objects, and programs, as discussed in detail in the SQL/MP Version Management Guide.
Querying SQL/MP Catalogs Displaying Catalog, Object, and Program Versions In the information returned from either the VERSIONS or the CATALOGS table, the code A010 indicates a version 1 catalog, A011 indicates a version 2 catalog, A300 indicates a version 300 catalog, A310 indicates a version 310 catalog, A315 indicates a version 315 catalog, and so on. You can also determine the version of SQL/MP tables, views, indexes, and collations by using the FILEINFO command with the DETAIL option.
7 Adding, Altering, Removing, and Renaming Database Objects After you create a database, you can assume that the database is consistent and that application data is valid. Database management operations must ensure the same level of data consistency and validity. Any addition, alteration, or deletion to the database should be carefully planned. Only authorized persons should make changes to the active data dictionary.
Adding Catalogs

This table summarizes the objects that can be added to the database and the SQL statements you use to add the objects:

Object                Operation   SQL Statement
Catalog (all tables)  Add         CREATE CATALOG
Table                 Add         CREATE TABLE
View                  Add         CREATE VIEW
Index                 Add         CREATE INDEX
Collation             Add         CREATE COLLATION
Partition             Add         ALTER TABLE PARTONLY MOVE
                                  ALTER INDEX PARTONLY MOVE
Column                Add         ALTER TABLE ADD COLUMN
Constraint            Add         CREATE CONSTRAINT
Adding, Altering, Removing, and Renaming Database Objects Adding Tables 3. Enter the CREATE CATALOG statement, with or without the SECURE option. 4. Verify that the catalog’s security is set to allow SQL objects to be added and accessed. Adding Tables Like adding new catalogs, adding new tables does not directly affect the existing dictionary objects, except for collations. Creating a table is the first step in defining SQL object dependencies, and no existing dependencies are affected.
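Steps 3 and 4 might look like this in an SQLCI session; the catalog name and security string are hypothetical:

>> CREATE CATALOG $VOL1.SALES SECURE "NUNU";
--- SQL operation complete.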
Adding, Altering, Removing, and Renaming Database Objects Adding Views For additional information and guidelines related to adding a table, see Creating Base Tables on page 5-10. Adding Views Adding views on existing tables does not affect existing database dependencies. To add a view, use the CREATE VIEW statement, following these steps: 1. Start an SQLCI session. Enter a LOG command to initiate a log file for the statements and commands entered in this session. Keep the log for your records. 2.
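The CREATE VIEW request entered in such a session might look like this sketch of a shorthand view; all names are hypothetical:

>> CREATE VIEW $VOL1.SALES.ORDINFO
+> AS SELECT ORDERNUM, CUSTNUM
+> FROM $VOL1.SALES.ORDERS
+> CATALOG $VOL1.SALES;
--- SQL operation complete.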
Adding, Altering, Removing, and Renaming Database Objects Adding Indexes To test the effect of an index on performance, follow these steps: 1. Test a sample set of queries against the production tables by using the DISPLAY STATISTICS command to obtain the statistical information. 2. Duplicate the table or tables involved to a test location. 3. Create the new index. 4. Enter an UPDATE STATISTICS statement to update the statistical information stored in the catalog and get statistics on the new index.
Adding, Altering, Removing, and Renaming Database Objects Adding Indexes The WITH SHARED ACCESS option can be used in conjunction with the PARALLEL EXECUTION ON option if the initiating node, all nodes with base table partitions, and all nodes that will have index partitions are running version 315 or later of SQL/MP software.
Adding, Altering, Removing, and Renaming Database Objects Adding Partitions to Tables and Indexes Steps for Adding an Index To add an index, follow these steps: 1. Start an SQLCI session. Enter a LOG command to initiate a log file for the statements and commands entered in this session. Keep the log for your records. 2. Determine the name of the table for which you want to add the index. 3. Optionally, determine which programs depend on this table by using the DISPLAY USE OF command.
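The CREATE INDEX request entered later in these steps might look like this; the index, table, column, and catalog names are hypothetical:

>> CREATE INDEX $VOL2.SALES.XCUSTNAM
+> ON $VOL1.SALES.CUSTOMER (CUSTNAME)
+> CATALOG $VOL1.SALES;
--- SQL operation complete.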
Adding, Altering, Removing, and Renaming Database Objects Adding Partitions to Tables and Indexes For a relative or entry-sequenced table, the only way to add a new partition is to add an empty partition to the end of a table with the ADD PARTITION option. You cannot use the PARTONLY MOVE option with a relative or entry-sequenced table.
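For example, adding an empty partition to the end of an entry-sequenced table might look like this; the table and partition names and the extent sizes are hypothetical:

>> ALTER TABLE $VOL1.SALES.ORDHIST
+> ADD PARTITION $VOL5.SALES.ORDHIST
+> EXTENT (64, 64);
--- SQL operation complete.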
Adding, Altering, Removing, and Renaming Database Objects Adding Partitions to Tables and Indexes 4. If you are splitting a partition, check that ample disk space is available for the new partition. For information about space requirements, see the ALTER TABLE or ALTER INDEX statement in the SQL/MP Reference Manual. 5. Use the DISPLAY USE OF command to determine which programs depend on the table.
Adding Columns

You can add a column to any key-sequenced table. In addition, you can add a column to any relative table already defined with enough extra bytes in the RECLENGTH value to accommodate the new column. You cannot, however, add columns to entry-sequenced tables or to views or indexes. To add a column, use the ALTER TABLE statement with the ADD COLUMN clause. Each ALTER TABLE statement adds only one column.
Adding, Altering, Removing, and Renaming Database Objects Adding Columns To add a column, follow these steps: 1. Start an SQLCI session. Enter a LOG command to initiate a log file for the statements and commands entered in this session. Keep the log for your records. 2. Determine the name of the table to which you want to add the column. You can add columns to all key-sequenced tables or to relative tables with sufficient RECLENGTH length. 3.
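The ALTER TABLE request entered in such a session might look like this; the table name, column definition, and default value are hypothetical:

>> ALTER TABLE $VOL1.SALES.ORDERS
+> ADD COLUMN SHIP_STATUS CHAR (2) DEFAULT "NA";
--- SQL operation complete.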
Adding, Altering, Removing, and Renaming Database Objects Adding Constraints retrieve or update the column. After you SQL compile a program changed to use the new table definition, the program can use the column. • If you want to add constraints that control values in the new column or if you want to add comments on the new column, you can run the CREATE CONSTRAINT or COMMENT statement at any time after the column is added.
Adding, Altering, Removing, and Renaming Database Objects Adding Collations Adding Collations Adding collations to a database does not affect existing database dependencies. To add a collation, use the CREATE COLLATION statement, following these steps: 1. Create the collation source file, as explained under Creating Collations on page 5-55. 2. Start an SQLCI session. Enter a LOG command to initiate a log file for the statements and commands entered in this session. Keep the log for your records. 3.
Adding, Altering, Removing, and Renaming Database Objects Altering Database Objects All changes to a database require the specified authority for protection of the database. The authority to alter existing objects is controlled by the ownership and security of that object. For details on authorization requirements, see Authorization Requirements for Database Operations on page 4-5.
Object      Operation        Statement or Command (page 2 of 2)
Constraint  New definition   DROP CONSTRAINT
                             CREATE CONSTRAINT
Collation   Security         ALTER COLLATION SECURE
            Rename           ALTER COLLATION RENAME
Comment     Add/append/drop  COMMENT

Altering Catalog Attributes

Use the ALTER CATALOG statement to change the security of an entire set of catalog tables. You can alter only the security specifications of a catalog.
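For example, changing the security of a catalog might look like this; the catalog name and security string are hypothetical:

>> ALTER CATALOG $VOL1.SALES SECURE "NUNU";
--- SQL operation complete.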
Adding, Altering, Removing, and Renaming Database Objects Altering Table Attributes strategy. If you alter the audit flag, you can invalidate the most recent TMF online dump, and programs expecting an audited table will receive a TMF run-time error. You can alter a single partition of a partitioned table by specifying the PARTONLY clause in the ALTER statement. For a partitioned table, if you omit PARTONLY, the statement operates on all partitions, and all partitions must be accessible.
Adding, Altering, Removing, and Renaming Database Objects Altering Table Attributes Steps for Altering Table Attributes 1. Start an SQLCI session. Enter a LOG command to initiate a log file for the statements and commands entered in this session. Keep the log for your records. 2. Determine the name of the table you want to alter. 3. If you are altering security or the audit attribute, determine whether the change will affect current users or programs. 4. Enter the ALTER TABLE statement.
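For example, turning the audit attribute off and back on for a table might look like this sketch (the table name is hypothetical; note the caution about the audit flag given earlier):

>> ALTER TABLE $VOL1.SALES.ORDERS NO AUDIT;
--- SQL operation complete.
>> ALTER TABLE $VOL1.SALES.ORDERS AUDIT;
--- SQL operation complete.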
Adding, Altering, Removing, and Renaming Database Objects Altering View Attributes Altering View Attributes You can alter the security string specification of a view but not the attributes. You can alter the owner ID for a shorthand view but not for a protection view. To alter the security or owner ID, use the ALTER VIEW statement. Altering a view’s security neither invalidates any programs nor affects the dependent views.
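For example, changing the security string of a view might look like this; the view name and security string are hypothetical:

>> ALTER VIEW $VOL1.SALES.CUSTLIST
+> SECURE "NU-U";
--- SQL operation complete.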
Adding, Altering, Removing, and Renaming Database Objects Altering Index Attributes For more information about view attributes, see Creating Views of Base Tables on page 5-38. Altering Index Attributes The ALTER INDEX statement can alter several file and security attributes of an index. For security attributes, you can alter only CLEARONPURGE, NOPURGEUNTIL, or SECURE. The index owner and security are set and altered by those attributes of the underlying table.
Adding, Altering, Removing, and Renaming Database Objects Splitting, Moving, and Merging Partitions This example sets the maximum number of extents for a partition of an index located on $VOL1.SALES: >> ALTER INDEX $VOL1.SALES.XORDCUS +> PARTONLY MAXEXTENTS 300; --- SQL operation complete. Splitting, Moving, and Merging Partitions You can move rows within partitions of a base table or index. To do this, use the PARTONLY MOVE option with the ALTER TABLE or ALTER INDEX statement.
Adding, Altering, Removing, and Renaming Database Objects Splitting, Moving, and Merging Partitions Performing a One-Way Partition Split A one-way split moves data in the first or last part of a partition into a new partition. The remaining part of the data stays in the original partition. You can perform a one-way split with or without using the WITH SHARED ACCESS option. You can perform this move to change the format of a partition.
Adding, Altering, Removing, and Renaming Database Objects Splitting, Moving, and Merging Partitions This example shows how to move the latter portion of one partition into a new partition (a one-way split), using a define for the index name: >> ALTER INDEX =XPART_LOC +> PARTONLY MOVE +> FROM KEY “H00” UP TO LAST KEY TO =XPART_EUROPE +> CATALOG =INVENT_EUROPE +> EXTENT (8,8) SLACK 20; --- SQL operation complete.
Suppose, for example, that a partition residing on $DISK2 contains rows in the key range 3000 through 5999, and a partition residing on $DISK3 contains the key range 6000 through 8999. Suppose further that users have inserted many more rows into the lower key range than into the higher one, and that you want to adjust the partition boundaries accordingly.
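A boundary adjustment for that scenario might be sketched like this, following the FROM KEY ... UP TO ... TO pattern shown earlier; the table names, subvolume, and key value are hypothetical:

>> ALTER TABLE $DISK2.SALES.ORDERS
+> PARTONLY MOVE
+> FROM KEY 5000 UP TO LAST KEY
+> TO $DISK3.SALES.ORDERS;
--- SQL operation complete.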
Adding, Altering, Removing, and Renaming Database Objects Splitting, Moving, and Merging Partitions UNRECLAIMED FREESPACE (F) and INCOMPLETE SQLDDL OPERATION (D) Flags A split partition, merge partition, or move partition boundary operation can cause these flags to be set: • UNRECLAIMED FREESPACE (F) indicates that an SQL object contains unusable space.
Adding, Altering, Removing, and Renaming Database Objects Splitting, Moving, and Merging Partitions Operational Considerations Related to F and D Flags If an SQL object has the UNRECLAIMED FREESPACE or INCOMPLETE SQLDDL OPERATION flag set, reset the flag as described previously before backing up, moving, or duplicating the object. Otherwise, these situations occur: • • If you attempt to use SQLCI DUP on an object that has either flag set, SQL returns an error.
Adding, Altering, Removing, and Renaming Database Objects Altering Columns If there is a concern that a restored table might have had the INCOMPLETE SQLDDL OPERATION flag set, use the SQLCI ALTER TABLE PARTONLY RECOVER INCOMPLETE SQLDDL OPERATION command for the table. This step will not harm the table, even if it did not have the flag set previously. • If you attempt to use SQL DUP from a node running version 315 or earlier, SQL does not recognize the flag and proceeds with the operation.
Example 7-1. Altering a Column

* Record Definition for $VOL1.SALES.ORDERS
01 ORDERS.
   02 ORDERNUM    PIC 9(6).
   02 ORDER_DATE  PIC S9(6) COMP.
   02 DELIV_DATE  PIC S9(6) COMP.
   02 SALESREP    PIC 9(4).
   02 CUST_PO     PIC X(35).
   02 CUSTNUM     PIC 9(4).

>> LOG $VOL1.DBCHANGE.CNGLOG;
>> CREATE TABLE $VOL1.SALES.
Adding, Altering, Removing, and Renaming Database Objects Altering Comments If you alter the security of a collation, be careful not to restrict access for dependent objects and programs. Altering collation security also alters the security of all objects and programs that use the collation. This example changes the security of a collation to allow all network users access to the collation: >> ALTER COLLATION $VOL1.SALES.SPANISH +> SECURE “NU-U”; --- SQL operation complete.
Adding, Altering, Removing, and Renaming Database Objects Dropping Catalogs This table summarizes the objects you can remove from the database and the statements and commands that perform the operations. All these operations are discussed in the following paragraphs except for dropping damaged SQL objects, described in Purging Damaged Objects With the CLEANUP Utility on page 11-29.
Adding, Altering, Removing, and Renaming Database Objects Dropping Tables Dropping Tables To drop a table, use the DROP TABLE statement. Dropping a base table with dependencies is essentially dropping each of the dependent objects separately. SQL drops all the dependencies automatically.
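For example (the table name is hypothetical):

>> DROP TABLE $VOL1.SALES.OLDORDERS;
--- SQL operation complete.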
• Dropping the view also drops the view definition from the data dictionary. To re-create the environment, you must recover these definitions from backup tapes or OBEY recovery files.

To drop a view, follow these steps:
1. Start an SQLCI session. Enter a LOG command to initiate a log file for the statements and commands entered in this session. Keep the log for your records.
2.
Adding, Altering, Removing, and Renaming Database Objects Dropping Partitions of Tables and Indexes Dropping Partitions of Tables and Indexes Use the ALTER TABLE statement with the DROP PARTITION option to drop a partition of a key-sequenced, entry-sequenced, or relative table and the ALTER INDEX statement with the DROP PARTITION option to drop a partition of an index.
Adding, Altering, Removing, and Renaming Database Objects Deleting Columns Steps for Dropping Partitions This example drops an empty partition of a key-sequenced table: >> ALTER TABLE $VOL1.SALES.CUSTOMER +> DROP PARTITION $VOL5.SALES.CUSTOMER; --- SQL operation complete. To drop a partition, follow these steps: 1. Start an SQLCI session. Enter a LOG command to initiate a log file for the statements and commands entered in this session. Keep the log for your records. 2.
Adding, Altering, Removing, and Renaming Database Objects Dropping Constraints Dropping Constraints Dropping constraints on the database is similar to making a program change. Any future data inserts or updates will not have to satisfy the constraint. The DROP CONSTRAINT statement drops only the constraint definition from the catalog and does not affect the data in the table. Dropping a constraint on a table invalidates the programs that depend on the table.
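A DROP CONSTRAINT request might look like this sketch, which assumes the name-plus-table form used by the COMMENT ON CONSTRAINT example earlier; check the exact syntax in the SQL/MP Reference Manual:

>> DROP CONSTRAINT VALID_EMPLOYEE_NUMBER
+> ON $VOL1.PERSNL.EMPLOYEE;
--- SQL operation complete.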
Adding, Altering, Removing, and Renaming Database Objects Dropping Comments Dropping Comments You can drop comments at any time with no effect on the database. To drop comments, use the COMMENT statement with the CLEAR option. The CLEAR option drops all comments on the specified object.
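A request to drop all comments on a table might look like this sketch; the table name is hypothetical, and the exact placement of the CLEAR option is in the SQL/MP Reference Manual:

>> COMMENT ON TABLE $VOL1.SALES.ORDERS CLEAR;
--- SQL operation complete.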
• Both DROP and PURGE automatically initiate a TMF transaction for the operation if one has not already been started.
• You cannot drop or purge a nonaudited object within a user-defined transaction.
• Only the DROP statement is valid for purging a constraint or catalog.
• PURGE enables you to purge shadow labels, as described under Managing Shadow Disk Labels on page 11-36.
Renaming Objects

These are examples of requests for rename options:

>> ALTER TABLE SALES.CUSTOMER RENAME NYSALES.CUSTOMER;
--- SQL operation complete.
>> ALTER INDEX SALES.CUSTNAME RENAME NYSALES.CUSTNAME;
--- SQL operation complete.

To rename an object, follow these steps:
1. Start an SQLCI session. Enter a LOG command to initiate a log file for the statements and commands entered in this session. Keep the log for your records.
2.
8 Reorganizing Tables and Maintaining Data Sometimes you might need to restructure the data in a table by reloading or reorganizing the table.
Reorganizing a Database Online

° The table is reorganized within the physical or partition structure that already exists; with RELOAD, data is reorganized within the current structure only.
° The table cannot use index or data compression.

• Reorganizing a table offline with the SQLCI LOAD or COPY command:
° If the table is relative or entry-sequenced, this second method must be used.
Reorganizing Key-Sequenced Files

Before performing a RELOAD operation, consider these issues:
• The file must be key-sequenced.
• The operation can cause degraded performance. You can, however, control the amount of degradation by using the command’s RATE option. The higher the rate, the faster the reload occurs, but the more performance degrades. Conversely, the lower the rate, the slower the reload occurs, but the less performance degrades.
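The RELOAD operation discussed next might be started like this sketch; the RATE and SLACK values are illustrative:

13> FUP RELOAD $VOL.SALES.CUSTOMER, RATE 40, SLACK 50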
Reorganizing Tables and Maintaining Data Determining the Status of a Reorganization Determining the Status of a Reorganization The FUP STATUS command reports the status of a RELOAD operation. Use the STATUS command to determine if the operation has completed or has been suspended and to find out what percentage of the file has been processed. This command requests the status of the RELOAD operation started in the previous example for the table CUSTOMER: 14> FUP - STATUS $VOL.SALES.
Reorganizing Tables and Maintaining Data Reorganizing Partitions If you want to change the RATE or SLACK option, you can do so in the restart command. This command restarts the RELOAD operation for the table CUSTOMER, but adjusts the reloading rate to 20 percent: 17> FUP - RELOAD $VOL.SALES.CUSTOMER, RATE 20, SLACK 50 If you want to keep the same RATE and SLACK values when you restart the reorganization process, enter the RELOAD command without the RATE and SLACK parameters: 18> FUP - RELOAD $VOL.SALES.
Reorganizing Tables and Maintaining Data Changing Extent Size Values through 9999 and the other with numbers 10000 through 99999, respectively. When the last partition becomes full again at a later time, this operation can be repeated. Consider again the case of the table with three partitions based on the CUSTNUM key. The partitions are defined by the customer number (CUSTNUM) ranges 1 through 2999, 3000 through 5999, and 6000 through 99999, respectively.
Adding Empty Partitions

If a table is filling its primary partition or its last secondary partition, you can add an empty partition after the last existing partition to allow for growth. For relative or entry-sequenced tables, the partition is always added to the end of the table. This type of extension spreads the access to the table over additional volumes or nodes.
Guidelines for Loading Tables

• The COPY utility has a DISPLAY FORMAT option that enables you to display data on your terminal or printer. COPY can write to a tape file.
• The APPEND utility appends data to an existing table or to a partition of a key-sequenced table. The APPEND utility is like the LOAD utility except that it adds data to a table without purging the existing data. The LOAD and APPEND utilities have similar options and rules.
• If the LOAD utility fails during the LOAD operation, the target table or file is left in an invalid state and is unusable. You can restart the LOAD operation to overwrite the existing data using the same source file.

Loading Individual Partitions
Reorganizing Tables and Maintaining Data Examples of Loading Tables 1. Start an SQLCI process for each partition. One way to do this is to start each SQLCI process in the processor associated with the partition to be loaded. Another way is to start SQLCI processes in the processors associated with the data sources for the LOAD command. 2. Issue one LOAD...PARTONLY command for each SQLCI process (and thus each corresponding partition). 3.
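One of the parallel sessions might be started like this sketch; the process options, source file, and partition name are hypothetical, and the exact LOAD option syntax is in the SQL/MP Reference Manual:

> SQLCI / CPU 2, OUT $S.#LOG2, NOWAIT / &
>& LOAD $SRC2.WEEK.DATA2, $VOL2.SALES.CUSTOMER, PARTONLY, SORTED;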
Reorganizing Tables and Maintaining Data Examples of Loading Tables Loading Data From an Enscribe File This example loads data into a table from an Enscribe file. The LOAD command includes the MOVE option to match fields to columns because DELIV_DATE is missing from the source file and the field names are not the same as the column names. The layout of the Enscribe file $ENSC.SALES.ORDERS precedes the commands that create and load the new table, $VOL1.SALES.ORDERS: * Record Layout for $ENSC.SALES.
Examples of Loading Tables Reorganizing Tables and Maintaining Data This example loads data into a table from an Enscribe file. The LOAD command with the MOVE option specifies explicit matching of the source fields and target columns. The source file and target table have these differences: • • • DELIV_DATE is a new column not in the source file; this field must be defined with a DEFAULT option to perform the LOAD. The CUST_PO column size has been increased by five bytes.
Reorganizing Tables and Maintaining Data Guidelines for Copying Tables >> LOAD $ENSC.SALES.ORDERS, $VOL1.SALES.
• You can copy within a user-defined TMF transaction or, if the table is audited, COPY initiates the transaction.
• If you are copying large amounts of data, you must plan to ensure that the TMF audit trail space is large enough to handle the copied rows.
• For an audited table, you can alter the AUDIT attribute of the table to NO AUDIT before the COPY operation.

Examples of Copying Tables and Files
Reorganizing Tables and Maintaining Data Appending Data to Tables or Partitions Appending Data to Tables or Partitions The APPEND utility adds data to the end of a table or partition of a key-sequenced table. The APPEND utility is a form of the LOAD utility adapted for a specific purpose; it performs at the same speed as LOAD. Thus, the APPEND and LOAD utilities have similar options and perform similar tasks.
• Three APPEND parameters control block slack:
  ° DSLACK controls the amount of free space in a table’s data blocks.
  ° ISLACK controls the amount of free space in a table’s index blocks.
  ° SLACK controls the amount of free space in both data and index blocks.
3. Supply each APPEND command with the specific range of input data for the target partition. Three possible strategies are:
• Arrange the input data so that it is divided into separate files, each containing input for a specific target partition. Use these files as input to the APPEND commands.
• Do a SORTED append and specify FIRST KEY.
The SORTED option indicates that the input data has already been sorted; the append operation does not need to perform further sorting. The ALLOWERRORS option ensures that the append operation proceeds even if an input record contains data that cannot be converted into a target column.

> SQLCI / CPU 0, OUT $S.#LOG33, NOWAIT / &
>& APPEND $VOL1.WEEK25.PART33, $VOL33.TARGET.
an UPDATE STATISTICS statement to record current statistics for the table in the catalog. If the statistics are incorrect, the SQL compiler might not select the best access path for performance.
• You cannot include the PURGEDATA command within a user-defined TMF transaction.
• You cannot use the PURGEDATA command on an SQL program, view, catalog table, or index.
HP NonStop SQL/MP Installation and Management Guide—523353-004
9 Moving a Database
The guidelines for moving a database depend on the current database scheme in use at each site. The database administrator should analyze all the factors and develop a plan before attempting to move a database.
Determining Move Dependencies
The steps you take to move SQL objects or an entire database depend on the layout of your database. The duplication process cannot ignore or overrule the dependency requirements of database objects, described in earlier sections of this manual. The complexity of the layout of your database dictates the complexity of the statements and commands required to move the database.
Services Shell and Utilities Reference Manual. The remainder of this subsection applies to SQL tables, indexes, views, and SQL programs stored in Guardian files.

Caution. Some utilities, as well as SQLCI, let you request purge operations on target files that fall within the context of the command issued. When you refer to qualified file-set lists in such a command, the utility might inadvertently purge an object you did not expect to be purged.
DUP and BACKUP/RESTORE
The DUP utility duplicates objects from one location to another interactively. DUP can move SQL objects from one node to another when the nodes are connected in a network and the requesting user ID has the appropriate remote security to allow the transfer. The BACKUP and RESTORE utilities provide another method of moving SQL/MP objects. You can use these utilities to move objects to another location on the same node or to another node.
BACKUP/RESTORE allows these options:
° Choosing either automatic duplication or explicit duplication for dependent indexes
° Using tape handling features
° Automatically creating necessary catalogs with the AUTOCREATECATALOG ON option
° Copying the source object while other users have access to the object
° Automatically moving protection views with the underlying table
° Explicitly duplicating shorthand views named in a file set list
° Automatically
Guidelines for Name Mapping in BACKUP and RESTORE
To move interrelated SQL objects from one volume or node to another, you can use the MAP NAMES option. Use of this option requires familiarity with the way in which the BACKUP and RESTORE utilities handle file names.
Restoring Files to Multiple Nodes
When you restore a file set that resides on multiple nodes, the RESTORE utility searches the network for a node that matches the node number of each remote file on the tape. Whether you are restoring files to the node from which the files were backed up, or to another node on the same network, you need not be concerned about file names or the destination of remote files.
Restoring Files to a Different Network
When restoring files on a node that is not connected to the network where the original BACKUP process was run, RESTORE proceeds as follows:
• If all files on the BACKUP tape are in local format, the files are restored to the local node.
• If any files on the BACKUP tape are in remote format, RESTORE attempts to restore the files to the node assigned to the node number stored with the file name.
Moving the System Catalog
Although you can move the system catalog to a new location after SQL/MP has been installed, this operation is extremely complex. You should consider moving the system catalog only in special cases. These instructions are based on the assumptions that the TMF subsystem is operational, that the user is the super ID, and that the SQLCI2 program is on the $SYSTEM.SYSTEM subvolume and is described in the system catalog.
4. Create an OBEY command file you can use to insert the rows from the existing CATALOGS table into the re-created CATALOGS table. This command sequence queries the existing CATALOGS table and generates an OBEY command file that contains a report in INSERT-statement format. This command series appears in OBEY command file format for entering through SQLCI. Use this command file for CATALOGS table versions 300 and later.
For CATALOGS table versions earlier than version 300, use this OBEY command file:

-- --------------------------------------------------------------
-- BEGIN OBEY COMMAND FILE
-- COMMAND SEQUENCE TO QUERY THE CATALOGS
-- TABLE AND GENERATE A REPORT IN INSERT-STATEMENT FORMAT
-- --------------------------------------------------------------
OUT_REPORT obey-insert-file CLEAR;
RESET LAYOUT *;
SET LAYOUT PAGE_LENGTH ALL;
RESET SESSION *;
SET SESSION LIST_COUNT
To enable SQLCI to use the licensed SQLCI2 version rather than the normal SQLCI2 version, you must create the =_SQL_CI2_sys DEFINE pointing to the licensed version. This command performs this operation:

38> ADD DEFINE =_SQL_CI2_sys, CLASS MAP, FILE $SYSTEM.SYSTEM.SQLCI2L

In the ADD DEFINE command, sys is the node (system) name without the backslash.

6.
In the DROP command, sys-catalog is the volume and subvolume on which your system catalog resides. You cannot, however, enter the DROP SYSTEM CATALOG command while SQLCI2 is running, as it normally is when you are running SQLCI and have entered other commands during the current session. If you attempt to enter this command in that case, the command terminates abnormally, and SQL returns an error message.
Each INSERT statement in the file (for CATALOGS tables earlier than version 300) should look like this sample:

INSERT INTO new-catalogs-vol.$SQL.CATALOGS VALUES
("\SYS1.$VOL4.INVENT ", "SQL ", "A011", 0, "U" ) ;

13. Drop the licensed SQLCI2L process so that you cannot mistakenly use the process. You should also drop the DEFINE for the SQLCI2L file.
For information about moving an SQL program stored in a Guardian file, see Moving Programs on page 10-39. To move an SQL program stored in an OSS file, use the appropriate OSS utility.

Dropping and Re-creating Catalogs
To move a catalog, you must first drop all the objects described in it, drop the catalog, and then re-create it in a new location. Finally, restore the SQL objects to disk, referring to the new catalog name.
Moving Tables
You can move a table with the SQLCI DUP, LOAD, or COPY utility, or with the Guardian utilities BACKUP and RESTORE. Each utility involves some special considerations. The DUP and BACKUP/RESTORE utilities enable you to specify the source tables by name or by qualified file-set list. If the table and its dependent objects all reside on the same subvolume and are moved to another subvolume, you can use a target file-set list to specify the new location.
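When the table and its dependents share a subvolume, a DUP of the whole subvolume with a mapped target can be sketched as follows; the volume, subvolume, and catalog names are hypothetical:

```
>> DUP $VOL1.ADMIN.*,
+> MAP NAMES ($VOL1.ADMIN.* TO $VOL9.ADMIN.*),
+> CATALOG $VOL9.ADMIN;
```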
COPY and LOAD do not automatically move dependent objects, except for any indexes defined on the table. Both COPY and LOAD require that you create the new table before copying or loading the data. For additional information on copying or loading tables, see Section 8, Reorganizing Tables and Maintaining Data, or the SQL/MP Reference Manual.
Operational Steps
To move a table, follow these steps:
1. Determine the name of the table you want to move.
2. Determine the dependent objects with the DISPLAY USE OF command. Any dependent programs are invalidated when you drop the old table.
3. Check that sufficient space exists on the targeted volumes to create the new table and its dependent objects.
4. Create an OBEY command file for the command if the command is long or will be reused.
the ADMIN subvolume. The MAP NAMES option is used to define the target subvolumes for each object source. The CATALOG option defines the new catalog for each object. The table and its dependent objects to be moved follow:

EMPLOYEE   A table that resides on $VOL1.PERSNL, described in the catalog $VOL1.PERSNL
EMPLIST    A protection view that resides on the same subvolume, described in the same catalog as its underlying table, EMPLOYEE
XEMPNAME   An index that resides on $VOL2.
The table and dependent objects to be moved follow:

EMPLOYEE   A table that resides on $VOL1.PERSNL, described in the catalog $VOL1.PERSNL
EMPLIST    A protection view that resides on the same subvolume, described in the same catalog as its underlying table
XEMPNAME   An index that resides on $VOL2.PERSNL, described in the catalog $VOL2.PERSNL

These commands accomplish the move operation:

BACKUP $TAPE, $VOL1.PERSNL.EMPLOYEE, AUDITED, LISTALL
RESTORE $TAPE, *.*.
>> CREATE TABLE $VOL1.PERSNL.EMPLOYEE
+> LIKE $OLD.PERSNL.EMPLOYEE WITH CONSTRAINTS;
>> CREATE INDEX $VOL2.PERSNL.XEMPNAME ON $VOL1.PERSNL.EMPLOYEE
+> (LAST_NAME, FIRST_NAME)
+> CATALOG $VOL2.PERSNL;
>> ALTER TABLE $VOL1.PERSNL.EMPLOYEE NO AUDIT;
>> LOAD $OLD.PERSNL.EMPLOYEE, $VOL1.PERSNL.EMPLOYEE,
+> SORTED, SLACK 20;
>> ALTER TABLE $VOL1.PERSNL.EMPLOYEE AUDIT;
>> CREATE VIEW $VOL1.PERSNL.
specify the VIEWS EXPLICIT option, the view is moved. If the file set list does not include the view but you do specify VIEWS EXPLICIT, the view is not moved automatically. You can specify not to automatically duplicate views with the underlying table by including the VIEWS OFF option. The BACKUP and RESTORE utilities always move protection views when moving the underlying table.
The DUP utility enables you either to implicitly move all indexes with the underlying table or to turn implicit index duplication off. If you specify INDEXES OFF, no indexes are moved with the underlying table. DUP implicitly moves indexes with the underlying table by default or when you specify the INDEXES IMPLICIT option, regardless of whether the indexes are named in the file set list.
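For example, a DUP that deliberately leaves the indexes behind might be sketched as follows (object names hypothetical); without INDEXES OFF, DUP would move the indexes implicitly:

```
>> DUP $VOL1.PERSNL.EMPLOYEE,
+> MAP NAMES ($VOL1.PERSNL.* TO $VOL3.PERSNL.*),
+> CATALOG $VOL3.PERSNL, INDEXES OFF;
```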
Moving a Database to a Different Node or Different Volumes
Moving a database involves moving a set of SQL objects from one environment to another. This move might be of database objects from one group to another group or of an entire database from one node (system) to another node. This discussion examines a scenario where you want to move a complete grouping of SQL objects defined in one or more catalogs.
Steps for Moving a Database
To move a database, follow these steps, which are demonstrated in Example 9-1 on page 9-26:
1. Determine the names of the SQL catalogs involved in this move by querying the CATALOGS table.
2. Determine the names of the SQL objects involved in this move by querying the catalogs determined in Step 1.
3. Determine the dependencies with the DISPLAY USE OF command or with catalog queries.
Example 9-1. Example of Moving the Sample Database (page 1 of 5)
1. Determine the names of the SQL catalogs involved in this move by querying the CATALOGS table. You should always log the commands and information returned in the move operations.

>> LOG log-file CLEAR;
>> SELECT CATALOGNAME FROM $SYSTEM.SQL.CATALOGS;

CATALOGNAME
-------------------------
\SYS1.$DATA.INVENT
\SYS1.$DATA.PERSNL
\SYS1.$DATA.SALES

2.
Example 9-1. Example of Moving the Sample Database (page 2 of 5)
3. Determine the dependencies with the DISPLAY USE OF command or with catalog queries. The layout of the dependencies determines the structure of the correct DUP command used in Step 5. For more information about interpreting dependencies, see Catalog Mapping Schemes for DUP on page 9-30.
4. If you need a consistent copy of these tables, check that the SQL objects are not in use.
5.
Example 9-1. Example of Moving the Sample Database (page 3 of 5)

PVIEW $DATA.PERSNL.EMPLIST TO $DATA1.PERSNL.EMPLIST
INDEX $DATA.PERSNL.XEMPNAME TO $DATA1.PERSNL.XEMPNAME
INDEX $DATA.PERSNL.XEMPDEPT TO $DATA1.PERSNL.XEMPDEPT DUPLICATED
TABLE $DATA.PERSNL.JOB TO $DATA1.PERSNL.JOB DUPLICATED
TABLE $DATA.SALES.CUSTOMER TO $DATA1.SALES.CUSTOMER
PVIEW $DATA.SALES.CUSTLIST TO $DATA1.SALES.CUSTLIST
INDEX $DATA.SALES.XCUSTNAM TO $DATA1.SALES.
Example 9-1. Example of Moving the Sample Database (page 4 of 5)

>> VERIFY $DATA1.PERSNL.*;
... <-- omitted miscellaneous tables --
--- Verifying $DATA1.PERSNL.DEPT
--- $DATA1.PERSNL.DEPT verified.
--- Verifying $DATA1.PERSNL.EMPLIST
--- $DATA1.PERSNL.EMPLIST verified.
--- Verifying $DATA1.PERSNL.EMPLOYEE
--- $DATA1.PERSNL.EMPLOYEE verified.
--- Verifying $DATA1.PERSNL.JOB
--- $DATA1.PERSNL.JOB verified.
--- Verifying $DATA1.PERSNL.MGRLIST
--- $DATA1.PERSNL.
Example 9-1. Example of Moving the Sample Database (page 5 of 5)
7. SQL compile the programs. To validate the programs, you should explicitly SQL compile them with the DEFINEs pointing to the new location of the tables and views. The following commands demonstrate setting the new DEFINEs and compiling a set of SQL programs that run in the Guardian environment. The sample database programs reside in the SAMPDB subvolume.
Catalog Mapping Schemes for DUP
This DUP command uses a wild-card character for files in the source file-set list to specify that all objects residing on $VOL1.PERSNL are to be duplicated to a new volume. All the dependent objects also reside on $VOL1.PERSNL, so the MAP NAMES specification can use the wild-card character to identify the target location for all objects. All of the objects in this example are described in the catalog $VOL2.PERSNL.

>> DUP $VOL1.PERSNL.
This example shows a DUP command in which the CATALOG specification identifies individual objects. The objects residing on $VOL2.PERSNL are to be described in several catalogs.

>> DUP $VOL1.PERSNL.*,
+> MAP NAMES ($VOL1.PERSNL.* TO $VOL2.PERSNL.*),
+> CATALOG ($VOL2.PERSNL FOR $VOL2.PERSNL.EMPLOYEE,
+> $VOL2.PERSNL FOR $VOL2.PERSNL.XEMPNAME,
+> $VOL2.SALES FOR $VOL2.PERSNL.
• Using the MODIFY DICTIONARY utility to make the necessary changes, as follows:
° Use the MODIFY DICTIONARY CATALOG command to change node names in catalogs on the local node.
° Use the MODIFY DICTIONARY LABEL command to change node numbers in file labels of SQL objects and SQL object programs on the local node.
° Use the MODIFY DICTIONARY REGISTER command to register a user-defined catalog in the local system catalog.
Then, use the SELECT statement to list the catalog names:

>> LOG log-file CLEAR;
>> SELECT CATALOGNAME FROM $volume.SQL.CATALOGS;

In the example, log-file is a device, process, or disk file, and $volume is the volume on which the system catalog resides. To find all local and remote objects that need to be backed up and the catalog names of dependent objects, query the USAGES table in each catalog:

>> SELECT * FROM catalog-name.
Creating an OBEY Command File to Re-Create the Catalogs (Step 4)
Create an OBEY command file containing SQL statements that will re-create your catalogs. The statements must specify the same security and the same owners for each catalog and for each catalog table that can be individually secured. The catalog security is the security of the catalog tables, except for the USAGES, TRANSIDS, and PROGRAMS tables, which can be secured individually.
Dropping User Catalogs (Step 6)
To drop all catalogs except the system catalog from the node, enter this statement for each catalog while you are logged on as the super ID:

>> DROP CATALOG catalog-name;

If you get an error while attempting to drop a catalog, use the SQLCI CLEANUP utility, as described later under Purging Detached SQL Objects (Step 8).
Purging Detached SQL Objects (Step 8)
Run the Guardian DSAP utility with the SQL option to determine if your node has detached SQL objects:

22> DSAP $volume, SQL, DETAIL

Then use the SQLCI CLEANUP utility to purge any detached objects:

>> CLEANUP $volume.subvolume.object ! ;

For a description of the CLEANUP utility, see the SQL/MP Reference Manual.

Renaming or Renumbering Your System
Use the SYSGEN utility to rename or renumber your system.
Re-creating the SQL Catalogs (Step 1)
Use the SQLCI OBEY command to run the OBEY command file, created previously, with the commands to re-create the catalogs with the same security and ownership. The catalog owner is the user ID executing the CREATE CATALOG statement. Ownership can later be given to another user ID, if necessary, by using the ALTER CATALOG statement.

Note.
Making TMF Online Dumps (Step 5)
When you purge SQL objects, the TMF online dumps of the objects are lost. You must make new online dumps of all audited objects restored, including the catalogs. For information about making online dumps of SQL objects, see the TMF Operations and Recovery Guide.
10 Managing Database Applications
Managing your database includes supporting the operating requirements and access requirements of your application programs and maintaining valid application programs. Providing this support and maintenance can include both performance-related and operational tasks.

Program Validity
Certain DDL statements and utility commands can invalidate a program, marking the program as invalid in the catalog and in the program file label.
Unexpected Events That Can Invalidate a Program
These changes to SQL objects used by an SQL program file invalidate the program file:
• Adding a constraint to a table used by the program
• Adding a column or partition to a table used by the program (including an underlying table of a protection or shorthand view used by the program) unless the program is compiled with the CHECK INOPERABLE PLANS option and the table and any associated protection views have
catalog to register the program, and has marked the object file label as SQL-sensitive and valid, but before the compiler normally terminates.

If a compilation terminates abnormally, the TMF subsystem backs out the updates to the catalog but cannot undo the changes to the object file label because the label for an SQL object file is always nonaudited.
Determining Validity of a Program
A program is invalid if any of these are true:
• The VALID flag of the program entry in the PROGRAMS table is not set or is set to N.
• The VALID flag in the program file label is not set or does not correspond to the VALID flag in the PROGRAMS entry.
• The value of RECOMPILETIME in the program file label does not correspond to the RECOMPILETIME recorded in the PROGRAMS catalog table.
named COMPFILE. You can edit this file and use it as a command file in an OBEY command, directing the command interpreter to recompile the programs.

>> VERIFY *.*.* WHERE SQLPROGRAM, SOURCE COMPFILE CLEAR;
...
--- Verifying $VOL1.PPROGS.UEMPLIST
*** WARNING $VOL1.PPROGS.UEMPLIST IS AN INVALID PROGRAM.
--- SQL operation complete.

Querying the PROGRAMS Catalog Table
You can query the catalog tables to verify whether VALID flags are set.
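A catalog query for invalid programs might be sketched as follows; the catalog location is hypothetical, and the column names shown should be checked against the PROGRAMS table layout for your catalog version:

```
>> SELECT PROGRAMNAME, VALID
+> FROM $VOL1.PERSNL.PROGRAMS
+> WHERE VALID = "N";
```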
Logging might be especially helpful on a system where automatic recompilations are not wanted for performance reasons. Examine any program that has a recompilation logged to determine whether the program needs explicit recompilation. Use VERIFY to check the program entry in the catalog. Programs marked invalid in the catalog need explicit SQL compilation.
• The program file is registered in the catalog. This operation includes storing a description of the program in the PROGRAMS catalog table and storing usage dependencies in the USAGES tables of the catalogs in which objects referred to by this program are described.
• Explicit compilation produces an object program that can be executed without first being automatically recompiled.
• The AUDIT attribute of a table referred to by an SQL statement is altered. This does not invalidate the statement, but in these cases altering the AUDIT attribute can cause automatic recompilation:
° If a statement performs a DELETE or UPDATE set operation on a nonaudited table with a SYNCDEPTH of 1, the SQL executor returns SQL error 8203 and forces automatic recompilation of the statement.
that are invalid because objects they reference have been changed or redefined) are actually operable plans.

Note. An operable plan, while executable, might not be optimal; recompilation might improve performance. For more information, see Using Similarity Checks on page 10-15.

Performance Considerations
Performance is best if you ensure that programs do not need recompilation at run time.
NORECOMPILE, the only way to revalidate the program is to explicitly SQL compile it again. By using NORECOMPILE, you ensure that:
° All programs use the best query execution plan to enhance performance.
° Alterations to the database, object moves, or errors in the DEFINE setup cannot inadvertently affect the production environment.

If you specify RECOMPILE, determine the extent of recompilation that is best for your application.
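As a sketch, a production compile that forbids automatic recompilation might look like this; the program file, listing location, and catalog names are hypothetical, and the exact SQLCOMP option syntax should be checked in the SQL/MP Reference Manual:

```
12> SQLCOMP / IN $VOL1.PPROGS.PAYROLL, OUT $S.#LIST / &
    CATALOG $VOL1.PERSNL, NORECOMPILE
```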
Following the explicit recompilation, you can stop and start the server class to ensure that all running objects use the newly compiled program. These commands illustrate this action:

= FREEZE SERVER SRV-SDB102
= STOP SERVER SRV-SDB102
= THAW SERVER SRV-SDB102
= START SERVER SRV-SDB102

For more information about DEFINEs, see Using DEFINEs on page 10-30.
basis. At run time, these statements are automatically recompiled when the compiler directives allow this recompilation. The quality of the query execution plans depends on the accuracy of the statistics used by the compiler when determining the plan. If the statistics are not current, compilation can still produce a valid program, but the chosen query execution plan could give less than the best performance.
of an UPDATE statement, all affected partitions of the table and only the affected partitions of indexes that include columns being updated must be available.
The sample database described in the SQL/MP Reference Manual includes a sample program that uses execution-time name resolution. For sample scenarios of use, see Using DEFINEs on page 10-30.

Figure 10-1. Name Resolution for SQL Statements
[Figure: shows explicit SQL compilation by the SQLCOMP process, SQL load time, and execution-time name resolution of an SQL program file in the run-time environment.]
Using Similarity Checks
A similarity check is a comparison made by SQL to determine whether two objects (or the compile-time and execution-time versions of the same object) are sufficiently similar that a serial execution plan compiled for one can work as an operable plan for the other.
The CHECK option has three forms:
• INVALID PROGRAM specifies automatic recompilation for all SQL statements in an invalid program, or a program that refers to changed DEFINEs (if NORECOMPILE is not specified). This option is the default.
• INVALID PLANS specifies automatic recompilation for an SQL statement if either of these conditions occur (and NORECOMPILE is not specified):
° The statement is invalid.
These DDL operations do not invalidate a program compiled with the CHECK INOPERABLE PLANS option if the similarity check is enabled for each referenced object. The program also retains its entries in the USAGES table. (These operations, however, do update the redefinition timestamp of each referenced object in the DDL statement.)
• ALTER TABLE...ADD PARTITION statement
• ALTER TABLE...
Example: Preventing Recompilations After a DDL Change
To prevent recompilation, enable the similarity check for all referenced tables and protection views and compile the program with the CHECK INOPERABLE PLANS option. To do this, follow these steps:
1.
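Assuming the ALTER TABLE similarity-check syntax on your RVU matches the SQL/MP Reference Manual, the two steps might be sketched as follows (table, program, and catalog names hypothetical):

```
>> ALTER TABLE $VOL1.PERSNL.EMPLOYEE SIMILARITY CHECK ENABLE;
12> SQLCOMP / IN $VOL1.PPROGS.PAYROLL / &
    CATALOG $VOL1.PERSNL, CHECK INOPERABLE PLANS
```

Repeat the ALTER statement for every table and protection view the program references; a single object without the check enabled can still force recompilation.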
Interaction Between the CHECK Option and Other SQLCOMP Options
Table 10-1 on page 10-20 describes the actions of the SQL executor when it runs an SQL program compiled with a CHECK option and the RECOMPILE, NORECOMPILE, RECOMPILEONDEMAND, or RECOMPILEALL option for this situation. At SQL load time, the SQL executor detects invalid statements or statements that refer to a DEFINE that has changed since the last explicit SQL compilation.
Table 10-1. Behavior of the SQL Executor for an Invalid Statement or a Changed DEFINE Detected at SQL Load Time (page 1 of 2)

SQLCOMP Options        Behavior

CHECK INVALID PROGRAM Option
RECOMPILE
RECOMPILEALL           The SQL executor recompiles (in memory) all SQL statements. This option is the default behavior.
Table 10-1. Behavior of the SQL Executor for an Invalid Statement or a Changed DEFINE Detected at SQL Load Time (page 2 of 2)

SQLCOMP Options        Behavior

CHECK INOPERABLE PLANS Option
RECOMPILE
RECOMPILEALL           The SQL executor performs the similarity check as follows:
                       • If the similarity check passes, the SQL executor executes the statement using its existing plan from the program file.
RECOMPILEONDEMAND
Table 10-2 describes the behavior of the SQL executor when it encounters an invalid static or dynamic statement during the execution of an SQL program compiled with a CHECK option and the RECOMPILE or NORECOMPILE option for this situation. A statement is invalidated during program execution when a DDL operation takes place while the program runs. After SQL load time, the SQL executor detects invalid static or dynamic SQL statements.
Table 10-2. Behavior of the SQL Executor for an Invalid Statement Detected After Load Time (page 2 of 2)

SQLCOMP Option         Behavior

CHECK INOPERABLE PLANS Option (continued)
RECOMPILE (continued)  For an invalid dynamic SQL statement, the SQL executor performs the similarity check as follows:
                       • If the similarity check passes, the SQL executor executes the statement without recompilation.
• COMPILE INVALID PLANS directs the SQL compiler to explicitly compile these SQL statements:
° Statements that refer to changed DEFINEs
° Statements with plans that fail the redefinition timestamp check
° Statements with altered execution plans
° Uncompiled SQL statements with empty sections. The SQL compiler generates an empty section if an SQL statement refers to a nonexistent DEFINE or SQL object.
• Physical name
• Redefinition timestamp. The new timestamp prevents future similarity checks for the plan until the SQL object changes again.
• Partition node array. SQL uses the partition node array to determine alternate paths when partitions are unavailable to a plan.
If the program has not been previously compiled or does not contain similarity information, the COMPILE INOPERABLE PLANS option directs the SQL compiler to compile all SQL statements in the program.

New Indexes
If you add any new indexes, you might decide to explicitly SQL compile the program with the COMPILE INVALID PLANS option.
Underlying Tables
If you enable the similarity check for a protection view, the operation does not enable the check for any underlying tables. You must explicitly enable the similarity check for the underlying table. If you enable the similarity check for an underlying table, the operation does not enable the check for a protection view defined on the table.
• Any of these file attributes:
  ALLOCATE        LOCKLENGTH     SECURE
  AUDITCOMPRESS   MAXEXTENTS     SERIALWRITES
  BUFFERED        NOPURGEUNTIL   TABLECODE
  CLEARONPURGE    OWNER          VERIFIEDWRITES
  EXTENT (primary and secondary)
• Statistics on the tables
• Column headings
• Comments on columns, constraints, collations, indexes, or tables
• Catalog where the table is registered
• Help text
• Number of columns
Planning for TS/MP Requirements
Transaction processing applications can access both SQL and Enscribe databases. To access an SQL database, the servers must use embedded SQL statements coded in host-language programs such as COBOL, C, or Pascal. To access an Enscribe database, servers use languages such as COBOL, FORTRAN, C, or Pascal, or TAL statements, with calls to Enscribe I/O procedures.
Using DEFINEs
A DEFINE is a named set of attributes and associated values stored in the process file segment (PFS) of a running process. By using logical names, you can run SQLCI commands and host programs using different sets of files, depending on the DEFINEs in effect at the time. Application code that uses logical names instead of hard-coded file names can be compiled or executed to use different physical databases.
• The DEFMODE option must be set to ON to enable adding DEFINEs. The DEFMODE setting remains in effect for the duration of the command interpreter or SQLCI session in which the setting is established. These are the DEFMODE settings and the commands to set them:

SET DEFMODE ON
    Enables DEFINEs; a new process inherits the DEFINE set from the initiating process. This setting is the default value.
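For example, a command interpreter session might enable DEFINEs and add one before starting SQLCI; the DEFINE name and file are hypothetical:

```
10> SET DEFMODE ON
11> ADD DEFINE =EMPLOYEE, CLASS MAP, FILE $VOL1.PERSNL.EMPLOYEE
12> SQLCI
```

Because a new process inherits the DEFINE set from its initiating process, the SQLCI session started at prompt 12 sees =EMPLOYEE.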
• In SQLCI, the DELETE DEFINE, ALTER DEFINE, and ADD DEFINE commands apply only to the DEFINE set for the SQLCI session. For example, in an SQLCI session, if you alter a DEFINE inherited from the command interpreter process, the DEFINE is altered only for the SQLCI session. When you return to the command interpreter prompt after ending the SQLCI session, the original DEFINE is still in effect.
Using DEFINE Names With Programs

After a program that uses DEFINE names is SQL compiled and registered as a valid program in a catalog, the program is valid only for the tables and views identified by the DEFINEs at compile time. If a different set of DEFINEs is used at run time, the program is automatically recompiled with the new DEFINEs if automatic recompilation is enabled.
This example uses a logical name to identify a COBOL library. The logical name is resolved only at preprocess time, when the associated information is copied into the program.

EXEC SQL BEGIN DECLARE SECTION END-EXEC.
EXEC SQL SOURCE =COPYLIB(DEPT, JOB) END-EXEC.
EXEC SQL END DECLARE SECTION END-EXEC.

Using DEFINEs From SQLCI

These examples illustrate the use of DEFINEs with SQLCI.
Using DEFINEs to Switch Databases

>> CREATE TABLE =EMPLOYEE
+>   (EMP_NUM    PIC 9(6),
+>    EMP_NAME   PIC X(30),
+>    SS_NUMBER  PIC X(11),
+>    ADDRESS    PIC X(30),
+>    CITY       PIC X(30),
+>    ST         PIC X(2),
+>    ZIP_CODE   PIC X(5),
+>    PRIMARY KEY (EMP_NUM))
+> CATALOG =PR_CATALOG;
--- SQL operation complete.
Dynamically Selecting Different Databases

This scenario describes a situation where you have several similar SQL/MP databases and you want a program to dynamically determine which database to access. For example, your program might select the database depending on the type of transaction.
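One way to realize such a switch, sketched here with illustrative names, is to repoint the same logical name at a different physical table before the program runs:

>> ALTER DEFINE =EMPLOYEE, FILE $VOL2.TESTDB.EMPLOYEE;

If automatic recompilation is enabled, the program is recompiled against the newly mapped table the next time it runs, as described under Using DEFINE Names With Programs.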
Using DEFINEs With PATHMON

If your application executes in a Pathway transaction processing environment, you specify DEFINE names for a server class by using the SET SERVER DEFINE command. SQL tables and views are referred to by using DEFINE names. When you specify DEFINEs within the server configuration, you are associating DEFINE names used by that server class with actual physical files.
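For example, when first configuring a server class, the association might be established with commands of this form (the "=" is the PATHCOM prompt; the server-class and file names are illustrative, following the pattern of the ALTER example in this section):

= SET SERVER DEFINE =EMP, CLASS MAP, FILE \SYS1.$VOL1.PERSNL.EMPLOYEE
= SET SERVER DEFINE =DEPT, CLASS MAP, FILE \SYS1.$VOL1.PERSNL.DEPT
= ADD SERVER SRV-SDB102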
= FREEZE SERVER SRV-SDB102
= STOP SERVER SRV-SDB102
= ALTER SRV-SDB102, DEFINE =EMP , CLASS MAP, &
  FILE \SYS1.$VOL2.TEST2.EMPLOYEE
= ALTER SRV-SDB102, DEFINE =DEPT, CLASS MAP, &
  FILE \SYS1.$VOL2.TEST2.DEPT
= ALTER SRV-SDB102, DEFINE =JOB , CLASS MAP, &
  FILE \SYS1.$VOL2.TEST2.

Manipulating Program Files
label. If the program file is not available, the invalidation is registered in the program's catalog and detected at run time.

Moving Programs

Moving a program is similar to moving other SQL objects but somewhat easier, because programs have no dependencies. You can use FUP DUP, SQLCI DUP, or the BACKUP and RESTORE utilities to move SQL programs stored in Guardian SQL files. To move SQL programs stored in OSS files, use the appropriate OSS utility.
Moving Programs Without Recompilation

These SQL compiler options can be useful in the management of SQL programs:

• The REGISTERONLY option directs the SQL compiler to register a previously SQL compiled program in a specific catalog without recompiling any SQL statements in the program. You can use this option to install a program in a catalog after you have SQL compiled and moved the program.
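As a sketch of that usage, the compiler might be invoked from TACL as shown below; the file and catalog names are illustrative, and the exact option spelling should be checked against the SQL/MP reference material for your RVU:

SQLCOMP /IN $VOL1.ADMIN.EMPPROG, OUT $S.#HOLD/ CATALOG $VOL1.ADMIN, REGISTERONLY ON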
Restrictions for the NOREGISTER Option

These static SQL statements, when compiled with the NOREGISTER ON option, must use execution-time name resolution, or the SQL compilation fails with SQL error 2109:

• DML statements: SELECT, INSERT, UPDATE, DELETE, and DECLARE CURSOR
• LOCK and UNLOCK statements
• GET VERSION and GET CATALOG statements

To prevent automatic SQL recompilation of these statements at SQL load time, specify the CHECK
The REGISTERONLY ON option does not generate usages in the USAGES table. If you require usages on the production system, you must explicitly recompile the program. If you do recompile the program, specify the COMPILE INOPERABLE PLANS option to improve performance.

5. Run the program with DEFINEs that point to the objects on the production system. The SQL executor uses the similarity check to compare the production tables with the development tables.
RESTORE Program and the SQLCOMPILE Option

If you restore a program by using the SQLCOMPILE option, the RESTORE program invokes recompilation of the program, using the SQLCOMP CHECK option specified during the last explicit SQL compilation. The SQLCOMPILE ON option of RESTORE can therefore restore a program and automatically recompile it.
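A sketch of restoring and recompiling a single program in one step, with illustrative tape and file names:

RESTORE $TAPE, $VOL1.ADMIN.EMPPROG, SQLCOMPILE ON, LISTALL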
Using SQLCI DUP

This example shows using the SQLCI DUP command to move a single SQL program stored in a Guardian file. After the move, the user requests explicit compilation with registration in catalog $VOL1.ADMIN:

>> DUP $VOL1.PERSNL.EMPPROG, $VOL1.ADMIN.*,
+> SAVEALL;
>> EXIT;
21> SQLCOMP /IN $VOL1.ADMIN.EMPPROG, OUT $S.#HOLD/
    CATALOG $VOL1.ADMIN

If you do not need to recompile your programs, omit the second step.
Example: Method 1 (Moving Programs With DUP)

This DUP command moves these programs to new volumes, namely $VOL1, $VOL2, and $VOL3. The MAP NAMES option correctly defines the source and target volumes for each program. After the DUP operation, the programs are SQL compiled to register them in the new catalog.

>> DUP ($OLD1.PROGS.*, $OLD2.PROGS.*, $OLD3.PROGS.
Step 1 queries the PROGRAMS table. The query produces a list of program names from the catalog $OLD1.ACCTG and writes the list in the log file $VOL1.PGMS.PROGLIST.

>> LOG $VOL1.PGMS.PROGLIST;
>> VOLUME $OLD1.ACCTG;
>> SELECT PROGRAMNAME FROM PROGRAMS;

This list appears in the log file:

PROGRAMNAME
------------------------------
\SYS1.$OLD1.PROGS.PROGA1
\SYS1.$OLD2.PROGS.PROGA2
\SYS1.$OLD3.PROGS.
11 Performing Recovery Operations

The success of recovery operations depends on the effectiveness and consistency of the plan developed for handling recovery situations. Before you begin any recovery operation, thoroughly evaluate which of the available tools (backup tapes, TMF online dumps, and so forth) are appropriate for the type of failure.
Restoring Individual SQL Objects

The RESTORE utility can replace SQL objects that have been backed up on tape. For this discussion, restoring SQL objects and databases means you are replacing existing objects in the same location. For a discussion about using the RESTORE utility to move SQL objects and databases, see Section 9, Moving a Database.
Restoring Tables

These are options of the RESTORE utility and how they affect restoring SQL objects:

• RESTORE allows the use of a qualified file-set list to identify the source objects to be restored.
• The PURGE option effectively performs an SQL DROP statement before restoring the table.
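A sketch combining these options, with illustrative tape and file names:

RESTORE $TAPE, $VOL1.PERSNL.EMPLOYEE, PURGE, LISTALL

Because PURGE drops the existing table first, dependent programs are invalidated, as described in the steps that follow.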
These inconsistencies can occur because RESTORE PARTONLY uses information from the backup tape to reconstruct the definition of the recovered partition.

Steps to Restore a Table

To restore a table, perform these steps. Restoring a table invalidates dependent programs.

1. Identify the table to be restored.
2. Determine the dependencies that will be affected by the drop operation by using the DISPLAY USE OF command.
3.
This example restores a table including the dependent shorthand view, EMPSVIEW. Shorthand views are restored only when they are explicitly named in the file-set list.

RESTORE $TAPE, ($VOL1.PERSNL.EMPLOYEE, $VOL2.ADMIN.EMPSVIEW), &
CATALOG $VOL1.PERSNL, OPEN, TAPEDATE, LISTALL

The next example restores a table with a protection view from a backup operation performed on another node and volume.
Restoring Views

Protection views cannot be explicitly restored; they are restored only with the underlying table. Shorthand views, however, can be restored only explicitly: when a table is restored, only those shorthand views explicitly identified in the file-set list are restored with it.
Restoring Databases

If you are restoring databases or file-set lists, or moving objects, you might not want to use the SQLCOMPILE ON option. The recompilations can fail when objects are restored alphabetically by volume, subvolume, and file name: if programs are restored before the tables, views, or indexes on which they depend, the recompilations are unsuccessful.

To restore a program, follow these steps:

1.
Completing the Planning Phase

You should prepare for restoring the database when the database is consistent and inactive. To complete planning for restoring the database, perform these steps.

1. Ensure that the database is not active.
2. If you are not planning a volume-mode RESTORE operation, described under Restoring a Database as a Planned Event on page 11-9, create an EDIT file containing SQL statements that will re-create your catalogs.
4. If you need a list of programs that will be invalidated by this procedure, use these commands to produce a list of programs in a log file:

>> LOG log-file;
>> FILEINFO *.*.* WHERE SQLPROGRAM;

Restoring a Database as a Planned Event

To restore a database as a planned event, follow these steps:

1. Check that SQL is running on the node. The system catalog must be present, and the TMF subsystem must be running.
6. Verify the database by using the VERIFY utility; following is an example of the VERIFY command:

>> VERIFY *.*.*;

7. Drop and re-create any invalid shorthand views. By using VERIFY in Step 6, you can identify any invalid shorthand views.
8. Perform new TMF online dumps of all catalogs and audited SQL objects.
Recovering Consistent Files by Resetting the BROKEN Flag

When a disk volume or node crashes, or a process terminates unexpectedly, files that are open at that time are left in a questionable state. In many cases, the files really are inconsistent because they were actively involved in interrupted database transactions. These files must be recovered with the volume recovery or file recovery methods.
Database Recovery After a Disk or Node (System) Failure

When a disk or node (system) fails, SQL catalog tables and database files on the disk or node are often left in a crash-open state. To recover the database, both the catalogs and the files must be recovered to a consistent state.
File Recovery

File recovery is usually the recovery method used when other methods have failed. File recovery can be used only if you consistently dump audit trails to tape and make online dumps. File recovery reconstructs an audited file from the initial starting point of the online dumps and applies all the changes to the file from the history of the audit trails. The file is recovered to the last consistent point in the database.
File Recovery With the TIME Option

By using the file recovery feature with the TIME option, you can resolve several different kinds of problems:

• If a database object is purged by accident, you can use the TIME option to recover the object's file as it existed just prior to the purge. This action effectively recovers the entire file but not the catalog definition of the object.
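A sketch of such a recovery from TMFCOM (the "~" is the TMFCOM prompt; the file name and timestamp are illustrative, and the exact TIME syntax should be verified against the TMF reference documentation for your RVU):

~ RECOVER FILES $VOL1.PERSNL.EMPLOYEE, TIME 15 MAR 2004, 14:30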
For detailed steps to recover accidentally dropped tables, see Recovering Tables on page 11-19.

Note. If you follow the first approach, and any dependent objects of the dropped table were registered in different catalogs, those catalogs must also be recovered along with the catalog in which the dropped table was registered.
Operations That Invalidate TMF Online Dumps

Table 11-1. SQL/MP Operations That Invalidate TMF Online Dumps

SQL Statement:     ALTER INDEX and ALTER TABLE
Option:            NO AUDIT
Effect:            Invalidates all online dumps of the affected object. The object does not have any TMF file recovery protection if it is not audited.
Recovery Strategy: If the AUDIT attribute is later turned back on, make new online dumps of all partitions of the index or table to retain TMF file recovery protection.
Responding to Accidental Loss of an Audited SQL/MP Object

The method for recovering an accidentally dropped SQL object depends on whether that object is a view, an index, or a table. Recovery of a single view or index is usually a straightforward operation. Recovery of a table, however, can be complex and difficult, particularly if the table has multiple dependent objects.
The disadvantage is that it requires a lot of work, because you must continually keep track of all interdependent objects and process many tapes during recovery.

° If you perform collective dumps of the SQL catalog and all its objects, you might gain a faster recovery, but you must continually update the SQL OBEY command files that you use to rebuild your SQL objects.

Note.
Recovering Tables

If the SQL object purged is a table, recovery can be much more complex than recovery involving only views and indexes. In some cases (for example, if the TMF subsystem is not configured for file recovery), recovery might not be possible at all. For this reason, follow the Recovery Precautions on page 11-17.

Caution. Unless performed with great care and precision, SQL table recovery involves risk of database corruption and loss of data integrity.
copy of a FILEINFO, DETAIL statement for partitioned files, together with a copy of an INVOKE statement, after the most recent change. After you have re-created the partitions and recovered them with TMF, the timestamps in the catalogs might be wrong for every partition. Because the redefinition timestamp is the same for all partitions, you can use a single UPDATE statement for each catalog involved.
*** could not be accessed.
*** ERROR from File System [11]: file not in directory or row not
*** in file, or the specified tape file is not present on a
*** labeled tape.
>>

These error messages confirm that the table has been removed.

2. Identify the EMPLOYEE table's dependent objects that might also have been dropped.
4. Re-create the EMPLOYEE table and all its dependent objects:

a. Check that the OBEY (script) command file you maintain for this purpose contains an SQLCI CREATE command for the EMPLOYEE table and all its dependent objects.
b. Issue the SQLCI OBEY command to execute the commands in the OBEY command file. (In this case, the OBEY command file is named DBCREATE.
+>   ,first_name
+>   )
+> CATALOG =persnl
+> ;
--- SQL operation complete.
.
.
.
>> CREATE INDEX =xempdept
+> ON =employee (
+>   deptnum
+> )
+> CATALOG =persnl
+> ;
--- SQL operation complete.
.
.
.
>> CREATE VIEW =mgrlist (
+>   first_name
+>   ,last_name
+>   ,department
+> )
+> AS SELECT
+>   first_name
+>   ,last_name
+>   ,deptname
+> FROM
+>   =dept
+>   ,=employee
+> WHERE
+>   dept.manager = employee.empnum
+> CATALOG =persnl
+> ;
--- SQL operation complete.
.
.
.
The resulting display is identical to the one shown under Recovery Example on page 11-20. The objects exist once again. However, the EMPLOYEE table does not yet contain any data.

6. When the objects were purged, TMF set the INVALID and RELEASED attributes of the online dumps for the objects to ON. Before you can recover the objects, you must first reset these attributes to OFF by using the TMFCOM ALTER DUMPS command:

~ ALTER DUMPS (
~ $DATA.PERSNL.
NonStop TMF on \PLUTO
*0203* RECOVER FILES [57] OnLineRestore Process #1 OnlineDumpMgmt:
*WARNING* TMF-9038: $DATA.PERSNL.EMPLIST: Unable to retrieve the
CreateTime and RedefTime for this object from disk. Using the
values from the online dump instead.

8. At the end of the recovery operation, use these EMS messages to determine which files have timestamps on disk that are inconsistent with their timestamps in the SQL catalog.
If no objects are identified as inconsistent, recovery is complete. Otherwise, proceed to Step 9 on page 11-26.

9. Use a licensed SQLCI2 utility to update the timestamps in the SQL catalog to match those in the file labels for all objects identified as inconsistent in Step 8. Be sure to use a log file to record the changes to be made. Also, to reduce error, HP recommends that you use fully qualified object names in the commands you enter.
Recovering Files to New Volumes, Subvolumes, or File-IDs

You can also re-create SQL objects under different file IDs and place them on different volumes or subvolumes than the source objects. Target objects can be created in a different SQL catalog, but the target object description in the catalog must match that in the source catalog. For example, in the case of a partitioned file, both the target and source files must have the same number of partitions.
must alter the dumps of the purged objects to reset the INVALID and RELEASED flags before attempting recovery.

Caution. Use the TOFIRSTPURGE, TIME, or TOMATPOSITION option in the RECOVER FILES command to avoid replaying the purge operation on a target object.
can recover the table only by following all the steps under Recovery Example on page 11-20.

• You cannot use this method to recover an SQL object, along with its catalog, to a new location.

Recovering Catalogs

There are several ways you can recover a catalog that becomes corrupt. Because the catalog tables are TMF-audited tables, you can use the TMF file recovery method to recover the catalogs to a point where the catalogs were consistent.
Purging Damaged Objects With the CLEANUP Utility

The CLEANUP utility, however, is specifically designed to purge a file, the file's catalog description, and any dependent objects when the SQL object is damaged.

Caution. The CLEANUP utility purges undamaged files in addition to damaged ones. The CLEANUP utility should never be used as a substitute for the SQL DROP statement or SQLCI PURGE command.
• You cannot specify the CLEANUP command within a user-defined TMF transaction. The CLEANUP utility protects the database, however, by automatically starting its own TMF transaction for each SQL object catalog description and file label operated upon. If the CLEANUP utility fails during execution, only the deletion of the last SQL object or partition is backed out.

Recovering From Peripheral Utility Program (PUP) Commands (D-series only)
3. Check that dependent objects residing on other volumes have also been recovered, and re-create objects as necessary. For example, a table resides on another volume, but a dependent index resides on the newly labeled disk volume. If the index was not recovered, re-create the index. You should also check that all view definitions are current and that all shorthand views were recovered. Re-create any views that were not recovered.

Caution.
2. Create an EDIT file containing CREATE CATALOG and ALTER TABLE statements to re-create the catalogs and reset the security of the catalog tables.
3. Back up the volume by using a file-mode BACKUP command.

Next, use PUP RENAME to rename the disk. You might first want to label the disk with PUP LABEL to clear all the old files. Finally, restore the files as follows:

1.
back into operation. There is no danger of inconsistency as long as the disk brought up is identical to the disk brought down. Before you make the volume inactive, you should always perform a PUP STOPOPENS and a PUP REFRESH on the volume to ensure valid file labels.

Caution.
SCF Commands (G-series only)

In G-series RVUs, PUP functions are performed by SCF. SCF is an interactive interface for configuring, controlling, and collecting information from a subsystem and its objects. SCF enables you to configure and reconfigure devices, processes, and some system variables while your NonStop S-series server is online.
Managing Shadow Disk Labels

Shadow labels are the internal labels created by the disk process when SQL objects are dropped within a transaction. Normally, these labels are deleted soon after the transaction completes. In some situations, however, especially during abnormal processing or a system crash, these labels are not deleted until file recovery is performed.
considered for the purge. If you omit the SHADOWSONLY option, no shadow labels are affected; only files in the file-set list are purged. When you use the SHADOWSONLY option, follow these guidelines:

• You must be logged on as the super ID. If you are not the super ID user, a warning is issued and nothing is purged.
• When you run the PURGE or CLEANUP command to remove shadow labels, no user-defined TMF transaction should be active.
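A sketch of such a purge, with an illustrative file set (verify the option placement against the SQLCI reference for your RVU):

>> PURGE $VOL1.TEMP.*, SHADOWSONLY;

Only shadow labels matching the file set are purged; ordinary files in the set are left untouched.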
HP NonStop SQL/MP Installation and Management Guide — 523353-004
12 Managing a Distributed Database

Databases can be distributed over disk volumes on a single system (node) or in a network of nodes. Likewise, application programs can be distributed across processors in a single node or in a network. When managing a database distributed across volumes or nodes, use the same SQL statements you would use with a nondistributed database.
Maintaining Local Autonomy

Local autonomy implies that a DML request, initiated either interactively or from an application program, can access local data regardless of the availability of remote dependent objects or other local dependent objects, provided the local data can satisfy the request.
Naming Nodes

An SQL/MP system requires a node name. An SQL/MP system in a network requires a node number in addition to a node name. After an SQL/MP database is created using a node name and node number, you should minimize changes to the name and number. The node name is expanded in the catalog entries, and the node number is entered in the file labels throughout the database.
Using Catalogs in a Network

Each node must have a system catalog and catalogs for the objects located on that node. A catalog can describe only objects that reside on the same node. For example, you cannot describe a table on node \SYSA in a catalog that resides on node \SYSB. For distributed and partitioned tables or indexes, you must define a catalog to describe the partition resident on that node.
Maintaining Local Autonomy in a Network

In the context of a network-distributed database, local autonomy ensures that a user can access local data regardless of the availability of remote dependent objects. For example, if a table is partitioned with a portion on \SYS1 and another portion on \SYS2, a local user of \SYS1 can access the local partition of the table when \SYS2 is not available.
• If a query uses Index A as the access path but \NODE3 is down, the query is recompiled to attempt to access the data by using Index B or by using the primary key residing on \NODE1.
• If an INSERT statement tries to insert a row into Table X with values for all the columns, the insert fails if \NODE3 is down, because Index A cannot be updated.
Creating a Distributed Database

Remote Node Availability

Unavailable remote nodes can prevent programs that require data from those nodes from obtaining needed data. You can, however, distribute data in an SQL/MP database so that local data is stored locally and is available locally regardless of remote node availability.
+> CATALOG \LOCAL.$VOL1.SALES;
--- SQL operation complete.

This example creates a partitioned table with partitions on both a local node and a remote node:

>> CREATE TABLE \LOCAL.$VOL1.INVENT.PARTLOC
+>   (LOC_CODE     CHARACTER (3) NO DEFAULT,
+>    PARTNUM      NUMERIC (4) UNSIGNED NO DEFAULT,
+>    QTY_ON_HAND  NUMERIC (7) NO DEFAULT,
+>    PRIMARY KEY (LOC_CODE, PARTNUM))
+> CATALOG \LOCAL.$VOL1.INVENT
+> ORGANIZATION KEY SEQUENCED
+> PARTITION (\REMOTE1.$VOL2.
Dropping Distributed Objects

When you DROP or PURGE a distributed table, all indexes, partitions, and views must be accessible, in addition to the catalogs that describe these objects. If you do not have the authority to drop a shorthand view, the operation only invalidates the view. When you specify dropping any partition of a table or index, the operation drops the entire table or index.
Supporting Replicated Data Through Indexes

This example creates a local index on a remote table:

>> CREATE TABLE \REMOTE.$VOL1.SALES.
Using Remote Servers

When you use a network-distributed database, you can often control whether remote data is updated directly by a local server or indirectly by a remote server. Any local program can update or retrieve data directly by using the remote I/O capabilities of the Guardian file system and disk process.
Design Examples

These examples suggest the potential benefits of using the _SQL_CMP_CPUS DEFINE:

• A development-and-test environment and a production environment share a single system. By limiting the development and test activity to certain processors, you enhance the performance of the production queries.
• You have an existing OLTP environment, possibly supported by batch applications, and an expanding DSS environment.
You can use this feature together with the SMF product to manage resources in a distributed environment. For example, to set up a system in which a specified subset of processors performs both parallel query operations and data access for a particular application:

1. Using SMF, create a storage pool and assign it physical volumes that are all primaried to a specified set of processors.
2.
Third hex character: CPUs 8-11
Fourth hex character: CPUs 12-15

SQL translates each hex character into its binary counterpart. Each bit represents one processor. If a bit is on, SQL uses the corresponding processor. If a bit is off, SQL does not use the corresponding processor.
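As an illustration of the mask arithmetic: F hex is 1111 binary (all four processors in that character's group selected) and 0 hex is 0000 (none selected), so a mask of F0F0 selects CPUs 0 through 3 and 8 through 11 regardless of how bits map to CPUs within each group. One plausible form of the DEFINE, assuming a MAP DEFINE whose FILE attribute carries the node name and the mask (verify the exact format against the SQL/MP reference for your RVU), is:

ADD DEFINE =_SQL_CMP_CPUS, CLASS MAP, FILE \SYS1.F0F0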
In the preceding example, SYS1 specifies the system for which you are controlling processor usage. If you do not include a system name, the DEFINE applies to the current system.
quiesced before the communication loss for planned downtime. Because of local autonomy, transactions on other nodes continue and might not be adversely affected, but you might need to use one of the TMF interfaces (such as TMFCOM) to back out or commit the transactions manually. Transactions requiring data on the unavailable node return errors. After communication is restored, transactions can proceed normally.
Managing a Distributed Database Managing Mixed Versions of SQL/MP Managing Mixed Versions of SQL/MP Sometimes a network might be required to run in a SQL/MP mixed-version environment.
13 Measuring Performance

During the life of an SQL application, you might need to measure the performance of all, or part, of the application. Several NonStop software products can provide statistical information about performance. Collecting these statistics requires an in-depth understanding of the system, the layout of the database tables, and the use of the application programs. You usually gather statistics under one of these conditions:

• A benchmark of performance.
FILEINFO Utility

The SQLCI FILEINFO utility displays the physical characteristics of SQL tables, indexes, views, collations, and programs. FILEINFO also displays information about Enscribe files. You typically use FILEINFO to display the file-label information of files. For performance statistics, use the FILEINFO utility to determine the index levels and the extent and data-block use of a table or index.
SET SESSION STATISTICS and DISPLAY STATISTICS Commands

SQLCI provides the STATISTICS option of the SET SESSION command; this option displays the statistics after each DDL, DML, or DCL statement executed in the session. You can also use the DISPLAY STATISTICS command to get statistics on a single statement. The DISPLAY STATISTICS command displays statistics for the immediately preceding DDL, DML, or DCL statement.
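A sketch of both commands in a session (the table name and query are illustrative):

>> SET SESSION STATISTICS ON;
>> SELECT COUNT (*) FROM $VOL1.PERSNL.EMPLOYEE;
>> DISPLAY STATISTICS;

With SET SESSION STATISTICS ON, a statistics display like the one below follows every statement; DISPLAY STATISTICS reports on only the immediately preceding statement.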
Estimated Cost  Start Time                End Time                  Elapsed Time     SQL Execution Time
9               89/04/01 13:07:12.822479  89/04/01 13:07:18.865150  00:00:06.042671  00:00:00.392796

                            Records    Records  Disk
Table Name                  Accessed   Used     Reads
\a.$b.c.d                        123        22      3
\w.$x.y.z                    9987231         1  99999
\sanfran.$mamoth.longestt.
SQL Statistics Area (SQLSA)

SQLSA statistics are not cumulative. For example, while a CURSOR is open, the statistics reported apply only to each specific SQL statement issued, such as the OPEN statement and each individual FETCH statement, not to the entire set of operations spanning the use of the cursor from open to close.

Measure Performance Measurement Tool
Statistics and Reports for SQL/MP

Figure 13-1. Measure Entities and Program Structures

[Figure 13-1 maps Measure entities (SQLPROC, SQLSTMT, FILE) to program structures: a process and its procedures, each procedure's SQL statements, and the tables and files (T1, T2, T3) those statements access.]
SQLSTMT Statistics

The SQLSTMT report provides information for specific statements of an SQL process. SQLSTMT entities gather statistics for all statements of a process selected for measurement; there is one SQLSTMT entity for each statement. The SQLSTMT report identifies the SQLSTMT section name for each statement.
For a C, Pascal, or TAL program, no separate preprocessing occurs, and no preprocessor listing is generated. The source listing produced by the compiler, however, includes comments that show the value of SLT-INDEX. By using this value, you can find the corresponding information in the SQLSTMT report in much the same way as you can with a COBOL program.

FILE Statistics

You can monitor database files with the FILE entity.
be required on distributed nodes or that compiler or catalog manager timeouts have occurred. The counter SQL-NEWPROCESS-TIME contains the amount of time spent waiting for the call to NEWPROCESS to complete and is included in the total startup time.

• OPENS contains the number of calls to open tables that were required by this program. The elapsed time spent executing the opens is stored in OPEN-TIME.
index to reduce the number of records searched before returning records that satisfy the query.

• SORTS stores the number of times the external sort process was invoked to return the data in the desired order. A value in this field indicates that the data is not being retrieved in the order supported by a key (primary key or index). The amount of time spent sorting is stored in the ELAPSED-SORT-TIME counter.
• LOCK-WAITS stores the number of times a call to the disk process waited on locked data.
• TIMEOUTS stores the number of timeouts issued on the file. If the number is greater than zero, the file's timeout value might be set too low for the workload.
• ESCALATIONS stores the number of times a record lock was escalated to a file (table) lock.
Measuring Performance SQL/MP Measurement Models HP NonStop SQL/MP Installation and Management Guide —523353-004 13 -12
14. Enhancing Performance

The initial step in achieving maximum performance is providing sufficient hardware to handle the throughput and size of the application database. In addition to hardware, many factors affect the performance of a database and application. Some factors are system dependent; others are application dependent. The factors discussed in this section are specific performance issues that can arise in an SQL/MP environment once it is in production use.
Minimizing Contention

• Moving a partition

° When you use the WITH SHARED ACCESS option, ALTER TABLE PARTONLY MOVE and ALTER INDEX PARTONLY MOVE allow concurrent access by DML statements throughout the entire operation except for its short commit phase. To maximize concurrent access while moving a partition, specify the WITH SHARED ACCESS option in your ALTER TABLE statement.

Note. The WITH SHARED ACCESS option does not support two-way splits.
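As an illustrative sketch only (object names are hypothetical, and the exact clause syntax appears in the SQL/MP Reference Manual), a partition move with shared access looks like this:

>> ALTER TABLE $VOL1.SALES.ORDERS
+> PARTONLY MOVE TO $VOL2.SALES.ORDERS
+> WITH SHARED ACCESS;

DML statements can continue to read and write the partition while the move proceeds, except during the short commit phase.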
During a CREATE INDEX...WITH SHARED ACCESS operation, SQL sets the AUDITCOMPRESS option to OFF for the base table. Therefore, during the CREATE INDEX operation the audit trail grows at a faster rate than it does when AUDITCOMPRESS is ON (the default). More audit trail space is needed when the AUDITCOMPRESS option is OFF; the amount depends on the intensity of write activity during the CREATE INDEX operation.
Options Available for WITH SHARED ACCESS

The WITH SHARED ACCESS option supports these options:

• REPORT starts or stops EMS reporting for the operation. Events can be sent to $0 or to an alternate collector.
• NAME specifies an SQL identifier as the name of the operation so that you can identify EMS messages for the operation or identify the operation in a CONTINUE statement.
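A hedged sketch (names hypothetical; see the SQL/MP Reference Manual for the exact placement of these option clauses) of naming an operation so that its EMS events can be identified:

>> CREATE INDEX XORDCUST ON $VOL.SALES.ORDERS (CUSTNUM)
+> WITH SHARED ACCESS
+> NAME ADDXORD, REPORT ON;

The NAME value (ADDXORD here) could then identify the operation in a CONTINUE statement if the operation is interrupted.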
DELETE, INSERT, and UPDATE access during the operation. The time difference depends largely on the number and length of transactions on the nodes that contain source and target objects for the operation, particularly the number and length of transactions that directly involve source objects for the operation.
Other Operational Considerations

Because the locks used in Step 1 and Step 3 of the operation are exclusive, they have no special priority over other locks that can also be issued on the objects. Therefore, to enable the exclusive locks required by these functions, you might need to manage application activity as follows:

1. During Step 1, do not compile programs that would require the catalogs involved for update or that refer to the affected objects.
2.
Keeping Statistics Current

SQL/MP provides the UPDATE STATISTICS utility to collect and save statistics on columns and tables. The SQL compiler uses these statistics to help determine the cost of access plans. When the statistics for a table are current, you increase the likelihood that the optimizer chooses an efficient plan.
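For example, to collect statistics on a (hypothetical) table:

>> UPDATE STATISTICS FOR TABLE $VOL.SALES.ORDERS;

Because UPDATE STATISTICS normally invalidates dependent programs, schedule it for periods of low activity or consider the NORECOMPILE option.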
Analyzing the Possible Impact of Running UPDATE STATISTICS

• Run UPDATE STATISTICS after creating a new index for a table; otherwise, SQL returns a warning for subsequent operations on the table.
• Do not run UPDATE STATISTICS when the UNRECLAIMED FREESPACE or INCOMPLETE SQLDDL OPERATION attribute is set. The results might be incorrect. For more information, see Altering Partition Attributes on page 7-19.
table does not have a very large number of columns, you should probably use the ALL option whenever you update the statistics.

• If you use the NORECOMPILE option of UPDATE STATISTICS, the operation does not invalidate the dependent programs. If you want to take advantage of the new statistics, however, you must explicitly SQL compile the dependent programs.
Testing UPDATE STATISTICS

1. Prepare a sample query from your application. (Choose a query that the application uses often.) This example shows the displayed statistics:

>> SELECT ...;
>> DISPLAY STATISTICS;

Estimated Cost       68
Start Time           89/03/10 14:26:41.494150
End Time             89/03/10 14:27:11.123179
Elapsed Time         00:00:29.629029
SQL Execution Time   00:00:00.686883

Table Name                    Records Accessed
\SYS1.$VOL.SALES.ORDERS
\SYS1.$VOL.SALES.
Enhancing Performance >> >> DISPLAY STATISTICS; ROLLBACK WORK; Running UPDATE STATISTICS <--Rolls back the TMF transaction Note. There is one problem with the preceding scenario: you should not enter the UPDATE STATISTICS statement within a user-defined TMF transaction. If the table is large, the userdefined transaction might cause an error on the TMF audit trails.
If you specify the PROBABILISTIC option, SQL ignores the EXACT and SAMPLE n BLOCKS options. The PROBABILISTIC option tells SQL to use an algorithm for computing statistics that gives more accurate results than the algorithm used in earlier product version updates (PVUs) of SQL/MP. Moreover, with the PROBABILISTIC algorithm, SQL computes statistics in parallel on partitioned tables.
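A sketch of specifying the option (table name hypothetical; confirm the exact option placement in the SQL/MP Reference Manual):

>> UPDATE STATISTICS FOR TABLE $VOL.SALES.ORDERS PROBABILISTIC;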
Obtaining Statistics

• Statistics on the columns of a table, as follows:

>> VOLUME $VOL.SALES;
>> SELECT TABLENAME, COLNAME, UNIQUEENTRYCOUNT,
+> SECONDHIGHVALUE, SECONDLOWVALUE
+> FROM $VOL.SALES.COLUMNS
+> WHERE TABLENAME LIKE "%$VOL.SALES.ORDERS%";

• Statistics about the file information of the table, as follows:

>> VOLUME $VOL.SALES;
>> SELECT B.TABLENAME, F.EOF,
+> F.NONEMPTYBLOCKCOUNT, B.ROWCOUNT
+> FROM $VOL.SALES.BASETABS B, $VOL.SALES.FILES F
+> WHERE B.TABLENAME = "\SYS.$VOL.
Altering Statistics

This example obtains statistics on the columns of the table:

>> SELECT TABLENAME, COLNAME, UNIQUEENTRYCOUNT,
+> SECONDHIGHVALUE, SECONDLOWVALUE
+> FROM $VOL.PERSNL.COLUMNS
+> WHERE TABLENAME = "\PHOENIX.$VOL.PERSNL.EMPLOYEE";

TABLENAME                          COLNAME
---------------------------------- -------------------------
UNIQUEENTRYCOUNT    SECONDHIGHVALUE      SECONDLOWVALUE
------------------- -------------------- -------------------
\PHOENIX.$VOL.PERSNL.
+> NONEMPTYBLOCKCOUNT = 2
+> WHERE TABLENAME = "\PHOENIX.$VOL.PERSNL.EMPLOYEE";

This example updates the statistics for the DELIV_DATE column. To update column statistics, you must update each column individually.

>> UPDATE $VOL.PERSNL.COLUMNS
+> SET UNIQUEENTRYCOUNT = 11,
+> SECONDHIGHVALUE = 000000000000004000,
+> SECONDLOWVALUE = 000000000000001500
+> WHERE TABLENAME = "\PHOENIX.$VOL.PERSNL.
>> DROP TABLE $vol.data.table1;
>> DROP TABLE $vol2.data.table2;
.
.
>> DROP CATALOG $vol.cat1;

Note. To drop an SQL program stored in an OSS file, use the corresponding OSS utility. For more information, see the Open System Services Shell and Utilities Manual.

You can use the CLEANUP utility to purge SQL objects (except SQL programs stored in OSS files) and the catalogs in which they are described.
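As a hedged sketch (subvolume names hypothetical), CLEANUP takes a file set, so you could purge the test objects and the catalog tables that describe them like this; see the SQL/MP Reference Manual for CLEANUP options and cautions before using it in place of DROP:

>> CLEANUP $VOL.DATA.*;
>> CLEANUP $VOL.CAT1.*;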
For more information about indexes, see Determining When to Use Indexes on page 3-16.

Maximizing Parallel Index Maintenance

Indexes are automatically updated whenever a row is inserted into the underlying table and whenever any key column of the index is changed. Multiple indexes can be updated in parallel. The file system accomplishes parallel index maintenance by issuing asynchronous I/O requests to each disk process serving the indexes.
Maximizing Disk Process Prefetch Capabilities

SQL can enhance performance by reading blocks of data into cache asynchronously before they are needed. This disk process prefetch operation works best when you request long sequential scans through data or when your access plan has a low selectivity value (as described in the SQL/MP Query Guide).
SQL can enable file system double buffering only when the disk process uses virtual sequential block buffering (VSBB) and the SELECT statement specifies browse access. For more information about VSBB, see the SQL/MP Query Guide. SQL automatically uses file system double buffering when this feature will enhance the performance of the query.
This example changes the threshold from the default value of 70 percent memory utilization to 50 percent utilization by setting this DEFINE value:

> ADD DEFINE =_SQL_EXE_DOUBLE_SHUTOFF, CLASS MAP, FILE X5

This example uses the file name X5. The letter X has no significance; it is used only to satisfy the syntax of the FILE parameter. The number 5 specifies a 50 percent PFS utilization threshold.
Additional DEFINEs for Managing Double Buffering

Two other DEFINEs allow you to manage other aspects of file system double buffering:

• The DEFINE =_SQL_CMP_DOUBLE_SBB_OFF disables file system double buffering for any query that is SQL compiled while this DEFINE is in effect.
• The DEFINE =_SQL_CMP_DOUBLE_SBB_ON enables file system double buffering for scanning the inner table in a nested join or key-sequenced merge join.
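By analogy with the =_SQL_EXE_DOUBLE_SHUTOFF example earlier (this form is an assumption, not confirmed syntax; the FILE value is a placeholder), such a DEFINE would be added from TACL as:

> ADD DEFINE =_SQL_CMP_DOUBLE_SBB_OFF, CLASS MAP, FILE X

Queries SQL compiled while the DEFINE is in effect would then run without file system double buffering.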
Using Generic Locks

Figure 14-1. Generic Locking Example

[Figure 14-1 shows the CUSTOMER table, whose 20-byte NAME column is the primary key, and the ACCOUNT table, whose concatenated primary key is the 20-byte NAME column plus the 8-byte ACCOUNT_NO column; the lock length is set to 20 bytes for generic locking.]

If the lock length of the ACCOUNT table is defined to be the length of the NAME column, SQL/MP acquires locks for an application by using a single lock to lock all rows with the same value for NAME.
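A hedged sketch of defining such a table (column layout taken from Figure 14-1; the object names are hypothetical, and the LOCKLENGTH attribute syntax should be confirmed in the SQL/MP Reference Manual):

>> CREATE TABLE $VOL.BANK.ACCOUNT
+> ( NAME        CHAR(20) NOT NULL,
+>   ACCOUNT_NO  CHAR(8)  NOT NULL,
+>   BALANCE     NUMERIC(18,2),
+>   PRIMARY KEY (NAME, ACCOUNT_NO) )
+> LOCKLENGTH 20;

With a 20-byte lock length, one generic lock covers every account row for a given NAME value.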
• Generic locks provide a good solution for the application problem in which a table lock is not acceptable, but the application needs so many locks in a transaction that the number might exceed the maximum allowed by the system. This problem occurs because the disk process allows a maximum number of locks per process on a partition.
° Constraints make such changes a simple, online process. If you add a constraint, the system immediately applies it to all subsequent transactions. The constraint creation process also checks the existing table to ensure that all existing rows conform to the new constraint.
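For example, this statement (object and constraint names hypothetical) creates a constraint that all subsequent inserts and updates must satisfy, after first verifying that every existing row conforms:

>> CREATE CONSTRAINT POSITIVE_BAL ON $VOL.BANK.ACCOUNT
+> CHECK BALANCE >= 0;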
Adding and Dropping Partitions

To improve performance, consider partitioning a table or index so that it can span multiple volumes or multiple nodes. For more information, see Adding Partitions to Tables and Indexes on page 7-7. As the number of rows in the table or index increases, consider redistributing rows across partitions to balance the distribution of rows.
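A hedged sketch of adding a partition (names, key value, and extent sizes hypothetical; see Adding Partitions to Tables and Indexes and the SQL/MP Reference Manual for the full syntax):

>> ALTER TABLE $VOL1.SALES.ORDERS
+> ADD PARTITION $VOL2.SALES.ORDERS
+> FIRST KEY "N"
+> EXTENT (100, 100);

Rows whose primary-key values fall at or above the FIRST KEY value then belong to the new partition.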
For detailed information about the CONTROL TABLE statement, including the SEQUENTIAL BLOCKSPLIT option, see the SQL/MP Reference Manual or SQLCI online help.

Supporting Sort Operations

For certain operations, SQL/MP requires the services of the FastSort sort/merge program. The SQL/MP software requests these services automatically, without user interaction.
You can direct FastSort to use a specific set of volumes for its work. Use the SCRATCH attribute to specify an initial scratch volume. To include or exclude volumes from the pool of volumes FastSort uses once the initial scratch volume is full, use the SCRATCHON and NOSCRATCHON attributes, respectively. For more information about scratch volumes, see the FastSort Manual.
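As a sketch (volume names hypothetical), the scratch-volume attributes are supplied through a CLASS SORT DEFINE such as =_SORT_DEFAULTS:

> ADD DEFINE =_SORT_DEFAULTS, CLASS SORT, SCRATCH $SCR1, SCRATCHON ($SCR2, $SCR3)

See the FastSort Manual for the complete set of CLASS SORT DEFINE attributes.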
Planning for Temporary File Requirements

When performing joins and various other operations, SQL/MP creates and uses temporary files. These files exist for the duration of an operation and provide storage for intermediate steps. When the operation is complete, the temporary files are deleted. For both serial and parallel operations, the SQL compiler determines the size and location of the necessary temporary files.
A. Licensed SQLCI2 Process

A licensed SQLCI2 process (licensed program) can perform privileged operations, such as deleting or updating rows in catalog tables. Normally, only the super ID can perform these operations because of the potential risk to the database. The super ID must explicitly license program files before beginning.

Caution. These operations can be extremely dangerous to the consistency of the database and the data dictionary.
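Licensing is done with FUP; in this sketch, $SYSTEM.SYSTEM.SQLCI2L is the licensed copy of the SQLCI2 program referred to later in this appendix:

40> FUP LICENSE $SYSTEM.SYSTEM.SQLCI2L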
Running SQLCI2 as another user

To use the licensed SQLCI2 process, create a DEFINE that enables SQLCI to use the licensed SQLCI2 version rather than the normal SQLCI2 version. This protects you from making unintended changes to your system when you are logged on as super.super. You must log on as the super ID and create the =_SQL_CI2_sys DEFINE, pointing to the licensed version. Use this command:

38> ADD DEFINE =_SQL_CI2_sys, CLASS MAP, FILE $SYSTEM.SYSTEM.
Revoking an SQLCI2 License

To remove the licensed program, you can either revoke the license or purge the program. Either of these commands performs this operation:

49> FUP REVOKE $SYSTEM.SYSTEM.SQLCI2L
49> PURGE $SYSTEM.SYSTEM.
B. Removing SQL/MP From a Node

If you want to install a version of the operating system that does not support SQL/MP, you must remove SQL/MP and all SQL objects from your system (node). A version of the operating system that does not support SQL/MP does not recognize the SQL file structure, so you must remove all references to SQL objects before installing that version. To remove OSS programs, use OSS utilities. For more information, see the Open System Services Shell and Utilities Reference Manual.
Using the CLEANUP Utility to Remove SQL Objects

3. For SQL tables containing data you need to preserve for use after removing SQL, create empty Enscribe files. Then use the SQLCI COPY or LOAD utility to copy the tables into those Enscribe files.

4. Use the CLEANUP utility to purge the SQL objects and the catalogs in which they are described. Ensure that you do not prematurely apply CLEANUP to the system catalog in which the $SYSTEM.SYSTEM.SQLCI2 program is registered.
7. If any shadow labels remain on the node, you must remove them before removing SQL. At the SQLCI prompt, enter:

>> CLEANUP *.*.*, SHADOWSONLY;

If any SQL objects other than shadow labels remain on the node, specify the name of each object in a CLEANUP command. Repeat Step 6 on page B-2 until DSAP does not list any SQL objects except the system catalog and $SYSTEM.SYSTEM.SQLCI2.

Note.
C. Format 2 Partitions

Format 2 enabling allows SQL/MP to support partitions larger than 2 gigabytes (GB) and up to 1 terabyte (TB). The size of a partition is limited by the size of the single disk on which it resides. This appendix discusses planning, migration, fallback, interoperability, and third-party considerations for using Format 2 partitions.
Planning for SQL Format 2-Enabled Tables and Format 2 Partitions

Migration, in contrast, enables you to modify tables to Format 2 a partition at a time. Most of the steps involved in this process can be done while your applications continue to access and modify data in the tables. Whether you attempt conversion or migration of your tables, the required steps have many common elements. Planning is an important part of the process.
Figure C-2 illustrates the recommended approach for migration and fallback planning if your system is running an RVU prior to G06.03.

Figure C-2. Migration and Fallback Planning, G06.03 and Earlier RVUs

[Figure C-2 shows the migration path and the fallback path between pre-G06.03 RVUs, the G06.03 through G06.12 RVUs plus fallback SPRs, and the baseline (with Format 2-enabled tables when falling back).]
Figure C-3 illustrates an alternate approach for migration and fallback planning if your system is running an RVU prior to G06.03. In general, this approach involves slightly more risk than the approach illustrated in Figure C-2 on page C-3, unless all Format 2-enabled tables are cleaned up before falling back. For a description of the required cleanup steps, see Fallback Scenario 2 on page C-8.

Figure C-3.
General Planning Considerations

• G06.13 RVU baseline

Establishing a baseline is an important step prior to migration. HP recommends installing the G06.13 RVU and running applications in production prior to using any Format 2-enabled tables. Establishing the baseline is a precaution in case issues arise during migration that require temporarily suspending or partially undoing migration activities.
• Partition management

Use the partition management functions to add new Format 2 partitions and to populate those new partitions with data that currently resides in previously existing Format 1 partitions. You must move partitions serially within a single table and all of its indexes.
establishes a point in time with which to compare performance prior to migration and enables you to manage the performance impact of future growth separately from migration.

• Network environment

Users with multiple Expand nodes must upgrade nodes that contain tables and views that span network nodes to the G06.13 RVU and upgrade catalogs to version 350.
Fallback Considerations

When you alter a partition (table or index) to Format 2, its label is changed to a new format with expanded fields to allow for the larger attribute values that are possible with Format 2 partitions.

• When all changes described in the previous item are made, you should take new TMF online dumps or backups of the tables involved to establish their TMF file recovery or backup protection.
Fallback Scenario 3

1. Convert all Format 2 partitions back to Format 1, with or without SHARED ACCESS, and then convert all Format 2-enabled tables back to Format 1 enabled. Alternatively, if some Format 2-enabled tables are not important, or if their data is first reloaded back into Format 1-enabled tables, you can instead use the SQLCI DROP command to eliminate them.

Note.
• You can fall back only to a supported fallback RVU (G06.03 through G06.12) to which all the appropriate fallback SPRs have been applied.

Note. If you are not running a supported fallback RVU and you want to ensure that you can fall back to the RVU from which you migrated (in the scenario where one or more Format 2-enabled tables remain on your system), you must follow a two-step migration plan:

1.
continue with the next file, but without indicating that they have been excluded. In both cases, the dump otherwise completes successfully.

° The TMFCOM RECOVER FILES command will not recover any partitions of Format 2-enabled tables (both Format 1 and Format 2 partitions) or their associated index partitions and views from online dumps.
° Fallback versions of TMFARLIB contain fallback support. If you choose to use this approach, you should rebind before falling back.

Note. TMFARLIB fallback support involves ignoring, and not returning to the caller, audit records generated for DML changes to Format 2 partitions (changes made to data records or blocks). Audit records for DDL changes to Format 2 partitions (changes made to labels) are still returned.
However, there are many different utility commands and many file-list and wild-card possibilities. Some of them could have unanticipated interoperability considerations. On systems running earlier RVUs, you can restore previously supported types of files from backup tapes that also contain Format 2-enabled tables or their associated indexes or views.
° Fallback versions of TMFARLIB contain fallback support. You could instead provide fallback versions of these programs that are bound with this version of TMFARLIB.

Note. TMFARLIB fallback support involves ignoring, and not returning to the caller, audit records generated for DML changes to Format 2 partitions (changes made to data records or blocks). Audit records for DDL changes to Format 2 partitions (changes made to labels) are still returned.
Index A Accelerator, effect on SQL validity 10-1 Access improvement defining numeric columns 5-21 defining VARCHAR columns 5-21 Access paths alternate 5-42 distributed database 12-5 distributed systems 12-5 EXPLAIN utility 10-12 primary keys 3-2, 5-42 statistics issues 14-8 unavailable 10-13 Access plan See Query execution plan Active dictionary See Data dictionary ADD COLUMN clause 5-19, 7-10 ADD CONSTRAINT statement, program invalidation 10-16 ADD DEFINE command 10-34 ADD PARTITION option 7-7 ALLOCATE fil
B Index Applications (continued) example 2-1 Archiving SQL objects 4-17 ASCII character set 5-20 ASCII collating sequence 5-20, 5-44 Attributes altering catalog 7-15 altering table 7-16 altering view 7-18 FILEINFO display 6-10 specifying column 5-28 AUDIT attribute altering index attributes 7-19 protection views 4-15 similarity check rules 10-28 Audit dump 4-11 Audit flag 6-4 Audit trails See also TMF subsystem AUDITCOMPRESS attribute 4-15 configuring 4-10, 4-13, 4-15 constraint creation 5-53 COPY utility
C Index Backing up (continued) volume-mode 4-24 BACKUP command AUDITED option 4-18, 11-8 backing up volumes 4-19 FROM CATALOG option 4-20 INDEXES EXPLICIT option 4-22 INDEXES IMPLICIT option 4-22 PARTONLY option 4-21 BACKUP utility backing up SQL programs 9-34, 10-42 description 9-4 file, local and remote 9-6 moving tables 9-17, 9-19 Base tables creating 5-10 creating views 5-38 description 1-2 using exclusively in database 3-13 BASETABS catalog table 6-4, 6-6 BEGIN WORK statement 4-16, 14-10 Benchmark, p
C Index Catalogs (continued) resolution 10-13 node name change 9-32 number 5-5, 5-6 PUP LABEL issues 11-31 PUP RENAME issues 11-32 querying 6-1 recovering 11-12, 11-29 registering programs 10-7 RESETBROKEN option 11-11 RESTORE utility 11-2 restoring 11-2 single 5-5 SQL compilation 10-11 statistics 14-7 structure 5-3 table relationships 5-3 tables 5-2 TMF protection 11-2 upgrading 2-14 version information 6-15 CATALOGS catalog table description initializing 2-4 location 2-5 requirements 5-4 displaying cont
C Index Columns (continued) deleting 7-33 displaying information 6-7 indexed benefits 3-18 defining 3-20 integrity checking 5-47 invalidating programs by adding 10-2 KEYTAG 3-19 masking 3-15 names 5-19 null value 5-26 performance issues 5-20 renaming 3-14, 5-38 specifying attributes 5-28 specifying indexes 3-20, 3-21 values restriction 3-18 COLUMNS catalog table, displaying information 6-7 Comments adding 7-13 altering 7-28 changing 7-28 clearing 7-28, 7-35 displaying 6-7 dropping 7-35 similarity check ru
D Index CREATE CATALOG statement catalog description 1-3 description 7-2 PHYSVOL option 5-7 securing catalog tables 5-7 CREATE COLLATION statement 7-13 CREATE CONSTRAINT statement 5-52/5-55, 7-12/7-14 CREATE INDEX statement adding an index 7-4 description 5-42/5-47 invalidating programs 10-16 PHYSVOL option 5-45 Sort operations 14-26 uses 3-20 CREATE SYSTEM CATALOG command 2-4, 2-5 CREATE TABLE statement adding tables 7-3 altering columns, example 7-26 DEFINEs 10-34 description 5-10 guidelines 5-11/5-19 P
D Index Database (continued) adding objects 7-1 administrator See DBA altering 7-13 changing 7-1, 7-13 complete backup 11-8 consistency FILCHECK utility 6-14 TMF 4-10 VERIFY utility 6-12 crash-open files 11-12 creating 5-1 dependencies 6-1 distributed See Distributed database dropping objects 7-28 emulation 14-12 enhancing performance 14-1 integrating new columns 7-11 layout 3-13 management active dictionary 1-3 environment 1-1 guidelines 7-1 tasks 1-10 tools 1-10 moving 9-1, 9-24/9-25 online reorganizati
D Index DEFINEs (continued) distributed databases 12-1 duration 10-31 examples, with SQLCI 10-34 identifiers 10-30 INVOKE statement 10-33 name resolution 10-13 names locally distributed databases 12-1 network distributed databases 12-3 run-time environment 10-33 naming rules 10-30 network distributed databases 12-3 OBEY command files 10-31 PATHMON 10-37 PATHMON environment, set duration 10-38 PATHMON, altering 10-37 programs 10-32 SORT DEFAULTS 14-26 SQL recompilation 10-10 SQLCI 10-31 SQL_CMP_CPUS 12-11
E Index DML statements description 1-10 local autonomy 12-5 security considerations 4-4 Double buffering 14-18 DOUBLE PRECISION data type 5-22 Downgrading catalogs 2-16 Downtime by program, preventing 10-25 DROP CONSTRAINT statement 10-16 DROP INDEX statement 10-16 DROP statement COLLATION 7-34 description 7-28 VIEW 7-30 DROP TABLE statement 10-16 DROP VIEW statement 10-16 DSAP utility 11-36 DSL statements 1-10 DSLACK option APPEND command 8-16 LOAD command 8-7 RELOAD command 8-3 Dump See also Backing up
F Index Extent usage 13-2 EXTENTS option, FILEINFO 13-2 F Fallback situation 2-15 FastSort programs 2-8 SORTPROG process 14-26 SQL/MP calls 14-26 swap files 5-44 Field names defining 5-19 FILCHECK utility 6-14 File names resolution 10-13 File recovery recovering an SQL object 11-17, 11-27/11-29 File set, BACKUP utility 9-6 File system, double buffering 14-18 File Utility Program See FUP FILEINFO utility displaying attributes 6-10 displaying physical file names 6-10 displaying statistics 13-1 displaying v
G Index FUP (File Utility Program) (continued) SECURE command 2-8 SQLCI2 license A-1 STATUS command 8-4 SUSPEND command 8-4 G Generic locks 14-21 GET CATALOG statement 10-41 GET VERSION OF PROGRAM statement 10-13, 10-41 GROUP BY clause, avoiding sort 3-21 Group manager, security issues 4-4 Guardian names, resolution 10-13 H Hardware changing or moving 13-1 requirements 2-1 HEADING attribute 5-28, 10-29 HELP TEXT attribute 5-28, 10-29 statement 7-16 Help text COMMENTS table 6-7 similarity check rules 10-
K Index INDEXES catalog table, displaying information 6-8 INDEXES EXPLICIT option BACKUP command 4-18, 4-22 BACKUP utility 9-23 RESTORE utility 9-23 INDEXES IMPLICIT option BACKUP command 4-22 BACKUP utility 9-23 DUP utility 9-23 RESTORE utility 9-23 INDEXES OFF option, DUP utility 9-19, 9-23 Index-only access 3-20 INFO DEFINE command 5-7 INITIALIZE SQL command 2-4, 2-6, 9-9 Initializing SQL/MP 2-4 Inoperable plan compilation example 10-25 description 10-15 Insert operation, WITH CHECK 3-15 INSERT stateme
L Index L LABEL command, PUP 11-31, B-1 Labels, shadow 11-36 LARGEINT data type 5-22 Levels of indexes 13-2, 14-7 Licensed SQLCI2 process A-1 LIKE clause, CREATE TABLE statement 9-3, 9-21 LIKE predicate 6-4 LISTALL clause 7-35 Listing catalog names 9-34 NOREGISTER option 10-40 REGISTERONLY option 10-40 SQL objects 9-34 LOAD utility compared to COPY 8-7, 9-3 DataLoader/MP 8-9, 8-15 DSLACK option 8-7 ISLACK option 8-7 MOVE option 8-12 MOVEBYNAME option 7-26 moving data 8-7 moving database 9-23 moving tables
N Index Measure product (continued) ELAPSED-RECOMPILE-TIME counter 13-9 ESCALATION counters 13-10, 13-11 FILE entity 13-5, 13-8, 13-10 index creation 7-4 LOCK-WAITS counters 13-10, 13-11 OPENS counter 13-9 RECOMPILES counter 13-9 RECORDS-ACCESSED counters 13-9, 13-10 RECORDS-USED counter 13-10 SORTS counter 13-10 SQLPROC entity 13-5 SQLSTMT entity 13-5 SQL-NEWPROCESS counter 13-8 SQL-OBJ-RECOMPILE-TIME counter 13-8 statistics reports 13-6 TIMEOUTS counters 13-10, 13-11 Measuring performance 13-1/13-11 Mem
O Index NORECOMPILE option CHECK options 10-22 UPDATE STATISTICS statement 14-9 NOREGISTER option example 10-42 RESTORE SQLCOMPILE option 10-43 SQL compiler listing 10-40 NOREGISTER option, similarity check rules 10-41 NOT NULL clause 5-27 NULL clause 5-26 Null values 5-20, 5-26 Number of columns, similarity check rules 10-28 Number of indexes, similarity check rules 10-27 Numeric columns, defining for faster access 5-21 NUMERIC data type columns 5-20 DEFAULT SYSTEM clause 5-27 defining 5-22 O OBEY comma
P Index Partitions (continued) creating 7-7 displaying information 6-9 distributed databases 12-8 distributed systems 12-5 dropping 14-25 Format 2 2-11, 7-9, C-1/C-14 invalidating programs by adding 10-2 large quantities 5-34 loading, in parallel 8-9 local 12-9 lock limit 14-23 merging 7-20, 7-23 moving 7-20, 9-23 one-way split 7-21 performance benefits 7-7, 14-25 reorganizing 8-1, 8-5 restoring 11-3, 11-4 separate, loading 8-9 SMF 5-32 specifying 5-36 splitting 7-20, 8-2, 8-5 table organization 5-32 two-
P Index PHYSVOL option CREATE CATALOG statement 5-7 CREATE INDEX example 5-46 creating indexes 5-45 creating partitions 5-33 creating tables 5-18 system catalog 2-5 PINs 2-13 Planning database and data dictionary 1-3, 3-1 Plan, access 10-6, 10-12 Plan, query execution 10-6, 10-12 PRIMARY KEY clause 3-2, 3-6 Primary keys See also User-defined primary key access paths 3-2, 5-42 description 3-2, 5-11 entry-sequenced files 3-9, 5-14 generic locks 14-21 indexes 5-42 logical 3-2 null values 5-27 performance iss
Q Index PROGRAMS table 10-16 Protection views AUDIT attribute 4-15 defining 3-14, 5-38 securing 5-38 similarity check rules 10-28 PUP (Peripheral Utility Program) description 11-1 DOWN command 11-33 FORMAT command 11-33 LABEL command 11-31, B-1 options 11-31 REMOVE command 11-33 removing SQL objects B-1 RENAME command 11-32 REVIVE command 11-33 UP command 11-33 PURGE command 7-35 PURGE target option 9-4 PURGE utility 7-28, 7-35 PURGEDATA command 8-18 Purging SQL objects 9-33, B-1 Q Queries catalog access
REGISTERONLY option, SQL compiler listing 10-40
Relationships, See Dependencies
Relative files 3-1, 3-10, 5-14
Relative tables 3-1, 3-10, 5-14, 5-15
Releases, compatibility 2-1, 12-17
RELOAD command 8-1, 8-2, 8-3
Remote Duplicate Database Facility (RDF) 4-1
Remote node availability 12-7
Remote servers 12-11
RENAME option 7-36
RENAME statement, program invalidation 10-16
Renaming a node 9-32
Renaming objects 7-36
Renumbering a node 9-32
Reorganizing databases online 8-2
Replicated data 12-10
Reports
Security (continued)
  recommendations 4-4
  Safeguard 4-8
  schemes 4-4
  shorthand view 3-15, 5-40
  system catalog 5-10
  table 5-37
  user IDs 4-4, 5-1
  view 5-38, 7-18
SELECT statement
  NOREGISTER 10-41
  performance 5-47
  views 14-24
SENSITIVE flag 10-6
Sequential cache 14-17
SERIALWRITES file attribute, similarity check rules 10-28
Servers
  constraints 14-23
  remote 12-11
SET SESSION STATISTICS ON command 13-1, 13-3
SET STYLE VARCHAR_WIDTH command 6-5
Shadow labels 11-36
SHADOWSONLY option 11-36
Shorthand views
SMF (continued)
  FILEINFO command 6-10
  PHYSVOL option 5-33
  processor usage 12-13
  system catalog 2-5
Sort operations
  EXPLAIN utility 14-26
  index creation 5-44
  performance 14-26
  specifying indexes to improve performance 3-21
Sort programs 2-8
SORTED option 9-21
SORTPROG, FastSort process 14-26
SORTS counter, Measure 13-10
SORT_DEFAULTS DEFINEs 14-26
SOURCEDATE option 9-4
Split partitions
  one-way split 7-21
  two-way split 7-22
SQL
  See also SQL/MP
  compiler listing 10-40
  executor, behavior after SQL load
SQL/MP (continued)
  local autonomy 12-6
  logging 10-5
  options and performance 10-32
  PATHMON DEFINEs 10-10
  query plans 10-12
  SQL SENSITIVE flag 10-6
  SQL VALID flag 10-6
  statistics 10-12
  STOREDDEFINES option 10-32
  execution cost 13-9
  initializing 2-4, 2-6
  installing 2-1, 2-2
  measurement models 13-8
  migrating to a new version 2-10
  node name for system 12-3
  objects
    archiving 4-17
    distributed 12-1, 12-8
    PUP LABEL issues 11-31
    PUP RENAME issues 11-32
    purging 9-33
    removing B-1
    restoring 11-2
  programs 10-5,
Statistics (continued)
  performance 13-1, 14-8
  program execution 13-9
  similarity check rules 10-28
  SQL compilation 10-12
  SQLCI STATISTICS option 13-3
  SQLPROC report 13-6
  SQLSA 13-5
  SQLSTMT report 13-7
  UPDATE STATISTICS 14-7/14-11
  updating for tables and columns 14-7
STATUS command 8-4
Status information in programs 13-4
STOREDDEFINES option
  COMPILE option 10-25
  during SQL compilation 10-32
Subvolume names, name resolution 10-13
SUSPEND command 8-4
SUT (Site Update Tape) 2-2
Swap files 5-44
Tables (continued)
  PHYSVOL option 5-18
  preallocating space 5-14
  primary keys 3-4, 5-11
  purging data 8-18
  relative 3-1, 3-10, 5-14
  renaming 7-36
  reorganizing 8-1
  restoring 11-2, 11-6
  security 5-37, 10-29
  security dependencies of indexes 4-5
  similarity check rules 10-27
  statistics, updating 14-7
  USAGES catalog table 10-16
  VERSIONS catalog table 6-15
  view security 3-15, 5-38
TABLES catalog table, displaying information 6-4, 6-5, 6-6
TACL DEFINEs, example 10-35, 10-36
Tasks, database management 1-10
Unique index keys and performance 5-47
Unique indexes 3-18, 5-47
UNLOCK statement 10-41
UNRECLAIMED FREESPACE flag 7-24
Unstructured files 8-7
UP option (PUP) 11-33
Update operation, WITH CHECK 3-15
UPDATE statement 10-41
UPDATE STATISTICS statement
  adding indexes 7-5
  description 14-7/14-11
  effect on performance 14-8
  program invalidation 10-2, 10-16
  testing 14-9
UPGRADE CATALOG command 2-14
UPGRADE SYSTEM CATALOG command 2-15
Upgrading programs 2-12
UPSHIFT attribute 5-28, 10-29
USAGES catalog table
Views (continued)
  protection 3-14
  renaming 7-36
  restoring 11-6
  securing 3-15, 5-38, 7-18
  security and underlying table security 5-41
VIEWS catalog table
  displaying information 6-6
  shorthand view creation 5-40
VIEWS EXPLICIT option, DUP utility 9-21
VIEWS OFF option, DUP utility 9-19
Volume recovery
  TMF 1-5, 11-12, 11-31
  transaction backouts 4-12
Volumes
  auditing 4-12, 5-4
  labeling 11-31
  mirrored 2-2, 4-1
  PUP LABEL command 11-31
  PUP REMOVE and REVIVE 11-33
  removing 11-33
  Safeguard protection 4-8