RDF System Management Manual
Table Of Contents
- RDF System Management Manual
- What’s New in This Manual
- About This Manual
- 1 Introducing RDF
- RDF Subsystem Overview
- RDF Processes
- RDF Operations
- Reciprocal and Chain Replication
- Available Types of Replication to Multiple Backup Systems
- Triple Contingency
- Loopback Configuration (Single System)
- Online Product Initialization
- Online Database Synchronization
- Online Dumps
- Subvolume- and File-Level Replication
- Shared Access DDL Operations
- EMS Support
- SMF Support
- RTD Warning Thresholds
- Process-Lockstep Operation
- Support for Network Transactions
- RDF and NonStop SQL/MX
- Zero Lost Transactions (ZLT)
- Monitoring RDF Entities With ASAP
- 2 Preparing the RDF Environment
- 3 Installing and Configuring RDF
- 4 Operating and Monitoring RDF
- 5 Managing RDF
- Recovering From File System Errors
- Handling Disk Space Problems
- Responding to Operational Failures
- Stopping RDF
- Restarting RDF
- Carrying Out a Planned Switchover
- Takeover Operations
- Reading the Backup Database
- Access to Backup Databases in a Consistent State
- RDF and NonStop SQL/MP DDL Operations
- RDF and NonStop SQL/MX Operations
- Backing Up Image Trail Files
- Making Online Dumps With Updaters Running
- Doing FUP RELOAD Operations With Updaters Running
- Exception File Optimization
- Switching Disks on Updater UPDATEVOLUMES
- 6 Maintaining the Databases
- 7 Online Database Synchronization
- 8 Entering RDFCOM Commands
- 9 Entering RDFSCAN Commands
- 10 Triple Contingency
- 11 Subvolume- and File-Level Replication
- 12 Auxiliary Audit Trails
- 13 Network Transactions
- Configuration Changes
- RDF Network Control Files
- Normal RDF Processing Within a Network Environment
- RDF Takeovers Within a Network Environment
- Takeover Phase 1 – Local Undo
- Takeover Phase 2 – File Undo
- Takeover Phase 3 – Network Undo
- Takeover Phase 3 Performance
- Communication Failures During Phase 3 Takeover Processing
- Takeover Delays and Purger Restarts
- Takeover Restartability
- Takeover and File Recovery
- The Effects of Undoing Network Transactions
- Takeover and the RETAINCOUNT Value
- Network Configurations and Shared Access NonStop SQL/MP DDL Operations
- Network Validation and Considerations
- RDF Re-Initialization in a Network Environment
- RDF Networks and ABORT or STOP RDF Operations
- RDF Networks and Stop-Update-to-Time Operations
- Sample Configurations
- RDFCOM STATUS Display
- 14 Process-Lockstep Operation
- Starting a Lockstep Operation
- The DoLockstep Procedure
- The Lockstep Transaction
- RDF Lockstep File
- Multiple Concurrent Lockstep Operations
- The Lockstep Gateway Process
- Disabling Lockstep
- Reenabling Lockstep
- Lockstep Performance Ramifications
- Lockstep and Auxiliary Audit Trails
- Lockstep and Network Transactions
- Lockstep Operation Event Messages
- 15 NonStop SQL/MX and RDF
- Including and Excluding SQL/MX Objects
- Obtaining ANSI Object Names From Updater Event Messages
- Creating NonStop SQL/MX Primary and Backup Databases from Scratch
- Creating a NonStop SQL/MX Backup Database From an Existing Primary Database
- Online Database Synchronization With NonStop SQL/MX Objects
- Offline Synchronization for a Single Partition
- Online Synchronization for a Single Partition
- Correcting Incorrect NonStop SQL/MX Name Mapping
- Consideration for Creating Backup Tables
- Restoring to a Specific Location
- Comparing NonStop SQL/MX Tables
- 16 Zero Lost Transactions (ZLT)
- A RDF Command Summary
- B Additional Reference Information
- C Messages
- D Operational Limits
- E Using ASAP
- Index
HP NonStop RDF System Management Manual—524388-003
Data Communication (Expand) Resources
RDF sends filtered audit data from the primary system over the network to the backup
system. A communications path between the systems can be any form of Expand
linkage. Plan to configure sufficient communications resources between the primary
and backup systems so that RDF can do the following:
• Handle the peak rate of audit data
• Catch up processing in any audit trail if the communications paths go down and
are restored (without RDF reinitialization)
If you are using a dedicated Expand path with high throughput, you should set
PATHPACKETBYTES to 8192. If you are not using a dedicated Expand path, you
should use Multipacket frames with PATHBLOCKBYTES set to 8192. (See also
Specifying System Generation Parameters for an RDF Environment.)
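As a sketch only, the path modifiers named above are set through SCF; the exact syntax can vary by RVU, so consult the Expand configuration documentation for your release. In this illustration, $SC01 is a hypothetical Expand path name; use PATHPACKETBYTES for a dedicated path and PATHBLOCKBYTES for a shared path with multipacket frames:

```
-> ALTER PATH $SC01, PATHPACKETBYTES 8192
-> ALTER PATH $SC01, PATHBLOCKBYTES 8192
```

Apply whichever modifier matches your path configuration, not both.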
RDF is designed to extract audit information from the primary system and transmit it to
the backup system as quickly as possible. If you are not using the ZLT capability, this
rapid transmission limits the number of transactions that could be lost if a disaster
occurs at the primary system. See Unplanned Outages Without ZLT in Section 1.
To estimate the data communications resources needed for RDF, calculate the amount
of audit trail data generated per second during peak loads. If your business has
seasonal peaks, such as holidays or the ends of calendar quarters, base your estimate
on the peak rate at those times.
The discussion that follows pertains to the master audit trail (MAT). If you are
replicating auxiliary audit trails, use the same algorithm for each auxiliary audit trail.
Use the following sampling process once an hour for two weeks to establish your
needs:
1. Enter a FUP INFO command for the current TMF MAT and record the end-of-file
(EOF) value; for example:
FUP INFO $AUDIT.ZTMFAT.*
$AUDIT.ZTMFAT
          CODE       EOF  LAST MODIF  OWNER  RWEP  TYPE REC BLOCK
AA000003   134  11292672       10:05     -1  GGGG
2. Enter a FUP INFO command for the current MAT 5 minutes later and record the
EOF value; for example:
FUP INFO $AUDIT.ZTMFAT.*
$AUDIT.ZTMFAT
          CODE       EOF  LAST MODIF  OWNER  RWEP  TYPE REC BLOCK
AA000003   134  11653120       10:10     -1  GGGG
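The arithmetic behind one sampling interval can be sketched in shell, using the sample EOF values shown above; the variable names are illustrative only and are not part of any RDF or FUP tooling:

```shell
# EOF values from the two FUP INFO samples, taken 5 minutes (300 s) apart
EOF1=11292672   # EOF recorded at 10:05
EOF2=11653120   # EOF recorded at 10:10

# Average audit-generation rate over the interval, in bytes per second
RATE=$(( (EOF2 - EOF1) / 300 ))
echo "audit rate: ${RATE} bytes/second"
```

Repeating this calculation once an hour for two weeks, as described above, and taking the largest result gives the peak audit rate your communications resources must sustain.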