Index blocks function much like data blocks in that they contain records. In an index
block, however, each record is a pointer to a corresponding data block. These index
blocks are the mechanism for key-sequenced data retrieval.
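As an illustration only (not the DP2 on-disk format), an index block can be modeled as a set of records, each pairing the low key of a child block with that block's number; a key-sequenced retrieval walks down the index levels by picking the record that covers the requested key. The structure and field names below are assumptions made for the sketch.

/* Hypothetical model of an index block; field names and limits are
 * illustrative only, not the DP2 on-disk layout. */
#include <string.h>

#define MAX_RECORDS 8

typedef struct {
    char key[16];       /* low key of the child block       */
    long child_block;   /* block number of the child block  */
} IndexRecord;

typedef struct {
    int         count;            /* records currently in the block */
    IndexRecord rec[MAX_RECORDS]; /* kept in ascending key order    */
} IndexBlock;

/* Return the child block that covers 'key': the last record whose low
 * key is <= the requested key.  Assumes at least one record. */
long find_child(const IndexBlock *ix, const char *key)
{
    long child = ix->rec[0].child_block;
    for (int i = 0; i < ix->count; i++) {
        if (strcmp(ix->rec[i].key, key) <= 0)
            child = ix->rec[i].child_block;
        else
            break;
    }
    return child;
}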
When an index block becomes full, it splits: a new index block is created, and some of
the pointers are moved from the old index block to the new one. The first time this
occurs in a file, the disk process must generate a new level of indexes. It does this by
allocating a higher-level index block that contains the low key of, and a pointer to, each
of the two lower-level index blocks (which in turn point to many data blocks). The disk
process must do this again each time the “root” (highest-level) block is split.
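A minimal sketch of this two-way split, using the hypothetical IndexBlock model from the previous example and assuming the caller supplies the empty blocks: half of the records move to a new block, and when the root splits, a new root is built from the low key and block number of each of the two halves.

/* Two-way split (illustration).  Assumes the IndexBlock/IndexRecord
 * definitions from the earlier sketch. */
#include <string.h>

/* Split a full block: the caller supplies an empty 'right' block, and
 * the upper half of the records moves into it. */
void split_two_way(IndexBlock *left, IndexBlock *right)
{
    int half = left->count / 2;
    right->count = left->count - half;
    memcpy(right->rec, &left->rec[half],
           (size_t)right->count * sizeof(IndexRecord));
    left->count = half;
}

/* When the root splits, a new index level is added: the new root holds
 * the low key and block number of each of the two lower-level blocks. */
void build_new_root(IndexBlock *root,
                    const IndexBlock *left,  long left_block,
                    const IndexBlock *right, long right_block)
{
    root->count = 2;
    strcpy(root->rec[0].key, left->rec[0].key);
    root->rec[0].child_block = left_block;
    strcpy(root->rec[1].key, right->rec[0].key);
    root->rec[1].child_block = right_block;
}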
The DP2 disk process sometimes performs a three-way block split, creating two new
blocks and distributing the original block's data or pointers (plus the new record or
pointer) among all three.
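The three-way split can be sketched the same way, again using the hypothetical model above and caller-supplied empty blocks: the original records plus the incoming one are merged in key order and then distributed roughly evenly across the original block and the two new ones.

/* Three-way split (illustration, same hypothetical model). */
#include <string.h>

void split_three_way(IndexBlock *orig, IndexRecord incoming,
                     IndexBlock *new1, IndexBlock *new2)
{
    /* Gather the existing records plus the new one, in key order. */
    IndexRecord all[MAX_RECORDS + 1];
    int n = 0, i = 0;
    while (i < orig->count && strcmp(orig->rec[i].key, incoming.key) < 0)
        all[n++] = orig->rec[i++];
    all[n++] = incoming;
    while (i < orig->count)
        all[n++] = orig->rec[i++];

    /* Distribute roughly a third of the records to each block. */
    int third = n / 3;
    IndexBlock *dest[3]  = { orig, new1, new2 };
    int         counts[3] = { third, third, n - 2 * third };
    int pos = 0;
    for (int b = 0; b < 3; b++) {
        dest[b]->count = counts[b];
        memcpy(dest[b]->rec, &all[pos],
               (size_t)counts[b] * sizeof(IndexRecord));
        pos += counts[b];
    }
}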
As online transactions and batch jobs take place, both index and data blocks can split.
Over time, these splits cause the file to become fragmented or disorganized. The
next section explains how this happens and shows the difference between an
organized and a disorganized file.
File Disorganization
Figure 1-2 shows an organized file. This file is organized because (see the sketch
following this list):

•  The data blocks are physically contiguous on the disk and are arranged according
   to the primary record keys.

•  The index and data blocks are uniformly filled with data and have little or no
   wasted disk space.

•  The file has the fewest possible index levels.
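These criteria can be expressed as a rough, programmatic check. The statistics structure, field names, and thresholds below are assumptions made for the sketch (such figures could be gathered, for example, from a file utility or from the Reload Analyzer itself); they are not Tandem-recommended values.

#include <stdbool.h>

/* Hypothetical per-file statistics; field names are illustrative only. */
typedef struct {
    double data_block_fill_pct;     /* average fill of data blocks        */
    double index_block_fill_pct;    /* average fill of index blocks       */
    double pct_blocks_out_of_order; /* data blocks not in key sequence    */
    int    index_levels;            /* current number of index levels     */
    int    minimum_index_levels;    /* fewest levels that could hold file */
} FileStats;

/* Rough organization test based on the three criteria above; the
 * thresholds are assumptions, not recommendations. */
bool file_is_organized(const FileStats *s)
{
    return s->pct_blocks_out_of_order < 5.0   /* near physical key order */
        && s->data_block_fill_pct   > 80.0    /* little wasted space     */
        && s->index_block_fill_pct  > 80.0
        && s->index_levels == s->minimum_index_levels;
}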