Enscribe Programmer's Guide
be indicated by error 122. Buffered cache is the default for audited files because TMF can recover
committed, buffered updates lost due to a system failure.
NOTE: Be careful in using the combination of nonaudited, buffered files with a sync-depth of 0
(no checkpointing). This combination provides high-performance updates but might compromise
data integrity in certain situations. For restartable applications (old master to new master, for
example), this is not a concern. With online transaction applications, however, there is a risk that
some updates buffered in the cache could be lost if the primary CPU fails. This is not a problem if the
primary-to-backup switch is made deliberately or is due to a controller path error, because no processor
failure is involved. If there is a disk process backup takeover due to primary CPU failure, the disk
process returns an error 122 on the next request of any kind for that file, indicating a possible
prior loss of buffered updates.
If a nonaudited buffered file with a sync-depth of 0 is used, the application should use SETMODE
95 to flush the buffered updates to disk before closing the file. The application should not depend
on the FILE_CLOSE_ procedure to do this, because FILE_CLOSE_ does not return an error and
consequently there would be no indication of a possible prior loss of buffered updates (error 122).
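The same principle applies in any buffered-I/O environment: flush explicitly and check for errors before closing, rather than relying on close to report them. A loose POSIX-style analogue in Python (os.fsync stands in for SETMODE 95 here; this is an illustration, not Guardian code):

```python
import os
import tempfile

def write_with_explicit_flush(path, data):
    """Write buffered data, then flush and sync explicitly so any I/O
    error surfaces here -- before close -- where it can be handled."""
    with open(path, "wb") as f:
        f.write(data)
        f.flush()              # push the user-space buffer to the OS
        os.fsync(f.fileno())   # force OS buffers to disk (analogue of SETMODE 95)

path = os.path.join(tempfile.mkdtemp(), "demo.dat")
write_with_explicit_flush(path, b"payload")
```

If the flush or sync fails, the exception is raised while the application can still retry or report the error, rather than being silently lost at close time.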
The disk process avoids fragmentation of cache memory space by grouping all 4096-byte blocks
in one area, all 2048-byte blocks in another area, and so forth. You set the amount of cache
memory devoted to each block size by using the PUP SETCACHE command. For the system disk,
the system operator sets the amount of cache memory devoted to each block size during system
configuration.
The disk process cache manager maintains ordered lists of its cache blocks (one for each size of
cache block), with the most recently used at the top of the list and the least recently used at the
bottom. When the cache manager needs a new block, it typically uses the entry at the bottom of
the appropriate list. After a block has been used, its entry moves to the top of the list and it thereby
becomes the most recently used block. As blocks are used, the various entries in the list gradually
migrate downward toward the bottom of the list.
Index and bit-map blocks, however, are kept in cache longer than data blocks. The cache manager
reuses a data block as soon as it reaches the bottom of the list, but allows index and bit-map
blocks to migrate through the list twice before reusing them.
Because this technique is believed to be advantageous in every application environment, there is
no way to disable it.
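The policy described above can be modeled as an LRU list in which index and bit-map blocks get a second pass: when one reaches the bottom, it is sent back through the list once before becoming eligible for reuse. A small Python sketch (the class and field names are invented for illustration; the actual disk process is not implemented this way):

```python
from collections import OrderedDict

class TwoPassLRU:
    """LRU list where 'sticky' entries (modeling index and bit-map blocks)
    survive one extra trip through the list before being reused."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()   # least recently used first
        self.second_pass = set()       # sticky entries not yet recycled once

    def access(self, key, sticky=False):
        if key in self.entries:
            self.entries.move_to_end(key)      # becomes most recently used
        else:
            self.entries[key] = None
            if sticky:
                self.second_pass.add(key)
        while len(self.entries) > self.capacity:
            victim = next(iter(self.entries))  # entry at the bottom of the list
            if victim in self.second_pass:
                # index/bit-map block: send it through the list a second time
                self.second_pass.discard(victim)
                self.entries.move_to_end(victim)
            else:
                del self.entries[victim]       # data block: reuse immediately

cache = TwoPassLRU(2)
cache.access("index-blk", sticky=True)
cache.access("data-1")
cache.access("data-2")   # over capacity: index block survives, data-1 is reused
```

After the third access, the index block has been granted a second pass while the least recently used data block was reclaimed, so the cache holds "data-2" and "index-blk".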
Sequential Block Buffering
When reading a file sequentially, you can reduce system overhead if you enable sequential block
buffering when you open the file. Note that this applies to read access only.
Sequential block buffering essentially relocates the record deblocking buffer from the disk process
to your application's process file segment (PFS). The Enscribe software then uses the PFS buffer to
deblock the file's records.
Without sequential block buffering, the file system must request each record separately from the
disk process; for each record, this involves sending an interprocess message and changing the
environment. With sequential block buffering enabled, an entire block is returned from the disk
process and stored in the PFS buffer. Once a block is in the PFS buffer, subsequent read access
to records within that block is performed entirely by the file system (not the disk process) and
requires no hardware disk accesses, no communication with the disk process, and no environment
changes.
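The saving comes from deblocking in the application's own address space: one message fetches an entire block, and the records in it are then handed out locally. A rough Python model of the message count (the record and block sizes are illustrative, not fixed by Enscribe):

```python
BLOCK_SIZE = 4096
RECORD_SIZE = 256
RECORDS_PER_BLOCK = BLOCK_SIZE // RECORD_SIZE   # 16 records per block

def messages_needed(num_records, block_buffered):
    """Interprocess messages sent to the disk process for a sequential read."""
    if block_buffered:
        # one request per block; deblocking happens in the PFS buffer
        return -(-num_records // RECORDS_PER_BLOCK)   # ceiling division
    # without sequential block buffering: one request per record
    return num_records
```

Reading 1,000 records sequentially would cost 1,000 interprocess messages without block buffering, but only 63 with it, since each message delivers a whole block of 16 records.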
If sequential block buffering is to be used, the file should usually be opened with protected or
exclusive access. Shared access can be used, although it can cause some problems.