Enscribe Programmer's Guide
A successful FILE_READUPDATELOCK64_/READUPDATELOCK[X] operation causes the record to be
deleted from the file.
If a transaction that is associated with a dequeue request aborts, the disk process reinserts the
record at the same logical location from which it came. The abort operation also awakens any
processes waiting to dequeue a record from the file.
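The reinsert-on-abort behavior can be pictured with a small in-memory model. This is a hypothetical sketch, not the Guardian API: the class name, the sorted-list representation, and the use of a condition variable to stand in for the disk process waking waiters are all illustrative assumptions.

```python
import bisect
import threading

# Hypothetical in-memory model (NOT the Guardian API) of how the disk
# process reinserts a dequeued record at its original logical location
# when the associated transaction aborts, and wakes waiting dequeuers.
class QueueFileModel:
    def __init__(self):
        self._records = []                 # kept in logical (key) order
        self._cond = threading.Condition()

    def dequeue_first(self):
        """Remove and return the first record, as a successful
        READUPDATELOCK[X] would delete it from the file."""
        with self._cond:
            while not self._records:
                self._cond.wait()          # disk process retains request
            return self._records.pop(0)

    def abort(self, record):
        """On transaction abort: reinsert at the same logical location
        and awaken any processes waiting to dequeue."""
        with self._cond:
            bisect.insort(self._records, record)
            self._cond.notify_all()

q = QueueFileModel()
q._records.extend([10, 20, 30])

rec = q.dequeue_first()   # rec == 10; record is gone from the file
q.abort(rec)              # abort puts it back at the front
print(q._records)         # [10, 20, 30]
```

Because the reinsert is by key, other dequeuers see the record in exactly the position it occupied before the aborted transaction removed it.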
If the read operation is not successful because the file is empty or all records are locked, the disk
process retains the request and waits until one of these events occurs:
• A transaction completes and makes a record available. The disk process then retries the
FILE_READUPDATELOCK64_/READUPDATELOCK[X] operation. Note that either of these can
make a record available:
◦ A transaction that commits after inserting a record
◦ A transaction that aborts after dequeuing a record
• The special queue-file timeout on the read operation expires. The disk process returns an
error 162 (operation timed out) to the requesting process.
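The two outcomes above can be sketched as follows. This is a hypothetical model, not the Guardian API: the class name, the `timeout_s` parameter, and the use of a Python condition variable in place of the disk process's retained request are illustrative assumptions; only the error number 162 comes from the text.

```python
import threading

FETIMEDOUT = 162  # file-system error: operation timed out

# Hypothetical model (NOT the Guardian API) of a dequeue request against
# an empty queue file: either a record becomes available and the request
# is retried, or the queue-file timeout expires with error 162.
class DequeueWaiter:
    def __init__(self):
        self._records = []
        self._cond = threading.Condition()

    def insert_committed(self, record):
        # A committing insert (or an aborting dequeue) makes a record
        # available and wakes any waiting dequeuers.
        with self._cond:
            self._records.append(record)
            self._cond.notify_all()

    def read_update_lock(self, timeout_s):
        with self._cond:
            while not self._records:
                if not self._cond.wait(timeout=timeout_s):
                    return FETIMEDOUT, None   # error 162 to the requester
            return 0, self._records.pop(0)    # retried and satisfied

q = DequeueWaiter()
first_err, _ = q.read_update_lock(timeout_s=0.05)  # empty file: times out
q.insert_committed("order-1")
err, rec = q.read_update_lock(timeout_s=0.05)
print(first_err, err, rec)
```

The real disk process retains the request server-side rather than blocking the caller's thread, but the observable outcomes (a record, or error 162) are the same.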
One exception is the use of exact positioning. If the application requests exact positioning and the
file is empty or the record does not exist, the FILE_READUPDATELOCK64_/READUPDATELOCK[X]
operation receives an error 11 (record not found) and does not queue the request.
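The exact-positioning exception can be shown in a few lines. This is a hypothetical sketch, not the Guardian API: the function name and dict-based file are illustrative; only the error number 11 and the no-queueing behavior come from the text.

```python
FENOTFOUND = 11  # file-system error: record not found

# Hypothetical sketch (NOT the Guardian API): with exact positioning, a
# missing record fails immediately with error 11 instead of the request
# being queued to wait for an insert.
def read_update_lock_exact(records, key):
    if key in records:
        return 0, records.pop(key)
    return FENOTFOUND, None   # error 11; the request is not queued

records = {"A001": b"payload"}
err, rec = read_update_lock_exact(records, "B002")
print(err, rec)   # 11 None
```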
Generally, errors (other than operation timed out) on a
FILE_READUPDATELOCK64_/READUPDATELOCK[X] operation should be handled like errors on
normal write operations. That is, the transaction should be aborted.
Note the behavior of queue files when generic locking is used. If the lock key-length of a queue
file is less than the actual key length, the disk process will perform generic locking on inserted
records. Inserting a record when generic locking is enforced will lock existing records that have
the same key for the lock key-length. This prevents existing records with the matching generic key
from being dequeued until the encompassing transaction completes.
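The generic-locking effect can be illustrated as a key-prefix match. This is a hypothetical sketch, not the disk process's actual locking implementation: the function, the 4-byte lock key-length, and the sample keys are all assumed for illustration.

```python
# Hypothetical sketch of generic locking on a queue file: when the lock
# key-length is shorter than the actual key length, inserting a record
# locks every existing record whose key matches on the first
# lock-key-length bytes, so none of them can be dequeued until the
# inserting transaction completes.
LOCK_KEY_LENGTH = 4   # assumed; shorter than the 8-byte keys below

def locked_by_insert(existing_keys, inserted_key, lock_key_len):
    prefix = inserted_key[:lock_key_len]
    return {k for k in existing_keys if k[:lock_key_len] == prefix}

existing = {"CUST0001", "CUST0002", "ACCT0001"}
locked = locked_by_insert(existing, "CUST0003", LOCK_KEY_LENGTH)
print(sorted(locked))   # ['CUST0001', 'CUST0002']
```

Note that `ACCT0001` remains dequeuable: only records sharing the generic key prefix of the inserted record are held.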
Impact of Records Causing Data Errors
When using audited queue files, there is an additional consideration for error processing. If a
dequeuing operation returns a bad record that causes the application to abort the transaction, the
bad record is reinserted into the file.
Consider the case where a transaction is started, a record is dequeued, and the data
returned to the application causes it to abort the transaction (either through a programmatic
abort or a process failure). The abort operation causes the bad record to be reinserted into the queue
file. If the application performs another dequeue operation, it retrieves the same record and could
abort again.
Although this might not cause difficulties, the application would not progress past the bad record.
To avoid this situation, validate record contents prior to processing data.
This problem only affects audited operations; in the unaudited case, the bad record is not reinserted
into the file, but is lost.
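One way to avoid the dequeue/abort loop is to validate each record before acting on it and to set bad records aside rather than aborting. This is a hypothetical sketch of that pattern, not the Guardian API; the function names, the in-memory list standing in for the queue file, and the `rejected` side list are all illustrative assumptions.

```python
# Hypothetical sketch: validate record contents before processing so a
# bad record does not cause an endless dequeue/abort/reinsert loop.
def process_queue(records, is_valid, handle):
    rejected = []
    while records:
        rec = records.pop(0)         # dequeue (inside a transaction)
        if not is_valid(rec):
            rejected.append(rec)     # set aside instead of aborting,
            continue                 # which would reinsert the record
        handle(rec)                  # commit after successful work
    return rejected

done = []
bad = process_queue([b"ok-1", b"\xff", b"ok-2"],
                    is_valid=lambda r: r.startswith(b"ok"),
                    handle=done.append)
print(done, bad)   # [b'ok-1', b'ok-2'] [b'\xff']
```

In a real application the rejected records might be committed into a separate error file so the information is preserved without blocking the queue.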
Dequeuing Records and Nowait I/O
If the time limit expires prior to the queue file timeout, the
FILE_READUPDATELOCK64_/READUPDATELOCK[X] request is canceled if it was a file-specific
call (that is, the file number is other than -1). With non-file-specific calls,
FILE_READUPDATELOCK64_/READUPDATELOCK[X] is not canceled for the queue file. A canceled
FILE_READUPDATELOCK64_/READUPDATELOCK[X] can result in the loss of a record from the
queue file. This problem is particularly acute for queue files, since a dequeuing operation can be
delayed until the file contains a record that fits the request.
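Why a canceled dequeue can lose a record can be seen in a minimal race sketch. This is a hypothetical model, not the Guardian nowait I/O machinery: the point is only the ordering, in which the disk process may complete the dequeue (deleting the record) before the cancel takes effect, and the cancel then discards the completed reply.

```python
# Hypothetical sketch: the disk process completes the dequeue, deleting
# the record from the file, but the time-limit cancel discards the
# reply before the application ever sees it.
records = ["R1", "R2"]

def nowait_dequeue(records):
    return records.pop(0)    # disk process deletes and returns "R1"

reply = nowait_dequeue(records)
reply = None                 # cancel discards the completed reply
print(records, reply)        # ['R2'] None -- "R1" is lost
```

The record is gone from both the file and the application, which is why canceling a pending dequeue on an unaudited queue file is hazardous.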