In a DSS environment, for example, users can issue ad hoc queries that differ greatly in
size and duration, causing the DSS workload to fluctuate unpredictably. With the
mixed workload feature, the DBA or system manager can establish different priorities
for different queries. For example, the DBA can give a lower priority to long-running
queries so that queries of shorter duration can execute quickly, thereby helping to
balance resource utilization.
In an OLTP environment, user queries should execute quickly to meet users’
expectations of low response times. A batch query that updates an OLTP table or
generates a report would receive a lower priority than the OLTP queries. Essentially,
the batch query runs in the background and does not interfere with the performance of
the OLTP application.
The mixed workload feature is especially helpful when NonStop SQL/MP uses
parallel query execution for a large query. Without the mixed workload feature, data
access managers in many processors could become tied up as they scan all the
partitions of a table in parallel. With the feature enabled, each data access manager
checks its queue at intervals while it is executing a long-running request.
If a high-priority request is pending, the data access manager suspends processing of
the long-running request and serves the high-priority request. Moreover, the priority of a
long-running query is decreased to give another query of equal priority a chance to
execute. In this way, long-running queries are serviced only if the concurrently
running OLTP or high-priority DSS workload does not need processor and disk
resources.
The mixed workload feature also ensures that a low-priority query will eventually
resume executing after it has been preempted by a number of high-priority requests.
A cyclical scheduling algorithm allows the low-priority query to execute for a short time
before it is once again preempted.
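To make this scheduling pattern concrete, the following sketch, written in Python rather than in any Tandem interface and using illustrative names only, models a data access manager that checks a priority queue between short slices of work, serves any pending higher-priority request first, and re-queues an unfinished long-running request behind peers of equal priority so that a preempted query eventually resumes.

import heapq
from itertools import count

class DataAccessManager:
    """Illustrative sketch only; not Tandem code."""

    def __init__(self):
        self._queue = []      # (priority, sequence, request); lower value = higher priority
        self._seq = count()   # tie-breaker so equal-priority requests rotate fairly

    def submit(self, priority, request):
        heapq.heappush(self._queue, (priority, next(self._seq), request))

    def run(self):
        while self._queue:
            priority, _, request = heapq.heappop(self._queue)
            done = request.execute_slice()   # do a bounded amount of work, then check the queue again
            if not done:
                # Re-queue the unfinished request behind peers of equal priority,
                # giving another query of equal priority a chance to execute.
                self.submit(priority, request)

class ScanRequest:
    """A long-running request broken into short slices of work."""
    def __init__(self, name, slices):
        self.name, self.remaining = name, slices
    def execute_slice(self):
        self.remaining -= 1
        print(self.name, "slice done;", self.remaining, "remaining")
        return self.remaining == 0

dam = DataAccessManager()
dam.submit(priority=10, request=ScanRequest("long-running DSS scan", slices=3))
dam.submit(priority=1,  request=ScanRequest("OLTP request", slices=1))
dam.run()    # the OLTP request is served before the scan finishes its remaining slices

The sketch shows only the queue-checking and re-queuing pattern described in the text; in the product, this scheduling happens inside each data access manager process.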
Sequential Block Buffering

The most efficient way to satisfy some large DSS queries and batch reports and
updates is to scan the table sequentially. For sequential access, the data access
manager can use an I/O method called sequential block buffering, which returns data
to the file system one block at a time instead of one row at a time. (A block can be 512,
1024, 2048, or 4096 bytes long.)
If a query accesses most rows in a table and most columns in each row, sequential
block buffering reduces the number of messages it takes to return the qualifying data
to the file system.
Consider the following query:
SELECT *
FROM EMPLOYEE
FOR BROWSE ACCESS;
If each row in the EMPLOYEE table is 96 bytes and blocks are 1024 bytes, the data
access manager can return 10 rows at a time, reducing the number of messages by
approximately 90 percent. This method is also called real sequential block buffering
(RSBB) because the data access manager passes the actual block of rows to the file
system.
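As a rough check of that arithmetic, the short Python calculation below (the 1,000-row table size is an assumed figure for illustration, not from the manual) shows why returning a block at a time cuts the message count by roughly 90 percent.

block_size = 1024                      # bytes per block
row_size = 96                          # bytes per EMPLOYEE row
rows_per_block = block_size // row_size            # 10 whole rows fit in one block

row_count = 1000                       # assumed table size for illustration
one_row_per_message = row_count                    # 1,000 messages, one per row
one_block_per_message = -(-row_count // rows_per_block)   # ceiling division: 100 messages

savings = 1 - one_block_per_message / one_row_per_message
print(rows_per_block, "rows per block;", one_block_per_message,
      "messages instead of", one_row_per_message,
      "({:.0%} fewer)".format(savings))
# prints: 10 rows per block; 100 messages instead of 1000 (90% fewer)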