Mixed Workload Design Priority Guidelines White Paper

Hewlett-Packard Company—528875-001
Summary
This white paper briefly describes the high-level Mixed Workload design and presents general guidelines for
controlling mixed workload applications using priority settings.
MWE Basic Design
The HP NonStop Kernel operating system uses a priority-based design to control process dispatch. The
highest priority executable process on the ready-list is dispatched.
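The following C sketch illustrates this dispatch rule in simplified form. It is not NonStop Kernel source; the process structure, its fields, and the priority values shown are assumptions chosen for the example. Among the executable processes on the ready-list, the one with the highest priority is selected.

#include <stdio.h>

typedef struct {
    int pid;
    int priority;   /* higher value = higher priority; 220 is system priority */
    int ready;      /* non-zero if the process is executable */
} process_t;

/* Return the index of the highest-priority executable process, or -1 if none. */
static int pick_next(const process_t *list, int n)
{
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (list[i].ready &&
            (best < 0 || list[i].priority > list[best].priority))
            best = i;
    }
    return best;
}

int main(void)
{
    process_t ready_list[] = {
        { 10, 150, 1 },   /* application server process              */
        { 11, 220, 1 },   /* system service process (DP2-like)       */
        { 12, 100, 0 },   /* batch process, currently not executable */
    };
    int next = pick_next(ready_list, 3);
    if (next >= 0)
        printf("dispatch pid %d at priority %d\n",
               ready_list[next].pid, ready_list[next].priority);
    return 0;
}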
Applications rely on a variety of system services, including communication, security, transaction, and data
services. These services are typically provided through a client-server model implemented with NonStop process pairs.
The heart of the database engine is the data access manager, more commonly referred to as DP2.
By definition, DP2 runs at system priority (220), which ensures that the operating system and transaction
services function properly.
Definition: Priority Inversion
When a service process executes at system priority, a low-priority transaction can make use of high-priority
services.
Requests from low-priority workloads can therefore delay service for higher-priority workloads. This
behavior is termed “priority inversion”.
To prevent priority inversion, the service process must not service lower-priority requests while
higher-priority workloads are waiting for any resources, AND must preempt active service when
higher-priority workloads require new resources.
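The following C sketch illustrates these two rules in simplified form. It is not DP2 source; the request structure, its fields, and the priorities shown are hypothetical. The service process always serves the highest-priority pending request and re-checks its queue after each unit of work, so a newly arrived higher-priority request effectively preempts lower-priority service.

#include <stdio.h>

typedef struct {
    int requester_priority;  /* priority of the requesting workload       */
    int work_remaining;      /* units of service this request still needs */
} request_t;

/* Return the index of the highest-priority pending request, or -1 if none. */
static int highest_pending(const request_t *q, int n)
{
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (q[i].work_remaining > 0 &&
            (best < 0 || q[i].requester_priority > q[best].requester_priority))
            best = i;
    }
    return best;
}

int main(void)
{
    request_t queue[] = {
        { 100, 3 },   /* request from a low-priority batch query     */
        { 180, 2 },   /* request from a high-priority OLTP workload  */
    };
    int active = highest_pending(queue, 2);
    while (active >= 0) {
        /* Serve one unit of work for the active request, then re-check the
         * queue; if a higher-priority request is pending, it takes over. */
        queue[active].work_remaining--;
        printf("served one unit at requester priority %d\n",
               queue[active].requester_priority);
        active = highest_pending(queue, 2);
    }
    return 0;
}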
DP2, as a high-priority system service, uses message priority to prevent priority inversion for both CPU and
I/O resource contention. The methods used to resolve this contention include the following (a simplified
sketch of the common priority comparison appears after the list):
1. CPU context switch, which compares the priority of the currently active request with the priority
of processes on the ready-list. This is used to resolve conflicts for the CPU resource.
2. Request context switch, which compares the priority of the currently active request with the
priority of the message queue. This is used to resolve conflicts for the data services.
3. Reduction in query read ahead, which compares the priority of the currently active request with
the priority of any active disk I/O. This is used to resolve conflicts for I/O services.
4. Query deferral, which compares the priority of the currently active request with the priority of any
active disk I/O. This is used to resolve conflicts for I/O services when parallel query workloads
are active.
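The following C sketch shows, in simplified form, the priority comparison that the four methods above have in common. It is not DP2 source; the function and variable names are hypothetical. The priority of the currently active request is compared with the priority of the competing work (a ready-list process, a queued message, or an active disk I/O), and the active request gives way when the competing work has higher priority.

#include <stdbool.h>
#include <stdio.h>

/* Decide whether the currently active request should give way to competing
 * work (a ready-list process, a queued message, or an active disk I/O). */
static bool should_yield(int active_priority, int competing_priority)
{
    return competing_priority > active_priority;
}

int main(void)
{
    int active_priority    = 120;  /* e.g. a long-running parallel query        */
    int competing_priority = 180;  /* e.g. an OLTP request waiting on the queue */

    if (should_yield(active_priority, competing_priority))
        printf("yield: defer or preempt the active request\n");
    else
        printf("continue: the active request has equal or higher priority\n");
    return 0;
}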
