Tandem Capacity Model (TCM) Manual
7 Modeling Batch Workloads
TCM uses a multiple priority closed queueing model to predict task response time and transaction
throughput for batch transactions. It is assumed that OLTP transactions have higher priority than
batch transactions and that batch transactions do not interfere with OLTP activity. System resources
that are not used for processing OLTP transactions are left for batch use. TCM uses the remaining
system resources to calculate the CPU and disk seconds for processing batch transactions and then
takes the sum of CPU and disk seconds as batch response time. For a more detailed explanation
of the batch formula that TCM uses, see Appendix D: “TCM Response Time Formulas”.
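The rule described above can be sketched as a few lines of Python. This is an illustrative sketch only, not TCM's actual formula (which Appendix D gives in full); the function name and the sample numbers are invented for the example.

```python
# Illustrative sketch: TCM takes batch response time as the sum of the
# CPU seconds and disk seconds needed to process the batch transactions,
# using only the system resources left over after OLTP processing.
def batch_response_time(cpu_seconds, disk_seconds):
    """Batch response time in seconds (hypothetical simplification)."""
    return cpu_seconds + disk_seconds

# Example with invented values: 250 CPU seconds and 40 disk seconds.
print(batch_response_time(250.0, 40.0))  # 290.0 seconds
```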
This chapter:
• Describes the assumptions made about batch transactions
• Defines batch terms
• Explains the differences in modeling batch transactions in MeasTCM, the WA model, and the
Performance model
• Explains how to calibrate batch response time to observed response time
Assumptions
TCM makes the following assumptions about batch transactions:
• Batch transactions run at a lower priority than OLTP transactions. While increased online
activity interferes with the response time of batch activity, increased batch activity does not
interfere with the response time of online transactions.
• Batch activity in the disk process does not interfere with OLTP activity at all. This assumption
holds whether or not the mixed workload enhancement is used.
• Batch transactions do not use system resources uniformly. With OLTP activity, TCM assumes
that transactions are spread evenly across CPUs and disks. With batch activity, you can define
how batch activity should be allocated to your system resources. To model batch transactions
accurately, you must understand how batch applications run on your system.
• Process categories run in parallel across CPUs but in series within a CPU.
For example, Tape Read and DB Update are process categories in a batch application. Tape
Read and DB Update run one after the other; that is, in series. DB Update runs at the same
time in two separate CPUs. Tape Read uses 100 CPU seconds. DB Update uses 300 CPU
seconds, but because it runs simultaneously in two CPUs, its net contribution to response time
is 300/2 = 150 CPU seconds. The total response time needed by Tape Read and DB Update is
therefore 100 (Tape Read) + 150 (DB Update) = 250 CPU seconds. Figure 28 (page 97)
illustrates this example.
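The serial/parallel arithmetic in this example can be written as a short Python sketch. The function and category tuples are hypothetical illustrations; only the Tape Read and DB Update numbers come from the example above.

```python
# Hypothetical sketch of the category CPU-time rule: categories run in
# series (their net CPU seconds add), and a category that runs in
# parallel across N CPUs contributes its CPU seconds divided by N.
def net_cpu_seconds(categories):
    """categories: iterable of (name, cpu_seconds, parallel_cpus)."""
    return sum(cpu / cpus for _name, cpu, cpus in categories)

categories = [
    ("Tape Read", 100.0, 1),  # runs in a single CPU
    ("DB Update", 300.0, 2),  # runs simultaneously in two CPUs
]

print(net_cpu_seconds(categories))  # 100 + 300/2 = 250.0 CPU seconds
```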