Additionally, background tasks such as host-target communication or updating
the target screen run in parallel with the sample-time-based model tasks. This
allows you to interact with the target system while the target application
executes in real time at high sample rates. This is made possible by an
interrupt-driven real-time scheduler that executes the various tasks
according to their priority. The base sample time task can interrupt any
other task (larger sample time tasks or background tasks), and execution of
the interrupted task resumes as soon as the base sample time task completes.
The result is a quasi-parallel execution scheme that respects the priorities
of the tasks.
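The following minimal sketch illustrates the behavior of such a preemptive
scheme. It is written in POSIX C and uses a SIGALRM timer as a stand-in for
the timer interrupt; it is not the real-time kernel's actual scheduler, and
all names in it are hypothetical. A periodic handler plays the role of the
base sample time task and transparently preempts a background loop, which
resumes as soon as the handler returns.

    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/time.h>

    static volatile sig_atomic_t base_rate_ticks = 0;

    /* Stand-in for the base sample time task: runs in interrupt
     * (signal) context and therefore preempts any other activity. */
    static void base_rate_handler(int signum)
    {
        (void)signum;
        base_rate_ticks++;   /* one model step would execute here */
    }

    int main(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = base_rate_handler;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = SA_RESTART;
        sigaction(SIGALRM, &sa, NULL);

        /* Fire the "base rate interrupt" every 1 ms. */
        struct itimerval tv;
        tv.it_interval.tv_sec = 0;
        tv.it_interval.tv_usec = 1000;
        tv.it_value = tv.it_interval;
        setitimer(ITIMER_REAL, &tv, NULL);

        /* Background task (e.g., host-target communication): runs
         * whenever no base-rate work is pending and is transparently
         * interrupted and resumed. */
        while (base_rate_ticks < 5000) {
            /* ... low-priority background work ... */
        }

        printf("executed %d base-rate steps\n", (int)base_rate_ticks);
        return 0;
    }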
Latencies Introduced by Interrupt Mode
Interrupt mode has clear advantages over the other modes, but it has one
disadvantage: it introduces a constant overhead, or latency, that places a
lower bound on the achievable base sample time. This overhead is the sum of
several factors related to the interrupt-driven execution scheme and is
referred to here as the overall interrupt latency. Assuming that the
currently executing task is not in a critical section and has therefore not
disabled any interrupt sources, the overall latency consists of the
following parts:
• Interrupt controller latency — In a PC-compatible system the interrupt
controller is not part of the x86-compatible CPU but part of the CPU chip
set. The controller is accessed over the I/O-port address space, which
introduces a read or write latency of about 1 µs for each 8-bit or 16-bit
register access. Because the CPU has to check for the interrupt line
requesting an interrupt, and the controller has to be reset after the
interrupt has been serviced, a latency of about 5 µs is introduced to
properly handle the interrupt controller (see the sketch after this list).
• CPU hardware latency — Modern CPUs use instruction pipelines and branch
prediction to prefetch and speculatively execute the next several
instructions. When an interrupt occurs, the prediction fails and the
pipeline has to be flushed and refilled, which introduces additional
latency. In addition, the interrupt handler displaces cache contents, so
cache misses occur when the interrupted task resumes.
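As an illustration of the interrupt controller traffic described in the
first item above, the following hypothetical C sketch shows the kind of
per-interrupt I/O-port accesses a kernel performs on the legacy 8259A
controller of a PC-compatible system. It uses GCC inline assembly, requires
I/O privileges to run, and is not taken from the target kernel itself; it is
illustrative only. Each 8-bit port access costs on the order of 1 µs, which
is where an overall controller-handling latency on the order of 5 µs per
interrupt comes from.

    /* Port-mapped I/O primitives (x86, GCC inline assembly). */
    static inline void outb(unsigned short port, unsigned char val)
    {
        __asm__ __volatile__("outb %0, %1" : : "a"(val), "Nd"(port));
    }

    static inline unsigned char inb(unsigned short port)
    {
        unsigned char val;
        __asm__ __volatile__("inb %1, %0" : "=a"(val) : "Nd"(port));
        return val;
    }

    #define PIC1_CMD     0x20  /* master 8259A command/status port   */
    #define PIC_READ_ISR 0x0b  /* OCW3: select in-service register   */
    #define PIC_EOI      0x20  /* OCW2: non-specific end of interrupt */

    /* Typical per-interrupt controller handling: identify the
     * in-service interrupt, then re-arm the controller. Each port
     * access below costs roughly 1 us on a PC-compatible system. */
    void service_interrupt_controller(void)
    {
        outb(PIC1_CMD, PIC_READ_ISR);  /* select the ISR register  */
        (void)inb(PIC1_CMD);           /* read which IRQ is active */
        outb(PIC1_CMD, PIC_EOI);       /* acknowledge / reset PIC  */
    }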