Non-Confidential | ARM 100400_0001_03_en
Low-latency interrupt mode
Low-latency interrupt mode is enabled or disabled through the System Control Register (SCTLR). The fast interrupt (FI) bit that controls this mode is disabled by default, which favors performance, and can be set if you require tighter control over determinism. Enabling low-latency interrupt mode makes entry into an interrupt routine quicker, at the cost of a small reduction in overall core performance.
When low-latency interrupt mode is disabled, interrupts are inserted at the decode stage and are treated as a branch instruction targeting the interrupt vector. This means that all instructions already in the pipeline must complete before the first instructions of the interrupt handler can execute. When an instruction in the pipeline depends on a load that misses, this completion time depends on the external memory latency. Long, multi-cycle instructions can also take time to complete. This mode enables better speculative instruction execution, and therefore better average performance.
When the low-latency interrupt mode is enabled, the following are flushed:
This behavior has the following effect on the data side:
The core LSU supports up to four outstanding accesses so that, for example, a load with a long memory latency does not block a subsequent load or store access requested by the integer core. This is the normal behavior when low-latency interrupt mode is disabled. When low-latency interrupt mode is enabled, Strongly-Ordered and Device read accesses, in addition to all store accesses, reduce performance because they must wait for outstanding cacheable loads to return their data, as the following table shows.
Table 8-4 Performance and determinism effects in low-latency interrupt mode
| Low-latency interrupt mode | Instructions per cycle performance | Level of determinism |
|----------------------------|------------------------------------|----------------------|
| Enabled                    | High minus 3-4%                    | High                 |