Task Scheduling


The document Tasks describes tasks as the means to implement control processes. Below, I outline the corresponding scheduling.


To reiterate, the scheduling is strictly cooperative: from a scheduling point of view, each task handler runs to completion, without any interruption of the control flow. The only exceptions are run-time errors, which abort the erroneous task handler, as well as manual aborts and resets.


Tasks:

  • have a priority, and
  • are invoked using time-triggers.

Invoking a task means calling its handler procedure.

Tasks can dynamically change their priority as well as their timer period. Any such change applies from the next handler invocation onwards. Tasks can also change the handler to be invoked next. None of these flexibilities is visible to, or relevant for, the scheduler.

Each task has its own hardware-based set of timers, as described here. At any point, one of the timers is relevant to produce the ready status and signal. Consequently, the scheduler does not need to keep track of the timing itself; it can simply check each task’s readiness.

The timing schedule is global, that is, each task’s available timers use the same set of periods.

Task priority and timer interval are orthogonal.

The Loop

Here’s the current code of the scheduler in Tasks.mod; see the explanations below.


  VAR
    tasks: ARRAY MaxNumTasks OF Task;     (* all installed tasks *)
    Ct*: Task;                            (* current task *)
    rqh, rqt: ARRAY NumPrios OF Task;     (* run queues, head and tail *)

  PROCEDURE queue(t: Task);
  (* put 't' into the corresponding run queue *)
    VAR prio: INTEGER;
  BEGIN
    (* ... *)
  END queue;

  PROCEDURE next(VAR cq: INTEGER): Task;
  (* return the next task from the run queues, or NIL *)
    VAR ct: Task;
  BEGIN
    (* ... *)
    RETURN ct
  END next;

  PROCEDURE Loop*;
    CONST MaxNumTasks0 = MaxNumTasks - 1;
    VAR ready: BOOLEAN; i, cq: INTEGER; t: Task;
  BEGIN
    cq := 0;
    ProcTimers.GetReadyStatus(ready);
    WHILE ready DO
      FOR i := 0 TO MaxNumTasks0 DO
        IF tasks[i] # NIL THEN
          IF ProcTimers.Ready(i) THEN
            t := tasks[i];
            ProcTimers.SetTimer(i, t.timer);
            IF t.runstate = Inactive THEN
              t.runstate := Queued;
              queue(t)
            ELSE
              INCL(overflowBits, i)
            END
          END
        END
      END;
      cq := 0;
      ready := FALSE
    ELSIF ~ready DO
      Ct := next(cq);
      IF Ct # NIL THEN
        Ct.runstate := Active; Ct.run; Ct.runstate := Inactive
      END;
      ProcTimers.GetReadyStatus(ready)
    END
  END Loop;

END Tasks.

The Loop Explained

Data Structures

As the number of tasks is limited by the available timer controllers in the FPGA, all installed tasks are referred to from the ARRAY tasks. This array is not sorted in any way; the priorities come into play with the run queues.

For each possible priority, there’s one run queue, implemented as a linked list. Each such list uses a pointer to its head, rqh[prio], and one to its tail, rqt[prio]. I’ll explain the rationale for this design choice below.

Ct points to the currently running task.
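As a minimal sketch of this data structure, here is a C rendition of per-priority run queues with head and tail pointers. The struct fields and names are illustrative stand-ins, not the actual Tasks.mod declarations; `queue` and `next` mirror the Oberon procedures of the same names.

```c
#include <stddef.h>

#define NUM_PRIOS 4

/* Hypothetical task record; only the fields the queues need. */
typedef struct Task {
    int id;
    int prio;            /* 0 = highest priority */
    struct Task *next;   /* link within its run queue */
} Task;

static Task *rqh[NUM_PRIOS];   /* run queue heads */
static Task *rqt[NUM_PRIOS];   /* run queue tails */

/* Append 't' to the run queue for its priority: O(1) thanks to the
   tail pointer, independent of the queue's length. */
static void queue(Task *t) {
    t->next = NULL;
    if (rqh[t->prio] == NULL) {
        rqh[t->prio] = t;
    } else {
        rqt[t->prio]->next = t;
    }
    rqt[t->prio] = t;
}

/* Return the next task, scanning the queues from highest priority
   down; 'cq' remembers the current queue, like the VAR parameter of
   the Oberon procedure. Returns NULL when all queues are empty. */
static Task *next(int *cq) {
    Task *t = NULL;
    while (*cq < NUM_PRIOS && rqh[*cq] == NULL) { (*cq)++; }
    if (*cq < NUM_PRIOS) {
        t = rqh[*cq];
        rqh[*cq] = t->next;
        if (rqh[*cq] == NULL) { rqt[*cq] = NULL; }
    }
    return t;
}
```

Because `queue` always appends at the tail, tasks of the same priority leave their queue in FIFO order, which is exactly the round-robin activation of equal-priority tasks.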

Checking Readiness

The scheduler checks each task’s readiness using ProcTimers.Ready(i). In addition, there’s a “global” check, ProcTimers.GetReadyStatus(ready), which reports whether any task at all is ready to run.


  • After each invocation and run of a task handler from the run queues, ProcTimers.GetReadyStatus(ready) checks if any task has become ready.

  • If yes, the scheduler scans tasks, identifying all ready tasks using ProcTimers.Ready(i); these are put on the run queues by queue.

  • Newly ready tasks are appended at the tail of the run queue corresponding to the task’s priority. Hence, all tasks of the same priority are activated in a round-robin fashion.

  • If a newly ready task is still in a run queue, it is not added again. This erroneous situation is recorded in the bitfield overflowBits. The handling of error and disturbance conditions will be described in a forthcoming document.

  • After one scan of tasks, the scheduler starts to call the handlers of the tasks in the run queues, beginning with the task of highest priority, and works through all run queues until any task has become ready again based on its timer.

  • The watchdog is reset between the activations of two tasks.
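The scan step above can be sketched as follows. This is C for illustration only: `timerReady` and `timerSet` are hypothetical stand-ins for ProcTimers.Ready and ProcTimers.SetTimer, and a plain FIFO array stands in for the per-priority run queues.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>

#define MAX_NUM_TASKS 16

enum { Inactive, Queued, Active };

typedef struct { int runstate; int timer; } Task;

static Task *tasks[MAX_NUM_TASKS];   /* all installed tasks */
static uint32_t overflowBits;        /* bit i set: task i was ready while still queued */

/* Illustrative stand-ins for the hardware timer interface. */
static bool readyFlag[MAX_NUM_TASKS];
static bool timerReady(int i) { return readyFlag[i]; }
static void timerSet(int i, int period) { (void)period; readyFlag[i] = false; }

/* Simplified run queue: one FIFO instead of one queue per priority. */
static Task *runq[MAX_NUM_TASKS];
static int runqLen;

/* One scan pass over 'tasks': queue each ready, inactive task and
   restart its timer. A task that is ready again while still queued
   is not queued twice; the condition is recorded in overflowBits. */
static void scan(void) {
    for (int i = 0; i < MAX_NUM_TASKS; i++) {
        if (tasks[i] != NULL && timerReady(i)) {
            Task *t = tasks[i];
            timerSet(i, t->timer);
            if (t->runstate == Inactive) {
                t->runstate = Queued;
                runq[runqLen++] = t;
            } else {
                overflowBits |= 1u << i;
            }
        }
    }
}
```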

Timing Measurements

A first implementation used only one run queue, and queue inserted the newly ready tasks into the queue based on their priorities. Measuring the number of clock cycles for different parts of the scheduler showed that the worst case number of clock cycles required by queue increased with the number of tasks: six tasks, 183 cycles; 16 tasks, 453 cycles (worst case values).

This is an unfortunate property for a real-time scheduler. Splitting the single run queue into one run queue per priority remedies the situation, as inserting a ready task is now straightforward thanks to the rqt pointer to the tail of the queue: independent of the number of tasks, queue requires 66 clock cycles.1

Scheduling Overhead

In my testing scenario, scanning and possibly scheduling 16 tasks takes 3,250 clock cycles in the worst case, which corresponds to 68 microseconds (us) with a 48 MHz clock. If the shortest task timer period is 5 ms, all 16 task handlers must execute within that time frame to avoid overflowing the scheduler. 68 us is 1.4 percent of 5 ms, i.e. the scheduler itself “steals” 1.4 % of the possible run-time from the tasks’ handlers. With a longer shortest task timer period, say 10 or 20 ms, the scheduler overhead is (obviously) even lower.
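The overhead figures above follow directly from the cycle counts; a small helper makes the arithmetic explicit (the constants 3,250 cycles, 48 MHz, and 5 ms are the ones from the text):

```c
/* Convert a worst-case cycle count into microseconds of scheduler
   overhead, and into a percentage of the shortest timer period. */
static double overhead_us(double cycles, double clock_hz) {
    return cycles / clock_hz * 1.0e6;   /* cycles -> microseconds */
}

static double overhead_percent(double cycles, double clock_hz, double period_ms) {
    return overhead_us(cycles, clock_hz) / (period_ms * 1000.0) * 100.0;
}
```

With cycles = 3250 and clock_hz = 48e6, overhead_us gives about 67.7 us (rounded to 68 in the text), and overhead_percent with period_ms = 5 gives about 1.35 %, stated above as 1.4 %.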

  1. I have experimented with one run queue, plus a pointer to the last task in the queue for each possible task priority, but the advantage of having only one queue does not seem to warrant the additional code complexity. ↩︎