A (control) process defines and implements an independent thread of control, or thread of execution, as part of a control program. Such a program can comprise one or more processes, each focusing on its defined goals and duties. Processes should be as self-contained as possible, with interfaces to each other that are as narrow as feasible (high cohesion, low coupling).

Processes are the dynamic building blocks of a control program, much like Oberon modules are its static building blocks with respect to program structure. Oberon modules provide an ideal conceptual and implementation substrate to realise processes.

A process gets invoked by a scheduling mechanism, and it can, but is not required to, hold state between invocations.

Cooperative Scheduling

Oberon RTS uses cooperative scheduling, where processes are given full control of their processor, or processor core, until they “voluntarily” yield this control back to the scheduler. There are substantial benefits with the cooperative approach, including being very simple and transparent, avoiding complex schemes to arbitrate access to shared data and devices, and allowing for simple context switches with little overhead.

With cooperative scheduling, a running process can be sure never to be interrupted (but see the following sections about interrupts and error handling), which is relevant regarding shared data and devices. If a process concludes its work on shared resources before it yields control back to the scheduler, no lock-out mechanisms such as semaphores are required.

Interrupts

We can distinguish between two needs, or uses, of interrupts. First, interrupts can react to genuine external events in the controlled system (environment), such as detection of a required measurement, or react to exceptional situations in the control system, such as failure states. Second, interrupts can be used to implement purely internal regular and recurring mechanisms, for example to empty a buffer towards a peripheral device.

Oberon RTS avoids the second type by implementing such mechanisms in the FPGA, reserving the use of the RISC5 interrupt for the first type.

Interrupts and interrupt handlers can muddy the simple concept of cooperative scheduling somewhat, and depending on how they interact with the processes, this must be addressed accordingly, for example using lock-out mechanisms. However, such lock-out mechanisms could delay the timely handling of an interrupt, which runs against the basic idea, and even the usefulness, of an interrupt.

Run-time Error Handling

A process could at any point be reset due to the run-time error handling. This may or may not be an issue, depending on the use case. We’ll discuss this topic when revisiting error handling and recovery.

Shared Resources

As outlined above, sharing resources such as data or peripheral devices is usually straightforward with cooperative scheduling. If, however, a process has not concluded all its work on a shared resource when yielding control back to the scheduler, a reserve and lock-out mechanism must be employed, for example a semaphore.

An example would be a shared peripheral device, where one process wants to transmit more data than the device’s buffer can hold. It then has to yield control to wait for the buffer to be empty again (busy waiting is not permitted), but cannot allow another process to access the device in the meantime.

Note that semaphores and similar mechanisms require careful consideration regarding deadlocks, which are absent with pure cooperative scheduling.

Process Implementation

In a cooperatively scheduled system, processes can be implemented using

  1. Coroutines
  2. Tasks

Coroutines

One coroutine represents the complete thread of execution of a process, and control between coroutines is transferred explicitly. A coroutine is created by passing it one procedure, which contains all the process' code. While processes could, in principle, transfer execution control explicitly among themselves, this would not result in modular control programs with high cohesion and low coupling. Therefore, a scheduler transfers control to processes, which transfer control back to the scheduler when yielding.

Each coroutine has its own stack, which has pros and cons. On the pro side, the process' state can be held in its stack, which is preserved between process invocations by the scheduler. On the con side, it might be difficult to calculate the needed size of a process' stack, which must accommodate the deepest procedure call chain. This can result in stack space that is rarely used, and thus “wasted” memory, which can be an issue in memory-constrained systems.

Tasks

Tasks employ one or more procedures (handlers), which get invoked by the scheduler. Each such procedure always runs to completion, and yielding control back to the scheduler is implicit.

The procedures of all tasks run using the same stack. If the stack grows “downwards” in memory, the scheduler sits atop, and calls all task procedures. Hence, each task handler can make use of the full size of the stack (apart from any local variables of the scheduler itself). Compared to coroutines, no memory is “wasted” for stack reserves of single processes.

However, the task needs to explicitly retain any process state between handler calls, which means more legwork by the programmer. On the plus side, the process state is open and transparent, which can be useful for error recovery involving restoring the state and continuing with the corresponding handler.