Operating Systems
Introduction
In this section we will discuss the use of an operating system in graphical user interface applications.
Embedded devices are becoming more and more advanced. Many systems handle not only the graphical user interface, but also complex control algorithms and other tasks.
These tasks can for example be motor control, data acquisition, or security-related processing. Many modern devices contain communication protocol stacks like TCP/IP for communication with data centers, or radio stacks like Bluetooth for communication with other local devices.
Interleaving other tasks with the user interface
In a simple device with a graphical user interface and only a few simple support tasks, like an egg timer, it is possible to structure the whole application around the user interface code. The application does very little besides the regular user interface updates, so the execution of the other tasks can with fair success be embedded into the user interface code.
As soon as the device contains more advanced functionality that "runs in the background" with separate timing requirements, like regulating a motor, it quickly becomes difficult to integrate the two tasks into one while meeting the requirements of both.
As we discussed in the previous articles, the graphics engine must keep drawing new frames to support a fluent user interface. If the graphics engine pauses rendering while running other tasks, the frame rate will decrease. Likewise, if the other tasks only run between the frames, in the idle time, then these tasks will suffer when the user interface is rendering complex scenes where there is less idle time. These effects make it difficult to manually interleave the UI task with other complex tasks.
An example
Assume for the rest of this section that we are building a Bluetooth speaker with a display. We have three major tasks: run the graphical user interface, feed music to the speaker, and handle the Bluetooth stack for communication with other devices.
It is not difficult to see that an application architecture centered on the user interface is not a good fit: Imagine, for example, that we blend the music code with the user interface and put the code for starting playback in the event handler for a button in the user interface. Now the user interface is blocked for the time it takes to start the music, and any running animation will stop in the meantime.
In general, the responsiveness of the user interface becomes dependent on the execution time of the music operations (start, stop, next, etc.). This is a general problem that we will come back to.
And what happens if we also want to be able to start music from Bluetooth? Should the user interface somehow be involved in that?
And how do we give priority to the music tasks, so that the music plays without pauses? At the same time we also want the user interface to run with the highest performance when there are no music tasks to run.
All this can be solved by using an operating system that provides tasks, inter-task communication, and synchronization.
RTOS
A real-time operating system is a small piece of software that supports applications with various services and distributes computing resources to the tasks in the application.
Using an RTOS allows you to structure your application as a number of independent, but cooperating tasks. These tasks are then executed concurrently by the RTOS when they have work to do and according to their priority.
We can even split a job into a high priority and a low priority task. Assume that we have to read Bluetooth data from a buffer very quickly when it arrives and put it into a larger application buffer. The handling of the data can be postponed a little. This way we end up with two Bluetooth tasks.
For our example we will start 4 tasks from main:
int main()
{
    ...
    os_start_task(gui_task, medium_priority);    // graphical user interface
    os_start_task(music_task, low_priority);     // music playback control
    os_start_task(bt_comm_task, high_priority);  // fast Bluetooth data reception
    os_start_task(bt_appl_task, low_priority);   // Bluetooth data handling
    os_start_scheduler();                        // hand over control to the scheduler
}
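For reference, here is a minimal sketch of how the same structure could look with FreeRTOS. The task names, stack sizes, and priority numbers are placeholders, not values prescribed by TouchGFX:
#include "FreeRTOS.h"
#include "task.h"

void gui_task(void* params);     // runs the TouchGFX main loop
void music_task(void* params);   // controls music playback
void bt_comm_task(void* params); // receives Bluetooth data
void bt_appl_task(void* params); // handles Bluetooth data

int main()
{
    // ... hardware and driver initialization ...
    // In FreeRTOS a higher number means a higher priority.
    xTaskCreate(gui_task,     "gui",   1024, NULL, 2, NULL); // medium priority
    xTaskCreate(music_task,   "music",  512, NULL, 1, NULL); // low priority
    xTaskCreate(bt_comm_task, "btcom",  512, NULL, 3, NULL); // high priority
    xTaskCreate(bt_appl_task, "btapp",  512, NULL, 1, NULL); // low priority
    vTaskStartScheduler(); // never returns
}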
A similar split can be done with the music task: A high priority task to feed data to the speaker, and a low priority task to control what song is playing and send notifications to the user interface.
The result of using different priorities as above is that the bt_comm_task runs when there is data to handle, and the user interface task runs otherwise. When the user interface task is waiting for the display, the two low priority tasks can run. The operating system scheduler handles this time distribution for us.
In a typical TouchGFX application the user interface is waiting for the display in every frame, and it is also regularly waiting for the graphics accelerator, ChromArt, to finish drawing elements. This means that there will be many small pauses where the lower priority tasks can run. The operating system scheduler will automatically switch the MCU to run these tasks when the higher priority tasks are waiting.
Task communication
When we use multiple tasks we also need a safe way of communicating between the tasks. One simple case is from the user interface to the music task. Here we need, among other things, the music task to wait until the gui_task asks it to start playing a song. A simple way to implement that is to use a message queue. The music task sleeps until there is a message in the queue. The scheduler wakes the task when a message arrives and the higher priority tasks are not busy.
...
music_task_input_queue = os_create_queue(10); //10 element queue
...
In the user interface, when "Play" is pressed, we send a message to the music task's queue:
void ScreenMusic::handlePlayPressed()
{
    os_send_message(music_task_input_queue, play_message);
}
The music task can wait for a message by reading the queue. This will block the task until a message arrives:
...
Message message;
os_receive_message(music_task_input_queue, &message);
After putting the message into the queue of the music task, the user interface continues to run and renders the frame as fast as possible. We are not wasting time on handling the play message immediately. But when the rendering is done and the UI task is waiting before rendering the next frame, the scheduler will switch execution to the music task, which will handle the incoming messages.
Similarly, we can also give the user interface an input queue. The music task can then send a notification message, e.g. when the song has ended. The user interface task should not wait for a message, but quickly check if a message is available without blocking, and read it if one is.
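A minimal sketch of how this queue setup could look with FreeRTOS; the message type, queue lengths, and helper functions are examples, not part of TouchGFX or FreeRTOS:
#include "FreeRTOS.h"
#include "queue.h"

typedef enum { play_message, stop_message, song_ended_message } Message;

static QueueHandle_t music_task_input_queue;
static QueueHandle_t gui_task_input_queue;

void create_queues(void)
{
    music_task_input_queue = xQueueCreate(10, sizeof(Message)); // 10 element queue
    gui_task_input_queue   = xQueueCreate(10, sizeof(Message));
}

// User interface side: send the play message without blocking the rendering.
void send_play_message(void)
{
    Message msg = play_message;
    xQueueSend(music_task_input_queue, &msg, 0);
}

// Music task: block until a message arrives.
void music_task(void* params)
{
    (void)params; // unused
    Message msg;
    for (;;)
    {
        xQueueReceive(music_task_input_queue, &msg, portMAX_DELAY);
        // handle the message: start or stop playback, ...
    }
}

// User interface side: poll for notifications without blocking.
void check_music_notifications(void)
{
    Message msg;
    if (xQueueReceive(gui_task_input_queue, &msg, 0) == pdTRUE)
    {
        // update the user interface, e.g. when the song has ended
    }
}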
This setup gives a very loose coupling between the tasks in the system. We can actually test the music task without using the user interface, and we can also easily start music from the Bluetooth task.
Handling interrupts
Some tasks need to run in response to an interrupt. In our example, the Bluetooth communication task is such a task. We want that task to run when the Bluetooth chip has a new packet for us. Assuming that we can get an interrupt in that case, we can send a message from the interrupt handler:
void BT_DataAvailable_Handler(void)
{
    os_send_message(bt_data_queue, data_available_message);
}
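With FreeRTOS, an interrupt handler must use the FromISR variant of the queue call. A minimal sketch, assuming bt_data_queue is a FreeRTOS queue of single-byte messages:
#include <stdint.h>
#include "FreeRTOS.h"
#include "queue.h"

extern QueueHandle_t bt_data_queue; // assumed created elsewhere with xQueueCreate

void BT_DataAvailable_Handler(void)
{
    uint8_t data_available_message = 1;
    BaseType_t woken = pdFALSE;

    // The FromISR variant is safe to call from interrupt context.
    xQueueSendFromISR(bt_data_queue, &data_available_message, &woken);

    // Request a context switch if a higher priority task was unblocked.
    portYIELD_FROM_ISR(woken);
}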
Other synchronization primitives than queues are also available. Semaphores and mutexes for example are found in many operating systems.
FreeRTOS
TouchGFX is tested with the FreeRTOS operating system during development. TouchGFX has very few requirements and can run on many other operating systems, but FreeRTOS is a good starting point unless you have specific requirements.
FreeRTOS is a simple operating system that is free to use in commercial applications. It is supplied in source code with the STM32Cube firmware, with ready-to-use examples for all STM32 microcontrollers.
See freertos.org for further information and license terms for FreeRTOS.
TouchGFX OS Wrappers
TouchGFX in its default configuration runs on FreeRTOS and uses a single message queue to synchronize with the display controller and a semaphore to guard the access to the framebuffer.
This is handled by the OSWrappers class defined in touchgfx/os/OSWrappers.cpp. This class has the following methods:
Method | Description |
---|---|
signalVSync() | This method should be called from the display driver when the display is ready for the next frame. |
waitForVSync() | Called by the graphics engine to wait for the next frame. Should not return until signalVSync() is called. |
takeFrameBufferSemaphore() | Called by the graphics engine and the accelerator to gain direct access to the framebuffer. |
giveFrameBufferSemaphore() | Called to release the direct access again. |
The default implementation uses a message queue to implement the VSync (frame) synchronization. The graphics engine task sleeps until the next VSync arrives.
This OSWrappers class is generated by the TouchGFX Generator. Read more about the Generator here.
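As an illustration, the four hooks could be implemented with FreeRTOS roughly as sketched below, using a one-element queue for the VSync signal and a binary semaphore for the framebuffer. This is a simplified sketch only; the code generated by the TouchGFX Generator may differ in details such as include paths and initialization:
#include <stdint.h>
#include <touchgfx/hal/OSWrappers.hpp> // header location may vary with TouchGFX version
#include "FreeRTOS.h"
#include "queue.h"
#include "semphr.h"

static QueueHandle_t vsync_queue;
static SemaphoreHandle_t frame_buffer_sem;

void OSWrappers::initialize()
{
    vsync_queue = xQueueCreate(1, sizeof(uint8_t));
    frame_buffer_sem = xSemaphoreCreateBinary();
    xSemaphoreGive(frame_buffer_sem); // framebuffer initially available
}

void OSWrappers::signalVSync()
{
    // Typically called from the display driver interrupt.
    uint8_t dummy = 1;
    xQueueSendFromISR(vsync_queue, &dummy, NULL);
}

void OSWrappers::waitForVSync()
{
    // Blocks the graphics engine task until the next VSync arrives.
    uint8_t dummy;
    xQueueReceive(vsync_queue, &dummy, portMAX_DELAY);
}

void OSWrappers::takeFrameBufferSemaphore()
{
    xSemaphoreTake(frame_buffer_sem, portMAX_DELAY);
}

void OSWrappers::giveFrameBufferSemaphore()
{
    xSemaphoreGive(frame_buffer_sem);
}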
No RTOS
TouchGFX can also run without an operating system. In this case you must start the graphics engine main loop directly in your main:
int main()
{
    ...
    touchgfx::HAL::getInstance()->taskEntry();
    // never returns
}
Not using an RTOS does not lower the performance of TouchGFX. It may increase the MCU load, and it makes it more difficult to run other tasks together with TouchGFX.
As described above, you now need to drive any other tasks manually while the user interface is running in your main.
Model::tick
One way is to run the other tasks from the Model class once in every frame:
Model.cpp
void Model::tick()
{
    // run other tasks here
    music_task_tick();
    bluetooth_task_tick();
}
Using this method, all tasks are executed once in every frame. The time consumed by the tasks is added to the rendering time of the user interface. This is a simple and acceptable solution for simple systems where all tasks can complete quickly.
OSWrappers
Another method is to use the hooks in the OSWrappers class. As explained above, the graphics engine calls methods on this class when it needs to wait for events. You can use this to do other work while waiting for those events:
OSWrappers.cpp
static volatile uint8_t vsync_sem = 0;

void OSWrappers::signalVSync()
{
    vsync_sem = 1;
}

void OSWrappers::waitForVSync()
{
    do {
        // Perform other work while waiting for the next frame
        music_task_tick();
        bluetooth_task_tick();
    } while (!vsync_sem);
    vsync_sem = 0;
}
Using this method, the idle time between the frames can be fully used by the other tasks, but the amount of time the tasks get will vary.
It is important that the tasks can divide their work into small steps of maybe 1 millisecond. Otherwise they will hurt the user interface performance.
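As a sketch of such a step-wise task, assuming hypothetical helper functions in the music driver:
#include <stdbool.h>

// Hypothetical helpers, assumed to exist in the music driver.
bool music_buffer_available(void);
void music_process_one_buffer(void);

void music_task_tick(void)
{
    // Process at most a few buffers per call instead of draining everything,
    // so that each call returns quickly and the rendering is not delayed.
    const int max_buffers_per_tick = 2;
    for (int i = 0; i < max_buffers_per_tick; i++)
    {
        if (!music_buffer_available())
        {
            break;
        }
        music_process_one_buffer();
    }
}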