In this section we will discuss the use of an operating system in graphical user interface applications.
Embedded devices are becoming more and more advanced. Many systems handle not only the graphical user interface, but also complex control algorithms and tasks. These tasks can, for example, be motor control, data acquisition, or security-related work. Many modern devices contain communication protocol stacks such as TCP/IP for communication with data centers, or radio stacks such as Bluetooth for communication with other local devices.
In a simple device with a graphical user interface and only a few simple support tasks, such as an egg timer, it is possible to structure the whole application around the user interface code. The application does very little besides the regular user interface updates, so the execution of the other tasks can, with fair success, be embedded into the user interface code.
As soon as the device contains more advanced functionality that "runs in the background" with separate timing requirements, such as regulating a motor, it quickly becomes difficult to integrate the two tasks into one while meeting both sets of requirements.
As we discussed in the previous articles, the graphics engine must keep drawing new frames to support a fluid user interface. If the graphics engine pauses rendering to run other tasks, the frame rate will decrease. Likewise, if the other tasks only run between the frames, in the idle time, then these tasks will suffer when the user interface is rendering complex scenes and there is less idle time. These effects make it difficult to manually interleave the UI task with other complex tasks.
Assume for the rest of this section that we are building a Bluetooth speaker with a display. We have three major tasks: run the graphical user interface, feed music to the speaker, and handle the Bluetooth stack for communication with other devices.
It is not difficult to see that an application architecture centered on the user interface is a poor fit: imagine, for example, that we blend the music code with the user interface and put the code for starting playback in the event handler for a button. Now the user interface is locked for the time it takes to start the music, and any running animation stops in the meantime.
In general, the responsiveness of the user interface becomes dependent on the execution time of the music tasks (start, stop, next, etc.). This is a general problem that we will come back to.
And what happens if we also want to be able to start music from Bluetooth? Should the user interface somehow be involved in that?
And how do we give priority to the music tasks, so that the music plays without pauses? At the same time we also want the user interface to run with the highest performance when there are no music tasks to run.
All this can be solved by using an operating system with tasks, communication means, and synchronization.
A real-time operating system is a small piece of software that supports applications with various services and distributes computing resources to the tasks in the application.
Using an RTOS allows you to structure your application as a number of independent, but cooperating, tasks. These tasks are then executed concurrently by the RTOS when they have work to do and according to their priority.
We can even split a job into a high priority and a low priority task. Assume that we have to read Bluetooth data from a buffer very fast when it arrives and put it into a larger application buffer. The handling of the data can be postponed a little. This way we end up with two Bluetooth tasks.
A similar split can be done with the music task: a high priority task to feed data to the speaker, and a low priority task to control which song is playing and send notifications to the user interface.

For our example we will start four tasks from main:
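A sketch of creating the four tasks from main, assuming FreeRTOS; the task names, stack sizes, and priorities here are illustrative, not prescribed:

```cpp
#include "FreeRTOS.h"
#include "task.h"

// Hypothetical task functions, assumed defined elsewhere.
void bt_comm_task(void* params); // High priority: fetch data from the Bluetooth chip
void gui_task(void* params);     // Normal priority: run the TouchGFX main loop
void bt_data_task(void* params); // Low priority: process the buffered Bluetooth data
void music_task(void* params);   // Low priority: control playback, notify the UI

int main()
{
    // Stack depths (in words) and priorities are illustrative only.
    xTaskCreate(bt_comm_task, "btComm", 1024, NULL, tskIDLE_PRIORITY + 3, NULL);
    xTaskCreate(gui_task,     "gui",    4096, NULL, tskIDLE_PRIORITY + 2, NULL);
    xTaskCreate(bt_data_task, "btData", 1024, NULL, tskIDLE_PRIORITY + 1, NULL);
    xTaskCreate(music_task,   "music",  1024, NULL, tskIDLE_PRIORITY + 1, NULL);

    vTaskStartScheduler(); // Hand control to the RTOS scheduler; does not return
}
```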
With the priorities assigned as above, the result is that the bt_comm_task runs when there is data to handle, and the user interface task runs otherwise. When the user interface task is waiting for the display, the two low priority tasks can run. The operating system scheduler handles this time distribution for us.
In a typical TouchGFX application the user interface task waits for the display in every frame, and it also regularly waits for the graphics accelerator, Chrom-ART, to finish drawing elements. This means that there are many small pauses where the lower priority tasks can run. The operating system scheduler will automatically switch the MCU to run these tasks when the higher priority tasks are waiting.
When we use multiple tasks we also need a safe way of communicating between them. One simple case is from the user interface to the music task. Here we need, among other things, the music task to wait until the gui_task asks it to start playing a song. A simple way to implement that is to use a message queue. The music task sleeps until there is a message in the queue. The scheduler wakes the task when a message arrives and the higher priority tasks are not busy.
In the user interface, when "Play" is pressed, we send a message to the music task's queue:
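A sketch, assuming FreeRTOS; the message type, queue handle, and view class name are hypothetical:

```cpp
#include "FreeRTOS.h"
#include "queue.h"

// Hypothetical command type and queue handle, created elsewhere with xQueueCreate.
enum MusicCommand { PLAY, STOP, NEXT };
extern QueueHandle_t musicQueue;

// Hypothetical event handler for the "Play" button in the user interface.
void Screen1View::playPressed()
{
    MusicCommand cmd = PLAY;
    // Timeout 0: do not block the UI task even if the queue is full.
    xQueueSend(musicQueue, &cmd, 0);
}
```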
The music task can wait for a message by reading the queue. This will block the task until a message arrives:
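A sketch of the receiving side, again assuming FreeRTOS and the hypothetical names from above:

```cpp
#include "FreeRTOS.h"
#include "queue.h"

enum MusicCommand { PLAY, STOP, NEXT }; // Hypothetical, as in the send example
extern QueueHandle_t musicQueue;        // Hypothetical, created with xQueueCreate

void music_task(void* params)
{
    for (;;)
    {
        MusicCommand cmd;
        // portMAX_DELAY: block (sleep) until a message arrives.
        if (xQueueReceive(musicQueue, &cmd, portMAX_DELAY) == pdPASS)
        {
            switch (cmd)
            {
            case PLAY: /* start playback      */ break;
            case STOP: /* stop playback       */ break;
            case NEXT: /* skip to next song   */ break;
            }
        }
    }
}
```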
After putting the message into the music task's queue, the user interface continues to run and renders the frame as fast as possible. We are not wasting time on handling the play message immediately. But when the rendering is done and the UI task is waiting before rendering the next frame, the scheduler will switch execution to the music task, which will handle the incoming messages.
Similarly, we can give the user interface an input queue. The music task can then send a notification message, e.g. when the song has ended. The user interface task should not wait for a message, but quickly check if a message is available without blocking, and read it if so.
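In a TouchGFX application this check could run once per frame from the Model's tick method; a sketch assuming FreeRTOS and a hypothetical guiQueue and event type:

```cpp
#include "FreeRTOS.h"
#include "queue.h"

enum GuiEvent { SONG_ENDED };  // Hypothetical notification type
extern QueueHandle_t guiQueue; // Hypothetical queue, created elsewhere

void Model::tick()
{
    GuiEvent event;
    // Timeout 0: return immediately if no message is available.
    while (xQueueReceive(guiQueue, &event, 0) == pdPASS)
    {
        if (event == SONG_ENDED)
        {
            modelListener->songEnded(); // Hypothetical notification to the presenter
        }
    }
}
```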
This setup gives a very loose coupling between the tasks in the system. We can actually test the music task without using the user interface, and we can also easily start music from the Bluetooth task.
Some tasks need to run in response to an interrupt. In our example, the Bluetooth communication task is one of those. We want that task to run when the Bluetooth chip has a new packet for us. Assuming that we can get an interrupt in that case, we can send a message from the interrupt handler:
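A sketch assuming FreeRTOS; the interrupt handler name and queue are hypothetical (on an STM32 the handler would be whichever external interrupt line the Bluetooth chip is wired to):

```cpp
#include "FreeRTOS.h"
#include "queue.h"

extern QueueHandle_t btCommQueue; // Hypothetical queue read by bt_comm_task

// Hypothetical interrupt handler for the Bluetooth data-ready line.
extern "C" void EXTI4_IRQHandler(void)
{
    // Clear the interrupt flag here (hardware specific, omitted).
    uint8_t event = 1;
    BaseType_t woken = pdFALSE;
    xQueueSendFromISR(btCommQueue, &event, &woken);
    // Request an immediate switch to bt_comm_task if it was woken.
    portYIELD_FROM_ISR(woken);
}
```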
Other synchronization primitives than queues are also available. Semaphores and mutexes, for example, are found in many operating systems.
TouchGFX is tested with the FreeRTOS operating system during development. TouchGFX has very few requirements and can run on many other operating systems, but FreeRTOS is a good starting point unless you have specific requirements.
FreeRTOS is a simple operating system that is free to use in commercial applications. It is supplied in source code with the STM32Cube firmware, with ready-to-use examples for all STM32 microcontrollers.
See freertos.org for further information and license terms for FreeRTOS.
TouchGFX in its default configuration runs on FreeRTOS and uses a single message queue to synchronize with the display controller and a semaphore to guard access to the framebuffer.
This is handled by the OSWrappers class defined in touchgfx/os/OSWrappers.cpp. This class has the following methods:
| Method | Description |
| --- | --- |
| signalVSync() | Should be called from the display driver when the display is ready for the next frame. |
| waitForVSync() | Called by the graphics engine to wait for the next frame. Should not return until signalVSync is called. |
| isVSyncAvailable() | (Optional) Returns true if a VSync has occurred. Can be used to avoid blocking in waitForVSync. |
| signalRenderingDone() | (Optional) Removes any outstanding VSync signals. |
| takeFrameBufferSemaphore() | Called by the graphics engine and the accelerator to gain direct access to the framebuffer. |
| giveFrameBufferSemaphore() | Called to release the direct access again. |
The default implementation uses a message queue to implement the VSync (frame) synchronization. The graphics engine task sleeps until the next VSync arrives.
This OSWrapper class is generated by the TouchGFX Generator. Read more about the Generator here.
TouchGFX can also run without an operating system. In this case you must start the graphics engine's main loop directly in your main:
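A minimal sketch of such a main; the board and engine init functions are hypothetical stand-ins for your generated project's init code, while HAL::taskEntry() is the TouchGFX entry point:

```cpp
#include <touchgfx/hal/HAL.hpp>

// Hypothetical init functions, assumed provided by the generated project.
void hw_init();       // Initialize MCU, clocks, display, etc.
void touchgfx_init(); // Initialize the TouchGFX graphics engine

int main()
{
    hw_init();
    touchgfx_init();

    // Start the graphics engine's main loop. Never returns.
    touchgfx::HAL::getInstance()->taskEntry();
}
```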
Not using an RTOS does not lower the performance of TouchGFX, but it may increase the MCU load, and it makes it more difficult to run other tasks together with TouchGFX.
As described above, you now need to drive any other tasks manually while the user interface is running in your main.
One way is to perform a task check in the Model class once in every frame:
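A sketch of that approach; the step functions are hypothetical, and each must return quickly:

```cpp
#include <gui/model/Model.hpp> // Typical generated project path; may differ

// Hypothetical task step functions, assumed defined elsewhere.
void do_music_step();
void do_bluetooth_step();

void Model::tick()
{
    // Called by the graphics engine once every frame.
    do_music_step();
    do_bluetooth_step();
}
```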
Using this method, all tasks are executed once every frame. The time consumed by the tasks is added to the rendering time of the user interface. This is a simple and acceptable solution for systems where all tasks terminate quickly.
Another method is to use the hooks in the OSWrappers class. As explained above, the graphics engine calls methods on this class when it needs to wait for events. You can use this to do other work while waiting for those events:
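A sketch of a no-OS OSWrappers implementation where the wait loop runs other work; do_other_work() is hypothetical:

```cpp
#include <touchgfx/hal/OSWrappers.hpp>

using namespace touchgfx;

void do_other_work(); // Hypothetical: one small step of another task

static volatile bool vsync_occurred = false;

void OSWrappers::signalVSync()
{
    // Called from the display driver interrupt.
    vsync_occurred = true;
}

void OSWrappers::waitForVSync()
{
    vsync_occurred = false;
    while (!vsync_occurred)
    {
        do_other_work(); // Use the idle time between frames
    }
}
```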
Using this method, the idle time between frames can be fully used by the other tasks, but the amount of time the tasks get will vary.
Another solution is to use the OSWrappers::isVSyncAvailable and OSWrappers::signalRenderingDone functions. This allows the application to avoid having multiple nested while-loops. These functions are used by the TouchGFX Generator when a no-operating-system configuration is selected.
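One possible shape of such a single-loop main, as a heavily hedged sketch; the init and step functions are hypothetical, and the Generator produces the actual loop for this configuration:

```cpp
#include <touchgfx/hal/OSWrappers.hpp>

// Hypothetical functions, assumed provided elsewhere.
void hw_init();        // Board init
void touchgfx_init();  // Engine init
void render_frame();   // Run one pass of the graphics engine
void do_other_work();  // One small step of another task

int main()
{
    hw_init();
    touchgfx_init();

    for (;;)
    {
        if (touchgfx::OSWrappers::isVSyncAvailable())
        {
            render_frame();
            touchgfx::OSWrappers::signalRenderingDone();
        }
        do_other_work(); // Runs whenever no frame is due
    }
}
```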
It is important that the tasks can divide their work into small steps of maybe 1 millisecond. Otherwise the user interface performance will suffer.