[Development] iMX6 EGLFS 2D (QtWidgets) painting acceleration

Thiago Macieira thiago.macieira at intel.com
Sun Sep 2 20:01:31 CEST 2018


On Sunday, 2 September 2018 06:05:57 PDT Uwe Rathmann wrote:
> Thiago argues that a QPainter-based render engine results in performance
> bottlenecks. But if it is possible to get significantly better results
> with QNanoPainter, then there is no conceptual problem, and Qt development
> could improve the situation by working on the code of the OpenGL paint
> engine.

There is a conceptual problem. You're never going to match the performance of 
something designed with retained mode and OpenGL in mind if you redraw every 
frame from scratch. In an imperative-mode paint engine, every time 
paintEvent() happens, the world restarts: every frame can be completely 
different from the previous one. So the code that maps this back to OpenGL has 
to do a lot of state management behind the scenes to avoid recreating 
everything from scratch.
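
To make that concrete, here is a minimal illustrative sketch (not code from 
this thread) of what the paint engine gets from QtWidgets:

// With QPainter, each paintEvent() re-issues every drawing command
// from scratch; the paint engine has no way of knowing that most of
// the frame is identical to the previous one.
#include <QPainter>
#include <QPaintEvent>
#include <QWidget>

class Gauge : public QWidget
{
protected:
    void paintEvent(QPaintEvent *) override
    {
        QPainter p(this);
        p.setRenderHint(QPainter::Antialiasing);
        // Even if only the needle angle changed since the last frame,
        // the dial is redrawn as well; nothing is retained.
        p.drawEllipse(rect().adjusted(4, 4, -4, -4));
        p.translate(width() / 2.0, height() / 2.0);
        p.rotate(m_angle);
        p.drawLine(0, 0, 0, -height() / 2 + 8);
    }

private:
    qreal m_angle = 0;
};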

You can get a lot done this way, but there is a limit: the complexity becomes 
unmanageable at some point. Either way, there's a performance ceiling.

Back in 2010, when Qt developers began talking to Clutter developers because 
of the MeeGo project, the Clutter developers told us how they had built COGL 
(Clutter On OpenGL). Clutter had (has?) a huge state-management and caching 
engine that tried to reorder your imperative painting commands so that the 
amount of OpenGL work was reduced, with fewer state transitions. It worked 
pretty well, and that's what the Qt 4.6 through 4.8 OpenGL paint engine tried 
to do too. I don't know QNanoPainter, but there's a good chance it's doing 
the same.
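
The idea behind that kind of engine, in a heavily simplified sketch 
(illustrative code and names, not Clutter's or Qt's actual implementation), 
is to buffer the incoming draw commands and sort them by GPU state, so that 
draws sharing the same state are issued back to back:

// Buffer commands, then sort by (shader, texture) so that state
// changes are only made when actually needed.
#include <algorithm>
#include <tuple>
#include <vector>

struct DrawCommand
{
    unsigned shader;      // GL program id
    unsigned texture;     // GL texture id
    int firstVertex;
    int vertexCount;
};

void flush(std::vector<DrawCommand> &commands)
{
    std::sort(commands.begin(), commands.end(),
              [](const DrawCommand &a, const DrawCommand &b) {
                  return std::tie(a.shader, a.texture)
                       < std::tie(b.shader, b.texture);
              });

    unsigned boundShader = 0, boundTexture = 0;
    for (const DrawCommand &c : commands) {
        if (c.shader != boundShader) {
            // glUseProgram(c.shader);          // state change
            boundShader = c.shader;
        }
        if (c.texture != boundTexture) {
            // glBindTexture(GL_TEXTURE_2D, c.texture);
            boundTexture = c.texture;
        }
        // glDrawArrays(GL_TRIANGLES, c.firstVertex, c.vertexCount);
    }
    commands.clear();
}

The catch is that an imperative painter promises back-to-front ordering, so 
overlapping, blended primitives cannot simply be reordered like this; 
tracking when reordering is safe is where the state-management complexity 
comes from.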

Clutter achieved pretty good results, but bear in mind that the developers 
who worked on it were at Intel (OpenedHand had recently been acquired) and 
they optimised only for Intel's desktop-class GPUs, for which the driver 
source was open, allowing them to understand what cost more. Nokia 
developers, on the other hand, were fighting against the PowerVR mobile-class 
GPU with Imagination's closed-source drivers, running on seriously 
underpowered ARMv6 CPUs. On one particular Nokia device, the 
eglSwapBuffers() call itself was taking 13 ms, or 78% of the 16.7 ms budget 
for each frame at 60 fps.

The approach of spending CPU time to buffer GPU commands, reorder them and 
cache state simply did not get us the performance we needed. By the end of 
2010 we had already realised this, and we had two developers (Gunnar and Kim) 
working on figuring out how to write code that uses the GPU the way it is 
meant to be used. That is what I call the end of the OpenGL paint engine 
experiment and the birth of the scene graph.
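
For contrast, here is a sketch of the retained-mode model the scene graph 
adopted. It uses today's Qt Quick QSGGeometryNode API, which obviously did 
not exist in 2010; the point is only to show the model: the node and its GPU 
resources persist across frames, and an update touches only what changed.

// Retained mode: the node and its geometry persist across frames.
// When a property changes, only that node is updated; the renderer
// can batch and cache everything else.
#include <QQuickItem>
#include <QSGFlatColorMaterial>
#include <QSGGeometry>
#include <QSGGeometryNode>

class Needle : public QQuickItem
{
public:
    Needle() { setFlag(ItemHasContents); }

protected:
    QSGNode *updatePaintNode(QSGNode *old, UpdatePaintNodeData *) override
    {
        auto *node = static_cast<QSGGeometryNode *>(old);
        if (!node) {                       // created once, then reused
            node = new QSGGeometryNode;
            auto *g = new QSGGeometry(
                QSGGeometry::defaultAttributes_Point2D(), 2);
            g->setDrawingMode(QSGGeometry::DrawLines);
            node->setGeometry(g);
            node->setFlag(QSGNode::OwnsGeometry);
            auto *m = new QSGFlatColorMaterial;
            m->setColor(Qt::red);
            node->setMaterial(m);
            node->setFlag(QSGNode::OwnsMaterial);
        }
        // Only the two endpoints are rewritten; the GPU buffers and
        // the material state are retained by the scene graph.
        QSGGeometry::Point2D *v = node->geometry()->vertexDataAsPoint2D();
        v[0].set(width() / 2, height() / 2);
        v[1].set(width() / 2, 8);
        node->markDirty(QSGNode::DirtyGeometry);
        return node;
    }
};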

-- 
Thiago Macieira - thiago.macieira (AT) intel.com
  Software Architect - Intel Open Source Technology Center
