[Development] Charts and DataVis Questions

Rutledge Shawn Shawn.Rutledge at theqtcompany.com
Mon Jan 25 11:20:43 CET 2016

On 23 Jan 2016, at 19:52, Sean Harmer <sh at theharmers.co.uk> wrote:

On 23/01/2016 12:45, Uwe Rathmann wrote:

The OpenGL acceleration in Charts module is really impressive ...
Unfortunately, part of the truth is that the performance of the software
renderer is not necessarily that far behind.

Now try it against OpenGL with 100k points rendering to a 4k screen. The difference between software and hardware will increase with those parameters (up to some fill rate or vertex rate that the hardware can handle).

You especially don’t want to do antialiasing with the software renderer (been there, done that: around 2008, enabling AA when rendering a QPainterPath made it several times slower than without), whereas vertex antialiasing with OpenGL is doable.  Qt Charts so far uses GL_LINES for the line graph, but AFAIK the only way to get antialiasing with that approach is to turn on multi-sampling, which performs well only on certain desktop graphics cards (line graphs are simple enough, but it wouldn’t be so great to turn on MSAA for the whole Qt Quick scene if your app is complex and you expect it to be portable).  I’ve been working on an antialiased line graph, outside of Qt Charts so far though.  It’s similar to qtdeclarative/examples/quick/scenegraph/graph, but does the mitering right in the vertex shader, and antialiases in the fragment shader by setting the transparency proportional to the distance from the virtual line.
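To make the vertex-AA idea concrete, here is a minimal CPU-side sketch in Python of the two pieces involved: duplicating each data point into a pair of vertices extruded along a (here crudely averaged, not properly mitered) normal so the line gets thickness, and a coverage function of the kind the fragment shader would evaluate, where alpha falls off with distance from the virtual line.  The names and the simple normal computation are illustrative, not the actual shader code.

```python
import math

def expand_polyline(points, half_width):
    """Duplicate each point into two vertices offset along the averaged
    normal, so a thick line can be drawn as a triangle strip.  Returns
    (x, y, signed_distance) tuples; the signed distance is what the
    fragment shader would turn into alpha for antialiasing."""
    verts = []
    n = len(points)
    for i, (x, y) in enumerate(points):
        # Direction averaged over the segments before and after this point.
        px, py = points[max(i - 1, 0)]
        qx, qy = points[min(i + 1, n - 1)]
        dx, dy = qx - px, qy - py
        length = math.hypot(dx, dy) or 1.0
        # Unit normal perpendicular to that direction.
        nx, ny = -dy / length, dx / length
        verts.append((x + nx * half_width, y + ny * half_width, +half_width))
        verts.append((x - nx * half_width, y - ny * half_width, -half_width))
    return verts

def coverage(signed_distance, half_width, feather=1.0):
    """Alpha proportional to distance from the virtual line centre,
    clamped to [0, 1], as a fragment shader would compute it."""
    d = abs(signed_distance)
    return max(0.0, min(1.0, (half_width - d) / feather))
```

A real vertex shader would instead compute the miter from the two adjacent segments per vertex; the point here is only the duplicated-vertex layout and the distance-based alpha.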

And of course with GPU rendering you can have full-frame-rate dynamism, whether the data is actually changing that fast or you are interacting - zooming, panning, moving a time cursor to see the corresponding data point, etc.  My laptop can render at 60 FPS while keeping the CPU at its lowest clock rate.  Or maybe a Raspberry Pi would have sufficient power to run an oscilloscope display, with the trace so smooth that it looks like an analog scope; I haven’t tried that, but it would make a nice demo.

Data modelling is another consideration.  The holy grail would be to send the actual data to the GPU unmodified and render it there.  Vertex AA requires generating duplicate vertices though, to be able to expand them away from the line to give it thickness.  So for speed (though not memory conservation) we want to keep that array of vertices around, adding new datapoints to one end and removing old ones from the other, as opposed to regenerating the vertices every time we render a frame.  The API needs to support that, and then you should try to minimize any additional copying: either store the data however you like but manage the vertices incrementally, or add each new sample straight to the vertex array and don’t bother keeping your own copy.  So I tried writing a data model which works that way: it stores the vertices on behalf of the rendering code, without exposing them directly in the API.

QLineSeries, by contrast, both stores the data and renders it, as you can see from the use of append() in the example.  Maybe it could be refactored so that you can instead implement a model by subclassing an abstract base class, similar to the way QListWidget is a QListView with an internal model, whereas in non-trivial applications you write your own QAIM and use QListView and/or QML ListView.  But a time series is just one kind of data, and only makes sense with certain types of visualization.  We could follow through and write abstract model classes for other kinds of data that can be visualized, but this kind of modelling amounts to making assumptions, and it takes a lot of care to keep them as widely applicable as possible.
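The incremental-vertex idea sketched above could look something like the following.  This is a hypothetical model (the class and method names are mine, not anything in Qt Charts): each appended sample immediately becomes a pair of expanded vertices in a rolling buffer, so the renderer never regenerates the whole array, and old samples fall off the other end automatically.

```python
from collections import deque

class RollingSeriesModel:
    """Hypothetical sketch of a model that stores GPU-ready vertices on
    behalf of the renderer: each appended sample becomes two expanded
    vertices (top/bottom of the thick line), kept incrementally rather
    than regenerated every frame."""

    def __init__(self, capacity, half_width=1.0):
        self.half_width = half_width
        # deque with maxlen drops the oldest vertex pair automatically
        # when new samples push the series past its capacity.
        self._vertices = deque(maxlen=2 * capacity)

    def append(self, t, value):
        # One sample -> two vertices, offset vertically for thickness.
        # (A real implementation would miter against the neighbours.)
        self._vertices.append((t, value + self.half_width))
        self._vertices.append((t, value - self.half_width))

    def vertex_data(self):
        """The array that would be uploaded to the GPU vertex buffer."""
        return list(self._vertices)
```

The design point is the ownership: the model holds the vertices but the API only ever sees samples, so the renderer gets its buffer without the caller doing any extra copying.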

Later I want to try using a geometry shader to expand the datapoints into vertices.  That will be less portable (it won’t work on OpenGL ES), but it might make zero-copy (on the CPU) visualization possible, as long as you are OK with modelling the data the way the rendering code expects: a time-value struct, just two floats or doubles per data point; or maybe two arrays, one for times and one for values.
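The two layouts mentioned can be illustrated with plain typed arrays (this is just the memory layout, not any actual Qt or OpenGL API): an interleaved t0, v0, t1, v1, ... buffer that a geometry shader could consume point by point, versus a planar pair of arrays.

```python
import array

# Interleaved layout: t0, v0, t1, v1, ...  Two 32-bit floats per data
# point; a geometry shader could expand each point directly from a
# buffer like this, with no CPU-side copy.
samples = array.array('f', [0.0, 1.5, 1.0, 2.5, 2.0, 2.0])

def point(buf, i):
    """Read the i-th (time, value) pair from the interleaved buffer."""
    return buf[2 * i], buf[2 * i + 1]

# Planar alternative: one array of times, one of values.
times  = array.array('f', [0.0, 1.0, 2.0])
values = array.array('f', [1.5, 2.5, 2.0])
```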

My mitering shader has trouble with excessively high-frequency data, so resampling is useful to get the sample rate down to one sample per horizontal pixel or less.  There is some recent research on how to do that while preserving the psychological impression of how the data looks, which I’ve been implementing (only on the CPU so far).
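The post doesn’t say which research is meant, but one widely used downsampling algorithm with exactly this goal - preserving the visual shape of the series rather than just decimating it - is Largest-Triangle-Three-Buckets (LTTB).  Whether or not that is the one referenced, a CPU-side sketch of it:

```python
def lttb(points, threshold):
    """Largest-Triangle-Three-Buckets downsampling: from each bucket,
    keep the point forming the largest triangle with the previously
    kept point and the next bucket's average.  First and last points
    are always preserved."""
    n = len(points)
    if threshold >= n or threshold < 3:
        return list(points)
    bucket = (n - 2) / (threshold - 2)
    out = [points[0]]
    a = 0  # index of the most recently selected point
    for i in range(threshold - 2):
        # Average of the next bucket: the triangle's "third corner".
        start = int((i + 1) * bucket) + 1
        end = min(int((i + 2) * bucket) + 1, n)
        avg_x = sum(p[0] for p in points[start:end]) / (end - start)
        avg_y = sum(p[1] for p in points[start:end]) / (end - start)
        # Pick the point in the current bucket with the largest triangle.
        lo = int(i * bucket) + 1
        hi = int((i + 1) * bucket) + 1
        ax, ay = points[a]
        best, best_area = lo, -1.0
        for j in range(lo, hi):
            area = abs((ax - avg_x) * (points[j][1] - ay)
                       - (ax - points[j][0]) * (avg_y - ay)) / 2.0
            if area > best_area:
                best, best_area = j, area
        out.append(points[best])
        a = best
    out.append(points[-1])
    return out
```

With the threshold set to the plot’s width in pixels, this gives roughly the one-sample-per-pixel rate mentioned above while keeping peaks and troughs that naive every-Nth-point decimation would drop.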


