[Development] Charts and DataVis Questions

Alexis Jeandet alexis.jeandet at member.fsf.org
Tue Jan 26 10:20:10 CET 2016


Le lundi 25 janvier 2016 à 10:20 +0000, Rutledge Shawn a écrit :
> > On 23 Jan 2016, at 19:52, Sean Harmer <sh at theharmers.co.uk> wrote:
> > 
> > On 23/01/2016 12:45, Uwe Rathmann wrote:
> > >  Hi,
> > > 
> > > > The OpenGL acceleration in Charts module is really impressive
> > > > ...
> > > Unfortunately, part of the truth is that the performance of the
> > > software renderer is not necessarily that far behind.
> > Now try it against OpenGL with 100k points rendering to a 4k
> > screen. The difference between software and hardware will increase
> > with those parameters (up to some fill rate or vertex rate that the
> > hardware can handle).
> You especially don’t want to do antialiasing with the software
> renderer (been there done that: it was about 2008 and using AA when
> rendering a QPainterPath made it several times slower than without),
> whereas vertex antialiasing with OpenGL is doable.  QtCharts so far
> uses GL_LINES for the line graph, but AFAIK the only way to get
> antialiasing with that approach is to turn on multi-sampling, which
> performs well only on certain desktop graphics cards (line graphs are
> simple enough, but it wouldn’t be so great to turn on MSAA in the
> whole QtQuick scene if your app is complex and you expect it to be
> portable).  I’ve been working on an antialiasing line graph, outside
> of Qt Charts so far though.  It’s similar to
> qtdeclarative/examples/quick/scenegraph/graph but does mitering right
> in the vertex shader, and antialiasing by setting the transparency
> proportional to distance away from the virtual line, in the fragment
> shader.
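
The fragment-stage falloff you describe amounts to something like the
following; sketched here in plain C++ rather than GLSL, and all the
names are mine, purely illustrative:

```cpp
#include <algorithm>
#include <cmath>

// Sketch of the fragment-shader idea above: each fragment knows its
// distance from the ideal (zero-width) line; alpha is fully opaque
// inside (halfWidth - 1) px and fades linearly to 0 at halfWidth,
// which antialiases the edge without multi-sampling.
float edgeAlpha(float distFromLine, float halfWidth)
{
    return std::clamp(halfWidth - std::fabs(distFromLine), 0.0f, 1.0f);
}
```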
> 
> And of course with GPU rendering you can have full-frame-rate
> dynamism, whether the data is actually changing that fast or you are
> interacting - zooming, panning, moving a time cursor to see the
> corresponding data point, etc.  My laptop can render 60FPS while
> keeping the CPU at its lowest clock rate.  Or maybe a raspberry pi
> would have sufficient power to run an oscilloscope display, with the
> trace so smooth that it looks like an analog scope; I haven’t tried
> that, but it would make a nice demo.
Yes, and not only as a demo: for embedded measurement there would be a
lot of users.

> Data modelling is another consideration.  I think the holy grail
> would be if we could send the actual data to the GPU unmodified, and
> render it there.  Vertex AA requires generating duplicate vertices
> though, to be able to expand them away from the line, to give it
> thickness.  So, for speed (but not memory conservation) we want to
> keep that array of vertices around, add new datapoints to one end and
> remove old ones from the other - as opposed to generating vertices
> each time we render one frame.  So it needs to have that kind of API,
> and you then should try to minimize any additional copying: store the
> data how you like but manage the vertices incrementally, or add each
> new sample to the vertex array and don’t bother keeping your own
> copy.  So I tried writing a data model which works that way: it
> stores the vertices on behalf of the rendering code, without exposing
> them directly in the API.  Whereas QLineSeries both stores data and
> renders it, as you can see in the example with the use of the
> append() function.  So maybe it could be refactored so that you can
> instead implement a model by subclassing an abstract base class,
> similar to the way that QListWidget is a QListView with an internal
> model, whereas in non-trivial applications you write your own QAIM
> and use QListView and/or QML ListView.  But a time series is just one
> kind of data, and only makes sense with certain types of
> visualization.  So we could follow through and write abstract model
> classes for other kinds of data that can be visualized, but this kind
> of modelling amounts to making assumptions, which requires a lot of
> care to keep it as widely applicable as possible.
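For what it's worth, the incremental model you describe could look
roughly like this (plain C++ sketch; all names are mine, not any
existing Qt Charts API):

```cpp
#include <deque>
#include <cstddef>

// Hypothetical sketch of the model idea above: the model owns the
// expanded vertex array (two vertices per sample, tagged +1/-1 so a
// vertex shader can push them apart to give the line thickness),
// appends new samples at one end and drops old ones from the other,
// instead of regenerating every vertex on each rendered frame.
struct Vertex { float t, v, side; };

class IncrementalSeries {
public:
    explicit IncrementalSeries(std::size_t capacity) : m_capacity(capacity) {}

    void append(float t, float v) {
        m_vertices.push_back({t, v, +1.0f});
        m_vertices.push_back({t, v, -1.0f});
        while (m_vertices.size() > 2 * m_capacity) {  // drop oldest sample
            m_vertices.pop_front();
            m_vertices.pop_front();
        }
    }

    // The renderer uploads this (or just the changed tail) to the GPU;
    // the vertices are never exposed in the public API.
    std::size_t vertexCount() const { return m_vertices.size(); }

private:
    std::size_t m_capacity;
    std::deque<Vertex> m_vertices;
};
```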
Indeed, the openglseries example takes 1.6 GB for 2 x 5,000,000
points, which makes around 160 B per data point. The performance is
also quite poor: QCustomPlot (modified to use QVectors) may be faster
without OpenGL.
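To put numbers on it (the struct name is mine; the size check assumes
a typical platform with 8-byte doubles and no padding):

```cpp
#include <cstddef>

// The openglseries figures: 1.6e9 bytes for 2 x 5,000,000 points,
// i.e. 160 bytes per point - an order of magnitude more than the
// bare time/value pair itself.
struct Sample { double t, v; };

constexpr double kBytes         = 1.6e9;
constexpr double kPoints        = 2.0 * 5.0e6;
constexpr double kBytesPerPoint = kBytes / kPoints;  // 160
```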

> Later I want to try using a geometry shader to expand the datapoints
> into vertices.  That will be less portable though (won’t work on
> OpenGL ES).  But maybe it would make zero-copy (on the CPU)
> visualization possible, as long as you are OK to model the data the
> way that the rendering code expects (a time-value struct, just two
> floats or doubles per data point; or maybe two arrays, one for times
> and one for values).
Doubles would be good when using epoch-like time series with a high
sampling rate.
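This is easy to verify: at Unix-epoch magnitudes a 32-bit float can no
longer resolve sub-second steps, so high-rate timestamps collapse onto
the same value (the function name is mine):

```cpp
// At t ~ 1.45e9 s (early 2016), a float's 24-bit mantissa gives steps
// of 2^(30-23) = 128 s, so a millisecond increment vanishes entirely;
// a double keeps it with room to spare.
bool floatKeepsMillisecond(double epoch)
{
    float a = static_cast<float>(epoch);
    float b = static_cast<float>(epoch + 0.001);
    return a != b;
}
```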
> My mitering shader has trouble with excessively high-frequency data,
> so resampling is useful, to get the sample rate down to one sample
> per horizontal pixel, or less.  There is some recent research on how
> to do that while preserving the psychological impression of how the
> data looks, which I’ve been implementing (only on the CPU so far):
> 
> http://skemman.is/stream/get/1946/15343/37285/3/SS_MSthesis.pdf
I pointed to this publication initially, but I think that
Largest-Triangle-Three-Buckets isn't as good (in visual result) as the
method implemented in QCustomPlot, and it needs more CPU.
At least for line plots.
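For reference, the LTTB algorithm from that thesis is short to sketch
on the CPU (plain C++, my own names; this is the thesis method, not
the QCustomPlot one):

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

struct Point { double x, y; };

// Largest-Triangle-Three-Buckets downsampling: keep the first and last
// samples, split the rest into equal buckets, and from each bucket keep
// the point forming the largest triangle with the previously kept point
// and the average of the next bucket.
std::vector<Point> lttb(const std::vector<Point>& data, std::size_t threshold)
{
    const std::size_t n = data.size();
    if (threshold >= n || threshold < 3)
        return data;                       // nothing to reduce

    std::vector<Point> out;
    out.reserve(threshold);
    out.push_back(data.front());

    const double bucketSize = double(n - 2) / double(threshold - 2);
    std::size_t a = 0;                     // index of last kept point

    for (std::size_t i = 0; i < threshold - 2; ++i) {
        // Average of the next bucket (just the last point at the end).
        std::size_t nextStart = std::size_t((i + 1) * bucketSize) + 1;
        std::size_t nextEnd   = std::min(std::size_t((i + 2) * bucketSize) + 1, n);
        double avgX = 0, avgY = 0;
        for (std::size_t j = nextStart; j < nextEnd; ++j) {
            avgX += data[j].x;
            avgY += data[j].y;
        }
        avgX /= double(nextEnd - nextStart);
        avgY /= double(nextEnd - nextStart);

        // Point in the current bucket with the largest triangle area.
        std::size_t start = std::size_t(i * bucketSize) + 1;
        std::size_t end   = std::size_t((i + 1) * bucketSize) + 1;
        double maxArea = -1.0;
        std::size_t best = start;
        for (std::size_t j = start; j < end; ++j) {
            double area = std::fabs((data[a].x - avgX) * (data[j].y - data[a].y)
                                  - (data[a].x - data[j].x) * (avgY - data[a].y)) * 0.5;
            if (area > maxArea) { maxArea = area; best = j; }
        }
        out.push_back(data[best]);
        a = best;
    }

    out.push_back(data.back());
    return out;
}
```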

> _______________________________________________
> Development mailing list
> Development at qt-project.org
> http://lists.qt-project.org/mailman/listinfo/development