[Development] New Qt Multimedia

Lars Knoll lars.knoll at qt.io
Thu May 27 08:07:23 CEST 2021


> On 26 May 2021, at 18:14, Jason H <jhihn at gmx.com> wrote:
> 
>>> 4. On the removal of QAbstractVideoFilter AND QVideoProbe: Disappointed to hear this. I previously used this for read-only frames for analysis, e.g. barcode reading and object detection. How do we do that now?
>> 
>> You can get full access to the video data from a QVideoSink through the newVideoFrame() signal and mapping the video frame (or converting it to a QImage). With that you should have access to the data you need. (A small detail that's still missing right now, but that I hope to have in place for 6.2, is the ability to connect several sinks to a media player or capture session; it's not really required, though, as you can connect several slots to the signal.) If you feel you need more than that, let me know what it is.
> 
> So the most common case I need supported is getting the video frame for processing while maintaining the full-speed live preview. Is this the multi-sink scenario?

You can work with one sink by connecting to its newVideoFrame() signal. The only thing that's a bit more cumbersome in that case is how to get to the video sink, but even that shouldn't be a large problem.
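For concreteness, here's a rough, untested sketch of that wiring with the new C++ classes (QCamera, QMediaCaptureSession, QVideoSink). The frame signal is the one called newVideoFrame() above; the sketch spells it videoFrameChanged(), so adjust to whatever name your snapshot actually uses. To keep a live preview you'd instead grab the sink of the video output you already have (e.g. QVideoWidget::videoSink()) and connect to that.

#include <QGuiApplication>
#include <QObject>
#include <QCamera>
#include <QMediaCaptureSession>
#include <QVideoFrame>
#include <QVideoSink>

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);

    QCamera camera;                  // default camera device
    QMediaCaptureSession session;
    QVideoSink sink;                 // sink used purely for analysis

    session.setCamera(&camera);
    session.setVideoSink(&sink);     // or reuse the sink of an existing video
                                     // output to keep the live preview

    // Frame signal: called newVideoFrame() in this thread, assumed here to
    // be videoFrameChanged(); adjust to the name in your build.
    QObject::connect(&sink, &QVideoSink::videoFrameChanged,
                     [](const QVideoFrame &frame) {
        // map() the frame or convert it with toImage() and hand the data
        // off to a worker thread for barcode/vision processing.
        Q_UNUSED(frame);
    });

    camera.start();
    return app.exec();
}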

Multiple sinks are mainly needed to support multiple video output surfaces in a generic way at the framework level.

> Typically, with a Camera connected to a VideoOutput, I use QVideoProbe to throw the frame (pixel data as a QByteArray, because the library doesn't care) to a thread for multicore async processing. A typical 1-megapixel image on a Raspberry Pi 4 takes ~150 ms using ZBar or ZXing (I find ZXing is more like 100 ms), so this gives about 6 processed frames a second, which feels responsive enough to the user because they are looking at the live display.
> 
> Since you asked for actual code, attached is the code I use to do this. It may not be perfect code (long story short, I just rewrote this from memory), but it is what I whipped up, and it works reasonably well for now. I've used this approach for barcodes and OpenCV.
> 
> If it matters: I disclaim any copyright for the attached files. <barcodevideoprobe.cpp> <barcodevideoprobe.h>

Thanks for the snippet. I think this should be perfectly doable. Connect to the signal, then map the QVideoFrame and copy out the Y channel.
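If it helps, here's a rough sketch of that last step. It assumes a planar YUV pixel format where plane 0 is the luminance plane (for RGB formats you'd convert first, e.g. with QVideoFrame::toImage()), and copyLuminancePlane() is just an illustrative name:

#include <QByteArray>
#include <QVideoFrame>

// Copy the Y (luminance) plane out of a mapped frame, dropping any
// per-line padding, so the bytes can be handed to ZBar/ZXing/OpenCV
// on a worker thread. Assumes a planar YUV format where plane 0 is Y.
QByteArray copyLuminancePlane(QVideoFrame frame)
{
    if (!frame.map(QVideoFrame::ReadOnly))
        return {};

    const int width  = frame.width();
    const int height = frame.height();
    const int stride = frame.bytesPerLine(0);   // plane 0 = Y
    const uchar *src = frame.bits(0);

    QByteArray luma;
    luma.reserve(width * height);
    for (int y = 0; y < height; ++y)
        luma.append(reinterpret_cast<const char *>(src + y * stride), width);

    frame.unmap();
    return luma;
}

Since the copy owns its data, it can be handed off to a worker thread exactly like the QByteArray in your QVideoProbe-based code.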

Cheers,
Lars


