[Development] New Qt Multimedia

Lars Knoll lars.knoll at qt.io
Wed May 26 17:06:02 CEST 2021

> On 26 May 2021, at 16:24, Jason H <jhihn at gmx.com> wrote:
>> There are still open issues and gaps in the implementation that need fixing, but the code is now in a decent enough shape to merge it back to dev and continue on that branch. We will however now have everything ready in Qt Multimedia for the 6.2 feature freeze, and will be working with an exception on this module. Especially the support for camera and media capture on Windows and the Android backend still need more work. The gstreamer backend for Linux and AVFoundation for iOS and macOS should be in a pretty decent shape. QNX support is missing right now, and is planned after 6.2.
> So the changes listed in the bullet points look great!
> As someone who has attempted to work with Qt Multimedia on the mobile platforms, I can say that I was looking forward to this.
> I do have a few questions (mobile focused):
> 1. On Android, it looks like you're using the old Camera API and not Camera2? (https://developer.android.com/reference/android/hardware/camera2/package-summary) I found that Camera2 paralleled AVFoundation, so between Camera being deprecated and Camera2 resembling AV Foundation, I am surprised Camera (old) was targeted?

That’s purely because the old code base from Qt 5 used Camera and we haven’t yet found the time to change that. The intention is to upgrade to Camera2 at some point; we simply haven’t had the capacity for that change so far.
> 2. Recording WAV on Android was a problem. I get that this was not Qt's fault, but having universal WAV recording would be good. Is it in scope? (The workaround was to do it in Java, grabbing raw PCM data.) iOS, by contrast, could deliver WAV if you told it to in the container format.

There’s a QWavDecoder class in Qt 6 multimedia that can handle both decoding and encoding of WAV files. I don’t really want it to become public API, but I do think we should have an integrated solution that can be used as a fallback.
> 3. Support for image/video depth data? On iOS, these are mini-images embedded within the image itself. Is it in scope? (Can be from disparity or LIDAR, IR, etc)

Not for 6.2 at least.
> 4. On the removal of QAbstractVideoFilter AND QVideoProbe: Disappointed to hear this. I previously used this for read-only frames for analysis, i.e. Barcode reading and object detection. How do we do that now?

You can get full access to the video data from a QVideoSink through the newVideoFrame() signal and by mapping the video frame (or converting it to a QImage). With that you should have access to the data you need. (A small detail that’s still missing right now, but that I hope to have in place for 6.2, is the ability to connect several sinks to a media player or capture session. It’s not strictly required, though, as you can connect several slots to the signal.) If you feel you need more than that, let me know what it is.
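The workflow described above might look roughly like this sketch (assuming Qt 6 Multimedia headers; the signal name newVideoFrame() is taken from the mail and may differ in the released API, so check the final QVideoSink documentation):

```cpp
#include <QImage>
#include <QMediaPlayer>
#include <QObject>
#include <QVideoFrame>
#include <QVideoSink>

// Sketch: attach a QVideoSink to a player and inspect each frame
// read-only, e.g. for barcode reading or object detection.
void attachAnalyzer(QMediaPlayer &player, QVideoSink &sink)
{
    player.setVideoSink(&sink);
    QObject::connect(&sink, &QVideoSink::newVideoFrame,
                     [](const QVideoFrame &frame) {
        QVideoFrame copy(frame);            // mapping needs a non-const frame
        if (copy.map(QVideoFrame::ReadOnly)) {
            const uchar *plane0 = copy.bits(0);   // raw pixel data, plane 0
            Q_UNUSED(plane0);
            // ... run the analysis on the mapped bytes ...
            copy.unmap();
        }
        // Simpler but usually costlier alternative:
        // QImage img = frame.toImage();
    });
}
```

Because several slots can be connected to the same signal, one connection can drive the visible video output while another runs the analysis.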

