[Interest] Fastest Way to convert a QVideoFrame to an QImage?

Till Oliver Knoll till.oliver.knoll at gmail.com
Fri Feb 14 15:27:20 CET 2014


Changing the subject of an email is /not/ enough when replying to an email in a /different/ thread: emails do have "message IDs" which are not only evaluated by the NSA, but also by every decent email client on this planet which supports "threaded views". 

So I took the liberty to "re-hijack" your text and reply in the proper thread again, and since we now have the words "NSA" and "hijack" in this text I am sure we have everyone's attention ;)

On 13.02.2014 at 21:20, Jason H <scorp1us at yahoo.com> wrote:

> Oh, I thought I was clear, I am trying to get video (file and camera) and pass it to the ZBar library for processing.

That's what you initially told us, and that was absolutely clear...

> ZBar takes barcode images and give you barcodes. I said it expects "Y800" (identical to 8-bpp gray).

... this as well...

> It doesn't take a container, just a uchar buffer. I do not have control over the zbar library.

... but this is the new and crucial piece of information needed to properly reply to your question "What is the fastest way to convert a QVideoFrame to a QImage (which should be a greyscale non-indexed image)?".

Because we can easily answer the last part already: "You don't need a QImage, you really need a simple uchar buffer". So the fastest way is surely not to write into a QImage at all (which apparently does not support non-indexed 8-bit greyscale images anyway), but directly into the target uchar buffer (and that's why I insisted on getting that crucial part: had you said that the library is a "Qt library" and expects a QImage as argument, my answer would have been different).
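Just to illustrate the "hand ZBar a plain uchar buffer" end of things, here is a rough, untested sketch using ZBar's C++ wrapper; greyBuffer, width and height are placeholders, so double-check the exact API against your zbar headers:

    #include <zbar.h>
    #include <string>

    // Rough, untested sketch: hand a tightly packed 8-bit grey buffer ("Y800")
    // straight to ZBar. greyBuffer/width/height are placeholders.
    std::string scanGreyBuffer(const unsigned char *greyBuffer,
                               unsigned width, unsigned height)
    {
        zbar::ImageScanner scanner;
        scanner.set_config(zbar::ZBAR_NONE, zbar::ZBAR_CFG_ENABLE, 1);

        // ZBar wraps the raw buffer, no copy involved
        zbar::Image image(width, height, "Y800", greyBuffer, width * height);

        std::string result;
        if (scanner.scan(image) > 0) {
            // just take the first decoded symbol here
            result = image.symbol_begin()->get_data();
        }
        image.set_data(NULL, 0); // make sure ZBar lets go of our buffer
        return result;
    }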

I have never worked with QVideoFrame, so I cannot add anything except what others have already said: in case the video stream is sent to the GPU and decoded there, the decoded video frames never reside in main RAM but "live" only in VRAM and are superimposed with the other graphics (windows, buttons, desktop elements, ...) by the GPU itself.

So unless the actual media framework provides any means to "download" the decoded frames from VRAM to RAM, there is no way for Qt to access them. Or you might at least get an "OpenGL texture handle", in which case you could download the "texture data" yourself using GL calls.
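If you do get such a texture handle, the "download it yourself" part could look roughly like this on a desktop GL context (untested sketch; glGetTexImage does not exist on OpenGL ES, there you would have to attach the texture to an FBO and use glReadPixels instead; textureId, width and height are placeholders):

    #include <QVector>
    #include <GL/gl.h>

    // Rough sketch (untested): read an RGBA texture back into main RAM on a
    // desktop OpenGL context. textureId, width and height are placeholders.
    QVector<uchar> downloadTexture(GLuint textureId, int width, int height)
    {
        QVector<uchar> rgba(width * height * 4);
        glBindTexture(GL_TEXTURE_2D, textureId);
        glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, rgba.data());
        return rgba;
    }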

What I said is just to illustrate (and might not be accurate at all) that how you get access to the decoded image data (if at all! Keyword: DRM) varies from platform to platform (media framework) and can be very hardware-specific.

But I think in your case you were actually referring to a "live video camera stream", and there I agree it would make a lot of sense to get access to the individual frames on the application (Qt) level ;)

But again, I have never used any "Qt video APIs", so I am not much of a help here.


So let's assume you end up with an RGBA QImage: the fastest way would then be to

* Grab the QImage::bits()
* pay attention to possible padding of the scanlines! Older (< Qt 4.7) docs about QImage specifically talked about that, but the more recent docs seem to be silent on the matter. I think as long as you iterate over 32-bit RGBA data you should not have to worry (the only mention of alignment is in QImage::scanLine, which talks about a "32-bit boundary")
* pay attention to endianness when accessing the "raw pixel data". You might want to use the macros qRed, qGreen etc.
* iterate width*height times over the raw QImage buffer, convert to 8-bit greyscale with your conversion of choice (if you get RGB) and
* write the resulting 8bit grey value into your new target uchar buffer
* done (see the sketch below)
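In code, that loop could look roughly like this (untested sketch; it assumes the QImage is already in a 32-bit format such as QImage::Format_RGB32, otherwise run it through QImage::convertToFormat() first):

    #include <QImage>
    #include <QVector>

    // Untested sketch: convert a 32-bit (A)RGB QImage into a tightly packed
    // 8-bit greyscale buffer suitable for ZBar's "Y800" format. Using
    // scanLine() per row sidesteps any scanline padding, and qGray() takes
    // care of the endianness of the raw pixel data.
    QVector<uchar> toY800(const QImage &image)
    {
        const int width  = image.width();
        const int height = image.height();
        QVector<uchar> grey(width * height);

        for (int y = 0; y < height; ++y) {
            const QRgb *line = reinterpret_cast<const QRgb *>(image.scanLine(y));
            uchar *dst = grey.data() + y * width;
            for (int x = 0; x < width; ++x) {
                // luminance conversion of choice; qGray() is just one option
                dst[x] = static_cast<uchar>(qGray(line[x]));
            }
        }
        return grey;
    }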

It doesn't get much faster than that, except if you were to write an OpenCL kernel (with a clever buffer transfer strategy like "Round Robin" buffer allocation, or making sure you restrict the computation to the CPU cores instead of the GPU) or use some voodoo assembler code. ;)

However, I don't expect a simple greyscale conversion to be a bottleneck for typical image sizes, especially if you manage to get hold of the original YUV data (= no conversion necessary, just a simple copy of the Y (luma) values).
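And if you do manage to get a mappable YUV QVideoFrame, that "simple copy" could look roughly like this; again an untested sketch which assumes a planar format (e.g. YUV420P or NV12) where the luma plane comes first:

    #include <QVideoFrame>
    #include <QAbstractVideoBuffer>
    #include <QVector>
    #include <cstring>

    // Untested sketch: copy the Y (luma) plane of a mapped QVideoFrame into a
    // tightly packed 8-bit grey buffer ("Y800"). Assumes a planar YUV format
    // where the luma plane comes first (e.g. YUV420P, NV12).
    QVector<uchar> extractYPlane(QVideoFrame frame)
    {
        QVector<uchar> grey;
        if (!frame.map(QAbstractVideoBuffer::ReadOnly))
            return grey; // mapping can fail, e.g. for purely GPU-backed frames

        const int width  = frame.width();
        const int height = frame.height();
        const int stride = frame.bytesPerLine(); // may be > width due to padding
        const uchar *src = frame.bits();

        grey.resize(width * height);
        for (int y = 0; y < height; ++y)
            std::memcpy(grey.data() + y * width, src + y * stride, width);

        frame.unmap();
        return grey;
    }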

Cheers,
  Oliver

