[Qtwebengine] Preparing external textures for HTML5 video in qtwebengine
jocelyn.turcotte at digia.com
Fri Jan 31 10:23:58 CET 2014
On Thu, Jan 30, 2014 at 02:00:49PM -0600, kenz kiran wrote:
> Here is the problem:
> Whenever we get a new frame from the video decoder, before we blit the
> quads from the quad list received for the frame, we need to call 'update()',
> which would internally do
> glBindTexture(GL_TEXTURE_EXTERNAL_OES, mTexName) and
> glEGLImageTargetTexture2DOES(GL_TEXTURE_EXTERNAL_OES, (GLeglImageOES)image),
> where 'image' would be the EGLImage of the latest video frame from the
> decoder.
> In Chromium's existing design, before "ANY" draw call, say DrawArrays or
> DrawElements, Chromium calls "PrepareTexturesForRender()", which internally
> calls Update() on each of the StreamTextures.
> But in the case of delegated rendering, since we don't use Chromium's GL
> Renderer but use Qt as the parent compositor to render the received
> quad list, we don't make any draw calls through Chromium's GL infrastructure.
> So at this point we need a "PrepareTextures()"-like call from
> delegated_frame_node, and it should happen on the GPU thread, with the Qt
> render thread blocked until it completes.
> So I wanted to check with this forum (specifically Jocelyn and maybe Gunnar)
> to get some ideas on how to resolve this problem.
The latest moment that the Chromium GPU thread touches the textures is in DelegatedFrameNode::fetchTexturesAndUnlockQt.
This happens _ONCE_ for every TransferableResource, so if you absolutely need to update your stream on the Chromium GPU thread, we will have to add some way to update this kind of resource every time Qt draws this texture.
If on the other hand it's possible for you to do the stream update in the Qt rendering thread, you could do so in MailboxTexture::bind, or in a subclass of MailboxTexture. This happens right before a QSGGeometryNode attached to this texture gets drawn by the Qt scene graph.
> Also, Chromium folks have raised some concerns about textures being deleted
> while they are being composited by Qt. I think mailboxes and the existing
> infrastructure should cover this case, but I haven't studied them in much
> depth.
> Here is the excerpt:
> Q: How exactly do you render with the texture if not DrawArrays or
> DrawElements? Does the actual final drawing not go through the
> gpu/commandbuffer layer, i.e. do you pull out or share the underlying GL
> texture somehow with an external GL context?
> WillUseTexture() is supposed to be called from any GL command being parsed
> that causes the texture to be used for drawing or blits.
> I am certain we don't go through the gpu/command buffer setup for drawing
> (hence no DrawArrays/DrawElements). Yes, we have a shared context set up. The
> texture IDs are known to the Qt rendering context (which is not the same
> one used by the GPU thread).
Anything that Chromium does _in the browser process_ using ui::Compositor and cc::LayerTreeHost in order to render delegated frames is instead rendered by the Qt Quick scene graph. We use the DrawQuads and TransferableResources provided through DelegatedFrameData as input, convert them to QSGNode/QSGTexture subclasses, and then fetch the texture IDs directly from the MailboxManager in the in-process GPU thread using the mailboxes provided in the TransferableResources.
The command buffer layer in Chromium interfaces two sets of texture IDs with one another: the client IDs (the ones known by cc::GLRenderer; each context has its own set of client IDs) and the global service IDs (the real GL texture IDs used by GLES2DecoderImpl in the GPU process to do the actual rendering). By interfacing with the MailboxManager we get access to the service texture IDs directly, and since we set up our GL context as shared, as you mentioned, we are able to render them.
We don't use the command buffer layer in QtWebEngine since we want Qt to control the main GL frame buffer and we want to avoid having to render to an intermediate FBO.
> Concern: Does that not cause problems to share the texture that way since
> the gpu TextureManager and ContextGroup manages the lifetime of the texture
> (and could try to delete it)?
> I recommend you either have your view system use the gpu stack's GL client
> interface, or you share the texture externally in a more explicit way.
AFAIK the delegated renderer already provides this functionality. The way it works is that a TransferableResource is owned by the producing child compositor, but it is held alive when transferred to a parent compositor through a DelegatedFrameData and won't be deleted/reused until the parent decides to return it to the child compositor through a cc::CompositorFrameAck. The resource shouldn't be returned as long as it's possibly referenced by any unswapped frame in the GL pipeline.
I don't know how you manage the lifetime of your SurfaceTexture, but if it behaves differently on this level we might have to adjust.
> Otherwise, if you are already pulling out the GL texture from underneath
> the chromium gpu stack, you might as well pull out the SurfaceTexture
> handle as well and update it (see GetLevelImage()).
Right now we've been able to do everything we need with just the provided texture IDs, but adding image support sounds like something we could do at this level too.