[Interest] Qt3D Framegraphs

Andy asmaloney at gmail.com
Fri Aug 31 14:03:16 CEST 2018


The contours/silhouetting proved to be a bit of a leap for now, so I backed
off to look at the offscreen side of it.

I removed the depth pass and am just trying to get a simple frame graph
working for on- and off-screen capture.

I have the following frame graph (in YAML, but it should be clear):

RenderSurfaceSelector:
  Viewport:
    ClearBuffers:
      buffers: ColorDepthBuffer
      clearColor: "#80faebd7"
      NoDraw: {}
    CameraSelector:
      objectName: cameraSelector
      FrustumCulling: {}
      RenderPassFilter:
        matchAny:
        - FilterKey:
            name: renderingStyle
            value: forward
      RenderCapture:
        objectName: onScreenCapture
      RenderTargetSelector:
        target:
          RenderTarget:
            attachments:
            - RenderTargetOutput:
                attachmentPoint: Color0
                texture:
                  Texture2D:
                    width: 512
                    height: 512
                    format: RGBAFormat
        ClearBuffers:
          buffers: ColorDepthBuffer
          clearColor: "#80faebd7"
          NoDraw: {}
        RenderPassFilter:
          matchAny:
          - FilterKey:
              name: renderingStyle
              value: forward
        RenderCapture:
          objectName: offScreenCapture

Results of the render captures:

   onScreenCapture: https://postimg.cc/image/antf2d43h/
   offScreenCapture: https://postimg.cc/image/e7fcs5z3h/

The onscreen capture is correct - yay, a forward renderer!

1) Why isn't the offscreen one clearing the background colour using
ClearBuffers? (It isn't obvious on postimg, but the background is
transparent.) I tried moving ClearBuffers all over the place, but I can't
get it to work.

2) How do I fix the aspect ratio of the offscreen image (assuming I want
the final image to be 512x512)? Do I need to give it its own camera and
adjust its aspect ratio somehow?
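
For reference, here is roughly what I imagine that would look like in QML
(untested; "offscreenCamera" and "mainCamera" are made-up ids):

// A second camera just for the offscreen branch, with its aspect ratio
// locked to the 512x512 render target instead of the window.
CameraSelector {
    camera: Camera {
        id: offscreenCamera
        projectionType: CameraLens.PerspectiveProjection
        fieldOfView: 45
        aspectRatio: 1.0 // 512 / 512
        nearPlane: 0.1
        farPlane: 1000.0
        position: mainCamera.position     // mirror the onscreen camera
        viewCenter: mainCamera.viewCenter
        upVector: mainCamera.upVector
    }
    // ... offscreen ClearBuffers / RenderPassFilter / RenderCapture under here
}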

Thanks for any guidance!

---
Andy Maloney  //  https://asmaloney.com
twitter ~ @asmaloney <https://twitter.com/asmaloney>



On Fri, Aug 24, 2018 at 11:24 AM Andy <asmaloney at gmail.com> wrote:

> Paul:
>
> Thank you very much for the detailed responses!
>
> This has given me a lot more to work on/understand.
>
> The ClearBuffers part was very useful for understanding what's actually
> happening. This would be good info to drop into the QClearBuffers docs.
>
> I guess I have to dive into render passes, render states, and materials
> now. :-)
>
> I also have a better appreciation for why most examples are QML - writing
> these in C++ is time-consuming and error-prone. I've written a little
> (partially working) experiment to specify them in YAML so I don't have to
> pull in all the QML stuff just for defining my framegraph(s). I may
> continue down that road.
>
> Have there been any thoughts/discussions on providing a non-QML way to
> declare these? Could be useful for tooling (Qt Creator plugin for defining
> them visually?) as well.
>
> Thanks again for taking the time to go through this.
>
> ---
> Andy Maloney  //  https://asmaloney.com
> twitter ~ @asmaloney <https://twitter.com/asmaloney>
>
>
>
> On Tue, Aug 21, 2018 at 9:10 AM Paul Lemire <paul.lemire at kdab.com> wrote:
>
>>
>> On 08/21/2018 01:54 PM, Andy wrote:
>>
>> Thank you so much Paul!
>>
>> That gives me something to start working on/pick apart. I see now how
>> onscreen vs. offscreen works and can concentrate on getting the onscreen
>> working the way I want first since they are very similar.
>>
>> 1) "I assume you want to fill the depth buffer with a simple shader
>> right?"
>>
>> I think so? Ultimately I want to experiment with a cel-shaded scene, but
>> for now I'd be happy with adding some black contours on my entities using
>> depth - slightly thicker lines closer to the camera, thinner farther away.
>> Is this the right setup for that?
>>
>>
>> Hmm, that's not necessarily what I pictured. Usually a render pass where
>> the depth buffer is filled is used as an optimization technique: 1) you
>> draw your scene with a very simple shader to fill the depth buffer; 2) you
>> draw your scene again using a more complex shader, but you take advantage
>> of the fact that the GPU will only execute the fragment shader for
>> fragments whose depth is equal to what is stored in the depth buffer.
>>
>> If you want to draw contours (which is usually referred to as
>> silhouetting), the technique is different. Meshes are composed of
>> triangles which are specified in a given winding order (the order in which
>> a triangle's vertices are specified, either clockwise or counterclockwise).
>> That winding order can be used at draw time to distinguish between
>> triangles which are facing the camera and triangles which are backfacing
>> the camera. (Another common optimization technique is to not draw
>> backfacing triangles, a.k.a. backface culling.)
>>
>> A possible implementation of the silhouetting technique is to:
>> 1) draw only the back faces of the mesh (slightly enlarged) and with
>> depth writing into the depth buffer disabled.
>> 2) draw the front faces of the mesh (with depth writing enabled)
>>
>> See http://sunandblackcat.com/tipFullView.php?l=eng&topicid=15 for a
>> more detailed explanation; there are other implementations using geometry
>> shaders as well (http://prideout.net/blog/?p=54).
>>
>> In practice, you would play with render states to control back face /
>> front face culling, depth writes, etc. E.g.:
>>
>> RenderStateSet {
>>     renderStates: [
>>         // Specify which depth function to use to decide which fragments to keep
>>         DepthTest { depthFunction: DepthTest.Equal },
>>         // Disable writing into the depth buffer
>>         NoDepthMask {},
>>         // Cull front faces (usually you would do back face culling though)
>>         CullFace { mode: CullFace.Front }
>>     ]
>> }
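>>
>> To tie those states back to the FrameGraph, the two silhouetting steps
>> above could become two branches along these lines (a rough, untested
>> sketch; the FilterKey values are made up and the matching RenderPasses
>> would have to exist on your Material):
>>
>> CameraSelector {
>>     camera: sceneCamera // hypothetical id
>>
>>     // Step 1: back faces only (enlarged in the shader), no depth writes
>>     RenderPassFilter {
>>         matchAny: [ FilterKey { name: "pass"; value: "outline_pass" } ]
>>         RenderStateSet {
>>             renderStates: [
>>                 CullFace { mode: CullFace.Front }, // keep only back faces
>>                 NoDepthMask {}                     // don't write depth
>>             ]
>>         }
>>     }
>>
>>     // Step 2: front faces with normal depth testing and writing
>>     RenderPassFilter {
>>         matchAny: [ FilterKey { name: "pass"; value: "color_pass" } ]
>>         RenderStateSet {
>>             renderStates: [
>>                 CullFace { mode: CullFace.Back },
>>                 DepthTest { depthFunction: DepthTest.Less }
>>             ]
>>         }
>>     }
>> }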
>>
>> Note that cel shading might be yet another technique (with a different
>> implementation than silhouetting). Usually it involves having steps of
>> colors that vary based on light position in your fragment shader. It might
>> even be simpler to implement than silhouetting, actually.
>>
>> The above link actually implements a combination of both techniques.
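>>
>> Just to illustrate the "steps of colors" idea, the fragment shader would
>> quantize the diffuse term; a minimal, untested sketch (the uniform and
>> varying names are made up, and a matching vertex shader is assumed):
>>
>> ShaderProgram {
>>     fragmentShaderCode: "#version 150 core\n" +
>>         "uniform vec3 lightDir;\n" +   // normalized light direction
>>         "uniform vec3 baseColor;\n" +
>>         "in vec3 worldNormal;\n" +     // interpolated from the vertex shader
>>         "out vec4 fragColor;\n" +
>>         "void main() {\n" +
>>         "    float d = max(dot(normalize(worldNormal), -lightDir), 0.0);\n" +
>>         "    d = floor(d * 4.0) / 4.0;\n" +   // 4 discrete color steps
>>         "    fragColor = vec4(baseColor * (0.25 + 0.75 * d), 1.0);\n" +
>>         "}"
>> }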
>>
>>
>>
>> 2) "Have you tried the rendercapture ones?"
>>
>> Yes I have. That's how I got my render capture working (once those
>> examples worked).
>>
>> One thing that wasn't clear to me before was where to attach the
>> RenderCapture node. In the rendercapture example, it's created and then the
>> forward renderer is re-parented, which is what I did with mine. Your
>> outline makes more sense.
>>
>>
>> I suppose it was done that way purely for convenience, to avoid having to
>> rewrite a full FrameGraph, but I do agree that it makes understanding a
>> lot harder.
>>
>>
>> ClearBuffers (and NoDraw!) now make sense too. In QForwardRenderer they
>> are on the camera selector, which seems strange.
>>
>>
>> That's a small optimization. If your FrameGraph results in a single
>> branch (which QForwardRenderer probably does), you can combine the
>> ClearBuffers and the CameraSelector, as that translates to basically
>> clear-then-draw.
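>>
>> i.e. the single-branch shape is roughly (sketch):
>>
>> RenderSurfaceSelector {
>>     Viewport {
>>         CameraSelector {
>>             camera: mainCamera // hypothetical id
>>             ClearBuffers {
>>                 buffers: ClearBuffers.ColorDepthBuffer
>>                 // single leaf: clear, then draw everything in one go
>>             }
>>         }
>>     }
>> }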
>>
>> If your framegraph has more than a single branch:
>> RenderSurfaceSelector {
>>     Viewport {
>>         CameraSelector {
>>             ClearBuffers { ...
>>                 RenderPassFilter { ... } // Branch 1
>>                 RenderPassFilter { ... } // Branch 2
>>             }
>>         }
>>     }
>> }
>>
>> What would happen in that case is:
>>
>> 1) clear buffers then draw branch 1
>> 2) clear buffers then draw branch 2
>>
>> So in the end you would only see the drawings from Branch 2, because the
>> back buffer was cleared again before Branch 2 was drawn.
>>
>> In that case you should instead have it like:
>>
>> RenderSurfaceSelector {
>>     Viewport {
>>         CameraSelector {
>>             ClearBuffers { ...
>>                 RenderPassFilter { ... } // Branch 1
>>             }
>>             RenderPassFilter { ... } // Branch 2
>>         }
>>     }
>> }
>>
>> or (which is a bit easier to understand but adds one branch to the
>> FrameGraph)
>>
>> RenderSurfaceSelector {
>>     Viewport {
>>         CameraSelector {
>>             ClearBuffers { ...
>>                 NoDraw {}
>>             } // Branch 1
>>             RenderPassFilter { ... } // Branch 2
>>             RenderPassFilter { ... } // Branch 3
>>         }
>>     }
>> }
>>
>>
>>
>> 3) If I want to use any of the "default materials" in extras - Phong,
>> PhongAlpha, etc - then in (3) and (4.3) the filterkeys must be
>> "renderingStyle"/"forward", correct? Or can I even use them anymore if I'm
>> going this route?
>>
>>
>> Correct. The RenderPassFilter is really there to allow you to select
>> which RenderPass of your Material's Technique to use. So the default
>> materials can only be used if your RenderPassFilter has filterKeys that
>> match any of the filterKeys present on the Material's RenderPasses. Note
>> that this can result in several RenderPasses being selected (if your
>> material defines several render passes per technique).
>>
>> So you could probably hijack the default materials and add FilterKeys or
>> RenderPasses (at which point it's probably easier to roll your own
>> Material).
>>
>> Another possible approach is to have 2 Entities referencing the same
>> GeometryRenderer, but with each Entity having a different Material and a
>> different Layer component. You could then use a LayerFilter in the FG to
>> draw all Entities that have a given Layer first, then select all Entities
>> that have the other Layer to draw second. That might be a way to reuse the
>> default Materials in some cases and not mess with RenderPasses and
>> RenderPassFilters. (I think we have a layerfilter manual test you could
>> take a look at.)
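>>
>> Roughly (untested sketch; the two Material ids are made up):
>>
>> // Scene side: one shared GeometryRenderer, two Entities with different
>> // Layer + Material components
>> Entity {
>>     GeometryRenderer { id: sharedMesh } // geometry set elsewhere
>>     Layer { id: outlineLayer }
>>     Layer { id: colorLayer }
>>
>>     Entity { components: [ sharedMesh, outlineLayer, outlineMaterial ] }
>>     Entity { components: [ sharedMesh, colorLayer, phongMaterial ] }
>> }
>>
>> // FrameGraph side: one LayerFilter per branch, drawn in order
>> CameraSelector {
>>     LayerFilter { layers: [ outlineLayer ] } // these Entities draw first
>>     LayerFilter { layers: [ colorLayer ] }   // these draw second
>> }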
>>
>> Thinking back to the depth-filling pass: your Material would likely have
>> a Technique with 2 render passes, one with keys to use when we want to
>> fill the depth buffer and one with keys to use to draw.
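>>
>> Sketched out, that Material could look something like this (untested; the
>> shader file names are made up):
>>
>> Material {
>>     effect: Effect {
>>         techniques: Technique {
>>             graphicsApiFilter {
>>                 api: GraphicsApiFilter.OpenGL
>>                 profile: GraphicsApiFilter.CoreProfile
>>                 majorVersion: 3
>>                 minorVersion: 2
>>             }
>>             renderPasses: [
>>                 RenderPass { // picked up by the "depth_fill_pass" filter
>>                     filterKeys: [ FilterKey { name: "pass"; value: "depth_fill_pass" } ]
>>                     shaderProgram: ShaderProgram {
>>                         vertexShaderCode: loadSource("qrc:/shaders/depth.vert")
>>                         fragmentShaderCode: loadSource("qrc:/shaders/depth.frag")
>>                     }
>>                 },
>>                 RenderPass { // picked up by the "color_pass" filter
>>                     filterKeys: [ FilterKey { name: "pass"; value: "color_pass" } ]
>>                     shaderProgram: ShaderProgram {
>>                         vertexShaderCode: loadSource("qrc:/shaders/color.vert")
>>                         fragmentShaderCode: loadSource("qrc:/shaders/color.frag")
>>                     }
>>                 }
>>             ]
>>         }
>>     }
>> }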
>>
>>
>> 4) I will use the offscreen to generate snapshot images and video - I
>> assume I can turn offscreen rendering on/off dynamically by simply
>> enabling/disabling the RenderTargetSelector?
>>
>> I suppose yes (haven't tested), or you could add a NoDraw {} and toggle
>> its enabled property to decide when to execute that part of the FG.
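>>
>> i.e. either of these, roughly (equally untested):
>>
>> // Option A: toggle the whole offscreen subtree via its root node
>> RenderTargetSelector {
>>     target: offscreenTarget    // hypothetical id
>>     enabled: offscreenActive   // hypothetical property
>>     // ... offscreen ClearBuffers / RenderPassFilters / RenderCapture
>> }
>>
>> // Option B: gate it with a NoDraw whose enabled property is toggled
>> // (assumes a disabled NoDraw behaves as if it weren't in the graph)
>> RenderTargetSelector {
>>     target: offscreenTarget
>>     NoDraw {
>>         enabled: !offscreenActive // when enabled, suppresses drawing below
>>         // ... same offscreen children under here
>>     }
>> }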
>>
>>
>>
>> Thanks again for your help. I finally feel like I'm in danger of
>> understanding something here!
>>
>>
>> On Mon, Aug 20, 2018 at 1:20 AM Paul Lemire <paul.lemire at kdab.com> wrote:
>>
>>> Hi Andy,
>>>
>>> Please see my reply below
>>>
>>> On 08/15/2018 02:59 PM, Andy wrote:
>>>
>>> I've been struggling with framegraphs for a very long time now and still
>>> don't feel like I understand their structure - what goes where or what
>>> kind of nodes can be attached to what. I can throw a bunch of things
>>> together, but when it doesn't work I have no idea how to track down what's
>>> missing or what's in the wrong place.
>>>
>>> Can anyone give an outline of what a framegraph would look like to
>>> facilitate all of the following for a given scene:
>>>
>>> 1. rendering in a window onscreen
>>> 2. depth pass for shaders to use
>>>
>>> I assume you want to fill the depth buffer with a simple shader, right?
>>>
>>> 3. render capture for taking "snapshots" of what the user is seeing
>>> onscreen
>>> 4. offscreen rendering of the current scene at a specified size (not the
>>> UI window size)
>>> 5. render capture of the offscreen scene to an image
>>>
>>>
>>> I've not tested it, but I would imagine what you want would look like
>>> the FrameGraph below:
>>>
>>> RenderSurfaceSelector { // Select window to render to
>>>
>>>     Viewport {
>>>
>>>         // 1 Clear color and depth buffers
>>>         ClearBuffers {
>>>             buffers: ClearBuffers.ColorDepthBuffer
>>>             NoDraw {}
>>>         }
>>>
>>>         // Select camera to use to render the scene
>>>         CameraSelector {
>>>             camera: id_of_scene_camera
>>>
>>>             // 2 Fill depth buffer pass (for screen depth buffer)
>>>             RenderPassFilter {
>>>                 // Requires a Material which defines such a RenderPass
>>>                 matchAny: [ FilterKey { name: "pass"; value: "depth_fill_pass" } ]
>>>             }
>>>
>>>             // 3 Draw screen content; use depth compare == to benefit from the z-fill pass
>>>             RenderPassFilter {
>>>                 // Requires a Material which defines such a RenderPass
>>>                 matchAny: [ FilterKey { name: "pass"; value: "color_pass" } ]
>>>                 RenderStateSet {
>>>                     renderStates: DepthTest { depthFunction: DepthTest.Equal }
>>>                     RenderCapture { // Use this to capture the screen frame buffer
>>>                         id: onScreenCapture
>>>                     }
>>>                 }
>>>             }
>>>
>>>             // 4 Create FBO for offscreen rendering
>>>             RenderTargetSelector {
>>>                 target: RenderTarget {
>>>                     attachments: [
>>>                         RenderTargetOutput {
>>>                             attachmentPoint: RenderTargetOutput.Color0
>>>                             texture: Texture2D { width: width_of_offscreen_area; height: height_of_offscreen_area; .... }
>>>                         },
>>>                         RenderTargetOutput {
>>>                             attachmentPoint: RenderTargetOutput.Depth
>>>                             texture: Texture2D { width: width_of_offscreen_area; height: height_of_offscreen_area; .... }
>>>                         } ]
>>>                 } // RenderTarget
>>>
>>>                 // Note: ideally 4.1, 4.2, and 4.3 (and 1, 2, 3) could be factored out as a reusable subtree (if using QML)
>>>
>>>                 // 4.1 Clear FBO
>>>                 ClearBuffers {
>>>                     buffers: ClearBuffers.ColorDepthBuffer
>>>                     NoDraw {}
>>>                 }
>>>
>>>                 // 4.2 Fill depth buffer pass (for offscreen depth buffer)
>>>                 RenderPassFilter {
>>>                     // Requires a Material which defines such a RenderPass
>>>                     matchAny: [ FilterKey { name: "pass"; value: "depth_fill_pass" } ]
>>>                 }
>>>
>>>                 // 4.3 Draw content into offscreen color buffer; use depth compare == to benefit from the z-fill pass
>>>                 RenderPassFilter {
>>>                     // Requires a Material which defines such a RenderPass
>>>                     matchAny: [ FilterKey { name: "pass"; value: "color_pass" } ]
>>>                     RenderStateSet {
>>>                         renderStates: DepthTest { depthFunction: DepthTest.Equal }
>>>                         RenderCapture { // Use this to capture the offscreen frame buffer
>>>                             id: offScreenCapture
>>>                         }
>>>                     }
>>>                 }
>>>             } // RenderTargetSelector
>>>
>>>         } // CameraSelector
>>>
>>>     } // Viewport
>>>
>>> } // RenderSurfaceSelector
>>>
>>>
>>>
>>>
>>> Using the forward renderer in Qt3DExtras, I can do (1) and (3), but I've
>>> been supremely unsuccessful at implementing any of the rest despite many,
>>> many attempts - even working with the examples. (And the deferred renderer
>>> examples - which might help? - don't work on macOS.)
>>>
>>> Have you tried the rendercapture ones? They are in tests/manual.
>>>
>>>
>>> I am using C++, not QML. I tried replacing my framegraph with a
>>> QML-specified one but can't get that to work either (see previous post to
>>> this list "[Qt3D] Mixing Quick3D and C++ nodes").
>>>
>>> Can anyone please help? I'm stuck.
>>>
>>> Thank you.
>>>
>>> ---
>>> Andy Maloney  //  https://asmaloney.com
>>> twitter ~ @asmaloney <https://twitter.com/asmaloney>
>>>
>>>
>>>
>>> _______________________________________________
>>> Interest mailing list
>>> Interest at qt-project.org
>>> http://lists.qt-project.org/mailman/listinfo/interest
>>>
>>>
>>> --
>>> Paul Lemire | paul.lemire at kdab.com | Senior Software Engineer
>>> KDAB (France) S.A.S., a KDAB Group company
>>> Tel: France +33 (0)4 90 84 08 53, http://www.kdab.fr
>>> KDAB - The Qt, C++ and OpenGL Experts
>>>
>>>
>> ---
>> Andy Maloney  //  https://asmaloney.com
>> twitter ~ @asmaloney <https://twitter.com/asmaloney>
>>
>>
>>
>> --
>> Paul Lemire | paul.lemire at kdab.com | Senior Software Engineer
>> KDAB (France) S.A.S., a KDAB Group company
>> Tel: France +33 (0)4 90 84 08 53, http://www.kdab.fr
>> KDAB - The Qt, C++ and OpenGL Experts
>>
>>