[Interest] [Qt3D] Virtual Reality (Vive/Oculus) Example
Daniel Bulla
epozzt at gmail.com
Tue Jul 18 10:29:25 CEST 2017
Hi,
I started adding support for OpenVR/Vive and native OculusSDK to Qt3D.
You can find it on Github: https://github.com/dabulla/qt3d-vr-example
It can easily be set up, as described in the Readme.
*What works?*
- Stereoscopic 3D Rendering of Qt3D Qml scenes in the Headset.
- Headtracking
*What is still in progress?*
- Headtracking has high latency
- Mirroring to Monitor
- Motion controllers + Input
- ... (lots of things imaginable)
*Implementation*
There is QHeadmountedDisplay, which does the job of a Qt3DQuickWindow but
uses direct-mode rendering. It can be used just like a window that is
placed at the headset and syncs to its refresh rate. OpenVR and
OculusSDK (the 'SDKs') also handle lens distortion correctly, so there is
no need to do this manually.
The rendered Qt3D Qml scene must use a VrCamera to enable head-tracking and
correct stereoscopic rendering. VrCamera configures the projection
parameters for the headset by querying the SDK.
I have only tested this on Windows so far.
*Future*
I'd really like to proceed with the implementation and look forward to
getting some feedback at this point. I read the Qt styling guide,
started a clean implementation on GitHub, and want to contribute it to
Qt3D at some point. I'm open to suggestions and could also use advice
on the following problem:
*Tackling latency*
The pose read from the headset needs approx. 3 frames before it finally
ends up as a shader uniform for rendering. That is 3 frames too long (the
lag is extremely noticeable and you get nausea very quickly)!
- In the main thread, "getEyeMatrices" is called; this gets the
predicted head pose for the very next frame.
- The matrices are handed to Qt3D via the QTransform of the
camera entity (VrCamera).
I suspect that it takes several frames before the transform ends up as a
uniform in the shader, and I wonder how I can tweak Qt3D to keep this
latency down. A tip from a Qt3D expert or architect would be really handy here.
My ideas thus far:
- Skip the frontend and introduce a QVirtualRealityAspect (which has to
be done sooner or later anyway). I wonder if this could apply transforms
faster. I guess it would not run in the same thread as QRenderAspect?
- Introduce a way to set a callback function for a shader uniform. The
shader could query a very recent head-pose just before rendering ("late
latching").
- Ensure that UpdateWorldTransformJob is executed after setting the
transform, but before rendering (would this kill parallelism?).
- Don't use synchronous rendering. But this would require heavy patching
of QRenderAspect, I guess. At least two not-yet-customizable steps would
have to be altered:
- Render to a texture owned by the SDK, of which we only have a
GLuint texture-id
(this is only true for Oculus).
- Call a custom swapBuffers method.
Hope you enjoy it and don't get sea sick :-)!
--
Daniel Bulla