[Development] QEvent::accept() vs. the newer event delivery algorithms in Qt Quick; remaining API issues; etc.

Shawn Rutledge Shawn.Rutledge at qt.io
Wed Oct 21 20:54:07 CEST 2020



> On 16 Oct 2020, at 22:43, Giuseppe D'Angelo via Development <development at qt-project.org> wrote:
> 
> Specifically: mouse events have been designed for 90's (?) widgets; they come with assumptions that clearly show this, like the ones you mentioned:
> 
> * there is only one mouse cursor;
> * a mouse event lands first and foremost on the widget below the cursor;
> * if the widget accepts it, it becomes the only grabber;
> * if the widget ignores it, it propagates to the parent (why the parent? why not the widget below it, in Z-stack, which might be a sibling? because widgets are meant to be laid out, not to stack on top of each other)

But in Qt Quick it propagates to the item below, in Z order, which might often be the parent, but not always.  (Pointer Handlers inside any item are visited at the same time the item is visited: they don’t have their own Z order.)
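
For illustration, a minimal C++ sketch (the class is invented for this example): an item that receives the press but declines it, so that delivery continues to whatever is next below it in Z order:

    #include <QQuickItem>
    #include <QMouseEvent>

    // An item that declines presses, so QQuickWindow keeps visiting
    // whatever is next below it in Z order -- possibly a sibling,
    // not necessarily the parent.
    class PassThroughItem : public QQuickItem
    {
    public:
        PassThroughItem() { setAcceptedMouseButtons(Qt::LeftButton); }

    protected:
        void mousePressEvent(QMouseEvent *event) override
        {
            event->ignore(); // not accepted: delivery continues down the Z stack
        }
    };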

> "How to get there" => I have no idea -- I didn't do any research in that regard, compared the competing APIs, etc.. But something is pretty clear to me here, and it's causing a knee-jerk reaction: whatever the new system is, it needs to be:
> 
> 1) independent from the current one, which should be marked as "legacy" and somehow left working and left alone; as you write, trying to shoehorn a more complex system on top of the existing one will just cause regressions all over the place;

So far we haven’t needed to do anything that extreme, except that handling events in pointer handlers is a bit different than in items.  And since we aren’t adding features to widgets, there’s no push to refactor event delivery there either.

> 2) living in Qt Gui and therefore serving QtQuick, QtWidgets and 3D at the same time;

Yes, the next step is probably to factor out the event delivery code from QQuickWindow into a QDeliveryAgent (?) class, so that it can be used in Qt Quick scenes that are embedded in Qt Quick 3D.  But QPointerEvents are 2D events, not 3D.  When I started this work, I asked whether it might be a good idea to add a z coordinate to QEventPoint, just in case we want to reuse the same events for 3D.  (QPointF doesn’t seem suitable for that, but what should we use instead?  I want to be able to bind object positions directly to points that came from events; but then we have to agree whether the point object for both is 2D or 3D.)  I did not get positive feedback on that, though.

So far it seems that event delivery in 3D relies on hit testing, a bit like in widget UIs: pretend there’s a ray going into the scene from the cursor or touchpoint position “above” the rendering, find the first 3D object that it hits, and assume that object has the responsibility to handle the event.  So no fancy hierarchical propagation, I guess?  But if the object the ray hits is a Qt Quick scene rendered to a texture, then we need the delivery agent to take over delivery to the items within that scene.  That’s my understanding so far, without having tried to implement it yet.
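
To make the hand-off concrete, here’s a rough sketch of the dispatch I have in mind.  Nothing in it is real Qt API: QDeliveryAgent is still hypothetical, and Ray, HitResult, rayFromScreenPosition(), pickObjectAlongRay() and sceneForTexture() are made up for illustration.

    // Purely hypothetical sketch; none of these names exist in Qt today.
    void dispatch3D(QPointerEvent *event)
    {
        // Cast a ray "into" the scene from the 2D cursor/touchpoint position.
        Ray ray = rayFromScreenPosition(event->points().first().position());
        HitResult hit = pickObjectAlongRay(ray); // first 3D object along the ray

        if (QQuickItem *scene = sceneForTexture(hit.object)) {
            // A 2D Qt Quick scene rendered to a texture: let its delivery
            // agent map the hit's texture coordinates back into scene
            // coordinates and do ordinary item/handler delivery from there.
            QDeliveryAgent::forScene(scene)->deliver(event, hit.uv);
        } else if (hit.object) {
            // A plain 3D object: it alone handles the event; no hierarchical
            // propagation up a parent chain.
            hit.object->handlePointerEvent(event);
        }
    }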

Another thing I didn’t say anything about this time is the idea that gestures could be recognized first and then delivered, i.e. using QNativeGestureEvent much more.  Delivery is simpler then, because it never needs splitting: you are sure that one item or handler will accept the event, and then delivery is done.  But then maybe we end up delivering touch events and gesture events in parallel, because touch events can be used for some purposes (like showing feedback about where your fingers are, if nothing else) and gestures for others (dragging, flicking, pinching, tapping, etc.).  Gestures often take more than one event to “begin”; and sometimes when a gesture is recognized, it can mean that some other interaction that began using the raw touchpoints needs to be rolled back, but I hope that can be avoided, at least mostly.

We should at least get the native pinch gesture working on more platforms the way it works on macOS.  But it may also be that native gesture recognition is good for touchpads and not for touchscreens: on a touchpad, the abstraction is that the gesture occurs entirely at the point where the mouse cursor is, regardless of how far apart your fingers are spread, which is why it’s OK not to split up the touchpoints across different items.  So maybe these two approaches (centralized “native” gesture recognition, and distributed Qt Quick-style gesture recognition) will always coexist.  Anyway, that has been on my backlog of things to work on for a few years now.  It’s another thing the Windows platform maintainers could work on, because I think Windows does native gesture recognition on touchpads, but we don’t support it.  And I can try it with libinput at some point on Linux.
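
For comparison, consuming an already-recognized gesture is straightforward today on platforms that deliver them (macOS touchpads, mainly).  A minimal widget-side sketch, with the class invented for illustration:

    #include <QWidget>
    #include <QNativeGestureEvent>

    // Sketch: handling a recognized pinch.  The platform already did the
    // recognition, so accepting the event ends delivery -- no need to
    // split touchpoints across items.
    class ZoomableWidget : public QWidget
    {
    protected:
        bool event(QEvent *ev) override
        {
            if (ev->type() == QEvent::NativeGesture) {
                auto *gesture = static_cast<QNativeGestureEvent *>(ev);
                if (gesture->gestureType() == Qt::ZoomNativeGesture) {
                    m_zoom *= 1.0 + gesture->value(); // incremental scale delta
                    update();
                    return true;
                }
            }
            return QWidget::event(ev);
        }

    private:
        qreal m_zoom = 1.0;
    };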

> 3) be made 100% of public APIs available in C++ and QML. A user should be able to write a Flickable in Widgets using nothing but public and supported APIs. And it should just work™ if combined with a Flickable coming from Qt.

Well, combining widgets and Quick is problematic, but yeah, there’s more than one way to write a flickable.  Pointer Handlers should have public C++ API eventually.  I always figured that if our set of provided handlers is not enough, users should be able to subclass them and write the less-common ones themselves; it should be better than subclassing QQuickItem to do that.  But then we commit to the API, which is why it’s been put off this long.

> The point is, widgets also need to handle multiple cursors. Widgets also need kinetic flicking while reacting to presses (a QPushButton in a QScrollArea). Widgets also need multitouch / gesture support. Leaving them alone as "done" or "tier-2" sends an awful message to those users. And let's not open the Pandora's box of QQuickWidget.

But management is telling us not to spend time adding features to widgets anymore.  Since we have such a small team maintaining both widgets and Quick, we have to focus; and if we make Quick and Controls compelling enough, and just as feature-complete, hopefully more users will be happy to switch to Quick within the next few years.

Meanwhile, QScroller does kinetic flicking well enough… but it relies on the old gesture framework, which is not so great.  (Too bad we couldn’t remove that in Qt 6; but it has users, and I don’t want to break QScroller either.)  And kinetic flicking is terrible without GPU acceleration: both the gesture framework and the frenetic repainting conspire to make it a CPU hog.  So I wish we had time to develop a tiled-rendering feature that somehow works with all the graphics APIs, so that scrollable content doesn’t need to try to repaint itself entirely at 60 FPS.  That one thing would make widget applications much more “fluid”; and there are many applications that don’t really need all the other kinds of fluidity that the scene graph provides.
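
(For anyone who hasn’t tried it: wiring that up is all public API.  A minimal sketch, assuming a plain QScrollArea with oversized content:)

    #include <QApplication>
    #include <QLabel>
    #include <QScrollArea>
    #include <QScroller>

    int main(int argc, char *argv[])
    {
        QApplication app(argc, argv);
        QScrollArea area;
        area.setWidget(new QLabel(QStringLiteral("some very long content ").repeated(200)));
        // QScroller watches the viewport and takes over once its gesture
        // recognition decides this is a flick rather than a click, so child
        // widgets still receive ordinary presses.
        QScroller::grabGesture(area.viewport(), QScroller::LeftMouseButtonGesture);
        area.show();
        return app.exec();
    }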


