[Development] Pointer Handlers will be Tech Preview in 5.10

J-P Nurmi jpnurmi at qt.io
Sat Oct 21 10:46:22 CEST 2017


> On 28 Sep 2017, at 16:54, Shawn Rutledge <Shawn.Rutledge at qt.io> wrote:
> 
> 
>> On 28 Sep 2017, at 13:36, J-P Nurmi <jpnurmi at qt.io> wrote:
>> 
>>> On 28 Sep 2017, at 13:07, Tor Arne Vestbø <Tor.arne.Vestbo at qt.io> wrote:
>>> 
>>> On 28/09/2017 13:05, Tor Arne Vestbø wrote:
>>>> If we can't have a generic GestureRecognizer type with dynamic recognizer behavior based on which handler callback is bound, then it should be TapGestureRecognizer, DragGestureRecognizer, etc.
>>> 
>>> Or if we want to follow the existing naming of MouseArea and TouchArea, perhaps GestureArea, with TapGestureArea, DragGestureArea, etc
>> 
>> I would prefer attached properties and signals, similar to Keys.onPressed. Attaching onTapped from the outside of a component would be similar to overriding an event handler in C++. There would be a single attached object instance with multiple signal handlers. These objects would not pile up the way FooAreas and BarHandlers do.
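>> 
>> Purely as a sketch of that idea (the "PointerInput" attached object below is a made-up name, used only to mirror the Keys.onPressed pattern, not an existing API):
>> 
>>     import QtQuick 2.10
>> 
>>     Image {
>>         source: "button.png"
>>         // hypothetical attached signal handlers: one attached object
>>         // instance per Item, several signal handlers on it, attachable
>>         // from outside the component like overriding a C++ event handler
>>         PointerInput.onTapped: console.log("tapped")
>>         PointerInput.onLongPressed: console.log("long-pressed")
>>     }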
> 
> We’ve had that discussion a couple of times already, so I’ll start by repeating what I said then (more or less).  (I did like the idea to begin with, until I saw the problem with it.)
> 
> If the only way is to use attached objects, it would mean one Item can’t have two instances of the same attached object (as André also pointed out).  If you declare discrete handlers though, then you can declare event-filtering conditions too: this handler reacts only to the right mouse button, this one reacts only to the left button while Control is held down (and does something completely different), this one reacts only if the device is a touchscreen, etc.  Being able to have multiple instances per Item gives you more degrees of freedom.  If you limit yourself to a single JavaScript callback per Item, then you are forced to write more logic (testing various conditions and deciding what to do about them) in JavaScript.
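> 
> For illustration, roughly what that looks like with the 5.10 tech-preview handlers (assuming the Qt.labs.handlers import and the acceptedButtons / acceptedModifiers / acceptedDevices filter properties; the log messages are just placeholders):
> 
>     import QtQuick 2.10
>     import Qt.labs.handlers 1.0
> 
>     Rectangle {
>         width: 200; height: 200
> 
>         TapHandler {                                    // right button only
>             acceptedButtons: Qt.RightButton
>             onTapped: console.log("context menu")
>         }
>         TapHandler {                                    // Control + left button
>             acceptedButtons: Qt.LeftButton
>             acceptedModifiers: Qt.ControlModifier
>             onTapped: console.log("toggle selection")
>         }
>         TapHandler {                                    // touchscreens only
>             acceptedDevices: PointerDevice.TouchScreen
>             onTapped: console.log("touch tap")
>         }
>     }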
> 
> And in fact, Keys.onPressed has exactly that problem too: if your use case is any more complex than just passing text input to some other slot/invokable, your JS is probably messy.  Shortcuts are much nicer for detecting single key combinations: you can declare a separate instance for each command which needs a shortcut, instead of needing a big switch (and also worrying about which item has focus, but that’s a different problem).  Now you can say, well, there are also a lot more signals like Keys.onEnterPressed, and so on; but not all of the keys of the keyboard are there.  (Why not?)  And again, onAPressed (let’s say you want to detect Control-A) wouldn’t be a one-liner anyway, even if the signal existed.
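> 
> To make the comparison concrete, here is Control-A done both ways (a rough sketch; the Shortcut works window-wide regardless of focus, while the Keys handler has to test the combination itself):
> 
>     import QtQuick 2.10
> 
>     Item {
>         focus: true
> 
>         // declarative: one instance per command
>         Shortcut {
>             sequence: "Ctrl+A"
>             onActivated: console.log("select all (Shortcut)")
>         }
> 
>         // imperative: test the key and modifiers in JavaScript
>         Keys.onPressed: {
>             if (event.key === Qt.Key_A && (event.modifiers & Qt.ControlModifier)) {
>                 console.log("select all (Keys)")
>                 event.accepted = true
>             }
>         }
>     }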
> 
> Syntactically I think the handler use cases so far are looking OK.  Sometimes the declaration is a one-liner; and when it’s not, at least it’s more declarative than the equivalent attached-object-javascript would be.  (Look in tests/manual/pointer)
> 
> But I think attached objects could be a supplemental way of using handlers or modifying existing handlers’ behavior, somehow.  I’ve got a little prototype of one variety of that, but it doesn’t do much yet.  I was thinking it would be an aggregator, able to find and expose all the handler instances within one Item and its children, so that you can manipulate them outside the context of their declaration; and that it would also aggregate common state like pressed and hovered, and tell you which of those handlers is active.
> 
> Can we do that and also go to the next step of using the attached property on its own?  That seems harder, without duplicating code.  Maybe it would have to create the appropriate handler for you.  But Controls 2 has a related problem: how to reuse logic from Handlers without needing to make instances.  Controls 2 has the policy of creating as few objects as possible, to save memory and improve the startup time.  I just have a vague idea of putting the logic into static methods or some such, but haven’t really tried to think about specific API for that.  It’s hard to do without instances, because each handler needs to remember some state.  And if this was a C framework, that would be _your_ problem: hold onto a struct which holds the state, and pass that into each function which you need to use.  Maybe we could do something similar.  Before we can offer public C++ API, we need to make PIMPLs anyway; so we can try to make the private classes usable on their own, so that Controls can use them.  But that’s just a vague idea.
> 
> Another thing we could try is to have some sort of global object per device, so you can query or bind to the mouse position and button state, and track all the touchpoints, without depending on event propagation to Items within the scene.  This is an easy starting point for young frameworks, but we don’t have it in QtQuick for some reason.  For example, at some point Qt 3D didn’t have any concept of delivering mouse events to objects (and I didn’t check whether it does now), because that’s harder in 3D.  So I get the impression that in most 3D scene graphs you have to be more explicit: when the mouse moves or is pressed, you invoke some sort of picker to find out which object was pressed; the event doesn’t get delivered to scene objects by itself.  It’s more DIY, so maybe it’s more cumbersome, or maybe it leaves you more in control, depending on how you look at it.  (But I haven’t used enough 3D frameworks to know what proportion do it that way.)
> 
> Anyway, knowing where the mouse is and which buttons are pressed would be easier (if that’s all you want to know) if it were exposed on some sort of singleton or Window-attached property.  Imagine writing a 2D CAD system where you want to keep a label updated with the current cursor coordinates (and maybe draw crosshairs, and show hover feedback related to nearby items in the scene): that’s a little hard in QtQuick.  You can try to use hover events; but hover propagation is expensive, and because of that, somebody decided to optimize by not propagating it too far.
> 
> The entry point is QQuickWindow, though, so it shouldn’t be that hard to keep track of global state per device.  But if you have multiple devices, it’s not so convenient to watch all of them in QML.  We pretend there is only one mouse, but that might be wrong; and we’re probably not going to pretend there is only one touch device.
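> 
> For example, the cursor-coordinate label mentioned above currently has to be done with a hover-enabled MouseArea covering the scene (a rough sketch; it sits on top of everything and relies on exactly the hover machinery described above, which is what a per-device singleton would avoid):
> 
>     import QtQuick 2.10
> 
>     Item {
>         width: 640; height: 480
> 
>         MouseArea {
>             anchors.fill: parent
>             hoverEnabled: true            // hover events drive the tracking
>             acceptedButtons: Qt.NoButton  // don't steal clicks from the scene
>             onPositionChanged: coords.text = mouse.x + ", " + mouse.y
>         }
> 
>         Text { id: coords }
>     }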
> 
> The global-state-monitoring idea sounds similar to QObject event filtering… but we don’t have a generic way to do that in QML, only in C++.  If the reason you want attached properties is to override behavior, maybe something like that would give you another way to do it: accept the event early, before it gets delivered to the items.  (But it feels hacky.)  It’s analogous to the way that keystrokes which are bound to shortcuts don’t get delivered to items.
> 
> The difference between most gesture frameworks and QtQuick is that we detect gestures in a distributed fashion.  For example, detecting a pinch really only requires monitoring the touchpoints on the whole screen (or within the whole window) to see if it looks like the user is making that gesture.  But because Qt Quick is oriented towards delivery of the original device events to items rather than handling them on the whole window, we detect the gesture within the context of one item instead.  Maybe that’s less efficient.  It’s not something I’ve tried to change, anyway.
> 
> But when the OS provides native gestures, they take a different path: first the gesture is recognized, and then we try to deliver the whole gesture.  We could have tried to make it always work that way.  But sometimes Items need the individual touchpoints anyway, and you can’t quite have global gesture recognition and touch delivery at the same time, because one invalidates the other.  That’s why macOS makes you choose ahead of time; and on some other systems there has to be a recall mechanism, so the recognizer can recall touch events that were already sent and treat them as part of a gesture, or a delay, so that the recognizer can decide whether the touches are part of a gesture before they are sent.  All three of those have problems.  The nice thing about saying that we always deliver touch is that we didn’t have to make that choice.  That was true only until QNativeGestureEvent was added.
> 
> The more we try to make use of gestures provided by the various platforms, the more we will be forced to depend on that style of delivery, and eventually touchpoint delivery might become relatively rare.  So maybe we should have been planning on that already.  It might have been enough to deal with the mouse and with touchpoints only on a per-window basis, but not deliver them, and rather deliver gestures to all the items.
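> 
> In the tech-preview handlers, that per-item recognition looks roughly like this (a sketch assuming the 5.10 labs import): the pinch is recognized from the touchpoints delivered to this one Item, not by a window- or screen-global recognizer:
> 
>     import QtQuick 2.10
>     import Qt.labs.handlers 1.0
> 
>     Rectangle {
>         width: 400; height: 400
>         color: "lightsteelblue"
> 
>         // scales and rotates its parent in response to a two-finger pinch
>         PinchHandler {
>             minimumScale: 0.5
>             maximumScale: 4
>         }
>     }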
> 
> But with all the demands for backwards compatibility, we are stuck anyway.  We started by delivering mouse events, not mouse gestures, so we must always and forevermore continue to do that.  Then we added touch delivery, so we can never remove it.  With such constraints, big ideas are next to impossible.  At some point, progress cannot be made without behavior changes.  So most of the above is irrelevant until QtQuick 3 / Qt 6 at the earliest (and only if we are willing to make behavior changes then, and break some existing code!), but that’s what we’ve been trying to get ready for by introducing this stuff.

There are two different requirements: A) basic mouse and touch input handling, and B) complex gesture handling.

I think the former would be best solved by simply making it possible to handle mouse and touch input events. I can see two sustainable approaches for this. First and foremost, put some effort into cleaning up the C++ APIs of the primitive Qt Quick UI elements (QQuickRectangle, QQuickImage, QQuickText...), make them public, and let people override event handlers in C++. Second, provide attached QML event handlers that can be configured to get called before or after the respective C++ event handler, like Keys.priority. This gets semantically close to the traditional way of overriding event handlers.
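
For the attached-handler part, the existing Keys attached object already offers that before/after choice; a pointer-event equivalent could mirror it (a minimal sketch using the existing Keys API):

    import QtQuick 2.10

    TextInput {
        // run this handler before the item's own C++ key handling;
        // Keys.AfterItem would run it only if the item did not accept the event
        Keys.priority: Keys.BeforeItem
        Keys.onPressed: {
            if (event.key === Qt.Key_Escape) {
                clear()
                event.accepted = true
            }
        }
    }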

The latter can very well be solved by providing creatable gesture recognizer objects, which are essentially additional filter objects in the tree. I don’t, however, see that as a sustainable solution for basic mouse and touch input handling; it’s like telling people to use Shortcut objects for basic key input handling.

--
J-P Nurmi


