[Development] need to handle touch events rather than depending on mouse event synthesis

Shawn Rutledge shawn.t.rutledge at gmail.com
Thu Mar 1 15:26:00 CET 2012


On 1 March 2012 10:48, Alan Alpert <alan.alpert at nokia.com> wrote:
> The overlapping MouseArea/TouchArea is an interesting idea, and might explain
> why we'd need a TouchArea when it's virtually identical to the "PointerArea"
> element. But then we'd have three area interaction elements that are virtually
> identical, with a few slight differences in functionality (like wheel events)
> and some name changes (onTap instead of onClicked despite identical
> functionality...).
>
> Perhaps we could just add an enum to MouseArea? EventType { MouseEvents,
> TouchEvents, MouseAndTouchEvents (default) }. That would allow you more
> fine-grained control, with identical default behavior that doesn't require
> event synthesis that messes with other elements. Not to mention that in the
> common case you don't care what pointy thing they used to say 'do that';
> there are devices with both mouse and touch, and often the app really
> doesn't care which it was.
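
For concreteness, I suppose that would look something like this (the
property and enum names below are hypothetical, just a sketch of the
idea):

    import QtQuick 2.0

    MouseArea {
        anchors.fill: parent
        // hypothetical property; MouseAndTouchEvents would be the default
        eventTypes: MouseArea.MouseAndTouchEvents
        onClicked: console.log("activated by mouse click or touch tap")
    }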

This has the advantage of fewer QML elements to know about, but I
still wonder how silly the name will sound in a few years; it already
does sound silly on mobile devices.  Another thing is that if you
expect a right-click or a wheel movement, those are usually emulated
with gestures on touch devices, but should the do-everything MouseArea
really be responsible for doing that too?  We already have GestureArea
right?  I misspoke about having PointingArea be just like MouseArea; I
was thinking it would actually not have the multi-button concept, thus
no right-clicking, middle-clicking, back/foward or other buttons, or
wheel.  You would still need MouseArea because those events are
mouse-specific.
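
To make that concrete, I picture something like this (the element
name, the handler names and the 'point' event parameter are all
placeholders; nothing is decided):

    PointingArea {
        anchors.fill: parent
        // a single logical point: no buttons, no wheel
        onPressed: console.log("pressed at", point.x, point.y)
        onPositionChanged: console.log("moved to", point.x, point.y)
        onReleased: console.log("released")
        // plus one handler for the click/tap analog, name TBD
    }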

Renaming would be easier now than later when there are way more apps
using it.  But if we can live with the retro-sounding name for the
foreseeable future, and if we agree that for touch devices, the
mouse-emulation gestures should be in MouseArea but the rest of the
possible gestures should not, it would save some work for the existing
apps and QML component sets.  Right-clicking is pretty useful after
all, regardless of whether it is done with a real right mouse button,
or emulated via touch.
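
Today handling it looks like this in a MouseArea; under option #2
below, a touch gesture such as press-and-hold could synthesize the
same right-button event without the app changing anything:

    import QtQuick 2.0

    MouseArea {
        anchors.fill: parent
        acceptedButtons: Qt.LeftButton | Qt.RightButton
        onClicked: {
            if (mouse.button == Qt.RightButton)
                console.log("context menu requested")
            else
                console.log("normal activation")
        }
    }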

Even now, I think if a hypothetical QML component needed gesture
recognition, pinch-zoom functionality and dragging all on the same
area, it would need to have stacked Areas, right?  Of course I should
try it before assuming that it already works.  ;-)
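
Something like this, I mean (untested, as I said; PinchArea and
MouseArea stacked over the same content, leaving GestureArea aside):

    import QtQuick 2.0

    Item {
        width: 400; height: 300

        Image { id: photo; source: "photo.png" }  // content to manipulate

        PinchArea {
            anchors.fill: parent
            pinch.target: photo
            pinch.minimumScale: 0.5
            pinch.maximumScale: 4.0

            // stacked on top: drags and clicks go to the MouseArea,
            // while two-finger pinches go to the PinchArea
            MouseArea {
                anchors.fill: parent
                drag.target: photo
                onDoubleClicked: photo.scale = 1.0
            }
        }
    }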

We could try to have one InteractionArea that does it all, but it
might well get unwieldy over time as new devices are introduced (see
the sketch below).  We would need good generic names for every event
type we can imagine now, and be prepared to add more later on.
Before multi-touch devices were actually introduced, I would probably
have failed to imagine that the event could include the size, shape
and angle of the finger or other object, as either an ellipse or a
blob; but evdev apparently supports that.  If the old Sun Starfire
mockup were ever really implemented, the multi-touch surface would
become a scanner too; then we would need to distinguish fingers from
donuts and coffee cups (some devices can already reject palms and see
just the fingers even today), scan any image that is intentionally
pressed against the surface, OCR any text, and treat any of these
items as "input".  Each item has a 2D location on the screen just as
a finger does now.  So if that happens, QML could add an ImageScanArea
or some such, and that would be yet another Area type which
applications would need to start using.
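
Just to show how the naming problem compounds, the all-in-one element
might eventually end up looking like this (every name below is
invented):

    InteractionArea {
        anchors.fill: parent
        onActivated: console.log("tap or click")
        onSecondaryAction: console.log("right-click or press-and-hold")
        onScrolled: console.log("wheel or two-finger pan")
        onPinchUpdated: console.log("pinch gesture")
        onBlobChanged: console.log("finger ellipse or blob, via evdev")
        onImageScanned: console.log("Starfire-style surface scan")
        // ...plus whatever the next device category requires
    }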

The broader question is: should it be considered normal and healthy
for applications and components to stack up Areas for all possible
kinds of input that they know how to support, or should we try to
combine them more than they are now?  To me the stacking seems more
likely to survive future changes, assuming that it works well enough.

> It doesn't solve the name issue, but that one is a difficult one because it
> leads to a lot of API differences which are purely naming. I'd rather use a
> MouseArea's onClicked signal for a touch UI than have to switch to using
> TouchArea's onTapped everywhere just because this is the mobile app UI.
> PointerArea's onJab (onPointedWithEnthusiasm? We've run away from the metaphor

onPointSelected maybe?

> a little here...) might not solve this, but it would feel like an unnecessary
> change during the transition period even if it did.

BTW back in the '80s I knew an old civil engineer who was new to
computers (more of a slide-rule guy) and thought that "mouse" referred
to what we usually call the cursor (the arrow on the screen).  I've
also seen in the context of CAD digitizing tablets that the puck you
move around on the tablet can be called a cursor, especially if it has
crosshairs for accurately digitizing existing drawings.  If this
confusion occurs again after younger generations forget about physical
mice, maybe the MouseArea name won't be so bad after all.

Anyway, right now I think we need to reach a conclusion on whether to:
1) leave MouseArea alone and add PointingArea (or another name), with
single-click only and a generic handler name for that
2) add touch support (touches emulating the mouse) to MouseArea
itself, covering every feature that MouseArea supports
3) same as #2 but leave something out
4) same as #2 but rename it anyway
5) any better ideas?


