[Development] resolution independence (was Re: Retina display support)

Shawn Rutledge shawn.t.rutledge at gmail.com
Thu Oct 4 17:09:18 CEST 2012

On 4 October 2012 16:27, Tony Van Eerd <tvaneerd at rim.com> wrote:
> I think proper resolution dependence and independence requires, at times at least, 2 separate measurements for each widget:
> visual measure:
>         - physical measurement is in arcminutes (i.e. the angle between <top-of-dot, your-eye, bottom-of-dot> for the smallest dot you can see - i.e. discerning the last line of an eye chart)
>         - note that this is dependent on how far (average if necessary) the user is from the screen
>         - need a scale-factor for audiences/users/situations that do not have 20/20 vision
>         - so visual measurements should be in "logical arcminutes" or something like that - arcminutes * scale_factor
>         - In many cases, until recently, the visual measure == pixels.
> touch measure:
>         - physical measurement is in mm, but also needs a scale-factor for given situations
>         - i.e. "big thumbs" or wearing gloves or physical impairments, etc - or even just for touch-novices or per situation (touch in car vs touch on device)
> The problem, of course, is that both these measurements need to be rendered onto the same surface.

If you declare

Rectangle {
    width: 20mm
    height: 10mm
    scale: 1.5
}
then you'd expect it to be 30mm wide, right?  And if you want to make
it global, it can be bound to a zoom value for the whole app.

> We also need to be able to get "real" units, in order to render, for example, a ruler. (But is that a ruler on the device screen, or a ruler on the attached 55inch monitor...).

Maybe it's theoretically possible to choose which QScreen is relevant.
I discovered that OSX has a means to find out which screens are
"mirrors", but so far the code is ignoring those, and will create a
QScreen instance only for the primary screen of the mirror set.
Perhaps one of the most common use cases is to have your laptop
display mirrored on a projector, and then you really don't know what
size the image is, because as far as I know the projector doesn't have
a way of knowing how far away the screen is.  (I guess they could use
laser or ultrasonic distance measurement or feedback from the zoom and
focus mechanisms or some such, but I haven't heard of it.)  That
reminds me I need to try my patches with a projector and see what happens.

But if QScreen had another property mirrorSets, like virtualSiblings,
in which you are guaranteed that the first one is the primary, then
maybe your app could choose to scale all of its mm dimensions with
respect to the screen that you choose.  Just an idea... I'm not sure
what the API would look like for that.
