[Development] status and use cases for QScreen

Lorn Potter lorn.potter at gmail.com
Fri Oct 12 21:03:21 CEST 2012


On 13/10/2012, at 12:47 AM, Shawn Rutledge <shawn.t.rutledge at gmail.com> wrote:

> I got started working on QScreen, its properties, notifiers, and
> implementation on all 3 platforms after I noticed that the
> documentation was out of sync with the implementation in Qt5.

[on a side note]
While getting reacquainted with QtSystems, I noticed that if brightness, contrast, and maybe backlight state were added from QtSystems to QScreen, we could get rid of QDisplayInfo, which is now mostly just a wrapper around QScreen.
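
To make the idea concrete, something along these lines inside the QScreen class declaration (purely hypothetical: none of these properties exist today and the names are invented):

// Hypothetical additions to the QScreen declaration, folding in the
// display-related bits of QDisplayInfo. Nothing below exists in Qt 5;
// it is only a sketch of the suggestion above.
    Q_PROPERTY(int brightness READ brightness NOTIFY brightnessChanged)
    Q_PROPERTY(int contrast READ contrast NOTIFY contrastChanged)
    Q_PROPERTY(bool backlightOn READ isBacklightOn NOTIFY backlightStateChanged)

public:
    int brightness() const;     // 0..100, or -1 if the platform cannot report it
    int contrast() const;       // 0..100, or -1 if the platform cannot report it
    bool isBacklightOn() const;

Q_SIGNALS:
    void brightnessChanged(int brightness);
    void contrastChanged(int contrast);
    void backlightStateChanged(bool on);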


> Another issue is that on Mac OS X, I used the API available to detect
> "mirror sets" (the case when you are showing the same desktop on both
> your "main" screen and a secondary one), and to detect which one is
> the primary of the mirror set.  I thought it might be less confusing
> and bug-prone to just ignore the non-primary display, because why
> would an application care about a screen which is just a copy of the
> primary one, and doesn't add anything to the virtual desktop?  

Because that mirroring can change to side-by-side at any time.

What if the non-primary screen is not the same size? It would have different properties, so it probably shouldn't be ignored.

I guess it depends on what you mean by 'ignore'.


> But
> today I realized that making it behave the same on Linux is harder,
> because you can use xrandr to arrange screens so they are
> side-by-side, partially overlapping, or fully overlapping.

Sometimes when dealing with hardware-level stuff you cannot make the API act the same on every platform; you have to go with what the platform does, as that is what its users expect.

>  So one
> screen will not necessarily be forced to be an exact mirror of
> another, and therefore I cannot say for sure which one is the primary
> and which one ought to be ignored.  So you will always get two QScreen
> instances in that case, and that has me thinking maybe it should be
> the same on OSX.  You would need to pay attention to which one is
> first in the list of virtualSiblings; according to the docs, the
> primary is guaranteed to come first.  But what if you have 3 screens:
> 2 are mirrored and one is not?  Then maybe you need to pay attention
> to the geometry to decide which parts of which screens overlap which
> others.  xrandr is so flexible that it's hard to make assumptions.
> 
> Anyway the main purpose of starting this thread is to talk about use
> cases for this stuff.  There are two I can think of: the most common
> might be if you want to write a presentation tool with Qt, you might
> want to know stuff about the screen in the presenter's laptop and
> whatever type of large screen is showing the presentation.  They might
> have different geometry (until recently, high-res projectors were too
> expensive, or you might even be using a composite TV output in the
> worst case).  They might be mirrored so that you can see what the
> audience is seeing, without having to turn around and look; or they
> might be separate, so that you can show a full-screen slide for the
> audience and use the laptop screen to see a slide sorter, to take
> notes, and otherwise plan and control the presentation.  I don't do
> much of this myself, so I'm not sure which way is most preferred at
> this time.

I think both of these are equally valuable.
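
A rough sketch of the separate-screens arrangement, assuming only the current QScreen/QWindow API; the two window parameters are placeholders, and whether setGeometry-then-showFullScreen lands on the intended screen can vary by platform:

#include <QGuiApplication>
#include <QScreen>
#include <QList>
#include <QWindow>

// Sketch: slides go full-screen on the first non-primary screen (the
// projector), presenter controls stay on the primary (laptop) screen.
static void placePresentationWindows(QWindow *slides, QWindow *presenterView)
{
    QScreen *primary = QGuiApplication::primaryScreen();

    presenterView->setScreen(primary);
    presenterView->showMaximized();

    const QList<QScreen *> screens = QGuiApplication::screens();
    for (QScreen *screen : screens) {
        if (screen != primary) {
            slides->setGeometry(screen->geometry());
            slides->showFullScreen();
            return;
        }
    }
    slides->showFullScreen(); // single-screen fallback
}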

> 
> The second use case I can think of is when you write an application
> that has multiple windows, with content in one window and floating (or
> dockable) windows for controls etc.  For example, the Gimp.  Suppose
> you have a Cintiq for the drawing, and a touchscreen which you would
> like to use for the rest of the controls (the toolbox, layers dialog
> and so on).  As long as you have a mouse, and both screens are part of
> the same virtual desktop, you can arrange the windows that way, so the
> Gimp doesn't really need to be aware, but maybe some applications
> would want to do this kind of smart layout to spare the user the need
> to do it manually.

It might need to know this for remembering window positions on app startup.
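
For example (only a sketch; the settings keys and organization/app names are invented), saved geometry could be keyed on QScreen::name() so the window goes back to the screen it was last on:

#include <QGuiApplication>
#include <QScreen>
#include <QList>
#include <QSettings>
#include <QWindow>

// Sketch: save and restore a window's screen and geometry so it comes
// back where the user left it.
static void saveWindowPlacement(QWindow *w)
{
    QSettings settings("SomeOrg", "SomeApp");   // made-up names
    settings.setValue("lastScreen", w->screen()->name());
    settings.setValue("lastGeometry", w->geometry());
}

static void restoreWindowPlacement(QWindow *w)
{
    QSettings settings("SomeOrg", "SomeApp");
    const QString screenName = settings.value("lastScreen").toString();
    const QList<QScreen *> screens = QGuiApplication::screens();
    for (QScreen *screen : screens) {
        if (screen->name() == screenName) {
            w->setScreen(screen);
            break;
        }
    }
    w->setGeometry(settings.value("lastGeometry", w->geometry()).toRect());
}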

>  Perhaps a vertical-market studio app for print
> layout or animation or movie editing.

A video editing or switching app that can display rendered output on (dumb) screens with different properties, or a DAW where you might want the mixer window on one screen and a touch-controlled transport widget on another.

>   Also, if you have a Cintiq you
> really want to map the pen to just cover the area of the screen
> itself, pixel-for-pixel.  X11 by default will map it like a mouse,
> covering the whole virtual desktop, so small movements of the pen
> result in large movements of the cursor.  This can be fixed.  I have
> an early-model XGA cintiq at home, and managed to configure it so that
> the pen can cover just the Cintiq while the mouse can go anywhere.
> But I wonder if it would be useful for Qt apps to be able to get
> information about which input devices are available on which screens.
> If there was metadata for an input device analogous to that for
> QScreen, with a QRect geometry property, maybe that would be enough.
> Or maybe the QScreen ought to have a list of input devices which can
> go into that screen's space.  The need for this may become more
> ubiquitous with tablets: again you might write a presentation app for
> a tablet, which might have an HDMI output, but if the touchscreen is
> the only input device, then you do not want to put any interactive
> controls onto the HDMI screen.  But whether we can get this kind of
> information on every supported OS is questionable.
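
If it went the QScreen route, it could look something like this; entirely hypothetical, nothing like it exists in Qt today:

#include <QList>
#include <QRect>
#include <QString>

// Hypothetical metadata for an input device, analogous to QScreen,
// with a geometry in virtual-desktop coordinates.
class QInputDeviceInfo
{
public:
    enum Type { Mouse, TouchScreen, TabletPen };

    Type type() const;
    QString name() const;

    // The region of the virtual desktop this device is mapped to,
    // e.g. just the Cintiq's own area rather than the whole desktop.
    QRect geometry() const;
};

// A possible QScreen addition: the devices whose geometry intersects
// this screen, so a presentation app could tell that the HDMI output
// has no input device and avoid putting interactive controls there.
// QList<QInputDeviceInfo> QScreen::inputDevices() const;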

> What other use cases do you have in mind for this stuff?




Lorn Potter
Senior Software Engineer, QtSensors/QtSensorGestures






