[Development] Proposal: Allow contributors to +1 sanity review.

Alan Alpert 416365416c at gmail.com
Tue Aug 13 20:31:55 CEST 2013

On Tue, Aug 13, 2013 at 11:18 AM, Oswald Buddenhagen
<oswald.buddenhagen at digia.com> wrote:
> On Tue, Aug 13, 2013 at 10:26:42AM -0700, Alan Alpert wrote:
>> On Tue, Aug 13, 2013 at 2:22 AM, Oswald Buddenhagen
>> <oswald.buddenhagen at digia.com> wrote:
>> > On Mon, Aug 12, 2013 at 02:32:34PM -0700, Alan Alpert wrote:
>> >> So all that happens is that there will be cases where the bot gives a
>> >> false positive and contributors need to ping again for sanity review -
>> >> a needless step.
>> >>
>> > actually, it's a very useful step. people are trying to override the bot
>> > all the time because they don't get the intentions behind a particular
>> > report
>> Or they get that the intention of a heuristic is to guess, and realize
>> that it's not always right?
> we hope they do. the likelihood that they do is way higher when
> approvers make the decision.
>> > (or because they enjoy outsmarting the system just for the sake
>> > of it, like you apparently do)
>> We *define* the system. The fact that I have to work around it in
>> order to do my job properly shows that the system is defective and
>> needs to be rectified.
> or it means that you are not doing your job right, and are trying to
> find loopholes to outsmart the system.

If the bot can do my job better than I can, why don't we let it write
the code for us :P ?

The "system" in question is a script that you wrote, based on a wiki
page which I believe you also wrote (some history seems lost). Is some
outside scrutiny allowed, or can we not even question the "system"?

>> A more effective change which I recommend would be to have the bot not
>> give -1 on the heuristics which are known to give false positives more
>> frequently.
> at this point there are no heuristics which are known to give false
> positives often. when you often get a -1 for something you consider ok
> by definition, then it's time to rethink your position and look for
> reasons.

Three heuristics are known to give "false positives" often, where a
false positive means that overriding is the correct decision (it's
often still valuable to be informed about the potential issue), and
where "often" means I see it maybe once a month on average:

A) Adding nice images, which happens a lot whenever we update the QML
B) Mixing WS changes, which shows up in a lot of doc rewrites/fixes
and is not a conceptually separate change in that context.
C) Missing license header, due to the QML auto test data. This is
under a separate discussion as to whether the exemption should be
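For concreteness, heuristics like B and C above amount to simple pattern checks over a commit. Here is a minimal, purely hypothetical sketch of such checks; the real sanity bot's implementation, rule names, and license-header matching are all different:

```python
# Hypothetical approximation of two sanity-bot heuristics:
# (B) mixing whitespace-only changes with content changes, and
# (C) new files missing a license header. Not the real bot's code.

LICENSE_MARKER = "Copyright (C)"  # stand-in; the real check matches Qt's full header


def whitespace_only_change(old_line: str, new_line: str) -> bool:
    """True if the two lines differ, but only in whitespace."""
    return old_line != new_line and "".join(old_line.split()) == "".join(new_line.split())


def check_commit(changed_pairs, new_files):
    """Return heuristic warnings for a commit.

    changed_pairs: list of (old_line, new_line) tuples from the diff.
    new_files: mapping of new file path -> file contents.
    """
    warnings = []
    ws = sum(whitespace_only_change(o, n) for o, n in changed_pairs)
    content = sum(o != n and not whitespace_only_change(o, n) for o, n in changed_pairs)
    if ws and content:
        warnings.append("mixing whitespace-only and content changes")
    for path, text in new_files.items():
        if not text.startswith(LICENSE_MARKER):
            warnings.append("missing license header: " + path)
    return warnings
```

Heuristic C is exactly the case where QML auto-test data files trip the check: they are legitimately header-less, so the "false positive" is correct to override.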

>> > having an approver (i.e., somebody who
>> > is expected to be familiar with the coding style, commit policy, etc.)
>> > take the decision to override the bot is very much part of the plan.
>> The point is that the approver has already decided to approve.
> nonsense. they approved the code and the text. they didn't necessarily
> think of the subtleties of the process and the formalities of what makes
> a good git commit, or are among the whitespace-blind themselves. to
> verify sanity reports many people need to change into an entirely
> different mindset. so it is a *good* thing that it needs to be done
> separately.

So this fits your workflow better. I don't think the proposed change
will invalidate your personal workflow. Is it possible for you to
imagine that other people think differently?

> i for one *always* do sanity overrides in a separate review, simply so
> it's clear that the comment i write (you know that you are required to
> add a plausible justification, right?) relates to the sanity review.

The case under review was one where the justification/discussion of
the sanity output came up immediately, was discussed specifically in
the comments, and was +1'd for sanity review at that time. Then there
were several more patch sets not affecting the outcome of that
discussion, but they did undo the +1.

Alan Alpert
