[Development] Setting up time-based releases for the project

joao.abecasis at nokia.com
Sat Aug 11 02:34:34 CEST 2012


Sven Anderson wrote:
> On 07.08.2012 13:09, joao.abecasis at nokia.com wrote:
>> While the two setups are very similar, almost isomorphic, they're not
>> exactly so. There are important practical consequences that
>> distinguish the two.
>> 
>>      - Releases happen on a fixed schedule
>>      - Minor versions have a defined lifetime
>>      - The number of patch releases is limited by default.
>> 
>> These give predictability and focus to everyone participating in the
>> project. It gives everyone something to align to.
> 
> I understand the advantages of these points and fully support them. I
> just wonder, why you need the parallel rolling branches for it. Can't
> we just establish the fixed scheduling in the classical branches?
> Instead of fixed merge-down-days we would have fixed branch-days.

We have tried to implement it this way in the past with little success
in terms of getting the release schedule closer to a 6-month cycle.

In practice, there's some overhead to having explicit decision steps and
a manual process (announcing "feature freeze", creating new branches,
setting up CI for those, having devs switch "focus branch") that are all
too easy to skip. Small delays here and there quickly compound.

The parallel rolling branches try to minimize decision points and keep
the actions required to make progress to a bare minimum. For instance,
there is no need to announce a feature freeze: fire-hose is never in
feature freeze, and all other branches are always frozen except for the
scheduled merges from upstream.

On the other hand, for a virtual 5.1.1 branch, only a merge is required
to advance to the next development stage. People tracking downstream
branches will have everything set up and ready to test: if not the new
features, then at least everything else that should still be working.
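To make the above concrete, here is a minimal sketch of the scheduled
merge-down in git. The branch names (fire-hose, alpha, beta) and the
throwaway repository are illustrative assumptions for this example, not
the project's actual setup:

```shell
set -e
# Throwaway repository so the sketch is self-contained.
dir=$(mktemp -d)
cd "$dir"
git init -q repo && cd repo
git config user.email dev@example.com
git config user.name Dev

echo base > file && git add file && git commit -qm "initial"
# Downstream quality branches exist permanently; they roll from
# release to release instead of being recreated per cycle.
git branch alpha
git branch beta

# Development always lands on fire-hose; it is never frozen.
git checkout -q -b fire-hose
echo feature >> file && git commit -qam "new feature"

# On the scheduled day, advancing a stage is just a merge: no freeze
# announcement, no new branches, no CI reconfiguration.
git checkout -q alpha
git merge -q fire-hose -m "scheduled merge-down: fire-hose -> alpha"
git checkout -q beta
git merge -q alpha -m "scheduled merge-down: alpha -> beta"
```

After the merges, anyone tracking beta has the new code checked out and
ready to test without switching branches.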

>> There are other practical consequences. As a developer, you don't
>> have to worry about *when* to do or merge a specific change. You get
>> it up to snuff and decide *where* to apply it (i.e., on which
>> branch).
>> 
>> The fact that the branches roll from release to release means anyone
>> tracking development branches decides how much pain they are willing
>> to take and stay the course. You don't have to wait for the next
>> branch to come along so you can jump to it.
> 
> Ok, here I see the point. Tracking a certain level of code quality is
> easier with rolling branches. OTOH it's probably easy to install
> automatic commit-aliases that track whichever branch currently has a
> specific quality status, like "beta" or "rc".

If we need to create specific branches for every minor or patch release
cycle, then that means additional work and decisions to be made.
For instance, when do we stop testing and accepting patches to the 5.0
branch?

>> "Quality" in a way jumps up and down with the merges, but I don't
>> think we can eliminate these jumps at the moment and in reality they
>> are not introduced by the proposed model.
> 
> Of course we can't eliminate the quality changes. That's why I asked
> if we shouldn't better use a model, that makes that fact explicit
> (transition focused rather than level focused) by branches that (more
> or less monotonically) increase in quality until end of life
> (alpha->beta->release->security-fixes). That would at least eliminate
> the jumps.

My opinion here is that bringing ("lower quality") code to wider
distribution and testing is what increases its quality (from additional
bug reports and more targeted bug fixes). The purpose of the merges is
to ensure code gets wider exposure. The purpose of the policy is to
limit the changes in the code.

I don't think we can bring the quality up by holding the code back
longer. So, in a way, bumps in quality are fundamental to bringing the
quality up, longer term.


João


