Continuous integration: Pipelines or jobs first?

Tito Sarrionandia
Feb 4, 2021

Recently I found myself wanting to express an opinion about continuous integration pipelines to my team, but I lacked the language to do it. I’ve been searching the web for days now, and asking on Twitter, and I still can’t find any good description of what I’m talking about. So I’m going to try to describe it here.

I’m kind of hoping that somebody sees this and then points out that I wasted my time because it’s already well described elsewhere, so I’ll finally have a resource to point at!

The thing I want to describe is a design difference between continuous integration systems. I’m going to call it pipeline-first vs job-first continuous integration.

Job-first continuous integration systems seem to be most common right now. Confusingly, they all claim to be pipeline based, but I think they mean something subtly different to the idea of “pipeline-first” that I’m trying to get across.

To a job-first system, the whole world can be seen as a number of jobs, or tasks, or build stages, that you perform on a given change. Good examples are CircleCI, GitHub Actions, and (I think) Azure Pipelines.

Job-first systems are probably simpler to get your head around. Every time a change is made, the codebase as it existed after that change is run through some jobs. Often, these jobs are defined in that exact same codebase, which makes everything lovely and repeatable for any given change.

There can be multiple jobs for each change, and those jobs can be run sequentially, in parallel, or some mixture of the two. If you string a few jobs together, you’ve got a pipeline.

Every subsequent change generates its own set of jobs, and these run completely independently from the last change.
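
If it helps to see the shape of this in code, here's a tiny sketch of job-first behaviour in plain Python (purely illustrative: the job names and the failing commit are made up, and no real CI system exposes an API like this). Each commit gets its own independent run of the whole job sequence, and the runs never wait on each other:

```python
# A minimal sketch of job-first semantics (illustrative only, not a real CI API).
import threading

JOBS = ["download_src", "build_and_test", "release_to_staging"]

def run_job(job, commit):
    # Stand-in for real work; pretend commit "c2" breaks the tests.
    return not (job == "build_and_test" and commit == "c2")

def run_for_commit(commit):
    # Each commit runs the WHOLE job sequence in its own sandboxed run.
    for job in JOBS:
        if not run_job(job, commit):
            print(f"{commit}: failed at {job}")
            return  # only this commit's run stops; the others are unaffected
    print(f"{commit}: passed every job")

# One independent run per commit; nothing queues behind anything else.
threads = [threading.Thread(target=run_for_commit, args=(c,)) for c in ("c1", "c2", "c3")]
for t in threads:
    t.start()
for t in threads:
    t.join()
```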

This is different to a pipeline-first system. Pipeline-first systems appear similar: They also allow you to define a number of jobs in sequence and in parallel, they let you string these together into a pipeline, and they let you define these in the same codebase as the thing being built.

The crucial difference, though, is that the pipeline is the first-order object. That is, you have a single pipeline and all of your changes move down it sequentially, instead of every change having its own pipeline that runs in a sandbox. Examples of this pattern are Go-CD, AWS CodePipeline, and ConcourseCI.
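
Here's the matching Python sketch of pipeline-first behaviour (again purely illustrative: this is not how Go-CD or Concourse are actually implemented). There is one pipeline; each stage is a single worker fed by a queue, so commits move through every stage strictly in commit order, while different stages can still work on different commits at the same time:

```python
# A minimal sketch of pipeline-first semantics (illustrative only).
import queue
import threading

STAGES = ["download_src", "build_and_test", "release_to_staging"]

def run_stage(name, inbox, outbox):
    while True:
        commit = inbox.get()
        if commit is None:          # no more commits: shut the stage down
            if outbox is not None:
                outbox.put(None)
            return
        ok = not (name == "build_and_test" and commit == "c2")  # pretend c2 fails
        print(f"{name}: {commit} {'ok' if ok else 'FAILED'}")
        if ok and outbox is not None:
            outbox.put(commit)      # hand the commit down the conveyor belt

inboxes = [queue.Queue() for _ in STAGES]
workers = []
for i, name in enumerate(STAGES):
    outbox = inboxes[i + 1] if i + 1 < len(STAGES) else None
    worker = threading.Thread(target=run_stage, args=(name, inboxes[i], outbox))
    worker.start()
    workers.append(worker)

for c in ("c1", "c2", "c3"):        # every commit enters the ONE shared pipeline
    inboxes[0].put(c)
inboxes[0].put(None)
for worker in workers:
    worker.join()
```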

The difference in behaviour is small, but I think it’s actually fairly important if you’re trying to optimise for flow. Either design might fit your situation better depending on what you are doing.

Worked example

I’m going to work through an example. In the example I’ll assume all changes are being committed straight to trunk, but I think all of these systems support building on branches too. I’m doing this firstly because it’s way easier to draw, and secondly because it’s the way I like to build software.

[Figure: a pipeline with five stages: Download src → Build and test → Release to staging environment → Manual approval → Release to prod]

Imagine your build process looks like this. It’s a simple linear process made up of 5 steps.

  1. Download src: this is triggered automatically when you push a change to Git.
  2. Next, the downloaded source code goes through build & test. The code is compiled, and some tests are run against it.
  3. The code is then automatically released to a staging environment.
  4. This stage is manual: a developer has to click a button to allow a change to go through.
  5. The final stage releases the code to production.
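
Sketched as plain data (purely illustrative, not any real CI system’s config format), that pipeline might look something like this:

```python
# The example pipeline as plain data (illustrative only, not a real config format).
# Each stage is triggered automatically, except the one gated on a human click.
PIPELINE = [
    {"name": "download_src",       "trigger": "on_git_push"},
    {"name": "build_and_test",     "trigger": "automatic"},
    {"name": "release_to_staging", "trigger": "automatic"},
    {"name": "manual_approval",    "trigger": "human"},
    {"name": "release_to_prod",    "trigger": "automatic"},
]
```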

Now imagine that several commits are made to trunk in quick succession. Commit 1 is a refactor, commit 2 is a new feature, and commit 3 is a fix for a performance issue you are experiencing. Commit 2 unfortunately introduces a defect, which stops the tests from passing, so this problem is fixed in commit 4.

Job-First

On a job-first system, each of the four commits runs through its own pipeline separately, and you’ll end up with output looking something like this. Green means complete, yellow means in progress, red means failed.

[Figure: four versions of the pipeline. Commits 1 and 4 have got to manual approval; commits 2 and 3 failed on build and test.]

Commit 1 makes it all the way to manual approval. Commit 2 fails at build and test, so stops there. Commit 3 also contains the defect introduced in commit 2, so also stops there. Commit 4 fixes the defect, so makes it all the way through to manual approval.

At this point, you could choose to approve either commit 1 or commit 4 for production. You could also choose to release commit 1 first, then commit 4.

Pipeline-First

It’s harder to draw what would happen in a pipeline-first situation; you need to think about how it would look at different points in time.

Here is the pipeline after commit 1 has got all the way to manual approval.

Note that there is now only one pipeline, and each stage tells you what it is currently building, or the last thing it built.

Next, imagine you click manual approval, and just as your code makes it into production, commits 2 and 3 are merged to trunk.

[Figure: there is only one pipeline, and each job is dealing with a different commit.]

Build and test is currently building commit 2. Download src has finished downloading commit 3. This is the next key difference: Commit 3 cannot go through build & test until commit 2 has finished. They are queued up, just like a factory conveyor belt.

Of course, we know commit 2 will fail to get through build and test. Let’s imagine it has failed; at this point commit 3 moves into build and test. Let’s introduce a new colour: orange means “currently building, but the last build was a failure”.
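
To make that state concrete, here’s the whole board at this moment sketched as data (a hypothetical shape, loosely inspired by how these tools render a pipeline, not any real system’s API). Notice it’s one record per stage, not one row per commit:

```python
# One status record per STAGE; each remembers what it's running now
# and the last thing it ran (hypothetical shape, illustrative only).
stage_status = {
    "download_src":       {"now": None,       "last": "commit 3", "result": "passed"},
    "build_and_test":     {"now": "commit 3", "last": "commit 2", "result": "failed"},  # orange
    "release_to_staging": {"now": None,       "last": "commit 1", "result": "passed"},
    "manual_approval":    {"now": None,       "last": "commit 1", "result": "approved"},
    "release_to_prod":    {"now": None,       "last": "commit 1", "result": "passed"},
}
```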

Commit 2 is, at this point, history. On most systems you can click into a job and get it to run an older build if you want. This is sometimes useful if you think that you’re dealing with a flaky test. Commit 2 also cannot be released to staging or production, even if you want to. It won’t ever get there because you can’t skip stages in the pipeline. The factory analogy mostly holds here too: You can’t put the windows on the car unless you’ve put the doors in first.

Next, commit 3 will fail too, but commit 4 has just been downloaded.

Next, commit 4 moves into Build & Test…

Then finally, it passes and moves into manual approval.

Does it matter?

I think it does. I’m struggling to articulate exactly why, but this is my best shot at explaining why I prefer the pipeline-first system:

  • Guaranteeing that one commit cannot pass through a stage before the previous one psychologically prepares me to think about just one change at a time. If I’m the author of commit 3, and a different person wrote commit 2, in the job-first system I’m mostly watching my own commit. In the pipeline-first system I’m watching everyone’s commits move through the same pipeline, and so I’m hyper-aware of the context of my incremental change. Of course I technically can get the same data from the job-first system by looking back at commit 2 and seeing that it introduced the defect, but I’m not constantly tracking how all of my team’s changes are doing.
  • I think it also psychologically sets me up better to practice Andon (everyone stops work when the build breaks and focusses on fixing it). If I’m looking at one single pipeline and watching people’s changes go through it, I might not even have made commit 3. I might have seen that the build went red for commit 2, and so joined the authors in fixing it, so that I know my commit is a change off a reliably clean base. Easier to think about. Again, I technically could do this with a job-first system by having some sort of alerting on all commits, but the shared team view of a pipeline adds significant social pressure on me to behave this way.
  • If I’m aiming for continuous delivery, and looking to remove that manual phase from the pipeline so that every passing commit sails straight into production without human intervention, guaranteeing that the build phases only run in the intended order is actually pretty important. If a phase of my pipeline has a slightly unpredictable run time (this often happens in UI tests that use timeouts and retries), I might end up with two commits racing to production. If the newest commit makes it first, the older commit will then be released on top of it, silently rolling back the newer change (see the sketch after this list).
    Of course, most job-first systems will let you do something clever to prevent this, but I find it takes up less brain space to just have a hard and fast queuing rule. When it comes to production changes, I want to make it as simple as possible to understand so that my dumb monkey brain has half a chance of not screwing it up.
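
To show that race concretely, here’s a small Python sketch (illustrative only; the timings are made up). Two commits run their tests independently, job-first style, and whichever finishes first deploys first, so sometimes the older code goes out last:

```python
# Why unordered deploys are risky (illustrative sketch, not a real CI runner).
import random
import threading
import time

production = []  # the deploy history, newest deploy last

def test_then_deploy(commit):
    time.sleep(random.uniform(0.0, 0.2))  # unpredictable test duration
    production.append(commit)             # no queue: first finished, first deployed

threads = [threading.Thread(target=test_then_deploy, args=(c,)) for c in ("older", "newer")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(production)  # sometimes ['newer', 'older']: the old code shipped last
```

A pipeline-first queue makes this race impossible by construction, because commits can only reach the release stage in order.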

That said, there are definite disadvantages to pipeline-first, notably:

  • If your commit traffic is high and your jobs take a while, a queue can really slow down your development feedback loop.
  • There’s some weirdness where, in general, the pipeline comes first, but the pipeline itself is defined in source, so the structure of the whole pipeline is determined by the newest commit. If you change something, it’s immediately applied across the pipeline, possibly including the definitions of the jobs that older commits are still passing through! 🤯

This feels incredibly subjective — I’d love to know how other people have applied incremental change mindsets in either system. I’d also love to know if these patterns have real names and are actually documented somewhere!
