About a week ago,
@tomasino published a post on his contract-based dependency management idea (aka
CBDM), and I would be lying if I said I didn’t like it.
Not only does it provide a better model for dependency management than SemVer or any other versioning scheme, but it also:
- provides strong incentive for developers to maintain extensive test suites for their own software;
- provides strong incentive for developers to help developers of their project’s dependencies maintain extensive test suites, too;
- provides very clear and unambiguous information on whether some functionality or behaviour of the dependency is, in fact, officially supported by the dependency’s developers;
- provides very clear and unambiguous information on whether some functionality or behaviour of the dependency has changed;
- makes it very, very clear who done goofed if a dependency upgrade breaks a dependent project.
The basic idea boils down to this: when deciding if a given version of a given dependency is compatible with a dependent piece of software, instead of relying on version numbers – rely on tests that actually verify the functionality and behaviour that piece of software depends on.
In other words, when considering updating dependencies of a project, don’t look at version numbers, but look at tests of the dependency (and their results).
Tomasino’s post goes into more detail and is well-worth a read.
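The upgrade check this implies can be sketched in a few lines of Python. Everything below is hypothetical: the post deliberately leaves open how tests are identified and fingerprinted, so the names and the fingerprint values here are invented for illustration.

```python
# Sketch of a CBDM upgrade check. The contract maps each upstream
# test's name to a fingerprint of its code at the time the contract
# was written; the candidate dependency version reports, per test,
# its current fingerprint and whether it passes.

def contract_satisfied(contract, upstream_tests):
    """Decide whether a candidate dependency version fulfils the contract.

    contract: dict of test name -> expected fingerprint
    upstream_tests: dict of test name -> (fingerprint, passed)
    Returns (ok, reason).
    """
    for name, expected_fp in contract.items():
        if name not in upstream_tests:
            return False, f"{name}: test no longer exists upstream"
        fingerprint, passed = upstream_tests[name]
        if fingerprint != expected_fp:
            return False, f"{name}: test was modified upstream"
        if not passed:
            return False, f"{name}: test is failing"
    return True, "contract fulfilled"

# AProject's (hypothetical) contract with LibBee: two pinned tests.
contract = {"test_parse": "aab3", "test_render": "17fe"}

# A newer LibBee version: test_parse unchanged and green,
# but test_render was modified upstream.
newer = {"test_parse": ("aab3", True), "test_render": ("9c04", True)}

ok, reason = contract_satisfied(contract, newer)
print(ok, reason)  # False test_render: test was modified upstream
```

Version numbers never enter the check: the upgrade decision rests entirely on whether the contracted tests still exist, are unchanged, and pass.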
What’s wrong with version numbers?
Version numbers are notoriously unreliable in predicting if something breaks after an upgrade. That’s the whole point of SemVer – to try to make them more reliable.
The problem is that it’s impossible to express, in a set of just a few numbers, all the dimensions in which a piece of software might change. More importantly, certain changes might be considered irrelevant or minor by the developers, but might break projects that depend on some specific peculiarity.
Cue specifications, and endless debates about whether a particular change breaks the specification.
How could CBDM work in practice?
Let’s say I’m developing a piece of software, call it
AProject. It depends on a library, say:
LibBee.
LibBee developers are Gentlefolk Scholars, and therefore
LibBee has quite extensive test coverage.
As the developer of
AProject, I specify the dependency not as a name plus a version range, but as:
LibBee, (list of upstream tests I need to be unchanged, and to pass)
(Bear with me here and let’s, for the moment, wave away the question of how exactly this list of upstream tests is specified.)
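To make the idea concrete, one could imagine the contract simply as a mapping from upstream test names to fingerprints of their code. The shape below is a sketch under that assumption; the field names and fingerprint values are invented, since the real format is exactly the open question being waved away.

```python
# A hypothetical shape for AProject's dependency contract with LibBee.
# Field names and fingerprint values are invented for illustration;
# how tests are identified and fingerprinted is left open.
LIBBEE_CONTRACT = {
    "dependency": "LibBee",
    "tests": {
        # test name -> fingerprint of the test's code at the
        # time the contract was written
        "test_parse_empty_input": "3f8a12c9",
        "test_roundtrip_encoding": "b04d77e1",
    },
}
```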
This list does not need to contain all of
LibBee’s tests – in fact, it should not contain all of them as that would effectively pin the current exact version of
LibBee (assuming full coverage; we’ll get back to that). However, it should contain tests covering all of
LibBee’s functionality and behaviour that
AProject relies on.
This set of tests becomes a contract. As long as this contract is fulfilled by any newer (or older) version of
LibBee, I know it should be safe to upgrade without breaking
AProject.
What if a
LibBee upgrade breaks
AProject?
I say “should”, because people make mistakes. If upgrading
LibBee breaks
AProject even though the contract is fulfilled (that is, none of the specified tests have been modified, and all are passing), there is basically only a single option:
AProject relied on some functionality or behaviour that was not in the contract.
That makes it very clear who is responsible for that unexpected breakage: I am. I failed to make sure the contract contained everything I needed. Thus a long and frustrating blame-game between myself and
LibBee’s developers is avoided. I add the missing test to the contract, and deal with the breakage as in any other case of a breaking dependency change.
AProject just got a better, more thorough dependency contract, and I didn’t waste any time (mine or
LibBee’s developers’) blaming anyone for my own omission.
What if the needed upstream test does not exist?
If a test does not exist upstream for a particular functionality or behaviour of
LibBee that I rely on, it makes all the sense in the world for me to write it, and submit it as a merge request to
LibBee.
When that merge request gets accepted by
LibBee’s developers, it clearly means that functionality or behaviour is supported (and now also tested) upstream. I can now add that test to
AProject’s dependency contract.
LibBee just got an additional test contributed and has more extensive test coverage, for free. My project has a more complete contract and I can be less anxious about dependency upgrades.
What if the needed test is rejected?
If
LibBee’s developers reject my merge request, that is a very clear message that
AProject relies on some functionality or behaviour that is not officially supported.
I can either decide to roll with it, still add that test to the contract, and keep the test itself in
AProject to check each new version of
LibBee when upgrading; or I can decide that this is too risky, and re-write
AProject to not rely on that unsupported functionality or behaviour.
Either way, I know what I am getting into, and
LibBee’s developers know I won’t be blaming them if they change that particular aspect of the library – after all, I’ve been warned, and have a test to prove it.
You guessed it: win-win!
Abolish version numbers, then?
No, not at all. They’re still useful, even if just to know that a dependency has been upgraded. In fact, they probably should be used alongside a test-based dependency contract, allowing for a smooth transition from version-based dependency management to CBDM.
Version numbers work fine on a human level, and with SemVer they carry some reasonably well-defined information. They are just not expressive enough to rely on them for dependency management. Anyone who has ever maintained a large project with a lot of dependencies will agree.
Where’s the catch?
There’s always one, right?
The difficult part, I think, is figuring out three things:
- How does one “identify a test”?
- What does it mean that “a test has not changed”?
- How to “specify a test” in a dependency contract?
The answers to 1. and 2. will almost certainly depend on the programming language (and perhaps the testing framework used), and will almost certainly mostly define the answer to 3.
One rough idea would be:
- A test is identified by its name (basically every unit-testing framework provides a way to “name” tests, often requiring them to be named).
- If the code of the test changes in any way, the test is deemed to have changed. It probably makes sense to normalize the code first (strip whitespace and comments), so that formatting changes don’t invalidate the contracts of all dependent projects.
- If a test is identified by its name, using that name in the contract is the sanest option.
I really think the idea has a lot of merit. Software development is becoming more and more test-driven (which is great!), so why not use that to solve dependency hell too?