Modern Software Engineering: Doing What Works to Build Better Software Faster
by Dave Farley
Software engineering is about learning and discovery, so we need to become experts at learning to succeed.
The systems we build are often complex, so we need to become experts at managing complexity.
Five techniques to become experts at learning:
- Working iteratively
- Employing fast, high-quality feedback
- Working incrementally
- Being experimental
- Being empirical
Five techniques to become experts at managing complexity:
- Modularity
- Cohesion
- Separation of concerns
- Information hiding / abstraction
- Managing coupling
We use those 10 techniques to steer development. And we use these 5 ideas as practical tools to drive an effective strategy for development:
- Testability
- Deployability
- Speed
- Controlling the variables
- Continuous delivery
Engineering is not the code; code is the product of engineering. Engineering is about applying scientific rationalism to solving problems.
We don't really have good ways to measure our performance in software development. Metrics like Scrum's velocity are irrelevant, and metrics like lines of code or test coverage can be actively harmful, since they create perverse incentives.
So what can we use? In Accelerate, Nicole Forsgren, Jez Humble, and Gene Kim presented stability and throughput as the metrics that correlate with organizational performance.
For example, teams practicing continuous delivery tend to be high-performing teams. This makes sense: they have the systems in place to handle frequent shipping of code.
Stability is tracked by two metrics: change failure rate, the rate at which a change introduces a defect at a particular point in the process, and failure recovery time, how long it takes to recover from a failure at a particular point in the process. Stability matters because it measures the quality of the work done.
Throughput is also tracked by two metrics: lead time, a measure of the efficiency of the development process (how long does a single-line change take to go from idea → working software?), and deployment frequency, a measure of speed (how often are changes deployed into production?). Throughput measures your efficiency at delivering ideas in the form of working software.
If stability is about the quality of your work, this is about how fast you can produce and ship work of that quality.
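To make the four measures concrete, here is a minimal sketch that computes them from hypothetical deployment records. The record format, field layout, and the sample values are my own invention, not from the book:

```python
from datetime import datetime, timedelta

# Hypothetical deployment records: (deployed_at, caused_failure, recovery_time)
deploys = [
    (datetime(2024, 1, 1, 9), False, None),
    (datetime(2024, 1, 1, 14), True, timedelta(hours=2)),
    (datetime(2024, 1, 2, 10), False, None),
    (datetime(2024, 1, 2, 16), False, None),
]

failures = [d for d in deploys if d[1]]

# Stability: how often changes break, and how long recovery takes.
change_failure_rate = len(failures) / len(deploys)
mean_recovery = sum((d[2] for d in failures), timedelta()) / len(failures)

# Throughput: how often changes reach production.
span_days = (deploys[-1][0] - deploys[0][0]).days + 1
deploy_frequency = len(deploys) / span_days

print(f"change failure rate: {change_failure_rate:.0%}")  # → 25%
print(f"mean time to recovery: {mean_recovery}")          # → 2:00:00
print(f"deploys per day: {deploy_frequency:.1f}")         # → 2.0
```

Lead time would need a second timestamp per change (when the idea was committed to), so it is omitted here.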
You can have both. And when you do, you make more money.
When deciding whether to keep a practice, ask whether it
- helps increase the quality of the software you create
- helps increase the efficiency with which you create software of that quality
If it negatively impacts either, discard it. Otherwise, keep it or not depending on its contribution.
Experts at learning
We need to become experts at learning. And science is the best problem solving technique known to man, so we will want to use it. Even better, we will tailor it to our needs.
There are five linked behaviors in this category:
- Working iteratively
- Employing fast, high-quality feedback
- Working incrementally
- Being experimental
- Being empirical
Iteration is one of the ways we can optimize for learning, especially when coupled with gathering feedback.
It's like hill climbing: as long as you have a measure of whether each step makes things better or worse, you can iterate your way toward the goal.
A practical way to work iteratively is to reduce batch size. This also limits the time horizon over which our assumptions need to hold.
Working in sprints is just one way of working iteratively. TDD and CI are others.
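The hill-climbing analogy can be sketched in a few lines: take a small step, keep it only if the feedback says it improved things, stop when no step helps. The `score` function here is a made-up example standing in for whatever measure of "better" you have:

```python
# Hill climbing: iterate in small steps, keeping only steps that the
# feedback (the score) says are improvements.
def score(x: float) -> float:
    return -(x - 3.0) ** 2  # made-up fitness landscape, peak at x = 3

def hill_climb(x: float, step: float = 0.1, iters: int = 100) -> float:
    for _ in range(iters):
        best = x
        for candidate in (x - step, x + step):
            if score(candidate) > score(best):
                best = candidate
        if best == x:  # no neighbor is better: we're at a (local) peak
            break
        x = best
    return x

print(round(hill_climb(0.0), 1))  # → 3.0
```

The catch, as with all hill climbing, is local maxima; small batches and fast feedback tell you a step was better, not that the hill you're on is the right one.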
Work incrementally. Recall [[Gall's Law]]: systems don't emerge fully formed; they emerge incrementally, built brick by brick.
Don’t think you can design the whole system in one go. It isn’t practical.
Pay attention to feedback. It's how you learn. Create fast [[Feedback Loop|Feedback Loops]].
Feedback allows us to establish a source of evidence for our decisions. Once we have such a source, the quality of our decisions is, inevitably, improved. It allows us to begin to separate myth from reality.
Prefer early feedback. When developing, the type system, LSP, or compiler will probably give the fastest feedback. Then local tests (around what you're working on: the function, class, etc.). Then the full unit test suite. Then integration tests. Then acceptance tests… and so on.
All this is pretty fast. You want fast feedback.
We don't know whether the products we create are actually useful until we get feedback.
Telemetry is a good feedback mechanism. The data gathered can even be more valuable than the service offered, since it gives insights into customer behavior and desires, even ones the customer isn't aware of. But IMO there's a fine line between data for feedback and unethical spying. Be careful not to cross it. Maybe the line isn't even that fine…
Being experimental helps you know when to quit on an idea. Find a way to try it with minimal cost, so you can validate it.
Make sure you control the variables as much as possible during experiments.
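One small, everyday way to control the variables is to pin down everything except the thing you're testing. In the sketch below (all names and data are made up), the input workload is held constant by seeding the random generator, so any difference between the two candidates is attributable to the candidates themselves:

```python
import random

# Controlled variable: the input data, pinned by a fixed seed so every
# run of the experiment sees identical inputs.
def make_workload(seed: int = 42) -> list[int]:
    rng = random.Random(seed)  # same seed → identical data every run
    return [rng.randint(0, 1000) for _ in range(1000)]

# Variable under test: two alternative implementations (made-up examples).
def candidate_a(data: list[int]) -> list[int]:
    return sorted(data)

def candidate_b(data: list[int]) -> list[int]:
    out = data[:]
    out.sort()
    return out

# Both candidates see the exact same workload, so any difference in
# behavior comes from the candidate, not from the data.
assert candidate_a(make_workload()) == candidate_b(make_workload())
```

The same idea applies to clocks, environment, and dependency versions: fix them, or the experiment tells you nothing.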
Empiricism is scientific thinking: emphasizing the results of experimentation and using them in your reasoning.
Empiricism is essential to progress.
Use the scientific method for problem solving. Test your assumptions. Test your guesses. It's not enough to just formulate a hypothesis and test it; ask whether it's even the right hypothesis in the first place.
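A unit test is the scientific method in miniature: a hypothesis about behavior, a controlled setup, and an observation. A small sketch with a made-up function under test:

```python
# Function under test (a made-up example).
def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

# Hypothesis: a 20% discount on 50.0 yields 40.0.
assert apply_discount(50.0, 20.0) == 40.0

# Also test the assumption behind the hypothesis: invalid input is rejected.
try:
    apply_discount(50.0, 150.0)
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError for out-of-range percent")
```

The second check is the "is it even the right hypothesis?" step: a passing happy-path test says nothing if the function silently accepts nonsense input.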
Experts at managing complexity
To manage complexity, we must divide up the problems such that we can reason about them, without getting lost.
There are many variables we can use to draw those lines: the problem you're solving, the tech you're using, and even how smart you are. Yes, that matters; we often overestimate our ability to solve problems in code.
It’s best to assume your ideas are wrong and work from that assumption. Fun story: something related happened to me just today. I was resolving an issue a user had filed, asking me to add universal links to podcast episodes. There was no external API, but I pieced a few things together and got part of the way: I could get the podcast page, since they used the iTunes IDs for that. However, I couldn’t figure out their IDs for episodes, so I chalked it up to a missing external API and was just about to close the issue on those grounds. But when I tried to explain my reasoning, I ended up going so deep that I found a solution, while trying to prove there was no good one. It turned out their internal API was “exposed” and I could freely use it for exactly this purpose, albeit with a hacky implementation to get it right.
In the category of becoming experts at managing complexity, there are five ideas:
- Modularity
- Cohesion
- Separation of concerns
- Information hiding / abstraction
- Managing coupling
The author believes that by using Test-Driven Development (TDD), you can achieve these qualities in your code. If you don’t test, achieving them is left to the skill and experience of the individual developer.
He also makes the case that if our tests are easy to write, our code is of good quality, and if they’re difficult to write, it is of poor quality. That’s obviously not the same as saying TDD = good design.
To get the benefits of high-performance, feedback-driven development with CI, you have to take the testability and deployability of your system seriously. Example goal: be able to deploy every hour, i.e. to create “releasable software” every hour.
Why should our code be modular? To manage complexity, so we can maintain an overview and understand the system we’re working on.
To do that, our systems should be modular, such that we can focus on a smaller part, without worrying about all other parts at the same time.
It’s like dividing and conquering the problem.
Kent Beck quote:
> Pull the things that are unrelated further apart, and put the things that are related closer together.
And this is where cohesion comes in. It’s the degree to which things that are together in a module actually belong together.
Separation of concerns
This is one of the most powerful principles.
A function does one thing.
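A tiny sketch of what separating concerns buys you. The function names and the pricing example are made up; the point is that once computation and presentation are pulled apart, each can be tested and changed independently:

```python
# Before: one function mixes two concerns, computing a total and
# formatting it for display.
def report_total_mixed(prices: list[int]) -> str:
    total = sum(prices)
    return f"Total: {total} cents"

# After: each function does one thing. The calculation can now be
# tested, reused, and changed without touching the presentation.
def total_cents(prices: list[int]) -> int:
    return sum(prices)

def format_total(total: int) -> str:
    return f"Total: {total} cents"

assert format_total(total_cents([100, 250])) == report_total_mixed([100, 250])
```

Switching the display to another currency, or the total to include tax, now touches exactly one function each.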
Beware leaky abstractions, as defined by Joel Spolsky: "an abstraction that leaks details that it is supposed to abstract away." His Law of Leaky Abstractions: "All non-trivial abstractions, to some degree, are leaky."
How much abstraction? Enough such that you can change your mind later.
You can’t eliminate coupling entirely; otherwise your system’s components couldn’t communicate. But you can manage it.
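One common way to manage coupling is to depend on a small abstraction instead of a concrete implementation. A minimal sketch, with all class and method names invented for illustration:

```python
from typing import Protocol

# The service depends only on this narrow interface, not on any
# concrete sender, so the two sides can change (or be tested) independently.
class MessageSender(Protocol):
    def send(self, to: str, body: str) -> None: ...

class Greeter:
    def __init__(self, sender: MessageSender) -> None:
        self.sender = sender  # coupled only to the interface

    def greet(self, name: str) -> None:
        self.sender.send(name, f"Hello, {name}!")

# A test double satisfies the same interface; no real email/SMS needed.
class FakeSender:
    def __init__(self) -> None:
        self.sent: list[tuple[str, str]] = []

    def send(self, to: str, body: str) -> None:
        self.sent.append((to, body))

fake = FakeSender()
Greeter(fake).greet("Ada")
assert fake.sent == [("Ada", "Hello, Ada!")]
```

The coupling still exists (the `send` signature is a shared contract), but it is narrow, explicit, and under your control.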
Design over tools
We tend to undervalue the importance of good design and focus on trivial matters instead, like fighting Emacs vs. Vim wars. The reality is that it doesn’t really matter. Or rather: it depends.
Focusing on the fundamentals, good design principles, matters a great deal more than picking the right framework (e.g., Next vs. Remix).
Levels.fyi built their massive business with Google Sheets as their backend. You’ll be fine using Postgres.
If your organization needs a “hero”, a particularly skilled individual, you have a problem. You need everyone to be capable of fixing things, not to rely on such a hero. Would you pass the bus test? (If the hero gets hit by a bus, are you screwed?)
The hero shouldn’t be in firefighter mode, but rather actively work with others and share information to make the system more understandable.
See The Phoenix Project’s Brent Geller for reference.