
What a 1999 Software Book Gets Right About Modern Analytics
I have been raiding other engineering disciplines for ideas that analytics has been too slow to borrow.
The Pragmatic Programmer (Hunt and Thomas, 1999) has been a great find. It's a book about writing better software. But reading it felt like someone had written a decoder ring for every analytics setup I have ever walked into.
Here are the principles worth stealing.
Broken Windows. Sound familiar?
The book opens with a criminology concept: the Broken Window Theory. One unrepaired window signals abandonment. Then more break. Graffiti appears. Structural damage begins. Before long, the building is beyond saving.
The authors use it to describe software rot. I think it describes analytics rot almost perfectly.
It starts with one metric that's slightly off — maybe an old dashboard no one owns, a column that got renamed but the query wasn't updated, a report that diverges from the one finance uses by "just a rounding thing." Left unaddressed, it signals to everyone on the team that this is the standard here. Then someone builds a workaround. Then someone else builds a workaround for the workaround. Shadow data appears. Teams stop trusting the numbers. By the time leadership notices, the appetite to fix it has already gone.
This is a pattern I see repeatedly: analytics starts strong, slows down, breeds conflicting metrics, and loses trust. It almost always traces back to a few broken windows that nobody repaired.
The fix isn't a data warehouse migration. It's caring about the small stuff: ownership of data assets, validating outputs, naming things consistently. Culture before tooling.
Don't Repeat Yourself, and Don't Couple Everything Together
Orthogonality is the idea that components should be independent. Change in one shouldn't ripple into another. The book puts it simply: you can change the interface without affecting the database, and swap databases without changing the interface. A closely related principle is DRY: Don't Repeat Yourself. Every piece of knowledge must have a single, unambiguous, authoritative representation within a system.
Analytics architectures violate both principles constantly.
In reactive analytics, the pattern is: a stakeholder makes a request, an analyst builds a dataset for it, that dataset feeds one dashboard, and the whole thing lives and dies together. There is no defined interface between the data layer and the presentation layer. Business logic leaks into the dashboard tool. Transformation logic bleeds into ingestion. Renaming a field in a source system breaks thirty reports. Do it ten times and you have ten datasets, ten pipelines, ten things to maintain, many of them computing the same underlying logic slightly differently. Nobody planned for this. It just accumulated, one request at a time.
This is maximum coupling. Change anything upstream and you are chasing breakage across assets that were never designed to share anything.
The fix is a portfolio of reusable data assets built around shared semantic definitions. Revenue is defined once, in one place, and every dashboard that needs revenue draws from it. That single decision eliminates an entire class of problems: conflicting numbers, redundant pipelines, and the quiet drift that happens when the same concept gets recalculated twelve different ways by twelve different people.
Orthogonality and DRY reinforce each other here. You can't have five definitions of the same metric if five dashboards are pulling from the same asset. The architecture enforces the discipline.
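The "defined once, drawn from everywhere" idea can be sketched in a few lines. This is a minimal illustration, not a real semantic layer: the field names and the revenue rule (completed orders, net of refunds) are hypothetical stand-ins.

```python
# One authoritative definition of revenue, shared by every consumer.
from collections import defaultdict

def monthly_revenue(orders: list[dict]) -> dict[str, float]:
    """The single, authoritative revenue definition:
    completed orders, net of refunds, summed per month."""
    totals: dict[str, float] = defaultdict(float)
    for o in orders:
        if o["status"] != "completed":
            continue
        month = o["order_date"][:7]  # "YYYY-MM"
        totals[month] += o["gross"] - o["refund"]
    return dict(totals)

orders = [
    {"status": "completed", "gross": 100.0, "refund": 0.0,  "order_date": "2025-01-03"},
    {"status": "completed", "gross": 250.0, "refund": 50.0, "order_date": "2025-01-20"},
    {"status": "cancelled", "gross": 80.0,  "refund": 0.0,  "order_date": "2025-02-01"},
]

# Two dashboards, one source of truth: they agree by construction.
finance_view = monthly_revenue(orders)
product_view = monthly_revenue(orders)
```

The design point is that neither dashboard contains the revenue rule; they only contain a call to it. Changing the definition is a one-place edit.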
The goal isn't fewer dashboards — it's fewer sources of truth.
Tracer Bullets vs. The Big Dashboard Launch
This is one of my favourite ideas in the book, and one of the most misunderstood in practice.
Tracer bullets are rounds loaded into an ammunition belt that leave a glowing trail when fired. They let gunners see exactly where their shots are landing and adjust in real time — instead of calculating the perfect trajectory and firing blind.
The authors contrast tracer code with the traditional engineering approach: divide everything into modules, build in isolation, combine at the end, and only then show the user. Sound familiar? It sounds exactly like how most dashboards get built.
The tracer code approach is different. Build something thin but end-to-end. Get it in front of users early. Let them tell you how close to the target you are. Adjust. Iterate.
The key distinction the book makes is worth repeating: prototyping generates disposable code; tracer code is lean but complete. It's not a rough mock-up you will throw away. It's the skeleton of the real thing. Every iteration makes it more complete.
In analytics terms: don't spend three months perfecting a data model before anyone has seen a single number. Get a working end-to-end view in front of stakeholders. Validate the logic. Adjust the definition. Then build it properly. The longer you wait to show something, the more expensive the corrections become.
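A tracer-bullet pipeline can be sketched as three stubbed stages that are already wired end to end. Everything here is an illustrative stand-in (fixture data, a toy metric, plain-text "rendering"); the point is the shape: each stage gets deepened later without replacing the skeleton.

```python
# Thin but complete: extract -> transform -> render, all connected.

def extract() -> list[dict]:
    # Stub: later this reads the warehouse; today it returns fixtures.
    return [{"region": "EMEA", "amount": 120.0},
            {"region": "AMER", "amount": 300.0}]

def transform(rows: list[dict]) -> dict[str, float]:
    # Stub metric logic: later validated with stakeholders.
    totals: dict[str, float] = {}
    for r in rows:
        totals[r["region"]] = totals.get(r["region"], 0.0) + r["amount"]
    return totals

def render(totals: dict[str, float]) -> str:
    # Stub presentation: later a real dashboard; today something
    # a stakeholder can already react to.
    return "\n".join(f"{k}: {v:.2f}" for k, v in sorted(totals.items()))

report = render(transform(extract()))
print(report)
```

Unlike a prototype, none of this is throwaway: the function boundaries survive as each stage is replaced with the real thing.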
Design by Contract: Anchor Every Analysis to a Decision
The book introduces Design by Contract — the idea that every function should have clear preconditions (what must be true before it runs), postconditions (what it guarantees when it's done), and invariants (what must remain true throughout).
In analytics, we skip contracts all the time. A stakeholder asks for a report. An analyst builds a report. Nobody asked what decision it's supposed to support.
The report gets built. The stakeholder looks at it, says "great, thanks," and files it away. The analyst moves on to the next request. No one ever found out whether it changed anything.
Requirement Engineering is essentially Design by Contract for analytics work. Before you touch an SQL query, you should be able to answer: what question does this analysis answer? What decision does that question support? What would a good or bad answer mean in practice?
That's the precondition. The postcondition is a validated output that someone with domain knowledge has confirmed is correct. The invariant is that these things stay true over time — that the asset is owned, maintained, and doesn't quietly drift out of date.
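In code, the contract half of this maps directly onto assertions around a pipeline step. A minimal sketch, assuming a made-up conversion metric; the field names and checks are illustrative, not a standard API.

```python
# Design by Contract on an analytics function: preconditions on the
# input, a postcondition on the output.

def conversion_rate(events: list[dict]) -> float:
    # Preconditions: the input exists and carries the fields we rely on.
    assert events, "precondition: events must not be empty"
    assert all({"user_id", "converted"} <= e.keys() for e in events), \
        "precondition: each event needs user_id and converted"

    users = {e["user_id"] for e in events}
    converted = {e["user_id"] for e in events if e["converted"]}
    rate = len(converted) / len(users)

    # Postcondition: the output is a valid rate.
    assert 0.0 <= rate <= 1.0, "postcondition: rate must be in [0, 1]"
    return rate

events = [
    {"user_id": 1, "converted": True},
    {"user_id": 1, "converted": False},
    {"user_id": 2, "converted": False},
]
rate = conversion_rate(events)  # 0.5: user 1 converted, user 2 did not
```

A contract violation fails loudly at the asset boundary instead of surfacing weeks later as a quietly wrong dashboard.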
Analytics without contracts is just busy work dressed up as insight.
The Knowledge Portfolio: Keeping Analytics Teams Sharp
This one speaks directly to something I care about a lot — the people side of analytics.
The authors compare a programmer's skills to a financial portfolio. Invest regularly, even in small amounts. Diversify — the technology that's hot today may be irrelevant in three years. Manage risk. Buy low, sell high (learn emerging tools before they're mainstream). Review and rebalance periodically.
Analytics teams fail this in both directions. On one side, leaders who don't invest in skill development, expecting the same team to do the same work with the same tools indefinitely. On the other, teams that chase every new technology without building depth in anything.
The slow death I see most often isn't burnout — it's bore-out. Vanity projects, repetitive run work, invisible contributions. A team that is only asked to pull data and format slides is not developing a knowledge portfolio. They are depleting one.
The most motivated, highest-performing analytics teams I have worked with have three things in common: interesting problems that require genuine skill, visibility into the impact of their work, and regular space to learn.
Great Expectations: The Number Isn't Enough
The book ends with a chapter that surprised me. It's not about code. It's about expectations.
In an abstract sense, an application is successful if it correctly implements its specifications. Unfortunately, this pays only abstract bills.
This is analytics in miniature.
You can build a technically correct, beautifully engineered dashboard and have it ignored, or worse, distrusted, because it didn't match what the stakeholder imagined. The metric is right. The answer is right. But the expectation was never managed.
The most successful analytics teams I know don't just answer questions — they help stakeholders ask better ones. They manage expectations early, communicate clearly about what an analysis will and won't show, deliver on what they commit to, and occasionally surprise people with something genuinely useful that wasn't asked for.
Parting Thought
I didn't expect a 25-year-old software engineering book to be this relevant to analytics in 2025. And yet here we are, because the core challenge hasn't changed: how do you build something that delivers real value, earns trust, and holds up over time?
The patterns Hunt and Thomas describe (rot from neglect, duplication, undefined interfaces, work disconnected from business decisions, teams depleting rather than building capability) are not exclusively software problems. They are the problems of building any technical system that has to stay useful over time.
That is precisely the territory I cover in The Analytics Operating System, my book proposing a framework for moving analytics from a reactive service to a durable institutional capability.
Join The Simplicity Stack
The unactionable newsletter. For people tired of doing everything.