Solution-level Thinking is Killing Us
How to drown yourself in spaghetti
It’s cheaper, faster, and easier than ever to produce solutions to various problems. Tell Claude what you want, and it is done; we have entered an era of abundance for software. The economics of this post-scarcity world[1] are enthusiasm- and anxiety-inducing in equal measure. But there’s a problem, a particular pet peeve of mine, that remains - and indeed, seems to be getting worse as we accelerate our solution delivery to previously unimagined levels.
Let’s have a look at what’s going on.
The Two Levels of Activity
In one of my first articles in this Substack, I wrote about the “Small Problem” and the “Big Problem”. The idea, while extremely simple, is the basis of my approach to everything in this business: we need to figure out how to solve these two problems at the same time.
Small Problem: how do we ensure that an individual project or solution delivers as impactful results as possible, as efficiently as possible?
Big Problem: how do we ensure that throughout the organization, all projects or solutions consistently deliver as impactful results as possible, as efficiently as possible?
“Small” and “Big” do not refer to importance, but scope: the “Small Problem” is a project- or solution-level question, whereas the “Big Problem” is an enterprise-level question.
Solving the Small Problem means we deliver value in a particular business case[2]. Solving the Big Problem means we create circumstances under which we can keep delivering value consistently, throughout the organization[3].
The thing is, I’m frankly not seeing as much effort on the latter as there should be, and it worries me. We are building siloes - and now on an unprecedented scale.
Solution-level Myopia
Let’s be brutally honest for a moment. Here’s what your average data organization is doing right now:
Taking a ticket from an endless queue
Designing a solution that answers the request in the ticket
Building pipelines from various data sources into that solution
Moving on to the next ticket
When some effort is spared for “internal development”, i.e. for improving tools and practices, it is generally this ticket-design-pipeline-solution process that gets improved. Move to a cloud platform to make the pipelines run faster! Use AI to produce code faster! Build “semantic layers” so that AI can utilize the solution faster!
New technologies? Vendor innovation? Intended to increase the speed and efficiency of solution delivery, of course. Now your engineers can deliver a solution 40% faster! Now you need 40% fewer engineers for solution delivery!
Solutions, solutions, solutions. The Small Problem.
An example: just a few days ago, I read about an exciting announcement from a technology vendor. A unified way to represent semantics for all kinds of use cases! Now your “Customer Lifetime Value” will always be the same, no matter if you’re querying the data directly, or via an AI agent! But, of course, what soon became clear was that “unified” and “always the same” only apply in the context of a single dataset.
What if - no, when - another team builds another solution that produces data on “Customer Lifetime Value”? Are they benefiting from the work done on the original solution, so that they don’t need to redefine what CLV is and how it is calculated? Are users going to understand the difference between the two solutions, when the other team inevitably redefines CLV?
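To make the divergence concrete, here is a purely hypothetical sketch: two teams each write a perfectly reasonable CLV calculation inside their own solution, using formulas and inputs of my own invention. Neither is wrong on its own terms; together they guarantee that “Customer Lifetime Value” means two different numbers in the same organization.

```python
# Hypothetical illustration (invented formulas and values): two teams
# independently define "Customer Lifetime Value" inside their own silo.

# Team A's marketing dashboard: CLV = average order value
# * orders per year * expected customer lifespan in years.
def clv_team_a(avg_order_value: float, orders_per_year: float,
               lifespan_years: float) -> float:
    return avg_order_value * orders_per_year * lifespan_years

# Team B's churn model: CLV = margin-adjusted annual revenue
# divided by churn rate (a simple retention-based formula).
def clv_team_b(annual_revenue: float, margin: float,
               churn_rate: float) -> float:
    return annual_revenue * margin / churn_rate

# The same customer, two official-looking answers:
print(clv_team_a(100.0, 4.0, 3.0))   # 1200.0
print(clv_team_b(400.0, 0.3, 0.2))   # 600.0
```

A “unified” semantic definition inside either solution does nothing to reconcile these two; that reconciliation is exactly the enterprise-level work that keeps getting skipped.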
Silo Spaghetti and Its Consequences
Building an individual solution fast means that a certain amount of value is delivered fast. Building a ton of solutions fast, with little to no visibility over the big picture, means that the organization as a whole is going to drown in spaghetti architecture.
Spaghetti architecture means endless point-to-point integrations. The same data is taken from the same system to a bunch of solutions. The same transformations and combinatory logic are applied multiple times in different places. When something needs to be changed (due to a change in real-life business logic, for example), the same change needs to be applied several times by separate teams, not all of whom will even be informed of the need until later, by chance.
We all know building solutions in siloes is inefficient. Duplicate effort, duplicate running costs, duplicate maintenance, and eventually complete chaos from which the data consumer finds nothing and understands even less. From the organization’s perspective, this result is clearly suboptimal. Yet many organizations continue building their data & AI solutions in siloes. Why?
The answer is simple: from an individual solution’s perspective, this is considered optimal. Any interference from the enterprise level (such as “governance”!) is just slowing us down. Any non-coding work (such as “documentation” or “data modeling”!) should be avoided. After all, the business needs that dashboard yesterday[4]! So grab those raw tables and start building your pipeline.
When teams chase project-level local optima - i.e. solve just the Small Problem - the result at the organization level is blatantly suboptimal. The Big Problem is not being solved.
“Ah but Juha”, you now say, “we no longer have that problem! We do all our development in our Fancy New Cloud Data Platform, where all the teams have a common playbook! No siloes - everything important is on the FNCDP!”
The Spaghetti Platform
Sure, you can set up a Fancy New Cloud Data Platform. Sure, you can turn it into an all-singing all-dancing modern toolbox of impressive data technologies. Sure, you can set up technical templates and examples, and ensure all your teams follow the same practices.
The problem has long since ceased to be the availability or capability of technology.
The problem is that inside that Fancy New Cloud Data Platform, everyone is still building siloes and point-to-point integrations.
It seems to me that the average data platform has a super cool technical architecture, but almost no logical architecture at all. “You can use this tool in that layer”, the diagram says, and the teams go and use this tool in that layer and proceed to locally optimize their solution development so that yet another silo is built. The diagram doesn’t say anything about what to build, or how to connect your work with that of others, or how to reuse anything[5].
The average data platform seems to be mostly a technological scaffolding, a big container in which everyone can dump their spaghetti. You get a set of tools, a slice of storage, and off you go with your own projects! The overall situation in terms of duplicated work, lack of reusability, or overall chaos hasn’t really changed, but at least we now have cool names for different parts of the noodles.
And, of course, after five years, as the complexity of the spaghetti platform finally reaches a threshold where any new development is practically impossible, the IT department decides it’s time for a re-platforming project. You see, there’s this cool new thing all the LinkedIn influencers are talking about called “ramen”, and it comes in three flavours: shoyu, shio, and miso! With the ultra-modern SoupLake technology, it’ll only cost us one or two million bucks to get our miso layer up and running!
I’m not saying everyone’s data platform is like this, but I am saying far too many are.
(also, I have a few more things to say about Medallion, but I’ll save those for another article…)
Fighting Back!
The sorry state of data architecture in many organizations is not the fault of individual engineers. We need people obsessively focused on solution-level value delivery. The Small Problem needs to be solved; otherwise, success is impossible.
Excessive local optimization, to the detriment of the organization as a whole, is a leadership problem. It’s the leadership that needs to see both the Small and the Big Problem, and to figure out how to keep delivering while maintaining overall consistency and order.
This balancing act is not easy by any means. The competences required to design and execute consistent architecture while at the same time allowing for lightspeed innovation and constant delivery are rare and expensive. Vendor promises and 100x engineer influencer stories (“I developed 98 apps last month!”) affect our thinking, even when we consciously try to be careful and media-literate, and they can especially affect our bosses’ thinking!
It can also be a question of organizational courage. In an organization hell-bent on maximizing output[6], it requires some guts to tell people we should hold back and sort out the spaghetti before it gets out of hand.
Many of us know all of this, of course. There’s nothing new in the term “spaghetti architecture”, there’s nothing new in trying to avoid siloes, there’s nothing new in the resulting mess that we “fix” with a re-platforming project. It’s just that far too often, we accept the sorry state of affairs as a fait accompli. “Sure, we should do all that, but…” is a comment that keeps popping up.
The question is, do you really want to accept this?
My advice is this:
Do not go gentle into that spaghetti.
Rage, rage against the dying of architecture.
More thoughts on how to fight back against the wave of spaghetti will follow later. Subscribe to be notified when those thoughts appear; until then - cheerio!
1. Post-scarcity in terms of software, that is. It sometimes feels that we techies hardly consider the parts of the world that are not expressed in code, and we’re very quick to proclaim the end of Work As We Know It when what we really mean is the work that we know. Sure, AI will change the world; but doctors, lawyers, and lumberjacks existed both before and after the dot-com boom. It’s the nature of their work that will change, for sure.
2. At least, hopefully we deliver value. It’s a long-standing problem of this business that we deliver a lot of solutions but tend to have very little understanding of what (if any) value was created.
3. And note that I’m consciously defining the Big Problem in a way that irrevocably links it to project-level delivery. Enterprise-level work is worthless on its own. Everything we do on the enterprise level must be aimed at having a direct impact on the project level. If something doesn’t, it should be mercilessly shut down!
4. Though we don’t often know why they need it, or whether it should actually be a dashboard at all.
5. The only truly reused part seems to be raw data. Most (if not all) organizations seem to realize that it’s generally a bad idea to fetch the same set of data from the same system twice. However, after that it’s the Wild West.
6. Maximizing output over outcome, as is so often the case.