There’s a peculiar asymmetry in how humans handle their own incompetence. It reveals itself most starkly when you compare two scenarios: a cancer patient undergoing chemotherapy, and a project manager pushing delivery dates on a complex technology initiative.
Both involve life altering stakes. Both require deep expertise the decision maker doesn’t possess. Yet in one case, we defer completely. In the other, we somehow feel qualified to drive.
The Chemotherapy Paradox
First, let’s be clear: incompetence is contextual. Very few people could honestly declare themselves “universally incompetent”. What does this mean? Just because you have little or no technology competence doesn’t mean you are without merit or purpose. The trick is to tie your competencies to the work you are involved in.
When someone receives a cancer diagnosis, something remarkable happens to their ego. They sit across from an oncologist who explains a treatment protocol involving cytotoxic agents that will poison their body in carefully calibrated doses. Their hair will fall out. They’ll experience chronic nausea. Their immune system will crater. The treatment itself might kill them. And they say: “Okay. When do we start?”
This isn’t weakness. It’s wisdom born of acknowledged ignorance. The patient knows they don’t understand the pharmacokinetics of cisplatin or the mechanisms of programmed cell death. They can’t evaluate whether the proposed regimen optimises for tumour response versus quality of life. They lack the fourteen years of training required to even parse the relevant literature.
So they yield. Completely. They ask questions to understand, not to challenge. They follow instructions precisely. They don’t suggest alternative dosing schedules based on something they read online.
This is how humans behave when they genuinely know they don’t know.
The Technology Incompetence Paradox
Now consider the enterprise technology project. A complex migration, perhaps, or a new trading platform. The stakes are high: reputational damage, regulatory exposure, hundreds of millions in potential losses.
The project manager or business sponsor sits across from a principal engineer who explains the technical approach. The engineer describes the challenges: distributed consensus problems, data consistency guarantees, failure mode analysis, performance characteristics under load.
The manager’s eyes glaze slightly. If pressed, they’ll readily admit: “I’m not technical.”
And then, in the very next breath, they’ll ask: “But surely it can’t be that hard? Can’t we just…?”
This is the incompetence paradox in its purest form. The same person who just acknowledged they don’t understand the domain immediately proceeds to:
- Push for aggressive delivery dates
- Propose “simple” solutions
- Question engineering estimates
- Mandate shortcuts they can’t evaluate
- Drive decisions they’re fundamentally unqualified to make
- Commit ship dates to senior business heads without any engineering validation
In the chemotherapy scenario, acknowledged incompetence produces deference. In the technology scenario, it somehow produces confidence.
Why the Difference?
Several factors drive this asymmetry, and none of them are flattering.
Visibility of consequences. The cancer patient sees the stakes viscerally. The tumour is in their body. The chemotherapy will make them physically ill. The consequences of getting it wrong are personal and immediate. Technology failures, by contrast, are abstract until they’re not. The distributed system that can’t maintain consistency under partition? That’s someone else’s problem until it becomes a P1 incident at 3am.
Illegibility of expertise. Medicine has successfully constructed barriers to amateur interference. White coats. Incomprehensible terminology. Decades of credentialing. Technology, despite being equally complex, has failed to establish similar deference boundaries. Everyone has an iPhone. Everyone has opinions about software.
The Dunning-Kruger acceleration. A little knowledge is dangerous, and technology provides just enough surface familiarity to be catastrophically misleading. The manager has used Jira. They’ve seen a Gantt chart. They once wrote an Excel macro. This creates an illusion of adjacent competence that simply doesn’t exist when facing a PET scan.
Accountability diffusion. When chemotherapy fails, the consequences land on a single body. When a technology project fails, it becomes a distributed systems problem of its own: blame fragments across teams, timelines, “changing requirements,” and “unforeseen complexity.” The manager who pushed impossible dates never personally experiences the 4am production incident.
The Absence of Technical Empathy
What’s really missing in failing technology organisations is technical empathy: the capacity to understand, at a meaningful level, what trade offs are being made and why they matter.
When a doctor says “this treatment has a 30% chance of significant side effects,” the patient grasps that this is a trade off. They may not understand the mechanism, but they understand the structure of the decision: accepting known harm for potential benefit.
When an engineer says “if we skip the integration testing phase, we increase the probability of data corruption in production,” the non technical manager hears noise. They don’t have the context to evaluate severity. They don’t understand what “data corruption” actually means for the business. They certainly can’t weigh it against the abstract pressure of “the date.”
So they default to the only metric they can measure: the schedule.
The Project Management Dysfunction
Consider the role of the typical project manager in a failing technology initiative. Their tools are timelines, status reports and burn down charts. Their currency is dates.
When has a project manager ever walked into a steering committee and said: “We need to slow down. There’s too much risk accumulating in this product. The pace is creating technical debt that will compound into failure.”?
They don’t. They can’t. They lack the technical depth to identify the risk, and their incentive structure punishes such honesty even if they could.
Instead, when the date slips, they “rebaseline.” They “replan.” They produce a new Gantt chart that looks exactly like the old one, shifted right by six weeks.
This is treated as project management. It’s actually just administrative recording of failure in progress.
The phrase “we missed the date and are rebaselining” is presented as neutral status reporting. But it obscures a critical question: why did we miss the date? Was it:
- Scope creep from stakeholders who don’t understand impact?
- Technical debt from previous shortcuts coming due?
- Unrealistic estimates imposed by people unqualified to evaluate them?
- Architectural decisions that traded speed for fragility?
The rebaseline answers none of these questions. It simply moves the failure point further into the future, where it will be larger and more expensive.
The Trade Off Vacuum
Here’s a question that exposes the dysfunction: when did a generic manager last table a meaningful technical trade off?
Not “can we do X faster?” That’s not a trade off. That’s just pressure wearing a question mark.
A real trade off sounds like: “If we reduce the scope of automated testing from 80% coverage to 60% coverage, we save three weeks but increase production defect probability by roughly 40%. Given our risk tolerance and the cost of production incidents, is that a trade we want to make?”
This requires understanding what automated testing actually does. What coverage means. How defect probability correlates with test coverage. What production incidents cost.
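The structure of that trade off can be made concrete with a back-of-the-envelope expected-cost model. Every number below (cost per week, incident cost, defect probabilities) is a hypothetical assumption invented for this sketch, not data from the article:

```python
# Back-of-the-envelope model of the testing trade off described above.
# All figures are hypothetical; plug in your own organisation's numbers.

WEEKS_SAVED = 3
COST_PER_WEEK = 40_000          # fully loaded delivery cost per week (assumed)
BASE_DEFECT_PROB = 0.15         # chance of a serious defect at 80% coverage (assumed)
DEFECT_PROB_INCREASE = 0.40     # relative increase when dropping to 60% coverage
COST_PER_INCIDENT = 500_000     # average cost of a serious production incident (assumed)

saving = WEEKS_SAVED * COST_PER_WEEK
extra_risk = BASE_DEFECT_PROB * DEFECT_PROB_INCREASE
expected_extra_cost = extra_risk * COST_PER_INCIDENT

print(f"Schedule saving:        £{saving:,}")
print(f"Extra defect risk:      {extra_risk:.1%}")
print(f"Expected incident cost: £{expected_extra_cost:,.0f}")
# With these assumptions the shortcut saves £120,000 against an expected
# £30,000 of extra incident cost. The point is not the verdict but that
# the decision becomes arguable at all once it is quantified.
```

The value of a sketch like this is not the arithmetic; it is that it forces every input to be stated, owned, and challengeable.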
Generic managers don’t table these trade offs because they can’t. They lack the technical vocabulary, the domain knowledge, and often the intellectual honesty to engage at this level. Instead, they ask: “Why does testing take so long? Can’t we just test the important bits?”
And engineers, exhausted by years of this, learn to capitulate, obfuscate, or buffer their estimates so grossly that the organisation ends up outsourcing the work to another company, one more than happy to tell favourable lies about timelines. None of this serves the organisation.
Solving Organisational Problems with Technology (And Making Everything Worse)
There’s a particularly insidious failure mode that emerges from this partial knowledge problem: the instinct to solve organisational dysfunction with technology.
The logic seems sound on the surface. The current system is slow. Teams are frustrated. Data is inconsistent. Processes are manual. The obvious answer? A rewrite. A new platform. A transformation programme.
What follows is depressingly predictable.
The rewrite begins with enthusiasm. A new technology stack is selected, often chosen for its novelty rather than its fit. Kubernetes, because containers are modern. A graph database, because someone read an article. Event sourcing, because it sounds sophisticated. Microservices, because monoliths are unfashionable.
Each decision is wrapped in enough management noise to sound credible. Slide decks proliferate. Vendor presentations are scheduled. Architecture review boards convene. The language is confident: “cloud native,” “future proof,” “scalable by design.”
But anyone with genuine technical depth would immediately challenge the rationality of these decisions. Why do we need a graph database for what is fundamentally a relational problem? What operational capability do we have to run a Kubernetes cluster? Who will maintain this event sourcing infrastructure in three years when the contractors have left?
These questions don’t get asked, because the people making the decisions lack the technical vocabulary to even understand them. And the engineers who could ask them have learned that such questions are career limiting.
So the rewrite proceeds. And the organisation gets worse.
I’ve seen this pattern repeatedly. A legacy system, ugly, creaking, but fundamentally functional, is replaced by a modern platform that is architecturally elegant and operationally catastrophic. The new system requires specialists that don’t exist in the organisation. It has failure modes that nobody anticipated. It solves problems that weren’t actually problems while failing to address the issues that drove the rewrite in the first place.
The teams initially invoke a “bedding in period.” The new platform just needs time. People need to adjust. There are teething problems. This is normal.
Months pass. The bedding in period extends. Workarounds accumulate. Shadow spreadsheets emerge. Users quietly route around the new system wherever possible.
Eventually, the inevitable emperor’s robes moment arrives. External specialists are called in, expensive consultants with genuine technical depth, and they deliver the verdict everyone already knew: the new platform is not fit for purpose. The technology choices were inappropriate. The architecture doesn’t match the organisation’s capabilities. The complexity is unjustified.
But by now, tens of millions have been spent. Careers have been built on the transformation. Admitting failure is organisationally impossible. So the platform staggers on, a monument to what happens when partial knowledge drives technology decisions.
The tragedy is that the original problems were often organisational, not technological. The legacy system was slow because processes were broken. Data was inconsistent because ownership was unclear. Teams were frustrated because communication was poor.
No amount of Kubernetes will fix a lack of clear data ownership. No event sourcing architecture will resolve dysfunctional team dynamics. No graph database will compensate for the absence of defined business processes.
But technology feels like action. It appears on roadmaps. It has milestones and deliverables. It can be purchased, installed, and demonstrated. Organisational change is messy, slow, and hard to measure. So we default to technology, and we make everything worse.
The vendors, of course, are delighted to help. They arrive with glossy presentations and reference architectures. They speak with confidence about “digital transformation” and “platform modernisation.” They don’t mention that their incentives are misaligned with yours—they profit from complexity, from licensing, from the ongoing support contracts that complex systems require.
Each unnecessary vendor, each cool but inappropriate technology, each unjustified architectural decision adds another layer of complexity. And complexity is not neutral. It requires expertise to manage. It creates failure modes. It slows everything down. It is, in essence, a tax on every future change.
The partially knowledgeable manager sees a vendor presentation and thinks “this could solve our problems.” The technically competent engineer sees the same presentation and thinks “this would create twelve new problems while solving none of the ones we actually have.”
But the engineer’s voice doesn’t carry. They’re “just technical.” They don’t understand “the business context.” They’re “resistant to change.”
And so the organisation lurches from one technology driven transformation to the next, never addressing the underlying dysfunction, always adding complexity, always wondering why things keep getting worse.
The “Tried and Tested” Fallacy
Here’s where it gets even more frustrating. The non technical leader doesn’t always swing toward shiny new technology. Sometimes they swing to the opposite extreme: “Let’s just use something tried and tested.”
This sounds like wisdom. It sounds like hard won experience tempering youthful enthusiasm. It sounds like the voice of reason.
It’s not. It’s the same dysfunction wearing different clothes.
“Tried and tested” is a lobotomised decision bootstrapped with a meaningless phrase. What does it actually mean? Tried by whom? Tested in what context? Tested against what requirements? Proven suitable for what scale, what failure modes, what operational constraints?
The phrase “tried and tested” is a conversation stopper disguised as a decision. It signals: “We have no appetite for a discussion about technology choices in this technology project.”
Let that sink in. A technology project where the leadership has explicitly opted out of meaningful dialogue about technology choices. This is not conservatism. This is abdication.
The “cool new technology” failure and the “tried and tested” failure are mirror images of the same underlying problem: decisions made without genuine engagement with the technical trade offs.
When someone says “let’s use Kubernetes because it’s modern,” they’re not engaging with whether container orchestration solves any problem you actually have.
When someone says “let’s stick with Oracle because it’s tried and tested,” they’re not engaging with whether a proprietary database at £50,000 per CPU core is justified by your actual consistency and scaling requirements.
Both statements translate to the same thing: “I cannot evaluate this decision on its merits, so I’m using a heuristic that sounds defensible.”
The difference is that “cool technology” gets blamed when projects fail. “Tried and tested” rarely does. If you fail with a boring technology stack, it’s attributed to execution. If you fail with a modern stack, the technology choice itself becomes the scapegoat.
This asymmetry in blame creates a perverse incentive. Non technical leaders learn that “tried and tested” is the career safe choice, regardless of whether it’s the right choice. They’re not optimising for project success. They’re optimising for blame avoidance.
A genuine technology decision process looks nothing like either extreme. It starts with a clear articulation of requirements. It evaluates options against those requirements. It surfaces trade offs explicitly. It makes a choice that the team understands and owns.
“We chose PostgreSQL because our consistency requirements are strict, our scale is moderate, our team has deep expertise, and the operational model fits our on call capacity.”
That’s a decision. “Tried and tested” is not a decision. It’s a refusal to make one while pretending you have.
The Path to Success
The organisations that consistently deliver complex technology successfully share a common characteristic: deep, meaningful dialogue between business stakeholders and engineering teams.
This doesn’t mean business people becoming engineers. It means:
Genuine deference on technical matters. When the engineering team says something is hard, the first response is “help me understand why” rather than “surely it can’t be that hard.”
Trade offs surfaced and owned. When shortcuts are taken, everyone understands what’s being traded. The business explicitly accepts the risk rather than pretending it doesn’t exist.
Subject matter experts in the room. Decisions about architecture, timelines, and scope are made with engineers who understand the implications, not by managers shuffling dates on a chart.
Outcome accountability that includes quality. Project managers measured solely on date adherence will optimise for date adherence, quality be damned. Organisations that include defect rates, production stability, and technical debt in their success metrics get different behaviour.
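One illustrative way to encode that broader definition of success is a weighted delivery scorecard, in which date adherence is only one input among several. The metric names, weights, and normalisation below are invented for the sketch, not a recommendation:

```python
# Hypothetical delivery scorecard: schedule adherence is one dimension
# among several, so optimising the date alone cannot maximise the score.

WEIGHTS = {
    "schedule_adherence": 0.25,   # delivered vs committed dates
    "production_stability": 0.30, # e.g. 1 - (P1 incidents / releases)
    "defect_rate": 0.25,          # e.g. 1 - (escaped defects / stories)
    "tech_debt_trend": 0.20,      # improving = 1.0, flat = 0.5, worsening = 0.0
}

def delivery_score(metrics: dict[str, float]) -> float:
    """Weighted score in [0, 1]; each input is already normalised to [0, 1]."""
    assert set(metrics) == set(WEIGHTS), "score every dimension, or the gaps get gamed"
    return sum(WEIGHTS[k] * metrics[k] for k in WEIGHTS)

# A team that hits every date but ships instability still scores poorly:
on_time_but_fragile = {
    "schedule_adherence": 1.0,
    "production_stability": 0.3,
    "defect_rate": 0.4,
    "tech_debt_trend": 0.0,
}
print(f"{delivery_score(on_time_but_fragile):.2f}")  # 0.44
```

The specific weights matter less than the structural point: if quality dimensions carry real weight, date-only optimisation stops being a winning strategy.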
Permission to slow down. Someone with standing and authority needs the ability to say “stop: we’re accumulating too much risk” and have that statement carry weight.
The Humility Test
There’s a simple test for whether an organisation has healthy technical-business relationships. Ask a senior business stakeholder to explain, in their own words, the three most significant technical risks in the current programme.
Not “the timeline is aggressive.” That’s not a technical risk; that’s a schedule statement.
Actual technical risks: “We’re using eventual consistency which means during a failure scenario, customers might see stale data for up to thirty seconds. We’ve accepted this trade off because strong consistency would add four months to the timeline.”
If they can’t articulate anything at this level of specificity, they’re driving a car they don’t understand. And unlike a rental car, when this one crashes, it takes the whole organisation with it.
Conclusion
The cancer patient accepts chemotherapy because they know they don’t know. They yield to expertise. They follow guidance. They ask questions to understand rather than to challenge.
The technology manager pushes dates because they don’t know they don’t know. Their partial knowledge, enough to be dangerous, not enough to be useful, creates false confidence. They challenge without understanding. They drive without seeing the road.
The solution isn’t to make every business stakeholder into an engineer. It’s to cultivate the same humility that the cancer patient naturally possesses: a genuine acceptance that some domains require deference, that expertise matters, and that acknowledging your own incompetence is the first step toward not letting it kill the patient.
In this case, the patient is your programme. And the chemotherapy, the painful, slow, disciplined process of building quality software, is the only treatment that works.
Rebaselining isn’t treatment. It’s just rescheduling the funeral. There is no substitute for meaningful discussion. Replanning is just regenerating a stuck thought over a different timeline.
This is awesome.
The relatability is tangible for me. One thing I did note is that, as a C-level, you have the unique ability to invoke change or shift direction based on your opinion, perform A/B testing live, and thus strengthen your process validation cycle. Say there are two levels above me, and those individuals are relatable to some of the personas you mention, with the same sort of inefficiencies. What is one to do to invoke the required change, knowing that the patient will die if nothing is done?
There is no solace for me in the fact that the dying patient or sinking ship just gets kicked down the road like the proverbial can. So I end up putting in an exhaustive amount of time to arrive at the inevitable “argument” with my ducks ready and loaded: data-backed evidence, proof of actual testing, industry best-practice references in wide, sprawling Confluence docs, AWS senior solution architect reports and designs backing up my proposed “decision”… The list goes on. Yet the removal of my influence seems trivial, and the leaders continue with whatever they were doing beforehand, in what I call a decision-led business (a business whereby leaders derive value simply by making decisions). Do I pack up shop and try to sell my wares elsewhere? Do I make a stand and risk losing my cool, and subsequently my credibility as a leader? Do I accept that this is simply how the game works and conform in order to survive it? Or do I make my case, watch the ship burn as it has so many times before, and keep the lesson for myself and my journey?
As you conveyed, speaking sense seems to be punished. If words of sense don’t conform to the suggested carrot, and the carrot gets used for the most cardinal of carrot sins, punishment, what is there to do?
Andrew, as a CTO of a Reseller and Implementation partner, your article describes the “Incompetence Asymmetry” from perhaps the most dangerous vantage point of all.
While a Vendor CTO worries about their product’s reputation, and an Internal CTO worries about their organization’s stability, the Implementation CTO is the one caught in the “pincer movement” between a vendor who overpromises and a client who underestimates.
Here is my perspective from the lens of the “Professional Services” leader:
1. The “Middleman” Liability (The Proxy for Accountability)
Andrew, you note that in tech, blame fragments. For a reseller, however, blame often concentrates on us. We didn’t build the software (the vendor did), and we didn’t define the business need (the client did), but we are the “doctor” in the room when the surgery goes wrong.
The Implementation View: We often inherit the “Deference Delusion” from both sides. The vendor tells the client the tool is “plug and play” (visibility of consequences: low). The client then expects us to implement it in weeks, not months.
The Strategy: I have to act as the Deference Enforcer. My role is to bridge the “Illegibility of Expertise” by showing the client the literal “surgical plan”. We move away from “feature lists” and toward “dependency maps”. We make it clear that if the client doesn’t provide the “organs” (data quality, API access), the “body” (the project) will reject the transplant.
2. The “Box-Shifting” vs. “Clinical” Conflict
The article speaks about the “Dunning-Kruger acceleration”—where a manager who used Excel thinks they can manage a cloud migration. In the reseller world, this happens at the Sales level.
The Implementation View: Sales teams (both ours and the vendor’s) are often the ones “shuffling dates on a chart” without engineering in the room. This creates an Integrity Debt. By the time my implementation engineers arrive, the “patient” has already been promised a miracle cure with no side effects.
The Strategy: I must mandate Technical Pre-Sales. No contract is signed until an architect—someone who actually understands the “failure modes” you mention—has vetted the Statement of Work (SOW). We have to be willing to lose a sale to save our reputation.
3. The “Modernization” Trap (Elegant but Catastrophic)
Your closing point about replacing a “creaking but functional” legacy system with an “architecturally elegant but operationally catastrophic” one is the Reseller’s nightmare.
The Implementation View: We are often paid to be the “agents of change”. If we implement a complex Kubernetes-based microservices architecture for a client that only has the skills to manage a single Windows server, we have performed a successful “surgery” that will eventually kill the patient.
The Strategy: We must assess Operational Maturity, not just technical requirements. As a CTO, I push my team to ask: “Who is going to run this at 3 AM?” If the answer is “no one,” then the elegant solution is the wrong solution. Our job is to provide the “permission to slow down” or even the permission to stay “ugly and functional” if the client isn’t ready for the “chemotherapy” of modern tech.
4. Visibility of Consequences (The “Sunk Cost” Pressure)
The article mentions that technology failures are abstract until they aren’t. In implementations, the moment they become visceral is during UAT (User Acceptance Testing).
The Implementation View: By UAT, the budget is 90% spent, and the “delivery date” is tomorrow. This is where the “Incompetence Asymmetry” is most toxic. The client realises the system doesn’t do what they thought it would, but the Project Manager pushes to “go live and fix it later”.
The Strategy: I implement “Go/No-Go” Gates that are tied to technical health, not just calendar dates. I have to empower my Lead Consultants to be the “surgeons” who can stop the line. We frame this to the client as: “We can go live today and crash tomorrow, or we can delay two weeks and survive five years.”
5. Bridging the Expertise Gap (The White Coat Effect)
The reseller is often hired specifically because the client knows they are “incompetent” in a specific domain. Yet, as you point out, that acknowledgement often leads to questioning the expert’s estimates.
The Implementation View: The client hires us for our “White Coat,” then tells us how to wear it. They want our expertise but want to negotiate the “laws of physics” (e.g., “Why does data migration take three days? Can’t you just copy-paste it?”).
The Strategy: Radical Transparency in Complexity. We don’t just give an estimate; we show the “why”. We use “Reference Architectures” and “Risk Logs” from previous failures to provide the visceral stakes you talk about. We tell the “war stories” of other companies that ignored the warnings, turning the abstract risk into a cautionary, visceral tale.
Final Summary
From the reseller/implementer perspective, your article is a manifesto for Consultative Integrity. We are the ones who must live with the client through the “physical illness” of the implementation. If we allow “Deference Delusion” to drive the project, we aren’t partners; we’re just accomplices in a delivery failure. My job as CTO is to ensure we have the “standing and authority” to be the doctor, even when the patient is demanding a shorter recovery time.
Great article!