There's a peculiar asymmetry in how humans handle their own incompetence. It reveals itself most starkly when you compare two scenarios: a cancer patient undergoing chemotherapy, and a project manager pushing delivery dates on a complex technology initiative.
Both involve life-altering stakes. Both require deep expertise the decision maker doesn't possess. Yet in one case, we defer completely. In the other, we somehow feel qualified to drive.
The Chemotherapy Paradox
First, let's be clear: incompetence is contextual. Very few people can declare themselves "universally incompetent". What does this mean? Well, just because you have little or no technology competence, it doesn't mean you are without merit or purpose. The trick is to tie your competencies to the work you are involved in…
When someone receives a cancer diagnosis, something remarkable happens to their ego. They sit across from an oncologist who explains a treatment protocol involving cytotoxic agents that will poison their body in carefully calibrated doses. Their hair will fall out. They'll experience chronic nausea. Their immune system will crater. The treatment itself might kill them. And they say: "Okay. When do we start?"
This isn't weakness. It's wisdom born of acknowledged ignorance. The patient knows they don't understand the pharmacokinetics of cisplatin or the mechanisms of programmed cell death. They can't evaluate whether the proposed regimen optimises for tumour response versus quality of life. They lack the fourteen years of training required to even parse the relevant literature.
So they yield. Completely. They ask questions to understand, not to challenge. They follow instructions precisely. They don't suggest alternative dosing schedules based on something they read online.
This is how humans behave when they genuinely know they donât know.
The Technology Incompetence Paradox
Now consider the enterprise technology project. A complex migration, perhaps, or a new trading platform. The stakes are high: reputational damage, regulatory exposure, hundreds of millions in potential losses.
The project manager or business sponsor sits across from a principal engineer who explains the technical approach. The engineer describes the challenges: distributed consensus problems, data consistency guarantees, failure mode analysis, performance characteristics under load.
The manager's eyes glaze slightly. If pressed, they'll readily admit: "I'm not technical."
And then, in the very next breath, they'll ask: "But surely it can't be that hard? Can't we just…?"
This is the incompetence paradox in its purest form. The same person who just acknowledged they don't understand the domain immediately proceeds to:
- Push for aggressive delivery dates
- Propose "simple" solutions
- Question engineering estimates
- Mandate shortcuts they can't evaluate
- Drive decisions they're fundamentally unqualified to make
- Ship dates to senior business heads without any engineering validation
In the chemotherapy scenario, acknowledged incompetence produces deference. In the technology scenario, it somehow produces confidence.
Why the Difference?
Several factors drive this asymmetry, and none of them are flattering.
Visibility of consequences. The cancer patient sees the stakes viscerally. The tumour is in their body. The chemotherapy will make them physically ill. The consequences of getting it wrong are personal and immediate. Technology failures, by contrast, are abstract until they're not. The distributed system that can't maintain consistency under partition? That's someone else's problem until it becomes a P1 incident at 3am.
Illegibility of expertise. Medicine has successfully constructed barriers to amateur interference. White coats. Incomprehensible terminology. Decades of credentialing. Technology, despite being equally complex, has failed to establish similar deference boundaries. Everyone has an iPhone. Everyone has opinions about software.
The Dunning-Kruger acceleration. A little knowledge is dangerous, and technology provides just enough surface familiarity to be catastrophically misleading. The manager has used Jira. They've seen a Gantt chart. They once wrote an Excel macro. This creates an illusion of adjacent competence that simply doesn't exist when facing a PET scan.
Accountability diffusion. When chemotherapy fails, the consequences land on a single body. When a technology project fails, it becomes a distributed systems problem of its own: blame fragments across teams, timelines, "changing requirements," and "unforeseen complexity." The manager who pushed impossible dates never personally experiences the 4am production incident.
The Absence of Technical Empathy
What's really missing in failing technology organisations is technical empathy: the capacity to understand, at a meaningful level, what trade-offs are being made and why they matter.
When a doctor says "this treatment has a 30% chance of significant side effects," the patient grasps that this is a trade-off. They may not understand the mechanism, but they understand the structure of the decision: accepting known harm for potential benefit.
When an engineer says "if we skip the integration testing phase, we increase the probability of data corruption in production," the non-technical manager hears noise. They don't have the context to evaluate severity. They don't understand what "data corruption" actually means for the business. They certainly can't weigh it against the abstract pressure of "the date."
So they default to the only metric they can measure: the schedule.
The Project Management Dysfunction
Consider the role of the typical project manager in a failing technology initiative. Their tools are timelines, status reports and burn-down charts. Their currency is dates.
When has a project manager ever walked into a steering committee and said: "We need to slow down. There's too much risk accumulating in this product. The pace is creating technical debt that will compound into failure."?
They don't. They can't. They lack the technical depth to identify the risk, and their incentive structure punishes such honesty even if they could.
Instead, when the date slips, they "rebaseline." They "replan." They produce a new Gantt chart that looks exactly like the old one, shifted right by six weeks.
This is treated as project management. It's actually just administrative recording of failure in progress.
The phrase "we missed the date and are rebaselining" is presented as neutral status reporting. But it obscures a critical question: why did we miss the date? Was it:
- Scope creep from stakeholders who don't understand impact?
- Technical debt from previous shortcuts coming due?
- Unrealistic estimates imposed by people unqualified to evaluate them?
- Architectural decisions that traded speed for fragility?
The rebaseline answers none of these questions. It simply moves the failure point further into the future, where it will be larger and more expensive.
The Trade-Off Vacuum
Here's a question that exposes the dysfunction: when did a generic manager last table a meaningful technical trade-off?
Not "can we do X faster?" That's not a trade-off. That's just pressure wearing a question mark.
A real trade-off sounds like: "If we reduce the scope of automated testing from 80% coverage to 60% coverage, we save three weeks but increase production defect probability by roughly 40%. Given our risk tolerance and the cost of production incidents, is that a trade we want to make?"
This requires understanding what automated testing actually does. What coverage means. How defect probability correlates with test coverage. What production incidents cost.
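The structure of that trade-off can be made concrete with a back-of-the-envelope expected-cost calculation. The figures below are illustrative assumptions, not real data: the team-week cost, incident cost, and baseline defect probability would all come from your own organisation.

```python
# Illustrative expected-cost comparison for the coverage trade-off described
# above. All figures are hypothetical assumptions for the sake of the sketch.

WEEK_COST = 50_000       # fully loaded cost of one team-week (assumed)
INCIDENT_COST = 400_000  # average cost of a production incident (assumed)

def expected_cost(weeks_saved: float, defect_probability: float) -> float:
    """Expected net cost: incidents we expect to pay for, minus schedule saved."""
    return defect_probability * INCIDENT_COST - weeks_saved * WEEK_COST

# Option A: keep 80% coverage, with a baseline defect probability of 10% (assumed)
keep_coverage = expected_cost(weeks_saved=0, defect_probability=0.10)

# Option B: cut to 60% coverage, saving 3 weeks but raising defect probability ~40%
cut_coverage = expected_cost(weeks_saved=3, defect_probability=0.10 * 1.40)

print(f"Keep 80% coverage: expected net cost £{keep_coverage:,.0f}")
print(f"Cut to 60% coverage: expected net cost £{cut_coverage:,.0f}")
```

The point is not which option wins; that depends entirely on the assumed incident cost and risk tolerance. The point is that writing the numbers down forces exactly the conversation the trade-off demands.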
Generic managers don't table these trade-offs because they can't. They lack the technical vocabulary, the domain knowledge, and often the intellectual honesty to engage at this level. Instead, they ask: "Why does testing take so long? Can't we just test the important bits?"
And engineers, exhausted by years of this, learn to capitulate, obfuscate, or buffer their estimates so grossly that the organisation ends up outsourcing the work to another company that is more than happy to create favourable lies about timelines. None of this serves the organisation.
Solving Organisational Problems with Technology (And Making Everything Worse)
There's a particularly insidious failure mode that emerges from this partial knowledge problem: the instinct to solve organisational dysfunction with technology.
The logic seems sound on the surface. The current system is slow. Teams are frustrated. Data is inconsistent. Processes are manual. The obvious answer? A rewrite. A new platform. A transformation programme.
What follows is depressingly predictable.
The rewrite begins with enthusiasm. A new technology stack is selected, often chosen for its novelty rather than its fit. Kubernetes, because containers are modern. A graph database, because someone read an article. Event sourcing, because it sounds sophisticated. Microservices, because monoliths are unfashionable.
Each decision is wrapped in enough management noise to sound credible. Slide decks proliferate. Vendor presentations are scheduled. Architecture review boards convene. The language is confident: "cloud-native," "future-proof," "scalable by design."
But anyone with genuine technical depth would immediately challenge the rationality of these decisions. Why do we need a graph database for what is fundamentally a relational problem? What operational capability do we have to run a Kubernetes cluster? Who will maintain this event sourcing infrastructure in three years when the contractors have left?
These questions don't get asked, because the people making the decisions lack the technical vocabulary to even understand them. And the engineers who could ask them have learned that such questions are career-limiting.
So the rewrite proceeds. And the organisation gets worse.
Iâve seen this pattern repeatedly. A legacy system, ugly, creaking, but fundamentally functional, is replaced by a modern platform that is architecturally elegant and operationally catastrophic. The new system requires specialists that donât exist in the organisation. It has failure modes that nobody anticipated. It solves problems that werenât actually problems while failing to address the issues that drove the rewrite in the first place.
The teams initially call out a "bedding-in period." The new platform just needs time. People need to adjust. There are teething problems. This is normal.
Months pass. The bedding-in period extends. Workarounds accumulate. Shadow spreadsheets emerge. Users quietly route around the new system wherever possible.
Eventually, the inevitable emperor's robes moment arrives. External specialists are called in, expensive consultants with genuine technical depth, and they deliver the verdict everyone already knew: the new platform is not fit for purpose. The technology choices were inappropriate. The architecture doesn't match the organisation's capabilities. The complexity is unjustified.
But by now, tens of millions have been spent. Careers have been built on the transformation. Admitting failure is organisationally impossible. So the platform staggers on, a monument to what happens when partial knowledge drives technology decisions.
The tragedy is that the original problems were often organisational, not technological. The legacy system was slow because processes were broken. Data was inconsistent because ownership was unclear. Teams were frustrated because communication was poor.
No amount of Kubernetes will fix a lack of clear data ownership. No event sourcing architecture will resolve dysfunctional team dynamics. No graph database will compensate for the absence of defined business processes.
But technology feels like action. It appears on roadmaps. It has milestones and deliverables. It can be purchased, installed, and demonstrated. Organisational change is messy, slow, and hard to measure. So we default to technology, and we make everything worse.
The vendors, of course, are delighted to help. They arrive with glossy presentations and reference architectures. They speak with confidence about "digital transformation" and "platform modernisation." They don't mention that their incentives are misaligned with yours: they profit from complexity, from licensing, from the ongoing support contracts that complex systems require.
Each unnecessary vendor, each cool but inappropriate technology, each unjustified architectural decision adds another layer of complexity. And complexity is not neutral. It requires expertise to manage. It creates failure modes. It slows everything down. It is, in essence, a tax on every future change.
The partially knowledgeable manager sees a vendor presentation and thinks "this could solve our problems." The technically competent engineer sees the same presentation and thinks "this would create twelve new problems while solving none of the ones we actually have."
But the engineer's voice doesn't carry. They're "just technical." They don't understand "the business context." They're "resistant to change."
And so the organisation lurches from one technology-driven transformation to the next, never addressing the underlying dysfunction, always adding complexity, always wondering why things keep getting worse.
The "Tried and Tested" Fallacy
Here's where it gets even more frustrating. The non-technical leader doesn't always swing toward shiny new technology. Sometimes they swing to the opposite extreme: "Let's just use something tried and tested."
This sounds like wisdom. It sounds like hard won experience tempering youthful enthusiasm. It sounds like the voice of reason.
It's not. It's the same dysfunction wearing different clothes.
"Tried and tested" is a lobotomised decision bootstrapped with a meaningless phrase. What does it actually mean? Tried by whom? Tested in what context? Tested against what requirements? Proven suitable for what scale, what failure modes, what operational constraints?
The phrase "tried and tested" is a conversation stopper disguised as an answer. It signals: "We have no appetite for a discussion about technology choices in this technology project."
Let that sink in. A technology project where the leadership has explicitly opted out of meaningful dialogue about technology choices. This is not conservatism. This is abdication.
The "cool new technology" failure and the "tried and tested" failure are mirror images of the same underlying problem: decisions made without genuine engagement with the technical trade-offs.
When someone says "let's use Kubernetes because it's modern," they're not engaging with whether container orchestration solves any problem you actually have.
When someone says "let's stick with Oracle because it's tried and tested," they're not engaging with whether a proprietary database at £50,000 per CPU core is justified by your actual consistency and scaling requirements.
Both statements translate to the same thing: "I cannot evaluate this decision on its merits, so I'm using a heuristic that sounds defensible."
The difference is that "cool technology" gets blamed when projects fail. "Tried and tested" rarely does. If you fail with a boring technology stack, it's attributed to execution. If you fail with a modern stack, the technology choice itself becomes the scapegoat.
This asymmetry in blame creates a perverse incentive. Non-technical leaders learn that "tried and tested" is the career-safe choice, regardless of whether it's the right choice. They're not optimising for project success. They're optimising for blame avoidance.
A genuine technology decision process looks nothing like either extreme. It starts with a clear articulation of requirements. It evaluates options against those requirements. It surfaces trade-offs explicitly. It makes a choice that the team understands and owns.
"We chose PostgreSQL because our consistency requirements are strict, our scale is moderate, our team has deep expertise, and the operational model fits our on-call capacity."
That's a decision. "Tried and tested" is not a decision. It's a refusal to make one while pretending you have.
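One lightweight way to make such a decision inspectable is a written-down weighted scoring of candidates against requirements. The criteria, weights, scores, and candidate names below are invented for the sketch; a real evaluation would be filled in by the team against its actual requirements.

```python
# Illustrative decision matrix for a database choice. Every number here is a
# hypothetical assumption; the value of the exercise is that each one is
# written down where anyone can challenge it.

CRITERIA = {                 # weight reflects how much each requirement matters
    "consistency_fit": 0.35,
    "scale_fit": 0.15,
    "team_expertise": 0.30,
    "operational_fit": 0.20,
}

CANDIDATES = {               # scores 1-5 against each criterion (assumed)
    "PostgreSQL": {"consistency_fit": 5, "scale_fit": 4,
                   "team_expertise": 5, "operational_fit": 5},
    "GraphDB":    {"consistency_fit": 3, "scale_fit": 4,
                   "team_expertise": 1, "operational_fit": 2},
}

def weighted_score(scores: dict) -> float:
    """Sum each criterion's score weighted by how much it matters."""
    return sum(CRITERIA[c] * scores[c] for c in CRITERIA)

ranking = sorted(CANDIDATES,
                 key=lambda name: weighted_score(CANDIDATES[name]),
                 reverse=True)
for name in ranking:
    print(f"{name}: {weighted_score(CANDIDATES[name]):.2f}")
```

The arithmetic is trivial by design. What matters is that the choice becomes an argument about explicit weights and scores rather than a slogan, which is the opposite of "tried and tested".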
The Path to Success
The organisations that consistently deliver complex technology successfully share a common characteristic: deep, meaningful dialogue between business stakeholders and engineering teams.
This doesnât mean business people becoming engineers. It means:
Genuine deference on technical matters. When the engineering team says something is hard, the first response is "help me understand why" rather than "surely it can't be that hard."
Trade-offs surfaced and owned. When shortcuts are taken, everyone understands what's being traded. The business explicitly accepts the risk rather than pretending it doesn't exist.
Subject matter experts in the room. Decisions about architecture, timelines, and scope are made with engineers who understand the implications, not by managers shuffling dates on a chart.
Outcome accountability that includes quality. Project managers measured solely on date adherence will optimise for date adherence, quality be damned. Organisations that include defect rates, production stability, and technical debt in their success metrics get different behaviour.
Permission to slow down. Someone with standing and authority needs the ability to say "stop: we're accumulating too much risk" and have that statement carry weight.
The Humility Test
There's a simple test for whether an organisation has healthy business-technology relationships. Ask a senior business stakeholder to explain, in their own words, the three most significant technical risks in the current programme.
Not "the timeline is aggressive." That's not a technical risk; that's a schedule statement.
An actual technical risk sounds like: "We're using eventual consistency, which means that during a failure scenario, customers might see stale data for up to thirty seconds. We've accepted this trade-off because strong consistency would add four months to the timeline."
If they can't articulate anything at this level of specificity, they're driving a car they don't understand. And unlike a rental car, when this one crashes, it takes the whole organisation with it.
Conclusion
The cancer patient accepts chemotherapy because they know they don't know. They yield to expertise. They follow guidance. They ask questions to understand rather than to challenge.
The technology manager pushes dates because they don't know they don't know. Their partial knowledge, enough to be dangerous, not enough to be useful, creates false confidence. They challenge without understanding. They drive without seeing the road.
The solution isn't to make every business stakeholder into an engineer. It's to cultivate the same humility that the cancer patient naturally possesses: a genuine acceptance that some domains require deference, that expertise matters, and that acknowledging your own incompetence is the first step toward not letting it kill the patient.
In this case, the patient is your programme. And the chemotherapy, the painful, slow, disciplined process of building quality software, is the only treatment that works.
Rebaselining isn't treatment. It's just rescheduling the funeral. There is no substitute for meaningful discussions. Planning is just regenerating a stuck thought, over a different timeline.
This is awesome.
The relatability is tangible for me. One thing I did note is that as a C-level, you have the unique ability to invoke change / shift direction based on your opinion and perform A/B testing live, and thus strengthen your process validation cycle. Say there are 2 levels above me – and those individuals are somewhat relatable to some of the personas you mention, with the same sort of …inefficiencies. What is one to do in order to invoke the required change, knowing that the patient will die if nothing is done?
There is no solace for me in the fact that the dying patient or sinking ship just gets kicked down the road like the can – so I end up putting in an exhaustive amount of time to come to the inevitable "argument" with my ducks ready and loaded: data-backed evidence, proof of actual testing, industry best-practice references in wide, sprawling Confluence docs, AWS senior solution architect reports and designs backing up my proposed "decision"… The list goes on. Yet – the removal of my influence seems trivial, and the leaders continue with whatever they were doing beforehand in what I call a decision-led business (a business whereby leaders derive value simply by making decisions). Do I pack up shop and try to sell my wares elsewhere? Do I make a stand and risk losing my cool, and subsequently my credibility as a leader? Do I just accept that this is how the game works and I have no choice but to conform to it in order to survive it, OR do I make my case and watch the ship burn as it has so many times before, keeping the lesson for myself and my journey?
Like you conveyed, speaking sense seems to be punished. If the words of sense are to not conform to the suggested carrot and the carrot gets used for the most cardinal of carrot sins, punishment – what is there to do?
Andrew, as a CTO of a Reseller and Implementation partner, your article describes the “Incompetence Asymmetry” from perhaps the most dangerous vantage point of all.
While a Vendor CTO worries about their product's reputation, and an Internal CTO worries about their organization's stability, the Implementation CTO is the one caught in the "pincer movement" between a vendor who overpromises and a client who underestimates.
Here is my perspective from the lens of the “Professional Services” leader:
1. The “Middleman” Liability (The Proxy for Accountability)
Andrew, you note that in tech, blame fragments. For a reseller, however, blame often concentrates on us. We didn’t build the software (the vendor did), and we didn’t define the business need (the client did), but we are the “doctor” in the room when the surgery goes wrong.
The Implementation View: We often inherit the “Deference Delusion” from both sides. The vendor tells the client the tool is “plug and play” (visibility of consequences: low). The client then expects us to implement it in weeks, not months.
The Strategy: I have to act as the Deference Enforcer. My role is to bridge the “Illegibility of Expertise” by showing the client the literal “surgical plan”. We move away from “feature lists” and toward “dependency maps”. We make it clear that if the client doesn’t provide the “organs” (data quality, API access), the “body” (the project) will reject the transplant.
2. The “Box-Shifting” vs. “Clinical” Conflict
The article speaks about the "Dunning-Kruger acceleration", where a manager who used Excel thinks they can manage a cloud migration. In the reseller world, this happens at the Sales level.
The Implementation View: Sales teams (both ours and the vendor's) are often the ones "shuffling dates on a chart" without engineering in the room. This creates an Integrity Debt. By the time my implementation engineers arrive, the "patient" has already been promised a miracle cure with no side effects.
The Strategy: I must mandate Technical Pre-Sales. No contract is signed until an architect (someone who actually understands the "failure modes" you mention) has vetted the Statement of Work (SOW). We have to be willing to lose a sale to save our reputation.
3. The “Modernization” Trap (Elegant but Catastrophic)
Your closing point about replacing a "creaking but functional" legacy system with an "architecturally elegant but operationally catastrophic" one is the Reseller's nightmare.
The Implementation View: We are often paid to be the “agents of change”. If we implement a complex Kubernetes-based microservices architecture for a client that only has the skills to manage a single Windows server, we have performed a successful “surgery” that will eventually kill the patient.
The Strategy: We must assess Operational Maturity, not just technical requirements. As a CTO, I push my team to ask: “Who is going to run this at 3 AM?” If the answer is “no one,” then the elegant solution is the wrong solution. Our job is to provide the “permission to slow down” or even the permission to stay “ugly and functional” if the client isn’t ready for the “chemotherapy” of modern tech.
4. Visibility of Consequences (The “Sunk Cost” Pressure)
The article mentions that technology failures are abstract until they aren’t. In implementations, the moment they become visceral is during UAT (User Acceptance Testing).
The Implementation View: By UAT, the budget is 90% spent, and the “delivery date” is tomorrow. This is where the “Incompetence Asymmetry” is most toxic. The client realises the system doesn’t do what they thought it would, but the Project Manager pushes to “go live and fix it later”.
The Strategy: I implement “Go/No-Go” Gates that are tied to technical health, not just calendar dates. I have to empower my Lead Consultants to be the “surgeons” who can stop the line. We frame this to the client as: “We can go live today and crash tomorrow, or we can delay two weeks and survive five years.”
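A go/no-go gate of that kind can be as simple as a checklist evaluated mechanically, so the decision cannot quietly default to the calendar. The checks, names, and thresholds in this sketch are hypothetical; a real gate would be sourced from the programme's own test, defect, and operational-readiness data.

```python
# Illustrative go/no-go gate tied to technical health rather than dates.
# The specific checks below are invented examples, not a prescribed list.
from dataclasses import dataclass

@dataclass
class GateCheck:
    name: str
    passed: bool
    blocking: bool  # a failed blocking check forces a No-Go

def evaluate_gate(checks: list[GateCheck]) -> tuple[bool, list[str]]:
    """Return (go, reasons): No-Go if any blocking check failed."""
    failures = [c.name for c in checks if not c.passed and c.blocking]
    return (len(failures) == 0, failures)

checks = [
    GateCheck("UAT defect count below threshold", passed=False, blocking=True),
    GateCheck("Rollback procedure rehearsed", passed=True, blocking=True),
    GateCheck("On-call rota staffed", passed=True, blocking=True),
    GateCheck("Nice-to-have reports migrated", passed=False, blocking=False),
]

go, reasons = evaluate_gate(checks)
print("GO" if go else f"NO-GO: {', '.join(reasons)}")
```

The useful property is that a No-Go comes with named reasons, which turns "we're not ready" from an opinion the Project Manager can override into a record the client has to explicitly accept the risk of ignoring.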
5. Bridging the Expertise Gap (The White Coat Effect)
The reseller is often hired specifically because the client knows they are “incompetent” in a specific domain. Yet, as you point out, that acknowledgement often leads to questioning the expert’s estimates.
The Implementation View: The client hires us for our "White Coat," then tells us how to wear it. They want our expertise but want to negotiate the "laws of physics" (e.g., "Why does data migration take three days? Can't you just copy-paste it?").
The Strategy: Radical Transparency in Complexity. We don’t just give an estimate; we show the “why”. We use “Reference Architectures” and “Risk Logs” from previous failures to provide the visceral stakes you talk about. We tell the “war stories” of other companies that ignored the warnings, turning the abstract risk into a cautionary, visceral tale.
Final Summary
From the reseller/implementer perspective, your article is a manifesto for Consultative Integrity. We are the ones who must live with the client through the “physical illness” of the implementation. If we allow “Deference Delusion” to drive the project, we aren’t partners; we’re just accomplices in a delivery failure. My job as CTO is to ensure we have the “standing and authority” to be the doctor, even when the patient is demanding a shorter recovery time.
Great article!