Why Promoting Your Best Manager Will Not Give You Great Leaders

Management and leadership sound so similar that they are often used interchangeably. True, both are sports of a kind. But details matter when you want to win.

Nowhere is this more visible than in financial services. Banks today do not compete inside a known, stable competitive set. The threat landscape runs from a two-person fintech with a single brilliant product insight to a global technology platform with distribution at a scale no bank has ever matched. The competitive horizon is not quarterly. It shifts continuously, and the organisations that navigate it successfully are not the ones with the best execution machinery. They are the ones with genuine leadership capability, people who can see around corners, reorient the organisation before the corner becomes a wall, and build the kind of teams that remain effective when conditions change faster than any plan anticipated. In that environment the distinction between leadership and management is not an academic one. It is the difference between an organisation that adapts and one that optimises itself into irrelevance.

Most organisations have not built leadership capability. They have built something that resembles it closely enough to feel satisfied with, a hybrid of leadership and management that does the job of neither particularly well and leaves everyone quietly frustrated without being able to name why.

The frustration is real and it is widespread. Organisations promote their best managers into leadership roles and then watch, puzzled, as the same problems keep recurring. The promoted individual works harder than ever, attends every meeting, answers every escalation, drives every initiative forward with visible energy and effort, and somehow the organisation does not change. It moves. It does not change. There is a difference and it matters enormously.

The reason sits in a truth that is simple to state and genuinely difficult to act on. You cannot grow people by over-nurturing them. Growth requires tension. It requires ambiguity. It requires the specific discomfort of being in a situation that exceeds your current capability and having to find your way through it without someone removing the difficulty on your behalf. Management, at its most well-intentioned, does exactly that. It removes the difficulty. It resolves the tension. It fills the ambiguity with instruction. And in doing so it produces people who are entirely dependent on the continuation of those conditions, people who have been cared for rather than developed. Leadership understands this. It holds the tension deliberately. It resists the impulse to resolve what should remain unresolved long enough to do its work. Everything that follows is an attempt to explain what that difference looks like in practice and why so few organisations manage to sustain it.

1. The Rudder and the Propeller

A ship needs two things to go somewhere useful. It needs propulsion and it needs direction. The propeller pushes. The rudder steers. You can have an extraordinarily powerful propeller and still end up somewhere you never intended to go if the rudder is unattended. You can have perfect steering and go nowhere if the propeller stops. Both matter. They are not the same thing.

Management is the propeller. It converts energy into forward motion. It coordinates, it prioritises, it holds people accountable, it ensures that the agreed plan is executed with sufficient rigour that it actually happens. Good management is genuinely valuable and organisations that romanticise leadership while tolerating poor management usually pay a significant price for that indulgence.

Leadership is the rudder. It determines direction. It reads the water ahead. It makes the quiet, continuous adjustments that keep the ship on course as conditions change. And here is the problem that conflating the two creates: you cannot effectively tend the rudder while you are down in the engine room. The two activities require different positions on the ship, different attention, different awareness of what is happening around you.

Most organisations have unwittingly pulled their rudder operators into the engine room and are now wondering why the ship keeps drifting.

2. What Management Actually Is

It is worth being precise about this because management has been unfairly maligned in an era that fetishises leadership. Management is not a lesser discipline. It is a distinct one.

Management is predominantly task-oriented work. It answers the question of what needs to happen next and who is going to do it and by when. It thrives on clarity, structure, and decisiveness. A good manager removes ambiguity from execution. They tell people what they need to do, ensure people have what they need to do it, and follow up when things fall short. This is not trivial. Organisations that lack management discipline leave talented people flailing in confusion, duplicating effort, and losing momentum on things that genuinely matter.

But management operates from authority. The manager’s ability to direct is derived from their position in the hierarchy. When a manager says do this, the implied complement is because I am responsible for this area and I am telling you it needs to happen. That authority is legitimate and necessary. Without it, coordination collapses.

The limitation of authority is that it cannot create genuine understanding. It can create compliance. It can create movement. It cannot create the kind of shared comprehension that allows an organisation to navigate genuinely novel problems without constant direction from above. An organisation that runs entirely on authority is an organisation that stops thinking the moment the authority figure leaves the room.

There is one more quality of management that is worth naming clearly because it is often misread as a weakness when it is actually a feature. Management is contextual and isolated. A management decision that is entirely correct for one team in one situation may be completely wrong for a different team facing a superficially similar one. Management does not need to be universal. It needs to be right here, right now, for these people, on this problem. That specificity is what makes it effective. The mistake is assuming that because management works locally it can be scaled into something that substitutes for the broader, travelling thinking that leadership is supposed to provide.

There is also a gravitational relationship between management and the status quo that deserves to be named because it is almost never discussed honestly. Management naturally favours what already exists. It optimises the current system. It indexes heavily on how things feel in the present moment, on whether people are comfortable, whether the temperature in the room is manageable, whether the current equilibrium can be preserved. This is not incompetence. It is rational behaviour within the management frame, because management’s job is to make the current state work. The problem is that this instinct, left unchecked, produces precisely the outcome it was trying to prevent. Teams managed entirely for current comfort develop no tolerance for disruption. They are not safe. They are anaesthetised. There is a meaningful difference between the two and organisations that mistake one for the other pay for it the moment reality stops being polite.

3. What Leadership Actually Is

Leadership operates from content rather than authority. A leader’s ability to influence is derived from what they understand, what they can articulate, and what they can help others see that they could not see before. When a leader speaks, the implied complement is not because I said so but because here is how I understand this situation and here is why I think this matters and here is the question I think we are not yet asking.

This is a fundamentally different kind of influence and it requires a fundamentally different kind of work to produce.

Leadership requires thought. It requires reflection. It requires the willingness to sit with a problem long enough that you develop a genuine point of view about it rather than simply inheriting the conventional wisdom of the organisation. It requires reading, conversation, observation, and the kind of quiet processing time that does not appear productive in any visible sense but is in fact where leadership capacity is actually built.

Telling someone what to do is management. Helping someone think differently is leadership. Giving all the answers is management. Helping someone look in different places for better answers is leadership. Pushing is management. Changing the direction people are oriented toward is leadership.

The distinction sounds clean on paper. In practice it is enormously difficult to maintain because organisations are structured to reward the former in ways that are immediate and visible, while the returns on the latter are slow, diffuse, and hard to attribute.

There is a further dimension to leadership that management does not require and cannot replicate. A leader has to be able to speak with multiple voices. Not multiple personalities, multiple registers. The same understanding of a situation has to be translatable into the language of the engineer, the language of the commercial team, the language of the board, the language of the person who is frightened about what the change means for them specifically. This is not spin. It is the opposite of spin. It is the discipline of caring enough about your audience to meet them where they are rather than expecting them to come to where you are.

This requires empathy as a functional capability, not as a personality trait. A leader who can only articulate an idea in one way is a leader who can only include one kind of person in the narrative they are building. Everyone else is expected to translate for themselves, and many will not bother. Empathy in leadership is the work of genuinely understanding what someone else needs to hear in order to be included in a direction of travel that you already understand. It is the bridge between the leader’s comprehension and the organisation’s participation. Without it, leadership becomes a broadcast that most people receive as noise.

There is a discipline of sequencing that sits underneath all of this that separates leaders who genuinely develop thinking from those who merely perform the appearance of listening. Leaders go last. Not because their view is less important but because the moment a leader speaks first, the room recalibrates around that position. People who disagree quietly revise their answers. People who were uncertain find sudden agreement. The quality of thinking in the room collapses toward the leader’s starting point and the leader learns nothing they did not already know. Going last keeps the room honest. It surfaces the actual distribution of thinking rather than a polished reflection of the leader’s own.

But going last does not mean speaking softly when you finally do speak. When the leader contributes, the contribution should lead and correct. It should take what has been said, name what is being avoided, challenge the assumption underneath the apparent consensus, reorient the group toward a question nobody has asked yet. This is fundamentally different from censoring or instructing. Censoring shuts thinking down. Instructing replaces it. Leading and correcting extends it, challenges it, and points it somewhere more honest. A leader who goes last and then merely summarises what the room already said has wasted the only moment that required genuine leadership.

4. The Asymmetry

There is an important asymmetry here that most organisations get backwards. Managers can lead. Leaders should never manage.

A manager who, in a particular moment, helps a team member think differently about a problem, who asks a question instead of giving an instruction, who creates space instead of filling it, is doing something valuable and healthy. They are exercising a leadership instinct within a management role and the organisation is better for it. The direction of travel from management toward leadership is always available and always welcome.

The reverse is not true. A leader who collapses into management is not being helpful in a pinch. They are making a structural error that compounds quietly over time. Every time a leader answers the question that the organisation should have answered for itself, they are making the organisation slightly more dependent and slightly less capable. Every time a leader steps into a management vacuum rather than asking why the vacuum exists, they are deferring the real problem. The real problem is never the specific decision that needed making. It is the condition that produced an organisation unable to make it.

This is why the phrase servant leadership, popular as it is, requires careful handling. Service to an organisation does not mean doing whatever the organisation asks of you. Sometimes the most important service a leader can render is the refusal to rescue: the willingness to let the organisation sit with its own difficulty long enough to develop the capacity to resolve it.

Here is what actually happens when you promote a strong manager into a leadership role. For the first few weeks, perhaps the first few months, they attempt to operate at the level the role requires. They think about strategy. They have the bigger conversations. They ask questions rather than giving answers.

Then something breaks. An escalation arrives that nobody else is handling. A decision gets stuck in a committee that needs someone to break the deadlock. A team member is struggling and needs specific, concrete guidance right now. A project is drifting and the deadline is real. Each of these is legitimate. Each of these genuinely needs attention. And so the newly promoted leader handles it, because that is what they are good at, because it feels responsible, because the organisation visibly benefits in the short term, and because sitting with the discomfort of not intervening requires a kind of tolerance for organisational inertia that very few people have been taught to develop.

The organisation has natural inertia. Left alone, problems often resolve themselves, teams often find their own equilibrium, decisions often get made by the people closest to the information. Leaders who cannot sit with that inertia, who reach instinctively for management tools every time the organisation slows down, are not leading. They are preventing the organisation from developing the leadership capacity it needs below them.

This is the gravitational pull. Every escalation, every urgent decision, every visible problem is pulling the leader back toward the engine room. And the engine room is comfortable. It is familiar. It produces immediate, attributable results. It feels like doing something. Reflection feels like doing nothing, right up until the moment the ship runs aground.

5. The Organisation’s Role in All of This

It would be convenient if this were purely a personal failure, a matter of individual leaders lacking the discipline to stay at the rudder. The reality is more uncomfortable. Most organisations actively construct the conditions that produce it and then spend considerable energy being puzzled by the results.

The pattern is consistent and almost universal. Organisations ask their leaders to lead, but only after they have finished all the management tasks perceived to need the most competent person in the room. The senior leader is the most capable, therefore the most urgent problems flow to them, therefore their time is consumed before the leadership work begins, therefore the leadership work never begins. The organisation has not asked someone to lead. It has asked someone to manage everything first and lead with whatever is left, which is nothing.

This is not malice. It is the logical output of low trust combined with high complexity. Organisations that do not trust the layers below them to handle difficulty will always route that difficulty upward. And the higher it routes, the more completely it consumes the people who should be doing something else entirely.

The consequences are visible to everyone and diagnosed by almost no one. Strategy disconnects from execution. The leadership appears preoccupied and remote. Initiatives stall at the point where senior judgment is required because senior judgment is perpetually occupied elsewhere. People mistake the symptom, a leader who seems unavailable or strategically vague, for a character problem rather than a structural one.

The organisation has sent someone to a football match in ice hockey gear. Even if they are an outstanding footballer, you will not see it in their performance. The kit is wrong, the conditions are wrong, and everyone is standing on the sideline wondering why the football looks so disappointing. The answer is not a better footballer. It is the right conditions for the one you already have.

6. The Gift of Silence

Leadership must embed discomfort. Not tolerate it. Not manage it. Embed it as a structural feature of how the team operates, because discomfort is the medium through which genuine growth moves. A leader who is not personally comfortable with discomfort cannot create the conditions for it in others. They will flinch at the last moment, soften the edge, rescue the feeling, and the team will learn from that flinching far more than from anything said explicitly. The leader’s relationship with discomfort is contagious in both directions.

There is one leadership capability that is almost impossible to develop inside an organisation that has confused leadership with management, and it is the one that builds the most durable teams. The ability to stay silent when everything in you wants to speak.

When a team loses badly, and every team loses badly eventually, the managerial instinct is to intervene immediately. To reframe the loss as a learning opportunity. To identify the positives. To motivate. To protect people from sitting too long in the discomfort of having failed. This feels like care and sometimes it is dressed as care. What it actually is, most of the time, is the leader’s own discomfort being managed at the team’s expense.

If you lose a game ten nil, it is entirely appropriate to be sad. It is appropriate to be frustrated. It is appropriate to sit with the specific, named feeling of having been comprehensively beaten and to let that feeling do its work. That feeling is not a problem to be solved. It is information. It is the emotional data that, if you allow it to land and be processed rather than immediately smoothing it away, produces the kind of honest assessment that actually improves performance.

Leaders who rescue feelings produce fragile teams. The team learns, not from any explicit instruction but from repeated experience, that difficulty will always be softened before it becomes too uncomfortable. They develop an expectation of rescue. They lose the tolerance for sitting with ambiguity, with failure, with the particular tension of a problem that has no clean resolution yet. When the rescue does not come, because eventually it cannot always come, they shatter rather than flex.

Leadership that allows people to feel what they feel, including the difficult and the dark, produces something entirely different. It produces teams that have an honest relationship with their own performance. Teams that can look at a ten nil loss without needing to explain it away, identify exactly where they fell short, carry the discomfort of that honestly, and then use it as fuel. Those teams are durable in a way that protected teams never are, because their resilience was built in conditions that resembled the conditions they will face again, rather than in a carefully maintained environment of managed comfort.

The gift of silence is not indifference. It is the deepest form of respect a leader can offer. It says I believe you are capable of holding this. It says I am not going to take this away from you because it is yours and it matters. It says I trust you to come through this without my intervention. That trust, communicated not through words but through the deliberate absence of rescue, is one of the most powerful things a leader can give.

7. Reflection as a Discipline

The reason most leaders end up doing management work is that reflection is not treated as a professional discipline in most organisations. It is treated as a luxury, something you do when you have cleared the backlog, which means you never do it because the backlog never clears.

Genuine leadership requires protected thinking time. It requires the willingness to disappear from the urgent for long enough to understand the important. It requires reading things that are not directly related to the current quarter’s objectives. It requires conversations with people outside your organisation, outside your industry, outside your usual frame of reference. It requires the occasional long walk with no agenda.

None of this appears on any project plan. None of it generates a status update. All of it is what allows a leader to develop the content that leadership actually runs on. A leader with no content has nothing to lead with. They fall back on authority, which means they are now managing, which means the organisation loses the directional thinking it actually needed from them.

The leaders who maintain genuine leadership capacity are the ones who treat reflection not as self indulgence but as a core professional responsibility. They protect the time. They explain why they are protecting it. They model, for the people around them, what it looks like to think before you act rather than acting to avoid having to think.

8. What This Confusion Costs You

The cost is not visible on any dashboard. It shows up slowly, in the texture of organisational life, in the kinds of problems that keep recurring, in the conversations that never quite happen.

Organisations where leaders have been pulled into management have leaders who know a great deal about what is happening and very little about why it keeps happening. They have sophisticated execution machinery pointed in directions that were chosen years ago and never genuinely revisited. They have talented people who have learned to wait to be told what to do next because the leader always arrives with the answer before anyone has had a chance to think. They have a kind of institutional learned helplessness that looks, from the outside, like a culture problem but is really a leadership problem.

Above everything else, they have fragile teams. Teams that have been protected from difficulty so consistently that they have never developed the emotional and cognitive calluses that durable performance requires. When something genuinely hard arrives, and it always does, these teams look to the leader for the rescue that has always come before. When the leader cannot provide it at the scale the moment requires, the whole system wobbles in ways that seem disproportionate to the triggering event. The fragility was always there. It was just never stress tested until it mattered most.

This is the deepest irony of all this and it is worth sitting with. Management optimises for safety. It protects people from discomfort, shields teams from hard feedback, smooths over the friction of failure, keeps the emotional temperature in a manageable range. And in doing all of that, it produces the least safe teams imaginable. Teams with no exposure to the conditions they will inevitably face. Teams whose apparent stability is entirely dependent on the continued presence of the person managing it. Remove the manager, change the conditions, introduce genuine adversity, and the safety turns out to have been a performance all along. Leadership that embraces discomfort, that builds unsafe teams in the deliberate sense, produces something that survives contact with reality because it was trained in conditions that resembled it.

They also have leaders who are exhausted. Doing two jobs simultaneously, neither as well as either deserves, is genuinely depleting. This trap does not just cost the organisation. It costs the individual carrying it.

9. Can You Actually Cultivate Leadership Culture?

This is the question underneath everything else and it deserves a straight answer rather than another framework.

The instinctive organisational response to a leadership deficit is to layer. Bring in more senior people. Create new leadership tiers. Assemble a broader set of voices in the hope that the aggregate of many opinions will somehow produce the clarity that individual leaders have failed to generate. It rarely works, and the reason is everything this piece has already argued. If the conditions that prevent leadership from operating are structural, adding more people to those conditions produces more people caught in the same trap. You have not grown leadership culture. You have grown the management overhead required to coordinate a larger group of people who are too busy managing to lead.

The honest answer is that you cannot install leadership culture. You cannot buy it, hire it in bulk, or mandate it into existence through a values framework on a wall. You can only grow it, and growing it requires creating the conditions in which it can develop and then trusting that the people you already have are capable of rising into those conditions if you stop filling all the space.

This means building organisations that tolerate silence after failure rather than rushing to reframe it. It means protecting thinking time as a non-negotiable rather than a reward for clearing the backlog. It means routing fewer problems upward and sitting with the discomfort of watching layers below work through difficulty rather than resolving it on their behalf. It means hiring for the capacity to think rather than the capacity to execute and then actually letting people think. It means going last when you speak and meaning something when you do.

For banks specifically, this is not optional. The bootstrapped fintech does not have a management hierarchy to navigate. The global platform does not ask permission before it enters your market. The competitive asymmetry is real and it is accelerating. The only meaningful response is an organisation that can see clearly, adapt quickly, and develop the next layer of leadership before it needs it rather than after. That organisation is not built through better management. It is built by leaders who understand the difference, protect the conditions that make it possible, and resist the gravitational pull of the engine room long enough to actually steer.

The ship will keep moving. Ships with good engines tend to do that. The question is whether anyone is steering, and whether the organisation has created the conditions for someone to learn how.

The JATO Organisation: Why Bolting AI onto Your Existing Structure Is a Darwin Award in Progress

There is an old urban legend, immortalised as one of the original Darwin Award nominations, about a man who bolted a JATO unit to a 1967 Chevrolet Impala. JATO stands for Jet Assisted Take Off. It is a solid fuel rocket designed to give heavy military transport aircraft the extra thrust they need to leave a short runway. The story goes that he drove out into the Arizona desert, found a long straight road, and fired it. The car reached speeds in excess of 350 miles per hour within seconds. The brakes melted. The tyres disintegrated. The car became airborne for over a mile and impacted a cliff face 125 feet above the road, leaving a crater three feet deep in the rock. The remains were not recoverable.

The legend is almost certainly fictional. The lesson it contains is not.

1. Amazon Reached for the Brakes

TechRadar reported this week that Amazon has responded to a series of high-profile outages by mandating that AI-assisted code changes receive sign-off from senior engineers before deployment. The trigger was a six-hour disruption to its main ecommerce platform, attributed internally to what communications described as “Gen-AI assisted changes.” Amazon SVP Dave Treadwell acknowledged that site availability had “not been good recently.”

The response is understandable. It is also the wrong answer.

Adding human sign-off to AI-generated code is not a governance strategy. It is a reflex. And like most reflexes, it feels right in the moment and solves the wrong problem. The driver reached for the brakes. The brakes had already melted.

2. You Can Only Tune a Go-Kart So Far

Think about what it actually means to optimise an existing organisation for AI. You add tooling. You write policies. You create centres of excellence. You require approvals. Each of these interventions makes you feel like you are responding to the challenge. Some of them even work, up to a point.

But there is a ceiling. Every go-kart has one.

You can tune the engine, lower the chassis, upgrade the tyres and find a better driver. You will go faster. At some point, however, you have extracted everything the vehicle was designed to give. The frame was never engineered for these speeds. The steering geometry was never intended for this kind of load. The braking system was sized for a completely different performance envelope. You are not tuning the vehicle anymore. You are fighting its fundamental architecture.

If you want to break the sound barrier, the Impala is the wrong starting point. It was never designed for this. Starting from it is not a constraint you can engineer around. It is the problem.

Most organisations adopting AI are doing exactly this. They are bolting a JATO unit to an organisational structure built for human-paced software delivery, human-scale code review, and human-readable output. The structure has approval gates built for humans. Governance processes built for humans. Risk frameworks built for humans. Quality assurance functions staffed by humans operating at human speed. And then they fire the rocket.

3. The Sign-Off Illusion

Here is the specific failure mode that Amazon’s response illustrates.

When AI is generating code at scale, the volume and complexity of that output quickly exceeds what any human reviewer can meaningfully evaluate. A senior engineer reviewing an AI-assisted pull request is not really reviewing it. They are scanning it. They are applying pattern recognition. They are looking for things that look wrong, which is a very different cognitive task from understanding what the code actually does and whether it is correct.

This matters enormously when the code is AI-generated. AI-generated code does not fail in the ways human-generated code fails. Human engineers make mistakes that are recognisable to other human engineers. The errors have shapes that experienced reviewers have seen before. AI-generated errors are structurally different. They can be syntactically perfect, pass linting, pass unit tests, and still encode a subtle misunderstanding of the problem domain that only surfaces under specific production conditions. Exactly the conditions that caused a six-hour outage.
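The failure shape described above can be sketched in a few lines. Everything here is hypothetical: `dedupe_events` is an invented helper standing in for the kind of AI output that lints cleanly and passes its own unit test while quietly encoding a wrong assumption about the problem domain.

```python
from datetime import datetime

# Hypothetical AI-generated helper: deduplicate click events.
# Tidy, well-named, and plausible at a reviewer's scanning speed.
def dedupe_events(events):
    """Remove duplicate events, keeping the first of each."""
    seen = set()
    unique = []
    for ts, user in events:
        key = (ts.replace(microsecond=0), user)  # subtle: assumes at most
        if key not in seen:                      # one real event per user
            seen.add(key)                        # per second
            unique.append((ts, user))
    return unique

# The unit test that ships with it passes: obvious duplicates go.
e1 = (datetime(2024, 1, 1, 12, 0, 0), "alice")
assert dedupe_events([e1, e1]) == [e1]

# Under production traffic, two genuine events in the same second
# are silently merged. No linter, unit test, or review scan sees it.
a = (datetime(2024, 1, 1, 12, 0, 0, 100), "bob")
b = (datetime(2024, 1, 1, 12, 0, 0, 900), "bob")
print(len(dedupe_events([a, b])))  # 1 — a real event was dropped
```

The review scan sees a clean function with a passing test. The bug only exists relative to knowledge about real traffic patterns, which is exactly the knowledge that lives outside the diff.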

Requiring a senior engineer to sign off on a 30,000-line AI-generated pull request is not oversight. It is the performance of oversight. Nobody in that review chain actually understands what the AI has done. They are approving it anyway. Because what else can they do? The rocket is already firing. The brakes are ornamental.

4. The Disconnection Risk and the Context Window Problem

4.1 The Atrophy Risk

There is a second failure mode that is slower, quieter, and more dangerous than the throughput problem. It is disconnection.

Senior engineers carry something that no AI model currently has. They carry a wide context window built from years of operating the system they are reviewing. Not the code in the PR. The system. They know why the retry logic in the payments service was written the way it was. They know what happens to that message queue at month end under peak load. They know the three things you must never do with that database connection pool, because two of them caused incidents that they personally stayed up until 3am to resolve. That knowledge is not written down anywhere. It lives in the engineer.

When AI writes the code and humans only scan the output, that knowledge stops being exercised. It atrophies. Slowly at first. Then faster. And the organisation does not notice until the moment it needs that knowledge most, which is the moment the system is on fire and nobody in the room can explain why.

4.2 The Context Window

Here is something worth understanding about the difference between how humans and AI reason about code. AI has a token window. It is large and getting larger. But it is still a window over the text of the code itself. It does not have a window over the operational history of the system, the incident reports, the architectural decisions that were made and reversed, the subtle coupling between services that was never documented because everyone who built it already knew.

Humans have that window. A senior engineer reviewing a change to the payments flow is not just reading the diff. They are reading it against a mental model of everything that system has ever done wrong. That mental model is irreplaceable. It is also fragile. Use it or lose it.

When AI generates the code and humans only approve the output, the mental model stops being updated. Engineers drift from their systems. The context window narrows. And when idempotence violations, race conditions and cascading failures eventually surface in the RCA, the people in the room are reading the evidence without the intuition needed to interpret it.

4.3 Compound Complexity

AI makes errors. This is not a criticism. It is a fact that any honest assessment of current AI coding capability has to start from. The errors are not random noise. They are systematic. They reflect misunderstandings of intent, of operational context, of the constraints that exist outside the code itself. And they compound.

A block of AI-generated code that handles retries makes an assumption about idempotence. Another block that handles concurrency makes a different assumption about state. Each block is locally plausible. Each would pass a unit test. At the integration seam, the assumptions conflict, and the failure mode is invisible until the system is under the specific combination of load and timing that exposes it. You do not find this in a code review. You find it in a production incident at 2am.
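The pattern is easy to sketch. The toy example below is invented for illustration, not taken from any real codebase: two blocks that each pass their own unit test, and a seam where their assumptions collide to produce a double charge.

```python
class PaymentService:
    """Block B: charge handler. Implicit assumption: every call is a new
    charge, so it appends to the ledger unconditionally."""
    def __init__(self):
        self.ledger = []

    def charge(self, account, amount):
        self.ledger.append((account, amount))   # not idempotent
        return "ok"


class LossySeam:
    """The integration seam: the first response is lost in transit,
    after the side effect has already landed."""
    def __init__(self, service):
        self.service = service
        self.first_call = True

    def charge(self, account, amount):
        result = self.service.charge(account, amount)   # charge lands here
        if self.first_call:
            self.first_call = False
            raise TimeoutError("response lost after the charge landed")
        return result


def with_retries(op, attempts=3):
    """Block A: retry wrapper. Implicit assumption: `op` is idempotent,
    so re-running it after a timeout is safe."""
    last_error = None
    for _ in range(attempts):
        try:
            return op()
        except TimeoutError as err:
            last_error = err
    raise last_error


svc = PaymentService()
seam = LossySeam(svc)
with_retries(lambda: seam.charge("acct-1", 100))
print(len(svc.ledger))   # 2 — two ledger entries for one logical charge
```

Block A's unit test (retries eventually succeed) and Block B's unit test (a charge lands) both pass. Only a test that injects the lost response at the seam exposes the double charge.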

Defending against compound complexity requires testing that is specifically designed to find the failures that live at integration boundaries. Not unit tests. Not happy path integration tests. Provocative, adversarial, edge to edge tests that assume AI has made plausible errors in every block and attempt to trigger the interactions between them. This test suite has to fire on every checkin. It has to be treated as a first-class engineering product. It is the only defence you have against a system that nobody fully understands being assembled from components that each individually looked fine.
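In miniature, an adversarial seam test looks like this. The component and the injection hook are invented for illustration; real fault-injection tooling forces these interleavings at the network or scheduler level rather than through a callback.

```python
class Counter:
    """A toy component with a locally plausible bug: the read-modify-write
    is not atomic. Every single-threaded unit test passes."""
    def __init__(self):
        self.value = 0

    def increment(self, after_read=lambda: None):
        current = self.value
        after_read()                 # seam: another writer can land here
        self.value = current + 1


def test_lost_update_at_the_seam():
    c = Counter()
    # Adversarial move: force the hostile interleaving instead of hoping
    # a scheduler produces it. A second increment completes entirely
    # inside the first one's read-modify-write window.
    c.increment(after_read=c.increment)
    # Two increments ran; the naive expectation is 2. One update was lost.
    assert c.value == 1


test_lost_update_at_the_seam()
```

The point is the posture: the test does not check the happy path, it manufactures the hostile condition and asserts the invariant that the seam is supposed to preserve.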

4.4 The RCA You Cannot Read

This is where the atrophy risk and compound complexity meet. The failures AI introduces live in the gaps between components, not inside them: idempotence violations, race conditions, thundering herds triggered by a single timeout. The knowledge that used to catch them, the race conditions fixed in 2019, the reason that service cannot be called twice on the same transaction, has stopped being exercised, because AI writes the code and humans only review the output.

So the two trends arrive in the same room at the same moment: the post-incident review. The engineers are staring at a cascade they cannot explain, because the context window that would have let them interpret the evidence has narrowed while nobody was watching. You cannot find these failures by reading code, and by then you cannot find them by intuition either. You find them with comprehensive, adversarial automated testing that fires on every checkin and is specifically designed to trigger the failures that live at the seams.

4.5 Protect the Context Window

The most valuable thing a senior engineer brings to a code review is not their ability to read code. It is their ability to read code in the context of everything they know about the system it is entering. Those are completely different skills. The first can be replicated. The second cannot, at least not yet, and not cheaply.

AI assembles code from a window over the text in front of it. A senior engineer reviews code from a window over years of operational history, incident reports, architectural regrets and hard-won intuitions about where this particular system fails under pressure. The human context window is wider, deeper and enormously more valuable than it looks from the outside.

It is also the first thing to go when engineers stop being close to their systems. Replace writing with reviewing. Replace reasoning with scanning. Do it long enough and the wide context window collapses into a narrow one. The engineer is still senior in title. They are no longer senior in the way that actually matters at 2am when something has gone wrong and nobody can explain why the cascade started where it did.

Protect the context window. Keep senior engineers close to their systems. Treat their deep operational knowledge as the risk control it actually is, not as background context for an approval process. And build automated testing that is adversarial enough to find the compound errors that AI will inevitably introduce at integration boundaries, because the human intuition that used to catch those errors is the thing you are most at risk of losing.

5. Governance Cannot Run at Rocket Speed

The deeper problem is one of tempo. Human governance processes were designed for human delivery tempos. When a team ships once a fortnight, a review board can function. When a team ships multiple times a day, the review board becomes a queue. When AI agents are generating and deploying code continuously, the review board becomes an illusion. A compliance checkbox that adds latency without adding safety.

This is not a criticism of the people involved. It is a systems problem.

You cannot solve a throughput mismatch by asking the slower component to work harder. You can ask senior engineers to approve more PRs per day. They will try. The quality of each review will degrade proportionally. This is not a failure of diligence. It is mathematics.
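The arithmetic is worth making concrete. The numbers below are assumptions chosen for illustration, not measurements, but any plausible set produces the same conclusion.

```python
# Back-of-envelope sketch of the throughput mismatch. All four inputs
# are assumed values for illustration.
review_hours_per_day = 4        # realistic focused review time per engineer
ai_prs_per_day = 40             # assumed AI-assisted PR volume hitting one reviewer
lines_per_pr = 1_500            # assumed average PR size
careful_review_rate = 200       # lines/hour for genuine comprehension, not scanning

minutes_available_per_pr = review_hours_per_day * 60 / ai_prs_per_day
minutes_needed_per_pr = lines_per_pr / careful_review_rate * 60

print(minutes_available_per_pr)   # 6.0 minutes per PR, at best
print(minutes_needed_per_pr)      # 450.0 minutes per PR for a real review
```

A 75x gap does not close because people try harder. It closes because the review silently degrades into scanning.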

The organisations that understand this are not adding more humans to the approval chain. They are asking a more uncomfortable question. If the output is moving too fast for humans to govern, what can govern it?

6. Only AI Can Stabilise AI

The answer is not comfortable for people who believe that meaningful oversight must be human. But the logic is unavoidable. If AI is your accelerant, humans cannot be your brake. The physics do not work. You need a brake that operates at the same speed as the engine.

The antagonist muscle has to be AI itself. Automated testing at a scale and depth that matches AI-generated output. AI-powered quality assurance that can actually read and reason about what an AI agent has produced. Continuous evaluation frameworks that catch behavioural drift in production before it becomes an outage. Canary deployments and automated rollback systems that do not wait for a human to notice something is wrong.
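The last of those components can be sketched concretely. The function below is a minimal illustration of an automated rollback gate; the metric names and thresholds are invented, and a real system would read them from its monitoring stack and error budgets.

```python
# Automated rollback gate for a canary deployment: the decision runs on
# every evaluation tick, at machine speed, with no human in the loop.
CANARY_MAX_ERROR_RATE = 0.01    # assumed absolute error budget
CANARY_MAX_P99_MS = 250         # assumed absolute latency ceiling

def canary_verdict(canary, baseline):
    """Compare the canary against the production baseline and return an
    action, not a report. Breach either the absolute ceiling or a
    relative regression against baseline and the canary is rolled back."""
    if canary["error_rate"] > max(CANARY_MAX_ERROR_RATE,
                                  2 * baseline["error_rate"]):
        return "rollback"
    if canary["latency_p99_ms"] > max(CANARY_MAX_P99_MS,
                                      1.5 * baseline["latency_p99_ms"]):
        return "rollback"
    return "promote"

baseline = {"error_rate": 0.002, "latency_p99_ms": 120}
print(canary_verdict({"error_rate": 0.0015, "latency_p99_ms": 130}, baseline))  # promote
print(canary_verdict({"error_rate": 0.05,   "latency_p99_ms": 130}, baseline))  # rollback
```

Humans choose the thresholds and decide what a regression means. The verdict itself has to execute at the speed of the deployment pipeline.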

None of this replaces human judgment. Humans set the standards, define the acceptance criteria, interpret the metrics and make the strategic decisions. But the execution of quality assurance at the speed of AI-generated delivery has to be automated. There is no alternative that is not either a bottleneck or a fiction.

The tension underneath all of this is real. Developers do not want to let go of control, and that instinct is not irrational. We have spent decades building cultures where humans are the quality gate. Seniority means you earned the right to be the last set of eyes. Asking engineers to cede that role to automated systems feels like removing the safety net, even when the safety net was never actually catching what you thought it was catching.

The shift is not removing human judgment. It is relocating it. Instead of humans governing individual code changes, humans govern the systems that govern code changes. You define what good looks like. You build the tests. You tune the evaluation framework. You set the rollback thresholds. You stay close enough to the system to know when the metrics are lying. That is harder work and more interesting work than approving a pull request.

Automated linting is a useful starting point for any organisation trying to understand what this looks like in practice. For anyone reading this outside of engineering: linting is roughly the equivalent of spell-check for code. It flags errors, style violations, and potential bugs before a human ever sees the output. It runs in milliseconds. It catches whole classes of problem that humans miss, not because they are inattentive but because humans are not pattern matchers at that granularity. No meetings required. No approval queues. No bottlenecks. Just signal, immediately, at the point where it is cheapest to act on.
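To make that concrete: the two functions below are invented for illustration, and each contains a problem that standard Python linters (pylint's W0102, flake8's F821) flag in milliseconds but that a human scanning a large diff can easily miss.

```python
def add_fee(order, fees=[]):        # W0102: mutable default argument —
    fees.append(1.50)               # the list is shared across calls
    return sum(order) + sum(fees)

def greet(name):
    return "Hello, " + nmae         # F821: undefined name `nmae` —
                                    # a typo caught before runtime

# The mutable default is the subtle one: the function "works" on its
# first call and then silently drifts on every call after that.
print(add_fee([10.0]))   # 11.5
print(add_fee([10.0]))   # 13.0 — the leaked state from call one
```

Neither problem requires judgment to detect. That is exactly why they belong to the machine, and why the humans should be spending their attention elsewhere.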

7. The Real Darwin Award

Amazon is a sophisticated technology organisation and they will work through this. They have the engineering talent, the operational discipline and the financial resources to find a better answer than mandatory sign-off. The companies that concern me are the ones that do not have those resources and are adopting AI at the same pace without asking any of these questions.

The JATO award goes to the organisation that invests heavily in AI as an accelerant, bolts it to their existing delivery structure, adds a sign-off process to feel responsible, and then discovers eighteen months later that they have a production environment that nobody fully understands, an incident rate that is climbing, and a governance process that never actually worked.

The moment of discovery is the cliff face.

Responsible adoption of AI is not slow adoption. Speed is not the problem. The problem is asymmetry. Investing heavily in the accelerant and almost nothing in the braking system. Every dollar your organisation spends on AI-generated code commits should be matched by investment in automated testing, quality metrics, A/B evaluation frameworks, behavioural monitoring and rollback capability. Not because regulators require it. Because the alternative is a crater.

8. Start From the Right Vehicle

The organisations that will navigate this well are not the ones that slow down AI adoption. They are the ones that redesign the vehicle before they fire the rocket. They ask what an engineering organisation looks like when AI is a first-class participant rather than a tool used by humans. They rebuild their quality assurance function from the ground up with automation at its core. They define what good looks like in machine-readable terms, not just human-readable ones. They treat the testing and evaluation pipeline as a product in its own right, not an afterthought.

This is harder than adding a sign-off step. It requires accepting that the existing structure was not designed for this and cannot be tuned to cope. It requires building something new rather than patching something old. It requires the kind of uncomfortable organisational honesty that most companies find very difficult.

But the alternative is the Impala. Firing the JATO unit on a vehicle built for a different world. Watching the brakes melt. Hoping that someone in the approval chain noticed something in that 30,000-line PR.

They did not. Nobody could have. The crater is already in the cliff.

9. References

  1. TechRadar, Craig Hale, 11 March 2026. “Amazon is making even senior engineers get code signed off following multiple recent outages.” https://www.techradar.com/pro/amazon-is-making-even-senior-engineers-get-code-signed-off-following-multiple-recent-outages
  2. Wikipedia. “JATO Rocket Car.” https://en.wikipedia.org/wiki/JATO_Rocket_Car

Captive Gratitude: How Product Team Structures Inflate Engineer Performance Ratings

There is a product team performance bias hiding in plain sight inside every organisation that has moved to product-aligned engineering, except that it does not show up as a number on a dashboard, a flag in a talent calibration session, or a red line in an engagement survey. It accumulates quietly, year on year, in the gap between what a performance grade is supposed to mean and what a manager is actually capable of knowing. I call it captive gratitude, and once you understand the structural conditions that produce it, you will find it extraordinarily difficult to unsee.

1. The Captivity Problem

When you embed technologists inside product teams, you are making a deliberate organisational choice. You want engineers, designers, and QA professionals to feel the pull of the customer problem, to sit alongside product managers, and to be accountable to outcomes rather than outputs. That logic is sound and the commercial benefits are well documented. The unintended consequence is that your UX designer now reports to a product lead who has exactly one UX designer in their world. Your two Java engineers are the only Java engineers that product lead will ever manage. Your frontend engineer exists, from that leader’s perspective, as a singular and irreplaceable atom in a very small universe.

When appraisal season arrives, that product lead faces a structurally impossible task. They are being asked to rank and rate people for whom they have no comparative basis of evaluation. They know whether their Java engineers shipped on time. They know whether the UX designer was pleasant to work with and whether the designs looked good to them as a non-designer. What they cannot know, because they have never managed anyone else doing the same job, is whether those same people sit in the top quartile, the median, or the bottom third of their profession.

The result is captive gratitude. Because the product lead depends entirely on this one person to do this one specialised thing, they are unconsciously biased toward protecting them, affirming them, and rating them generously. This is not corruption. It is not favouritism in the traditional sense. It is the entirely rational response of a manager who knows that losing their only frontend engineer would be operationally catastrophic, and whose brain has quietly decided that the safest way to retain them is to signal, repeatedly and sincerely, that they are excellent.

2. How Dependency Inflates Ratings

The inflation that captive gratitude produces is not symmetrical across all managers, and that asymmetry is precisely what makes it so difficult to correct at an organisational level. Product heads and technology leaders bring fundamentally different distortions to the appraisal table, and when you embed technologists inside product teams, you hand the appraisal pen to the product head by default.

Product leaders tend to overrate speed and agreeableness. The engineer who shipped fast, who said yes readily, who never created friction in the standup, who made the product manager’s life easier at every turn, that person reads as excellent. The engineer who pushed back on timelines, who raised technical debt concerns before agreeing to a feature request, who occasionally slowed the team down in pursuit of something more architecturally durable, that person reads as difficult, even when they are objectively the stronger technologist. The product leader is not being dishonest. They are applying the only lens available to them, and that lens produces inflated ratings for the wrong reasons.

Technology leaders, by contrast, tend to bias toward quality and operational excellence. They notice the engineer who refactors before adding features, who writes tests that actually catch regressions six months later, who thinks carefully about the person who will be on call at two in the morning when something breaks in production. They can see the difference between code that works today and code that will continue to work as the system scales, and they weight that distinction in ways that product leaders structurally cannot.

Neither lens is complete on its own. A technology leader who ignores business outcomes is just as dangerous as a product leader who ignores technical craft. But when only the product lens is applied to a technologist’s performance review, ratings inflate for agreeableness and speed while the deeper and more durable qualities of engineering excellence go unmeasured and unrewarded. Your strongest engineers are typically the first to notice that the grading curve is rewarding the wrong things, and that observation sits with them.

3. Why Functional Pools See More Clearly

The traditional alternative to product embedding is the functional pool: all your Java engineers in one chapter, guild, or centre of excellence, managed by someone who has spent a career around Java engineers. The critique of this model is well rehearsed and largely fair. Pooled engineers can feel distant from the customer problem, optimise for technical elegance over business outcome, and become a shared services backlog that product teams grow to resent.

But functional pools have one structural capability that embedded models destroy almost entirely, and that capability is the ability to compare honestly. When a technology leader manages twelve Java engineers, they develop a clear and calibrated sense of the distribution across the cohort. They know who writes the clearest and most maintainable code, who mentors most effectively, who handles ambiguity with the most composure, and who grows the fastest when placed under genuine pressure. That comparative baseline is not a luxury or an incidental benefit of the pooled model. It is the foundational requirement for a performance conversation that is accurate, developmentally useful, and fair to the individual on the other side of the table.

Without that baseline, you are not conducting performance management. You are conducting performance theatre, where both parties go through the motions of a rigorous evaluation that was, in structural terms, impossible before it began.

4. The Rotation Prescription

The solution is not to abandon product alignment and return to functional silos. The embedded model produces real and compounding benefits: faster feedback loops, stronger product intuition in engineering, and a shared accountability for outcomes that purely pooled models rarely achieve. Reversing it wholesale would simply trade one set of problems for a different and arguably worse set. The solution is to deliberately and systematically restore the comparative lens that product embedding removes, by moving people across team boundaries in structured, lightweight, and repeated ways.

The mechanism is rotation. Not reporting-line changes, not full reorganisations, and not secondments that last a year and feel to the individual like internal exile. The rotation prescription calls for short-cycle, intentional swaps where your UX designer spends three weeks contributing to another product team’s project, or your senior Java engineer participates in a community of practice initiative that places them alongside their peers from across the organisation. The rotation does not need to be full time to be effective. Two days a week for a quarter, or a single project-based assignment with clear deliverables, is sufficient to generate the comparative signal that your appraisal process is currently missing entirely.

Technology leaders have a specific and non-delegable role to play here. The rotation calendar cannot be left to goodwill negotiations between product heads, because product heads have every incentive to resist it. The technology leadership layer must own the rotation mandate, fund the communities of practice with real capacity rather than aspirational goodwill, and treat cross-team exposure as a governance requirement rather than a development gesture. Without that structural ownership, the rotation prescription will be absorbed and neutralised by delivery pressure within a single quarter.

5. What Rotation Actually Produces

When your UX designer works alongside the UX designer from another product team, both managers suddenly have something they lacked before: a reference point against which to calibrate their own instincts. When your Java engineer presents at a community of practice and fields detailed technical questions from their peers across the organisation, their technology lead sees them operating in a context that product delivery alone never provides. The comparison that results is not punitive or threatening. It is informative, for the manager, for the individual, and for the organisation’s understanding of where genuine technical strength actually lives.

The objection to rotation will be immediate and entirely predictable. Product managers will argue they cannot afford to lose their people even part time. Delivery schedules will be cited with great conviction. Committed roadmap items will be invoked as if they represent physical laws rather than negotiated plans. This objection deserves to be taken seriously and then firmly overruled, because the cost of inflated and inaccurate ratings compounds across years in ways that are far more expensive than any short rotation. The engineer who discovers through lateral exposure that their skills are genuinely in the top quartile of their profession will perform differently, develop more intentionally, and stay longer. The engineer who receives inflated ratings for the wrong reasons will eventually sense the hollowness of that feedback and seek an honest signal from the market instead.

The practical implementation requires three things working together. First, a rotation calendar owned at the technology leadership level and treated as a standing operating requirement rather than an annual aspiration. Second, communities of practice that are genuinely resourced, with time protected in capacity plans rather than relegated to something that happens after hours if people happen to feel like it. Third, appraisal processes that explicitly require input from managers and technical leads outside the individual’s home team, so that cross-team exposure is not merely a development gesture but a prerequisite for a complete and credible evaluation.

6. Cross-Organisation Calibration

Another structural problem appears when organisations stop comparing technologists across the entire technology estate. Product teams often rate their engineers relative to the local environment of that team, not against the broader technical organisation. The result is predictable. A business technology team that has had a difficult year will suddenly produce a cluster of “exceptional” ratings as managers attempt to compensate for morale or prior failures, while a central platform or infrastructure team that quietly delivered reliability all year nominates only a handful of exceptional performers. Without a central technology view, these grading curves drift apart and the system loses credibility.

Technology leadership must therefore treat performance grading the same way we treat architecture standards or production reliability: as an organisation-wide discipline. Cross-team calibration matters because performance ratings are only meaningful when the same bar is applied everywhere. If product teams nominate twenty percent exceptional engineers after a year of instability while central teams nominate single digits after a year of operational excellence, the system is already broken. The role of central technology leadership is not to dictate outcomes but to maintain a consistent view of engineering quality across the organisation. When teams perform badly, whether they sit in product units or central platforms, that reality must arrive in the grading distribution. Otherwise the organisation quietly teaches everyone the wrong lesson: outcomes do not matter, narrative does.

7. The Cost of Getting This Wrong

The goal of everything described above is not to create instability or to undermine the coherence and velocity of product teams. It is to ensure that when you sit across from someone in their annual performance conversation and tell them where they stand, you are working from an accurate picture rather than a gratitude inflated one.

The engineers who are most harmed by this bias are rarely the ones you would expect. They are not the weaker performers, who tend to benefit from the generous comparative vacuum that captive gratitude creates. They are the strongest ones, the people who know their craft deeply enough to recognise when their evaluation does not reflect it, who are perceptive enough to understand the structural reasons why, and who have enough market optionality to act on that understanding when the frustration accumulates past a certain threshold. Captive gratitude feels kind in the moment it is expressed. In the long run, it fails the very people it is trying to protect, because it denies them the honest developmental signal they need to grow further, and it denies your organisation the accurate picture it needs to invest wisely in the people who are genuinely building its future.

Gratitude that is genuinely earned does not need to be captive. It is strong enough to survive the comparison.

Transcripts from the Meeting Where Core Banking was Invented (A Faithful Reconstruction)

A companion piece to Core Banking Is a Terrible Idea. It Always Was.

It is 1972. A group of very serious men in very wide ties are gathered in a very beige conference room. They are about to make decisions that will haunt your change advisory board fifty years from now. The following is a faithful reconstruction of that meeting, because clearly someone needed to write it down.

CHAIRMAN: Gentlemen, we need to computerise the bank. The IBM salesman is outside. He’s been there since Tuesday. Security has tried to remove him twice. He seems to feed on rejection.

HEAD OF TECHNOLOGY (there is only one of him, and he is wearing a short-sleeved shirt, which everyone agrees is suspicious): We need a system that handles everything. Accounts, transactions, interest, fees, reporting. Everything.

CHAIRMAN: Everything?

HEAD OF TECHNOLOGY: Everything. In one place. One machine. One vendor.

CHAIRMAN: Should we perhaps have two vendors? For resilience?

HEAD OF TECHNOLOGY: Absolutely not. We want one vendor. Ideally one who makes hardware that only runs their software, so that if we ever want to leave we have to physically replace the building. That’s what I call commitment.

COMPLIANCE OFFICER: Will this system be easy to change when regulations evolve?

HEAD OF TECHNOLOGY: Change? Why would we change it? We’re going to write it in a language that reads like English was translated into German and then back into English by someone who had only ever read a tax return. That will ensure only a very specific kind of person can maintain it, and that person will be irreplaceable. That’s job security for everyone, really.

COMPLIANCE OFFICER: Visionary.

HEAD OF TECHNOLOGY: We’re going to run everything on a single box. All products. All customers. All transactions. Payments, lending, savings, reporting: one box, all of it, one throat to choke.

OPERATIONS MANAGER: What if the box falls over?

HEAD OF TECHNOLOGY: Then we have a disaster recovery plan.

OPERATIONS MANAGER: How long will recovery take?

HEAD OF TECHNOLOGY: Several hours. Possibly a day. We’re still working on the documentation. The recovery procedure will require a specialist who we will train exactly once and who will subsequently leave for a competitor. His successor will have the manual, which will be wrong by then, but written with such confidence that no one will question it until the actual disaster.

OPERATIONS MANAGER: And we need to test this?

HEAD OF TECHNOLOGY: We will test it once, during the original implementation, and then assume it still works forever. Testing it again would require a change freeze, three committees, a consultant from the vendor, and eight months. So: once.

CHAIRMAN: What about releases? How often will we update this system?

HEAD OF TECHNOLOGY: As rarely as possible. I’m thinking: annually. Maybe biennially if we can get away with it. Every release will be a full programme. Full regression testing across every function. Army of testers. Army of project managers managing the army of testers. A war room. Probably a dedicated floor.

FINANCE DIRECTOR: That sounds expensive.

HEAD OF TECHNOLOGY: It’s not expensive, it’s thorough. The release will take between six and eighteen months. We will begin change freeze approximately four months before the release date, which means the business cannot ship anything new for the better part of a year. This is a feature. It keeps everyone focused.

FINANCE DIRECTOR: Focused on what?

HEAD OF TECHNOLOGY: On not breaking anything. Which is the same as progress, if you think about it correctly.

CHAIRMAN: What do our customers get out of this release?

Silence.

HEAD OF TECHNOLOGY: Better MIS reports.

CHAIRMAN: They won’t see those.

HEAD OF TECHNOLOGY: No, but we will, and they are very clean reports. Very clean. Some of the cleanest reports you’ll ever see. Worth every penny of the hundred million we’re spending.

OPERATIONS MANAGER: How will the operators interact with this system?

HEAD OF TECHNOLOGY: Through a screen. One screen. The screen will have approximately four hundred fields. Many of them will be unlabelled, for security. The operator will learn which combinations of field values correspond to which operations through a combination of formal training, informal knowledge transfer, and trial and error with real money. Experienced operators will develop an almost mystical intuition for it. New operators will occasionally initiate a full principal repayment when they meant to process an interest charge, but that’s a training issue, not a system issue.

COMPLIANCE OFFICER: And there’s no confirmation step?

HEAD OF TECHNOLOGY: There’s a button. The button says OK. It always says OK. It says OK whether you’re creating a savings account or accidentally wiring nine hundred million dollars to the wrong counterparties. We felt a consistent user experience was important.

HEAD OF TECHNOLOGY: Now, about scaling. This system cannot scale horizontally. If we need more capacity we buy a bigger box. When the box reaches its limit we buy the biggest box IBM makes. When we exceed that box, we have a different kind of conversation.

OPERATIONS MANAGER: What kind of conversation?

HEAD OF TECHNOLOGY: The kind where we explain to the board that we need to run batch jobs overnight because we’ve run out of intraday capacity, and that customers cannot see their real balances until morning, and that this is normal and expected and completely fine. The batch run will begin at midnight. If it’s not finished by opening, we delay opening. This will never be a problem because it’s 1972 and banks open at ten.

CHAIRMAN: What happens in fifty years when banks operate around the clock and customers expect real time balances and instant payments from their pocket computers?

Long pause.

HEAD OF TECHNOLOGY: I’m going to stop you there. That is an unreasonable hypothetical and I think you should apologise for raising it.

FINANCE DIRECTOR: How long will implementation take?

HEAD OF TECHNOLOGY: Three years, minimum. Probably five if we want to do it properly.

FINANCE DIRECTOR: And what does ‘doing it properly’ deliver?

HEAD OF TECHNOLOGY: A working system. Same products as before. Same prices as before. Same service model as before. Customers will notice nothing has changed.

FINANCE DIRECTOR: That’s the success case?

HEAD OF TECHNOLOGY: That is the dream. If nobody notices, we’ve done it perfectly. If customers call in to say things are different, something has gone wrong.

FINANCE DIRECTOR: And when will we need to replace this system?

HEAD OF TECHNOLOGY: Never. This is the last system we’ll ever need.

Another long pause.

HEAD OF TECHNOLOGY: Or in about fifteen years, when the business has changed enough that this system can no longer accommodate it, and we’ll need to select a new vendor and begin a new three to five year programme that will produce the same products at the same prices that customers will not notice have changed.

CHAIRMAN: And then?

HEAD OF TECHNOLOGY: And then we’ll do it again. And then again. Each time, we’ll write a requirements document that captures everything the old system did plus everything the business has always wanted, and we’ll select the new vendor who covers the most requirements. And each time, we will have purchased a slightly more modern version of the same architectural mistake.

CHAIRMAN: That sounds like a treadmill.

HEAD OF TECHNOLOGY: I prefer the term upgrade cycle. Much more professional.

COMPLIANCE OFFICER: One final question. Could we instead build separate systems for each domain: payments, lending, identity: each independently deployable, each owning its own data, able to scale on its own terms and change without disrupting everything else?

The room goes very quiet.

HEAD OF TECHNOLOGY: That’s not how banking works.

COMPLIANCE OFFICER: Why not?

HEAD OF TECHNOLOGY: Because banking is complex. And regulated. And the vendors tell us it’s impossible. And frankly if it were possible someone would have done it already.

Forty-five years later, Monzo does exactly this with a team a fraction of the size. But that’s a different meeting.

CHAIRMAN: Very good. Let the IBM man in.

The IBM man has apparently already let himself in. He has been sitting at the head of the table for the last twenty minutes. Nobody is sure when he arrived.

IBM SALESMAN: Gentlemen. I understand you want one vendor, one box, one contract, a language only specialists can read, releases that take eighteen months, a user interface that requires interpretive experience, disaster recovery nobody has tested since 2003, and a licensing model that ensures leaving us is economically indistinguishable from burning the bank to the ground.

He opens his briefcase.

IBM SALESMAN: I have just the thing.

And that, more or less, is how we got here.

The remarkable thing is not that this meeting happened in 1972. The remarkable thing is that some version of it is still happening today, in banks that have had fifty years to notice the pattern, conducted by people clever enough to know better, producing requirements documents that run to hundreds of pages and conclude, with great confidence, that what the bank needs is a newer version of the same decision.

The neobanks walked in, ignored the IBM salesman entirely, and built banks that work. The architecture was never the mystery. The willingness to walk out of the meeting was.

Andrew Baker is Chief Information Officer at Capitec Bank. He writes about enterprise architecture, banking technology, and the infinite patience required to watch the same mistake happen in slow motion at andrewbaker.ninja.

How to Manage Technologists If You Don’t Know Anything About Technology

Health warning: this article may not make you feel happy, and it may not suit you to read it. I am not even sure I believe everything I am saying here, but I do believe in reflecting on challenging questions to try to make myself a better leader. The article simply asks: what is the value of you, in the context of leading skilled engineering teams?

There is a particular kind of executive confidence that appears in technology organisations. It usually sounds like this: “I don’t need to understand the tech. I manage outcomes.” It is normally followed by a transformation programme, several reorganisations, collapsing morale, and a very expensive consultancy engagement that promises clarity and delivers polished slideware.

Let’s be direct. Managing technologists without understanding technology is not a neutral handicap; it is an active risk multiplier. The more complex the environment, the more damaging the ignorance becomes.

Consider the keep fit instructor who is visibly overweight and hasn’t exercised in years. They may possess a certification, a job title, and a timetable full of classes. But they cannot teach what they do not know or do not believe in. Their clients sense it immediately. The credibility is gone before a single word is spoken. Technology leadership is no different. You cannot guide people through terrain you have never traversed, and you cannot inspire standards you cannot demonstrate.

So here is the guide you asked for.

1. Start by Accepting You Are Blind

If you do not understand software architecture, distributed systems, infrastructure, security models, delivery pipelines, data structures, and operational constraints, then you are blind to the shape of the terrain. You cannot properly see tradeoffs, shortcuts, fragility, or when someone is bluffing. Technology is not like sales or marketing where outcomes are often decoupled from deep domain mechanics. In technology, the mechanics are the outcome. Architecture decisions made in a whiteboard session today will determine scalability, cost, resilience, and regulatory exposure five years from now.

If you do not understand that dynamic, you are not steering the organisation. You are simply sitting in the passenger seat pretending to hold the wheel.

2. Stop Pretending Delivery Is Just Project Management

Non technical leaders often default to process pageantry because it is visible and legible. They add more standups, more dashboards, more governance forums, and more colour coded status reports in the belief that visibility equals control. None of these artefacts fix poor architecture, reduce technical debt, compensate for a misaligned data model, or create good engineers.

When you cannot evaluate technical quality directly, you over index on visible artefacts. Documentation begins to look like competence and velocity charts start to look like value, while entropy compounds quietly underneath the surface. The system appears orderly right up until it fails.

3. You Will Optimise for the Wrong Things

If you do not understand technology, you will optimise for what you can easily measure. Feature count, story points, headcount, burn rate, vendor promises, and analyst positioning become proxies for progress because they are tangible and easy to present upward.

Technologists, however, optimise for variables that are less visible but far more consequential: latency, throughput, failure domains, blast radius, observability, coupling, and long term maintainability. These variables are not intuitive if you have never built and operated systems, so you unintentionally pressure teams to move in directions that make the system worse while appearing more productive. You celebrate feature velocity while quietly accumulating architectural collapse.

4. You Will Reward Confidence Over Competence

In technical environments, there are engineers who explain complexity cautiously and engineers who promise simplification confidently. If you cannot evaluate the substance behind those positions, you will reward the confidence because it is easier to understand and more comforting to hear. The loud architect who claims “this is easy” will often outrank the quiet engineer who warns that the proposal will create long term fragility.

Over time, bad decisions institutionalise themselves. Real builders leave because their judgement is repeatedly overruled by narrative. Political performers remain because they are aligned with what leadership can recognise. The technical centre of gravity shifts from engineering to performance and display, and once that shift occurs it is extremely difficult to reverse.

4.1 Architecture Is Not Blockchain. Stop Voting on It.

There is a dangerous instinct among non technical leaders to democratise technology decisions. It feels fair. It feels inclusive. It feels like good governance. It is none of these things.

Technology architecture decisions are not like blockchain. There is no distributed consensus protocol that produces good system design. You cannot put an architecture to a vote and expect the result to be sound. Consensus in architecture does not produce quality. It produces compromise, and compromise in system design is how you end up with a monolith wearing a microservices costume.

Good architecture sits with the few. It always has. The people who can see failure modes before they materialise, who understand how coupling decisions made today will constrain optionality in three years, who can hold the full system topology in their heads while evaluating a proposed change. These people are rare. They are not the majority, and they should never need to be.

If you cannot immediately discern who these people are, that is not an excuse to default to democracy. It is a problem you must solve. You can look at track record. Who built the things that actually work? Who predicted the failures that eventually materialised? Who do the other strong engineers defer to when it matters? You can ask the people who seem to know. Genuine technical talent recognises other genuine technical talent with remarkable consistency. The engineers who understand the system will tell you who else understands the system if you ask them honestly and listen without filtering their answer through your own preferences.

What you cannot do is use “who do I get on with” as a proxy for technical authority. Rapport is not architecture. The person whose company you enjoy at lunch is not necessarily the person who should be making database partitioning decisions. In fact, the odds are reasonable that the person you need to trust with these decisions is someone you find difficult. They may be blunt. They may lack patience for ambiguity. They may not perform enthusiasm on demand or soften their assessments to make the room comfortable. That is not a deficiency. That is frequently what deep technical clarity looks like when it has not been sanded down by corporate socialisation.

Your job is not to find someone who makes architecture decisions and is also easy to manage. Your job is to find the person who makes the right architecture decisions and then do the leadership work around them. That means helping them evolve how they communicate without requiring them to dilute what they communicate. It means cushioning how their assessments land with the people you probably get on better with, the ones who find directness confronting. It means translating their clarity into language the room can absorb without asking them to do the translating themselves, because the moment you make that their job, you have redirected their energy from engineering to diplomacy, and you will get less of both.

The moment you make it the job of those who know to convince those who do not, you have inverted the burden of proof. You are asking the surgeon to justify the incision to the waiting room. The engineer who sees the correct path must now spend their energy selling it to people who lack the context to evaluate it, navigating politics, building slide decks, softening language, and managing egos. Their actual job, building things that work, becomes secondary to the performance of persuasion.

This is how you build an idiocracy. Not through malice, but through process. The smart people do not leave because they are angry. They leave because they are tired. Tired of explaining things that should not need explaining. Tired of watching inferior decisions win because they were presented more palatably. Tired of carrying the cognitive load of the system while simultaneously carrying the emotional load of convincing people who will never understand it.

And when they leave, they do not come back. The institutional knowledge walks out with them. What remains is a leadership structure perfectly optimised for consensus and utterly incapable of producing anything architecturally coherent.

So if you find yourself in a room where architecture is being decided by a show of hands, you have already failed. Your job is not to count votes. Your job is to identify the people who actually know, give them authority, back their judgement, and manage everyone else around that judgement. Not the other way around.

The few who know should be protected and empowered. The many who do not should be managed, guided, and kept from diluting decisions they are not equipped to make. That is not elitism. That is engineering.

5. You Will Torture Your Best People

When a non technical leader takes over a technology team, they will almost always find A players. Their track record is documented, their peers defer to them, their output is measurable, and their understanding of the system is encyclopaedic.

Management culture tells you to grow people, stretch them, and challenge them. That works when you understand the craft. When you do not, it becomes interference dressed up as development.

If you do not understand what your A players do, you have two options. You can support them and back them, which means protecting their time, removing obstacles, trusting their judgement on matters you cannot evaluate, and quietly taking credit while they build things that matter. Or you can second guess their architecture, impose frameworks on their process, redirect their priorities based on something you skimmed in a blog post, require justification for decisions you cannot interrogate, and surround them with governance rituals that treat excellence as a compliance risk. You will not improve them. You will exhaust them.

The strongest leaders recognise that their job with exceptional engineers is not to improve them but to protect them from everything that would prevent them from doing what they are already exceptional at, including protecting them from unnecessary leadership interference.

5.1 The Messi Test

If you were managing Lionel Messi at the peak of his career, would you try to make him a better footballer? Would you sit him down and say, “I think you should score more goals,” or “You should do cooler dances after goals,” or “You should do more of that overhead scissors stuff, it looks great on my Instagram”? Of course not. You would never say this, because you understand exactly what Messi would think: “What planet are you on?”

You would not attempt to coach the best player in the world on how to kick a ball. That would be delusion masquerading as leadership.

But you might help him in other ways. You might shield him from media noise so he can focus on performance. You might connect him with world class tax advisors so he does not learn about compliance through public scandal. You might create an environment where he can speak honestly about pressure, fear, and expectations without reputational risk. You might remove friction from his life so that his talent can compound. That is leadership.

Technologists are your Messis. I don’t mean they are expensive “rock stars” who should be worshipped, but they are highly skilled, highly trained engineers operating at the edge of complexity most executives cannot see, let alone master. The moment you start telling them how to “score more goals” in their domain, you lose credibility. The moment you start removing obstacles, clarifying intent, protecting focus, and supporting their growth as humans, you become useful.

Leadership is not about demonstrating your value. It is about increasing theirs.

6. Learn to See the World Through Their Eyes

A significant proportion of high performing technologists are wired for precision, depth, and pattern recognition in ways that do not always align neatly with corporate culture. Some sit somewhere on the autism spectrum. Many process imperfection as persistent cognitive noise. A brittle workaround in a codebase, a decision that feels architecturally wrong, a governance process quietly ignored, all of it remains present in their thinking. They are not being difficult. They are being accurate.

Corporate ambiguity, political signalling, and performative enthusiasm do not create alignment for these engineers. They create anxiety. Mixed messages do not feel strategic. They feel incoherent.

Good leaders regulate the room. They absorb noise, reduce ambiguity, speak plainly, and provide calm clarity. A simple, credible “we have this” from someone who understands the system can settle a mind that has been carrying too much context alone.

Poor leaders amplify the noise. They respond with more process, more reporting, more governance. The engineer leaves more dysregulated than they arrived.

Your job is not to fix them. It is to connect with them. Be explicit. Mean what you say. Offer precise recognition for precise contributions. Treat their way of thinking as an asset rather than a personality flaw. That is not a programme. It is leadership.

7. The Rub: Management Is Not Parenting

Here is the rub. Every one of us has had a boss. From the day we were born we were trained to be told what to do. Parents, teachers, coaches. We learned compliance before we learned autonomy. So in a strange way, everyone believes they understand management because everyone has been managed.

But there is a structural flaw in that analogy.

When you tell a child what to do, it is because you genuinely know better. You are larger, more experienced, more informed. The power gradient is justified. Authority is protective. Instruction is developmental. The child benefits precisely because the adult has superior context.

When you “manage” a technologist and you do not understand the domain, that gradient disappears. You are not the informed authority in the room. You are, in many ways, naked in the relationship. And naked authority is dangerous. It creates insecurity. Insecurity creates compensating behaviour. Some leaders respond by asserting dominance, prescribing solutions, forcing direction, or manufacturing certainty to soothe their own internal sense of being an imposter. Do not do this.

The moment you compensate for ignorance with control, you infantilise an adult expert. The relationship subtly shifts from adult to adult into adult to child. And technologists can feel it immediately. Respect erodes. Candour drops. Performance follows.

Instead, treat the relationship as adult to adult. That requires humility. Real humility, not performative modesty. Humility that says: “You know more than I do about this domain. My job is not to override you. My job is to create the conditions where your expertise compounds.”

Most corporates inadvertently filter out humble leaders because humility is harder to spot in an interview. It does not posture. It does not dominate airtime. It does not radiate artificial certainty. It can even be misread as weakness. It is not weakness. It is a superpower.

In complex technical environments, humility is the only posture that preserves credibility, unlocks trust, and allows expertise to surface without fear.

8. You Cannot Challenge Risk Without Understanding It

Technology is an infinite game. There is no finished state and no moment when risk disappears. Engineers need to be challenged on the risks they are taking, avoiding, and ignoring.

But you can only challenge meaningfully on risks you understand. Asking whether something is secure is not risk management. Asking what happens to the blast radius if a critical dependency fails before decoupling it is risk management. The difference is fluency.
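A blast radius question is answerable, concretely, from the dependency graph. The sketch below uses a hypothetical set of services (the names and edges are invented for illustration) to compute everything that transitively depends on a failing component, which is exactly the fluency the question demands:

```python
# Hypothetical service dependency graph: each service maps to the
# services it depends on. All names here are invented for illustration.
DEPENDS_ON = {
    "mobile-app":  ["api-gateway"],
    "api-gateway": ["payments", "identity"],
    "payments":    ["ledger"],
    "identity":    ["ledger"],
    "ledger":      [],
}

def blast_radius(failed: str) -> set:
    """Everything that transitively depends on the failed service."""
    affected = set()
    changed = True
    while changed:  # propagate until no new services are pulled in
        changed = False
        for svc, deps in DEPENDS_ON.items():
            if svc not in affected and (failed in deps or affected & set(deps)):
                affected.add(svc)
                changed = True
    return affected

print(sorted(blast_radius("ledger")))
# → ['api-gateway', 'identity', 'mobile-app', 'payments']
```

A leader who can read this graph can ask the meaningful question: which of these edges should be decoupled before the ledger becomes a single point of failure for everything above it?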

Technology teams need to be taught more than they need to be managed. The best leaders challenge from credibility.

8.1 Scaling Is Not Shovelling Coal

The reflex when things are going slowly is to hire more people, on the theory that more engineers typing at more keyboards produces more output. This is the coal and steam engine model of technology leadership: if you want more steam, shovel more coal. It is almost entirely wrong.

Almost every meaningful slowdown in a technology organisation is structural rather than headcount related. The system is badly architected, the deployment process is a labyrinth, teams are coupled to each other in ways nobody has fully mapped, and three approvals are required from people who are never available simultaneously. The platform was designed for a company one tenth the current size and nobody has rearchitected it. Adding more engineers to this environment does not accelerate delivery; it adds more coordination surfaces, more communication overhead, and more people who need to understand a system that was never properly documented in the first place.

Fred Brooks established this in 1975 in The Mythical Man-Month, observing that adding people to a late software project makes it later. Fifty years on, organisations still have not internalised it.
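The arithmetic behind Brooks's observation is simple: in a team where everyone must coordinate with everyone, pairwise communication channels grow quadratically, so each new hire adds more coordination overhead than the last. A minimal sketch:

```python
def communication_channels(team_size: int) -> int:
    """Pairwise communication channels in a fully connected team:
    n * (n - 1) / 2, the intuition behind Brooks's law."""
    return team_size * (team_size - 1) // 2

for n in (5, 10, 20, 50):
    print(f"{n:3d} people -> {communication_channels(n):5d} channels")
# 5 people share 10 channels; 50 people share 1225.
```

Doubling the team from 25 to 50 quadruples the coordination surface, which is why the shovelling more coal reflex so often produces less steam.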

Almost all of my meaningful productivity gains across a career have come from three activities: simplifying, rearchitecting, and decommissioning. In several engagements I have reduced team sizes by eighty to ninety percent through focused engineering effort, not through redundancy rounds, but because the complexity that justified those team sizes no longer existed. The work evaporated because the waste was removed, not because more people arrived to carry it.

Business leaders rarely reach for any of these three levers, because none of them are visible in the way that hiring is. Simplification produces no announcement. Rearchitecting takes time before it pays off. Decommissioning feels like destroying value even when the system being decommissioned is the thing burning the most of it. Hiring, by contrast, feels decisive and produces a headcount number that rises, a team that grows, and a credible impression of action.

The result is bloat, not in the pejorative sense of laziness or incompetence, but structural bloat. Layers of middle management are added to coordinate the people hired to solve problems that better engineering would have eliminated. Small pools of engineers are assigned to each layer and an elaborate coordination dance begins, with teams attempting to place assets into production across boundaries they did not design, through processes they did not write, requiring sign offs from people who were not part of the original conversation. The system slows further, more managers are added to explain the slowdown, the slides get denser, and the actual engineers spend progressively less time building anything.

There is nothing wrong with the people in this structure. The structure is the problem.

There is also a bias worth naming directly. Ask a business leader whether they would be comfortable reporting into a technologist and watch the nervous laugh. It surfaces something honest: the default assumption in most organisations is that technology is a support function, a delivery vehicle for business ideas, something to be managed rather than something that leads. That assumption shapes every resourcing decision that follows. If you believe technology is an execution arm, you staff it like one. If you understand that technology is the product, the risk surface, the cost structure, and increasingly the competitive differentiator, the entire calculus changes.

The most expensive thing in many technology organisations is not the engineers. It is the coordination overhead constructed to manage them, most of which exists because the underlying architecture was never properly simplified in the first place.

8.2 A Seating Move Is Not Progress

There is one thing I have never seen accompany a push to federate technology into business units, and that is a business case. Not a real one. Not one that commits to reducing the headcount of the central technology function as teams move out, improving product quality in measurable terms, accelerating time to market, or delivering a better client experience. Those outcomes are sometimes gestured at in conversation but they are never written down with numbers attached, never stress tested, and never tracked after the fact.

What actually happens is a long, sustained lobbying effort. Business leaders work on executives over months, sometimes years, making the case in corridors and leadership offsites and one on ones that they just want ownership, that they could move faster if they were not dependent on a central team, that their domain is unique enough to justify its own capability. The argument is almost always framed around autonomy and alignment rather than outcomes, because outcomes would require accountability and accountability would require the business case that nobody wants to write. Eventually the lobbying reaches a threshold and the seating move happens. Org charts are redrawn. Teams are transferred. Announcements are made about empowerment and closer alignment to the business.

Then the outages start. The platform that looked simple from the outside turns out to have dependencies that the embedded team did not fully understand. The shared services that the central function provided quietly and reliably are now either duplicated at significant cost or quietly still consumed while the team claims independence. The senior engineers who did not want to move find reasons to leave. Junior engineers discover that their new reporting line has no meaningful technology leadership above them. The business head who lobbied hardest for the change is notably quiet during the incidents, because the conversation has shifted from strategy to operations and that is not where they are most comfortable.

The people who argued loudest for federation are rarely held accountable when it does not deliver what they promised, partly because they never promised anything specific enough to be held to. A seating move that comes with no business case produces no basis for evaluation, and that is a feature of the approach, not an oversight.

8.3 The Centralised vs Federated Dance

Alongside the headcount reflex sits a structural one that operates on a longer cycle, roughly three to five years in most organisations, and it is just as predictable. It is the oscillation between centralised technology functions and federated ones embedded inside business units, and poor performing companies do it repeatedly without ever asking why they keep arriving back at the same problems from the opposite direction.

When technology is federated, the symptoms accumulate gradually and then all at once. Headcount expands because each business unit builds its own capability without reference to what anyone else is building. Delivery slows because teams are solving the same problems in parallel and nobody is accountable for the shared infrastructure underneath. Product intellectual property fragments across a dozen slightly different implementations. Outages begin to correlate in ways nobody predicted because the underlying platforms were never properly standardised. Eventually the organisation reaches a pain threshold and a decision is made: centralise. Put technology back together, eliminate duplication, create a shared platform, and impose some coherence on the chaos.

And then, after a few years of that, a different set of symptoms accumulates. The centralised function is accused of being slow, unresponsive, and too far from the business to understand what the business actually needs. Business leaders begin to argue, with genuine conviction, that they just want to own technology themselves so they can build a team aligned to their own priorities, responsive to their own roadmap, and invested in their own outcomes rather than a shared queue managed by someone who does not really understand their domain. The language of empowerment enters the conversation. Autonomy is positioned as the solution. And so the cycle turns again.

What neither state acknowledges is that both of them are wrong, or more precisely, that neither of them is the real problem. The real problem is that technology product teams sitting inside business units are almost never well looked after, well understood, or well led. The business leader who asked for them does not have the technical depth to develop them, challenge them, or protect them from the work that will slowly reduce them to order takers. The senior technologists in those embedded teams typically feel it within a year or two and want to move back to a technology reporting structure where they will be compared against peers, stretched by people who understand the craft, and given a career trajectory that makes sense. The weaker technologists, by contrast, are often quite comfortable in the federated model precisely because the lack of comparison works in their favour, and their performance tends to set their direction eventually regardless of their preference.

The leaders of those embedded teams occupy a particularly comfortable position that is worth examining honestly. Sitting inside a business unit, away from a central technology function, they are largely insulated from scrutiny about what good engineering actually looks like. There is no peer group holding up a mirror. There is no principal architect asking difficult questions about their design decisions. The business head they report to is usually grateful for the relationship and not equipped to push back on the technical substance. That comfort is real, but it comes at a cost that falls mostly on the junior technologists underneath them, who are poorly directed, working in a narrow domain with limited exposure to broader engineering practice, and facing a career runway that shortens the longer they stay.

The honest answer is that technology product teams should sit close to the business, but closeness is not the same as ownership, and ownership is not the same as being well led. The cycle will keep turning until organisations stop treating the reporting line as the variable that needs fixing and start asking the harder question about whether the people leading those teams, wherever they sit, actually understand what they are leading.

9. You Will Build a Human ETL Layer

When leaders cannot understand technology directly, they compensate by inserting translation layers. Middle management expands. Engineers are divided into smaller execution pools overseen by coordinators and programme managers whose primary function is to translate engineering reality into executive language and back again.

You create a human ETL pipeline. Engineers produce signal. Middle management extracts it, transforms it into narratives, and loads it into reporting decks, governance packs, quarterly reviews, and risk registers. The same underlying data is repackaged repeatedly, often at the last minute, into slightly different formats for different audiences.

A status update becomes a slide. The slide becomes a summary. The summary becomes a dashboard. The dashboard becomes a talking point. Each transformation distorts meaning.

Leadership overhead can approach the entirety of an engineer’s day. There are just enough managers to guarantee standstill, but also just enough structure to produce a convincing explanation for why five minute tasks take months. The slides appear dense with activity, yet they are often incoherent. If you trace a single initiative from idea to production, the cracks show and the house of cards becomes visible.

Movement replaces progress. Coordination replaces coherence.

10. You Will Reach for Redis

Eventually a performance issue will surface. Without technical depth, the reflex is to add something modern and powerful. Often that something is Redis.

A cache feels decisive. Add it, declare the issue addressed, move on.

Never do this blindly.

In fragile environments layered with historical hacks, adding another cache compounds opacity. Someone likely solved a similar problem years ago with an undocumented optimisation. Now you have multiple layers of state, unclear invalidation logic, and outages that are less frequent but more mysterious.

Performance issues are often structural. Poor data models, missing indexes, excessive coupling, and architectural shortcuts create systemic friction. Caching over structural weakness hides symptoms while deepening fragility.
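To make the invalidation point concrete, here is a minimal sketch, with an invented schema and a plain dictionary standing in for Redis, of how a read-through cache with no invalidation quietly serves stale state after a write:

```python
import sqlite3

# Illustrative sketch only: hypothetical schema, and a dict standing in
# for Redis. The point is the design gap, not the technology.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100)")

cache = {}  # the "decisive" cache someone bolted on

def get_balance(account_id):
    # Read-through cache: fast, but nothing ever invalidates it.
    if account_id in cache:
        return cache[account_id]
    row = conn.execute(
        "SELECT balance FROM accounts WHERE id = ?", (account_id,)
    ).fetchone()
    cache[account_id] = row[0]
    return row[0]

def set_balance(account_id, balance):
    # The write path knows nothing about the cache -- the classic gap.
    conn.execute(
        "UPDATE accounts SET balance = ? WHERE id = ?", (balance, account_id)
    )

print(get_balance(1))  # 100, loaded from the database and cached
set_balance(1, 50)     # the database now says 50
print(get_balance(1))  # still 100: the cache serves stale state

# The structural fix couples invalidation to the write (or removes the
# need for the cache by fixing the slow query itself):
cache.pop(1, None)
print(get_balance(1))  # 50
```

The stale read is not a Redis flaw. It is the write path and the cache layer being designed separately, which is exactly what happens when caches are layered onto systems nobody fully understands.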

I am speaking to you from a future world where mankind was destroyed by Redis caches. Not because Redis is flawed, but because leaders layered fixes onto systems they did not understand.

11. The HR Performance Management Trap

The most corrosive pattern appears when ignorance meets rigid HR systems. Deep engineering work is compressed into quarterly objectives as though innovation follows a payroll calendar. Goals are signed off ceremonially. Alignment is declared.

Within weeks, priorities shift. Engineers are told to pivot immediately.

Months later, those same goals reappear in reviews as if nothing changed. Leaders who have not read them in half a year use them as instruments of judgement. Engineers are assessed against objectives invalidated in the first week after signing.

You demand agility in delivery and rigidity in evaluation.

Then comes the request for detailed activity lists, because leadership is not close to the details and needs ammunition to make its case. Engineers reconstruct narratives to fit templates. Intellectual capital creation is replaced by artefact production.

12. Practical Do and Don’t Guide

If you are not technically fluent, the pattern is predictable. The guide below summarises the behaviours that separate responsible leadership from destructive interference.

Do: Learn enough to understand system design, failure modes, and architectural tradeoffs.
Don’t: Announce that you are not technical as if it is neutral.
Why: Ignorance in complex systems leads to misaligned incentives and fragile decisions.

Do: Protect A players by removing noise and shielding their time.
Don’t: “Develop” your best engineers by interfering in work you do not understand.
Why: Elite performers need space and cover, not amateur coaching.

Do: Identify technical authority by track record and peer recognition, then back their judgement.
Don’t: Use personal rapport as a proxy for who should make architecture decisions.
Why: The person you get on with is not necessarily the person who should be making partitioning decisions.

Do: Help your strongest technical minds evolve their communication style and cushion how their directness lands.
Don’t: Make it the job of those who know to convince those who do not.
Why: You redirect engineering energy into diplomacy and create an idiocracy where the smart people die off.

Do: Keep priority changes rare and explicit, and update goals when reality changes.
Don’t: Pivot constantly and then measure people against obsolete objectives.
Why: You cannot demand agility in execution and rigidity in evaluation.

Do: Stay close enough to the work to understand reality.
Don’t: Build a human ETL of middle managers to translate everything into slide decks.
Why: Translation layers create motion without progress and distort truth.

Do: Fix performance problems at the root.
Don’t: Add Redis or another cache reflexively.
Why: Additive fixes on top of structural weakness increase opacity and fragility.

Do: Be explicit, direct, and consistent in communication.
Don’t: Rely on ambiguity and political signalling.
Why: Precision-wired engineers interpret ambiguity as incoherence.

Do: Install real technical authority if you lack fluency.
Don’t: Appoint ceremonial technical leaders without power.
Why: Architecture by committee produces incoherent systems.

Do: Give architecture decisions to the few who know and manage everyone else around that judgement.
Don’t: Put architecture to a vote or seek consensus across people who lack the context to evaluate it.
Why: Consensus does not produce good architecture. It produces compromise that compounds into structural incoherence.

Do: Create the conditions for excellence.
Don’t: Mistake intervention for leadership.
Why: In complex systems, unnecessary intervention is usually negative value.

13. Consultants Will Smell You

When leadership cannot interrogate architecture, consultants shape the narrative. Platforms are sold instead of problems solved. Roadmaps are purchased instead of capability built. Without internal fluency, you cannot distinguish elegance from illusion.

14. Culture Will Decay

Technologists do not need their leaders to be the best engineers in the room, but they do need them to recognise quality. When leaders cannot distinguish good from bad engineering, excellence is not protected and mediocrity is not corrected.

High performers disengage first. The rest follow.

15. So What Should You Do

You have three options.

Learn. Build real fluency and challenge from credibility.

Install genuine technical authority and listen to it.

Or do not take the role.

The honest answer to how to manage technologists if you do not understand technology is simple.

You do not.

16. Conclusion: Don’t

Leadership is not domain agnostic. You would not manage surgeons without understanding anatomy or pilots without understanding aviation risk. You would not hire an unfit fitness instructor and expect the class to improve.

Software runs banks, hospitals, logistics networks, and defence systems. Technology teams do not need more managers. They need leaders who can teach, challenge intelligently, and provide cover for the right risks.

If you do not understand technology and do not intend to learn, the most responsible decision you can make is not to lead technologists.

In conclusion, don’t.

The Pilot Trap: Why Your AI Project Will Never See Production

Gartner says 40% of agentic AI projects will fail by 2027. I think they’re being optimistic.

Walk into almost any large enterprise right now and you’ll find the same scene: a glossy AI pilot, a proud press release, a steering committee meeting monthly to “track progress,” and an absolutely zero percent chance that any of it ever reaches production at scale. The pilot looks great in the boardroom deck. It just never seems to cross the finish line.

This isn’t bad luck. It’s a pattern. And it’s being driven by a perfect storm of vendor hype, institutional cowardice, and the oldest mistake in enterprise IT: automating a broken process and calling it transformation.

Let’s be honest about what’s actually happening.

1. The Vendors Are Misleading You

Not maliciously. Just commercially.

Every major cloud vendor, every AI platform company, every systems integrator with a freshly minted “AI practice” is telling you the same thing: their platform makes it easy to go from pilot to production. The demos are slick. The reference architectures look clean. The case studies are compelling, carefully selected, professionally written, and almost entirely devoid of the parts where things went wrong.

What they don’t tell you is that their platform is the easy part. The hard part is your organisation. And no vendor has a product that fixes that.

The AI pilot industrial complex has a vested interest in keeping you buying. Every pilot that doesn’t reach production is a renewal conversation, a new use case to explore, another workshop to run. The meter keeps running whether you ship or not. Meanwhile your actual security posture, your actual operational efficiency, your actual competitive position, none of that improves while you’re still running proof of concepts.

I’ve seen organisations spend two years and seven figures “exploring” AI capabilities that their competitors deployed in four months at a fraction of the budget. The gap between those two organisations isn’t technical. It’s not the model, it’s not the platform, it’s not the data. It’s the decision to actually finish something.

2. Your Governance Process Is Designed to Prevent Shipping

I want to be careful here because governance matters. In a regulated industry it matters a lot. But there is a version of enterprise governance that exists not to manage risk but to distribute blame, and it is absolutely lethal to getting AI into production.

You know the signs. The steering committee that meets fortnightly but can’t make a decision without a subcommittee review. The risk framework that was written for a different era of technology and gets applied wholesale to AI systems without any attempt to calibrate it to the actual risk profile. The legal team that blocks a deployment because nobody has specifically approved this use case before, even though the underlying risk is lower than a dozen things already running in production. The architecture review board that wants to discuss whether this is the right foundational model before they’ll sign off, as if model selection is more important than shipping.

These structures aren’t protecting your organisation. They’re protecting the people inside them. There is a meaningful difference between those two things.

Real governance asks: what are the actual risks here, what controls do we need, and how do we move forward safely? Performative governance asks: who else needs to be in this meeting before anyone can be held accountable for a decision? One of those gets AI into production. The other one generates excellent meeting minutes.

The organisations that are shipping AI at speed have not abandoned governance. They’ve redesigned it to match the pace of what they’re building. They have clear ownership, tight decision rights, and a bias toward controlled production deployment over extended piloting. They treat a well-instrumented production system as better risk management than an endlessly extended POC, because it is. You learn more about real risks from running something in production with proper monitoring than you ever will from a sandbox.

3. You’re Automating the Wrong Thing

This one is the most uncomfortable, because it’s an internal failure rather than something you can blame on a vendor or a governance committee.

The single most common reason AI pilots don’t reach production is that they were solving the wrong problem to begin with. Someone identified a process that looked automatable, stood up a pilot, got impressive demo results, and then discovered that the process was never well-defined enough to actually run without constant human intervention. Or the edge cases, which are trivial for a human and catastrophic for an agent, turn out to represent 30% of real-world volume. Or the data that looked clean in the pilot environment is a mess in production. Or the workflow the agent was designed for hasn’t been the actual workflow for six months, because it was already informally replaced by something else and nobody updated the documentation.

AI agents are brutally good at exposing process debt. Every vague step, every undocumented exception, every “we just know” piece of institutional knowledge, the agent will find it, fail on it, and wait for a human to tell it what to do. If your process isn’t clean before you automate it, you’re not building an AI system. You’re building an extremely expensive way to discover that your process is broken.

The pilots that work are built on processes that someone has already done the hard work of defining clearly. Not processes that seem like they should be automatable, but processes that actually are, because someone sat down and mapped every step, every exception, every decision point, before a single line of agent code was written.

At Capitec, the AI systems we’ve shipped into production weren’t picked because they were exciting. They were picked because the underlying process was well understood, the success criteria were unambiguous, and we knew exactly what good looked like before we started building. Boring criteria. Effective filter.

4. What Targeting Production Actually Looks Like

We made a deliberate choice to target production assets, not sandboxes. Not “innovation labs.” Not proof of concepts that live forever in a demo environment. Production assets. Real systems. Real clients.

We run realtime pen testing against our Cloudflare APIs in production, including chaining of API calls to test attack sequences the way an actual adversary would construct them, not just isolated endpoint checks. We do UX regression testing across thousands of mobile device configurations using Playwright MCP, BrowserStack and Claude, so we know with confidence when a release breaks something on a real device in the real world before a client finds it. We scan app telemetry in realtime when a client calls in, so the call centre agent who picks up your call knows, before they say hello, what the problem on the account is likely to be and what to do about it. The client experience changes completely when the person helping you already understands your situation.
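The difference between isolated endpoint checks and chained attack sequences can be sketched with a deliberately broken toy API. The endpoints, accounts, and flaw below are hypothetical inventions for illustration, not Capitec's actual systems:

```python
import secrets

class ToyApi:
    """In-memory stand-in for an API under test (hypothetical)."""
    def __init__(self):
        self.tokens = {}  # token -> user it was issued for
        self.passwords = {"alice": "a-secret", "mallory": "m-secret"}

    def request_reset(self, user):
        token = secrets.token_hex(8)
        self.tokens[token] = user
        return token

    def redeem(self, token, user, new_password):
        # BUG: the token's validity is checked, but not which user it
        # was issued for. No single-endpoint probe exposes this.
        if token not in self.tokens:
            return False
        self.passwords[user] = new_password
        return True

api = ToyApi()

# Isolated endpoint checks: each call behaves sensibly on its own.
assert api.request_reset("mallory")             # issues a token
assert not api.redeem("bogus", "alice", "x")    # rejects invalid tokens

# Chained sequence, built the way an adversary would: step 2 consumes
# the output of step 1, but aimed at a different account.
token = api.request_reset("mallory")
takeover = api.redeem(token, "alice", "pwned")
print("account takeover possible:", takeover)   # True: only the chain finds it
```

Each endpoint passes its own test; only the sequence reveals that the token is not bound to the account it was issued for. That is the class of flaw chained testing exists to catch.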

None of this is exotic technology. All of it required a genuine commitment to integrating AI into the way we actually deliver products, not the way we talk about delivering products. We had to change our entire persistence architecture to support realtime read offloading, giving the AI frameworks realtime access to production data without blocking write traffic.
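As a rough sketch of the read-offloading idea, with invented class names and a synchronous `sync()` standing in for real asynchronous replication: reads for the AI layer go to a replica, so they never contend with the primary's write path.

```python
# Illustrative sketch of read offloading (an assumed design, not
# Capitec's actual implementation). All names here are invented.

class Primary:
    """The write-side store: all mutations land here."""
    def __init__(self):
        self.rows = {}
    def write(self, key, value):
        self.rows[key] = value

class Replica:
    """A read-only copy, kept current by replication."""
    def __init__(self, primary):
        self.primary = primary
        self.rows = {}
    def sync(self):
        # Stand-in for asynchronous replication.
        self.rows = dict(self.primary.rows)
    def read(self, key):
        return self.rows.get(key)

class Router:
    """Route writes to the primary, offloaded reads to the replica."""
    def __init__(self, primary, replica):
        self.primary, self.replica = primary, replica
    def write(self, key, value):
        self.primary.write(key, value)
    def read_offloaded(self, key):
        # AI/analytics traffic never touches the primary's write path.
        return self.replica.read(key)

primary = Primary()
replica = Replica(primary)
router = Router(primary, replica)

router.write("txn:1", {"amount": 100})
replica.sync()
print(router.read_offloaded("txn:1"))  # {'amount': 100}
```

The design choice being illustrated is the separation itself: heavy, continuous AI reads land on a copy of the data, so write throughput on the primary is unaffected, at the cost of accepting some replication lag on the read side.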

That is the distinction most organisations are missing. They are treating AI as a capability to be evaluated, when it is actually a structural change to how you build and operate. You don’t add AI to your existing delivery model and get the benefit. You have to reset how you work, how your teams are organised, how your processes run, and how your people think about what they’re building. That reset is uncomfortable. It requires people to let go of patterns that have worked for years. It requires leaders to be genuinely open to operating differently, not just open to the idea of it in principle.

5. The Cost of Staying in Pilot

Here’s what the pilot forever strategy is actually costing you, in concrete terms.

Every month your AI security tooling stays in pilot is another month your security team is doing manually what could be running continuously and automatically. Every endpoint not being continuously tested is a potential gap in your posture. Every compromised client device that takes hours to detect instead of seconds is a window where real money can move.

The competitive arithmetic is straightforward and it isn’t in your favour. The organisations that shipped six months ago are now running second generation systems, refining models on production data, building operational muscle around how to work with AI agents effectively. You’re still in the steering committee meeting. The gap isn’t staying constant. It’s compounding.

There’s also a talent cost that doesn’t appear on any project budget. Your best engineers know the difference between an organisation that ships and one that pilots. They are watching. The ones who want to build real systems, and those are exactly the ones you most want to keep, will eventually conclude that they can build more interesting things somewhere else. A culture of perpetual piloting is a slow way to lose the people who would have helped you get out of it.

And there is a credibility cost. Every AI initiative that gets announced, piloted, and quietly shelved makes the next one harder to fund, harder to staff, and harder to get through governance. You are spending credibility you will eventually need.

6. What Actually Gets You to Production

Stop piloting things you’re not committed to shipping. This sounds obvious. It isn’t, apparently.

Before you start a pilot, answer three questions with actual specificity. What does production look like: what system, what scale, what integration points, what go-live date? What would cause you not to ship: name the actual criteria, not vague concerns about risk. Who owns the production decision, and what do they need to see to make it?

If you can’t answer those questions before you start, you don’t have a pilot. You have a research project with a vendor’s billing address attached to it.

Fix your governance before you start your next pilot, not during it. Define who makes the production decision. Define what they need to see. Define the timeline. Write it down before anyone writes a line of code. If your governance process can’t accommodate a production decision in under three months for a well scoped AI system, the governance process is the problem.

And be honest with yourself about whether you’re in pilot because the technology isn’t ready or because your organisation isn’t ready. Those are different problems. The first one has a technical solution. The second one requires someone with authority to make a decision that probably makes some people uncomfortable.

Washing your AI capability through governance theatre and letting it degrade into RPA 2.0 is not risk management. It’s a choice to waste one of the most significant technological shifts in a generation. There is an IP goldmine sitting inside every organisation that has real data, real processes, and real clients. Most are burying it under committee reviews and vendor dependency.

AI is not a toy. It is not a vendor’s gift. It is not a feature you add to your product. It is a structural change to the way you build and deliver. Until you understand that and reset accordingly, you will keep piloting. You will keep presenting. And your competitors who figured it out will keep shipping.

Go study. Go deliver.

Andrew Baker is CIO at Capitec Bank. He writes about enterprise architecture, cloud infrastructure, banking technology, and the gap between how technology is talked about and how it actually gets built.

Stop Selling Hampers: Why Enterprise Software Tiering Is a Self-Defeating Strategy

By Andrew Baker, CIO at Capitec Bank

There is a category of enterprise technology vendor whose approach to pricing is so fundamentally at odds with how purchasing decisions actually get made that it borders on self-defeating. Their commercial model is built on access gates, bundled tiers, and a deeply held belief that controlling what a customer can see before they pay is a form of leverage. It is not. It is anti-sales dressed up as a pricing strategy, and a generational reckoning is coming for every vendor that has not figured this out yet.

1. The Legacy Pricing Trap

You know the vendors I mean even if I will not name them here. Their pricing model is built on tiers, each one separated from the next by a gap that costs companies millions of dollars to cross. The product catalogue is divided up such that anything genuinely interesting sits in a tier that requires a significant commercial commitment before you can touch it. You cannot experiment with it. You cannot build intuition about it. You cannot develop the informed advocacy that would eventually lead your organisation to invest in it properly. The gate comes before the experience, and the gate is expensive.

2. The Bundle Illusion

2.1 The Christmas Hamper Problem

Buying enterprise software is nothing like buying a Christmas hamper. With a hamper you are happy saying you are buying “stuff”. You do not need to know what is in it or whether you will use every item. Enterprise technology does not work like that. You cannot tell your board you are buying “stuff” from vendor X, and you cannot tell your CISO that the solution to your identity management problem is definitely somewhere in the tier you just committed millions to. Every product needs to be evaluated, understood, and justified against a specific problem.

Yet the hamper model is precisely what tiered pricing enforces. Inside the bundle are products you actively use, products you decided were not fit for purpose, products you never knew existed until after signing, and products duplicated by something you already own. The vendor’s position is that the bundle is worth the price regardless of how much you consume. They are so magnificent, apparently, that clients should simply pay for the option to use the software.

2.2 The Duplicate Purchase Problem

I have watched this play out repeatedly. Organisations on a pricing tier that included a fully capable security product, who had gone out and bought the same capability from a competitor anyway, because nobody knew the bundled product was there, or had no basis for trusting it when a real problem arrived. The free tier philosophy exists precisely to prevent this. If engineers can experience a product before a commercial commitment is made, they build the knowledge and trust that makes the bundled product the obvious choice rather than the invisible one.

2.3 The Tier Reclassification Trap

It gets worse. These same vendors have a habit of quietly moving products that sit in your current tier up into a higher tier at contract renewal. The feature set you budgeted for and built processes around is now in the next tier up, and the message is clear: pay more or lose capability. The commercial logic from the vendor’s side is understandable in the narrow short-term sense. The strategic damage is severe and largely invisible until it is too late to reverse.

The practical consequence is that customers become genuinely reluctant to adopt new products within their current tier, even when those products are included and theoretically free to use. The rational response to a vendor that periodically reclassifies features upward is to avoid becoming dependent on anything you are not paying explicitly to retain. So the products sit unused. The integration work does not happen. The institutional knowledge does not develop. And the vendor wonders why adoption of their broader portfolio is lower than the addressable opportunity suggests it should be.

2.4 The Refresh Con

The vendor’s response to low adoption of bundled products is not to ask why included products go unused or what that says about product accessibility. It is to restructure the bundle, move items between tiers, throw out the tins of expired spam, and keep the price the same.

The hamper is still worth the same, apparently.

3. The Opposite Problem: Infinite SKU Proliferation

If the hamper vendor’s sin is bundling everything together and hiding it behind a price wall, there is a mirror image failure mode that deserves equal scrutiny: vendors who partition their product so aggressively that the act of purchasing it becomes a liability.

3.1 Everything Is a New Product

Some vendors have discovered that any incremental capability, no matter how basic, can be packaged as a distinct product with its own SKU, its own login portal, its own contract, and its own renewal cycle. Add basic monitoring dashboards to your core platform? That is a new product. Ship a critical security feature that any reasonable customer would consider table stakes? That is an optional add-on. Need audit logging? That will be a separate line item. Need API access to your own data? That is the enterprise tier.

The motivation is understandable from a short-term revenue perspective. Every new product creates a new upsell conversation. Every capability withheld is a future negotiation. But the cumulative effect is a catalogue so fragmented that no single person in your organisation, including the vendor’s own account team, fully understands what a customer has and has not purchased at any given point.

3.2 The Open Source Repackaging Con

A particularly cynical variant of this approach involves vendors who take well-maintained open source projects, wrap a thin commercial layer around them, and sell them back to enterprise customers as premium products. The underlying technology is freely available on GitHub. The vendor has added a logo, a billing system, and perhaps a support contract of debatable quality. The customer, often a non-technical procurement team acting on a vendor briefing, has no idea they are paying a significant annual fee for software they could have deployed themselves.

This is not innovation. It is arbitrage on organisational complexity, and it relies entirely on the purchasing side lacking the technical depth to identify what they are actually buying. When the technical teams find out, the reputational damage to the vendor is significant and difficult to recover from. Trust in enterprise software relationships, once broken by this kind of discovery, rarely fully repairs.

3.3 Harmful Product Combinations

The most operationally dangerous consequence of extreme product partitioning is that it becomes possible for customers to purchase combinations of products that are genuinely inadequate for the problem they are trying to solve. This is not a theoretical risk. It happens routinely when vendors slice their offering finely enough that no single package provides end-to-end coverage of a real-world use case.

This problem reaches its peak severity in the security sector. A customer buys a security product from a well-known vendor. They do a reasonable job of evaluating it. They sign a contract. They deploy it. And then something bad happens, and it turns out that the detection capability, the response capability, and the alerting capability they assumed were part of what they bought are actually three separate products, two of which they did not purchase.

The client will, entirely reasonably, point out that they believed they had bought protection. The vendor will point to the contract language. Both are correct and neither is useful. The outcome is a breach that the product was theoretically capable of preventing but practically did not, because the customer was sold a component when they needed a system.

Security vendors bear a particular responsibility here because the asymmetry of consequence is so severe. A misconfigured analytics platform costs you insight. A misconfigured or incomplete security product costs you everything. Vendors who knowingly design their packaging such that customers can inadvertently purchase inadequate protection are not just making a commercial miscalculation. They are making an ethical one.

3.4 The Fragmentation Tax

Beyond the security risk, extreme product partitioning imposes a significant and largely invisible operational burden on the customer. Multiple contracts. Multiple renewal dates. Multiple login portals. Multiple support relationships. Multiple sets of administrators. Multiple training requirements. The total cost of ownership of a fragmented product suite is consistently higher than any single-vendor analysis will suggest, because the integration costs, the context switching costs, and the organisational overhead of managing dozens of micro-relationships with the same vendor are distributed across departments and never appear in a single budget line.

The vendor sees this as customer stickiness. The customer eventually sees it as a hostage situation.

4. A Generational Reckoning

The generational problem this creates is profound and slow-moving enough that most of these vendors will not feel it until it is structurally very difficult to address. The technology leaders who bought into these platforms did so in an era where the vendor had sufficient market leverage to make the tier-based model stick. Those leaders are gradually retiring. The generation replacing them grew up with AWS free tier, with Cloudflare free tier, with open source everything, with the expectation that you experience a product before you commit to it. They have spent their formative professional years building instincts on platforms that trusted them with real capability before asking for money.

When those leaders sit across a procurement table from a vendor whose pitch begins with a multi-million dollar tier commitment required just to evaluate the relevant product set, the cultural mismatch is immediate and significant. It is not just a price objection. It is a philosophical incompatibility with how they believe technology decisions should be made. And unlike the previous generation of buyers who perhaps had fewer alternatives, this generation has grown up with genuinely competitive options that do not impose the same barriers.

The vendors who have built their commercial model on tier-based access restrictions have a window to adapt. That window is not permanent. Every year that passes without meaningful change to how they allow potential customers to experience their products is another cohort of future decision makers building their instincts and loyalties elsewhere.

The root cause of this strategic blindness is that there is no metric for the depth of understanding your product has in the marketplace. You cannot put it in a board report. You cannot trend it quarter on quarter. You cannot attribute it to a campaign or a sales motion. And because it is unmeasurable, short-sighted technology companies convince themselves the gap can be bridged through other means. So they invest in snappy Gartner acronyms that reframe existing capability as visionary innovation, and they deploy fleets of well-heeled sales teams whose job is to manufacture urgency and compress evaluation cycles before the prospect has time to develop genuine product intuition. It works, right up until it does not. The deal closes but the understanding never develops, and without understanding there is no organic advocacy, no internal champion who truly believes in the platform, and no resilience when the product disappoints.

Technology companies run by engineers tend to understand this instinctively. Engineers know what it means to learn by doing. They know the difference between reading documentation and actually building something. They know that genuine conviction about a technology comes from hands-on experience and cannot be manufactured by a sales process however well resourced. When engineers run product strategy, accessible pricing and free tier investment make intuitive sense because they have lived the experience themselves. When the company is run primarily by people whose mental model of selling is about controlling access and extracting maximum value at each gate, the free tier looks like revenue left on the table rather than pipeline being built. That framing error is expensive, and it compounds over time in ways that do not show up in any dashboard until the generational shift is already well underway.

5. Conclusion

The technology industry has a persistent tendency to over-invest in the proxies for value rather than value itself. Brand recognition, analyst rankings, conference presence, and content marketing are all proxies. They describe a product from a distance. They cannot substitute for the experience of building with it, debugging it at midnight, watching it absorb an attack, or navigating an outage with enough architectural understanding to maintain composure.

The vendors still operating on the hamper model are not just leaving pipeline on the table. They are actively training the next generation of decision makers to build loyalty elsewhere. The vendors operating on the infinite SKU model are doing something arguably worse: they are selling customers the illusion of capability without the substance of it, and in the security domain that distinction carries consequences that no contract clause can fully mitigate.

The ask is simple. Stop selling hampers. Stop selling fragments. Start selling products. Price them in a way that lets people touch them before they commit to them. Package them in a way that ensures a customer who buys your security product actually has security. Trust that what you have built is good enough to demonstrate its own value. If it is not, that is the more important problem to solve.

Andrew Baker is the Chief Information Officer at Capitec Bank in South Africa. He writes about enterprise architecture, cloud infrastructure, banking technology, and leadership at andrewbaker.ninja.

The Quiet Power of Free Tier: Why Cloudflare Gets It Right

By Andrew Baker, CIO at Capitec Bank

There is a truth that most technology vendors either do not understand or choose to ignore: the best sales pitch you will ever make is letting someone use your product for free. Not a watered-down demo, not a 14-day trial that expires before anyone has figured out the interface, but a genuinely generous free tier that lets people build real things and solve real problems. Cloudflare understands this better than almost anyone in the industry right now, and it has made me a genuine advocate in a way that no amount of marketing spend ever could.

1. How I Found Cloudflare and Almost Lost It

My journey with Cloudflare did not begin with enthusiasm. It began at Capitec, where I was evaluating infrastructure and security platforms at institutional scale. My initial view of Cloudflare was limited: it was a CDN with an API gateway capability, useful, but not architecturally differentiated in any meaningful way from competing options. My awareness of what genuinely set it apart was low.

The concerns I had at that stage were squarely enterprise concerns. The lack of private peering between Cloudflare and AWS in South Africa was a meaningful issue for Capitec specifically. For a major retail bank operating in this market, network latency, peering, and routing are not abstract considerations. They are hard requirements. The absence of a direct peering arrangement had me questioning whether Cloudflare could credibly serve the needs of a bank with millions of active customers.

Then came a series of outages in 2025. Any one of those incidents in isolation might have been forgivable, but cumulatively they put Cloudflare in a difficult position. For a platform whose core value proposition is reliability and availability, sustained turbulence shakes confidence.

What changed my perspective was not a sales conversation or an analyst briefing. It was personal experimentation. I started using Cloudflare for andrewbaker.ninja, my personal blog, after joining Capitec. That hands-on use opened up a completely different view of the platform. What I had evaluated as a CDN with an API gateway was actually something far more capable. I discovered R2, Cloudflare’s object storage offering. I worked through Workers in depth. I started building real functionality at the edge, not just routing traffic through it. Most significantly, our team began using Cloudflare Workers to create custom malware signals and block traffic based on behavioural patterns, turning what I had thought of as a passive network layer into an active security enforcement point.

That is the moment the evaluation changed. The peering concerns and the stability questions remained live issues, but I now had genuine product depth that allowed me to weigh them against a much clearer picture of Cloudflare’s architectural differentiation. That picture came entirely from free tier experimentation on a personal blog. It could not have come from a sales deck.

2. What Cloudflare Actually Gives You for Free

The Cloudflare free tier is, frankly, extraordinary. When I first started using it for andrewbaker.ninja, I expected the usual pattern: enough capability to see the shape of the product, but with enough gates and limits to push you toward a paid plan. What I found instead was a comprehensive platform that covers almost every dimension of modern web security and performance at zero cost.

2.1 Security and Performance at the Edge

The foundation of the free tier is unmetered DDoS mitigation. Not capped, not throttled after a threshold, unmetered. For a personal blog or small business site, volumetric attacks are existential threats, and the fact that Cloudflare absorbs them at no cost is a remarkable statement of confidence in their own network scale. Sitting on top of that is a global CDN spanning over 300 cities, with free tier users on the same edge infrastructure as enterprise customers. SSL is automated, free, and renews without any manual intervention, making the secure default the effortless default. Five managed WAF rules covering the most critical OWASP categories are included, along with basic bot protection that handles the constant noise floor of scrapers, credential stuffers, and scanning bots that any public site attracts.

Caching deserves particular attention because for anyone running on a low end AWS instance type, and most personal blogs do exactly that, it is not a nice to have. It is life or death for the origin server. A t3.micro or t4g.small running WordPress has a hard ceiling. Under normal traffic patterns it holds up, but a post shared on LinkedIn with any momentum or picked up by a newsletter will send concurrent requests that a small instance simply cannot absorb. With Cloudflare caching absorbing the majority of that traffic, the origin barely notices the spike. I have watched this play out against andrewbaker.ninja more than once. The cache hit ratio in the analytics dashboard tells the story clearly: the origin handles a fraction of total requests while Cloudflare absorbs the rest. That is an availability and cost story simultaneously. Cache rules, custom TTLs, per-URL purging, and intelligent handling of query strings and cookies are all available on the free tier, giving you a degree of control that is not normally associated with a free offering.
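The origin relief described above is simple arithmetic, and it is worth making concrete. The sketch below uses hypothetical numbers (a 200 requests-per-second spike and a 90 percent cache hit ratio), not measurements from andrewbaker.ninja:

```javascript
// Illustrative arithmetic only: the numbers are hypothetical, not taken
// from the blog's analytics dashboard.
function originLoad(totalRps, cacheHitRatio) {
  // Requests per second the origin must serve after the edge cache
  // absorbs the hits.
  return totalRps * (1 - cacheHitRatio);
}

const spike = 200; // an assumed LinkedIn-driven traffic spike, req/s

const uncached = originLoad(spike, 0);  // all 200 req/s reach the origin
const cached = originLoad(spike, 0.9);  // roughly 20 req/s at a 90% hit ratio
```

Under these assumed numbers, a small instance that would saturate at the full spike serves a tenth of it, which is the availability argument and the cost argument in a single line.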

2.2 Developer Capability and Operational Visibility

Beyond security and performance, the free tier extends into territory that genuinely surprises. Workers gives you serverless compute at the edge with 100,000 requests per day included, which is more than enough to build meaningful functionality: request transformation, custom authentication flows, A/B testing, and API proxying. In our case, it became a platform for building custom malware detection signals and traffic blocking logic that goes well beyond what a conventional WAF configuration could achieve. Cloudflare Pages adds free static site hosting with unlimited bandwidth and up to 500 builds per month, competitive with the best JAMstack platforms. DNS management sits on infrastructure widely regarded as the fastest authoritative DNS in the world, with DNSSEC and a clean management interface included at no cost.
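The essay does not describe the actual signals our team built, so the sketch below is a hypothetical, simplified version of the pattern: a Worker-style handler that scores a request on crude behavioural signals and blocks anything over a threshold. The function names, signals, and threshold are all illustrative assumptions, not the production logic.

```javascript
// Hypothetical behavioural scoring; the real signals referenced in the
// essay are not public. In an actual Worker this logic would sit inside
// an `export default { fetch }` handler and return a 403 Response for
// blocked requests.
function scoreRequest(headers, path) {
  let score = 0;
  const ua = (headers["user-agent"] || "").toLowerCase();
  if (ua === "") score += 2;                              // missing User-Agent
  if (/curl|python-requests|scrapy/.test(ua)) score += 1; // common tooling
  if (/\/wp-login|\/xmlrpc\.php/.test(path)) score += 2;  // probe paths
  return score;
}

function handle(headers, path) {
  // Anything at or above the (assumed) threshold is blocked at the edge;
  // everything else passes through to the origin.
  return scoreRequest(headers, path) >= 3 ? "block" : "pass";
}
```

The point of running even logic this crude at the edge is that blocked traffic never consumes origin capacity, which is precisely the property that turns a passive network layer into an active enforcement point.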

The analytics layer is where Cloudflare makes a particularly interesting choice. Rather than gating visibility behind paid plans to obscure the value being delivered, the free tier shows you everything: requests, bandwidth, cache hit ratios, threats blocked by type, geographic traffic distribution, and real user Web Vitals data including Largest Contentful Paint and Cumulative Layout Shift from actual visitor sessions. For andrewbaker.ninja, the geographic breakdown alone was genuinely new information that shaped content decisions. Seeing threats blocked in real time makes the protection layer concrete rather than theoretical. Zero Trust Access rounds out the free offering with up to 50 users, giving hands-on experience with a ZTNA model that enterprise vendors charge significant per-user premiums to access.

One area where I would encourage Cloudflare to go further is 404 error tracking, which currently sits behind paid plans. A limited version tracking errors for just a handful of pages would cost them very little while giving free tier users a direct experience of the capability. The broader principle I would advocate is that every service in the Cloudflare catalogue should have at least a small free window. Exposure drives understanding, understanding drives advocacy, and advocacy drives enterprise pipeline far more reliably than any campaign.

3. The Strategic Value of Free Tier as a Leadership Development Tool

Let me be direct about what actually happened here. Cloudflare was already on my radar at Capitec, evaluated cautiously and with real reservations. What the free tier did was deepen my product knowledge far beyond what any enterprise evaluation process produces. I moved from understanding Cloudflare as a CDN with an API gateway to understanding it as a programmable edge platform with genuine security enforcement capability. That shift happened entirely through personal experimentation, at zero cost to Cloudflare beyond the infrastructure they were already running.

No sales team call produced that outcome. No analyst briefing, no conference sponsorship, no whitepaper. A free tier account for a personal blog did.

This is not a coincidence or a lucky edge case. It is the mechanism by which free tier compounds in value over time in ways that are almost impossible to model but entirely real. The person experimenting with your product on a side project today is accumulating product knowledge that travels with them across every context in which they operate, personal and professional simultaneously. When that person holds senior leadership responsibility, the intuitions built through free tier experimentation inform how they frame requirements, assess vendor claims, and evaluate architectural trade-offs. Crucially, that knowledge also provides resilience when a platform goes through a difficult period. I stayed with Cloudflare through the 2025 stability issues not because of a reassuring account manager call but because my own hands-on depth gave me enough architectural confidence to make an informed judgment rather than a reactive one.

The same pattern holds with AWS. My understanding of AWS architecture was built significantly through free tier experimentation. The 12 months of free tier access that AWS provides across a substantial catalogue of services is one of the smartest investments they have made in their developer ecosystem. My seven AWS certifications represent formal validation of knowledge that was built largely through hands-on experimentation the free tier enabled. When I evaluate AWS proposals at Capitec or advocate for specific AWS architectural patterns, that credibility traces back to free tier experience. No marketing budget produces that outcome.

Free tier products are, in effect, a leadership development programme that technology vendors run at their own expense. Every future CIO, CTO, or technology decision maker working their way up through an organisation is building instincts and preferences right now through the products they can access and experiment with freely. The vendors who understand this invest in those experiences. The vendors who do not are optimising for short-term revenue extraction at the cost of long-term pipeline development.

4. The Slack Cautionary Tale

Slack represents the opposite lesson, and it is worth examining honestly.

I used Slack’s free tier heavily for years. Across multiple communities, interest groups, and peer networks, Slack was the default platform precisely because the free tier was generous enough to make it viable for groups that could not or would not pay. It was through this extensive free tier use that I developed deep familiarity with the product, its integrations, its workflow automation capabilities, and its organisational model. That familiarity translated directly into Slack advocacy in enterprise contexts.

Then came a series of changes to the free tier. Message history limits became more restrictive. Integration constraints tightened. The experience of being a free tier user shifted from feeling like a valued participant in the platform ecosystem to feeling like someone being actively nudged toward payment.

The result was not that the communities I participated in upgraded to paid Slack. The result was that those communities moved to other platforms. Discord absorbed many of them. Some moved to Microsoft Teams. Others fragmented across different tools. In most cases the community did not reconstitute on Slack at a paid tier. It simply left.

The downstream consequence for Salesforce, which acquired Slack for approximately 27.7 billion dollars, is a meaningful erosion of exactly the pipeline that free tier usage was building. Every community organiser, technology professional, and business leader who built their Slack intuitions through free tier usage and then migrated to an alternative platform is now building comparable depth of knowledge on a competing product. The future enterprise purchasing decisions of those individuals will reflect that. Slack did not just lose free tier users. It cut off future sales pipeline development at the roots.

This is a cautionary tale that should sit prominently in the strategic planning conversations of any technology company considering changes to their free tier offering. The immediate revenue signal from restricting free tier is misleading. The long-term signal, which is harder to measure and slower to manifest, is the erosion of informed advocacy and the diversion of future decision makers toward alternatives.

5. Rethinking the Marketing Mix

I hold a view that is probably uncomfortable for most marketing organisations: technology companies should meaningfully reduce marketing spend in favour of free tier investment.

I understand why this is a hard argument to make internally. Marketing spend produces attributable metrics. Pipeline influenced, leads generated, impressions delivered. Free tier investment produces outcomes that are diffuse, long horizon, and resistant to attribution. The CIO who advocates for your platform in a 2028 procurement decision because they built something meaningful with your free tier in 2024 is almost impossible to trace back to that original free tier investment in any marketing analytics framework.

But the influence is real and it is durable in a way that no campaign achieves. You can say anything you want about a product through marketing. You can claim reliability, performance, security posture, developer experience, and operational simplicity until every available channel is saturated. None of it carries the weight of having used the product yourself, watched it perform under real conditions, seen it recover from real failures, and built genuine intuition about its architectural strengths and constraints.

There is also a fundamental misunderstanding embedded in how many enterprise technology vendors think about who actually buys their products. Most enterprise software is not bought by lawyers or sourcing teams. It is bought by engineers. Sourcing teams negotiate contracts and lawyers review them, but the decision about which platform gets shortlisted, which architecture gets proposed to leadership, and which vendor gets championed internally is made by the technical people who will live with the choice. Those people make their recommendations based on product knowledge, hands-on experience, and the intuition that comes from having actually built something with the technology. Embedding that knowledge in the market is not a nice to have. It is the primary sales motion, whether vendors recognise it or not. Every engineer who has meaningful free tier experience with your product is a potential internal champion in a future procurement cycle. Every engineer who has never touched your product, because the access gate was too high, is not.

Cloudflare has clearly internalised this. Their free tier is not a reluctant concession to market norms. It is a deliberate investment in developing the next generation of platform advocates. The breadth of capability they make available at no cost, spanning network security, edge compute, DNS, analytics, and Zero Trust access, reflects a confidence that the product will demonstrate its own value to the people who use it. That confidence is justified. It worked on me, though not in the way a typical marketing funnel would predict or model.

6. Conclusion

Free tier products close the distance between description and experience. They are the most honest form of marketing because they are not marketing at all. They are just the product, made accessible.

For Cloudflare, the free tier fundamentally changed how I understand the platform. I came in seeing a CDN with an API gateway. Personal experimentation with Workers, R2, and custom edge security logic revealed an architecture that is genuinely differentiated. The enterprise concerns around peering and the 2025 stability issues remained real, but the product depth I had built through free tier use meant those concerns could be weighed against a much clearer picture of what Cloudflare actually is at a platform level. That is a completely different evaluation from the one I would have made without it.

For Slack, the contraction of free tier generosity has had the opposite effect, redirecting communities and the professional development of their members toward competing platforms in ways that will compound as career trajectories advance.

The lesson is straightforward even if the organisational will to act on it is not. Invest in free tiers. Invest generously. The future pipeline you are building is less visible than the one your sales team can point to today, but it is deeper, more durable, and ultimately more valuable. Let people experience your product. Trust that it is good enough to speak for itself. If it is not, that is the more important problem to solve.


Andrew Baker is the Chief Information Officer at Capitec Bank in South Africa. He writes about enterprise architecture, cloud infrastructure, banking technology, and leadership at andrewbaker.ninja.

The Futility of Corporate Heckling

There is a peculiar sport played in large organisations. It looks like leadership and sounds like governance, hiding behind frameworks, maturity models, and operating rhythms. But in reality it is something far less noble. It is corporate heckling.

Corporate heckling is what happens when a function narrates from the sidelines with low context and high confidence. It is the art of describing how everyone else should work, without ever carrying the burden of delivering anything meaningful yourself. It is commentary at scale, divorced from the trade offs, constraints, technical debt, regulatory nuance, customer friction, and immovable deadlines that real teams wrestle with every single day, and it achieves nothing.

1. The Sideline Narrator

In every enterprise there are functions that slowly drift from enablement into commentary. They attend steering forums, publish decks, rate “maturity,” explain what good looks like, and diagnose other teams from a safe distance.

The pattern is predictable. They have all the answers. The delivery teams are immature. The architecture is fragmented. The engineers lack discipline. The product managers do not think strategically enough. What is rarely true is that these narrators have taken the time to sit inside those teams and understand why things look the way they do. They have not traced the historical decisions, absorbed the regulatory constraints, sat through production incidents at midnight, or tried to ship a feature under a hard date with incomplete information and an unpredictable dependency. Without context, ideas sound elegant. With context, they often collapse, because heckling is easy and delivery is not.

2. Enterprise Architecture and the Illusion of Superiority

Enterprise architecture is particularly susceptible to this disease.

At its best, architecture creates clarity of domain boundaries, coherent principles, and composable building blocks that make delivery faster and safer. It reduces cognitive load, prevents duplication, and enables teams to move independently without creating chaos. It is an accelerant.

At its worst, it becomes a commentary layer. A slideware practice that declares teams immature while producing nothing that reduces friction. A group that standardises vocabulary but never simplifies reality. A function that critiques design decisions without being present when those decisions were constrained into existence.

If architecture only appears to say no, redraw diagrams, and demand alignment to abstract models, it is heckling. If architects are not embedded in discovery, not present in solution design, and not accountable for operability and performance once the system is live, then they are spectators with strong opinions. Maturity is not declared in a deck. It is built in code, in platforms, in patterns that actually make delivery easier, and it is built by people who carry outcomes rather than narratives.

3. UX/CX Without Skin in the Game

Design is not a handover ceremony. It is not a file labelled “approved” thrown over a wall to developers who are then expected to faithfully reproduce a static imagination inside a living system. If UX is not debating constraints with engineers during build, not tensioning performance against animation or accessibility against layout, not fighting for clarity of copy, then it is participating in theatre.

When the shipped experience differs from the prototype, it is rarely because engineers are careless. It is usually because trade offs were made under time, performance, or integration pressure, and if design is not present when those trade offs are made, it forfeits the right to be surprised by the outcome. Teams solve problems every day while constraints shift, dependencies fail, budgets tighten, and deadlines loom. Without skin in the game, without a seat in the room, without a detailed understanding of constraints and trade offs, commentary becomes heckling, and heckling improves nothing.

But there is a more damaging pattern than simply being absent. It is the pattern of swooping in after the fact, armed with a single anecdote, and redirecting an entire engineering team away from work that actually matters.

Picture this. A team is three weeks into resolving a critical payment processing defect affecting tens of thousands of transactions daily. The fix is complex, the root cause is buried deep in an integration layer, and the engineers are making real progress. Then a screenshot lands in a Slack channel. A CX function has spotted that a button label on a low traffic screen reads “Confirm” instead of “Submit.” Someone’s cousin noticed it. A director mentioned it in passing. Suddenly there is a deck, a meeting invite, and an escalation. The heckling function has found its moment. It has something visible, something it can point to, something that proves it is watching.

Within forty eight hours, two senior engineers are pulled off the payment defect to address the button. The fix takes four hours. The showcase takes longer. The payment defect, now understaffed, slips another sprint. The downstream consequences of that delay, the failed reconciliations, the customer complaints, the operational overhead, will never be attributed to the button. They will be attributed to the engineering team’s velocity. This is the classic managerial anti-pattern: authority without responsibility. The heckler wielded enough influence to redirect the room, but will bear none of the accountability for what slipped as a result.

This is how heckling destroys value invisibly. The lost time on critical issues is an unseen consequence. The heckler never sees it because they were never close enough to understand what was being worked on in the first place.

What makes CX and UX heckling particularly insidious is its almost total absence of data. It does not arrive with conversion metrics, drop off rates, session recordings, accessibility audit scores, or usability test results. It arrives as general anxiety. Vague concerns about the “feel” of the experience. Assertions that something “doesn’t look right” or “isn’t what we agreed.” It is almost always singular and anecdotal: a complaint from one customer, an observation from one stakeholder, a personal preference dressed as a design principle.

And it never acknowledges what went well. The onboarding flow that reduced friction by thirty percent goes unremarked. The form redesign that halved completion time receives no comment. The accessibility improvements that passed external audit attract no praise. Heckling functions are structurally incapable of celebrating progress because celebration does not justify their existence. Their value proposition depends on finding problems, which means problems must always be found, regardless of whether they are the most important ones.

The result is a team that learns to manage the heckler rather than solve the real problems. Engineers start anticipating the screenshot. Product managers start pre emptively polishing low traffic screens to avoid the distraction. The entire team optimises for the optics of the experience rather than the outcomes of the product. Visible cosmetic quality becomes a shield against interference, and invisible systemic quality, the kind that prevents outages, protects data, and scales under load, gets quietly deprioritised because no one is heckling about it.

Real CX and UX influence looks nothing like this. It looks like embedded designers who sit in sprint reviews and absorb constraints as they emerge. It looks like research that quantifies the actual impact of experience decisions on conversion, retention, and support volume. It looks like a function that can say “we know drop off increases by twelve percent at this step because we measured it across eighty thousand sessions, and here is our proposed fix that we have already tested with engineering for feasibility.” That is influence. That is value.

A screenshot of a button and a strong feeling is not design. It is heckling with a Figma licence.

4. Risk That Never Risks Anything

Risk functions often drift into the same trap. It is easy to publish a list of findings, state what is non compliant, and point out gaps. It is much harder to sit in sprint reviews and guide decisions as they form, design controls that preserve velocity, or propose practical alternatives that balance safety with usability. Risk that lives in documents is narrative. Risk that lives in the room is influence.

If a risk function does not join the meetings where critical decisions are made, if it does not shape options before they solidify, it becomes another backlog item fighting for relevance. Another ticket to be negotiated, another control to be interpreted creatively under pressure. Real risk work produces insight that changes behaviour before the problem materialises. It does not simply annotate the aftermath and declare the team immature.

5. Cyber as a Backlog Generator

Cyber and security teams can drift into the same pattern. A penetration test report is not value. A spreadsheet of findings is not protection. A quarterly email with severity ratings is not security. If the same issues reappear release after release, the problem is not developer immaturity. It is systemic enablement.

A cyber team that works with platform teams to harden pipelines, embeds guardrails into CI/CD, provides reusable components to prevent entire classes of vulnerability, and supplies consumable building blocks that make the secure path the easiest path is a team creating value. If instead you hand delivery teams a list of findings and walk away, you are not reducing risk, you are generating a backlog item competing for priority against revenue, customer experience, and regulatory change. Give teams tools that solve real problems and they will use them. Give them a list of issues and you will be negotiated with.

6. The PMO Spectator Effect

There is a special flavour of corporate heckling reserved for the PMO.

On paper, the Project Management Office exists to create predictability, reduce risk, and improve transparency across portfolios. It promises discipline, visibility, and control. In theory, it should be an accelerant.

In practice, when it drifts into commentary, it becomes a narrator of delivery rather than an enabler of it. A heckling PMO produces immaculate status reports while delivery teams wrestle with messy reality. It tracks milestones but does not remove blockers. It escalates risk but does not help redesign plans. It enforces process compliance but does not materially improve outcomes.

Because PMOs rarely own the product or the technical architecture, they can afford to construct clean narratives about what “should” have happened. They can question estimates without absorbing integration complexity. They can demand alignment without carrying the cost of delay. They can score maturity without understanding constraints. The result is governance theatre.

Product teams, meanwhile, are balancing client needs, technical debt, regulatory nuance, and commercial trade offs. They have already considered most of the counter proposals. They have already debated alternatives. But they have less energy to defend their position because they are also trying to ship. Eventually alignment sessions are scheduled. Decks are exchanged. Language is harmonised. Both sides present. Both sides moderate. Both sides leave with a shared document that looks like consensus but mostly reflects exhaustion.

And nothing changes. The product continues largely as it was. The heckler moves to the next target. The only casualty is the energy quietly consumed by the whole exercise, energy that was never measured, never recovered, and never missed until you wonder why the organisation always feels slightly slower than it should.

A high value PMO would look very different. It would embed in delivery. It would focus on unblocking rather than reporting. It would simplify rather than add ceremony. It would help teams make trade offs instead of policing them. If the PMO’s primary output is narrative about other teams, it is heckling. If its output is accelerated delivery, fewer surprises, and clearer decisions, it is leadership.

7. The Roadmap Session as a Litmus Test

The quarterly roadmap review is one of the most common forums in large organisations. It typically has a clear stated purpose: walk through what is planned, build shared understanding of how it connects to strategy, unpack the business case and dependencies, and align on prioritisation across teams and channels. On paper this is collaborative. In practice it is often where heckling reaches its peak intensity.

The difference between a genuine alignment session and a heckling session is not the agenda. It is who is in the room, what they have done before they arrived, and whether they are there to help or to perform scrutiny.

A partner reads the material, understands the constraints, asks questions that reduce ambiguity, and leaves the team clearer on what to do next. A heckler arrives unprepared, asks questions that signal intelligence rather than generate insight, raises risks without proposing alternatives, and departs leaving the team with more uncertainty than they walked in with.

If you are attending a planning session and your primary preparation was checking your calendar, you are not a partner. You are an audience member with a speaking slot.

Executive input and support, which these sessions explicitly require, only has value when it is anchored in real understanding of what the team is trying to do and why the trade offs have landed where they have. Input from someone who has not engaged with the business case is not support. It is noise dressed as governance.

Before the next roadmap review, ask yourself whether you have done the work that earns you a meaningful seat at that table. If not, the most valuable contribution you can make is to cancel the review.

8. Narrative Is Not Delivery

Every function in an organisation must produce something more than a narrative about another team. Architecture must produce principles and platforms that reduce friction. Risk must produce insight that shapes decisions in real time. UX must produce clarity that survives contact with engineering. Cyber must produce systems that prevent recurrence. If your primary output is a deck explaining why others are not good enough, you are not creating value. You are heckling. Delivery teams do not need more commentary. They need partners who share accountability for outcomes.

9. The Seductive Counter Narrative

There is a more insidious dimension to corporate heckling that deserves its own examination.

Because the heckler carries almost no production responsibility, they have something the product team does not: spare capacity for narrative construction. They can spend days crafting a compelling alternative story. A cleaner model. A simpler approach. A vision unburdened by the complexity of actually having to deliver it. And it is often seductive. It sounds coherent. It has diagrams. It has a name. What it rarely has is memory.

On inspection, product teams have almost always already considered this narrative. They have interrogated it, tested it against their client data, weighed it against constraints the heckler has never encountered, and made a deliberate choice to move in a different direction. They understand their clients better. They have a plan. They simply did not write a deck about the roads they chose not to take.

The asymmetry is damaging. The heckler, unencumbered by delivery, can invest disproportionate energy in articulating their counter narrative. The product team, already running a business and trying to ship value, has far less energy to defend decisions they made correctly but informally. The burden of proof falls on the wrong party.

So organisations do what they always do when two competing narratives collide. They force alignment. They convene workshops. They create working groups. They schedule steering forums. Both sides present. Both sides moderate their positions. Both sides leave with a shared document that looks like consensus but mostly reflects exhaustion. And nothing changes. The product continues largely as it was. The heckler moves to the next target. The only casualty is the energy quietly consumed by the whole exercise, energy that was never measured, never recovered, and never missed until you wonder why the organisation always feels slightly slower than it should.

Counter narratives produced without delivery accountability are not strategy. They are expensive theatre dressed as governance.

10. The Bar Raiser Is Not a Heckler

It is worth being precise about what this essay is not arguing. It is not arguing against challenge. It is not arguing against tension. It is not arguing that product teams should be left alone to make decisions in comfortable isolation. Unchallenged teams drift. Comfortable consensus produces mediocre outcomes. Tension, applied well, is one of the most productive forces in any organisation. The bar raiser is the embodiment of this distinction.

A bar raiser is not a commentator. They are a subject matter expert with a proven track record, someone who has run production workloads at comparable scale, who has felt the weight of the trade offs, who has made the hard calls under pressure and lived with the consequences. They do not arrive with a pre formed narrative. They arrive with hard won pattern recognition and the credibility that comes from having delivered.

Critically, they sit inside the team. They attend the same discovery sessions. They read the same constraints. They understand the client problems before they form an opinion about the solution. Their challenge comes after comprehension, not before it. Their narrative is built collaboratively, through conversation, through disagreement grounded in shared context, through tension that makes the output stronger rather than simply louder.

This is what separates the bar raiser from the heckler. The heckler’s narrative precedes understanding. The bar raiser’s narrative emerges from it. The heckler creates two competing stories and forces an alignment process that exhausts both sides. The bar raiser creates one shared story, harder won, more durable, and owned by the people who have to deliver it.

But calling something a bar raising session does not make it so. The label is not the thing. If you want to know whether your bar raising function is actually raising the bar, ask the teams that attended. Ask them whether it added value. Ask whether any outcomes improved as a result. Ask whether they spent the session restating basic constraints in response to ideas that had not been sufficiently thought through before the meeting was called. Ask whether they left with genuine assistance or with a list of issues to resolve on their own. The answers will tell you quickly whether you have a bar raiser or an institutionalised heckler with a better job title.

Challenge without context is noise. Challenge with context, delivered from inside the work, is how good teams become great ones.

11. What To Do If You Have Heckling Functions

If you recognise these patterns in your organisation, the instinct may be to restructure, defund, or simply ignore the offending functions. That instinct is understandable but wrong, because it wastes the expertise that almost certainly exists inside those teams.

The people in these functions are not usually incompetent. Many of them are deeply skilled, experienced, and genuinely want to make a difference. The problem is structural. They have been placed at a distance from delivery, given frameworks instead of problems, and measured on outputs that have nothing to do with what actually ships. Over time, commentary becomes the only currency available to them. It is not surprising that frustration follows. They sense their own irrelevance. They feel the distance growing between their work and anything that matters. The heckling is often a symptom of that frustration, not its cause.

No organisation can afford teams whose relevance depends on someone else’s product set. That is not a support function. That is a dependency masquerading as governance. When a team’s purpose is defined by what another team is building rather than by what they themselves are accountable for delivering, they will always drift toward commentary. They have no other move. Their existence requires the product team to be doing something they can talk about.

The solution is reintegration. Move architects into delivery teams. Embed risk practitioners into sprint ceremonies. Put UX designers in the room when engineering trade offs are being made. Give cyber teams ownership of platform controls rather than audit findings. Make them accountable for outcomes they can actually influence, not scores on a maturity model that no one trusts. And if a team cannot articulate what it produces independently of what another team is building, that is not a communication problem. It is a mandate problem that no amount of heckling will solve.

Most people, given the chance to do real work and contribute to something that ships, will take it. The commentariat is not a permanent condition. It is what happens when talented people are structurally prevented from being useful. Reintegrate them, give them genuine accountability, and most of them will stop narrating and start building.

Corporate heckling is loud, confident, and useless. The bar raiser is quiet, credible, and transformative. Collaboration is harder than commentary, and slower to begin, but it is the only thing that actually compounds. One builds decks. The other builds outcomes.

It is also worth saying plainly: many of the people doing the most visible heckling are not difficult people. They are good people in the wrong structure. They arrived with real skills, genuine ambition, and a sincere desire to contribute. The organisation then placed them at a distance from anything they could actually influence, gave them frameworks to apply to other people’s problems, and measured them on outputs that nobody outside their own function cares about. Over time, distance breeds frustration. Frustration breeds commentary. Commentary, when it gets loud enough, looks like heckling. But the heckling is not the person. It is what the person has been reduced to by a culture and structure that gave them no better option. Move these people. Put them where their skills can actually land. Set them up with real accountability, real context, and real problems that belong to them. Most of them will transform. Not because they have changed, but because the conditions that were suppressing them have. The frustrated heckler in the wrong team is often the most valuable contributor in the right one. The challenge was never the individual. It was always the structure they were trapped inside.

Note: Almost everyone I tested this article with thought it was pointed at them. On reflection, I believe that ambiguity about who it applies to is a healthy space to be in 🙂

The Leadership Event Horizon

1. The Shoe Planet Problem

In The Hitchhiker’s Guide to the Galaxy, there is a planet where the inhabitants become so obsessed with shoes that the shoes eventually take over. The civilisation does not collapse because it lacks intelligence. It collapses because something peripheral accumulates mass until it dominates everything essential.

Leadership bloat is the corporate equivalent of that shoe event horizon.

Leadership is necessary. Direction matters. Coherence matters. But beyond a certain density, leadership stops orbiting the work and begins consuming it. The organisation crosses an invisible boundary where supervising value creation becomes more important than value creation itself.

That boundary is the leadership event horizon.

2. The One Metre Ruler

Imagine hiring one hundred leaders to stare at a one metre ruler.

How long is it?

One metre.

Will they agree?

Eventually. After workshops. After alignment sessions. After someone reframes the definition of measurement. After governance confirms the interpretation of length. The ruler does not change. Reality does not move. What expands is the interpretive machinery around it.

What did the hundred leaders change? Not the length. They changed the cost base. They changed the latency of decision making. They changed how long it now takes to say something obvious.

Near an event horizon, mass bends time. In organisations, layers bend speed. The more leadership mass you add, the slower information travels. By the time clarity moves up and back down the hierarchy, the market has already moved.

3. When the Shoes Take Control

In the Hitchhiker's universe, the tipping point is subtle. People are still discussing shoes, improving shoes, optimising shoes. Only later does it become clear that the accessories are now in charge.

In business, we hire leaders to coordinate teams. Then we hire leaders to coordinate those leaders. Then we create forums to align the coordinating leaders. Each step feels responsible. Collectively, they create gravitational pull.

At some point, the business exists to sustain its leadership ecosystem rather than to win in its market. The org chart becomes the product. Ritual replaces output. The shoes are no longer worn. They are curated.

4. Sectionalising the Galaxy

To manage complexity, we subdivide the business into smaller domains. Each segment gets its own leader. Each leader builds a reporting structure. Each structure develops language, metrics, and boundaries that reinforce autonomy. Internally, this feels like precision. Externally, it feels like fragmentation.

The client does not experience your segmentation model. They experience one product, one service, one brand. Internally, multiple leaders debate which micro domain owns the button the client just pressed. Every boundary introduces a negotiation. Every negotiation introduces delay. The galaxy becomes a federation of guarded territories rather than a coherent competitive force.

Sectionalisation increases interfaces. Interfaces increase friction. Friction reduces speed.

5. Comfort as Gravity

Leaders often hire people they feel comfortable with. People who speak the same conceptual language. People who understand the politics. People who can sit in a room and have sophisticated conversations about alignment and transformation.

But filling rooms with people you enjoy conversing with is not a business model. It is a social structure with a payroll.

Comfort attracts more comfort. The organisation gradually optimises for internal fluency rather than external performance. The gravitational mass increases. Escaping becomes harder.

6. The Fifty Metre Paradox

There is a particular species of gravitational collapse that deserves its own classification.

An executive hires a business head to prioritise and align what a team is doing. That team sits less than fifty metres from the executive’s office. The team already has a leader. That leader already has a mandate. That mandate was, in most cases, given by the same executive who just hired the business head.

What follows is predictable. Two structures now orbit the same work. Both need to understand it. Both need to articulate it. Both need to feel heard. Workshops are convened. Alignment sessions are scheduled. Vocabulary is negotiated. And after weeks of structured convergence, both sides arrive at exactly the same conclusion the team started with.

The executive feels comfort. Two independent hierarchies have validated the same direction. This feels like rigour. It feels like governance. In reality, it is the organisational equivalent of asking someone to confirm that the clock on the wall matches the clock on your wrist.

But the real cost is hidden below. The teams responsible for serving both structures now maintain dual reporting formats, synchronised decks, parallel status updates. Builders become translators. Engineering hours evaporate into PowerPoint. Every sprint loses capacity not to delivery, but to the overhead of proving to two audiences that the same thing is still the same thing.

The work does not move faster. The team does not gain clarity. What grows is the administrative mass required to keep two gravitational centres from visibly contradicting each other.

Fifty metres. Same floor. Same building. Two reporting lines. Zero additional insight.

7. The Trampoline Committee

There is a governance structure so perfectly circular that it deserves a name. Call it the trampoline committee.

A group of senior leaders convenes to review and debate decisions. The decisions were made by their subordinates. Their subordinates are the subject matter experts. They built the systems, analysed the data, assessed the risk, and arrived at a recommendation based on years of domain knowledge the committee does not have.

The committee examines the decision. They do not understand it. This is not a criticism of their intelligence. It is a structural inevitability. They were not hired to understand the detail. They were hired to lead at a level above it.

So they ask questions. Reasonable questions. Probing questions. Questions that feel like oversight. And who answers those questions? The same subordinates who made the decision in the first place. The experts explain the decision. The committee listens. The experts explain it again in different words. The committee nods. The decision is approved. Unchanged.

The decision bounced up, hit the committee, and bounced back down exactly where it started. A trampoline. Energy expended, altitude achieved, net displacement zero.

Nothing was improved. Nothing was caught. Nothing was redirected. The only measurable output is that delivery slowed by whatever elapsed time the committee cycle consumed. Two weeks. Four weeks. Sometimes longer, if the calendar gods are unkind and the committee only sits monthly.

Trampoline committees exist because they feel like control. Executives feel they have exercised judgement. Governance frameworks can point to an approval step. Auditors can see a signature. But the signature confirms what was already confirmed by the people who actually know.

The most telling sign of a trampoline committee is this: if you removed it entirely, nothing downstream would change. The same decisions would be made. The same outcomes would follow. The only difference is they would arrive faster.

8. Less Than Nothing

Beyond the leadership event horizon, adding another leader does not increase clarity. It increases drag.

Decision cycles lengthen because authority is distributed across more layers.

Accountability diffuses because responsibility becomes collective and abstract.

Cost compounds because senior salaries require disproportionate impact to justify their existence.

Layering leaders on leaders achieves less than nothing when it slows the builders. It is not neutral overhead. It is competitive disadvantage.

9. Escaping the Event Horizon

The problem is not leadership. The problem is inversion of priorities.

Competitive organisations are ruthless about identifying the real constraint. If the constraint is engineering throughput, hire engineers. If the constraint is product clarity, hire product thinkers. If the constraint is distribution, hire market makers. Do not reflexively hire supervision when the bottleneck is production.

Strong leadership often manifests as fewer layers, clearer mandates, wider spans of control, and a bias toward builders over commentators. The aim is not to eliminate gravity. The aim is to prevent collapse.

Markets reward output, speed, and coherence.

No number of leaders staring at a one metre ruler will make it any longer.