The COBOL conversation this week has been useful, because it has forced the industry to confront something it has been avoiding for decades. But most of the coverage is stopping at the wrong point. Everyone is talking about COBOL. Nobody is talking about the architectural philosophy that COBOL gave birth to, the one that outlived the mainframe, survived the client server era, made it through the cloud revolution, and is still being sold to banks today with a straight face.
Core banking. The idea that you can package every conceivable banking function into a single platform, run it as a monolithic system, and call that an architecture. It was a reasonable compromise when banking was about cutting a cheque once a month and buying a house every twenty years. It is a completely inadequate approach to meeting modern banking needs, and the fact that it has persisted this long is one of the most remarkable examples of institutional inertia in the history of enterprise technology.
This is a companion to my earlier article on the COBOL announcement that shook IBM’s stock price. That piece was about the death of COBOL as a moat. This one is about the death of the architectural philosophy that COBOL created, and why that second death is the one that actually matters.
1. Where Core Banking Came From
To understand why core banking became so entrenched, you need to go back to where it started. The first computerised core banking systems emerged in the late 1960s and early 1970s, built in COBOL and running on IBM mainframes. The business problem they were solving was genuine and significant: banks had enormous volumes of transactions to process, they were doing it manually or with primitive automation, and they needed centralisation, speed, and reliability.
The solution was a single centralised computer that handled everything. Account management, transaction processing, interest calculation, fee charging, regulatory reporting, all of it in one place, in one codebase, with batch processing that ran overnight. Transactions were processed in groups at end of day because that was the technical reality of the hardware. Intraday balances required workarounds. The system was only accessible during banking hours. These were not design choices made out of laziness. They were pragmatic responses to the constraints of 1970s computing.
And it worked. For the banking reality of the 1970s, it worked extremely well. A customer visited one branch. They had a current account and perhaps a savings account. They wrote cheques. They took out a mortgage once in their adult life. The entire relationship was narrow, predictable, low volume, and slow moving. A batch processing system that updated balances overnight was entirely adequate for that world. The monolithic architecture made sense because the problem it was solving was genuinely monolithic.
The architectural sin came later. It came when that original pragmatic compromise got packaged up, sold as a product, extended by vendors across decades, and eventually canonised as the correct way to build a bank’s technology. The compromise became the convention. The workaround became the standard. And by the time the banking world had changed beyond recognition, the core banking system had become too embedded, too expensive, and too complex to dislodge.
2. The Stuck Thought That Refuses to Die
By the 1980s and 1990s, banking had already changed enough that the monolithic core was showing its limitations. Banks were adding credit cards, mortgages, foreign exchange, investment products. Each of these added specialist systems, often with their own ledgers, their own data models, their own business logic. The monolith started to fracture, not by design but by accretion, as new modules were bolted onto an architecture that was never designed to accommodate them.
Vendors responded by building larger monoliths. Temenos, Oracle FLEXCUBE, Finacle, SAP Banking — these systems attempted to consolidate the sprawl by packaging more and more functionality into a single platform. The pitch was compelling: one vendor, one contract, one system of record, one throat to choke. For a generation of technology leaders who had lived through the nightmare of integrating dozens of incompatible specialist systems, the appeal was understandable.
But the packaging created a new problem. These systems were so comprehensive, so interconnected, and so deeply embedded in a bank’s operations that they became impossible to change without enormous risk and cost. Upgrading a core banking system became a multi year programme. Configuring a new product required navigating hundreds of interdependent parameters. Adding a feature that the vendor had not anticipated required either a costly customisation that would be deprecated in the next release, or a multi year wait for the vendor roadmap to catch up with the business need.
The result, as Thoughtworks has described in their analysis of coreless banking, was that the rigid coupling of product features and core systems became inadequate, while the complexity protecting those systems kept growing. Banks found themselves in a situation where the cost of change was so high that they simply stopped changing. Instead they wrapped the core in middleware, built APIs around the edges, and told themselves that digital transformation was happening while the fundamental architecture underneath stayed frozen.
This is the stuck thought that UXDA documented so vividly in their research into banking back office systems: the customer sees a modern mobile app, a sleek digital interface, an award winning UX. Behind that interface is a core banking system that looks like software from the 1990s, that employees have to learn through month long training programmes, that causes the kind of operational errors that cost Citibank $900 million in a single afternoon when a contractor clicked the wrong option in an unintelligible Flexcube interface. The front of house is modern. The engine room has not changed.
3. Why Almost Every New Bank Refuses to Build This Way
Here is the most telling evidence that core banking as an architectural philosophy is obsolete: virtually every new bank built in the last decade has explicitly rejected it.
Dozens of new digital banks have been built around domain driven design and event driven architecture from the ground up. They did not choose this approach because it was fashionable. They chose it because when you are starting from a clean sheet, building a monolithic core banking system is obviously the wrong answer to the problems you are actually trying to solve.
The problems modern banking needs to solve are completely different from the problems of 1970. A customer today might interact with their bank hundreds of times a month, not once. They expect real time balances, instant payments, instant lending decisions, personalised product recommendations, seamless integration with third party services, and the ability to open a new product in under two minutes. They expect the bank to know them across every product and every channel simultaneously. They expect changes to the product to happen in days, not years.
None of these requirements can be met by a batch processing system designed to update balances overnight. None of them are well served by a monolith where changing one component requires testing the entire system. None of them benefit from packaging every banking function into a single platform that can only scale vertically and can only be deployed as a whole.
What serves these requirements is a domain driven architecture where payments is a domain, lending is a domain, identity is a domain, notifications is a domain, and each of these domains owns its own data, exposes its own APIs, publishes its own events, and can be scaled, deployed, and changed independently of every other domain. When the payments domain needs to handle ten times the usual volume on a public holiday, it scales without touching the lending domain. When the product team wants to iterate on the lending decision engine, they do it without a change freeze on the rest of the bank. When regulatory requirements change the way identity must be handled, that change is contained to the identity domain rather than rippling through a monolith in unpredictable ways.
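That decoupling can be sketched in a few lines. The sketch below is illustrative only: the domain names, the event name, and the in-process bus are stand-ins for whatever messaging infrastructure (Kafka, SNS, and so on) a real bank would use. The point it demonstrates is the one in the paragraph above: the lending domain consumes a fact the payments domain announces, without ever touching the payments domain's internals or database.

```python
from collections import defaultdict
from typing import Callable

# A toy in-process event bus standing in for real messaging infrastructure.
class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type: str, handler: Callable) -> None:
        self._subscribers[event_type].append(handler)

    def publish(self, event_type: str, payload: dict) -> None:
        for handler in self._subscribers[event_type]:
            handler(payload)

# The payments domain owns its own logic and data; it only announces facts.
class PaymentsDomain:
    def __init__(self, bus: EventBus):
        self.bus = bus

    def settle(self, account: str, amount_cents: int) -> None:
        # ... internal settlement logic against payments' own data store ...
        self.bus.publish("payment.settled",
                         {"account": account, "amount_cents": amount_cents})

# The lending domain keeps its own copy of the data it needs. It never
# reads the payments database and has no compile-time dependency on it.
class LendingDomain:
    def __init__(self, bus: EventBus):
        self.repayments = defaultdict(int)
        bus.subscribe("payment.settled", self.on_payment_settled)

    def on_payment_settled(self, payload: dict) -> None:
        self.repayments[payload["account"]] += payload["amount_cents"]

bus = EventBus()
payments = PaymentsDomain(bus)
lending = LendingDomain(bus)
payments.settle("acc-1", 25_000)
print(lending.repayments["acc-1"])  # 25000
```

Because the only contract between the two domains is the event, either side can be redeployed, rescaled, or rewritten without a change freeze on the other.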
This is not a new idea. The Banking Industry Architecture Network has been developing the Coreless Banking Model for years, defining semantic APIs and data models that support exactly this kind of domain centric architecture. Thoughtworks published their analysis of coreless banking in 2024 making the same argument with clinical precision. The intellectual case for domain driven banking architecture has been settled for the better part of a decade. The reason it has not been universally adopted is not that the arguments are wrong. It is that the switching cost from a monolithic core is genuinely enormous, and the organisations selling monolithic cores have been very effective at making that switching cost feel even larger than it actually is.
4. The Vendor Trap
The core banking vendor market deserves its own examination, because it has been one of the primary mechanisms through which the stuck thought has perpetuated itself.
The major core banking vendors, Temenos, Oracle, Finastra, FIS, Fiserv, and their equivalents, have built extraordinarily successful businesses on a straightforward proposition: banking is too complex for you to build yourself, so buy our platform and we will handle the complexity for you. For smaller and mid tier banks without large technology organisations, this proposition was often correct. The cost of building and maintaining a custom core was prohibitive, and the vendor platform, for all its limitations, was more reliable than what the bank could build in house.
But the proposition came with hidden costs that only became apparent over time. Implementation took years. Customisation was expensive and fragile. Upgrades required the kind of programme management that consumed entire technology departments. The vendor roadmap moved at the vendor’s pace, not the bank’s. And most critically, the more deeply a bank embedded itself in a vendor’s platform, the more expensive it became to ever leave.
This is the architectural equivalent of the MIPS pricing problem I wrote about in the COBOL article. Just as MIPS pricing gave IBM leverage over every new workload a bank wanted to run, core banking vendor contracts give those vendors leverage over every new product a bank wants to launch. The bank becomes dependent not just on the platform but on the vendor’s interpretation of what banking should look like, what products should be possible, what data models should exist. The vendor’s architecture becomes the bank’s architecture by default, and the bank’s ability to differentiate on technology becomes increasingly constrained.
Fintech Futures documented this dynamic in their analysis of the core banking packaged software market, describing an industry at a crossroads where the limitations of the packaged approach have become undeniable but the switching costs have made change feel impossible. The vendors know this. Their licensing models, their implementation dependencies, their proprietary data formats are all optimised to make the cost of leaving feel higher than the cost of staying. It is a very sophisticated form of the same complexity moat that COBOL built around the mainframe.
5. What Good Architecture Actually Looks Like
The answer is not to build another monolith. It is not to buy the newest generation of packaged core banking and hope that this one is more flexible than the last. It is to stop thinking about banking technology as a platform problem and start thinking about it as a domain problem.
Every bank has a set of distinct business capabilities: account management, payments processing, lending, customer identity, fraud detection, product management, regulatory reporting. Each of these is a genuine domain with its own data model, its own business logic, its own rate of change, and its own scaling requirements. Payments needs to handle enormous burst volumes in real time. Lending decisions need access to rich customer data but do not need to scale the same way as payments. Regulatory reporting needs a complete and immutable audit trail but does not need to be fast. These are different problems that benefit from different solutions, different technology choices, different team ownership.
A domain driven architecture treats each of these as an independently deployable unit with clear ownership and explicit interfaces. Domains talk to each other by publishing events that other domains can consume, or by exposing APIs that other domains can call. They do not share databases. They do not share code. They own their own data and are responsible for keeping it consistent. When a domain changes, it publishes a new event schema or a new API version, and the downstream consumers can upgrade on their own schedule.
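The "consumers upgrade on their own schedule" point is worth making concrete. A minimal sketch, with entirely hypothetical field names: the payments domain adds a field in version 2 of its event schema, and a downstream consumer that has not yet upgraded keeps working by falling back to a default instead of breaking.

```python
# Hypothetical versioned event schemas for a "payment.settled" event.
def make_event_v1(account: str, amount_cents: int) -> dict:
    return {"schema_version": 1, "account": account, "amount_cents": amount_cents}

def make_event_v2(account: str, amount_cents: int, currency: str) -> dict:
    return {"schema_version": 2, "account": account,
            "amount_cents": amount_cents, "currency": currency}

def consume(event: dict) -> tuple:
    # A not-yet-upgraded consumer tolerates the missing v2 field by
    # defaulting it, rather than forcing a lockstep deployment.
    currency = event.get("currency", "ZAR")
    return (event["account"], event["amount_cents"], currency)

assert consume(make_event_v1("acc-1", 100)) == ("acc-1", 100, "ZAR")
assert consume(make_event_v2("acc-1", 100, "USD")) == ("acc-1", 100, "USD")
```

Contrast this with a shared database, where adding a column is a change every consumer must absorb at the same moment.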
This is how Starling Bank was built. It is how Monzo was built. It is how Nubank scaled to 100 million customers in Latin America without the kind of operational debt that is crushing their traditional competitors. It is how TymeBank has been able to move at pace in South Africa while the older banks are still trying to upgrade their cores. The pattern works. It has been proven at scale. The only remaining argument for the monolithic core is switching cost and organisational inertia, and those are not architectural arguments.
6. The Finextra Argument That Still Gets Ignored
The case against core banking system conversion was made clearly by Dave Wascha in Finextra over a decade ago. The argument was simple and remains correct: a core banking system conversion is the wrong approach because it asks you to solve the wrong problem. The problem is not the technology. The problem is the architecture. Ripping out one monolithic core and replacing it with another, even a more modern one, just resets the clock on the same fundamental constraints. You will be having the same conversation in fifteen years.
The right approach is to strangle the core incrementally, building new capabilities on modern domain architecture alongside the existing system, migrating workloads progressively, and shrinking the footprint of the legacy core until it either becomes so small that replacement is trivial or its remaining functions are so well isolated that they can be maintained indefinitely without constraining everything else. This is the strangler fig pattern applied to banking, and it is the only migration approach that consistently produces good outcomes at acceptable risk.
The reason this argument has been ignored for so long is partly organisational and partly commercial. Organisationally, incremental transformation is harder to fund and harder to explain to a board than a single big programme. A “core banking replacement” is a project. An “incremental domain migration” is a journey with no obvious end date, and boards are more comfortable writing cheques for projects than journeys. Commercially, the vendors who benefit most from core banking replacement programmes are exactly the ones with the most influence over technology strategy at major banks. They are sponsoring the industry events, funding the research, and sitting on the advisory boards of the institutions making these decisions.
7. The Moment That Changes the Calculation
This week’s AI announcement changes the calculation in a specific and important way. If AI can genuinely compress the analysis phase of COBOL modernisation from months to weeks, it also compresses the analysis phase of core banking domain decomposition. The same capability that maps COBOL dependencies across thousands of lines of code can map the business logic embedded in a core banking system, identify which functions are genuinely coupled and which only appear to be, and produce the domain decomposition blueprint that has historically cost millions in consulting fees before a single line of new code is written.
That does not mean migration is suddenly easy. The data migration problem, the regulatory validation problem, the organisational change management problem — these remain genuinely hard. But the argument that understanding the problem is too expensive has just gotten significantly weaker.
For technology leaders at established banks, this is the moment to have the honest conversation that has been deferred for too long. Not about whether to replace the core banking system with another core banking system. About whether the core banking architectural model itself is the right answer for the next twenty years of banking, given that it was clearly not the right answer for the last twenty.
The neobanks have already answered that question. They answered it by building domains instead of cores, by publishing events instead of sharing databases, by owning their architecture instead of renting it from a vendor. The results speak for themselves.
The monolithic core had its era. That era was the 1970s. It is time to let it go.
8. But What About 10x and Thought Machine?
At this point someone will raise the next generation platforms. 10x Banking. Thought Machine’s Vault. Mambu. Thought Machine in particular has attracted serious attention and serious investment, and it deserves a serious answer rather than being dismissed alongside the legacy vendors it is trying to displace.
These platforms are genuinely better than what came before. They are cloud native, API first, built on modern data models, designed around real time processing rather than batch, and engineered with the assumption that they will operate inside a broader ecosystem rather than own it entirely. Thought Machine’s Vault is a technically impressive piece of work. 10x’s architecture represents a meaningful step forward from Oracle FLEXCUBE or Temenos in terms of composability and integration capability. These are not the same mistake with a modern coat of paint.
But they are still platforms. And platforms still have roadmaps. And roadmaps still belong to the vendor.
When you build on Thought Machine, you are building on Thought Machine’s interpretation of what a bank’s data model should look like, which products should be natively configurable, which events should be publishable, which integration patterns should be supported. For the things the platform anticipated well, you move fast. For the things it did not anticipate, you are back in a conversation you recognise. You raise a feature request. You get a timeline. You wait for a release cycle. You consider paying for a customisation that may or may not survive the next major version.
The vendor lock in problem does not disappear because the vendor is newer and more technically sophisticated. It changes shape. With a legacy core the lock in is brutal and visible: an ancient data model, proprietary languages, batch processing that cannot be unwound, a UI that looks like it was designed during the Reagan administration. With a next generation platform the lock in is subtler, almost comfortable, because the platform is genuinely capable enough that you rarely hit the ceiling in the early years. The constraint only becomes apparent when your business wants to do something the platform’s designers did not anticipate, and by that point you are too deeply embedded to leave cleanly.
The deeper question is what the core ledger should actually be responsible for. If the answer is the immutable record of financial transactions, balances, postings, and the audit trail that regulators require, then the core ledger can be thin, commodity, and almost interchangeable. Everything else, product configuration, pricing logic, customer decisioning, risk models, notification orchestration, should live in domain services that the bank owns, controls, and can evolve independently of any vendor’s roadmap. The next generation platforms gesture toward this model. None of them fully deliver it. They still want to own more of your architecture than a thin ledger should.
The banks that will genuinely win the next decade are not the ones that chose the most impressive platform in a vendor selection process. They are the ones that asked the harder question: how do we minimise our dependence on any platform, maximise the surface area of architecture we own outright, and treat the core ledger as a utility rather than a foundation? That is a more difficult organisational and cultural problem than picking a technology. It requires engineering leadership to resist the gravitational pull of the comprehensive platform pitch, which is always compelling and always costs more freedom than the demo suggests.
10x and Thought Machine are the right answer if the alternative is Temenos. They are not the right answer if the question is whether to build a banking architecture you actually own.
9. The Exquisite Pain of the Core Banking Upgrade
There is a particular kind of suffering in enterprise technology that has no equivalent elsewhere in the industry. It is the core banking upgrade. And if you have never lived through one, you cannot fully appreciate the combination of expense, duration, risk, and ultimate anticlimax that defines the experience.
A typical core banking upgrade programme runs three to five years. It consumes hundreds of millions of dollars when you account for implementation partners, internal resource, parallel running, testing infrastructure, and the inevitable scope creep that accompanies any programme of this complexity. It occupies the attention of the most senior technology leadership in the organisation for its entire duration. It generates a programme governance structure so elaborate that the governance itself becomes a full time job. It dominates board reporting, risk committee agendas, and regulator conversations for years at a stretch.
And then it goes live. And the very best outcome, the outcome the programme director dreams about, the outcome that gets celebrated with a quiet internal announcement and a cautious all staff email, is that nobody noticed. Not the customers. Not the operations teams. Not the regulators. The system behaves exactly as it did before, processes the same transactions, produces the same outputs, and the only visible change is that the version number in the admin console has incremented.
That is the success case. Three years. Hundreds of millions of dollars. New leadership, because the old CTO either burned out or was quietly moved on sometime around year two. And the headline achievement is: we did not break anything.
The business case that justified the programme spoke of future capability. Once on the new platform, the bank would be able to launch products faster, integrate with partners more easily, respond to regulatory changes with less pain, and unlock features from the vendor roadmap that the old system could not support. Some of those things materialise. Many of them do not, or materialise so slowly that the business opportunity they were meant to serve has already been captured by someone else.
Because here is the uncomfortable truth about the features you were going to unlock after the upgrade: if a customer wanted them badly enough, they left before you finished the programme. The customers who stayed either do not care about those features, have adapted their behaviour around their absence, or are locked in by switching costs of their own. You spent three years and a hundred million dollars catching up to a market position you should have held five years ago, in a world that has moved on since then.
The cruelest part is the competitive dynamics. When a major bank announces a core banking replacement programme, the correct response from every competitor is quiet celebration. Not because you wish them ill, but because you know what is coming. That bank is about to disappear into an internal programme for three to five years. Their best technology people will be consumed by the upgrade. Their ability to ship new products will be constrained by change freezes. Their senior leadership will be distracted. Their risk appetite will contract because nobody wants a major incident during a core migration. They will emerge on the other side with a cost base swollen by programme spend, exhausted teams, and technology leadership that has largely turned over, and they will need another year just to rediscover what they were doing before the programme started. During the upgrade cycle, the bank will essentially have gifted its competitors years of uncontested market space.
This is the final indictment of the monolithic core banking model. It does not just constrain your architecture. It periodically forces you to consume enormous organisational energy in programmes whose best case outcome is standing still, while your competitors who made different architectural choices are shipping features every sprint. The upgrade treadmill is not an accident. It is a structural consequence of the architecture, and it will not end until the architecture changes.
Andrew Baker is Chief Information Officer at Capitec Bank. He writes about enterprise architecture, banking technology, and the future of financial services technology at andrewbaker.ninja.