
Why Capitec Pulse Is a World First and Why You Cannot Just Copy It

By Andrew Baker, Chief Information Officer, Capitec Bank

The Engineering Behind Capitec Pulse

1. Introduction

I have had lots of questions about how we are “reading our clients’ minds”. It is a great question, but the answer is quite complex, so I decided to blog it. The article below focuses on the heavy lifting required to make agentic solutions first class citizens of your architecture. I don’t go down to box diagrams in this article, but it should give you enough to frame the shape of your architecture and the choices you have.

When Capitec launched Pulse this week, the coverage focused on the numbers: an AI powered contact centre tool that reduces call handling time by up to 18%, delivering a 26% net performance improvement across the pilot group, with agents who previously took 7% longer than the contact centre average closing that gap entirely after adoption. Those are meaningful numbers, and they are worth reporting. But they are not the interesting part of the story.

The interesting part is the engineering that makes Pulse possible at all, and why the “world first” claim, which drew measured scepticism from TechCentral and others who pointed to existing vendor platforms with broadly similar agent assist capabilities, is more defensible than the initial coverage suggested. The distinction between having a concept and being able to deploy it in production, at banking scale, against a live estate of 25 million clients, is not a marketing question. It is a physics question. This article explains why.

2. What Pulse Actually Does

To understand why Pulse is difficult to build, it helps to understand precisely what it is being asked to do. When a Capitec client contacts the support centre through the banking app, Pulse fires. Before the agent picks up the call, the system assembles a real time contextual picture of that client’s recent account activity, drawing on signals from across the bank’s systems: declined transactions, app diagnostics, service interruptions, payment data and risk indicators. All of that context is surfaced to the agent before the first word is exchanged, so that the agent enters the conversation already knowing, or at least having a high confidence hypothesis about, why the client is calling.

The goal, as I described it in the launch statement, is not simply faster resolution. It is an effortless experience for clients at the moment they are most frustrated. The removal of the repetitive preamble, the “can you tell me the last four digits of your card” and “when did the problem start” that precedes every contact centre interaction, is what makes the experience qualitatively different, not just marginally faster. The 18% reduction in handling time is a consequence of that. It is not the objective.

What makes this hard is not the user interface, or the machine learning, or the integration with Amazon Connect. What makes it hard is getting the right data, for the right client, in the right form, in the window of time between the client tapping “call us” and the agent picking up. That window is measured in seconds. The data in question spans the entire operational footprint of a major retail bank.

3. Why Anyone Can Build Pulse in a Meeting Room, But Not in Production

When TechCentral noted that several major technology vendors offer agent assist platforms with broadly similar real time context capabilities, they were correct on the surface. Genesys, Salesforce, Amazon Connect itself and a number of specialised contact centre AI vendors all offer products that can surface contextual information to agents during calls. The concept of giving an agent more information before they speak to a customer is not new, and Capitec has never claimed it is.

The “world first” claim is more specific than that. It is a claim about delivering real time situational understanding at the moment a call is received, built entirely on live operational data rather than batch replicated summaries, without impacting the bank’s production transaction processing. That specificity is what the coverage largely missed, and it is worth unpacking in detail, because the reason no comparable system exists is not that nobody thought of it. It is that the engineering path to deploying it safely is extremely narrow, and it requires a degree of control over the underlying data architecture that almost no bank in the world currently possesses.

To see why, it helps to understand the two approaches any bank or vendor would naturally reach for, and why both of them fail at scale.

4. Option 1: Replicate Everything Into Pulse Before the Call Arrives

The first and most intuitive approach is to build a dedicated data store for Pulse and replicate all relevant client data into it continuously. Pulse then queries its own local copy of the data when a call comes in, rather than touching production systems at all. The production estate is insulated, the data is pre assembled, and the agent gets a fast response because Pulse is working against its own index rather than firing live queries into transactional databases.

This approach has significant appeal on paper, and it is the model that most vendor platforms implicitly rely on. The problem is what happens to it at banking scale, in a real production environment, under real time load.

Most banks run their data replication through change data capture (CDC) pipelines. A CDC tool watches the database write ahead log, the sequential record of every committed transaction, and streams those changes downstream to consuming systems: the data warehouse, the fraud platform, the reporting layer, the risk systems. These pipelines are already under substantial pressure in large scale banking environments. They are not idle infrastructure with spare capacity waiting to be allocated. Adding a new, high priority, low latency replication consumer for contact centre data means competing with every other downstream consumer for CDC throughput, and the replication lag that results from that contention can easily reach the point where the data Pulse is working with is minutes or tens of minutes old rather than seconds.
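The contention dynamic above can be made concrete with a toy model. The numbers and the linear capacity-sharing assumption below are purely illustrative, not Capitec’s figures and not the behaviour of any specific CDC product; the point is only that when the per-consumer share of pipeline throughput drops below the write rate, lag grows without bound.

```python
# Toy model of CDC replication lag under consumer contention.
# All figures are hypothetical; a shared CDC pipeline can drain a fixed
# number of change events per second, and every downstream consumer
# needs the full change stream, so each added consumer shrinks the
# effective per-consumer throughput.

def replication_lag_seconds(write_rate, drain_capacity, consumers, seconds):
    """Return the backlog (expressed in seconds of data) after `seconds` of load.

    write_rate:     committed changes per second hitting the write ahead log
    drain_capacity: total change events/sec the CDC pipeline can ship
    consumers:      downstream consumers sharing that capacity
    """
    per_consumer = drain_capacity / consumers
    backlog = max(0.0, (write_rate - per_consumer) * seconds)
    return backlog / write_rate  # how stale the slowest consumer's view is

# Before the new consumer: four consumers keep up comfortably.
print(replication_lag_seconds(10_000, 50_000, 4, 600))   # 0.0
# Add a fifth, high priority consumer to a slightly loaded pipeline:
# after ten minutes the data it sees is a full minute old.
print(replication_lag_seconds(10_000, 45_000, 5, 600))   # 60.0
```

The model is crude, but it captures why “just add another CDC consumer” is not a free decision on a pipeline that is already near capacity.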

For some of our core services, CDC replication is not an option at all because of their massive volumes, so those key services would simply not be eligible for Pulse if we adopted a replication architecture.

The more fundamental problem, though, is one of scope. You cannot wait for a call to come in before deciding what to replicate. By the time the client has initiated the support session, there is no longer enough time to go and fetch the relevant data from what is currently more than 60 databases and log stores. The replication into the Pulse data store has to be continuous, complete and current for all 25 million clients, not just the ones currently on calls. That means maintaining sub second freshness across the entire operational footprint of the bank, continuously, around the clock. The storage footprint of that at scale is large. The write amplification, where every transaction is written twice, once to the source system and once to the Pulse replica, effectively doubles the IOPS demand on an already loaded infrastructure. And the cost of provisioning enough I/O capacity to maintain that freshness reliably, without tail latency spikes that would degrade the contact centre experience, is substantial and recurring.
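The shape of that cost is easy to sketch. The figures below are hypothetical, chosen only to show the arithmetic, not Capitec’s actual volumes or per-client data sizes.

```python
# Back-of-envelope sketch of the write amplification and storage cost of
# a full continuously fresh replica. All inputs are invented for
# illustration.

def replica_cost(source_write_iops, clients, bytes_per_client):
    total_write_iops = source_write_iops * 2           # every write lands twice
    replica_storage_tb = clients * bytes_per_client / 1e12
    return total_write_iops, replica_storage_tb

iops, tb = replica_cost(source_write_iops=200_000,
                        clients=25_000_000,
                        bytes_per_client=2_000_000)    # ~2 MB of hot context each
print(iops)  # 400000  -- the estate's write IOPS demand, doubled
print(tb)    # 50.0    -- terabytes of replica storage, before indexes
```

And that provisioned capacity has to be sized for the worst case, not the average, which is exactly the outage scenario described next.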

All of our core services have to be designed for worst case failure states. During an outage, when all our systems are already under huge scale out pressures, contact centre call volumes are obviously at their peak as well. If Pulse replication added pressure during that scenario to the point where we could not recover our services, or had to turn it off precisely when it was most valuable, the architectural trade off would be untenable.

Option 1 works on paper. In production, against a real banking client base of the size Capitec serves, it is expensive, architecturally fragile and, in practice, not reliably fresh enough for the use case it is meant to serve.

5. Option 2: Query the Live Production Databases as the Call Comes In

The second approach is more direct: abandon the replication model entirely and let Pulse query the live production databases at the moment the call arrives. There is no replication lag, because there is no replication. The data Pulse reads is the same data the bank’s transactional systems are working with right now, because Pulse is reading from the same source. Freshness is guaranteed by definition.

This approach also fails at scale, and the failure mode is more dangerous than the one in Option 1, because it does not manifest as stale data. It manifests as degraded payment processing.

To understand why, it is necessary to understand how relational databases handle concurrent reads and writes. Many OLTP (online transaction processing) databases, most prominently SQL Server and Db2 in their default read committed configurations, use shared locks, also called read locks, to manage concurrent access to rows and pages. (Oracle, MySQL’s InnoDB and PostgreSQL take a multi version approach to reads instead, which section 6 returns to, though even there heavy ad hoc reads on the primary compete with the write path for resources.) Under the locking model, when a query reads a row, it acquires a shared lock on that row for the duration of the read. A shared lock is compatible with other shared locks, so multiple readers can access the same row simultaneously without blocking each other. But a shared lock is not compatible with an exclusive lock, which is what a write operation requires. A write must wait until all shared locks on the target row have been released before it can proceed. This is the fundamental concurrency model of lock based relational databases, and it exists for a good reason: it ensures that readers see a consistent view of data that is not mid modification. The cost of that consistency guarantee is that reads and writes are not fully independent.
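The compatibility rules described above can be written down in a few lines. This is a minimal sketch of the rules only; a real lock manager also handles queuing, deadlock detection and lock escalation.

```python
# Minimal sketch of shared/exclusive lock compatibility in a lock-based
# read committed database. The matrix is the whole story: writers and
# readers block each other, readers do not block readers.

SHARED, EXCLUSIVE = "S", "X"

COMPATIBLE = {
    (SHARED, SHARED): True,        # many readers may hold the row at once
    (SHARED, EXCLUSIVE): False,    # a writer must wait for readers
    (EXCLUSIVE, SHARED): False,    # a reader must wait for a writer
    (EXCLUSIVE, EXCLUSIVE): False, # writers serialise against each other
}

def can_acquire(requested, held_locks):
    """True if `requested` is compatible with every lock already held on the row."""
    return all(COMPATIBLE[(held, requested)] for held in held_locks)

# Two concurrent context-assembly reads on the same account row: fine.
print(can_acquire(SHARED, [SHARED]))             # True
# A payment write arriving while those reads hold shared locks: blocked.
print(can_acquire(EXCLUSIVE, [SHARED, SHARED]))  # False
```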

In a low concurrency environment, this trade off is rarely visible. Reads complete quickly, locks are released, writes proceed with negligible delay. In a high throughput banking environment, where thousands of transactions per second are competing for access to the same set of account rows, adding a new class of read traffic directly into that contention pool has measurable consequences. Under the locking model, every time a Pulse query reads a client’s account data to prepare a contact centre briefing, it acquires shared locks on the rows it touches, and every write transaction targeting those same rows, whether a payment completing, a balance updating or a fraud flag being set, must wait until those locks are released. At Capitec’s scale, with a large number of contact centre calls arriving simultaneously, the aggregate contention introduced by Pulse queries onto the production write path would generate a consistent and material increase in transaction tail latency. That is not a theoretical risk. It is a predictable consequence of the concurrency model, and even on MVCC based engines, heavy ad hoc read traffic on the primary still competes with the write path for CPU, buffer cache and I/O. The contention cannot be engineered away without moving the reads somewhere else entirely.

Option 2 solves the data freshness problem while introducing a write path degradation problem that, in a regulated banking environment, is not an acceptable trade off. The integrity and predictability of payment processing is not something that can be compromised in exchange for better contact centre context.

6. Option 3: Redesign the Foundations

Capitec arrived at a third path, and it was available to us for a reason that has nothing to do with being smarter than the engineers at other banks or at the vendor platforms. It was available because Capitec owns its source code. The entire banking stack, from the core transaction engine to the client facing application layer, is built internally. There is no third party core banking platform, no licensed system with a vendor controlled schema and a contractual restriction on architectural modification. When we decided that real time operational intelligence was worth getting right at a foundational level, we had the ability to act on that decision across the entire estate.

The central architectural choice was to build every database in the bank on Amazon Aurora PostgreSQL, with Aurora read replicas provisioned with dedicated IOPS rather than relying on Aurora’s default autoscaling burst IOPS model (with conservative min ACUs). Aurora’s architecture is important here because it separates the storage layer from the compute layer in a way that most traditional relational databases do not. In a conventional RDBMS, a read replica is a physically separate copy of the database that receives a stream of changes from the primary and applies them sequentially. Replication lag in a conventional model accumulates when write throughput on the primary outpaces the replica’s ability to apply changes. In Aurora, the primary and all read replicas share the same underlying distributed storage layer. A write committed on the primary becomes visible to all replicas almost immediately, because they are all reading from the same storage volume. The replica lag in Aurora PostgreSQL under normal operational load is measured in single digit milliseconds rather than seconds or minutes, and that difference is what makes the contact centre use case viable.

Pulse has access exclusively to the read replicas. By design and by access control, it cannot touch the write path at all. This is the critical architectural guarantee. The read replicas are configured with access patterns, indexes and query plans optimised specifically for the contact centre read profile, which is structurally different from the transactional write profile the primary instances are optimised for. Because Aurora’s read replicas use PostgreSQL’s MVCC (multi version concurrency control) architecture, reads on the replica never acquire shared locks that could interfere with writes on the primary. MVCC works by maintaining multiple versions of each row, so that every concurrent transaction can read a consistent snapshot of the data without blocking writers. When Pulse queries a read replica, PostgreSQL serves it a snapshot of the data as it existed at the moment the query started, without acquiring any row level locks whatsoever. There is no mechanism by which Pulse’s read traffic can cause a write on the primary to wait.
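The snapshot behaviour is worth seeing in miniature. The toy store below is not PostgreSQL’s implementation, just the core MVCC idea: writers append new row versions stamped with a commit sequence number, and a reader pinned to a snapshot sees only versions committed before it started.

```python
# Toy MVCC store illustrating why readers never block writers: a write
# simply adds a newer version; it never has to wait for in-flight reads.

class MvccStore:
    def __init__(self):
        self._versions = {}   # key -> list of (commit_seq, value)
        self._commit_seq = 0

    def write(self, key, value):
        # Writers never wait for readers: they append a newer version.
        self._commit_seq += 1
        self._versions.setdefault(key, []).append((self._commit_seq, value))

    def snapshot(self):
        return self._commit_seq  # the point in time a reader will see

    def read(self, key, snap):
        # Return the newest version committed at or before the snapshot.
        visible = [v for seq, v in self._versions.get(key, []) if seq <= snap]
        return visible[-1] if visible else None

store = MvccStore()
store.write("acct:42:balance", 100)
snap = store.snapshot()              # a Pulse-style query starts here
store.write("acct:42:balance", 80)   # a payment commits mid-query, unblocked
print(store.read("acct:42:balance", snap))              # 100 (consistent view)
print(store.read("acct:42:balance", store.snapshot()))  # 80  (current state)
```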

Beyond the relational data layer, all operational log files across the platform are coalesced into Amazon OpenSearch, giving Pulse a single, indexed view of the bank’s entire log estate without requiring it to fan out queries to dozens of individual service logs scattered across the infrastructure. App diagnostics, service health events, error patterns and system signals are all searchable through a single interface, and OpenSearch’s inverted index architecture means that the kinds of pattern matching and signal correlation queries that Pulse needs to produce a useful agent briefing execute in milliseconds against a well tuned cluster, rather than in seconds against raw log streams.
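The reason those queries are fast is the inverted index itself. The sketch below is a deliberately minimal version of the structure OpenSearch (via Lucene) builds: each term maps to the set of documents containing it, so a multi-signal query becomes a set intersection rather than a scan of raw log streams. Document IDs and log lines here are invented.

```python
# Minimal inverted index: term -> set of document IDs. An AND query is a
# set intersection, which is why it runs in milliseconds rather than
# scanning every log line.

from collections import defaultdict

def build_index(docs):
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index, *terms):
    """Documents containing all of the given terms (an AND query)."""
    sets = [index.get(t.lower(), set()) for t in terms]
    return set.intersection(*sets) if sets else set()

logs = {
    "app-7812": "payment declined timeout upstream",
    "app-7813": "login success",
    "svc-0042": "payment gateway timeout retry exhausted",
}
index = build_index(logs)
print(sorted(search(index, "payment", "timeout")))  # ['app-7812', 'svc-0042']
```

A production cluster adds tokenisation, scoring, sharding and time-based index rollover on top, but the core access pattern is this one.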

The result of these architectural choices taken together is a system in which Pulse reads genuinely current data, through a read path that is completely isolated from the write path, with effectively no replication lag, no lock contention and no impact on the transaction processing that is the bank’s core operational obligation.

7. Why a Vendor Could Not Have Delivered This

This is the part of the “world first” argument that the sceptics most consistently miss, and it is worth addressing directly. The question is not whether vendors are capable of building the software components that Pulse uses. Of course they are. Amazon, Salesforce, Genesys and others have engineering teams that are among the best in the industry. The question is whether any vendor could have deployed a Pulse equivalent system successfully against a real world banking estate, and the answer to that question is almost certainly no, for reasons that have nothing to do with engineering capability and everything to do with the constraints that vendors face when they deploy into environments they did not build.

A vendor arriving at a major bank with a Pulse equivalent product would encounter a technology estate built on a core banking platform they do not control, with a CDC replication architecture that is already at or near capacity, and with OLTP databases running a locking model that is baked into the platform and cannot be modified without the platform vendor’s involvement. They would be presented with exactly the choice described in sections 4 and 5 of this article: replicate everything and accept the lag and IOPS cost, or query production and accept the locking risk. Neither of those options produces a system that works reliably at the scale and performance level that a contact centre use case demands, and a vendor has no ability to change the underlying estate to create the third option.

The only path to the architecture described in section 6 is to control the source code of the underlying banking systems and to have made the decision to build the data infrastructure correctly from the beginning, before the specific use case of contact centre AI was on anyone’s roadmap. That is a decision Capitec made, and it is a decision that most banks, running licensed core banking platforms with limited architectural flexibility, are not in a position to make regardless of budget or intent.

8. Pulse Is the First Output of a Broader Capability

It would be a mistake to read Pulse purely as a contact centre initiative, because that is not what it is. It is the first publicly visible output of a platform capability that Capitec has been building for several years, and that capability was designed to serve a much broader set of real time operational decisions than contact centre agent briefings.

The traditional data architecture in banking separates the transactional estate from the analytical estate. The OLTP systems process transactions in real time. A subset of that data is replicated, usually overnight, into a data warehouse or data lake, where it is available to analytical tools and operational decision systems. Business intelligence, fraud models, credit decisioning engines and risk systems are typically built on top of this batch refreshed analytical layer. It is a well understood model and it works reliably, but its fundamental limitation is that every decision made on the analytical layer is made on data that is, at minimum, hours old.

For fraud prevention, that delay is increasingly unacceptable. Fraud patterns evolve in minutes, and a fraud signal that is twelve hours old is, in many cases, a signal that arrived after the damage was done. For credit decisions that should reflect a client’s current financial position rather than yesterday’s snapshot, Capitec Advances is one example where the decision should reflect income received this morning rather than income received last month, and the batch model introduces systematic inaccuracy that translates directly into worse outcomes for clients. For contact centre interactions, it means agents are working with context that may not reflect the last several hours of a client’s experience, which is precisely the window in which the problem they are calling about occurred. Capitec’s investment in the real time data infrastructure that underpins Pulse was motivated by all three of these use cases simultaneously, and Pulse is simply the first system to emerge from that investment in a publicly deployable form. It will not be the last.

9. The Hallucination Trap

So you have liberated your data and AI can access everything. Congratulations. Here is your next problem, and it is one that almost nobody talks about openly: your schema needs a cryptologist to understand it.

I have seen vendor systems where retrieving a simple transaction history for a client across all their accounts requires over four thousand lines of SQL. Four thousand lines. Not because the query is sophisticated. Because the schema has been abused so systematically over so many years that it has become genuinely incomprehensible. Field A means one thing for product type 1 and something entirely different for product type 2. The same column carries different semantics depending on a discriminator flag three joins away that half the team has forgotten exists. The schema was not designed this way deliberately. It accumulated this way, one pragmatic shortcut at a time, over a decade of releases where the path of least resistance was always to reuse an existing column rather than add a new one.

When you point an AI at a schema like this and ask it to answer questions about client behaviour, you are not testing the AI. You are testing whether the AI can reverse-engineer fifteen years of undocumented modelling decisions from first principles, in real time, while a client is waiting on the line. The model is not hallucinating. You have simply given it no chance. The garbage is in the schema, not in the model.

The instinctive response is to fix the schema. That instinct is correct and also career-limiting. A schema remediation project of that scope touches every upstream writer and every downstream consumer simultaneously. It takes years, it breaks things in ways that are difficult to predict and expensive to test, and it competes for the same engineering capacity that is meant to be delivering the AI capabilities the business is waiting for. In practice, it does not happen. The schema persists, the SQL grows longer, and the AI continues to produce answers that are subtly wrong in ways that are difficult to trace back to their root cause.

The better answer is to stop trying to fix the past and build a clean projection of it instead. You take the ugly SQL, you encapsulate it in a service, and you publish the output onto a Kafka topic with a logical schema that any engineer can read without a glossary. A transaction is a transaction. An account is an account. The field names mean what they say, consistently, regardless of product type. The complexity of the source system is hidden behind the service boundary, versioned, tested and owned by a team that understands it deeply rather than distributed invisibly across every system that needs to query it.

This approach has compounding benefits that go well beyond making AI queries more reliable. A client’s five year transaction history, retrieved for a tax enquiry, no longer runs as a live query against your core banking database at the worst possible moment. It is read from the Kafka topic, which was built precisely for that read profile and carries no locking risk whatsoever against the transaction processing path. Every change to the underlying logic is isolated to the service, regression tested independently, and deployed without touching the consumers. The operational complexity that used to be everyone’s problem becomes the well-defined responsibility of a single team.
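The projection pattern is simple enough to sketch. Everything below is invented for illustration: the raw column names, the discriminator quirk and the logical schema are hypothetical, and a real implementation would publish the clean record to a Kafka topic via a client library rather than returning it.

```python
# Sketch of the projection-service pattern: the ugly source-system shape
# is translated in exactly one place, and only a clean logical record is
# published downstream. Field names and the quirk are hypothetical.

def to_logical_transaction(raw):
    """Translate one raw core-banking row into the published logical schema.

    All discriminator-flag logic lives here, behind the service boundary,
    instead of being copied into every consumer's SQL.
    """
    # Hypothetical quirk of the kind described above: 'fld_a' means the
    # merchant name for product type 1 but something else for type 2.
    counterparty = raw["fld_a"] if raw["prod_type"] == 1 else raw["fld_b"]
    return {
        "transaction_id": raw["txn_no"],
        "account_id": raw["acct_no"],
        "amount_cents": raw["amt_c"],
        "counterparty": counterparty,
        "type": "card_purchase" if raw["prod_type"] == 1 else "eft_payment",
    }

raw_row = {"txn_no": "T1", "acct_no": "A9", "amt_c": -4599,
           "prod_type": 1, "fld_a": "COFFEE CO", "fld_b": None}
print(to_logical_transaction(raw_row)["counterparty"])  # COFFEE CO
```

Consumers, human or AI, only ever see the output schema, and that is the entire point: the four thousand lines of SQL become one versioned, tested translation function.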

And then, once you have a clean logical schema flowing through a reliable event stream, something shifts. The AI stops guessing. The queries become short and readable. The answers become trustworthy. You stop spending half your prompt engineering budget compensating for schema ambiguity and start asking the questions that actually matter. You can anticipate why a client is calling before they tell you. You can see the shape of their financial life clearly enough to offer them something useful rather than just resolving their immediate complaint. These details are not glamorous. They do not appear in product launch coverage. But they are the actual reason Pulse works, and they are genuinely hard to get right. Get them right, and the AI does not just answer questions. It starts to read your clients’ minds.

The broader lesson here is one that the industry keeps learning the hard way. You do not need to train and retrain models endlessly to compensate for the complexity you have accumulated. You do not need exotic prompt engineering to paper over a schema that was never coherent to begin with. You need to go on a complexity diet and get fit. Simplify the data, clean the contracts, publish logical schemas, and then let the model do what it was actually built to do. The banks that are chasing their tails retraining models to handle their own internal chaos are solving the wrong problem at enormous cost. The ones that do the unglamorous work of cleaning up the foundations find that the model does not need to be retrained at all. It just works. That is the difference between an AI strategy and an AI bill.

10. Where the Insights Come From

Once the data architecture described in section 6 is in place, the inference layer that actually produces the agent briefing is, relatively speaking, the easy part. The decisions Pulse makes — the synthesis of declined transactions, app diagnostics, payment signals and risk indicators into a coherent hypothesis about why a client is calling — are generated by Amazon Bedrock, predominantly using Claude as the underlying model. The assembled context is passed to Claude as a structured prompt, and Claude returns a natural language briefing that the agent reads before picking up. There is no hand-coded decision tree, no brittle rules engine, and no model trained from scratch on Capitec-specific data. The reasoning is emergent from the context, which is exactly what a large language model is designed to do well.
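In outline, the inference step really is this small. The prompt layout and signal names below are illustrative rather than Pulse’s actual format; the Bedrock invocation itself, shown only as a comment, would go through boto3’s bedrock-runtime client.

```python
# Sketch of the context-to-briefing step: assemble the pre-call signals
# into a structured prompt for the model. Signal names and prompt
# wording are invented for illustration.

import json

def build_briefing_prompt(context: dict) -> str:
    """Assemble pre-call context into a structured prompt for the model."""
    return (
        "You are assisting a bank contact centre agent.\n"
        "Using only the signals below, state the most likely reason this "
        "client is calling, with a confidence level, in three sentences.\n\n"
        f"SIGNALS:\n{json.dumps(context, indent=2)}\n"
    )

context = {
    "declined_transactions": [{"merchant": "ONLINE STORE", "reason": "3DS timeout"}],
    "app_diagnostics": ["payment screen error at 14:02"],
    "service_events": ["card gateway degraded 13:55-14:10"],
}
prompt = build_briefing_prompt(context)
print("3DS timeout" in prompt)  # True

# A real invocation would then look roughly like (not executed here):
# bedrock = boto3.client("bedrock-runtime")
# resp = bedrock.converse(modelId=model_id,
#                         messages=[{"role": "user",
#                                    "content": [{"text": prompt}]}])
```

The heavy engineering all happens before this function is called; by the time the prompt is built, the hard problem has already been solved.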

What is worth noting for engineers who have not yet worked with Bedrock at production scale is that the AI layer, once the data problem is solved, introduces almost none of the architectural complexity that the preceding sections describe. Claude reads context, produces a summary, and it does so with a consistency and quality that would have been implausible from any commercially available model even two years ago. The model does not need to be fine-tuned for this use case. It needs to be given good inputs, and the entire engineering effort described in this article is, in a sense, the work required to produce those good inputs reliably and at speed.

The one genuinely frustrating constraint at the AI layer has nothing to do with model capability. AWS accounts are provisioned with default throughput limits on Bedrock — tokens per minute and requests per minute caps that are set conservatively for new or low-volume accounts. At contact centre scale, those defaults are insufficient, and lifting them requires a support request to AWS that, in practice, takes approximately a day to process. For a team trying to move quickly from pilot to production, that is an unexpected bottleneck: the data architecture performs, the model performs, and progress stalls on an account configuration ticket. It is a solvable problem, but it is worth naming because it catches teams off guard when everything else is working.
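While the quota increase is in flight, one pragmatic mitigation is a client-side token bucket that keeps request volume under the account limit instead of burning quota on throttled retries. The limit values below are illustrative, not any account’s actual defaults.

```python
# Client-side token bucket for staying under a per-minute request cap.
# Rate and capacity are illustrative; set them from your actual quota.

import time

class TokenBucket:
    def __init__(self, rate_per_min: float, capacity: float):
        self.rate = rate_per_min / 60.0   # tokens replenished per second
        self.capacity = capacity          # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def try_acquire(self) -> bool:
        # Top up tokens for the time elapsed, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller should queue or back off, not hammer the API

bucket = TokenBucket(rate_per_min=60, capacity=5)
granted = sum(bucket.try_acquire() for _ in range(10))
print(granted)  # 5 -- the burst capacity; the rest must wait for refill
```

It does not raise the ceiling, but it turns hard throttling errors into a predictable queue while the support ticket works its way through.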

11. The World First Verdict

The “world first” claim, properly understood, is this: no comparable system delivers real time situational understanding to contact centre agents at the moment a call is received, built on live operational data with sub second freshness, at the scale of a 25 million client retail banking estate, without any impact on the bank’s production write path. That is a precise claim, and it is defensible precisely because the engineering path that leads to it requires a combination of architectural decisions, including full internal ownership of source code, Aurora PostgreSQL with dedicated read replicas across the entire estate, MVCC based read isolation, and OpenSearch log aggregation, that very few organisations in the world have made, and that could not have been retrofitted to an existing banking estate by a third party vendor regardless of their capability.

Any bank can describe Pulse in a presentation. The vast majority of them cannot deploy it, because they do not control the foundations it depends on. The distinction between the idea and the working system is what the claim is actually about, and on that basis it stands.

References

TechCentral, “Capitec’s new AI tool knows your problem before you explain it”, 5 March 2026. https://techcentral.co.za/capitecs-new-ai-tool-knows-your-problem-before-you-explain-it/278635/

BizCommunity, “Capitec unveils AI system to speed up client support”, 5 March 2026. https://www.bizcommunity.com/article/capitec-unveils-ai-system-to-speed-up-client-support-400089a

MyBroadband, “Capitec launches new system that can almost read customers’ minds”, 2026. https://mybroadband.co.za/news/banking/632029-capitec-launches-new-system-that-can-almost-reads-customers-minds.html

Amazon Web Services, “Amazon Aurora PostgreSQL read replicas and replication”, AWS Documentation. https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/Aurora.Replication.html

Amazon Web Services, “Amazon Connect, cloud contact centre”, AWS Documentation. https://aws.amazon.com/connect/

PostgreSQL Global Development Group, “Chapter 13: Concurrency Control and Multi Version Concurrency Control”, PostgreSQL 16 Documentation. https://www.postgresql.org/docs/current/mvcc.html

Amazon Web Services, “What is Amazon OpenSearch Service?”, AWS Documentation. https://docs.aws.amazon.com/opensearch-service/latest/developerguide/what-is.html

Capitec Bank, “Interim Results for the six months ended 31 August 2025”, 1 October 2025. https://www.capitecbank.co.za/blog/news/2025/interim-results/

Transcripts from the Meeting Where Core Banking was Invented (A Faithful Reconstruction)

A companion piece to Core Banking Is a Terrible Idea. It Always Was.

It is 1972. A group of very serious men in very wide ties are gathered in a very beige conference room. They are about to make decisions that will haunt your change advisory board fifty years from now. The following is a faithful reconstruction of that meeting, because clearly someone needed to write it down.

CHAIRMAN: Gentlemen, we need to computerise the bank. The IBM salesman is outside. He’s been there since Tuesday. Security has tried to remove him twice. He seems to feed on rejection.

HEAD OF TECHNOLOGY (there is only one of him, and he is wearing a short-sleeved shirt, which everyone agrees is suspicious): We need a system that handles everything. Accounts, transactions, interest, fees, reporting. Everything.

CHAIRMAN: Everything?

HEAD OF TECHNOLOGY: Everything. In one place. One machine. One vendor.

CHAIRMAN: Should we perhaps have two vendors? For resilience?

HEAD OF TECHNOLOGY: Absolutely not. We want one vendor. Ideally one who makes hardware that only runs their software, so that if we ever want to leave we have to physically replace the building. That’s what I call commitment.

COMPLIANCE OFFICER: Will this system be easy to change when regulations evolve?

HEAD OF TECHNOLOGY: Change? Why would we change it? We’re going to write it in a language that reads like English was translated into German and then back into English by someone who had only ever read a tax return. That will ensure only a very specific kind of person can maintain it, and that person will be irreplaceable. That’s job security for everyone, really.

COMPLIANCE OFFICER: Visionary.

HEAD OF TECHNOLOGY: We’re going to run everything on a single box. All products. All customers. All transactions. Payments, lending, savings, reporting: one box, all of it, one throat to choke.

OPERATIONS MANAGER: What if the box falls over?

HEAD OF TECHNOLOGY: Then we have a disaster recovery plan.

OPERATIONS MANAGER: How long will recovery take?

HEAD OF TECHNOLOGY: Several hours. Possibly a day. We’re still working on the documentation. The recovery procedure will require a specialist who we will train exactly once and who will subsequently leave for a competitor. His successor will have the manual, which will be wrong by then, but written with such confidence that no one will question it until the actual disaster.

OPERATIONS MANAGER: And we need to test this?

HEAD OF TECHNOLOGY: We will test it once, during the original implementation, and then assume it still works forever. Testing it again would require a change freeze, three committees, a consultant from the vendor, and eight months. So: once.

CHAIRMAN: What about releases? How often will we update this system?

HEAD OF TECHNOLOGY: As rarely as possible. I’m thinking: annually. Maybe biennially if we can get away with it. Every release will be a full programme. Full regression testing across every function. Army of testers. Army of project managers managing the army of testers. A war room. Probably a dedicated floor.

FINANCE DIRECTOR: That sounds expensive.

HEAD OF TECHNOLOGY: It’s not expensive, it’s thorough. The release will take between six and eighteen months. We will begin change freeze approximately four months before the release date, which means the business cannot ship anything new for the better part of a year. This is a feature. It keeps everyone focused.

FINANCE DIRECTOR: Focused on what?

HEAD OF TECHNOLOGY: On not breaking anything. Which is the same as progress, if you think about it correctly.

CHAIRMAN: What do our customers get out of this release?

Silence.

HEAD OF TECHNOLOGY: Better MIS reports.

CHAIRMAN: They won’t see those.

HEAD OF TECHNOLOGY: No, but we will, and they are very clean reports. Very clean. Some of the cleanest reports you’ll ever see. Worth every penny of the hundred million we’re spending.

OPERATIONS MANAGER: How will the operators interact with this system?

HEAD OF TECHNOLOGY: Through a screen. One screen. The screen will have approximately four hundred fields. Many of them will be unlabelled, for security. The operator will learn which combinations of field values correspond to which operations through a combination of formal training, informal knowledge transfer, and trial and error with real money. Experienced operators will develop an almost mystical intuition for it. New operators will occasionally initiate a full principal repayment when they meant to process an interest charge, but that’s a training issue, not a system issue.

COMPLIANCE OFFICER: And there’s no confirmation step?

HEAD OF TECHNOLOGY: There’s a button. The button says OK. It always says OK. It says OK whether you’re creating a savings account or accidentally wiring nine hundred million dollars to the wrong counterparties. We felt a consistent user experience was important.

HEAD OF TECHNOLOGY: Now, about scaling. This system cannot scale horizontally. If we need more capacity we buy a bigger box. When the box reaches its limit we buy the biggest box IBM makes. When we exceed that box, we have a different kind of conversation.

OPERATIONS MANAGER: What kind of conversation?

HEAD OF TECHNOLOGY: The kind where we explain to the board that we need to run batch jobs overnight because we’ve run out of intraday capacity, and that customers cannot see their real balances until morning, and that this is normal and expected and completely fine. The batch run will begin at midnight. If it’s not finished by opening, we delay opening. This will never be a problem because it’s 1972 and banks open at ten.

CHAIRMAN: What happens in fifty years when banks operate around the clock and customers expect real time balances and instant payments from their pocket computers?

Long pause.

HEAD OF TECHNOLOGY: I’m going to stop you there. That is an unreasonable hypothetical and I think you should apologise for raising it.

FINANCE DIRECTOR: How long will implementation take?

HEAD OF TECHNOLOGY: Three years, minimum. Probably five if we want to do it properly.

FINANCE DIRECTOR: And what does ‘doing it properly’ deliver?

HEAD OF TECHNOLOGY: A working system. Same products as before. Same prices as before. Same service model as before. Customers will notice nothing has changed.

FINANCE DIRECTOR: That’s the success case?

HEAD OF TECHNOLOGY: That is the dream. If nobody notices, we’ve done it perfectly. If customers call in to say things are different, something has gone wrong.

FINANCE DIRECTOR: And when will we need to replace this system?

HEAD OF TECHNOLOGY: Never. This is the last system we’ll ever need.

Another long pause.

HEAD OF TECHNOLOGY: Or in about fifteen years, when the business has changed enough that this system can no longer accommodate it, and we’ll need to select a new vendor and begin a new three to five year programme that will produce the same products at the same prices that customers will not notice have changed.

CHAIRMAN: And then?

HEAD OF TECHNOLOGY: And then we’ll do it again. And then again. Each time, we’ll write a requirements document that captures everything the old system did plus everything the business has always wanted, and we’ll select the new vendor who covers the most requirements. And each time, we will have purchased a slightly more modern version of the same architectural mistake.

CHAIRMAN: That sounds like a treadmill.

HEAD OF TECHNOLOGY: I prefer the term upgrade cycle. Much more professional.

COMPLIANCE OFFICER: One final question. Could we instead build separate systems for each domain (payments, lending, identity), each independently deployable, each owning its own data, able to scale on its own terms and change without disrupting everything else?

The room goes very quiet.

HEAD OF TECHNOLOGY: That’s not how banking works.

COMPLIANCE OFFICER: Why not?

HEAD OF TECHNOLOGY: Because banking is complex. And regulated. And the vendors tell us it’s impossible. And frankly if it were possible someone would have done it already.

Forty-five years later, Monzo does exactly this with a team a fraction of the size. But that’s a different meeting.

CHAIRMAN: Very good. Let the IBM man in.

The IBM man has apparently already let himself in. He has been sitting at the head of the table for the last twenty minutes. Nobody is sure when he arrived.

IBM SALESMAN: Gentlemen. I understand you want one vendor, one box, one contract, a language only specialists can read, releases that take eighteen months, a user interface that requires interpretive experience, disaster recovery nobody has tested since 2003, and a licensing model that ensures leaving us is economically indistinguishable from burning the bank to the ground.

He opens his briefcase.

IBM SALESMAN: I have just the thing.

And that, more or less, is how we got here.

The remarkable thing is not that this meeting happened in 1972. The remarkable thing is that some version of it is still happening today, in banks that have had fifty years to notice the pattern, conducted by people clever enough to know better, producing requirements documents that run to hundreds of pages and conclude, with great confidence, that what the bank needs is a newer version of the same decision.

The neobanks walked in, ignored the IBM salesman entirely, and built banks that work. The architecture was never the mystery. The willingness to walk out of the meeting was.

Andrew Baker is Chief Information Officer at Capitec Bank. He writes about enterprise architecture, banking technology, and the infinite patience required to watch the same mistake happen in slow motion at andrewbaker.ninja.

Core Banking Is a Terrible Idea. It Always Was.

The COBOL apocalypse conversation this week has been useful, because it has forced the industry to confront something it has been avoiding for decades. But most of the coverage is stopping at the wrong point. Everyone is talking about COBOL. Nobody is talking about the architectural philosophy that COBOL gave birth to, the one that outlived the mainframe, survived the client server era, made it through the cloud revolution, and is still being sold to banks today with a straight face.

Core banking. The idea that you can package every conceivable banking function into a single platform, run it as a monolithic system, and call that an architecture. It was a reasonable compromise when banking was about cutting a cheque once a month and buying a house every twenty years. It is a completely useless approach to solving modern banking needs, and the fact that it has persisted this long is one of the most remarkable examples of institutional inertia in the history of enterprise technology.

This is a companion to my earlier article on the COBOL announcement that shook IBM’s stock price. That piece was about the death of COBOL as a moat. This one is about the death of the architectural philosophy that COBOL created, and why that second death is the one that actually matters.

1. Where Core Banking Came From

1.1 The Original Problem Was Real

To understand why core banking became so entrenched, you need to go back to where it started. The first computerised core banking systems emerged in the late 1960s and early 1970s, built in COBOL and running on IBM mainframes. The business problem they were solving was genuine and significant: banks had enormous volumes of transactions to process, they were doing it manually or with primitive automation, and they needed centralisation, speed, and reliability.

1.2 A Pragmatic Solution for a Narrow World

The solution was a single centralised computer that handled everything. Account management, transaction processing, interest calculation, fee charging, regulatory reporting, all of it in one place, in one codebase, with batch processing that ran overnight. Transactions were processed in groups at end of day because that was the technical reality of the hardware. Intraday balances required workarounds. The system was only accessible during banking hours. These were not design choices made out of laziness. They were pragmatic responses to the constraints of 1970s computing.

And it worked. For the banking reality of the 1970s, it worked extremely well. A customer visited one branch. They had a current account and perhaps a savings account. They wrote cheques. They took out a mortgage once in their adult life. The entire relationship was narrow, predictable, low volume, and slow moving. A batch processing system that updated balances overnight was entirely adequate for that world. The monolithic architecture made sense because the problem it was solving was genuinely monolithic.

1.3 When the Compromise Became the Convention

The architectural sin came later. It came when that original pragmatic compromise got packaged up, sold as a product, extended by vendors across decades, and eventually canonised as the correct way to build a bank’s technology. The compromise became the convention. The workaround became the standard. And by the time the banking world had changed beyond recognition, the core banking system had become too embedded, too expensive, and too complex to dislodge.

2. The Stuck Thought That Refuses to Die

2.1 How the Monolith Started to Fracture

By the 1980s and 1990s, banking had already changed enough that the monolithic core was showing its limitations. Banks were adding credit cards, mortgages, foreign exchange, investment products. Each of these added specialist systems, often with their own ledgers, their own data models, their own business logic. The monolith started to fracture, not by design but by accretion, as new modules were bolted onto an architecture that was never designed to accommodate them.

2.2 Vendors Responded by Building Bigger Monoliths

Vendors responded by building larger monoliths. Temenos, Oracle FLEXCUBE, Finacle, SAP Banking; these systems attempted to consolidate the sprawl by packaging more and more functionality into a single platform. The pitch was compelling: one vendor, one contract, one system of record, one throat to choke. For a generation of technology leaders who had lived through the nightmare of integrating dozens of incompatible specialist systems, the appeal was understandable.

2.3 The Cost of Change Became Prohibitive

But the packaging created a new problem. These systems were so comprehensive, so interconnected, and so deeply embedded in a bank’s operations that they became impossible to change without enormous risk and cost. Upgrading a core banking system became a multi year programme. Configuring a new product required navigating hundreds of interdependent parameters. Adding a feature that the vendor had not anticipated required either a costly customisation that would be deprecated in the next release, or a multi year wait for the vendor roadmap to catch up with the business need.

The result was that the rigid coupling of product features to core systems became untenable, even as the complexity shielding those systems from change kept growing. Banks found themselves in a situation where the cost of change was so high that they simply stopped changing. Instead they wrapped the core in middleware, built APIs around the edges, and told themselves that digital transformation was happening while the fundamental architecture underneath stayed frozen.

2.4 The Gap Between the Front Door and the Engine Room

Look at the screenshot below. This is an Oracle FLEXCUBE drawdown screen, the kind of interface that bank operations staff use every day in institutions that run major corporate and syndicated lending books. It is not a screenshot from 1998. This is the actual class of interface that was active in Citibank’s operations in August 2020.

[Screenshot: Citibank FLEXCUBE system interface form displaying banking platform fields]

The screen is a form with unlabelled fields, cryptic component codes (COLLAT, COMPINTSF, DEFAUL, DFLFTC), checkbox columns with ambiguous headers, and no affordance whatsoever to indicate what selecting a given combination of options will actually do to real money. In the Citibank case, a contractor attempting to process a routine interest payment on the Revlon term loan instead initiated a full principal repayment of roughly $900 million to the wrong counterparties, with no confirmation step capable of catching the error before it cleared. Citibank recovered most of it after a long legal battle. They did not recover all of it.

This is not a UI design failure. It is an architectural one. The reason the FLEXCUBE interface looks the way it does is that it is trying to expose the full configurability of a system designed to handle thousands of product permutations across every banking function imaginable, through a single generalised screen. The monolith underneath has no concept of what a specific transaction is supposed to do in plain language terms. It has parameters. The operator maps parameters to intent. When that mapping is wrong, the transaction executes exactly as configured, not as intended.

A domain driven architecture inverts this. A payments domain knows what a principal repayment is. It has a specific workflow for authorising it. It has explicit confirmation gates appropriate to the size and type of transaction. It cannot be accidentally triggered by checking the wrong box on a generalised parameter screen because the operation exists as a named, typed, validated function rather than as a configuration state. The modern app on the customer’s phone and the modern interface on the operator’s screen share the same design philosophy because they are both built on top of systems that understand what they are doing. The engine room matches the front door.

The FLEXCUBE screenshot is not an embarrassing historical artefact. Banks running Oracle FLEXCUBE, Temenos T24, and Finastra Fusion are operating interfaces like this today, in production, across their most sensitive wholesale and retail operations. The Citibank incident was the moment the industry glimpsed what operational risk looks like when the complexity of a monolith is projected directly onto the people responsible for operating it.

3. Why the Monolith Cannot Serve Modern Banking

3.1 Modern Banking Has Different Problems

The problems modern banking needs to solve are completely different from the problems of 1970. A customer today might interact with their bank hundreds of times a month, not once. They expect real time balances, instant payments, instant lending decisions, personalised product recommendations, seamless integration with third party services, and the ability to open a new product in under two minutes. They expect the bank to know them across every product and every channel simultaneously. They expect changes to the product to happen in days, not years.

None of these requirements can be met by a batch processing system designed to update balances overnight. None of them are well served by a monolith where changing one component requires testing the entire system. None of them benefit from packaging every banking function into a single platform that can only scale vertically and can only be deployed as a whole.

3.2 The Monolith Cannot Serve Different Masters

The reason this distinction matters is not academic. When you force domains with fundamentally different characteristics into a single architectural model, you end up optimising for none of them and constraining all of them.

Payments needs to scale horizontally and instantly. On a major public holiday, payment volumes can spike to ten times normal load with almost no warning. In a monolithic core, scaling payments means scaling everything; the lending engine, the regulatory reporting module, the customer identity system, all of it, because they share infrastructure, share databases, and share deployment pipelines. You are paying to scale components that do not need to scale because the architecture cannot distinguish between them. And when the scaling event ends, you cannot scale down selectively either.

Lending has the opposite problem. A lending decision engine benefits from rich customer data, complex scoring models, and the ability to iterate rapidly on decision logic as credit conditions change. In a monolithic core, changing the lending decision model requires a change to the core system. That means a change freeze, a full regression test cycle across every function the monolith owns, a release management process designed for a system where everything is coupled to everything else. A lender who wants to tighten credit criteria in response to a deteriorating macroeconomic signal cannot do it in a day. They wait for the next release window.

Regulatory reporting needs something different again: a complete, immutable, auditable record of every state change in the system, queryable across arbitrary time ranges, accurate to the transaction. A monolith that was designed for operational speed is not designed for this. The data model optimised for processing transactions is rarely the data model optimised for reconstructing the history of those transactions for a regulator. Banks running monolithic cores typically solve this by building a separate reporting warehouse that ingests data from the core and tries to reconstruct an audit trail after the fact. That warehouse is always slightly wrong and everyone knows it.

Three domains, three completely different technical requirements, one architecture serving all of them badly. That is not a coincidence. It is what happens when the architecture is selected for comprehensiveness rather than fit.
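To make the reporting point concrete, here is a toy sketch of the event-sourced alternative: an append-only log in which current state is derived from history rather than stored beside it, so the audit trail is the system of record instead of an after-the-fact warehouse reconstruction. All names are illustrative, and a production system would use a durable event store rather than an in-memory list:

```python
import datetime as dt
from dataclasses import dataclass

@dataclass(frozen=True)
class LedgerEvent:
    occurred_at: dt.datetime
    account_id: str
    kind: str    # e.g. "debit" or "credit"
    amount: int  # minor units (cents)

class AuditLog:
    """Append-only record of every state change; never updated in place."""

    def __init__(self) -> None:
        self._events: list[LedgerEvent] = []

    def append(self, event: LedgerEvent) -> None:
        self._events.append(event)

    def between(self, start: dt.datetime, end: dt.datetime) -> list[LedgerEvent]:
        # Reconstruct history for a regulator over an arbitrary time window.
        return [e for e in self._events if start <= e.occurred_at < end]

    def balance(self, account_id: str) -> int:
        # Current state is a pure function of the event history.
        return sum(
            e.amount if e.kind == "credit" else -e.amount
            for e in self._events
            if e.account_id == account_id
        )
```

Because every balance is computed from the same immutable events the regulator queries, the operational view and the audit view cannot drift apart, which is exactly the property the bolted-on reporting warehouse lacks.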

3.3 What Domain Driven Architecture Actually Enables

What serves these requirements is a domain driven architecture where payments is a domain, lending is a domain, identity is a domain, notifications is a domain, and each of these domains owns its own data, exposes its own APIs, publishes its own events, and can be scaled, deployed, and changed independently of every other domain. When the payments domain needs to handle ten times the usual volume on a public holiday, it scales without touching the lending domain. When the product team wants to iterate on the lending decision engine, they do it without a change freeze on the rest of the bank. When regulatory requirements change the way identity must be handled, that change is contained to the identity domain rather than rippling through a monolith in unpredictable ways.

A domain driven architecture treats each of these as an independently deployable unit with clear ownership and explicit interfaces. Domains talk to each other by publishing events that other domains can consume, or by exposing APIs that other domains can call. They do not share databases. They do not share code. They own their own data and are responsible for keeping it consistent. When a domain changes, it publishes a new event schema or a new API version, and the downstream consumers can upgrade on their own schedule.
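A hedged sketch of what "publish a new event schema and let consumers upgrade on their own schedule" can look like in practice. The event name, fields, and versioning rule are invented for illustration:

```python
import json

# A payments-domain event, published with an explicit schema version so that
# downstream domains (notifications, reporting) can upgrade when ready.
def payment_settled_v2(payment_id: str, amount_cents: int, currency: str) -> str:
    return json.dumps({
        "event": "payment.settled",
        "schema_version": 2,
        "payment_id": payment_id,
        "amount_cents": amount_cents,
        "currency": currency,  # new in v2; v1 events implied ZAR
    })

def handle(raw: str) -> str:
    """A consumer in another domain, tolerant of versions it predates."""
    event = json.loads(raw)
    version = event.get("schema_version", 1)
    # An older consumer applies the v1 default; a newer one reads the field.
    currency = event["currency"] if version >= 2 else "ZAR"
    return (
        f"notify: payment {event['payment_id']} settled, "
        f"{event['amount_cents']} {currency}"
    )
```

The contract lives in the event, not in a shared database schema, so the payments domain can ship version 2 without a coordinated release across every consumer.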

3.4 The Clean Sheet Answer

Here is the most telling evidence that core banking as an architectural philosophy is obsolete: virtually every new bank built in the last decade has explicitly rejected it.

Monzo and Starling built around domain driven design and event driven architecture from the ground up. They did not choose this approach because it was fashionable. They chose it because when you are starting from a clean sheet, building a monolithic core banking system is obviously the wrong answer to the problems you are actually trying to solve. The pattern works. It has been proven at scale. The only remaining argument for the monolithic core is switching cost and organisational inertia, and those are not architectural arguments.

4. The Vendor Trap

4.1 The Proposition and Its Hidden Costs

The core banking vendor market deserves its own examination, because it has been one of the primary mechanisms through which the stuck thought has perpetuated itself.

The major core banking vendors have built extraordinarily successful businesses on a straightforward proposition: banking is too complex for you to build yourself, so buy our platform and we will handle the complexity for you. For smaller and mid tier banks without large technology organisations, this proposition was often correct. The cost of building and maintaining a custom core was prohibitive, and the vendor platform, for all its limitations, was more reliable than what the bank could build in house.

But the proposition came with hidden costs that only became apparent over time. Implementation took years. Customisation was expensive and fragile. Upgrades required the kind of programme management that consumed entire technology departments. The vendor roadmap moved at the vendor’s pace, not the bank’s.

4.2 The Dependency Deepens Over Time

Most critically, the more deeply a bank embedded itself in a vendor’s platform, the more expensive it became to ever leave. This is the architectural equivalent of the MIPS pricing problem. Just as MIPS pricing gave IBM leverage over every new workload a bank wanted to run, core banking vendor contracts give those vendors leverage over every new product a bank wants to launch. The bank becomes dependent not just on the platform but on the vendor’s interpretation of what banking should look like, what products should be possible, what data models should exist. The vendor’s architecture becomes the bank’s architecture by default, and the bank’s ability to differentiate on technology becomes increasingly constrained.

4.3 The Complexity Moat Is by Design

The vendors know this. Their licensing models, their implementation dependencies, and their proprietary data formats are all optimised to make the cost of leaving feel higher than the cost of staying. The limitations of the packaged approach have become undeniable, but the switching costs have made change feel impossible. It is a very sophisticated form of the same complexity moat that COBOL built around the mainframe.

5. The Question Nobody in the Room Ever Asked

5.1 The Wrong Question, Asked Expensively

Here is what continues to baffle me about the generation of banking technology leaders who ran these programmes. Somewhere in every one of those core banking replacement journeys, there was a room full of smart people, expensive people, people with decades of enterprise software experience, and collectively they convinced themselves that the central question they needed to answer was: which vendor can replace my current mess with a different vendor’s mess?

5.2 The Requirements Document as a Form of Self Harm

Think about what that process actually looked like. A bank assembles a requirements document. That requirements document runs to hundreds of pages. It covers every feature, every workflow, every edge case, every regulatory obligation, every reporting requirement, every integration point the current system handles, and then, for good measure, it adds everything the business has ever wanted but never got. The team spends months on it. Consultants bill handsomely for it. It becomes a definitive statement of what the bank wants from its technology for the next twenty years.

And at no point in that process does the penny drop that writing a twenty year feature wishlist for a monolithic vendor platform is itself a form of institutional self harm. The very act of producing that document is an admission that you have outsourced your architectural thinking to a sales catalogue. You are not designing a technology strategy. You are shopping.

5.3 The Quote Arrives and Nobody Asks the Hard Question

Then the quotes come in. The implementation is going to take three years, minimum. The risk profile is enormous. The cost is tens of millions of dollars before a single client sees any benefit. The programme will consume your best people, freeze your change pipeline, and create the kind of organisational stress that makes good engineers leave. And somewhere in that moment, in every single one of those programmes, someone should have stood up and asked the question that apparently nobody ever did: what is in this for our clients?

Not for the vendor. Not for the compliance team. Not for the CIO who wants a modern system on their CV. For the clients.

5.4 Better MIS Is Not Client Value

The honest answer to what the programme delivers for clients is, in almost every case: nothing they will ever see.

Better MIS reports. Slicker ETL. A compliance model that does not require a spreadsheet army to maintain. Internal dashboards that no longer require a PhD to operate. These are real improvements. They are not nothing. But they are improvements the client will never encounter, never feel, and never benefit from in any direct way.

The standard defence is that operational resilience is client value. That a bank which cannot see its own positions clearly will eventually harm its clients through failure, mis-selling, or collapse. That argument is not wrong. But it proves too little. Operational resilience justifies modernising your reporting layer. It does not justify a three year programme, tens of millions of dollars, and a change freeze across your entire product organisation; and then delivering the same products, at the same price, through the same channels, to clients who were unaware anything had changed.

The clients were going to get the same products, at the same price, with the same service model, through the same channels. The account was still going to be the account. The loan was still going to be the loan. The three year programme, the tens of millions of dollars, the organisational disruption; all of it was being spent on internal plumbing dressed up as transformation.

5.5 The Uncomfortable Question That Was Never Asked

This is what makes the entire core banking replacement era so difficult to defend in retrospect. The industry hired armies of technologists, built enormous internal capability, and then concluded that the highest and best use of that capability was to manage the procurement of vendor platforms. The talent was real. The investment was real. The output was a new set of vendor dependencies that looked marginally more modern than the old ones and came with an implementation trauma that took years to recover from.

If that talent had been directed at building domain capability instead of managing vendor relationships, the outcome would have been categorically different. But that would have required someone in the room to ask the uncomfortable question: why are we paying all these people if the answer is always to pay a vendor to do the actual work?

6. The Exquisite Pain of the Core Banking Upgrade

There is a particular kind of suffering in enterprise technology that has no equivalent elsewhere in the industry. It is the core banking upgrade. And if you have never lived through one, you cannot fully appreciate the combination of expense, duration, risk, and ultimate anticlimax that defines the experience.

A typical core banking upgrade programme runs three to five years. It consumes hundreds of millions of dollars when you account for implementation partners, internal resource, parallel running, testing infrastructure, and the inevitable scope creep that accompanies any programme of this complexity. It occupies the attention of the most senior technology leadership in the organisation for its entire duration. It generates a programme governance structure so elaborate that the governance itself becomes a full time job. It dominates board reporting, risk committee agendas, and regulator conversations for years at a stretch.

And then it goes live. And the very best outcome, the outcome the programme director dreams about, the outcome that gets celebrated with a quiet internal announcement and a cautious all staff email, is that nobody noticed. Not the customers. Not the operations teams. Not the regulators. The system behaves exactly as it did before, processes the same transactions, produces the same outputs, and the only visible change is that the version number in the admin console has incremented.

That is the success case. Three years. Tens of millions of dollars. New leadership, because the old CTO either burned out or was quietly moved on sometime around year two. And the headline achievement is: we did not break anything.

The business case that justified the programme spoke of future capability. Once on the new platform, the bank would be able to launch products faster, integrate with partners more easily, respond to regulatory changes with less pain, and unlock features from the vendor roadmap that the old system could not support. Some of those things materialise. Many of them do not, or materialise so slowly that the business opportunity they were meant to serve has already been captured by someone else.

Because here is the uncomfortable truth about the features you were going to unlock after the upgrade: if a customer wanted them badly enough, they left before you finished the programme. The customers who stayed either do not care about those features, have adapted their behaviour around their absence, or are locked in by switching costs of their own. You spent three years and a hundred million dollars catching up to a market position you should have held five years ago, in a world that has moved on since then.

The cruellest part is the competitive dynamics. When a major bank announces a core banking replacement programme, the correct response from every competitor is quiet celebration. Not because you wish them ill, but because you know what is coming. That bank is about to disappear into an internal programme for three to five years. Their best technology people will be consumed by the upgrade. Their ability to ship new products will be constrained by change freezes. Their senior leadership will be distracted. Their risk appetite will contract because nobody wants a major incident during a core migration. They will emerge on the other side with a swollen cost base, exhausted teams, and technology leadership that has largely turned over, and they will need another year just to rediscover what they were doing before the programme started. For the duration of the upgrade cycle, the bank has essentially gifted its competitors uncontested market space.

This is the final indictment of the monolithic core banking model. It does not just constrain your architecture. It periodically forces you to consume enormous organisational energy in programmes whose best case outcome is standing still, while your competitors who made different architectural choices are shipping features every sprint. The upgrade treadmill is not an accident. It is a structural consequence of the architecture, and it will not end until the architecture changes.

7. The Replacement Trap

The question, then, is what to do about it. And this is where the industry has consistently chosen the wrong answer.

When a bank decides to replace its core banking system, it produces a requirements document. That document captures everything the current system does, everything the business has wanted but never received, and everything compliance has been asking for since the last major programme. It is a comprehensive statement of what the bank needs from its technology for the next twenty years. The selection process that follows evaluates vendors against that document. The winning vendor is the one whose platform covers the most requirements at an acceptable cost with a credible implementation track record.

Notice what has just happened. The bank has selected a new system based on its ability to replicate the functional footprint of the old one. The selection criterion is comprehensiveness. The winner is the most capable monolith available. The bank has spent eighteen months and several million dollars in consulting fees to procure a more modern version of the architectural problem it was trying to escape.

Ripping out one monolithic core and replacing it with another, even a genuinely more capable one, does not change the fundamental constraint. The constraint is not the age of the technology. The constraint is the architectural model: a single system that owns all the data, embeds all the business logic, and must be deployed, upgraded, and changed as a whole. A newer monolith has a cleaner codebase and a more modern API layer. It has exactly the same properties that will make it unmovable in fifteen years. You will be having this conversation again. The vendors know this. It is not a flaw in their business model. It is the business model.

The problem is not the technology. The problem is the decision to package every banking function into a single platform and call that an architecture. No amount of re-platforming resolves that decision. It only defers the reckoning.

7.1 The Strangler Fig Is the Only Approach That Consistently Works

The right approach is to strangle the core incrementally, building new capabilities on modern domain architecture alongside the existing system, migrating workloads progressively, and shrinking the footprint of the legacy core until it either becomes so small that replacement is trivial or its remaining functions are so well isolated that they can be maintained indefinitely without constraining everything else. This is the strangler fig pattern applied to banking, and it is the only migration approach that consistently produces good outcomes at acceptable risk.
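The routing mechanics of the strangler fig can be sketched in a few lines. This is a deliberately minimal illustration, not any bank's implementation: the capability names and the migration flag are hypothetical, and in a real estate this layer would live in an API gateway or service mesh rather than application code.

```python
# Hypothetical strangler-fig routing facade. Each banking capability is
# directed to either the new domain architecture or the legacy core,
# controlled by a per-capability migration flag. All names illustrative.

LEGACY = "legacy-core"
DOMAIN = "domain-service"

# Capabilities migrated so far. This set grows over time; the legacy
# core's footprint shrinks correspondingly until replacement is trivial.
MIGRATED = {"payments", "card-issuing"}

def route(capability: str) -> str:
    """Return the system that should handle this capability today."""
    return DOMAIN if capability in MIGRATED else LEGACY

# Migrated workloads land on the new architecture...
assert route("payments") == DOMAIN
# ...while unmigrated ones keep hitting the legacy core, untouched.
assert route("mortgages") == LEGACY
```

The point of the pattern is that the flag flips one capability at a time, so risk is bounded per migration rather than concentrated in a single cutover weekend.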

7.2 Why the Argument Keeps Getting Ignored

The reason this argument has been ignored for so long is partly organisational, partly commercial, and partly something more primitive than either.

The organisational problem is that incremental transformation is harder to fund and harder to explain to a board than a single big programme. A “core banking replacement” is a project. An “incremental domain migration” is a journey with no obvious end date, and boards are more comfortable writing cheques for projects than journeys. But that framing only explains the governance mechanics. It does not explain the appetite for it, the genuine enthusiasm that intelligent people bring to these programmes. That comes from somewhere else.

Everyone wants to be at the big reveal. The go live. The moment the new system takes over and the old one is switched off forever. There is a ribbon cutting energy to a major core banking replacement that a decade of quiet domain migration can never replicate. Careers are built around it. It has a name and a programme board and a war room and a launch date. It is the kind of thing you put at the top of your CV and describe in keynotes. The people running these programmes are not stupid and they are not cynical. They genuinely believe in the transformational moment. They want to be the ones who finally fixed it.

The problem is that technology is not a rehabilitation programme. You do not get ill, go to rehab, come out cured, and return to normal life. Banks that treat core banking replacement as rehab are back in the same room five years after go live, wondering why the new system is already calcifying, already accumulating the technical debt that made the old one unbearable, already generating the requirements document for the next replacement. The cycle is not a coincidence. It is what happens when you try to solve a continuous problem with a discrete event.

Technology change is an infinite game. There is no big reveal. There is no moment at which the architecture is finished and the organisation can declare victory and go home. The neobanks that built domain driven architectures did not do it in one programme. They did it continuously, deploying changes daily, evolving domain boundaries as the business changed, treating the architecture as a living thing that requires constant attention rather than a project that can be completed and closed. That is not a less exciting way to run technology. It is the only way that actually works.

The commercial dimension reinforces this dynamic, and it does not require conspiracy to explain. The vendors who benefit most from large replacement programmes are the same vendors with the most presence at industry events, the most investment in thought leadership, and the most seats at the tables where technology strategy gets discussed. They do not need to coordinate. They just need to keep showing up, sponsoring the conferences, funding the research, and making the case for the kind of programmes that happen to be good for their revenue model. A vendor whose business depends on large implementation programmes has no commercial incentive to sell you a philosophy of incremental continuous change. The big reveal is very good for their business. The infinite game is not. The result is an industry conversation that is shaped, without anyone necessarily intending it, by the organisations with the most to gain from the status quo.

8. The Objections That Miss the Point

8.1 The Wheelchair Argument

There is a class of objection to the domain migration argument that surfaces reliably in every serious conversation about replacing COBOL and core banking systems. It goes something like this: AI will never replicate the performance of those undocumented hand optimised assembly routines that the COBOL engineers wrote to extract maximum throughput from the mainframe. Therefore AI cannot replace COBOL.

This objection is technically accurate and entirely irrelevant. It is the equivalent of arguing that electric vehicles cannot replace combustion engines because an electric motor cannot replicate the precise combustion dynamics of a finely tuned V8. You are not trying to replicate the V8. You are trying to move people from one place to another, and electric motors do that better by most measures that actually matter.

The assembly optimisation objection assumes that the goal is to take the existing system and make AI run it faster. It is not. The argument in this article, and the argument the neobanks proved in practice, is that you do not start with a wheelchair and strap a rocket to it. You build something that was designed from the beginning to go fast. A domain built specifically to process payments at scale, running on modern hardware, with a data model designed for throughput rather than retrofitted from 1972, does not have the same performance constraints as a COBOL batch processor. It does not need to replicate the assembly tricks because it does not have the architectural problems those tricks were invented to solve.

The question is not whether AI can match the performance of a hand optimised mainframe routine. The question is whether a modern domain architecture can meet the actual performance requirements of a modern bank. The answer to the second question has been demonstrated conclusively. The answer to the first question is irrelevant to anything anyone is actually trying to build.

8.2 The CAP Theorem Objection

The other objection that follows closely is the distributed systems consistency argument. ACID transactions are impossible on distributed systems. You cannot have the same transactional guarantees across domain boundaries that you have inside a monolith. Therefore domain driven architecture cannot replace a core banking system.

This one has the additional quality of being technically outdated.

ACID compliance within a single domain is not meaningfully different from ACID compliance within a monolith, because a well designed domain owns its own data store and processes its own transactions with full consistency guarantees. The complexity arises at domain boundaries, in operations that span multiple domains, and this is where the objection has historically had some validity.

What the objection misses is that this is a solved architectural problem, and has been for years. Saga patterns manage distributed transactions across domain boundaries by breaking them into a sequence of local transactions with compensating transactions for rollback. Event sourcing provides an immutable audit log of every state change, giving you the historical consistency that regulators require without forcing every domain to share a database. Eventual consistency, applied correctly to the parts of a banking system where it is appropriate, is not a concession. It is a design choice that matches the actual consistency requirements of the operation in question.
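The saga mechanics described above can be shown with a toy sketch, under the assumption that in-process Python functions stand in for real domain services: each local transaction is paired with a compensating action, and a failure part-way through triggers the compensations in reverse order.

```python
# Toy saga coordinator: runs (action, compensation) pairs in order; if
# any action fails, already-completed steps are undone in reverse.

def run_saga(steps):
    """Execute local transactions; roll back on failure via compensations."""
    done = []
    try:
        for action, compensate in steps:
            action()                 # local transaction in one domain
            done.append(compensate)  # remember how to undo it
    except Exception:
        for compensate in reversed(done):
            compensate()             # compensating transactions, LIFO
        return "rolled back"
    return "committed"

log = []

def debit_source():
    log.append("debit source account")        # step 1 commits locally

def undo_debit():
    log.append("compensate: credit it back")  # compensating transaction

def credit_destination():
    raise RuntimeError("destination domain unavailable")  # step 2 fails

outcome = run_saga([(debit_source, undo_debit),
                    (credit_destination, lambda: None)])
assert outcome == "rolled back"
assert log == ["debit source account", "compensate: credit it back"]
```

A production saga would persist its state and publish events between steps; the shape of the logic is the same.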

Not every banking operation requires synchronous ACID consistency across the entire system. A payment confirmation can be issued once the payments domain has committed its local transaction and published its event, even before the downstream ledger domain has processed that event. The money has moved. The question is only how quickly all systems agree that it has moved, and modern distributed systems close that window to milliseconds.
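The consistency window described above can be made concrete with a toy model, in which a deque stands in for the message broker and plain dictionaries stand in for each domain's data store. All names are illustrative; the point is only the ordering of commit, confirmation, and downstream catch-up.

```python
# Toy model of eventual consistency across a domain boundary: the
# payments domain commits locally and publishes an event; the ledger
# domain agrees once it consumes that event.
from collections import deque

event_bus = deque()   # stand-in for a message broker
payments_store = {}   # payments domain's own data store (local ACID)
ledger_store = {}     # downstream ledger domain's store

def confirm_payment(payment_id, amount):
    payments_store[payment_id] = amount     # local transaction commits
    event_bus.append((payment_id, amount))  # event published
    return "confirmed"                      # client is told now

def ledger_consume():
    while event_bus:
        payment_id, amount = event_bus.popleft()
        ledger_store[payment_id] = amount   # ledger catches up

assert confirm_payment("p1", 100) == "confirmed"
# The window: the money has moved, the ledger has not yet seen it.
assert "p1" in payments_store and "p1" not in ledger_store
ledger_consume()
assert ledger_store["p1"] == 100            # all systems now agree
```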

The banks that raise the CAP theorem objection as a reason not to migrate are not grappling with a genuine technical constraint. They are using a theoretical framework to avoid a difficult organisational decision. The constraint is not the theorem. The constraint is the willingness to make the transition.

8.3 The Dual-Book Problem

You don’t need to run two books. The migration strategy isn’t a lift-and-shift of the book of record — you leave the existing ledger exactly where it is and use its own APIs to post debits and credits while you extract business logic and products out into separate domains incrementally. The core banking system becomes a dumb ledger temporarily, not something you have to replicate in parallel. The regulatory complexity argument collapses because you’re not running two books — you’re re-routing product logic while the same ledger of record continues to serve as the source of truth throughout. The UBS/Credit Suisse situation is actually a perfect illustration of what not to do: that was a forced, wholesale client migration across incompatible platforms under acquisition pressure. Domain extraction from a living system is a fundamentally different problem.
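The single-book approach can be sketched as follows, with a hypothetical `LegacyLedger` class standing in for the existing core's posting API (not any vendor's real interface): the new domain owns the product logic, but every debit and credit still lands in the old ledger of record.

```python
# Sketch of domain extraction without a second book: product logic moves
# into a new domain, while postings still flow through the legacy core's
# own API, which remains the single ledger of record throughout.

class LegacyLedger:
    """Stand-in for the existing core banking system's posting API."""
    def __init__(self):
        self.entries = []
    def post(self, account, amount):
        self.entries.append((account, amount))

class LoanDomain:
    """New domain owns the product rules; the old core stays the book."""
    def __init__(self, ledger):
        self.ledger = ledger
    def disburse(self, account, principal, fee):
        # Pricing, eligibility, and product logic live here now, but the
        # debits and credits land in the same ledger they always did.
        self.ledger.post(account, principal)
        self.ledger.post("fee-income", fee)

core = LegacyLedger()
LoanDomain(core).disburse("acc-42", 10_000, 150)
assert core.entries == [("acc-42", 10_000), ("fee-income", 150)]
```

Because there is only ever one book, there is no parallel-run reconciliation problem to regulate.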

8.4 Neobanks Only Doing the Easy Parts

Scale before complexity is rational product sequencing, not a ceiling. And neobanks are already tackling things that universal banks genuinely struggle with: BNPL, crypto custody and rails, advanced payment infrastructure, real-time cross-border settlement. These aren’t simple — they’re just differently complex. Low-frequency, high-complexity products like commercial lending, mortgages, and investment banking aren’t being ignored; they’re next. When neobanks get there, they’ll apply the same decomposed, domain-driven architectural philosophy they’ve already proven at scale — which gives them a structural advantage over incumbents who’ve been bolting those products onto monolithic cores for decades.

9. The Moment That Changes the Calculation

9.1 AI Dissolves the “Understanding Is Too Expensive” Argument

The AI announcement that rattled IBM’s stock this week is relevant to this argument in a specific and limited way. The claim that AI can compress the COBOL analysis phase from months to weeks is also a claim that it can compress the domain decomposition analysis of a core banking system. The same capability that traces data flows and business logic dependencies across hundreds of thousands of lines of COBOL can produce the domain boundary map that has historically cost millions in consulting fees before a single line of new architecture is written.

This matters because one of the most durable objections to domain driven migration has been that understanding what the current system actually does is itself prohibitively expensive. Nobody alive knows the full behaviour of a system built over forty years. The documentation is wrong or missing. The institutional knowledge has retired. The only way to know what the monolith does is to run it and watch it, which means the analysis phase alone is a multi year programme before migration even begins. If AI genuinely compresses that phase by an order of magnitude, it removes the single most credible technical objection to starting.

The hard problems remain hard. Data migration, regulatory validation, parallel running, cutover risk, the organisational change management required to shift a bank’s operating model from vendor dependency to domain ownership — none of that gets easier because a language model can read COBOL faster. But the argument that the problem is too complex to even understand clearly has just become significantly less convincing.

9.2 The Honest Conversation Cannot Be Deferred Indefinitely

For technology leaders at established banks, the variables are now shifting in a way that makes continued deferral harder to justify. The neobanks have moved from proof of concept to proven at scale — Monzo and Starling are no longer experiments, they are operational competitors with cost structures and change velocities that the monolithic architecture cannot match. AI is reducing the cost of understanding the problem. And the regulatory environment in most major markets is moving toward open banking requirements that a monolithic core serves badly and a domain architecture serves naturally.

The question is not whether the transition to domain driven banking architecture will happen. The neobank evidence has settled that. The question is whether established banks will lead that transition or react to it after competitors have used it to take market position they cannot recover.

Every year of continued dependency on a monolithic core is a year in which the cost of the eventual transition grows, the competitive gap widens, and the argument for delay gets slightly weaker. The switching cost is real. It has always been real. But it is not a permanent barrier. It is a cost that compounds the longer it is avoided.

9.3 The Architecture Reflects the Belief

The deepest reason core banking persisted is not technical and not commercial. It is a belief, held by successive generations of banking technology leaders, that banking is too complex and too regulated to be built any other way. That belief was never correct. It was a rationalisation that the vendors reinforced because it was commercially useful to them, and that technology organisations accepted because it released them from the responsibility of building genuine architectural capability.

The neobanks disproved it. They built domain driven banks, at scale, under the same regulatory frameworks, with teams a fraction of the size of the technology organisations at major incumbents. They did not discover a secret. They simply refused to accept the premise that the only architecture available to a bank was the one the vendors were selling.

The monolithic core had its era. That era was the 1970s. The question worth asking now is not how to replace it with a better monolith. It is how to build a bank whose architecture reflects what banking has actually become, rather than what it was when the architecture was first designed.


Andrew Baker is Chief Information Officer at Capitec Bank. He writes about enterprise architecture, banking technology, and the future of financial services technology at andrewbaker.ninja.

The Blog Post That Erased $30 Billion from IBM

Anthropic published a blog post on Monday. Not a product launch, not a partnership announcement, not a keynote at a major conference. Just a simple blog post explaining that Claude Code can read COBOL.

IBM proceeded to drop 13%, its worst single-day loss since October 2000, with twenty-five years of stock resilience gone in an afternoon because one AI company quietly updated the world on what its coding tool can do.

Here is what actually happened, and why it matters more than the stock price suggests.

1. We All Knew This Day Was Coming

Nobody in technology is surprised that COBOL is finally meeting its match. The writing has been on the wall for years, and AI was always going to get here eventually. The debate was never if, it was when.

What nobody predicted was how it would actually arrive. We imagined a moment of reckoning — a dramatic product launch, a CEO on stage, a press cycle with gravity proportional to what was being disrupted, something that signalled to the world that a $30 billion industry was about to be restructured. Instead we got a blog post with the energy of a minor feature enhancement, casual, almost blasé, tucked between other announcements. “By the way, Claude Code can now help you modernise COBOL. Here is a playbook. Have fun.”

That casualness is itself the signal. When the death blow to fifty years of mainframe dependency reads like a changelog entry, it tells you something profound about the pace at which AI is normalising disruption. The technology has gotten so capable so fast that genuinely historic announcements are being made in the same tone as a library update. COBOL’s day of reckoning came. It just did not bother to dress up for the occasion.

2. Which Businesses Feel Safe Now?

That question is worth sitting with, because if a blog post can erase $30 billion from IBM in an afternoon, the question every board should be asking is not “is this bad for IBM?” but “what is our equivalent of COBOL?” Every industry has one: the process that has not changed because it was too expensive to understand, the system that has not been replaced because the analysis cost was prohibitive, the business model that persisted not because it was good but because the complexity protecting it was real and formidable.

AI is not just threatening COBOL. It is threatening complexity itself as a competitive moat. Legal firms built on the impenetrability of case law, consulting practices built on the opacity of enterprise systems, insurance actuarial models built on proprietary data interpretation, compliance functions built on regulatory complexity: any organisation whose value proposition includes “we understand the incomprehensible so you do not have to” should be reading Monday’s news very carefully.

I wrote about this dynamic in a different context earlier this year. The Death Star Paradox explores why AI first mover advantage is not a gradient but a cliff. The organisations that move first do not just get ahead, they make the response irrelevant. Monday was a live demonstration of that thesis. Anthropic did not outcompete IBM’s COBOL tools. They made IBM’s COBOL tools feel like they belonged to a different era, and the same technology, framed by a different narrative, landed with completely different force.

3. The Language That Refuses to Die

COBOL is 67 years old, designed in 1959 via a public-private partnership that included the Pentagon and IBM with the goal of creating a universal, plain English programming language for business applications. Most of the developers who wrote it have retired, and most universities stopped teaching it years ago. And yet COBOL handles roughly 95% of ATM transactions in the United States, with hundreds of billions of lines of it running in production every single day, powering banks, airlines, and government systems on every continent.

The developers who built these systems encoded decades of business logic, regulatory compliance, and institutional knowledge directly into the code, with no comments and often no documentation. The only way to understand what a COBOL system actually does is to read it, trace it, and map it: a process that takes teams of specialists months before a single line of replacement code gets written. That analysis cost is exactly why COBOL never got replaced.

4. The MIPS Tax Nobody Talks About

Here is something the financial press almost never covers when they write about mainframes. IBM does not sell mainframe capacity the way cloud providers sell compute. IBM prices mainframe usage in MIPS: Millions of Instructions Per Second, and that pricing model has had profound consequences for the institutions running it.

MIPS pricing means that every workload you run on a mainframe is metered, every transaction, every batch job, every new product feature. As your business grows, your IBM bill grows with it, not because you bought more hardware but because you used more of the hardware you already own. The mainframe also only scales vertically, so you cannot add nodes the way you add cloud instances. When you hit the ceiling, you do not get a queue or a slowdown; you get an outage. Burst protection was therefore not a nice to have on mainframe estates but an architectural necessity, because the alternative was a production outage triggered by demand spikes you could not absorb. Financial institutions spent years engineering around a constraint that simply does not exist on modern horizontally scaled infrastructure.

The consequences of MIPS pricing for customer facing products have been quietly catastrophic. I have spoken to technology leaders at major financial institutions who made deliberate decisions to restrict what products they offer to retail customers specifically to manage MIPS consumption. Think about that for a moment: a bank limiting its own product portfolio not because of regulation, not because of market demand, not because of engineering constraints, but because launching a new feature would push their IBM bill past a threshold their CFO had approved. That is the hidden tax the mainframe imposed on an entire generation of financial innovation, and it is one that COBOL modernisation, done properly, finally removes. When your transaction processing runs on commodity cloud compute, burst protection comes standard, you pay for what you use, you scale horizontally, and nobody in your product team has to ask whether a new feature is worth the MIPS.
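The asymmetry between the two pricing models can be illustrated with deliberately invented numbers. Neither rate below is a real IBM or cloud price; this is a back-of-envelope sketch of the structural difference, not a cost model.

```python
# Purely illustrative comparison of a metered, per-MIPS pricing model
# against commodity pay-per-use compute. Both rates are made up.

MIPS_RATE = 3000.0   # hypothetical $ per MIPS per month
VCPU_RATE = 0.05     # hypothetical $ per vCPU-hour

def mainframe_bill(mips_consumed):
    # Metered model: the bill tracks instructions executed on hardware
    # you already own, so every new feature has a marginal MIPS cost.
    return mips_consumed * MIPS_RATE

def cloud_bill(avg_vcpus, hours=730):
    # Horizontal model: pay for average consumption; demand spikes are
    # absorbed by adding nodes rather than hitting a vertical ceiling.
    return avg_vcpus * VCPU_RATE * hours

# Under the metered model, doubling transaction volume doubles the
# bill outright, which is why a product launch becomes a CFO question.
assert mainframe_bill(2000) == 2 * mainframe_bill(1000)
```

The sketch shows the product-portfolio effect described above: when compute is metered per instruction, every new customer-facing feature carries a visible, recurring line item before it earns a cent.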

5. What Claude Code Actually Does

Anthropic’s announcement is technically precise. Claude Code can map dependencies across thousands of lines of legacy code, document workflows that have never been written down, identify migration risks, and surface institutional knowledge that would take human analysts months to find. The key claim is that with AI, teams can modernise their COBOL codebase in quarters instead of years, and that single sentence is what sent IBM’s stock into freefall.

If the analysis phase collapses from months to days, the entire economic argument for leaving COBOL alone collapses with it. The reason banks, governments, and airlines kept paying IBM billions was not that they loved mainframes, it was that the alternative required an enormous, expensive, risky analysis programme before any actual migration work could even begin. Remove that barrier and the calculation changes entirely.

6. IBM’s Uncomfortable Position

Here is the part that does not make it into most of the coverage. IBM has been saying this themselves since 2023, having built watsonx Code Assistant for Z specifically to help organisations understand and modernise their COBOL estates. Their own CEO said in mid 2025 that it had wide adoption across their customer base. Nobody moved IBM’s stock 13% when IBM said it.

What moved the stock is that Anthropic said it. A company the market has decided represents the future announced it was disrupting something the market has decided represents the past, and the technical merits became almost irrelevant once that narrative took hold. That is the uncomfortable truth IBM is sitting with today. It is not that their technology is inferior; it is that the market no longer grants them the credibility to define what modern looks like. When an AI startup and a 113 year old technology company make the same claim and the market weights them so differently, you need to reflect on what is a very clear message.

7. The Architectural Sin Nobody Named

The mainframe did more than create a pricing problem. It created an architectural pathology that infected an entire industry and quietly persisted for fifty years. When everything runs on a single box, you stop thinking in systems, stop thinking in domains, stop asking which parts of your business logic belong together, which data belongs to which bounded context, which services should be decoupled from which. You just throw it all on the mainframe and call it an architecture. It is not an architecture. It is fly-tipping with a Service Level Agreement.

The core banking platforms that emerged from the mainframe era inherited this thinking wholesale: monolithic systems that encode every conceivable banking function into a single codebase, with a data model built for batch processing in the 1970s, sold to banks as enterprise architecture when they are really just mainframe thinking with a modern price tag. These platforms have been extraordinarily difficult to displace not because they are good but because replacing them requires untangling the same kind of complexity that makes COBOL modernisation so expensive.

The insidious thing is that the architectural pattern itself became normalised: everything on one box, no domain boundaries, no service separation, no independent scalability. Entire generations of banking technologists grew up thinking this was how enterprise systems were supposed to work, that you built the big thing and managed the big thing and that was the job. It was never the job. It was the compromise you made when the alternative was too expensive to contemplate. With that excuse now weakening, there is no reason left to defend it. Engineers should be waking up to what actually comes after the mainframe: domain driven design, clear service boundaries, independent scalability, systems built around how the business actually works rather than around what a 1970s box could physically accommodate. Stop fly-tipping on a single box and calling it enterprise architecture. The mainframe deserved our respect. It does not deserve our imitation.

8. The Question That Still Needs Answering

Anthropic released a Code Modernisation Playbook alongside the announcement, and it is detailed, technically credible, and genuinely useful for organisations thinking about where to start. What it does not contain is a completed end to end migration of a production core banking system in a regulated environment, validated against the original system and signed off by an external auditor.

That is the proof that matters. The analysis phase getting faster is real, but what happens after the analysis, the data architecture redesign, the regulatory validation, the transaction integrity verification, the performance engineering, that work is still hard. A better map of the territory does not flatten the territory. The organisations that respond to this announcement by treating it as a solved problem will learn that lesson expensively, while the organisations that respond by running careful pilots on bounded parts of their estate, building genuine modernisation competency, and treating AI as an accelerant rather than a replacement for rigorous engineering will be in a fundamentally stronger position three years from now.

9. The Real Signal in the Noise

IBM lost 13% on Monday and 27% in February, its worst monthly performance since 1968. That is not a market making a precise technical assessment of what Claude Code can and cannot do to mainframe revenue. That is a market expressing something it has believed for a while and finally found a reason to act on: that the era of complexity as a competitive moat is ending, and that organisations whose entire value proposition depends on being the only ones who can navigate the obscure, the legacy, and the deliberately impenetrable are facing a structural repricing.

The mainframe era produced extraordinary engineering. It also produced an architectural culture that mistook consolidation for design, confused vertical scale with resilience, and let pricing models constrain what products banks could build for their customers. That era is ending, not because of a blog post; the blog post just made it impossible to pretend otherwise. And it did it in the most devastating way possible: casually, without drama, in the same tone you would use to announce a new keyboard shortcut. That is how you know it is real.

10. This Is Not About IBM

It would be easy to read everything above as a story about one company having a bad February. It is not. IBM is the example, not the subject.

The subject is every business that built its competitive position on the same foundation: embedded complexity so expensive to understand that nobody bothered to challenge it. Switching costs so high that clients stayed not out of satisfaction but out of resignation. Legacy so deep that the cost of leaving exceeded the cost of enduring.

Warren Buffett spent decades actively hunting for exactly this quality. He called it a moat, and he was unambiguous about how much he valued it. “The most important thing,” he said at the 1995 Berkshire Hathaway annual meeting, “is trying to find a business with a wide and long-lasting moat around it, protecting a terrific economic castle with an honest lord in charge of the castle.” He went further in Fortune in 1999: “The key to investing is not assessing how much an industry is going to affect society, or how much it will grow, but rather determining the competitive advantage of any given company and, above all, the durability of that advantage. The products or services that have wide, sustainable moats around them are the ones that deliver rewards to investors.” He was right. For fifty years, complexity was one of the most durable moats in business. If you were the only one who could read the castle map, nobody could storm the gates.

The inversion that is now underway is almost poetic in its completeness. The moat has not been bridged. It has been drained. AI does not need to breach complexity slowly and expensively the way human teams did. It maps it, documents it, and hands you a migration plan before your CFO has finished the first slide of the business case. What made the moat wide was the cost of analysis. That cost is collapsing. And when the cost of analysis collapses, the moat does not just get shallower. It disappears, and it disappears fast.

The businesses that should be uncomfortable are not just mainframe shops. They are any organisation where the honest answer to “why do our clients stay?” includes some version of “because leaving is too hard.” Legal practices whose value lives in the impenetrability of case law. Consulting firms whose leverage depends on the opacity of enterprise systems. Core banking vendors whose renewal rates reflect the terror of replacement rather than the satisfaction of the product. Compliance functions whose headcount is justified by regulatory complexity that AI is beginning to navigate faster than the humans who built careers around it.

Buffett also said something less quoted but more important for this moment: “A moat that must be continuously rebuilt will eventually be no moat at all.” He was warning about the fragility of advantages that depend on external conditions remaining stable. He was right about that too, just not in the way anyone expected. The moats built on complexity did not need to be rebuilt. They needed the world to stay complicated enough to justify them. That world is ending.

What Monday revealed is that the businesses most at risk are not the ones that failed to build moats. They are the ones that built the deepest moats of all and then, over decades, forgot how to do anything else. The complexity that trapped their clients is now trapping them. The castle that was supposed to protect them has become the thing they cannot escape. And a blog post just let everyone see it clearly for the first time.

The question is not whether your moat is under threat. The question is whether, when the water drains, there is something worth defending underneath.

Andrew Baker is Chief Information Officer at Capitec Bank. He writes about enterprise architecture, banking technology, and the future of financial services technology at andrewbaker.ninja.

Is Banking Complexity a Shared Destiny or Is It a Leadership Failure?

If you look back over time at the once great companies, you will see that eventually simplicity gave way to scale. What are some of the risks that drive this?

  • Product sprawl (payments, credit, insurance, business banking)
  • Complexity creep in operations
  • More regulators, more rules, more controls
  • Cultural dilution as headcount grows (nobody can answer the question “what do all those people actually do?”)

This is where many great banks lose their edge. But is this really a shared destiny for all banks, or did the leadership simply fail to lead?

It is a comforting idea: scale is gravity, and operational drag is just what happens when you get big. If that were true, every large organisation would converge on the same outcome: bloated estates, fragile systems, endless governance, and chronic delivery failure. But complexity is not a law of nature. It is a residue. It is what remains when decisions are postponed instead of resolved. It is what accumulates when compromise is allowed to harden into architecture. It is what grows when organisations confuse activity with progress.

Two banks can grow at the same pace, operate under the same regulatory regime, and still end up with radically different realities.

The difference is not growth. The difference is what growth is allowed to amplify.

1. Doesn’t Growth Force Layers, Process, and Bureaucracy?

Growth forces repetition. It does not force bureaucracy.

Bureaucracy appears when organisations stop trusting their systems to behave predictably. It is a defensive response:

  • to systems that are too coupled to change safely
  • to teams that cannot deploy independently
  • to ownership that is unclear or contested
  • to leadership that lacks technical confidence

In well designed environments, growth punishes excess process because process slows feedback. Simplicity becomes a survival trait.

In poorly designed environments, growth rewards control because control is the only way to reduce surprise. Scale does not create bureaucracy. Fear does.

2. Don’t Mature Product Portfolios Naturally Become Complex?

Only if nothing ever truly ends. Product complexity explodes when organisations refuse to delete. Old products linger because retirement is politically painful. New products are layered on top because fixing the original mistake would require accountability.

Over time, the portfolio stops being intentional. It becomes archaeological. Operational complexity emerges when:

  • product boundaries are unclear
  • shared state becomes the default
  • release cycles are coupled
  • incidents span multiple domains by design

Maturity is not the accumulation of features.
Maturity is the accumulation of clarity.

3. Growth Reveals Truth. It Does Not Change It.

This is the uncomfortable part. Scale is not a transformation engine. It is an amplifier. Growth does not turn good systems into bad ones. Growth turns weak assumptions into outages. If you already have:

  • clear domain boundaries, growth multiplies throughput
  • strong technical leadership, growth accelerates decision making
  • predictable delivery, growth increases confidence
  • resilient architecture, growth improves stability

If you already have:

  • unclear ownership, growth magnifies politics
  • entangled systems, growth multiplies blast radius
  • indecision, growth creates paralysis
  • weak architecture, growth exposes fragility

When people say “they will become complex as they grow”, what they are really saying is:
“Growth will expose whatever they have been avoiding.”

4. Why Does Scarcity Force Simplicity, Including in Organisational Design?

Scarcity is not just a financial or technical constraint. It is an organisational one.

When resources are scarce, organisations are forced to make explicit choices about ownership, scope, and accountability. You cannot create twenty product teams for the same savings account and hope simplicity will somehow emerge, either for the client or architecturally. Scarcity enforces:

  • a small number of clearly accountable teams
  • sharply defined product boundaries
  • single sources of truth
  • architectural coherence

When you only have a handful of teams, duplication is obvious and intolerable. Overlap becomes expensive immediately. Decisions are made early, when they are still cheap. Abundance breaks this discipline. With enough people and budget, organisations fragment responsibility:

  • multiple teams own different “aspects” of the same product
  • customer journeys are split across silos
  • data ownership becomes ambiguous
  • architecture starts to mirror reporting lines instead of domains

This is how organisations create massive internal motion while the customer experience degrades and operational risk increases.

Organisational simplicity and architectural simplicity are inseparable. If your org chart is tangled, your systems will be too.

5. Doesn’t Maturity Inevitably Create Complexity?

No, and this is where many organisations lie to themselves.

We routinely confuse an organisation getting older with an organisation becoming mature. They are not the same thing. Maturity does not create complexity, but immaturity does.

As immature organisations age, they do not magically become disciplined, coherent, or deliberate. They reveal their immaturity more clearly. Deferred decisions surface. Leadership vacuums widen. Weak architectural choices harden into constraints.

Organisations are not like bottles of wine that effortlessly reveal sophistication over time. They are more like a box of frogs, full of entropy and constantly needing to be corrected.

Without active leadership, clarity, and constant intervention, entropy takes over. Chaos rushes in where decisions are delayed. Politics replaces strategy when direction is absent.

Time is not a cure. Time is an accelerant.

6. Isn’t Operational Drag Simply the Cost of Regulation and Risk?

Regulation adds constraints. It does not mandate chaos. In practice, regulators reward:

  • clean boundaries
  • deterministic processes
  • auditable flows
  • explicit accountability

What creates regulatory pain is not simplicity but opacity: tangled estates, unclear data lineage, and uncontrolled change paths.

Many organisations hide behind regulation because it is a convenient excuse not to simplify. Compliance does not require complexity. It requires clarity.

7. Don’t All Large Systems Eventually Become Fragile?

Large does not mean fragile. Coupled means fragile. Fragility appears when:

  • multiple products share the same state
  • deployments are linked
  • teams cannot change without coordination
  • ownership is blurred

Resilience comes from clean failure domains.

If systems are isolated, you can grow without multiplying outage impact.
If they are not, every new product increases systemic risk.
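One common engineering expression of clean failure domains is the circuit breaker pattern: when a downstream dependency starts failing, callers fail fast instead of letting the outage cascade across coupled systems. The sketch below is a minimal, illustrative Python version, not any bank's production implementation; the threshold and cooldown values are assumptions to be tuned per dependency.

```python
import time


class CircuitBreaker:
    """Minimal circuit breaker: after `threshold` consecutive failures,
    calls are rejected for `cooldown` seconds instead of cascading the
    outage into every coupled caller."""

    def __init__(self, threshold=3, cooldown=30.0):
        self.threshold, self.cooldown = threshold, cooldown
        self.failures, self.opened_at = 0, None

    def call(self, fn, *args):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.cooldown:
                # Circuit is open: fail fast, protect the caller.
                raise RuntimeError("circuit open: failing fast")
            # Cooldown elapsed: half-open, allow one retry.
            self.opened_at, self.failures = None, 0
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0  # success resets the failure count
        return result
```

Combined with clear ownership and independent deployments, a breaker like this keeps a failure contained within its own domain rather than multiplying blast radius as the estate grows.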

8. Isn’t This Just a Different Phase of the Same Journey?

This assumes there is only one destination.

It implies every organisation eventually converges on the same architecture, the same cost base, and the same operational burden.

That belief protects poor performance. There are divergent paths:

  • one treats simplicity as a first class constraint
  • the other treats complexity as inevitable and builds governance to manage the damage

These are not phases. They are philosophies.

9. If Complexity Isn’t Inevitable, Why Do So Many Organisations Suffer From It?

Because complexity is what you get when you refuse to choose. It is easier to:

  • keep two systems than retire one
  • add a layer than remove a dependency
  • add a new product than fix the existing ones
  • create a committee than empower a team
  • declare inevitability than admit poor decisions

Operational complexity is not created by growth. It is created by accumulated compromise.

10. So What Actually Creates Operational Complexity?

Almost always the same four forces:

  1. Indecision
    Parallel paths are kept alive to avoid conflict.
  2. Product complexity
    Portfolios grow without pruning.
  3. Poor strategic architectural decisions
    Short term delivery is traded for long term fragility.
  4. No technically viable strategy for co existence
    Products cannot live in isolated domains.

Growth does not cause these. Growth merely exposes them.

11. What Is the Real Destiny?

There is no destiny. There is only design. Organisations that invest in:

  • scarcity as a deliberate constraint
  • value stream aligned organisational design
  • isolation as a scaling strategy
  • strong technical leadership
  • ruthless simplification

do not collapse under growth. They compound efficiency. Those that do not will call their outcomes “inevitable”. They never were.

Banking in South Africa: Abundance, Pressure, and the Coming Consolidation

I wanted to write about the trends we can see playing out, both in South Africa and globally, with respect to: Large Retailers, Mobile Networks, Banking, Insurance and Technology. These thoughts are my own and I am often wrong, so don’t get too excited if you don’t agree with me 🙂

South Africa is experiencing a banking paradox. On one hand, consumers have never had more choice: digital challenger banks, retailer backed banks, insurer led banks, and mobile first offerings are launching at a remarkable pace. On the other hand, the fundamental economics of running a bank have never been more challenging. Margins are shrinking, fees are collapsing toward zero, fraud and cybercrime costs are exploding, and clients are fragmenting their financial lives across multiple institutions.

This is not merely a story about digital disruption or technological transformation. It is a story about scale, cost gravity, fraud economics, and the inevitable consolidation.

1. The Market Landscape: Understanding South Africa’s Banking Ecosystem

Before examining the pressures reshaping South African banking, it is essential to understand the current market structure. As of 2024, South Africa’s banking sector remains concentrated among a handful of large institutions. Together with Capitec and Investec, the major traditional banks held around 90 percent of the banking assets in the country.

Despite this dominance, the landscape is shifting. New bank entrants have gained large numbers of clients in South Africa. However, client acquisition has not translated into meaningful market share. This disconnect between client numbers and actual banking value reveals a critical truth: in an abundant market, acquiring accounts is easy. Becoming someone’s primary financial relationship is extraordinarily difficult.

2. The Incumbents: How Traditional Banks Face Structural Pressure

South Africa’s traditional banking system remains dominated by large institutions that have built their positions over decades. They continue to benefit from massive balance sheets, regulatory maturity, diversified revenue streams including corporate and investment banking, and deep institutional trust built over generations.

However, these very advantages now carry hidden liabilities. The infrastructure that enabled dominance in a scarce market has become expensive to maintain in an abundant one.

2.1 The True Cost Structure of Modern Banking

Running a traditional bank today means bearing the full weight of regulatory compliance spanning Basel frameworks, South African Reserve Bank supervision, anti money laundering controls, and know your client requirements. It means investing continuously in cybersecurity and fraud prevention systems that have evolved from control functions into permanent warfare operations. It means maintaining legacy core banking systems that are expensive to operate, difficult to modify, and politically challenging to replace. It means supporting hybrid client service models that span physical branches, call centres, and digital platforms, each requiring different skillsets and infrastructure.

Add to this the ongoing costs of card payment rails, interchange fees, and cash logistics infrastructure, and the fixed cost burden becomes clear. These are not discretionary investments that can be paused during difficult periods. They are the fundamental operating requirements of being a bank.

2.2 The Fee Collapse and Revenue Compression

At the same time that structural costs continue rising, transactional banking revenue is collapsing. Consumers are no longer willing to pay for monthly account fees, per transaction charges, ATM withdrawals, or digital interactions. What once subsidized the cost of branch networks and back office operations now generates minimal revenue.

This creates a fundamental squeeze where costs rise faster than revenue can be replaced. The incumbents still maintain advantages in complexity based products such as home loans, vehicle finance, large credit books, and business banking relationships. These products require sophisticated risk management, large balance sheets, and regulatory expertise that new entrants struggle to replicate.

However, they are increasingly losing the day to day transactional relationship. This is where client engagement happens, where financial behaviors are observed, and where long term loyalty is either built or destroyed. Without this foundation, even complex product relationships become vulnerable to attrition.

3. The Crossover Entrants: Why Retailers, Telcos, and Insurers Want Banks

Over the past decade, a powerful second segment has emerged: non banks launching banking operations. Retailers, insurers, and telecommunications companies have all moved into financial services. These players are not entering banking for prestige or diversification. They are making calculated economic decisions driven by specific strategic objectives.

3.1 The Economic Logic Behind Retailers Entering Banking

Retailers see five compelling reasons to operate banks:

First, they want to offload cash at tills. When customers can deposit and withdraw cash while shopping or visiting stores, retailers dramatically reduce cash in transit costs, eliminate expensive standalone ATM infrastructure, and reduce the security risks associated with holding large cash balances.

Second, they want to eliminate interchange fees by keeping payments within their own ecosystems. Every transaction that stays on their own payment rails avoids card scheme costs entirely, directly improving gross margins on retail sales.

Third, and most strategically, they want to control payment infrastructure. The long term vision extends beyond cards to account to account payment systems integrated directly into retail and mobile ecosystems. This would fundamentally shift power away from traditional card networks and banks.

Fourth, zero fee banking becomes a powerful loss leader. Banking services drive foot traffic, increase share of wallet across the ecosystem, and reduce payment friction for customers who increasingly expect seamless digital experiences.

Fifth, and increasingly the most sophisticated motivation, they want to capture higher quality client data and establish direct digital relationships with customers. This creates a powerful lever for upstream supplier negotiations that traditional retailers simply cannot replicate. Loyalty programs, whilst beneficial as a source of accurate client data, typically fail to deliver the real time digital engagement needed to shift product. Most loyalty programs are either bar coded plastic cards or apps with low client engagement and high drop off rates, principally due to their narrow value proposition.

Consider the dynamics this enables: a retailer with deep transactional banking relationships knows precisely which customers purchase specific product categories, their purchase frequency, their price sensitivity, their payment patterns, and their responsiveness to promotions. This is not aggregate market research. This is individualised, verified, behavioural data tied to actual spending.

Armed with this intelligence, the retailer can approach Supplier A with a proposition that would have been impossible without the banking relationship: “If you reduce your price by 10 basis points, we will actively engage the 340,000 customers in our ecosystem who purchase your product category. Based on our predictive models, we can demonstrate that targeted digital engagement through our banking app and payment notifications will double sales volume within 90 days.”

This is not speculation or marketing bravado. It is a data backed commitment that can be measured, verified, and contractually enforced.
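To make the arithmetic behind that proposition concrete, here is a hedged back-of-envelope model. Every figure in it (unit price, unit cost, customer count, uplift multiplier) is an illustrative assumption, not data from any retailer or supplier.

```python
# Back-of-envelope model of the retailer-supplier proposition described
# above: a small price concession traded for a data-backed volume
# commitment. All inputs are hypothetical.

def supplier_net_impact(unit_price, unit_cost, baseline_units,
                        price_cut_bps, volume_multiplier):
    """Return supplier gross profit before and after accepting the terms."""
    before = (unit_price - unit_cost) * baseline_units
    new_price = unit_price * (1 - price_cut_bps / 10_000)  # bps -> fraction
    after = (new_price - unit_cost) * baseline_units * volume_multiplier
    return before, after


# Assumed: an R50 item costing R40 to supply, 340,000 customers buying one
# unit each, a 10 basis point price cut in exchange for doubled volume.
before, after = supplier_net_impact(50.0, 40.0, 340_000, 10, 2.0)
# before -> 3,400,000; after -> 6,766,000
```

Under these assumed numbers the supplier roughly doubles gross profit, which is why a measurable, contractually enforceable volume commitment can dominate a small price concession.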

The supplier faces a stark choice: accept the price reduction in exchange for guaranteed volume growth, or watch the retailer redirect those same 340,000 customers toward a competing supplier who will accept the terms.

Traditional retailers without banking operations cannot make this proposition credible. They might claim to have customer data, but it is fragmented, often anonymised, and lacks the real time engagement capability that banking infrastructure provides. A banking relationship means the retailer can send a push notification at the moment of payment, offer instant cashback on targeted products, and measure conversion within hours rather than weeks.

This upstream leverage fundamentally changes the power dynamics in retail supply chains. Suppliers who once dictated terms based on brand strength now find themselves negotiating with retailers who possess superior customer intelligence and the direct communication channels to act on it.

The implications extend beyond simple price negotiations. Retailers can use this data advantage to optimise product ranging, predict demand with greater accuracy, negotiate exclusivity periods, and even co develop products with suppliers based on demonstrated customer preferences. The banking relationship transforms the retailer from a passive distribution channel into an active market maker with privileged access to consumer behaviour.

This is why the smartest retailers view banking not as a side business or diversification play, but as strategic infrastructure that enhances their core retail operations. The banking losses during the growth phase are an investment in capabilities that competitors without banking licences simply cannot match.

3.2 The Hidden Complexity They Underestimate

What these players consistently underestimate is that banking is not retail with a license. The operational complexity, regulatory burden, and risk profile of banking operations differ fundamentally from their core businesses.

Fraud, cybercrime, dispute resolution, chargebacks, scams, and client remediation are brutally complex challenges. Unlike retail where a product return is a process inconvenience, banking disputes involve money that may be permanently lost, identities that can be stolen, and regulatory obligations that carry severe penalties for failure.

The client service standard in banking is fundamentally different. When a retail transaction fails, it is frustrating. When a banking transaction fails and money disappears, it becomes a crisis that can devastate client trust and trigger regulatory scrutiny.

The experience of insurer led banks illustrates these challenges with brutal precision. Building a banking operation requires billions of rand in upfront investment, primarily in technology infrastructure and regulatory compliance systems. Banks launched by insurers have operated at significant losses for several years while building scale. In a market already saturated with low cost options and fierce competition for the primary account relationship, the margin for strategic error is extraordinarily thin.

4. Case Study: Old Mutual and the Nedbank Paradox

The crossover entrant dynamics described above find their most striking illustration in Old Mutual’s decision to build a new bank just six years after unbundling a R43 billion stake in one of South Africa’s largest banks. This is not merely an interesting corporate finance story. It is a case study in whether insurers can learn from their own history, or whether they are destined to repeat expensive mistakes.

4.1 The History They Already Lived

Old Mutual acquired a controlling 52% stake in Nedcor (later Nedbank) in 1986 and held it for 32 years. During that time, they learned exactly how difficult banking is. Nedbank grew into a full service institution with corporate banking, investment banking, wealth management, and pan African operations. By 2018, Old Mutual’s board concluded that managing this complexity from London was destroying value rather than creating it.

The managed separation distributed R43.2 billion worth of Nedbank shares to shareholders. Old Mutual reduced its stake from 52% to 19.9%, then to 7%, and today holds just 3.9%. The market’s verdict: Nedbank’s market capitalisation is now R115 billion, more than double Old Mutual’s R57 billion.

Then, in 2022, Old Mutual announced it would build a new bank from scratch.

4.2 The Bet They Are Making Now

Old Mutual has invested R2.8 billion to build OM Bank, with cumulative losses projected at R4 billion to R5 billion before reaching break even in 2028. To succeed, they need 2.5 to 3 million clients, of whom 1.6 million must be “active” with seven or more transactions monthly.

They are launching into a market where Capitec has 24 million clients, TymeBank has achieved profitability with 10 million accounts, Discovery Bank has over 2 million clients, and Shoprite and Pepkor are both entering banking. The mass market segment Old Mutual is targeting is precisely where Capitec’s dominance is most entrenched.

The charitable interpretation: Old Mutual genuinely believes integrated financial services requires owning transactional banking capability. The less charitable interpretation: they are spending R4 billion to R5 billion to relearn lessons they should have retained from 32 years owning Nedbank.

4.3 The Questions That Should Trouble Shareholders

Why build rather than partner? Old Mutual could have negotiated a strategic partnership with Nedbank focused on mass market integration. Instead, they distributed R43 billion to shareholders and are now spending R5 billion to recreate a fraction of what they gave away.

What institutional knowledge survived? The resignation of OM Bank’s CEO and COO in September 2024, months before launch, suggests the 32 years of Nedbank experience did not transfer to the new venture. They are learning banking again, expensively.

Is integration actually differentiated? Discovery has pursued the integrated rewards and banking model for years with Vitality. Old Mutual Rewards exists but lacks the behavioural depth and brand recognition. Competing against Discovery for integration while competing against Capitec on price is a difficult strategic position.

What does success even look like? If OM Bank acquires 3 million accounts but most clients keep their salary at Capitec, the bank becomes another dormant account generator. The primary account relationship is what matters. Everything else is expensive distraction.

4.4 What This Tells Us About Insurer Led Banking

The Old Mutual case crystallises the risks facing every crossover entrant discussed in Section 3. Banking capability cannot be easily exited and re entered. Managed separations can destroy strategic options while unlocking short term value. The mass market is not a gap waiting to be filled; it is a battlefield where Capitec has spent 20 years building structural dominance.

Most importantly, ecosystem integration is necessary but not sufficient. The theory that insurance plus banking plus rewards creates unassailable client relationships remains unproven. Old Mutual’s version of this integrated play will need to be meaningfully better than Discovery’s, not merely present.

Whether Old Mutual’s second banking chapter ends differently from its first depends on whether the organisation has genuinely learned from Nedbank, or whether it is replaying the same strategies in a market that has moved on without it. The billions already committed suggest they believe the former. The competitive dynamics suggest the latter.

5. Fraud Economics: The Invisible War Reshaping Banking

Fraud has emerged as one of the most significant economic forces in South African banking, yet it remains largely invisible to most clients until they become victims themselves. The scale, velocity, and sophistication of fraud losses are fundamentally altering banking economics and will drive significant market consolidation over the coming years.

5.1 The Staggering Growth in Fraud Losses

The fraud landscape in South Africa has deteriorated at an alarming rate. Looking at the three year trend from 2022 to 2024, the acceleration is unmistakable. More than half of the total digital banking fraud cases in the last three years occurred in 2024 alone, according to SABRIC.

Digital banking crime increased by 86% in 2024, rising from 52,000 incidents in 2023 to almost 98,000 reported cases. When measured by actual cases rather than just value, digital banking fraud more than doubled, jumping from 31,612 in 2023 to 64,000 in 2024. The financial impact climbed from R1 billion in 2023 to over R1.4 billion in 2024, representing a 74% increase in losses year over year.

Card fraud continues its relentless climb despite banks’ investments in security. Losses from card related crime increased by 26.2% in 2024, reaching R1.466 billion. Card not present transactions, which occur primarily in online and mobile environments, accounted for 85.6% of gross credit card fraud losses, highlighting where criminals have concentrated their efforts.

Critically, 65.3% of all reported fraud incidents in 2024 involved digital banking channels. This is not a temporary spike. Banking apps alone bore the brunt of fraud, suffering losses exceeding R1.2 billion and accounting for 65% of digital fraud cases.

The overall picture is sobering: total financial crime losses, while dropping from R3.3 billion in 2023 to R2.7 billion in 2024, mask the explosion in digital and application fraud. SABRIC warns that fraud syndicates are becoming increasingly sophisticated, technologically advanced, and harder to detect, setting the stage for what experts describe as a potential “fraud storm” in 2025.

5.2 Beyond Digital: The Application Fraud Crisis

Digital banking fraud represents only one dimension of the crisis. Application fraud has become another major growth area that threatens bank profitability and balance sheet quality.

Vehicle Asset Finance (VAF) fraud surged by almost 50% in 2024, with potential losses estimated at R23 billion. This is not primarily digital fraud; it involves sophisticated document forgery, cloned vehicles, synthetic identities, and increasingly, AI generated employment records and payslips to deceive financing systems.

Unsecured credit fraud rose sharply by 57.6%, with more than 62,000 fraudulent applications reported. Actual losses more than doubled from the previous year to R221.7 million, demonstrating that approval rates for fraudulent applications are improving from the criminals’ perspective.

Home loan fraud, though slightly down in reported case numbers, remains highly lucrative for organized crime. Fraudsters are deploying AI modified payslips, deepfake video calls for identity verification, and sophisticated impersonation techniques to secure financing that will never be repaid.

5.3 The AI Powered Evolution of Fraud Techniques

The rapid advancement of artificial intelligence has fundamentally changed the fraud landscape. According to SABRIC CEO Andre Wentzel, criminals are leveraging AI to create scams that appear more legitimate and convincing than ever before.

From error free phishing emails to AI generated WhatsApp messages that perfectly mimic a bank’s communication style, and even voice cloned deepfakes impersonating bank officials or family members, these tactics highlight an unsettling reality: the traditional signals that helped clients identify fraud are disappearing.

SABRIC has cautioned that in 2025, real time deepfake audio and video may become common tools in fraud schemes. Early cases have already emerged of fraudsters using AI voice cloning to impersonate individuals and banking officials with chilling accuracy.

Importantly, SABRIC emphasizes that these incidents result from social engineering techniques that exploit human error rather than technical compromises of banking platforms. No amount of technical security investment alone can solve a problem that fundamentally targets human psychology and decision making under pressure.

5.3.1 The Android Malware Explosion: Repackaging and Overlay Attacks

Beyond AI powered social engineering, South African banking clients face a sophisticated Android malware ecosystem that operates largely undetected until accounts are drained.

Repackaged Banking Apps: Criminals are downloading legitimate banking apps from official stores, decompiling them, injecting malicious code, and repackaging them for distribution through third party app stores, phishing links, and even legitimate looking websites. These repackaged apps function identically to the real banking app, making detection nearly impossible for most users. Once installed, they silently harvest credentials, intercept one time passwords, and grant attackers remote control over the device.

GoldDigger and Advanced Banking Trojans: The GoldDigger banking trojan, first identified targeting South African and Vietnamese banks, represents the evolution of mobile banking malware. Unlike simple credential stealers, GoldDigger uses multiple sophisticated techniques: it abuses Android accessibility services to read screen content and interact with legitimate banking apps, captures biometric authentication attempts, intercepts SMS messages containing OTPs, and records screen activity to capture PINs and passwords as they are entered. What makes GoldDigger particularly dangerous is its ability to remain dormant for extended periods, activating only when specific banking apps are launched to avoid detection by antivirus software.

Overlay Attacks: Overlay attacks represent perhaps the most insidious form of Android banking malware. When a user opens their legitimate banking app, the malware detects this and instantly displays a pixel perfect fake login screen overlaid on top of the real app. The user, believing they are interacting with their actual banking app, enters credentials directly into the attacker’s interface. Modern overlay attacks are nearly impossible for average users to detect. The fake screens match the bank’s branding exactly, include the same security messages, and even replicate loading animations. By the time the user realizes something is wrong, usually when money disappears, the malware has already transmitted credentials and initiated fraudulent transactions.

The Scale of the Android Threat: Unlike iOS devices, which benefit from Apple’s strict app ecosystem controls, Android’s open architecture and South Africa’s high Android market share create a perfect storm for mobile banking fraud. Users sideload apps from untrusted sources, delay security updates due to data costs, and often run older Android versions with known vulnerabilities. Android variants hold roughly 70–73% of the global mobile operating system market share as of late 2025; in South Africa, Android’s share is higher still, at roughly 81–82% of mobile devices.

For banks, this creates an impossible support burden. When a client’s account is compromised through malware they installed themselves, who bears responsibility? Under emerging fraud liability frameworks like the UK’s 50:50 model, banks may find themselves reimbursing losses even when the client unknowingly installed malware, creating enormous financial exposure with no clear technical solution.

The only effective defence is a combination of server side behavioural analysis to detect anomalous login patterns, device fingerprinting to identify compromised devices, and aggressive client education. Even this assumes clients will recognize and act on warnings, which social engineering attacks have repeatedly proven they often will not.
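A toy version of that server side behavioural scoring might combine a handful of signals into a single risk score. The feature names, weights, and thresholds below are illustrative only; a real system would use a trained model over far richer telemetry:

```python
def login_risk_score(event: dict, profile: dict) -> float:
    """Toy server-side risk score from behavioural and device signals.

    Weights are illustrative, not a production model. `event` is one login
    attempt; `profile` is the client's historical baseline.
    """
    score = 0.0
    if event["device_id"] not in profile["known_devices"]:
        score += 0.4  # unseen device fingerprint
    if event["country"] != profile["home_country"]:
        score += 0.3  # geographic anomaly
    if event["hour"] not in profile["usual_hours"]:
        score += 0.2  # unusual time of day
    if event["accessibility_service_active"]:
        score += 0.5  # classic overlay/trojan risk signal, if reported
    return min(score, 1.0)

profile = {
    "known_devices": {"dev-1"},
    "home_country": "ZA",
    "usual_hours": set(range(7, 22)),
}
normal = {"device_id": "dev-1", "country": "ZA", "hour": 12,
          "accessibility_service_active": False}
risky = {"device_id": "dev-9", "country": "NG", "hour": 3,
         "accessibility_service_active": True}

print(login_risk_score(normal, profile))  # 0.0
print(login_risk_score(risky, profile))   # 1.0 (capped)
```

In practice the score would gate step-up authentication or a temporary transaction block rather than a hard decline, since every false positive is itself a support call.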

5.4 The Operational and Reputational Burden of Fraud

Every fraud incident triggers a cascade of costs that extend far beyond the direct financial loss. Banks must investigate each case, which requires specialized fraud investigation teams working around the clock. They must manage call centre volume spikes as concerned clients seek reassurance that their accounts remain secure. They must fulfill regulatory reporting obligations that have become increasingly stringent. They must absorb reputational damage that can persist for years and influence client acquisition costs.

Client trust, once broken by a poor fraud response, is nearly impossible to rebuild. In a market where clients maintain multiple banking relationships and can switch their primary account with minimal friction, a single high profile fraud failure can trigger mass attrition.

Complexity magnifies this operational burden in ways that are not immediately obvious. Clients who do not fully understand their bank’s products, account structures, or transaction limits are slower to recognize abnormal activity. They are more susceptible to social engineering attacks that exploit confusion about how banking processes work. They are more likely to contact support for clarification, driving up operational costs even when no fraud has occurred.

In this way, confusing product structures do not merely frustrate clients. They actively increase both fraud exposure and the operational costs of managing fraud incidents. A bank with ten account types, each with subtly different fee structures and transaction limits, creates far more opportunities for confusion than one with a single, clearly defined offering.

5.5 The UK Model: Fraud Liability Sharing Between Banks

The United Kingdom has introduced a revolutionary approach to fraud liability that fundamentally changes the economics of payment fraud. Since October 2024, UK payment service providers have been required to split fraud reimbursement liability 50:50 between the sending bank (victim’s bank) and the receiving bank (where the fraudster’s account is held).

Under the Payment Systems Regulator’s mandatory reimbursement rules, UK PSPs must reimburse in scope clients up to £85,000 for Authorised Push Payment (APP) fraud, with costs shared equally between sending and receiving firms. The sending bank must reimburse the victim within five business days of a claim being reported, or within 35 days if additional investigation time is required.
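The mechanics of the split are easy to make concrete. A minimal sketch of the liability calculation, using only the figures stated above (the £85,000 cap and the 50:50 share):

```python
CAP_GBP = 85_000  # PSR reimbursement cap per APP fraud claim

def app_fraud_liability(loss_gbp: float) -> dict:
    """Split a reimbursable APP fraud loss under the UK PSR rules.

    Victims are reimbursed up to the cap; the cost is shared 50:50
    between the sending (victim's) bank and the receiving bank.
    """
    reimbursed = min(loss_gbp, CAP_GBP)
    return {
        "reimbursed": reimbursed,
        "sending_bank": reimbursed / 2,
        "receiving_bank": reimbursed / 2,
        "client_shortfall": max(loss_gbp - CAP_GBP, 0),
    }

print(app_fraud_liability(20_000))
# each bank bears £10,000
print(app_fraud_liability(120_000))
# reimbursement capped at £85,000; each bank bears £42,500
```

Note what the cap does not change: below £85,000, which covers the overwhelming majority of consumer claims, both banks carry half of every loss, which is precisely what creates the receiving bank’s new incentive.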

This represents a fundamental shift from the previous voluntary system, which placed reimbursement burden almost entirely on the sending bank and resulted in highly inconsistent outcomes. In 2022, only 59% of APP fraud losses were returned to victims under the voluntary framework. The new mandatory system ensures victims are reimbursed in most cases, unless the bank can prove the client acted fraudulently or with gross negligence.

The 50:50 split creates powerful incentives that did not exist under the old model. Receiving banks, which previously had little financial incentive to prevent fraudulent accounts from being opened or to act quickly when suspicious funds arrived, now bear direct financial liability. This has driven unprecedented collaboration between sending and receiving banks to detect fraudulent behavior, interrupt mule account activities, and share intelligence about emerging fraud patterns.

Sending banks are incentivized to implement robust fraud warnings, enhance real time transaction monitoring, and educate clients about common scam techniques. Receiving banks must tighten account opening procedures, monitor for suspicious deposit patterns, and act swiftly to freeze accounts when fraud is reported.

5.6 When South Africa Adopts Similar Regulations: The Coming Shock

When similar mandatory reimbursement and liability sharing regulations are eventually applied in South Africa, and they almost certainly will be, the operational impact will be devastating for banks operating at the margins.

The economics are straightforward and unforgiving. Banks with weak fraud detection capabilities, limited balance sheets to absorb reimbursement costs, or fragmented operations spanning multiple systems will face an impossible choice: invest heavily and immediately in fraud prevention infrastructure, or accept unsustainable losses from mandatory reimbursement obligations.

For smaller challenger banks, retailer or telco backed banks without deep fraud expertise, and any bank that has prioritized client acquisition over operational excellence, this regulatory shift could prove existential. The UK experience provides a clear warning: smaller payment service providers and start up financial services companies have found it prohibitively costly to comply with the new rules. Some have exited the market entirely. Others have been forced into mergers or partnerships with larger institutions that can absorb the compliance and reimbursement costs.

Consider the mathematics for a sub scale bank in South Africa. If digital fraud continues growing at 86% annually and mandatory 50:50 reimbursement is introduced, a bank with 500,000 active accounts could face tens of millions of rand in annual reimbursement costs before any investment in prevention systems. For a bank operating on thin margins with limited capital reserves, this is simply not sustainable.
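That back-of-envelope calculation can be made explicit. Only the 86% growth figure comes from the text; the base loss, time horizon, and liability share below are hypothetical inputs chosen for illustration:

```python
def projected_fraud_cost(base_annual_loss_rand: float,
                         growth_rate: float,
                         years: int,
                         bank_share: float) -> float:
    """Project a bank's annual reimbursement exposure.

    Compounds annual fraud losses at `growth_rate`, then applies the
    bank's liability share under a UK-style mandatory split. All inputs
    except the 86% growth figure are hypothetical.
    """
    loss = base_annual_loss_rand
    for _ in range(years):
        loss *= (1 + growth_rate)
    return loss * bank_share

# Hypothetical: R10m of reimbursable fraud today, 86% annual growth,
# three years out, sending bank bears half under mandatory 50:50 sharing.
print(round(projected_fraud_cost(10_000_000, 0.86, 3, 0.5)))
# roughly R32 million per year, before any prevention spend
```

Even with deliberately modest starting assumptions, compounding at 86% lands a 500,000-account bank in the tens of millions of rand within a few years, which is the article’s point.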

The banks that will survive this transition are those that can achieve the scale necessary to amortize fraud prevention costs across millions of active relationships. Fraud detection systems, AI powered transaction monitoring, specialized investigation teams, and rapid response infrastructure all require significant fixed investment. These costs do not scale linearly with client count; they are largely fixed regardless of whether a bank serves 100,000 or 10 million clients.

Banks that cannot achieve this scale will find themselves in a death spiral where fraud losses and reimbursement obligations consume an ever larger percentage of revenue, forcing them to cut costs in ways that further weaken fraud prevention, creating even more losses. This dynamic will accelerate the consolidation that is already inevitable for other reasons.

The pressure will be particularly acute for banks that positioned themselves as low friction, high speed account opening experiences. Easy onboarding is a client experience win, but it is also a fraud liability nightmare. Under mandatory reimbursement with shared liability, banks will be forced to choose between maintaining fast onboarding and accepting massive fraud costs, or implementing stricter controls that destroy the very speed that differentiated them.

The only viable path forward for most banks will be radical simplification of products to reduce client confusion, massive investment in AI powered fraud detection, and either achieving scale through growth or accepting acquisition by a larger institution. The banks hustling at the margins, offering mediocre fraud prevention while burning cash on client acquisition, will not survive the transition to mandatory reimbursement.

If a bank gets fraud wrong, no amount of free banking, innovative features, or marketing spend will save it. Trust and safety will become the primary differentiators in South African banking, and the banks that invested early and deeply in fraud prevention will capture a disproportionate share of the primary account relationships that actually matter.

6.0 Technology as a Tailwind and a Trap for New Banks

Technology has dramatically lowered the barrier to starting a bank. Cloud infrastructure, software based cores, and banking platforms delivered as services mean a regulated banking operation can now be launched in months rather than years. This is a genuine tailwind and it will embolden more companies to attempt banking.

Retailers, insurers, fintechs, and digital platforms increasingly believe that with the right vendor stack they can become banks.

That belief is only partially correct.

6.1 Bank in a Box and SaaS Banking

Modern platforms promise fast launches and reduced engineering effort by packaging accounts, payments, cards, and basic lending into ready made systems.

Common examples include Mambu, Thought Machine, Temenos cloud deployments, and Finacle, alongside banking as a service providers such as Solaris, Marqeta, Stripe Treasury, Unit, Vodeno, and Adyen Issuing.

These platforms dramatically reduce the effort required to build a core banking system. What once required years of bespoke engineering can now be achieved in a fraction of the time.

But this is where many new entrants misunderstand the problem.

6.2 The Core Is a Small Part of Running a Bank

The core banking system is no longer the hard part. It is only a small fraction of the total effort and overhead of running a bank.

The real complexity sits elsewhere:
• Fraud prevention and reimbursement
• Credit risk and underwriting
• Financial crime operations
• Regulatory reporting and audit
• Customer support and dispute handling
• Capital and liquidity management
• Governance and accountability

A bank in a box provides undifferentiated infrastructure. It does not provide a sustainable banking business.

6.3 Undifferentiated Technology, Concentrated Risk

Modern banking platforms are intentionally generic. New banks often start with the same capabilities, the same vendors, and similar architectures.

As a result:
• Technology is rarely a lasting differentiator
• Customer experience advantages are quickly copied
• Operational weaknesses scale rapidly through digital channels

What appears to be leverage can quickly become fragility unless it is matched with deep operational competence and a rapid, meaningful scale-out to millions of clients. Banking is not a “hello world” exercise: a first banking app has to launch with significant, meaningful differentiation and then scale quickly.

6.4 Why This Accelerates Consolidation

Technology makes it easier to start a bank but harder to sustain one.

It encourages more entrants, but ensures that many operate similar utilities with little durable differentiation. Those without discipline in cost control, risk management, and execution become natural consolidation candidates.

In a world where the core is commoditised, banking success is determined by operational excellence and the scale of the ecosystem clients interact with, not by software selection.

Technology has made starting a bank easier, but it has not made running one simpler.

7. The Reality of Multi Banking and Dormant Accounts

South Africans are no longer loyal to a single bank. The abundance of options and the proliferation of zero fee accounts has fundamentally changed consumer behavior. Most consumers now maintain a salary account, a zero fee transactional account, a savings pocket somewhere else, and possibly a retailer or telco wallet.

This shift has created an ecosystem characterized by millions of dormant accounts, high acquisition but low engagement economics, and marketing vanity metrics that mask unprofitable user bases. Banks celebrate account openings while ignoring that most of these accounts will never become active, revenue generating relationships.

7.1 The Primary Account Remains King

Critically, salaries still get paid into one primary account. That account, the financial home, is where long term value accrues. It receives the monthly inflow, handles the bulk of payments, and becomes the anchor of the client’s financial life. Secondary accounts are used opportunistically for specific benefits, but they rarely capture the full relationship.

The battle for primary account status is therefore the only battle that truly matters. Everything else is peripheral.

8. The Coming Consolidation: Not Everyone Survives Abundance

There is a persistent fantasy in financial services that the current landscape can be preserved with enough innovation, enough branding, or enough regulatory patience. It cannot.

Abundance collapses margins, exposes fixed costs, and strips away the illusion of differentiation. The system does not converge slowly. It snaps. The only open question is whether institutions choose their end state, or have it chosen for them.

8.1 The Inevitable End States

Despite endless strategic options being debated in boardrooms, abundance only allows for a small number of viable outcomes.

End State 1: Primary Relationship Banks (Very Few Winners). A small number of institutions become default financial gravity wells. They hold the client’s salary and primary balance. They process the majority of transactions. They anchor identity, trust, and data consent. Everyone else integrates around them. These banks win not by having the most features, but by being operationally boring, radically simple, and cheap at scale. In South Africa, this number is likely two, maybe three. Not five. Not eight. Everyone else who imagines they will be a primary bank without already behaving like one is delusional.

End State 2: Platform Banks That Own the Balance Sheet but Not the Brand. These institutions quietly accept reality. They own compliance, capital, and risk. They power multiple consumer facing brands. They monetize through volume and embedded finance. Retailers, telcos, and fintechs ride on top. The bank becomes infrastructure. This is not a consolation prize. It is seeing the board clearly. But it requires executives to accept that brand ego is optional. Most will fail this test.

End State 3: Feature Banks and Specialist Utilities. Some institutions survive by narrowing aggressively. They become lending specialists, transaction processors, or foreign exchange and payments utilities. They stop pretending to be universal banks. They kill breadth to preserve depth. This path is viable, but brutal. It requires shrinking the organisation, killing products, and letting clients go. Few management teams have the courage to execute this cleanly.

End State 4: Zombie Institutions (The Most Common Outcome). This is where most end up. Zombie banks are legally alive. They have millions of accounts. They are nobody’s primary relationship. They bleed slowly through dormant clients, rising unit costs, and talent attrition. Eventually they are sold for parts, merged under duress, or quietly wound down. This is not stability. It is deferred death.

8.2 The Lie of Multi Banking Forever

Executives often comfort themselves with the idea that clients will happily juggle eight banks, twelve apps, and constant money movement. This is nonsense.

Clients consolidate attention long before they consolidate accounts. The moment an institution is no longer default, it is already irrelevant. Multi banking is a transition phase, not an end state.

8.3 Why Consolidation Will Hurt More Than Expected

Consolidation is painful because it destroys illusions: that brand loyalty was real, that size implied relevance, that optionality was strategy.

It exposes overstaffed middle layers, redundant technology estates, and products that never should have existed. The pain is not just financial. It is reputational and existential.

8.4 The Real Divide: Those Who Accept Gravity and Those Who Deny It

Abundance creates gravity. Clients, data, and liquidity concentrate.

Institutions that accept this move early, choose roles intentionally, and design for integration. Those that resist it protect legacy, multiply complexity, and delay simplification. And then they are consolidated without consent.

9. The Traits That Will Cause Institutions to Struggle

Abundance does not reward everyone equally. In fact, it is often brutal to incumbents and late movers because it exposes structural weakness faster than scarcity ever did. As transaction costs collapse, margins compress, and clients gain unprecedented choice, certain organisational traits become existential liabilities.

9.1 Confusing Complexity with Control

Many struggling institutions believe that complexity equals safety. Over time they accumulate multiple overlapping products solving the same problem, redundant approval layers, duplicated technology platforms, and slightly different pricing rules for similar clients.

This complexity feels like control internally, but externally it creates friction, confusion, and cost. In an abundant world, clients simply route around complexity. They do not complain, they do not escalate, they just leave.

Corporate culture symptom: Committees spend three months debating whether a new savings account should have 2.5% or 2.75% interest while competitors launch entire banks.

Abundance rewards clarity, not optionality.

9.2 Optimising for Internal Governance Instead of Client Outcomes

Organisations that struggle tend to design systems around committee structures, reporting lines, risk ownership diagrams, and policy enforcement rather than client experience.

The result is products that are technically compliant but emotionally hollow. When zero cost competitors exist, clients gravitate toward institutions that feel intentional, not ones that feel procedurally correct.

Corporate culture symptom: Product launches require sign off from seventeen people across eight departments, none of whom actually talk to clients.

Strong governance matters, but when governance becomes the product, clients disengage.

9.3 Treating Technology as a Project Instead of a Capability

Struggling companies still think in terms of “the cloud programme”, “the core replacement project”, or “the digital transformation initiative”.

These organisations fund technology in bursts, pause between efforts, and declare victory far too early. In contrast, winners treat technology as a permanent operating capability, continuously refined and quietly improved.

Corporate culture symptom: CIOs present three year roadmaps in PowerPoint while engineering teams at winning banks ship code daily.

Abundance punishes stop start execution. The market does not wait for your next funding cycle.

9.4 Assuming Clients Will Act Rationally

Many institutions believe clients will naturally rationalise their financial lives: “They’ll close unused accounts eventually”, “They’ll move everything once they see the benefits”, “They’ll optimise for fees and interest rates”.

In reality, clients are lazy optimisers. They consolidate only when there is a clear emotional or experiential pull, not when spreadsheets say they should.

Corporate culture symptom: Marketing teams celebrate 2 million account openings while finance quietly notes that 1.8 million are dormant and generating losses.

Companies that rely on rational client behaviour end up with large numbers of dormant, loss making relationships and very few primary ones.

9.5 Designing Products That Require Perfect Behaviour

Another common failure mode is designing offerings that only work if clients behave flawlessly: repayments that must happen on rigid schedules, penalties that escalate quickly, and products that assume steady income and stable employment.

In an abundant system, flexibility beats precision. Institutions that cannot tolerate variance, missed steps, or irregular usage push clients away, often toward simpler, more forgiving alternatives.

Corporate culture symptom: Credit teams reject 80% of applicants to hit target default rates, then express surprise when growth stalls.

The winners design for how people actually live, not how risk models wish they did.

9.6 Mistaking Distribution for Differentiation

Some companies believe scale alone will save them: large branch networks, massive client bases, and deep historical brand recognition.

But abundance erodes the advantage of distribution. If everyone can reach everyone digitally, then distribution without differentiation becomes a cost centre.

Corporate culture symptom: Executives tout “our 900 branches” as a competitive advantage while clients increasingly view them as an inconvenience.

Struggling firms often have reach, but no compelling reason for clients to engage more deeply or more often.

9.7 Fragmented Ownership and Too Many Decision Makers

When accountability is diffuse, every domain has its own technology head, no one owns end to end client journeys, and decisions are endlessly deferred across forums.

Execution slows to a crawl. Abundance favours organisations that can make clear, fast, and sometimes uncomfortable decisions.

Corporate culture symptom: Six different “digital transformation” initiatives run in parallel, each with its own budget, none talking to each other.

If everyone is in charge, no one is.

9.8 Protecting Legacy Revenue at the Expense of Future Relevance

Finally, struggling organisations are often trapped by their own success. They hesitate to simplify, reduce fees, or remove friction because it threatens existing revenue streams.

But abundance ensures that someone else will do it instead.

Corporate culture symptom: Finance vetoes removing a R5 monthly fee that generates R50 million annually, ignoring that it costs R200 million in client attrition and support calls.

Protecting yesterday’s margins at the cost of tomorrow’s relevance is not conservatism. It is delayed decline.

9.9 The Uncomfortable Truth

Abundance does not kill companies directly. It exposes indecision, over engineering, cultural inertia, teams working slavishly towards narrow anti-client KPIs and misaligned incentives.

The institutions that struggle are not usually the least intelligent or the least resourced. They are the ones most attached to how things used to work.

In an abundant world, simplicity is not naive. It is strategic.

10. The Traits That Enable Survival and Dominance

In stark contrast to the failing patterns above, the banks that will dominate South African banking over the next decade share a remarkably consistent set of traits.

10.1 Radically Simple Product Design

Winning banks offer one account, one card, one fee model, and one app. They resist the urge to create seventeen variants of the same product.

Corporate culture marker: Product managers can explain the entire product line in under two minutes without charts.

Complexity is a choice, and choosing simplicity requires discipline that most organisations lack.

10.2 Obsessive Cost Discipline Without Sacrificing Quality

Winners run aggressively low cost bases through modern cores, minimal branch infrastructure, and automation first operations. But they invest heavily where it matters: fraud prevention, client support when things go wrong, and system reliability.

Corporate culture marker: CFOs are revered, not resented. Every rand is questioned, but client impacting investments move fast.

Cheap does not mean shoddy. It means ruthlessly eliminating waste.

10.3 Treating Fraud as Warfare, Not Compliance

Dominant banks understand fraud is a permanent conflict requiring specialist teams, AI powered detection, real time monitoring, and rapid response infrastructure.

Corporate culture marker: Fraud teams have authority to freeze accounts, block transactions, and shut down attack vectors immediately. If you get fraud wrong, nothing else matters.

10.4 Speed Over Consensus

Winning organisations make fast decisions with incomplete information and course correct quickly. They ship features weekly, not quarterly.

Corporate culture marker: Teams use “disagree and commit” rather than “let’s form a working group to explore this further”.

Abundance punishes deliberation. The cost of being wrong is lower than the cost of being slow.

10.5 Designing for Actual Human Behaviour

Winners build products that work for how people actually live: irregular income, forgotten passwords, missed payments, confusion under pressure.

Corporate culture marker: Product teams spend time in call centres listening to why clients struggle, not in conference rooms hypothesising about ideal user journeys.

The best products feel obvious because they assume nothing about client behaviour except that it will be messy.

10.6 Becoming the Primary Account by Earning Trust in Crisis

The ultimate trait that separates winners from losers is this: winners are there when clients need them most. When fraud happens, when money disappears, when identity is stolen, they respond immediately with empathy and solutions.

Corporate culture marker: client support teams have real authority to solve problems on the spot, not scripts requiring three escalations to do anything meaningful.

Trust cannot be marketed. It must be earned in the moments that matter most.

11. The Consolidation Reality: How South African Banking Reorganises Itself

South African banking has moved beyond discussion to inevitability. The paradox in the market, abundant options but shrinking economics, is not a transitional phase; it is the structural condition driving consolidation. The forces shaping this are already visible: shrinking margins, collapsing transactional fees, exploding fraud costs, and clients fragmenting their banking relationships while never truly committing as primaries.

Consolidation is not a risk. It is the outcome.

11.1 The Economics That Drive Consolidation

The system that once rewarded scale and complexity now penalises them. Legacy governance, hybrid branch networks, dual technology stacks, and product breadth are all costs that cannot be supported when transactional revenue trends toward zero. Compliance, fraud prevention, cyber risk, KYC/AML, and ongoing supervision from SARB are fixed costs that do not scale with account openings.

Clients are not spreading their value evenly across institutions; they are fragmenting activity but consolidating value into a primary account, the salary account, the balance that matters, the financial home. Others become secondary or dormant accounts with little commercial value.

This structural squeeze cannot be reversed by better branding, faster apps, or more channels. There is only one way out: simplify, streamline, or exit.

11.2 What Every Bank Must Do to Survive

Survival will not be granted by persistence or marketing. It will be earned by fundamentally changing the business model.

Radically reduce governance and decision overhead. Layers of committees and approvals must be replaced by automated controls and empowered teams. Slow decision cycles are death in a world where client behaviour shifts in days, not years.

Drastically cut cost to serve. Branch networks, legacy platforms, and duplicated services are liabilities. Banks must automate operations, reduce support functions, and shrink cost structures to match the new economics.

Simplify and consolidate products. Clients don’t value fifteen savings products, four transactional tiers, and seven rewards models. They want clarity, predictability, and alignment with their financial lives.

Modernise technology stacks. Old cores wrapped with new interfaces are stopgaps, not solutions. Banks must adopt modular, API first systems that cut marginal costs, reduce risk, and improve reliability.

Reframe fees to reflect value. Clients expect free basic services. Fees will survive only where value is clear: credit, trust, convenience, and outcomes, not raw transactions.

Prioritise fraud and risk capability. Fraud is not a peripheral cost; it is a core determinant of economics. Banks must invest in real time detection, AI assisted risk models, and client education, or face disproportionate losses.

Focus on primary relationships. A bank that is never a client’s financial home will eventually become irrelevant.

11.3 Understanding Bank Tiers: What Separates Tier 1 from Tier 2

Not all traditional banks are equally positioned to survive consolidation. The distinction between Tier 1 and Tier 2 traditional banks is not primarily about size or brand heritage. It is about structural readiness for the economics of abundance.

Tier 1 Traditional Banks are characterised by demonstrated digital execution capability, with modern(ish) technology stacks either deployed or credibly in progress. They have diversified revenue streams that reduce dependence on transactional fees, including strong positions in corporate banking, investment banking, or wealth management. Their cost structures, while still high, show evidence of active rationalisation. Most critically, they have proven ability to ship digital products at competitive speed and have successfully defended or grown primary account relationships in the mass market.

Tier 2 Traditional Banks remain more dependent on legacy infrastructure and have struggled to modernise core systems at pace. Their revenue mix is more exposed to transactional fee compression, and cost reduction efforts have often stalled in governance complexity. Technology execution tends to be slower, more project based, and more prone to delays. They rely heavily on consultants to tell them what to do and have a sprawling array of vendor products that are poorly integrated. Primary account share in the mass market has eroded more significantly, leaving them more reliant on existing relationship inertia than active client acquisition.

The distinction matters because Tier 1 banks have a viable path to competing directly for primary relationships in the new economics. Tier 2 banks face harder choices: accelerate transformation dramatically, accept a platform or specialist role, or risk becoming acquisition targets or zombie institutions.

11.4 Consolidation Readiness by Category

Below is a high level summary of institutional categories and what they must do to survive:

Category | What Must Change | Effort Required
Tier 1 Traditional Banks | Consolidate product stacks, automate risk and operations, maintain digital execution pace | High
Tier 2 Traditional Banks | Simplify governance, modernise core systems, drastically reduce costs, consider partnerships | Very High
Digital First Banks | Defend simplicity, scale risk and fraud capability, deepen primary engagement | Medium
Digital Challengers | Deepen primary engagement, invest heavily in fraud and lending capability, improve unit economics | Very High
Insurer Led Banks | Focus on profitable niches, leverage ecosystem integration, accept extended timeline to profitability | High
Specialist Lenders | Narrow focus aggressively, partner for distribution and technology, automate operations | Medium-High
Niche and SME Banks | Stay niche, automate aggressively, consider merger or specialisation | High
Sub Scale Banks | Partner or merge to gain scale, exit non-core activities | Very High
Mutual Banks | Simplify or consolidate early, consider cooperative mergers | Very High
Foreign Bank Branches | Shrink retail footprint, focus on corporate and institutional services | Medium

This readiness spectrum illustrates the real truth: institutions with scale, execution discipline, and structural simplicity have the best odds; those without these characteristics will be absorbed or eliminated.

11.5 The Pattern of Consolidation

Consolidation will not be uniform. The most likely sequence is:

First, sub scale and mutual banks exit or merge. They are unable to amortise fixed costs across enough primary relationships.

Second, digital challengers face the choice: invest heavily or be acquired. Rapid client acquisition without deep engagement or lending depth is not sustainable in an environment where fraud liability looms large and fee income is near zero.

Third, traditional banks consolidate capabilities, not brands. Large banks will more often absorb technology, licences, and teams than merge brand to brand. Duplication will be eliminated inside existing platforms.

Fourth, foreign banks retreat to niches. Global players will prioritise corporate and institutional services, not mass retail banking, in markets where local economics are unfavourable.

11.6 Winners and Losers

Likely Winners: Digital first banks with proven simplicity and low cost models. Tier 1 traditional banks with strong digital execution. Any institution that genuinely removes complexity rather than just managing it.

Likely Losers: Sub scale challengers without lending depth. Institutions that equate governance with safety. Banks that fail to dramatically cut cost and complexity. Any organisation that protects legacy revenue at the expense of future relevance.

12. Back to the Future

Banking has become the new corporate fidget spinner, grabbing the attention of relevance-starved corporates. Most don’t know why they want it or exactly what it is, but they know others have it, so it should be on the plan somewhere.

South African banking is no longer about who can build the most features or launch the most products. It is about cost discipline, trust under pressure, relentless simplicity, and scale that compounds rather than collapses.

The winners will not be the loudest innovators. They will be the quiet operators who make banking feel invisible, safe, and boring.

And in banking, boring done well is very hard to beat.

The consolidation outcome is not exotic. It is a return to a familiar pattern: we will end up back to the future, with a small number of dominant banks, which is exactly where we started.

The difference will be profound. Those dominant banks will be more client centric, with lower fees, lower fraud, better lending, and better, simpler client experiences.

The journey through abundance, with its explosion of choice, its vanity metrics of account openings, and its billions burned on client acquisition, will have served its purpose. It will have forced the industry to strip away complexity, invest in what actually matters, and compete on the only dimensions that clients genuinely value: trust, simplicity, and being there when things go wrong.

The market will consolidate not because regulators force it, but because economics demands it. South African banking is not being preserved. It is being reformed, by clients, by economics, and by the unavoidable logic of abundance.

Those who embrace the logic early will shape the future. Those who do not will watch it happen to them.

And when the dust settles, South African consumers will be better served by fewer, stronger institutions than they ever were by the fragmented abundance that preceded them.

12.1 Final Thought: The Danger of Fighting on Two Fronts

There is a deeper lesson embedded in the struggles of crossover players. Those that pour energy and resources into secondary, loss-making businesses typically do so by redirecting investment and operational focus from their primary business. This redirection is rarely neutral. It weakens the core.

Every rand allocated to the second front, every executive hour spent in strategy sessions, every technology resource committed to banking infrastructure is a rand, an hour, and a resource that cannot be deployed to defend and strengthen the primary businesses that actually generate profit today.

Growth into secondary businesses must be evaluated not just on their own merits, but in terms of how dominant and successful the company has been in its primary business. If you are not unquestionably dominant in your core market, if your primary business still faces existential competitive threats, if you have not achieved such overwhelming scale and efficiency that your position is effectively unassailable, then opening a second front is strategic suicide.

It is like opening another front in a war when the first front is not secured. You redirect troops, you split command attention, you divide logistics, and you leave your current positions weakened and vulnerable to counterattack. Your competitors in the primary business do not pause while you build the secondary one. They exploit the distraction.

Banks that will thrive are those that have already won their primary battle so decisively that expansion becomes an overflow of strength rather than a diversion of it. Capitec can expand into mobile networks because they have already dominated transactional banking. They are not splitting focus; they are leveraging surplus capacity.

Institutions that have not yet won their core market, that are still fighting for primary account relationships, that have not yet achieved the operational excellence and cost discipline required to survive in abundance, cannot afford the luxury of secondary ambitions.

The market will punish divided attention ruthlessly. And in South African banking, where fraud costs are exploding, margins are collapsing, and consolidation is inevitable, there is no forgiveness for strategic distraction.

The winners will be those who understood that dominance in one thing beats mediocrity in many. And they will inherit the market share of those who learned that lesson too late.

13. Author’s Note

This article synthesises public data, regulatory reports, industry analysis, and observed market behaviour. Conclusions are forward-looking and represent the author’s interpretation of structural trends rather than predictions of specific outcomes. The author is sharing opinion and is in no way claiming to have any special insights or to be an expert in predicting the future.

14. Sources

  1. Wikipedia — List of banks in South Africa
    https://en.wikipedia.org/wiki/List_of_banks_in_South_Africa
    (Structure of the South African banking system)
  2. PwC South Africa — Major Banks Analysis
    https://www.pwc.co.za/en/publications/major-banks-analysis.html
    (Performance, digital transformation, and competitive dynamics of major banks)
  3. South African Reserve Bank (SARB) — Banking Sector Risk Assessment Report
    https://www.resbank.co.za/content/dam/sarb/publications/media-releases/2022/pa-assessment-reports/Banking%20Sector%20Risk%20Assessment%20Report.pdf
    (Systemic risks including fraud and compliance costs)
  4. Banking Association of South Africa / SABRIC — Financial Crime and Fraud Statistics
    https://www.banking.org.za/news/sabric-reports-significant-increase-in-financial-crime-losses-for-2023/
    (Industry-wide fraud trends)
  5. Reuters — South Africa’s Nedbank annual profit rises on non-interest revenue growth
    https://www.reuters.com/business/finance/south-africas-nedbank-full-year-profit-up-non-interest-revenue-growth-2025-03-04/
    (Recent financial performance)
  6. Reuters — Nedbank sells 21.2% Ecobank stake
    https://www.reuters.com/world/africa/nedbank-sells-100-million-ecobank-stake-financier-nkontchous-bosquet-investments-2025-08-15/
    (Strategic refocus and portfolio rationalisation)
  7. Nedbank Group (official) — About Us & Strategy Overview
    https://group.nedbank.co.za/home/about-us.html
    (Strategy including digital leadership and cost-income focus)
  8. Nedbank Group (official) — Managed Evolution digital transformation
    https://group.nedbank.co.za/news-and-insights/press/2024/euromoney-2024-awards.html
    (Euromoney 2024 Awards)
  9. Nedbank CFO on Digital Transformation (CFO South Africa)
    https://cfo.co.za/articles/digital-transformation-is-not-optional-says-nedbank-cfo-mike-davis/
    (Executive perspective on digital transformation)
  10. Nedbank Interim / Annual Financial Results (official)
    https://group.nedbank.co.za/news-and-insights/press/2025/nedbank-delivers-improved-financial-performance.html
    (Interim and annual 2025/2024 financial performance)
  11. Moneyweb — Did Old Mutual pick the exact wrong time to launch a bank?
    https://www.moneyweb.co.za/news/companies-and-deals/did-old-mutual-pick-the-exact-wrong-time-to-launch-a-bank/
    (Analysis of Old Mutual’s banking entry and competitive context)
  12. Wikipedia — Old Mutual
    https://en.wikipedia.org/wiki/Old_Mutual
    (Background on the group launching a new bank)
  13. Moneyweb — Old Mutual to open new SA bank in 2025
    https://www.moneyweb.co.za/news/companies-and-deals/old-mutual-to-open-new-sa-bank-in-2025/
    (Coverage of planned bank launch)
  14. Old Mutual (official) — OM Bank CEO gets regulatory approval from SARB
    https://www.oldmutual.co.za/news/om-bank-ceo-gets-the-thumbs-up-from-the-reserve-bank/
    (Confirmation of launch timeline)
  15. Zensar Technologies — South Africa Financial Services Outlook 2025
    https://www.zensar.com/assets/files/3lMug4iZgOZE5bT35uh4YE/SA-Financial-Service-Trends-2025-WP-17_04_25.pdf
    (Digital disruption, cost pressure, and technology trends)
  16. BDO South Africa — Fintech in Africa Report 2024
    https://www.bdo.co.za/getmedia/0a92fd54-18e6-4a18-8f21-c22b0ae82775/Fintech-in-Africa-Report-2024_June.pdf
    (Broader fintech impact)
  17. Hippo.co.za — South African banking fees comparison
    https://www.hippo.co.za/money/banking-fees-guide/
    (Competitive fee pressure)
  18. Wikipedia — Discovery Bank
    https://en.wikipedia.org/wiki/Discovery_Bank
    (Context on another digital bank)
  19. Wikipedia — TymeBank
    https://en.wikipedia.org/wiki/TymeBank
    (Digital bank competitor in South Africa)
  20. Wikipedia — Bank Zero
    https://en.wikipedia.org/wiki/Bank_Zero
    (Digital mutual bank in South Africa)
  21. LinkedIn — Capitec wins while Absa and Standard Bank confuse: lessons on product
    https://www.linkedin.com/pulse/capitec-wins-absa-standard-bank-confuse-lessons-product-ndebele-oc1pf?utm_source=share&utm_medium=member_ios&utm_campaign=share_via
    (Banking client experience comparison)
  22. StatCounter Global Stats — Global Android Market Share (Mobile OS)
    https://gs.statcounter.com/os-market-share/mobile/worldwide
    (Mobile operating system market share)

Why Bigger Banks Were Historically More Fragile and Why Architecture Determines Resilience

1. Size Was Once Mistaken for Stability

For most of modern banking history, stability was assumed to increase with size. The thinking was that the bigger you are, the more you have at stake, and the more resources you can apply to problems. Larger banks had more capital, more infrastructure, and more people. In a pre-cloud world, this assumption appeared reasonable.

In practice, the opposite was often true.

Before cloud computing and elastic infrastructure, the larger a bank became, the more unstable it was under stress and the harder it was to maintain any kind of delivery cadence. Scale amplified fragility. In 2025, architecture (not size) has become the primary determinant of banking stability.

2. Scale, Fragility, and Quantum Entanglement

Traditional banking platforms were built on vertically scaled systems: mainframes, monolithic databases, and tightly coupled integration layers. These systems were engineered for control and predictability, not for elasticity or independent change.

As banks grew, they didn’t just add clients. They added products. Each new product introduced new dependencies, shared data models, synchronous calls, and operational assumptions. Over time, this created a state best described as quantum entanglement.

In this context, quantum entanglement refers to systems where:

  • Products cannot change independently
  • A change in one area unpredictably affects others
  • The full impact of change only appears under real load
  • Cause and effect are separated by time, traffic, and failure conditions

The larger the number of interdependent products, the more entangled the system becomes.

2.1 Why Entanglement Reduces Stability

As quantum entanglement increases, change becomes progressively riskier. Even small modifications require coordination across multiple teams and systems. Release cycles slow and defensive complexity increases.

Recovery also becomes harder. When something breaks, rolling back a single change is rarely sufficient because multiple products may already be in partially failed or inconsistent states.

Fault finding degrades as well. Logs, metrics, and alerts point in multiple directions. Symptoms appear far from root causes, forcing engineers to chase secondary effects rather than underlying faults.

Most importantly, blast radius expands. A fault in one product propagates through shared state and synchronous dependencies, impacting clients who weren’t using the originating product at all.

The paradox is that the very success of large banks (broad product portfolios) becomes a direct contributor to instability.

3. Why Scale Reduced Stability in the Pre-Cloud Era

Before cloud computing, capacity was finite, expensive, and slow to change. Systems scaled vertically, and failure domains were large by design.

As transaction volumes and product entanglement increased, capacity cliffs became unavoidable. Peak load failures became systemic rather than local. Recovery times lengthened and client impact widened.

Large institutions often appeared stable during normal operation but failed dramatically under stress. Smaller institutions appeared more stable largely because they had fewer entangled products and simpler operational surfaces (not because they were inherently better engineered).

Capitec itself experienced this failure mode when its core banking SQL database hit a capacity cliff in August 2022. In order to recover the service, close to 100 changes were made, resulting in downtime of around 40 hours. The wider service recovery took weeks, with missed payments and duplicate payments being fixed on a case by case basis. It was at this point that Capitec’s leadership drew a line in the sand and decided to totally re-engineer its entire stack from the ground up in AWS. This blog post is really trying to share a few nuggets from the engineering journey we went on, and hopefully help others still struggling with the burden of scale and hardened synchronous pathways.

4. Cloud Changed the Equation (But Only When Architecture Changed)

Cloud computing made it possible to break entanglement, but only for organisations willing to redesign systems to exploit it.

Horizontal scaling, availability zone isolation, managed databases, and elastic compute allow products to exist as independent domains rather than tightly bound extensions of a central core.

Institutions that merely moved infrastructure to the cloud without breaking product entanglement continue to experience the same instability patterns (only on newer hardware).

5. An Architecture Designed to Avoid Entanglement

Capitec represents a deliberate rejection of quantum entanglement.

Its entire app production stack is cloud native on AWS: Kubernetes, Kafka and Postgres. The platform is well advanced in rolling out new Java 25 runtimes, alongside ahead of time (AOT) optimisation, to further reduce scale latency, improve startup characteristics, and increase predictability under load. All Aurora Serverless clusters are set up with read replicas, offloading read pressure from write paths. All workloads are deployed across three availability zones, ensuring resilience. Database access is via the AWS JDBC wrapper, which enables extremely rapid failovers outside of DNS TTLs.
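To make the JDBC wrapper point concrete, here is a minimal sketch of what wiring it up can look like. This is not Capitec's configuration: the endpoint and credentials are placeholders, and while the `jdbc:aws-wrapper:postgresql://` URL prefix and the `wrapperPlugins=failover` property follow the AWS Advanced JDBC Wrapper's documented conventions, the exact property set should be verified against the wrapper's documentation for your driver version.

```java
import java.util.Properties;

// Sketch: building a connection config for the AWS Advanced JDBC Wrapper.
// The wrapper tracks Aurora cluster topology itself, so on failover it can
// reconnect to the new writer without waiting for DNS TTLs to expire.
public class FailoverConfig {
    static String buildUrl(String writerEndpoint, String database) {
        // The aws-wrapper prefix routes the connection through the wrapper
        // instead of the plain PostgreSQL driver.
        return "jdbc:aws-wrapper:postgresql://" + writerEndpoint + ":5432/" + database;
    }

    static Properties buildProps() {
        Properties props = new Properties();
        props.setProperty("user", "app_user");           // placeholder credential
        props.setProperty("wrapperPlugins", "failover"); // enable the failover plugin
        return props;
    }

    public static void main(String[] args) {
        // Placeholder cluster endpoint, for illustration only.
        String url = buildUrl("mycluster.cluster-abc.eu-west-1.rds.amazonaws.com", "bank");
        System.out.println(url);
    }
}
```

The key design point is that failover awareness lives in the driver layer, not in application code: services keep issuing ordinary JDBC calls while the wrapper handles topology changes underneath.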

Crucially, products are isolated by design. There is no central product graph where everything depends on everything else. But, a word of caution: we are “not there yet”. We will always have edges that can hurt, and when you hit an edge at speed, sometimes it’s hard to get back up on your feet. Often the downtime you experience simply results in pent up demand. Put another way, the volume that took your systems offline is now significantly LESS than the volume that’s waiting for you once you recover! This means that you somehow have to magically add capacity, or optimise code, during an outage in order to recover the service. You will often see the “Rate Limiting” fan club put a foot forward when I discuss burst recoverability. I personally don’t buy this for single entity services (for a complex set of reasons). For someone like AWS, it absolutely makes sense to carry the enormous complexity of guarding services with rate limits. But I don’t believe the same is true for a single entity ecosystem; in these instances, offloading is normally a purer pathway.

6. Write Guarding as a Stability Primitive

Capitec’s mobile and digital platforms employ a deliberate write guarding strategy.

Read only operations (such as logging into the app) are explicitly prevented from performing inline write operations. Activities like audit logging, telemetry capture, behavioural flags, and notification triggers are never executed synchronously on high volume read paths.

Instead, these concerns are offloaded asynchronously using Amazon MSK (Managed Streaming for Apache Kafka) or written to in memory data stores such as Valkey, where they can be processed later without impacting the user journey.

This design completely removes read-write contention from critical paths. Authentication storms, balance checks, and session validation no longer compete with persistence workloads. Under load, read performance remains stable because it is not coupled to downstream write capacity.

Critically, write guarding prevents database maintenance pressure (such as vacuum activity) from leaking into high volume events like logins. Expensive background work remains isolated from customer facing read paths.

Write guarding turns one of the most common failure modes in large banking systems (read traffic triggering hidden writes) into a non event. Stability improves not by adding capacity, but by removing unnecessary coupling.
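The write guarding pattern described above can be sketched in a few lines. This is an illustrative toy, not Capitec's code: an in-process queue stands in for the MSK topic or Valkey store, and the point is simply that the hot read path enqueues audit work non-blockingly rather than performing an inline write.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Sketch of write guarding: the read path never writes inline. Audit and
// telemetry events are dropped onto a queue (standing in for Kafka/Valkey)
// and persisted later by a background consumer.
public class WriteGuard {
    private final BlockingQueue<String> auditQueue = new LinkedBlockingQueue<>();

    // Hot read path: look up a balance and enqueue the audit event.
    // offer() is non-blocking, so a slow audit consumer cannot stall reads.
    public long getBalance(String accountId) {
        long balance = 1234L; // placeholder for the actual balance lookup
        auditQueue.offer("balance-read:" + accountId); // guarded write: async only
        return balance;
    }

    // Background consumer: drains audit events off the hot path.
    public String drainOne() throws InterruptedException {
        return auditQueue.poll(1, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws Exception {
        WriteGuard guard = new WriteGuard();
        long b = guard.getBalance("ACC001");
        System.out.println("balance=" + b + ", queued audit=" + guard.drainOne());
    }
}
```

The design choice worth noting is the asymmetry: the read path's latency depends only on the read itself, while all persistence pressure (including database background work like vacuuming) is absorbed by the consumer side, where it can lag safely.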

7. Virtual Threads as a Scalability Primitive

Java 25 introduces mature virtual threading as a first class concurrency model. This fundamentally changes how high concurrency systems behave under load.

Virtual threads decouple application concurrency from operating system threads. Instead of being constrained by a limited pool of heavyweight threads, services can handle hundreds of thousands of concurrent blocking operations without exhausting resources.

Request handling becomes simpler. Engineers can write straightforward blocking code without introducing thread pool starvation or complex asynchronous control flow.

Tail latency improves under load. When traffic spikes, virtual threads queue cheaply rather than collapsing the system through thread exhaustion.

Failure isolation improves. Slow downstream calls no longer monopolise scarce threads, reducing cascading failure modes.

Operationally, virtual threads align naturally with containerised, autoscaling environments. Concurrency scales with demand, not with preconfigured thread limits.

When combined with modern garbage collectors and ahead of time optimisation, virtual threading removes an entire class of concurrency related instability that plagued earlier JVM based banking platforms.
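A minimal demonstration of the concurrency model described above (requires Java 21 or later): thousands of tasks each block as if waiting on a downstream call, yet all complete in roughly one sleep interval because virtual threads park cheaply instead of holding an OS thread.

```java
import java.time.Duration;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.IntStream;

// Sketch: one virtual thread per task. With platform threads, a pool large
// enough for this many concurrent blocking calls would be prohibitively
// expensive; virtual threads make plain blocking code scale instead.
public class VirtualThreadDemo {
    public static int runBlockingTasks(int count) throws Exception {
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            List<Future<Integer>> futures = IntStream.range(0, count)
                .mapToObj(i -> executor.submit(() -> {
                    Thread.sleep(Duration.ofMillis(50)); // simulated blocking downstream call
                    return 1;
                }))
                .toList();
            int completed = 0;
            for (Future<Integer> f : futures) completed += f.get();
            return completed;
        }
    }

    public static void main(String[] args) throws Exception {
        // 10,000 concurrent blocking tasks; each virtual thread parks while
        // sleeping, so total wall time is far closer to 50 ms than 500 s.
        System.out.println(runBlockingTasks(10_000) + " tasks completed");
    }
}
```

Note how the code reads as ordinary sequential blocking logic: there are no callbacks, reactive chains, or hand-tuned thread pools, which is exactly the simplification claimed above.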

8. Nimbleness Emerges When Entanglement Disappears

When blast zones and integration choke points disappear, teams regain the ability to move quickly without increasing systemic risk.

Domains communicate through well defined RESTful interfaces, often across separate AWS accounts, enforcing isolation as a first class property. A failure in one domain does not cascade across the organisation.

To keep this operable at scale, Capitec uses Backstage (via an internal overlay called ODIN) as its internal orchestration and developer platform. All AWS accounts, services, pipelines, and operational assets are created to a common standard. Teams consume platform capability rather than inventing infrastructure.

This eliminates configuration drift, reduces cognitive load, and ensures that every new product inherits the same security, observability, and resilience characteristics.

The result is nimbleness without fragility.

9. Operational Stability Is Observability Plus Action

In entangled systems, failures are discovered by clients and stability is measured retrospectively.

Capitec operates differently. End to end observability through Instana and its in house AI platform, Neo, correlates client side errors, network faults, infrastructure signals, and transaction failures in real time. Issues are detected as they emerge, not after they cascade.

This operational awareness allows teams to intervene early, contain issues quickly, and reduce client impact before failures escalate.

Stability, in this model, is not the absence of failure. It is fast detection, rapid containment, and decisive response.

10. Fraud Prevention Without Creating New Entanglement

Fraud is treated as a first class stability concern rather than an external control.

Payments are evaluated inline as they move through the bank. Abnormal velocity, behavioural anomalies, and account provenance are assessed continuously. Even fraud reported in the call centre is immediately visible to other clients paying from the Capitec App. Clients are presented with conscience pricking prompts for high risk payments; these frequently stop fraud, as clients abandon the payment when presented with the risks.

Capitec runs a real time malware detection engine directly on client devices. This engine detects hooks and overlays installed by malicious applications. When malware is identified, the client’s account is immediately stopped, preventing fraudulent transactions before they occur.

Because fraud controls are embedded directly into the transaction flow, they don’t introduce additional coupling or asynchronous failure modes.
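As a toy illustration of one of the signals mentioned above (payment velocity), here is a sliding-window check that flags a burst of payments for a high-risk prompt. This is not Capitec's engine; the window size and threshold are invented for the example.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch: inline velocity check. Each payment is evaluated as it flows
// through; a burst inside a short window flags the payment for a
// conscience-pricking prompt rather than blocking it outright.
public class VelocityCheck {
    private final Deque<Long> recentPayments = new ArrayDeque<>();
    private final long windowMillis;
    private final int maxInWindow;

    public VelocityCheck(long windowMillis, int maxInWindow) {
        this.windowMillis = windowMillis;
        this.maxInWindow = maxInWindow;
    }

    // Returns true if this payment should trigger a high-risk prompt.
    public boolean isHighVelocity(long nowMillis) {
        // Drop payments that have aged out of the window.
        while (!recentPayments.isEmpty()
                && nowMillis - recentPayments.peekFirst() > windowMillis) {
            recentPayments.pollFirst();
        }
        recentPayments.addLast(nowMillis);
        return recentPayments.size() > maxInWindow;
    }

    public static void main(String[] args) {
        VelocityCheck check = new VelocityCheck(60_000, 3); // max 3 payments per minute
        System.out.println(check.isHighVelocity(1_000));  // first payment: false
        System.out.println(check.isHighVelocity(2_000));  // false
        System.out.println(check.isHighVelocity(3_000));  // false
        System.out.println(check.isHighVelocity(4_000));  // fourth in window: true
    }
}
```

Because the check is a constant-time look at in-memory state on the payment path itself, it adds no new synchronous dependency, which is the "no new entanglement" property this section is arguing for.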

The impact is measurable. Capitec’s fraud prevention systems have prevented R300 million in client losses from fraud. In November alone, these systems saved clients a further R60 million in fraud losses.

11. The Myth of Stability Through Multicloud

Multicloud is often presented as a stability strategy. In practice, it is largely a myth.

Running across multiple cloud providers does not remove failure risk. It compounds it. Cross cloud communication can typically only be secured using IP based controls, weakening security posture. Operational complexity increases sharply as teams must reason about heterogeneous platforms, tooling, failure modes, and networking behaviour.

Most critically, multicloud does not eliminate correlated failure. Because workloads end up depending on both providers, if either cloud becomes unavailable the systems are usually unusable anyway. The result is a doubled risk surface, increased operational risk, and new inter cloud network dependencies (without a corresponding reduction in outage impact).

Multicloud increases complexity, weakens controls, and expands risk surface area without delivering meaningful resilience.

12. What Actually Improves Stability

There are better options than multicloud.

Hybrid cloud with anti-affinity on critical channels is one. For example, card rails can be placed in two physically separate data centres so that if cloud based digital channels are unavailable, clients can still transact via cards and ATMs. This provides real functional resilience rather than architectural illusion.

Multi region deployment within a single cloud provider is another. This provides geographic fault isolation without introducing heterogeneous complexity. However, this only works if the provider avoids globally scoped services that introduce hidden single points of failure. At present, only AWS consistently supports this model. Some providers expose global services (such as global front doors) that introduce global blast radius and correlated failure risk.

True resilience requires isolation of failure domains, not duplication of platforms.

13. Why Traditional Banks Still Struggle

Traditional banks remain constrained by entangled product graphs, vertically scaled cores, synchronous integration models, and architectural decisions from a different era. As product portfolios grow, quantum entanglement increases. Change slows, recovery degrades, and outages become harder to diagnose and contain.

Modernisation programmes often increase entanglement temporarily through dual run architectures, making systems more fragile before they become more stable (if they ever do).

The challenge is not talent or ambition. It is the accumulated cost of entanglement.

14. Stability at Scale Without the Traditional Trade Off

Capitec’s significance is not that it is small. It is that it is large and remains stable.

Despite operating at massive scale with a broad product surface and high transaction volumes, stability improves rather than degrades. Scale does not increase blast radius, recovery time, or change risk. It increases parallelism, isolation, and resilience.

This directly contradicts historical banking patterns where growth inevitably led to fragility. Capitec demonstrates that with the right architecture, scale and stability are no longer opposing forces.

15. Final Thought

Before cloud and autoscaling, scale and stability were inversely related. The more products a bank had, the more entangled and fragile it became.

In 2025, that relationship can be reversed (but only by breaking entanglement, isolating failure domains, and avoiding complexity masquerading as resilience).

Doing a deal with a cloud provider means nothing if transformation stalls inside the organisation. If dozens of people carry the title of CIO while quietly pulling the handbrake on the change that is required, the outcome is inevitable regardless of vendor selection.

There is also a strategic question that many institutions avoid. If forced to choose between operating in a jurisdiction that is hostile to public cloud or accessing the full advantages of cloud, waiting is not a strategy. When that jurisdiction eventually allows public cloud, the market will already be populated by banks that moved earlier, built cloud native platforms, and are now entering at scale.

Capitec is an engineering led bank whose stability and speed increase with scale. Traditional banks remain constrained by quantum entanglement baked into architectures from a different era.

These outcomes are not accidental. They are the inevitable result of architectural and organisational choices made years ago, now playing out under real world load.

macOS: Getting Started with Memgraph, Memgraph MCP and Claude Desktop by Analyzing Test Banking Data for Mule Accounts

1. Introduction

This guide walks you through setting up Memgraph with Claude Desktop on your laptop to analyze relationships between mule accounts in banking systems. By the end of this tutorial, you’ll have a working setup where Claude can query and visualize banking transaction patterns to identify potential mule account networks.

Why Graph Databases for Fraud Detection?

Traditional relational databases store data in tables with rows and columns, which works well for structured, hierarchical data. However, fraud detection requires understanding relationships between entities—and this is where graph databases excel.

In fraud investigation, the connections matter more than the entities themselves:

  • Follow the money: Tracing funds through multiple accounts requires traversing relationships, not joining tables
  • Multi-hop queries: Finding patterns like “accounts connected within 3 transactions” is natural in graphs but complex in SQL
  • Pattern matching: Detecting suspicious structures (like a controller account distributing to multiple mules) is intuitive with graph queries
  • Real-time analysis: Graph databases can quickly identify new connections as transactions occur

Mule account schemes specifically benefit from graph analysis because they form distinct network patterns:

  • A central controller account receives large deposits
  • Funds are rapidly distributed to multiple recruited “mule” accounts
  • Mules quickly withdraw cash or transfer funds, completing the laundering cycle
  • These patterns create a recognizable “hub-and-spoke” topology in a graph

In a traditional relational database, finding these patterns requires multiple complex JOINs and recursive queries. In a graph database, you simply ask: “show me accounts connected to this one” or “find all paths between these two accounts.”
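To see why the traversal is natural when data lives as a graph, here is a minimal in-memory sketch of the "accounts connected within 3 transactions" question. In Memgraph this would be a single Cypher pattern; the account names below (other than ACC006, used as an example later in this guide) are invented for illustration.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch: breadth-first search over an adjacency map, the same multi-hop
// traversal a graph database performs natively. The hub-and-spoke shape of
// a mule network (controller fanning out to mules) falls out immediately.
public class MuleTraversal {
    // accountId -> accounts it has sent money to (toy data)
    static final Map<String, List<String>> transfers = Map.of(
        "ACC006", List.of("MULE1", "MULE2"),   // controller fans out (hub-and-spoke)
        "MULE1", List.of("CASHOUT1"),
        "MULE2", List.of("CASHOUT2"),
        "LEGIT1", List.of("LEGIT2")
    );

    static Set<String> reachableWithin(String start, int maxHops) {
        Set<String> seen = new HashSet<>();
        List<String> frontier = List.of(start);
        for (int hop = 0; hop < maxHops; hop++) {
            List<String> next = new ArrayList<>();
            for (String account : frontier) {
                for (String target : transfers.getOrDefault(account, List.of())) {
                    if (seen.add(target)) next.add(target);
                }
            }
            frontier = next;
        }
        return seen;
    }

    public static void main(String[] args) {
        // The controller's full downstream network within 3 hops.
        System.out.println(reachableWithin("ACC006", 3));
    }
}
```

In SQL the same question requires a recursive CTE with explicit cycle handling; in a graph model the hop count is just a loop bound (or a variable-length pattern in Cypher), which is the point this section is making.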

Why This Stack?

We’ve chosen a powerful combination of technologies that work seamlessly together:

Memgraph (Graph Database)

  • Native graph database built for speed and real-time analytics
  • Uses Cypher query language (intuitive, SQL-like syntax for graphs)
  • In-memory architecture provides millisecond query responses
  • Perfect for fraud detection where you need to explore relationships quickly
  • Lightweight and runs easily in Docker on your laptop
  • Open-source with excellent tooling (Memgraph Lab for visualization)

Claude Desktop (AI Interface)

  • Natural language interface eliminates the need to learn Cypher query syntax
  • Ask questions in plain English: “Which accounts received money from ACC006?”
  • Claude translates your questions into optimized graph queries automatically
  • Provides explanations and insights alongside query results
  • Dramatically lowers the barrier to entry for graph analysis

MCP (Model Context Protocol)

  • Connects Claude directly to Memgraph
  • Enables Claude to execute queries and retrieve real-time data
  • Secure, local connection—your data never leaves your machine
  • Extensible architecture allows adding other tools and databases

Why Not PostgreSQL?

While PostgreSQL is excellent for transactional data storage, graph relationships in SQL require:

  • Complex recursive CTEs (Common Table Expressions) for multi-hop queries
  • Multiple JOINs that become exponentially slower as relationships deepen
  • Manual construction of relationship paths
  • Limited visualization capabilities for network structures

Memgraph’s native graph model represents accounts and transactions as nodes and edges, making relationship queries natural and performant. For fraud detection where you need to quickly explore “who’s connected to whom,” graph databases are the right tool.

What You’ll Build

By following this guide, you’ll create:

  • The ability to ask natural language questions and get instant graph insights
  • A local Memgraph database with 57 accounts and 512 transactions
  • A realistic mule account network hidden among legitimate transactions
  • An AI-powered analysis interface through Claude Desktop

2. Prerequisites

Before starting, ensure you have:

  • macOS laptop
  • Homebrew package manager (we’ll install if needed)
  • Claude Desktop app installed
  • Basic terminal knowledge

3. Automated Setup

Below is a massive script. I originally had it as individual scripts, but they have merged into a large, hazardous blob of bash. This script is badged under the “it works on my laptop” disclaimer!

cat > ~/setup_memgraph_complete.sh << 'EOF'
#!/bin/bash

# Complete automated setup for Memgraph + Claude Desktop

echo "========================================"
echo "Memgraph + Claude Desktop Setup"
echo "========================================"
echo ""

# Step 1: Install Rancher Desktop
echo "Step 1/7: Installing Rancher Desktop..."

# Check if Docker daemon is already running
DOCKER_RUNNING=false
if command -v docker &> /dev/null && docker info &> /dev/null; then
    echo "Container runtime is already running!"
    DOCKER_RUNNING=true
fi

if [ "$DOCKER_RUNNING" = false ]; then
    # Check if Homebrew is installed
    if ! command -v brew &> /dev/null; then
        echo "Installing Homebrew first..."
        /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
        
        # Add Homebrew to PATH for Apple Silicon Macs
        if [[ $(uname -m) == 'arm64' ]]; then
            echo 'eval "$(/opt/homebrew/bin/brew shellenv)"' >> ~/.zprofile
            eval "$(/opt/homebrew/bin/brew shellenv)"
        fi
    fi
    
    # Check if Rancher Desktop is installed
    RANCHER_INSTALLED=false
    if brew list --cask rancher 2>/dev/null | grep -q rancher; then
        RANCHER_INSTALLED=true
        echo "Rancher Desktop is installed via Homebrew."
    fi
    
    # If not installed, install it
    if [ "$RANCHER_INSTALLED" = false ]; then
        echo "Installing Rancher Desktop..."
        brew install --cask rancher
        sleep 3
    fi
    
    echo "Starting Rancher Desktop..."
    
    # Launch Rancher Desktop
    if [ -d "/Applications/Rancher Desktop.app" ]; then
        echo "Launching Rancher Desktop from /Applications..."
        open "/Applications/Rancher Desktop.app"
        sleep 5
    else
        echo ""
        echo "Please launch Rancher Desktop manually:"
        echo "  1. Press Cmd+Space"
        echo "  2. Type 'Rancher Desktop'"
        echo "  3. Press Enter"
        echo ""
        echo "Waiting for you to launch Rancher Desktop..."
        echo "Press Enter once you've started Rancher Desktop"
        read
    fi
    
    # Add Rancher Desktop to PATH
    export PATH="$HOME/.rd/bin:$PATH"
    
    echo "Waiting for container runtime to start (this may take 30-60 seconds)..."
    # Wait for docker command to become available
    for i in {1..60}; do
        if command -v docker &> /dev/null && docker info &> /dev/null 2>&1; then
            echo ""
            echo "Container runtime is running!"
            break
        fi
        echo -n "."
        sleep 3
    done
    
    if ! command -v docker &> /dev/null || ! docker info &> /dev/null 2>&1; then
        echo ""
        echo "Rancher Desktop is taking longer than expected. Please:"
        echo "1. Wait for Rancher Desktop to fully initialize"
        echo "2. Accept any permissions requests"
        echo "3. Once you see 'Kubernetes is running' in Rancher Desktop, press Enter"
        read
        
        # Try to add Rancher Desktop to PATH
        export PATH="$HOME/.rd/bin:$PATH"
        
        # Check one more time
        if ! command -v docker &> /dev/null || ! docker info &> /dev/null 2>&1; then
            echo "Container runtime still not responding."
            echo "Please ensure Rancher Desktop is fully started and try again."
            exit 1
        fi
    fi
fi

# Ensure docker is in PATH for the rest of the script
export PATH="$HOME/.rd/bin:$PATH"

echo ""
echo "Step 2/7: Installing Memgraph container..."

# Stop and remove existing container if it exists
if docker ps -a 2>/dev/null | grep -q memgraph; then
    echo "Removing existing Memgraph container..."
    docker stop memgraph 2>/dev/null || true
    docker rm memgraph 2>/dev/null || true
fi

docker pull memgraph/memgraph-platform || { echo "Failed to pull Memgraph image"; exit 1; }
docker run -d -p 7687:7687 -p 7444:7444 -p 3000:3000 \
  --name memgraph \
  -v memgraph_data:/var/lib/memgraph \
  memgraph/memgraph-platform || { echo "Failed to start Memgraph container"; exit 1; }

echo "Waiting for Memgraph to be ready..."
sleep 10

echo ""
echo "Step 3/7: Installing Python and Memgraph MCP server..."

# Install Python if not present
if ! command -v python3 &> /dev/null; then
    echo "Installing Python..."
    brew install python3
fi

# Install uv package manager
if ! command -v uv &> /dev/null; then
    echo "Installing uv package manager..."
    curl -LsSf https://astral.sh/uv/install.sh | sh
    export PATH="$HOME/.local/bin:$PATH"
fi

echo "Memgraph MCP will be configured to run via uv..."

echo ""
echo "Step 4/7: Configuring Claude Desktop..."

CONFIG_DIR="$HOME/Library/Application Support/Claude"
CONFIG_FILE="$CONFIG_DIR/claude_desktop_config.json"

mkdir -p "$CONFIG_DIR"

if [ -f "$CONFIG_FILE" ] && [ -s "$CONFIG_FILE" ]; then
    echo "Backing up existing Claude configuration..."
    cp "$CONFIG_FILE" "$CONFIG_FILE.backup.$(date +%s)"
fi

# Get the full path to uv
UV_PATH=$(which uv 2>/dev/null || echo "$HOME/.local/bin/uv")

# Merge memgraph config with existing config
if [ -f "$CONFIG_FILE" ] && [ -s "$CONFIG_FILE" ]; then
    echo "Merging memgraph config with existing MCP servers..."
    
    # Use Python to merge JSON (more reliable than jq which may not be installed)
    python3 << PYTHON_MERGE
import json
import sys

config_file = "$CONFIG_FILE"
uv_path = "${UV_PATH}"

try:
    # Read existing config
    with open(config_file, 'r') as f:
        config = json.load(f)
    
    # Ensure mcpServers exists
    if 'mcpServers' not in config:
        config['mcpServers'] = {}
    
    # Add/update memgraph server
    config['mcpServers']['memgraph'] = {
        "command": uv_path,
        "args": [
            "run",
            "--with",
            "mcp-memgraph",
            "--python",
            "3.13",
            "mcp-memgraph"
        ],
        "env": {
            "MEMGRAPH_HOST": "localhost",
            "MEMGRAPH_PORT": "7687"
        }
    }
    
    # Write merged config
    with open(config_file, 'w') as f:
        json.dump(config, f, indent=2)
    
    print("Successfully merged memgraph config")
    sys.exit(0)
except Exception as e:
    print(f"Error merging config: {e}", file=sys.stderr)
    sys.exit(1)
PYTHON_MERGE
    
    if [ $? -ne 0 ]; then
        echo "Failed to merge config, creating new one..."
        cat > "$CONFIG_FILE" << JSON
{
  "mcpServers": {
    "memgraph": {
      "command": "${UV_PATH}",
      "args": [
        "run",
        "--with",
        "mcp-memgraph",
        "--python",
        "3.13",
        "mcp-memgraph"
      ],
      "env": {
        "MEMGRAPH_HOST": "localhost",
        "MEMGRAPH_PORT": "7687"
      }
    }
  }
}
JSON
    fi
else
    echo "Creating new Claude Desktop configuration..."
    cat > "$CONFIG_FILE" << JSON
{
  "mcpServers": {
    "memgraph": {
      "command": "${UV_PATH}",
      "args": [
        "run",
        "--with",
        "mcp-memgraph",
        "--python",
        "3.13",
        "mcp-memgraph"
      ],
      "env": {
        "MEMGRAPH_HOST": "localhost",
        "MEMGRAPH_PORT": "7687"
      }
    }
  }
}
JSON
fi

echo "Claude Desktop configured!"

echo ""
echo "Step 5/7: Setting up mgconsole..."
echo "mgconsole will be used via Docker (included in memgraph/memgraph-platform)"

echo ""
echo "Step 6/7: Setting up database schema..."

sleep 5  # Give Memgraph extra time to be ready

echo "Clearing existing data..."
echo "MATCH (n) DETACH DELETE n;" | docker exec -i memgraph mgconsole --host 127.0.0.1 --port 7687

echo "Creating indexes..."
cat <<'CYPHER' | docker exec -i memgraph mgconsole --host 127.0.0.1 --port 7687
CREATE INDEX ON :Account(account_id);
CREATE INDEX ON :Account(account_type);
CREATE INDEX ON :Person(person_id);
CYPHER

echo ""
echo "Step 7/7: Populating test data..."

echo "Loading core mule account data..."
cat <<'CYPHER' | docker exec -i memgraph mgconsole --host 127.0.0.1 --port 7687
CREATE (p1:Person {person_id: 'P001', name: 'John Smith', age: 45, risk_score: 'low'})
CREATE (a1:Account {account_id: 'ACC001', account_type: 'checking', balance: 15000, opened_date: '2020-01-15', status: 'active'})
CREATE (p1)-[:OWNS {since: '2020-01-15'}]->(a1)
CREATE (p2:Person {person_id: 'P002', name: 'Sarah Johnson', age: 38, risk_score: 'low'})
CREATE (a2:Account {account_id: 'ACC002', account_type: 'savings', balance: 25000, opened_date: '2019-06-10', status: 'active'})
CREATE (p2)-[:OWNS {since: '2019-06-10'}]->(a2)
CREATE (p3:Person {person_id: 'P003', name: 'Michael Brown', age: 22, risk_score: 'high'})
CREATE (a3:Account {account_id: 'ACC003', account_type: 'checking', balance: 500, opened_date: '2024-08-01', status: 'active'})
CREATE (p3)-[:OWNS {since: '2024-08-01'}]->(a3)
CREATE (p4:Person {person_id: 'P004', name: 'Lisa Chen', age: 19, risk_score: 'high'})
CREATE (a4:Account {account_id: 'ACC004', account_type: 'checking', balance: 300, opened_date: '2024-08-05', status: 'active'})
CREATE (p4)-[:OWNS {since: '2024-08-05'}]->(a4)
CREATE (p5:Person {person_id: 'P005', name: 'David Martinez', age: 21, risk_score: 'high'})
CREATE (a5:Account {account_id: 'ACC005', account_type: 'checking', balance: 450, opened_date: '2024-08-03', status: 'active'})
CREATE (p5)-[:OWNS {since: '2024-08-03'}]->(a5)
CREATE (p6:Person {person_id: 'P006', name: 'Robert Wilson', age: 35, risk_score: 'critical'})
CREATE (a6:Account {account_id: 'ACC006', account_type: 'business', balance: 2000, opened_date: '2024-07-15', status: 'active'})
CREATE (p6)-[:OWNS {since: '2024-07-15'}]->(a6)
CREATE (p7:Person {person_id: 'P007', name: 'Unknown Entity', risk_score: 'critical'})
CREATE (a7:Account {account_id: 'ACC007', account_type: 'business', balance: 150000, opened_date: '2024-06-01', status: 'active'})
CREATE (p7)-[:OWNS {since: '2024-06-01'}]->(a7)
CREATE (a7)-[:TRANSACTION {transaction_id: 'TXN001', amount: 50000, timestamp: '2024-09-01T10:15:00', type: 'wire_transfer', flagged: true}]->(a6)
CREATE (a6)-[:TRANSACTION {transaction_id: 'TXN002', amount: 9500, timestamp: '2024-09-01T14:30:00', type: 'transfer', flagged: true}]->(a3)
CREATE (a6)-[:TRANSACTION {transaction_id: 'TXN003', amount: 9500, timestamp: '2024-09-01T14:32:00', type: 'transfer', flagged: true}]->(a4)
CREATE (a6)-[:TRANSACTION {transaction_id: 'TXN004', amount: 9500, timestamp: '2024-09-01T14:35:00', type: 'transfer', flagged: true}]->(a5)
CREATE (a3)-[:TRANSACTION {transaction_id: 'TXN005', amount: 9000, timestamp: '2024-09-02T09:00:00', type: 'cash_withdrawal', flagged: true}]->(a6)
CREATE (a4)-[:TRANSACTION {transaction_id: 'TXN006', amount: 9000, timestamp: '2024-09-02T09:15:00', type: 'cash_withdrawal', flagged: true}]->(a6)
CREATE (a5)-[:TRANSACTION {transaction_id: 'TXN007', amount: 9000, timestamp: '2024-09-02T09:30:00', type: 'cash_withdrawal', flagged: true}]->(a6)
CREATE (a7)-[:TRANSACTION {transaction_id: 'TXN008', amount: 45000, timestamp: '2024-09-15T11:20:00', type: 'wire_transfer', flagged: true}]->(a6)
CREATE (a6)-[:TRANSACTION {transaction_id: 'TXN009', amount: 9800, timestamp: '2024-09-15T15:00:00', type: 'transfer', flagged: true}]->(a3)
CREATE (a6)-[:TRANSACTION {transaction_id: 'TXN010', amount: 9800, timestamp: '2024-09-15T15:05:00', type: 'transfer', flagged: true}]->(a4)
CREATE (a1)-[:TRANSACTION {transaction_id: 'TXN011', amount: 150, timestamp: '2024-09-10T12:00:00', type: 'debit_card', flagged: false}]->(a2)
CREATE (a2)-[:TRANSACTION {transaction_id: 'TXN012', amount: 1000, timestamp: '2024-09-12T10:00:00', type: 'transfer', flagged: false}]->(a1);
CYPHER

echo "Loading noise data (50 accounts, 500 transactions)..."
cat <<'CYPHER' | docker exec -i memgraph mgconsole --host 127.0.0.1 --port 7687
UNWIND range(1, 50) AS i
WITH i,
     ['Alice', 'Bob', 'Carol', 'David', 'Emma', 'Frank', 'Grace', 'Henry', 'Iris', 'Jack',
      'Karen', 'Leo', 'Mary', 'Nathan', 'Olivia', 'Peter', 'Quinn', 'Rachel', 'Steve', 'Tina',
      'Uma', 'Victor', 'Wendy', 'Xavier', 'Yara', 'Zack', 'Amy', 'Ben', 'Chloe', 'Daniel',
      'Eva', 'Fred', 'Gina', 'Hugo', 'Ivy', 'James', 'Kate', 'Luke', 'Mia', 'Noah',
      'Opal', 'Paul', 'Rosa', 'Sam', 'Tara', 'Umar', 'Vera', 'Will', 'Xena', 'Yuki'] AS firstNames,
     ['Anderson', 'Baker', 'Clark', 'Davis', 'Evans', 'Foster', 'Garcia', 'Harris', 'Irwin', 'Jones',
      'King', 'Lopez', 'Miller', 'Nelson', 'Owens', 'Parker', 'Quinn', 'Reed', 'Scott', 'Taylor',
      'Underwood', 'Vargas', 'White', 'Young', 'Zhao', 'Adams', 'Brooks', 'Collins', 'Duncan', 'Ellis'] AS lastNames,
     ['checking', 'savings', 'checking', 'savings', 'checking'] AS accountTypes,
     ['low', 'low', 'low', 'medium', 'low'] AS riskScores,
     ['2018-03-15', '2018-07-22', '2019-01-10', '2019-05-18', '2019-09-30', '2020-02-14', '2020-06-25', '2020-11-08', '2021-04-17', '2021-08-29', '2022-01-20', '2022-05-12', '2022-10-03', '2023-02-28', '2023-07-15'] AS dates
WITH i,
     firstNames[toInteger(rand() * size(firstNames))] + ' ' + lastNames[toInteger(rand() * size(lastNames))] AS fullName,
     accountTypes[toInteger(rand() * size(accountTypes))] AS accType,
     riskScores[toInteger(rand() * size(riskScores))] AS risk,
     toInteger(rand() * 40 + 25) AS age,
     toInteger(rand() * 80000 + 1000) AS balance,
     dates[toInteger(rand() * size(dates))] AS openDate
CREATE (p:Person {person_id: 'NOISE_P' + toString(i), name: fullName, age: age, risk_score: risk})
CREATE (a:Account {account_id: 'NOISE_ACC' + toString(i), account_type: accType, balance: balance, opened_date: openDate, status: 'active'})
CREATE (p)-[:OWNS {since: openDate}]->(a);
UNWIND range(1, 500) AS i
WITH i,
     toInteger(rand() * 50 + 1) AS fromIdx,
     toInteger(rand() * 50 + 1) AS toIdx,
     ['transfer', 'debit_card', 'check', 'atm_withdrawal', 'direct_deposit', 'wire_transfer', 'mobile_payment'] AS txnTypes,
     ['2024-01-15', '2024-02-20', '2024-03-10', '2024-04-05', '2024-05-18', '2024-06-22', '2024-07-14', '2024-08-09', '2024-09-25', '2024-10-30'] AS dates
WHERE fromIdx <> toIdx
WITH i, fromIdx, toIdx, txnTypes, dates,
     txnTypes[toInteger(rand() * size(txnTypes))] AS txnType,
     toInteger(rand() * 5000 + 10) AS amount,
     (rand() < 0.05) AS shouldFlag,
     dates[toInteger(rand() * size(dates))] AS txnDate
MATCH (from:Account {account_id: 'NOISE_ACC' + toString(fromIdx)})
MATCH (to:Account {account_id: 'NOISE_ACC' + toString(toIdx)})
CREATE (from)-[:TRANSACTION {
    transaction_id: 'NOISE_TXN' + toString(i),
    amount: amount,
    timestamp: txnDate + 'T' + toString(toInteger(rand() * 24)) + ':' + toString(toInteger(rand() * 60)) + ':00',
    type: txnType,
    flagged: shouldFlag
}]->(to);
CYPHER

echo ""
echo "========================================"
echo "Setup Complete!"
echo "========================================"
echo ""
echo "Next steps:"
echo "1. Restart Claude Desktop (Quit and reopen)"
echo "2. Open Memgraph Lab at http://localhost:3000"
echo "3. Start asking Claude questions about the mule account data!"
echo ""
echo "Example query: 'Show me all accounts owned by people with high or critical risk scores in Memgraph'"
echo ""

EOF

chmod +x ~/setup_memgraph_complete.sh
~/setup_memgraph_complete.sh

The script will:

  1. Install and start Rancher Desktop (installing Homebrew first if needed)
  2. Pull and start the Memgraph container
  3. Install Python and the uv package manager for the Memgraph MCP server
  4. Configure Claude Desktop automatically, merging with any existing MCP servers
  5. Set up mgconsole access via Docker
  6. Create the database schema and indexes
  7. Populate the mule account network plus around 500 noise transactions

After the script completes, restart Claude Desktop (quit and reopen) for the MCP configuration to take effect.

4. Verifying the Setup

Verify the setup by accessing Memgraph Lab at http://localhost:3000 or using mgconsole via Docker:

docker exec -it memgraph mgconsole --host 127.0.0.1 --port 7687

In mgconsole, run:

MATCH (n) RETURN count(n);

You should see 114 nodes (57 Person and 57 Account nodes):

+----------+
| count(n) |
+----------+
| 114      |
+----------+
1 row in set (round trip in 0.002 sec)

Check the transaction relationships:

MATCH ()-[r:TRANSACTION]->() RETURN count(r);

You should see a count of approximately 500; the exact number varies between runs, because the noise generator discards transactions whose random sender and receiver indices match. For example:

+----------+
| count(r) |
+----------+
| 501      |
+----------+
1 row in set (round trip in 0.002 sec)

Verify the mule accounts are still identifiable:

MATCH (p:Person)-[:OWNS]->(a:Account)
WHERE p.risk_score IN ['high', 'critical']
RETURN p.name, a.account_id, p.risk_score
ORDER BY p.risk_score DESC;

This should return the 5 suspicious accounts from our mule network:

+------------------+------------------+------------------+
| p.name           | a.account_id     | p.risk_score     |
+------------------+------------------+------------------+
| "Michael Brown"  | "ACC003"         | "high"           |
| "Lisa Chen"      | "ACC004"         | "high"           |
| "David Martinez" | "ACC005"         | "high"           |
| "Robert Wilson"  | "ACC006"         | "critical"       |
| "Unknown Entity" | "ACC007"         | "critical"       |
+------------------+------------------+------------------+
5 rows in set (round trip in 0.002 sec)

5. Using Claude with Memgraph

Now that everything is set up, you can interact with Claude Desktop to analyze the mule account network. Here are example queries you can try:

Example 1: Find All High-Risk Accounts

Ask Claude:

Show me all accounts owned by people with high or critical risk scores in Memgraph

Claude will query Memgraph and return results showing the suspicious accounts (ACC003, ACC004, ACC005, ACC006, ACC007), filtering out the 50 noise accounts.

Example 2: Identify Transaction Patterns

Ask Claude:

Find all accounts that received money from ACC006 within a 24-hour period. Show the transaction amounts and timestamps.

Claude will identify the three mule accounts (ACC003, ACC004, ACC005) that received similar amounts in quick succession.
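Under the hood this is a grouping exercise. The same fan-out pattern can be reproduced in a few lines of Python; the rows mirror TXN002–TXN004 plus one legitimate transfer from the test dataset, and the window, recipient and spread thresholds are illustrative assumptions:

```python
# Minimal sketch of the fan-out check: group transfers by sender and flag
# bursts of near-identical amounts to several recipients inside a short window.
from datetime import datetime, timedelta

txns = [
    ("ACC006", "ACC003", 9500, "2024-09-01T14:30:00"),
    ("ACC006", "ACC004", 9500, "2024-09-01T14:32:00"),
    ("ACC006", "ACC005", 9500, "2024-09-01T14:35:00"),
    ("ACC001", "ACC002", 150,  "2024-09-10T12:00:00"),
]

def fan_out_bursts(txns, window=timedelta(hours=24), min_recipients=3, spread=500):
    """Senders that paid >= min_recipients near-identical amounts inside window."""
    by_sender = {}
    for src, dst, amount, ts in txns:
        by_sender.setdefault(src, []).append((datetime.fromisoformat(ts), dst, amount))
    flagged = {}
    for src, rows in by_sender.items():
        rows.sort()
        amounts = [a for _, _, a in rows]
        recipients = {dst for _, dst, _ in rows}
        if (len(recipients) >= min_recipients
                and rows[-1][0] - rows[0][0] <= window
                and max(amounts) - min(amounts) <= spread):
            flagged[src] = sorted(recipients)
    return flagged

print(fan_out_bursts(txns))  # {'ACC006': ['ACC003', 'ACC004', 'ACC005']}
```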

Example 3: Trace Money Flow

Ask Claude:

Trace the flow of money from ACC007 through the network. Show me the complete transaction path.

Claude will visualize the path: ACC007 -> ACC006 -> [ACC003, ACC004, ACC005], revealing the laundering pattern.
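Mechanically, the trace is a depth-first walk. Here is a sketch over a simplified adjacency list, covering only the outbound transfers from the test dataset (the withdrawals back to ACC006 are omitted for clarity):

```python
# Depth-first enumeration of all simple transaction paths out of ACC007.
adj = {
    "ACC007": ["ACC006"],
    "ACC006": ["ACC003", "ACC004", "ACC005"],
}

def trace(src, path=None):
    """Return every simple path starting at src over the adj edge list."""
    path = (path or []) + [src]
    nexts = [n for n in adj.get(src, []) if n not in path]
    if not nexts:
        return [path]
    paths = []
    for n in nexts:
        paths.extend(trace(n, path))
    return paths

for p in trace("ACC007"):
    print(" -> ".join(p))
# ACC007 -> ACC006 -> ACC003
# ACC007 -> ACC006 -> ACC004
# ACC007 -> ACC006 -> ACC005
```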

Example 4: Calculate Total Funds

Ask Claude:

Calculate the total amount of money that flowed through ACC006 in September 2024

Claude will aggregate all incoming and outgoing transactions for the controller account.

Example 5: Find Rapid Withdrawal Patterns

Ask Claude:

Find accounts where money was withdrawn within 48 hours of being deposited. What are the amounts and account holders?

This reveals the classic mule account behavior of quick cash extraction.
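The underlying check is just timestamp arithmetic. A minimal sketch using TXN002 and TXN005 from the test dataset (the 48-hour window is the example's assumption):

```python
# Pair incoming transfers with subsequent withdrawals and flag short gaps.
from datetime import datetime, timedelta

deposits = [("ACC003", 9500, "2024-09-01T14:30:00")]     # TXN002 into ACC003
withdrawals = [("ACC003", 9000, "2024-09-02T09:00:00")]  # TXN005 cash out

def quick_extractions(deposits, withdrawals, window=timedelta(hours=48)):
    """Return (account, deposit, withdrawal, gap) where the gap is inside window."""
    hits = []
    for acct, d_amt, d_ts in deposits:
        d_time = datetime.fromisoformat(d_ts)
        for w_acct, w_amt, w_ts in withdrawals:
            gap = datetime.fromisoformat(w_ts) - d_time
            if w_acct == acct and timedelta(0) <= gap <= window:
                hits.append((acct, d_amt, w_amt, gap))
    return hits

print(quick_extractions(deposits, withdrawals))
# ACC003 cashed out 9000 of a 9500 deposit after only 18.5 hours.
```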

Example 6: Network Analysis

Ask Claude:

Show me all accounts that have transaction relationships with ACC006. Create a visualization of this network.

Claude will generate a graph showing the controller account at the center with connections to both the source and mule accounts.

Example 7: Risk Assessment

Ask Claude:

Which accounts have received flagged transactions totaling more than $15,000? List them by total amount.

This helps identify which mule accounts have processed the most illicit funds.
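The aggregation itself is a straightforward roll-up. Here it is in plain Python, with the flagged inbound amounts copied from TXN001–TXN010 in the test dataset:

```python
# Sum flagged inbound amounts per recipient and keep totals over $15,000.
flagged_in = [
    ("ACC006", 50000), ("ACC003", 9500), ("ACC004", 9500), ("ACC005", 9500),
    ("ACC006", 9000), ("ACC006", 9000), ("ACC006", 9000),
    ("ACC006", 45000), ("ACC003", 9800), ("ACC004", 9800),
]

totals = {}
for acct, amount in flagged_in:
    totals[acct] = totals.get(acct, 0) + amount

over = sorted(((t, a) for a, t in totals.items() if t > 15000), reverse=True)
print(over)  # [(122000, 'ACC006'), (19300, 'ACC004'), (19300, 'ACC003')]
```

Note that ACC005 (9,500 flagged) drops below the threshold, which is exactly the kind of edge case worth sanity-checking when Claude reports results.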

6. Understanding the Graph Visualization

When Claude displays graph results, you’ll see:

  • Nodes: Circles representing accounts and persons
  • Edges: Lines representing transactions or ownership relationships
  • Properties: Attributes like amounts, timestamps, and risk scores

The graph structure makes it easy to spot:

  • Central nodes (controllers) with many connections
  • Similar transaction patterns across multiple accounts
  • Timing correlations between related transactions
  • Isolation of legitimate vs. suspicious account clusters
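The "central nodes with many connections" intuition can be quantified as simple degree centrality. Counting edge endpoints over the ten flagged transactions (TXN001–TXN010) immediately surfaces the controller:

```python
# Degree centrality: count how many edges touch each account.
from collections import Counter

edges = [  # (src, dst) for the flagged transactions TXN001-TXN010
    ("ACC007", "ACC006"), ("ACC006", "ACC003"), ("ACC006", "ACC004"),
    ("ACC006", "ACC005"), ("ACC003", "ACC006"), ("ACC004", "ACC006"),
    ("ACC005", "ACC006"), ("ACC007", "ACC006"), ("ACC006", "ACC003"),
    ("ACC006", "ACC004"),
]

degree = Counter()
for src, dst in edges:
    degree[src] += 1
    degree[dst] += 1

print(degree.most_common(1))  # [('ACC006', 10)] - the controller touches every edge
```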

7. Advanced Analysis Queries

Once you’re comfortable with basic queries, try these advanced analyses:

Community Detection

Ask Claude:

Find groups of accounts that frequently transact with each other. Are there separate communities in the network?

Temporal Analysis

Ask Claude:

Show me the timeline of transactions for accounts owned by people under 25 years old. Are there any patterns?

Shortest Path Analysis

Ask Claude:

What's the shortest path of transactions between ACC007 and ACC003? How many hops does it take?
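Mechanically this is a breadth-first search. A minimal sketch over the outbound edges from the test dataset:

```python
# BFS shortest path over directed transaction edges.
from collections import deque

adj = {"ACC007": ["ACC006"], "ACC006": ["ACC003", "ACC004", "ACC005"]}

def shortest_path(src, dst):
    """Return the first (shortest) path from src to dst, or None."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for n in adj.get(path[-1], []):
            if n not in seen:
                seen.add(n)
                queue.append(path + [n])
    return None

path = shortest_path("ACC007", "ACC003")
print(path, "hops:", len(path) - 1)  # ['ACC007', 'ACC006', 'ACC003'] hops: 2
```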

8. Cleaning Up

When you’re done experimenting, you can stop and remove the Memgraph container:

docker stop memgraph
docker rm memgraph

To remove the data volume completely:

docker volume rm memgraph_data

To restart later with fresh data, just run the setup script again.

9. Troubleshooting

Container Runtime Not Running

If you get errors about the container runtime not running:

open -a "Rancher Desktop"

Wait for Rancher Desktop's container runtime to start (and make sure ~/.rd/bin is on your PATH), then verify:

docker info

Memgraph Container Won’t Start

Check if ports are already in use:

lsof -i :7687
lsof -i :3000

Kill any conflicting processes or change the port mappings in the docker run command.

Claude Can’t Connect to Memgraph

Verify the MCP server configuration:

cat ~/Library/Application\ Support/Claude/claude_desktop_config.json

Ensure Memgraph is running:

docker ps | grep memgraph

Restart Claude Desktop completely after configuration changes.

mgconsole Command Not Found

Install it manually:

brew install memgraph/tap/mgconsole

No Data Returned from Queries

Check if data was loaded successfully:

echo "MATCH (n) RETURN count(n);" | docker exec -i memgraph mgconsole --host 127.0.0.1 --port 7687

If the count is 0, rerun the setup script.

10. Next Steps

Now that you have a working setup, you can:

  • Add more complex transaction patterns
  • Implement real-time fraud detection rules
  • Create additional graph algorithms for anomaly detection
  • Connect to real banking data sources (with proper security)
  • Build automated alerting for suspicious patterns
  • Expand the schema to include IP addresses, devices, and locations

The combination of Memgraph’s graph database capabilities and Claude’s natural language interface makes it easy to explore and analyze complex relationship data without writing Cypher queries by hand.

11. Conclusion

You now have a complete environment for analyzing banking mule accounts using Memgraph and Claude Desktop. The graph database structure naturally represents the relationships between accounts, making it ideal for fraud detection. Claude’s integration through MCP allows you to query and visualize this data using natural language, making sophisticated analysis accessible without deep technical knowledge.

The test dataset demonstrates typical mule account patterns: rapid movement of funds through multiple accounts, young account holders, recently opened accounts, and structured amounts designed to avoid reporting thresholds. These patterns are much easier to spot in a graph database than in traditional relational databases.

Experiment with different queries and explore how graph thinking can reveal hidden patterns in connected data.

Stablecoins: A Comprehensive Guide

1. What Are Stablecoins?

Stablecoins are a type of cryptocurrency designed to maintain a stable value by pegging themselves to a reserve asset, typically a fiat currency like the US dollar. Unlike volatile cryptocurrencies such as Bitcoin or Ethereum, which can experience dramatic price swings, stablecoins aim to provide the benefits of digital currency without the price volatility.

The most common types of stablecoins include:

Fiat collateralized stablecoins are backed by traditional currencies held in reserve at a 1:1 ratio. Examples include Tether (USDT) and USD Coin (USDC), which maintain reserves in US dollars or dollar equivalent assets.

Crypto collateralized stablecoins use other cryptocurrencies as collateral, often over collateralized to account for volatility. DAI is a prominent example, backed by Ethereum and other crypto assets.

Algorithmic stablecoins attempt to maintain their peg through automated supply adjustments based on market demand, without traditional collateral backing. These have proven to be the most controversial and risky category.

2. Why Do Stablecoins Exist?

Stablecoins emerged to solve several critical problems in both traditional finance and the cryptocurrency ecosystem.

In the crypto world, they provide a stable store of value and medium of exchange. Traders use stablecoins to move in and out of volatile positions without converting back to fiat currency, avoiding the delays and fees associated with traditional banking. They serve as a safe harbor during market turbulence and enable seamless transactions across different blockchain platforms.

For cross border payments and remittances, stablecoins offer significant advantages over traditional methods. International transfers that typically take days and cost substantial fees can be completed in minutes for a fraction of the cost. This makes them particularly valuable for workers sending money to families in other countries or businesses conducting international trade.

Stablecoins also address financial inclusion challenges. In countries with unstable currencies or limited banking infrastructure, they provide access to a stable digital currency that can be held and transferred using just a smartphone. This opens up financial services to the unbanked and underbanked populations worldwide.

2.1 How Do Stablecoins Move Money?

Stablecoins move between countries by riding on public or permissioned blockchains rather than correspondent banking rails. When a sender in one country initiates a payment, their bank or payment provider converts local currency into a regulated stablecoin (for example a USD or EUR backed token) and sends that token directly to the recipient bank’s blockchain address. The transaction settles globally in minutes with finality provided by the blockchain, not by intermediaries. To participate, a bank joins a stablecoin network by becoming an authorised issuer or distributor, integrating custody and wallet infrastructure, and connecting its core banking systems to blockchain rails via APIs. On the receiving side, the bank accepts the stablecoin, performs compliance checks (KYC, AML, sanctions screening), and redeems it back into local currency for the client’s account. Because value moves as tokens on chain rather than as messages between correspondent banks, there is no need for SWIFT messaging, nostro/vostro accounts, or multi-day settlement, resulting in faster, cheaper, and more transparent cross border payments.

If a bank does not want the operational and regulatory burden of running its own digital asset custody, it can partner with specialist technology and infrastructure providers that offer custody, wallet management, compliance tooling, and blockchain connectivity as managed services. In this model, the bank retains the customer relationship and regulatory accountability, while the tech partner handles private key security, smart-contract interaction, transaction monitoring, and network operations under strict service-level and audit agreements. Commonly used players in this space include Fireblocks and Copper for institutional custody and secure transaction orchestration; Anchorage Digital and BitGo for regulated custody and settlement services; Circle for stablecoin issuance and on-/off-ramps (USDC); Coinbase Institutional for custody and liquidity; and Stripe or Visa for fiat to stablecoin on-ramps and payment integration. This partnership approach allows banks to move quickly into stablecoin based cross-border payments without rebuilding their core infrastructure or taking on unnecessary operational risk.

3. How Do Stablecoins Make Money?

Stablecoin issuers have developed several revenue models that can be remarkably profitable.

The primary revenue source for fiat backed stablecoins is interest on reserves. When issuers hold billions of dollars in US Treasury bills or other interest bearing assets backing their stablecoins, they earn substantial returns. For instance, with interest rates at 5%, a stablecoin issuer with $100 billion in reserves could generate $5 billion annually while still maintaining the 1:1 peg. Users typically receive no interest on their stablecoin holdings, allowing issuers to pocket the entire yield.
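Using the article's illustrative figures, the arithmetic is worth writing down:

```python
# Reserve-yield arithmetic with the article's illustrative numbers.
reserves = 100_000_000_000   # $100 billion held in interest-bearing assets
rate = 0.05                  # 5% annual yield on T-bills
annual_income = reserves * rate
print(f"${annual_income:,.0f} per year")  # $5,000,000,000 per year
```

In practice issuers hold a mix of T-bills, repos and cash, so realized yield differs, but the order of magnitude is the point.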

Transaction fees represent another revenue stream. While often minimal, the sheer volume of stablecoin transactions generates significant income. Some issuers charge fees for minting (creating) or redeeming stablecoins, particularly for large institutional transactions.

Premium services for institutional clients provide additional revenue. Banks, payment processors, and large enterprises often pay for faster settlement, higher transaction limits, dedicated support, and integration services.

Many stablecoin platforms also generate revenue through their broader ecosystem. This includes charging fees on decentralized exchanges, lending protocols, or other financial services built around the stablecoin.

3.1 The Pendle Revenue Model: Yield Trading Innovation

Pendle represents an innovative evolution in the DeFi stablecoin ecosystem through its yield trading protocol. Rather than issuing stablecoins directly, Pendle creates markets for trading future yield on stablecoin deposits and other interest bearing assets.

The Pendle revenue model operates through several mechanisms. The protocol charges trading fees on its automated market makers (AMMs), typically around 0.1% to 0.3% per swap. When users trade yield tokens on Pendle’s platform, a portion of these fees goes to the protocol treasury while another portion rewards liquidity providers who supply capital to the trading pools.

Pendle’s unique approach involves splitting interest bearing tokens into two components: the principal token (PT) representing the underlying asset, and the yield token (YT) representing the future interest. This separation allows sophisticated users to speculate on interest rates, hedge yield exposure, or lock in fixed returns on their stablecoin holdings.

The protocol generates revenue through swap fees, redemption fees when tokens mature, and potential governance token value capture as the protocol grows. This model demonstrates how stablecoin adjacent services can create profitable businesses by adding layers of financial sophistication on top of basic stablecoin infrastructure. Pendle particularly benefits during periods of high interest rates, when demand for yield trading increases and the potential returns from separating yield rights become more valuable.
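A toy example of the PT/YT arithmetic (the prices here are hypothetical round numbers, not Pendle's actual pricing):

```python
# Splitting an interest-bearing token: PT + YT should price back to par,
# and a PT bought below par locks in a fixed return at maturity.
pt_price = 0.95        # pay 0.95 today, redeem 1.00 of principal at maturity
yt_price = 0.05        # the stripped-off right to the interim yield
maturity_years = 1.0

assert abs((pt_price + yt_price) - 1.0) < 1e-9  # the split preserves total value

fixed_return = (1.0 - pt_price) / pt_price / maturity_years
print(f"implied fixed yield: {fixed_return:.2%}")  # implied fixed yield: 5.26%
```

Buying the PT here is economically a fixed-rate deposit, while buying the YT is a bet that the floating yield will beat the implied fixed rate.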

4. Security and Fraud Concerns

Stablecoins face several critical security and fraud challenges that potential users and regulators must consider.

Reserve transparency and verification remain the most significant concern. Issuers must prove they actually hold the assets backing their stablecoins. Several controversies have erupted when stablecoin companies failed to provide clear, audited proof of reserves. The risk is that an issuer might not have sufficient backing, leading to a bank run scenario where the peg collapses and users cannot redeem their coins.

Smart contract vulnerabilities pose technical risks. Stablecoins built on blockchain platforms rely on code that, if flawed, can be exploited by hackers. Major hacks have resulted in hundreds of millions of dollars in losses, and once stolen, blockchain transactions are typically irreversible.

Regulatory uncertainty creates ongoing challenges. Different jurisdictions treat stablecoins differently, and the lack of clear, consistent regulation creates risks for both issuers and users. There’s potential for sudden regulatory action that could freeze assets or shut down operations.

Counterparty risk is inherent in centralized stablecoins. Users must trust the issuing company to maintain reserves, operate honestly, and remain solvent. If the company fails or acts fraudulently, users may lose their funds with limited recourse.

The algorithmic stablecoin model has proven particularly vulnerable. The catastrophic collapse of TerraUSD in 2022, which lost over $40 billion in value, demonstrated that algorithmic mechanisms can fail spectacularly under market stress, creating devastating losses for holders.

Money laundering and sanctions evasion concerns have drawn regulatory scrutiny. The pseudonymous nature of cryptocurrency transactions makes stablecoins attractive for illicit finance, though blockchain’s transparent ledger also makes transactions traceable with proper tools and cooperation.

4.1 Monitoring Stablecoin Flows

Effective monitoring of stablecoin flows has become critical for financial institutions, regulators, and the issuers themselves to ensure compliance, detect fraud, and understand market dynamics.

On Chain Analytics Tools provide the foundation for stablecoin monitoring. Since most stablecoins operate on public blockchains, every transaction is recorded and traceable. Companies like Chainalysis, Elliptic, and TRM Labs specialize in blockchain analytics, offering platforms that track stablecoin movements across wallets and exchanges. These tools can identify patterns, flag suspicious activities, and trace funds through complex transaction chains.

Real Time Transaction Monitoring systems alert institutions to potentially problematic flows. These systems track large transfers, unusual transaction patterns, rapid movement between exchanges (potentially indicating wash trading or manipulation), and interactions with known illicit addresses. Financial institutions integrating stablecoins must implement monitoring comparable to traditional payment systems.
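
The kind of rule-based screening described above can be sketched in a few lines. This is a minimal illustration, not a production system: the thresholds, field names, and the illicit-address list are assumptions for the example.

```python
# Hypothetical rule-based monitor for stablecoin transfers.
# Threshold values and the blocklist are illustrative assumptions.

LARGE_TRANSFER_THRESHOLD = 100_000        # USD-equivalent, assumed policy value
KNOWN_ILLICIT = {"0xillicit1"}            # placeholder blocklist entry

def flag_transfer(tx: dict) -> list[str]:
    """Return the list of alert reasons raised by a single transfer."""
    alerts = []
    if tx["amount"] >= LARGE_TRANSFER_THRESHOLD:
        alerts.append("large_transfer")
    if tx["sender"] in KNOWN_ILLICIT or tx["recipient"] in KNOWN_ILLICIT:
        alerts.append("illicit_counterparty")
    # Rapid exchange-to-exchange hops can indicate wash trading.
    if tx.get("sender_type") == "exchange" and tx.get("recipient_type") == "exchange":
        alerts.append("exchange_to_exchange")
    return alerts

tx = {"amount": 250_000, "sender": "0xabc", "recipient": "0xdef",
      "sender_type": "exchange", "recipient_type": "exchange"}
print(flag_transfer(tx))  # ['large_transfer', 'exchange_to_exchange']
```

Real monitoring stacks layer many more rules (velocity checks, structuring detection, peer-group anomaly scores) and feed alerts into case-management workflows rather than printing them.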

Wallet Clustering and Entity Attribution techniques help identify the real world entities behind blockchain addresses. By analyzing transaction patterns, timing, and common input addresses, analytics firms can cluster related wallets and often attribute them to specific exchanges, services, or even individuals. This capability is crucial for understanding who holds stablecoins and where they’re being used.
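
One of the core clustering techniques mentioned above, the common-input-ownership heuristic, can be sketched with a union-find structure: addresses that co-sign the same transaction are assumed to belong to one entity. The transaction data here is made up for illustration.

```python
# Common-input-ownership heuristic as a union-find sketch.
# Input: each transaction is a list of its input addresses.

def cluster_wallets(transactions):
    parent = {}

    def find(a):
        parent.setdefault(a, a)
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    for inputs in transactions:
        for addr in inputs:
            find(addr)                    # register every address seen
        for addr in inputs[1:]:
            union(inputs[0], addr)        # co-spent inputs share an owner

    clusters = {}
    for addr in parent:
        clusters.setdefault(find(addr), set()).add(addr)
    return list(clusters.values())

# A and B co-sign one tx, B and C another: all three merge. D stands alone.
print(cluster_wallets([["A", "B"], ["B", "C"], ["D"]]))
```

Analytics firms combine this with timing analysis, exchange deposit-address patterns, and manual attribution to map clusters to named entities.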

Reserve Monitoring and Attestation focuses on the issuer side. Independent auditors and blockchain analysis firms track the total supply of stablecoins and verify that corresponding reserves exist. Circle, for instance, publishes monthly attestations from accounting firms. Some advanced monitoring systems provide real time transparency by linking on chain supply data with bank account verification.
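
The core check behind reserve monitoring reduces to comparing aggregate on-chain supply against the reserves reported in the latest attestation. A minimal sketch, with made-up figures:

```python
# Illustrative reserve-adequacy check: total circulating supply summed
# across chains versus attested reserves. All numbers are assumptions.

def reserve_ratio(supply_by_chain: dict, attested_reserves: float) -> float:
    total_supply = sum(supply_by_chain.values())
    return attested_reserves / total_supply

supply = {"ethereum": 24_000_000_000,
          "solana":    8_000_000_000,
          "tron":      3_000_000_000}

ratio = reserve_ratio(supply, attested_reserves=35_200_000_000)
assert ratio >= 1.0, "reserves do not fully back circulating supply"
print(f"{ratio:.4f}")  # 1.0057 — slightly over-collateralized
```

In practice the hard part is not the arithmetic but trustworthy inputs: reliable multi-chain supply feeds and independently verified bank and Treasury holdings.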

Cross Chain Tracking has become essential as stablecoins exist across multiple blockchains. USDC and USDT operate on Ethereum, Tron, Solana, and other chains, requiring monitoring solutions that aggregate data across these ecosystems to provide a complete picture of flows.

Market Intelligence and Risk Assessment platforms combine on chain data with off chain information to assess concentration risk, identify potential market manipulation, and provide early warning of potential instability. When a small number of addresses hold large stablecoin positions, it creates systemic risk that monitoring can help quantify.

Banks and financial institutions implementing stablecoins typically deploy a combination of commercial blockchain analytics platforms, custom monitoring systems, and compliance teams trained in cryptocurrency investigation. The goal is achieving the same level of financial crime prevention and risk management that exists in traditional banking while adapting to the unique characteristics of blockchain technology.

5. How Regulators View Stablecoins

Regulatory attitudes toward stablecoins vary significantly across jurisdictions, but common themes and concerns have emerged globally.

United States Regulatory Approach involves multiple agencies with overlapping jurisdictions. The Securities and Exchange Commission (SEC) has taken the position that some stablecoins may be securities, particularly those offering yield or governed by investment contracts. The Commodity Futures Trading Commission (CFTC) views certain stablecoins as commodities. The Treasury Department and the Financial Stability Oversight Council have identified stablecoins as potential systemic risks requiring bank like regulation.

Proposed legislation in the US Congress has sought to create a comprehensive framework requiring stablecoin issuers to maintain high quality liquid reserves, submit to regular audits, and potentially obtain banking charters or trust company licenses. The regulatory preference is clearly toward treating major stablecoin issuers as financial institutions subject to banking supervision.

European Union Regulation has taken a more structured approach through the Markets in Crypto Assets (MiCA) regulation, which came into effect in 2024. MiCA establishes clear requirements for stablecoin issuers including reserve asset quality standards, redemption rights for holders, capital requirements, and governance standards. The regulation distinguishes between smaller stablecoin operations and “significant” stablecoins that require more stringent oversight due to their systemic importance.

United Kingdom Regulators are developing a framework that treats stablecoins used for payments as similar to traditional payment systems. The Bank of England and Financial Conduct Authority have indicated that stablecoin issuers should meet standards comparable to commercial banks, including holding reserves in central bank accounts or high quality government securities.

Asian Regulatory Perspectives vary widely. Singapore’s Monetary Authority has created a licensing regime for stablecoin issuers focused on reserve management and redemption guarantees. Hong Kong is developing similar frameworks. China has banned private stablecoins entirely while developing its own central bank digital currency. Japan requires stablecoin issuers to be licensed banks or trust companies.

Key Regulatory Concerns consistently include systemic risk (the failure of a major stablecoin could trigger broader financial instability), consumer protection (ensuring holders can redeem stablecoins for fiat currency), anti money laundering compliance, reserve adequacy and quality, concentration risk in the Treasury market (if stablecoin reserves significantly increase holdings of government securities), and the potential for stablecoins to facilitate capital flight or undermine monetary policy.

Central Bank Digital Currencies (CBDCs) represent a regulatory response to private stablecoins. Many central banks are developing or piloting digital currencies partly to provide a public alternative to private stablecoins, allowing governments to maintain monetary sovereignty while capturing the benefits of digital currency.

The regulatory trend is clearly toward treating stablecoins as systemically important financial infrastructure requiring oversight comparable to banks or payment systems, with an emphasis on reserve quality, redemption rights, and anti money laundering compliance.

5.1 How Stablecoins Impact the Correspondent Banking Model

Stablecoins pose both opportunities and existential challenges to the traditional correspondent banking system that has dominated international payments for decades.

The Traditional Correspondent Banking Model relies on a network of banking relationships where banks hold accounts with each other to facilitate international transfers. When a business in Brazil wants to pay a supplier in Thailand, the payment typically flows through multiple intermediary banks, each taking fees and adding delays. This system involves currency conversion, compliance checks at multiple points, and settlement risk, making international payments slow and expensive.

Stablecoins as Direct Competition offer a fundamentally different model. A business can send USDC directly to a recipient anywhere in the world in minutes, bypassing the correspondent banking network entirely. The recipient can then convert to local currency through a local exchange or payment processor. This disintermediation threatens the fee generating correspondent banking relationships that have been profitable for banks, particularly in remittance corridors and business to business payments.

Cost and Speed Advantages are significant. Traditional correspondent banking involves fees at multiple layers, often totaling 3-7% for remittances and 1-3% for business payments, with settlement taking 1-5 days. Stablecoin transfers can cost less than 1% including conversion fees, with settlement in minutes. This efficiency gap puts pressure on banks to either adopt stablecoin technology or risk losing payment volume.
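
To make the efficiency gap concrete, here is back-of-envelope arithmetic using the fee ranges quoted above (3-7% for correspondent-banking remittances versus under 1% for stablecoin rails):

```python
# Pure arithmetic on the article's quoted fee ranges; the $10,000
# remittance amount is an illustrative assumption.

amount = 10_000  # USD remittance

correspondent_cost = amount * 0.05   # midpoint of the 3-7% range
stablecoin_cost = amount * 0.01      # upper bound of "less than 1%"

saving = correspondent_cost - stablecoin_cost
print(saving)  # 400.0 saved per transfer, before FX conversion costs
```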

The Disintermediation Threat extends beyond just payments. Correspondent banking generates substantial revenue for major international banks through foreign exchange spreads, service fees, and liquidity management. If businesses and individuals can hold and transfer value in stablecoins, they become less dependent on banks for international transactions. This is particularly threatening in high volume, low margin corridors where efficiency matters most.

Banks Adapting Through Integration represents one response to this threat. Rather than being displaced, some banks are incorporating stablecoins into their service offerings. They can issue their own stablecoins, partner with stablecoin issuers to provide on ramps and off ramps, or offer custody and transaction services for corporate clients wanting to use stablecoins. JPMorgan’s JPM Coin exemplifies this approach, using blockchain technology and stablecoin principles for institutional payments within a bank controlled system.

The Hybrid Model Emerging in practice combines stablecoins with traditional banking. Banks provide the fiat on ramps and off ramps, regulatory compliance, customer relationships, and local currency conversion, while stablecoins handle the actual transfer of value. This partnership model allows banks to maintain their customer relationships and regulatory compliance role while capturing efficiency gains from blockchain technology.

Regulatory Arbitrage Concerns arise because stablecoins can sometimes operate with less regulatory burden than traditional correspondent banking. Banks face extensive anti money laundering requirements, capital requirements, and regulatory scrutiny. If stablecoins provide similar services with lighter regulation, they gain a competitive advantage that regulators are increasingly seeking to eliminate through tighter stablecoin oversight.

Settlement Risk and Liquidity Management change fundamentally with stablecoins. Traditional correspondent banking requires banks to maintain nostro accounts (accounts held in foreign banks) prefunded with liquidity. Stablecoins allow for near instant settlement without prefunding requirements, potentially freeing up billions in trapped liquidity that banks currently must maintain across the correspondent network.

The long term impact will likely involve correspondent banking evolving rather than disappearing. Banks will increasingly serve as regulated gateways between fiat currency and stablecoins, while stablecoins handle the actual transfer of value. The most vulnerable players are mid tier correspondent banks that primarily provide routing services without strong customer relationships or value added services.

5.2 How FATF Standards Apply to Stablecoins

The Financial Action Task Force (FATF) provides international standards for combating money laundering and terrorist financing, and these standards have been extended to cover stablecoins and other virtual assets.

The Travel Rule represents the most significant FATF requirement affecting stablecoins. Originally designed for traditional wire transfers, the Travel Rule requires that information about the originator and beneficiary of transfers above a certain threshold (typically $1,000) must travel with the transaction. For stablecoins, this means that Virtual Asset Service Providers (VASPs) such as exchanges, wallet providers, and payment processors must collect and transmit customer information when facilitating stablecoin transfers.

Implementing the Travel Rule on public blockchains creates technical challenges. While bank wire transfers pass through controlled systems where information can be attached, blockchain transactions are peer to peer and pseudonymous. The industry has developed solutions like the Travel Rule Information Sharing Architecture (TRISA) and other protocols that allow VASPs to exchange customer information securely off chain while the stablecoin transaction occurs on chain.
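
The gate a VASP applies before releasing a transfer can be sketched as follows. The field names are assumptions for illustration; the actual exchange of this data between VASPs happens off chain, via protocols such as TRISA.

```python
# Minimal Travel Rule gate (sketch): transfers at or above the threshold
# must carry originator and beneficiary records before release.

TRAVEL_RULE_THRESHOLD = 1_000  # USD, per the FATF guidance discussed above

def travel_rule_complete(transfer: dict) -> bool:
    if transfer["amount_usd"] < TRAVEL_RULE_THRESHOLD:
        return True  # below threshold: no counterparty data required
    required = {"originator_name", "originator_account",
                "beneficiary_name", "beneficiary_account"}
    return required <= transfer.get("pii", {}).keys()

small = {"amount_usd": 250, "pii": {}}
large = {"amount_usd": 5_000,
         "pii": {"originator_name": "Acme Ltd", "originator_account": "0xabc",
                 "beneficiary_name": "Bee GmbH", "beneficiary_account": "0xdef"}}

print(travel_rule_complete(small), travel_rule_complete(large))  # True True
```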

Know Your Customer (KYC) and Customer Due Diligence requirements apply to any entity that provides services for stablecoin transactions. Exchanges, wallet providers, and payment processors must verify customer identities, assess risk levels, and maintain records of transactions. This requirement creates a tension with the permissionless nature of blockchain technology, where anyone can hold a self hosted wallet and transact directly without intermediaries.

VASP Registration and Licensing is required in most jurisdictions following FATF guidance. Any business providing stablecoin custody, exchange, or transfer services must register with financial authorities, implement anti money laundering programs, and submit to regulatory oversight. This has created significant compliance burdens for smaller operators and driven consolidation toward larger, well capitalized platforms.

Stablecoin Issuers as VASPs are generally classified as Virtual Asset Service Providers under FATF standards, subjecting them to the full range of anti money laundering and counter terrorist financing obligations. This includes transaction monitoring, suspicious activity reporting, and sanctions screening. Major issuers like Circle and Paxos have built sophisticated compliance programs comparable to traditional financial institutions.

The Self Hosted Wallet Challenge represents a key friction point. FATF has expressed concern about transactions involving self hosted (non custodial) wallets where users control their own private keys without intermediary oversight. Some jurisdictions have proposed restricting or requiring enhanced due diligence for transactions between VASPs and self hosted wallets, though this remains controversial and difficult to enforce technically.

Cross Border Coordination is essential but challenging. Stablecoins operate globally and instantly, but regulatory enforcement is jurisdictional. FATF promotes information sharing between national financial intelligence units and encourages mutual legal assistance. However, gaps in enforcement across jurisdictions create opportunities for regulatory arbitrage, where bad actors operate from jurisdictions with weak oversight.

Sanctions Screening is mandatory for stablecoin service providers. They must screen transactions against lists of sanctioned individuals, entities, and countries maintained by organizations like the US Office of Foreign Assets Control (OFAC). Several stablecoin issuers have demonstrated the ability to freeze funds in wallets associated with sanctioned addresses, showing that even decentralized systems can implement centralized controls when required by law.
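
At its simplest, the pre-transfer screening described above is a blocklist lookup on both counterparties. The addresses here are made up; real systems screen against maintained lists such as OFAC's SDN list and handle fuzzy matching for named entities.

```python
# Sketch of pre-transfer sanctions screening. The blocklist contents
# are placeholders, not real sanctioned addresses.

SANCTIONED_ADDRESSES = {"0xsanctioned1", "0xsanctioned2"}

def screen_transfer(sender: str, recipient: str) -> str:
    if sender in SANCTIONED_ADDRESSES or recipient in SANCTIONED_ADDRESSES:
        return "BLOCK"   # freeze/reject and file a report
    return "ALLOW"

print(screen_transfer("0xgood", "0xsanctioned1"))  # BLOCK
print(screen_transfer("0xgood", "0xalso_good"))    # ALLOW
```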

Risk Based Approach is fundamental to FATF methodology. Service providers must assess the money laundering and terrorist financing risks specific to their operations and implement controls proportionate to those risks. For stablecoins, this means considering factors like transaction volumes, customer types, geographic exposure, and the underlying blockchain’s anonymity features.

Challenges in Implementation are significant. The pseudonymous nature of blockchain transactions makes it difficult to identify ultimate beneficial owners. The speed and global reach of stablecoin transfers compress the time window for intervention. The prevalence of decentralized exchanges and peer to peer transactions creates enforcement gaps. Some argue that excessive regulation will drive activity to unregulated platforms or privacy focused cryptocurrencies, making financial crime harder rather than easier to detect.

The FATF framework essentially attempts to impose traditional financial system controls on a technology designed to operate without intermediaries. While large, regulated stablecoin platforms can implement these requirements, the tension between regulatory compliance and the permissionless nature of blockchain technology remains unresolved and continues to drive both technological innovation and regulatory evolution.

6. Good Use Cases for Stablecoins

Despite the risks, stablecoins excel in several legitimate applications that offer clear advantages over traditional alternatives.

Cross border payments and remittances benefit enormously from stablecoins. Workers sending money home can avoid high fees and long delays, with transactions settling in minutes rather than days. Businesses conducting international trade can reduce costs and streamline operations significantly.

Treasury management for crypto native companies provides a practical use case. Cryptocurrency exchanges, blockchain projects, and Web3 companies need stable assets for operations while staying within the crypto ecosystem. Stablecoins let them hold working capital without exposure to crypto volatility.

Decentralized finance (DeFi) applications rely heavily on stablecoins. They enable lending and borrowing, yield farming, liquidity provision, and trading without the complications of volatile assets. Users can earn interest on stablecoin deposits or use them as collateral for loans.

Hedging against local currency instability makes stablecoins valuable in countries experiencing hyperinflation or currency crises. Citizens can preserve purchasing power by holding dollar backed stablecoins instead of rapidly devaluing local currencies.

Programmable payments and smart contracts benefit from stablecoins. Businesses can automate payments based on conditions (such as releasing funds when goods are received) or create subscription services, escrow arrangements, and other complex payment structures that execute automatically.
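
The conditional-release pattern described above can be illustrated with a toy escrow state machine. This is a plain-Python sketch of the logic, not a real smart contract; on chain it would be written in a contract language such as Solidity.

```python
# Toy escrow illustrating condition-based release: funds move to the
# seller only once delivery is confirmed. Names are illustrative.

class Escrow:
    def __init__(self, buyer: str, seller: str, amount: int):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.goods_received = False

    def confirm_delivery(self):
        self.goods_received = True

    def release(self):
        # The release condition gates the payment.
        if not self.goods_received:
            raise RuntimeError("delivery not yet confirmed")
        return (self.seller, self.amount)

e = Escrow("alice", "bob", 500)
e.confirm_delivery()
print(e.release())  # ('bob', 500)
```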

Ecommerce and online payments increasingly accept stablecoins as they combine the low fees of cryptocurrency with price stability. This is particularly valuable for digital goods, online services, and merchant payments where volatility would be problematic.

6.1 Companies Specializing in Banking Stablecoin Integration

Several companies have emerged as leaders in helping traditional banks launch and integrate stablecoin solutions into their existing infrastructure.

Paxos is a regulated blockchain infrastructure company that provides white label stablecoin solutions for financial institutions. They’ve partnered with major companies to issue stablecoins and offer compliance focused infrastructure that meets banking regulatory requirements. Paxos handles the technical complexity while allowing banks to maintain their customer relationships.

Circle offers comprehensive business account services and APIs that enable banks to integrate USD Coin (USDC) into their platforms. Their developer friendly tools and banking partnerships have made them a go to provider for institutions wanting to offer stablecoin services. Circle emphasizes regulatory compliance and transparency with regular reserve attestations.

Fireblocks provides institutional grade infrastructure for banks looking to offer digital asset services, including stablecoins. Their platform handles custody, treasury operations, and connectivity to various blockchains, allowing banks to offer stablecoin functionality without building everything from scratch.

Taurus specializes in digital asset infrastructure for banks, wealth managers, and other financial institutions in Europe. They provide technology for custody, tokenization, and trading that enables traditional financial institutions to offer stablecoin services within existing regulatory frameworks.

Sygnum operates as a Swiss digital asset bank and offers banking as a service solutions. They help other banks integrate digital assets including stablecoins while ensuring compliance with Swiss banking regulations. Their approach combines traditional banking security with blockchain innovation.

Ripple has expanded beyond its cryptocurrency focus to offer enterprise blockchain solutions for banks, including infrastructure for stablecoin issuance and cross border payment solutions. Their partnerships with financial institutions worldwide position them as a bridge between traditional banking and blockchain technology.

BBVA and JPMorgan have also developed proprietary solutions (JPM Coin for JPMorgan) that other institutions might license or use as models, though these are typically more focused on their own operations and select partners.

7. The Bid Offer Spread Challenge: Liquidity vs. True 1:1 Conversions

One of the hidden costs in stablecoin adoption that significantly impacts user economics is the bid offer spread applied during conversions between fiat currency and stablecoins. While stablecoins are designed to maintain a 1:1 peg with their underlying asset (typically the US dollar), the reality of converting between fiat and crypto introduces market dynamics that can erode this theoretical parity.

7.1 Understanding the Spread Problem

When users convert fiat currency to stablecoins or vice versa through most platforms, they encounter a bid offer spread: the difference between the buying price and the selling price. Even though USDC or USDT theoretically equals $1.00, a platform might effectively charge $1.008 to buy a stablecoin and offer only $0.992 when selling it back. This 0.8% to 1.5% spread represents a significant friction cost, particularly for businesses making frequent conversions or moving large amounts.
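
Using the example quotes above ($1.008 to buy, $0.992 to sell), the round-trip cost works out like this; the $10,000 starting amount is an illustrative assumption:

```python
# Round-trip cost implied by the example quotes: buy stablecoins with
# fiat, then immediately sell them back at the platform's bid.

fiat_in = 10_000.00
buy_price, sell_price = 1.008, 0.992   # quotes from the text

coins = fiat_in / buy_price            # stablecoins received
fiat_out = coins * sell_price          # fiat after selling back

loss_pct = (fiat_in - fiat_out) / fiat_in * 100
print(f"{loss_pct:.2f}%")  # 1.59% lost to the spread on a round trip
```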

This spread exists because most platforms operate market making models where they must maintain liquidity on both sides of the transaction. Holding inventory of both fiat and stablecoins involves costs: capital tied up in reserves, exposure to brief depegging events, regulatory compliance overhead, and the operational expense of managing banking relationships for fiat on ramps and off ramps. Platforms traditionally recover these costs through the spread rather than explicit fees.

For cryptocurrency exchanges and most fintech platforms, the spread also serves as their primary revenue mechanism for stablecoin conversions. When a platform facilitates thousands or millions of conversions daily, even small spreads generate substantial income. The spread compensates for the risk that during periods of market stress, stablecoins might temporarily trade below their peg, leaving the platform holding depreciated assets.

7.2 The Impact on Users and Business Operations

The cumulative effect of bid offer spreads becomes particularly painful for certain use cases. Small and medium sized businesses operating across borders face multiple conversion points: exchanging local currency to USD, converting USD to stablecoins for cross border transfer, then converting stablecoins back to USD or local currency at the destination. Each conversion compounds the cost, potentially consuming 2% to 4% of the transaction value when combined with traditional banking fees.

For businesses using stablecoins as working capital (converting payroll, managing treasury operations, or settling international invoices), the spread can eliminate much of the cost advantage that stablecoins are supposed to provide over traditional correspondent banking. A company converting $100,000 might effectively pay $1,500 in spread costs on a round trip conversion, comparable to the traditional wire transfer fees that stablecoins aimed to disrupt.

Individual users in countries with unstable currencies face similar challenges. While holding USDT or USDC protects against local currency devaluation, the cost of frequently moving between local currency and stablecoins can be prohibitive. The spread becomes a “tax” on financial stability that disproportionately affects those who can least afford it.

7.3 Revolut’s 1:1 Model: Internalizing the Cost

Revolut’s recent introduction of true 1:1 conversions between USD and stablecoins (USDC and USDT) represents a fundamentally different approach to solving the spread problem. Rather than passing market making costs to users, Revolut absorbs the spread internally, guaranteeing that $1.00 in fiat equals exactly 1.00 stablecoin units in both directions, with no hidden markups.

This model is economically viable for Revolut because of several structural advantages. First, as a neobank with 65 million users and existing banking infrastructure, Revolut already maintains substantial fiat currency liquidity and doesn't need to rely on external banking partners for every stablecoin conversion. Second, the company generates revenue from other services within its ecosystem (subscription fees, interchange fees from card spending, interest on deposits), allowing it to treat stablecoin conversions as a loss leader or break even feature that enhances customer retention and platform stickiness.

Third, by setting a monthly limit of approximately $578,000 per customer, Revolut manages its risk exposure while still accommodating the vast majority of retail and small business use cases. This prevents arbitrage traders from exploiting the zero spread model to make risk free profits by moving large volumes between Revolut and other platforms where spreads exist.
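
The cap bounds arbitrage exposure because the risk-free profit from a zero-spread venue scales linearly with volume. A quick sketch, where the $1.002 secondary-market quote is a hypothetical assumption:

```python
# Why a per-customer cap matters: buy at exactly 1.000 on the zero-spread
# venue, sell at a slightly-above-par quote elsewhere. Profit is linear
# in volume, so the cap limits the exploit.

monthly_cap = 578_000          # approximate limit quoted above, USD
secondary_bid = 1.002          # hypothetical quote on another platform

profit = monthly_cap * (secondary_bid - 1.0)
print(round(profit, 2))  # 1156.0 per customer per month, bounded by the cap
```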

Revolut essentially bets that the value of removing friction from fiat to crypto conversions (thereby making stablecoins genuinely useful as working capital rather than speculative assets) will drive sufficient user engagement and platform growth to justify the cost of eliminating spreads. For users, this transforms the economics of stablecoin usage, particularly for frequent converters or those operating in high currency volatility environments.

7.4 Why Not Everyone Can Offer 1:1 Conversions

The challenge for smaller platforms and pure cryptocurrency exchanges is that they lack Revolut's structural advantages. A standalone crypto exchange without banking licenses and integrated fiat services must partner with banks for fiat on ramps, pay fees to those partners, maintain separate liquidity pools, and manage the regulatory complexity of operating in multiple jurisdictions. These costs don't disappear simply because users want better rates; they must be recovered somehow.

Additionally, maintaining tight spreads or true 1:1 conversions requires deep liquidity and sophisticated risk management. When thousands of users simultaneously want to exit stablecoins during market stress, a platform must have sufficient reserves to honor redemptions instantly without moving the price. Smaller platforms operating with thin liquidity buffers cannot safely eliminate spreads without risking insolvency during volatile periods.

The market structure for stablecoins also presents challenges. While stablecoins theoretically maintain 1:1 pegs, secondary market prices on decentralized exchanges and between different platforms can vary by small amounts. A platform offering guaranteed 1:1 conversions must either hold sufficient reserves to absorb these variations or accept that arbitrage traders will exploit any price discrepancies, potentially draining liquidity.

7.5 The Competitive Implications

Revolut’s move to zero spread stablecoin conversions could trigger a competitive dynamic in the fintech space, similar to how its original zero fee foreign exchange offering disrupted traditional currency conversion. Established players like Coinbase, Kraken, and other major exchanges will face pressure to reduce their spreads or explain why their costs remain higher.

For traditional banks contemplating stablecoin integration, the spread question becomes strategic. Banks could follow the Revolut model, absorbing spread costs to drive adoption and maintain customer relationships in an increasingly crypto integrated financial system. Alternatively, they might maintain spreads but offer other value added services that justify the cost, such as enhanced compliance, insurance on holdings, or integration with business treasury management systems.

The long term outcome may be market segmentation. Large, integrated fintech platforms with diverse revenue streams can offer true 1:1 conversions as a competitive advantage. Smaller, specialized platforms will continue operating with spreads but may differentiate through speed, blockchain coverage, or serving specific niches like high volume traders who value depth of liquidity over tight spreads.

For stablecoin issuers like Circle and Tether, the spread dynamics affect their business indirectly. Wider spreads on third party platforms create friction that slows stablecoin adoption, reducing the total assets under management that generate interest income for issuers. Partnerships with platforms offering tighter spreads or true 1:1 conversions could accelerate growth, even if those partnerships involve revenue sharing or other commercial arrangements.

Ultimately, the bid offer spread challenge highlights a fundamental tension in stablecoin economics: the gap between the theoretical promise of 1:1 value stability and the practical costs of maintaining liquidity, managing risk, and operating the infrastructure that connects fiat currency to blockchain based assets. Platforms that can bridge this gap efficiently, whether through scale, integration, or innovative business models, will have significant competitive advantages as stablecoins move from crypto native use cases into mainstream financial infrastructure.

8. Conclusion

Stablecoins represent a significant innovation in digital finance, offering the benefits of cryptocurrency without extreme volatility. They’ve found genuine utility in payments, remittances, and decentralized finance while generating substantial revenue for issuers through interest on reserves. However, they also carry real risks around reserve transparency, regulatory uncertainty, and potential fraud that users and institutions must carefully consider.

The regulatory landscape is rapidly evolving, with authorities worldwide moving toward treating stablecoins as systemically important financial infrastructure requiring bank like oversight. FATF standards impose traditional anti money laundering requirements on stablecoin service providers, creating compliance obligations comparable to traditional finance. Meanwhile, sophisticated monitoring tools have emerged to track flows, detect illicit activity, and ensure reserve adequacy.

For traditional banks, stablecoins represent both a competitive threat to correspondent banking models and an opportunity to modernize payment infrastructure. Rather than being displaced entirely, banks are increasingly positioning themselves as regulated gateways between fiat currency and stablecoins, maintaining customer relationships and compliance functions while leveraging blockchain efficiency.

For banks considering stablecoin integration, working with established infrastructure providers can mitigate technical and compliance challenges. The key is choosing use cases where stablecoins offer clear advantages, particularly in cross border payments and treasury management, while implementing robust risk management, transaction monitoring, and ensuring regulatory compliance with both traditional financial regulations and emerging crypto specific frameworks.

As the regulatory landscape evolves and technology matures, stablecoins are likely to become increasingly integrated into mainstream financial services. Their success will depend on maintaining trust through transparency, security, and regulatory cooperation while continuing to deliver value that traditional financial rails cannot match. The future likely involves a hybrid model where stablecoins and traditional banking coexist, each playing to their respective strengths in a more efficient, global financial system.