Stablecoins: A Comprehensive Guide

1. What Are Stablecoins?

Stablecoins are a type of cryptocurrency designed to maintain a stable value by pegging themselves to a reserve asset, typically a fiat currency like the US dollar. Unlike volatile cryptocurrencies such as Bitcoin or Ethereum, which can experience dramatic price swings, stablecoins aim to provide the benefits of digital currency without the price volatility.

The most common types of stablecoins include:

Fiat collateralized stablecoins are backed by traditional currencies held in reserve at a 1:1 ratio. Examples include Tether (USDT) and USD Coin (USDC), which maintain reserves in US dollars or dollar equivalent assets.

Crypto collateralized stablecoins use other cryptocurrencies as collateral, often over collateralized to account for volatility. DAI is a prominent example, backed by Ethereum and other crypto assets.

Algorithmic stablecoins attempt to maintain their peg through automated supply adjustments based on market demand, without traditional collateral backing. These have proven to be the most controversial and risky category.
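To make the over-collateralization mechanic concrete, here is a minimal Python sketch. The collateral asset, the prices and the 150% minimum ratio are invented for illustration and are not any protocol's actual parameters:

def collateral_ratio(collateral_amount, collateral_price, debt_stablecoins):
    """Value of locked collateral divided by the stablecoin debt it backs."""
    return (collateral_amount * collateral_price) / debt_stablecoins

MIN_RATIO = 1.5  # hypothetical 150% minimum collateralization

eth_locked = 10       # units of collateral locked in the vault
debt = 10_000         # stablecoins minted against it

for eth_price in (2_000, 1_600, 1_400):
    ratio = collateral_ratio(eth_locked, eth_price, debt)
    status = "safe" if ratio >= MIN_RATIO else "below minimum: eligible for liquidation"
    print(f"Collateral price ${eth_price}: ratio {ratio:.2f} -> {status}")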

2. Why Do Stablecoins Exist?

Stablecoins emerged to solve several critical problems in both traditional finance and the cryptocurrency ecosystem.

In the crypto world, they provide a stable store of value and medium of exchange. Traders use stablecoins to move in and out of volatile positions without converting back to fiat currency, avoiding the delays and fees associated with traditional banking. They serve as a safe harbor during market turbulence and enable seamless transactions across different blockchain platforms.

For cross border payments and remittances, stablecoins offer significant advantages over traditional methods. International transfers that typically take days and cost substantial fees can be completed in minutes for a fraction of the cost. This makes them particularly valuable for workers sending money to families in other countries or businesses conducting international trade.

Stablecoins also address financial inclusion challenges. In countries with unstable currencies or limited banking infrastructure, they provide access to a stable digital currency that can be held and transferred using just a smartphone. This opens up financial services to the unbanked and underbanked populations worldwide.

3. How Do Stablecoins Make Money?

Stablecoin issuers have developed several revenue models that can be remarkably profitable.

The primary revenue source for fiat backed stablecoins is interest on reserves. When issuers hold billions of dollars in US Treasury bills or other interest bearing assets backing their stablecoins, they earn substantial returns. For instance, with interest rates at 5%, a stablecoin issuer with $100 billion in reserves could generate $5 billion annually while still maintaining the 1:1 peg. Users typically receive no interest on their stablecoin holdings, allowing issuers to pocket the entire yield.
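As a quick back-of-the-envelope check of the arithmetic above, using only the figures already quoted in this section:

# Figures from the example above: $100 billion in reserves earning 5%.
reserves_usd = 100_000_000_000
treasury_yield = 0.05

annual_issuer_revenue = reserves_usd * treasury_yield
print(f"Annual yield kept by the issuer: ${annual_issuer_revenue:,.0f}")  # $5,000,000,000
print("Interest typically passed on to holders: $0")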

Transaction fees represent another revenue stream. While often minimal, the sheer volume of stablecoin transactions generates significant income. Some issuers charge fees for minting (creating) or redeeming stablecoins, particularly for large institutional transactions.

Premium services for institutional clients provide additional revenue. Banks, payment processors, and large enterprises often pay for faster settlement, higher transaction limits, dedicated support, and integration services.

Many stablecoin platforms also generate revenue through their broader ecosystem. This includes charging fees on decentralized exchanges, lending protocols, or other financial services built around the stablecoin.

3.1 The Pendle Revenue Model: Yield Trading Innovation

Pendle represents an innovative evolution in the DeFi stablecoin ecosystem through its yield trading protocol. Rather than issuing stablecoins directly, Pendle creates markets for trading future yield on stablecoin deposits and other interest bearing assets.

The Pendle revenue model operates through several mechanisms. The protocol charges trading fees on its automated market makers (AMMs), typically around 0.1% to 0.3% per swap. When users trade yield tokens on Pendle’s platform, a portion of these fees goes to the protocol treasury while another portion rewards liquidity providers who supply capital to the trading pools.

Pendle’s unique approach involves splitting interest bearing tokens into two components: the principal token (PT) representing the underlying asset, and the yield token (YT) representing the future interest. This separation allows sophisticated users to speculate on interest rates, hedge yield exposure, or lock in fixed returns on their stablecoin holdings.
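A highly simplified Python sketch of the principal/yield split described above. This is not Pendle's contract logic or pricing model; the class, the flat simple-interest assumption and the numbers are illustrative only:

from dataclasses import dataclass

@dataclass
class YieldBearingDeposit:
    principal: float          # e.g. 1,000 stablecoins deposited
    apy: float                # expected yield, e.g. 0.05 for 5%
    years_to_maturity: float

def split(deposit: YieldBearingDeposit):
    """Split a deposit into a principal claim (PT) and a yield claim (YT)."""
    expected_yield = deposit.principal * deposit.apy * deposit.years_to_maturity
    pt_face_value = deposit.principal   # redeemable 1:1 at maturity
    yt_claim = expected_yield           # claim on the interest stream until maturity
    return pt_face_value, yt_claim

pt, yt = split(YieldBearingDeposit(principal=1_000, apy=0.05, years_to_maturity=1.0))
# Buying PT below face value locks in a fixed return; buying YT is a bet that
# realised yield will exceed the price paid for the claim.
print(f"PT face value: {pt}, expected YT cash flows: {yt}")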

The protocol generates revenue through swap fees, redemption fees when tokens mature, and potential governance token value capture as the protocol grows. This model demonstrates how stablecoin adjacent services can create profitable businesses by adding layers of financial sophistication on top of basic stablecoin infrastructure. Pendle particularly benefits during periods of high interest rates, when demand for yield trading increases and the potential returns from separating yield rights become more valuable.

4. Security and Fraud Concerns

Stablecoins face several critical security and fraud challenges that potential users and regulators must consider.

Reserve transparency and verification remain the most significant concern. Issuers must prove they actually hold the assets backing their stablecoins. Several controversies have erupted when stablecoin companies failed to provide clear, audited proof of reserves. The risk is that an issuer might not have sufficient backing, leading to a bank run scenario where the peg collapses and users cannot redeem their coins.

Smart contract vulnerabilities pose technical risks. Stablecoins built on blockchain platforms rely on code that, if flawed, can be exploited by hackers. Major hacks have resulted in hundreds of millions of dollars in losses, and once stolen, blockchain transactions are typically irreversible.

Regulatory uncertainty creates ongoing challenges. Different jurisdictions treat stablecoins differently, and the lack of clear, consistent regulation creates risks for both issuers and users. There’s potential for sudden regulatory action that could freeze assets or shut down operations.

Counterparty risk is inherent in centralized stablecoins. Users must trust the issuing company to maintain reserves, operate honestly, and remain solvent. If the company fails or acts fraudulently, users may lose their funds with limited recourse.

The algorithmic stablecoin model has proven particularly vulnerable. The catastrophic collapse of TerraUSD in 2022, which lost over $40 billion in value, demonstrated that algorithmic mechanisms can fail spectacularly under market stress, creating devastating losses for holders.

Money laundering and sanctions evasion concerns have drawn regulatory scrutiny. The pseudonymous nature of cryptocurrency transactions makes stablecoins attractive for illicit finance, though blockchain’s transparent ledger also makes transactions traceable with proper tools and cooperation.

4.1 Monitoring Stablecoin Flows

Effective monitoring of stablecoin flows has become critical for financial institutions, regulators, and the issuers themselves to ensure compliance, detect fraud, and understand market dynamics.

On Chain Analytics Tools provide the foundation for stablecoin monitoring. Since most stablecoins operate on public blockchains, every transaction is recorded and traceable. Companies like Chainalysis, Elliptic, and TRM Labs specialize in blockchain analytics, offering platforms that track stablecoin movements across wallets and exchanges. These tools can identify patterns, flag suspicious activities, and trace funds through complex transaction chains.

Real Time Transaction Monitoring systems alert institutions to potentially problematic flows. These systems track large transfers, unusual transaction patterns, rapid movement between exchanges (potentially indicating wash trading or manipulation), and interactions with known illicit addresses. Financial institutions integrating stablecoins must implement monitoring comparable to traditional payment systems.
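The rule-based screening described here can be sketched in a few lines of Python. The thresholds, the watchlist and the transaction format are made up for the example; real deployments rely on commercial analytics feeds and far richer rule sets:

LARGE_TRANSFER_USD = 1_000_000
KNOWN_ILLICIT = {"0xbad...", "0xmixer..."}   # hypothetical watchlist

def screen_transfer(tx):
    """Return a list of alert reasons for a single stablecoin transfer."""
    alerts = []
    if tx["amount_usd"] >= LARGE_TRANSFER_USD:
        alerts.append("large transfer")
    if tx["from"] in KNOWN_ILLICIT or tx["to"] in KNOWN_ILLICIT:
        alerts.append("counterparty on watchlist")
    if tx.get("hops_last_hour", 0) > 5:
        alerts.append("rapid movement between addresses")
    return alerts

tx = {"from": "0xabc...", "to": "0xmixer...", "amount_usd": 2_500_000, "hops_last_hour": 7}
print(screen_transfer(tx))   # all three rules fire for this example transfer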

Wallet Clustering and Entity Attribution techniques help identify the real world entities behind blockchain addresses. By analyzing transaction patterns, timing, and common input addresses, analytics firms can cluster related wallets and often attribute them to specific exchanges, services, or even individuals. This capability is crucial for understanding who holds stablecoins and where they’re being used.
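One classic clustering heuristic, co-spending (addresses that appear as inputs to the same transaction are assumed to share an owner), can be sketched with a union-find structure. This is a simplified illustration with hypothetical data, not how any particular analytics vendor implements attribution:

parent = {}

def find(a):
    parent.setdefault(a, a)
    while parent[a] != a:
        parent[a] = parent[parent[a]]   # path compression
        a = parent[a]
    return a

def union(a, b):
    parent[find(a)] = find(b)

# Each transaction lists the addresses that jointly signed its inputs (hypothetical data).
transactions = [["A1", "A2"], ["A2", "A3"], ["B1", "B2"]]
for inputs in transactions:
    for addr in inputs[1:]:
        union(inputs[0], addr)

clusters = {}
for addr in parent:
    clusters.setdefault(find(addr), set()).add(addr)
print(list(clusters.values()))   # two clusters: {A1, A2, A3} and {B1, B2}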

Reserve Monitoring and Attestation focuses on the issuer side. Independent auditors and blockchain analysis firms track the total supply of stablecoins and verify that corresponding reserves exist. Circle, for instance, publishes monthly attestations from accounting firms. Some advanced monitoring systems provide real time transparency by linking on chain supply data with bank account verification.
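At its core, reserve monitoring is a reconciliation between circulating on-chain supply and attested reserves. A toy Python check, with invented figures:

def check_backing(onchain_supply, attested_reserves, tolerance=0.0):
    """Flag if circulating stablecoin supply exceeds attested reserves."""
    shortfall = onchain_supply - attested_reserves
    if shortfall > tolerance:
        return f"ALERT: supply exceeds reserves by {shortfall:,.0f}"
    return f"OK: reserves cover supply (buffer {-shortfall:,.0f})"

# Hypothetical figures aggregated across all chains the coin is issued on.
print(check_backing(onchain_supply=32_400_000_000, attested_reserves=32_550_000_000))
print(check_backing(onchain_supply=32_400_000_000, attested_reserves=32_100_000_000))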

Cross Chain Tracking has become essential as stablecoins exist across multiple blockchains. USDC and USDT operate on Ethereum, Tron, Solana, and other chains, requiring monitoring solutions that aggregate data across these ecosystems to provide a complete picture of flows.

Market Intelligence and Risk Assessment platforms combine on chain data with off chain information to assess concentration risk, identify potential market manipulation, and provide early warning of potential instability. When a small number of addresses hold large stablecoin positions, it creates systemic risk that monitoring can help quantify.

Banks and financial institutions implementing stablecoins typically deploy a combination of commercial blockchain analytics platforms, custom monitoring systems, and compliance teams trained in cryptocurrency investigation. The goal is achieving the same level of financial crime prevention and risk management that exists in traditional banking while adapting to the unique characteristics of blockchain technology.

5. How Regulators View Stablecoins

Regulatory attitudes toward stablecoins vary significantly across jurisdictions, but common themes and concerns have emerged globally.

United States Regulatory Approach involves multiple agencies with overlapping jurisdictions. The Securities and Exchange Commission (SEC) has taken the position that some stablecoins may be securities, particularly those offering yield or governed by investment contracts. The Commodity Futures Trading Commission (CFTC) views certain stablecoins as commodities. The Treasury Department and the Financial Stability Oversight Council have identified stablecoins as potential systemic risks requiring bank like regulation.

Proposed legislation in the US Congress has sought to create a comprehensive framework requiring stablecoin issuers to maintain high quality liquid reserves, submit to regular audits, and potentially obtain banking charters or trust company licenses. The regulatory preference is clearly toward treating major stablecoin issuers as financial institutions subject to banking supervision.

European Union Regulation has taken a more structured approach through the Markets in Crypto Assets (MiCA) regulation, which came into effect in 2024. MiCA establishes clear requirements for stablecoin issuers including reserve asset quality standards, redemption rights for holders, capital requirements, and governance standards. The regulation distinguishes between smaller stablecoin operations and “significant” stablecoins that require more stringent oversight due to their systemic importance.

United Kingdom Regulators are developing a framework that treats stablecoins used for payments as similar to traditional payment systems. The Bank of England and Financial Conduct Authority have indicated that stablecoin issuers should meet standards comparable to commercial banks, including holding reserves in central bank accounts or high quality government securities.

Asian Regulatory Perspectives vary widely. Singapore’s Monetary Authority has created a licensing regime for stablecoin issuers focused on reserve management and redemption guarantees. Hong Kong is developing similar frameworks. China has banned private stablecoins entirely while developing its own central bank digital currency. Japan requires stablecoin issuers to be licensed banks or trust companies.

Key Regulatory Concerns consistently include systemic risk (the failure of a major stablecoin could trigger broader financial instability), consumer protection (ensuring holders can redeem stablecoins for fiat currency), anti money laundering compliance, reserve adequacy and quality, concentration risk in the Treasury market (if stablecoin reserves significantly increase holdings of government securities), and the potential for stablecoins to facilitate capital flight or undermine monetary policy.

Central Bank Digital Currencies (CBDCs) represent a regulatory response to private stablecoins. Many central banks are developing or piloting digital currencies partly to provide a public alternative to private stablecoins, allowing governments to maintain monetary sovereignty while capturing the benefits of digital currency.

The regulatory trend is clearly toward treating stablecoins as systemically important financial infrastructure requiring oversight comparable to banks or payment systems, with an emphasis on reserve quality, redemption rights, and anti money laundering compliance.

5.1 How Stablecoins Impact the Correspondent Banking Model

Stablecoins pose both opportunities and existential challenges to the traditional correspondent banking system that has dominated international payments for decades.

The Traditional Correspondent Banking Model relies on a network of banking relationships where banks hold accounts with each other to facilitate international transfers. When a business in Brazil wants to pay a supplier in Thailand, the payment typically flows through multiple intermediary banks, each taking fees and adding delays. This system involves currency conversion, compliance checks at multiple points, and settlement risk, making international payments slow and expensive.

Stablecoins as Direct Competition offer a fundamentally different model. A business can send USDC directly to a recipient anywhere in the world in minutes, bypassing the correspondent banking network entirely. The recipient can then convert to local currency through a local exchange or payment processor. This disintermediation threatens the fee generating correspondent banking relationships that have been profitable for banks, particularly in remittance corridors and business to business payments.

Cost and Speed Advantages are significant. Traditional correspondent banking involves fees at multiple layers, often totaling 3-7% for remittances and 1-3% for business payments, with settlement taking 1-5 days. Stablecoin transfers can cost less than 1% including conversion fees, with settlement in minutes. This efficiency gap puts pressure on banks to either adopt stablecoin technology or risk losing payment volume.
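Using the fee ranges quoted above, a rough comparison for a single $10,000 transfer (illustrative midpoints only; real pricing varies by corridor and provider):

transfer_usd = 10_000

# Midpoints of the ranges quoted above; real pricing varies widely.
correspondent_remittance_fee = 0.05   # within the 3-7% range for remittances
stablecoin_fee = 0.01                 # "less than 1%" including conversion

print(f"Correspondent banking: ${transfer_usd * correspondent_remittance_fee:,.0f} in fees, 1-5 day settlement")
print(f"Stablecoin transfer:   ${transfer_usd * stablecoin_fee:,.0f} in fees, settlement in minutes")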

The Disintermediation Threat extends beyond just payments. Correspondent banking generates substantial revenue for major international banks through foreign exchange spreads, service fees, and liquidity management. If businesses and individuals can hold and transfer value in stablecoins, they become less dependent on banks for international transactions. This is particularly threatening in high volume, low margin corridors where efficiency matters most.

Banks Adapting Through Integration represents one response to this threat. Rather than being displaced, some banks are incorporating stablecoins into their service offerings. They can issue their own stablecoins, partner with stablecoin issuers to provide on ramps and off ramps, or offer custody and transaction services for corporate clients wanting to use stablecoins. JPMorgan’s JPM Coin exemplifies this approach, using blockchain technology and stablecoin principles for institutional payments within a bank controlled system.

The Hybrid Model Emerging in practice combines stablecoins with traditional banking. Banks provide the fiat on ramps and off ramps, regulatory compliance, customer relationships, and local currency conversion, while stablecoins handle the actual transfer of value. This partnership model allows banks to maintain their customer relationships and regulatory compliance role while capturing efficiency gains from blockchain technology.

Regulatory Arbitrage Concerns arise because stablecoins can sometimes operate with less regulatory burden than traditional correspondent banking. Banks face extensive anti money laundering requirements, capital requirements, and regulatory scrutiny. If stablecoins provide similar services with lighter regulation, they gain a competitive advantage that regulators are increasingly seeking to eliminate through tighter stablecoin oversight.

Settlement Risk and Liquidity Management change fundamentally with stablecoins. Traditional correspondent banking requires banks to maintain nostro accounts (accounts held in foreign banks) prefunded with liquidity. Stablecoins allow for near instant settlement without prefunding requirements, potentially freeing up billions in trapped liquidity that banks currently must maintain across the correspondent network.

The long term impact will likely involve correspondent banking evolving rather than disappearing. Banks will increasingly serve as regulated gateways between fiat currency and stablecoins, while stablecoins handle the actual transfer of value. The most vulnerable players are mid tier correspondent banks that primarily provide routing services without strong customer relationships or value added services.

5.2 How FATF Standards Apply to Stablecoins

The Financial Action Task Force (FATF) provides international standards for combating money laundering and terrorist financing, and these standards have been extended to cover stablecoins and other virtual assets.

The Travel Rule represents the most significant FATF requirement affecting stablecoins. Originally designed for traditional wire transfers, the Travel Rule requires that information about the originator and beneficiary of transfers above a certain threshold (typically $1,000) must travel with the transaction. For stablecoins, this means that Virtual Asset Service Providers (VASPs) such as exchanges, wallet providers, and payment processors must collect and transmit customer information when facilitating stablecoin transfers.
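Conceptually, a VASP's Travel Rule check reduces to the sketch below. The $1,000 threshold comes from the text; the field names and payload shape are assumptions, and real implementations exchange this data through protocols such as TRISA (discussed next):

TRAVEL_RULE_THRESHOLD_USD = 1_000

def travel_rule_payload(transfer):
    """Build the originator/beneficiary data that must accompany a transfer."""
    if transfer["amount_usd"] < TRAVEL_RULE_THRESHOLD_USD:
        return None   # below threshold: no Travel Rule data required
    return {
        "originator_name": transfer["sender_name"],
        "originator_account": transfer["sender_wallet"],
        "beneficiary_name": transfer["recipient_name"],
        "beneficiary_account": transfer["recipient_wallet"],
    }

transfer = {
    "amount_usd": 5_000,
    "sender_name": "Alice Example", "sender_wallet": "0xaaa...",
    "recipient_name": "Bob Example", "recipient_wallet": "0xbbb...",
}
print(travel_rule_payload(transfer))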

Implementing the Travel Rule on public blockchains creates technical challenges. While bank wire transfers pass through controlled systems where information can be attached, blockchain transactions are peer to peer and pseudonymous. The industry has developed solutions like the Travel Rule Information Sharing Architecture (TRISA) and other protocols that allow VASPs to exchange customer information securely off chain while the stablecoin transaction occurs on chain.

Know Your Customer (KYC) and Customer Due Diligence requirements apply to any entity that provides services for stablecoin transactions. Exchanges, wallet providers, and payment processors must verify customer identities, assess risk levels, and maintain records of transactions. This requirement creates a tension with the permissionless nature of blockchain technology, where anyone can hold a self hosted wallet and transact directly without intermediaries.

VASP Registration and Licensing is required in most jurisdictions following FATF guidance. Any business providing stablecoin custody, exchange, or transfer services must register with financial authorities, implement anti money laundering programs, and submit to regulatory oversight. This has created significant compliance burdens for smaller operators and driven consolidation toward larger, well capitalized platforms.

Stablecoin Issuers as VASPs are generally classified as Virtual Asset Service Providers under FATF standards, subjecting them to the full range of anti money laundering and counter terrorist financing obligations. This includes transaction monitoring, suspicious activity reporting, and sanctions screening. Major issuers like Circle and Paxos have built sophisticated compliance programs comparable to traditional financial institutions.

The Self Hosted Wallet Challenge represents a key friction point. FATF has expressed concern about transactions involving self hosted (non custodial) wallets where users control their own private keys without intermediary oversight. Some jurisdictions have proposed restricting or requiring enhanced due diligence for transactions between VASPs and self hosted wallets, though this remains controversial and difficult to enforce technically.

Cross Border Coordination is essential but challenging. Stablecoins operate globally and instantly, but regulatory enforcement is jurisdictional. FATF promotes information sharing between national financial intelligence units and encourages mutual legal assistance. However, gaps in enforcement across jurisdictions create opportunities for regulatory arbitrage, where bad actors operate from jurisdictions with weak oversight.

Sanctions Screening is mandatory for stablecoin service providers. They must screen transactions against lists of sanctioned individuals, entities, and countries maintained by organizations like the US Office of Foreign Assets Control (OFAC). Several stablecoin issuers have demonstrated the ability to freeze funds in wallets associated with sanctioned addresses, showing that even decentralized systems can implement centralized controls when required by law.
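Sanctions screening at a stablecoin service provider reduces, at its simplest, to checking both counterparties against a blocklist before processing a transfer. A minimal sketch; the list contents and the response are placeholders:

SANCTIONED_ADDRESSES = {"0xdeadbeef...", "0xblocked..."}   # e.g. derived from OFAC listings

def process_transfer(sender, recipient, amount):
    if sender in SANCTIONED_ADDRESSES or recipient in SANCTIONED_ADDRESSES:
        # In practice: block or freeze the funds and file the required report.
        return {"status": "blocked", "reason": "sanctioned counterparty"}
    return {"status": "processed", "amount": amount}

print(process_transfer("0xaaa...", "0xblocked...", 250_000))
print(process_transfer("0xaaa...", "0xbbb...", 250_000))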

Risk Based Approach is fundamental to FATF methodology. Service providers must assess the money laundering and terrorist financing risks specific to their operations and implement controls proportionate to those risks. For stablecoins, this means considering factors like transaction volumes, customer types, geographic exposure, and the underlying blockchain’s anonymity features.

Challenges in Implementation are significant. The pseudonymous nature of blockchain transactions makes it difficult to identify ultimate beneficial owners. The speed and global reach of stablecoin transfers compress the time window for intervention. The prevalence of decentralized exchanges and peer to peer transactions creates enforcement gaps. Some argue that excessive regulation will drive activity to unregulated platforms or privacy focused cryptocurrencies, making financial crime harder rather than easier to detect.

The FATF framework essentially attempts to impose traditional financial system controls on a technology designed to operate without intermediaries. While large, regulated stablecoin platforms can implement these requirements, the tension between regulatory compliance and the permissionless nature of blockchain technology remains unresolved and continues to drive both technological innovation and regulatory evolution.

6. Good Use Cases for Stablecoins

Despite the risks, stablecoins excel in several legitimate applications that offer clear advantages over traditional alternatives.

Cross border payments and remittances benefit enormously from stablecoins. Workers sending money home can avoid high fees and long delays, with transactions settling in minutes rather than days. Businesses conducting international trade can reduce costs and streamline operations significantly.

Treasury management for crypto native companies provides a practical use case. Cryptocurrency exchanges, blockchain projects, and Web3 companies need stable assets for operations while staying within the crypto ecosystem. Stablecoins let them hold working capital without exposure to crypto volatility.

Decentralized finance (DeFi) applications rely heavily on stablecoins. They enable lending and borrowing, yield farming, liquidity provision, and trading without the complications of volatile assets. Users can earn interest on stablecoin deposits or use them as collateral for loans.

Hedging against local currency instability makes stablecoins valuable in countries experiencing hyperinflation or currency crises. Citizens can preserve purchasing power by holding dollar backed stablecoins instead of rapidly devaluing local currencies.

Programmable payments and smart contracts benefit from stablecoins. Businesses can automate payments based on conditions (such as releasing funds when goods are received) or create subscription services, escrow arrangements, and other complex payment structures that execute automatically.

Ecommerce and online payments increasingly accept stablecoins as they combine the low fees of cryptocurrency with price stability. This is particularly valuable for digital goods, online services, and merchant payments where volatility would be problematic.

6.1 Companies Specializing in Banking Stablecoin Integration

Several companies have emerged as leaders in helping traditional banks launch and integrate stablecoin solutions into their existing infrastructure.

Paxos is a regulated blockchain infrastructure company that provides white label stablecoin solutions for financial institutions. They’ve partnered with major companies to issue stablecoins and offer compliance focused infrastructure that meets banking regulatory requirements. Paxos handles the technical complexity while allowing banks to maintain their customer relationships.

Circle offers comprehensive business account services and APIs that enable banks to integrate USD Coin (USDC) into their platforms. Their developer friendly tools and banking partnerships have made them a go to provider for institutions wanting to offer stablecoin services. Circle emphasizes regulatory compliance and transparency with regular reserve attestations.

Fireblocks provides institutional grade infrastructure for banks looking to offer digital asset services, including stablecoins. Their platform handles custody, treasury operations, and connectivity to various blockchains, allowing banks to offer stablecoin functionality without building everything from scratch.

Taurus specializes in digital asset infrastructure for banks, wealth managers, and other financial institutions in Europe. They provide technology for custody, tokenization, and trading that enables traditional financial institutions to offer stablecoin services within existing regulatory frameworks.

Sygnum operates as a Swiss digital asset bank and offers banking as a service solutions. They help other banks integrate digital assets including stablecoins while ensuring compliance with Swiss banking regulations. Their approach combines traditional banking security with blockchain innovation.

Ripple has expanded beyond its cryptocurrency focus to offer enterprise blockchain solutions for banks, including infrastructure for stablecoin issuance and cross border payment solutions. Their partnerships with financial institutions worldwide position them as a bridge between traditional banking and blockchain technology.

BBVA and JPMorgan have also developed proprietary solutions (JPM Coin for JPMorgan) that other institutions might license or use as models, though these are typically more focused on their own operations and select partners.

7. The Bid Offer Spread Challenge: Liquidity vs. True 1:1 Conversions

One of the hidden costs in stablecoin adoption that significantly impacts user economics is the bid offer spread applied during conversions between fiat currency and stablecoins. While stablecoins are designed to maintain a 1:1 peg with their underlying asset (typically the US dollar), the reality of converting between fiat and crypto introduces market dynamics that can erode this theoretical parity.

7.1 Understanding the Spread Problem

When users convert fiat currency to stablecoins or vice versa through most platforms, they encounter a bid offer spread: the difference between the buying price and the selling price. Even though USDC or USDT theoretically equals $1.00, a platform might effectively charge $1.008 to buy a stablecoin and offer only $0.992 when selling it back. This 0.8% to 1.5% spread represents a significant friction cost, particularly for businesses making frequent conversions or moving large amounts.
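The round-trip cost implied by those example prices can be computed directly (prices taken from the paragraph above; network fees and explicit platform fees are ignored):

buy_price = 1.008    # fiat paid per stablecoin when buying
sell_price = 0.992   # fiat received per stablecoin when selling

fiat_in = 100_000
coins_bought = fiat_in / buy_price
fiat_back = coins_bought * sell_price

round_trip_cost = fiat_in - fiat_back
print(f"Round-trip cost on ${fiat_in:,}: ${round_trip_cost:,.2f} "
      f"({round_trip_cost / fiat_in:.2%})")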

This spread exists because most platforms operate market making models where they must maintain liquidity on both sides of the transaction. Holding inventory of both fiat and stablecoins involves costs: capital tied up in reserves, exposure to brief depegging events, regulatory compliance overhead, and the operational expense of managing banking relationships for fiat on ramps and off ramps. Platforms traditionally recover these costs through the spread rather than explicit fees.

For cryptocurrency exchanges and most fintech platforms, the spread also serves as their primary revenue mechanism for stablecoin conversions. When a platform facilitates thousands or millions of conversions daily, even small spreads generate substantial income. The spread compensates for the risk that during periods of market stress, stablecoins might temporarily trade below their peg, leaving the platform holding depreciated assets.

7.2 The Impact on Users and Business Operations

The cumulative effect of bid offer spreads becomes particularly painful for certain use cases. Small and medium sized businesses operating across borders face multiple conversion points: exchanging local currency to USD, converting USD to stablecoins for cross border transfer, then converting stablecoins back to USD or local currency at the destination. Each conversion compounds the cost, potentially consuming 2% to 4% of the transaction value when combined with traditional banking fees.

For businesses using stablecoins as working capital (converting payroll, managing treasury operations, or settling international invoices), the spread can eliminate much of the cost advantage that stablecoins are supposed to provide over traditional correspondent banking. A company converting $100,000 might effectively pay $1,500 in spread costs on a round trip conversion, comparable to the traditional wire transfer fees that stablecoins aimed to disrupt.

Individual users in countries with unstable currencies face similar challenges. While holding USDT or USDC protects against local currency devaluation, the cost of frequently moving between local currency and stablecoins can be prohibitive. The spread becomes a “tax” on financial stability that disproportionately affects those who can least afford it.

7.3 Revolut’s 1:1 Model: Internalizing the Cost

Revolut’s recent introduction of true 1:1 conversions between USD and stablecoins (USDC and USDT) represents a fundamentally different approach to solving the spread problem. Rather than passing market making costs to users, Revolut absorbs the spread internally, guaranteeing that $1.00 in fiat equals exactly 1.00 stablecoin units in both directions, with no hidden markups.

This model is economically viable for Revolut because of several structural advantages. First, as a neobank with 65 million users and existing banking infrastructure, Revolut already maintains substantial fiat currency liquidity and doesn't need to rely on external banking partners for every stablecoin conversion. Second, the company generates revenue from other services within its ecosystem (subscription fees, interchange fees from card spending, interest on deposits), allowing it to treat stablecoin conversions as a loss leader or break even feature that enhances customer retention and platform stickiness.

Third, by setting a monthly limit of approximately $578,000 per customer, Revolut manages its risk exposure while still accommodating the vast majority of retail and small business use cases. This prevents arbitrage traders from exploiting the zero spread model to make risk free profits by moving large volumes between Revolut and other platforms where spreads exist.

Revolut essentially bets that the value of removing friction from fiat-to-crypto conversions (thereby making stablecoins genuinely useful as working capital rather than speculative assets) will drive sufficient user engagement and platform growth to justify the cost of eliminating spreads. For users, this transforms the economics of stablecoin usage, particularly for frequent converters or those operating in high currency volatility environments.

7.4 Why Not Everyone Can Offer 1:1 Conversions

The challenge for smaller platforms and pure cryptocurrency exchanges is that they lack Revolut's structural advantages. A standalone crypto exchange without banking licenses and integrated fiat services must partner with banks for fiat on ramps, pay fees to those partners, maintain separate liquidity pools, and manage the regulatory complexity of operating in multiple jurisdictions. These costs don't disappear simply because users want better rates; they must be recovered somehow.

Additionally, maintaining tight spreads or true 1:1 conversions requires deep liquidity and sophisticated risk management. When thousands of users simultaneously want to exit stablecoins during market stress, a platform must have sufficient reserves to honor redemptions instantly without moving the price. Smaller platforms operating with thin liquidity buffers cannot safely eliminate spreads without risking insolvency during volatile periods.

The market structure for stablecoins also presents challenges. While stablecoins theoretically maintain 1:1 pegs, secondary market prices on decentralized exchanges and between different platforms can vary by small amounts. A platform offering guaranteed 1:1 conversions must either hold sufficient reserves to absorb these variations or accept that arbitrage traders will exploit any price discrepancies, potentially draining liquidity.

7.5 The Competitive Implications

Revolut’s move to zero spread stablecoin conversions could trigger a competitive dynamic in the fintech space, similar to how its original zero fee foreign exchange offering disrupted traditional currency conversion. Established players like Coinbase, Kraken, and other major exchanges will face pressure to reduce their spreads or explain why their costs remain higher.

For traditional banks contemplating stablecoin integration, the spread question becomes strategic. Banks could follow the Revolut model, absorbing spread costs to drive adoption and maintain customer relationships in an increasingly crypto integrated financial system. Alternatively, they might maintain spreads but offer other value added services that justify the cost, such as enhanced compliance, insurance on holdings, or integration with business treasury management systems.

The long term outcome may be market segmentation. Large, integrated fintech platforms with diverse revenue streams can offer true 1:1 conversions as a competitive advantage. Smaller, specialized platforms will continue operating with spreads but may differentiate through speed, blockchain coverage, or serving specific niches like high volume traders who value depth of liquidity over tight spreads.

For stablecoin issuers like Circle and Tether, the spread dynamics affect their business indirectly. Wider spreads on third party platforms create friction that slows stablecoin adoption, reducing the total assets under management that generate interest income for issuers. Partnerships with platforms offering tighter spreads or true 1:1 conversions could accelerate growth, even if those partnerships involve revenue sharing or other commercial arrangements.

Ultimately, the bid offer spread challenge highlights a fundamental tension in stablecoin economics: the gap between the theoretical promise of 1:1 value stability and the practical costs of maintaining liquidity, managing risk, and operating the infrastructure that connects fiat currency to blockchain based assets. Platforms that can bridge this gap efficiently, whether through scale, integration, or innovative business models, will have significant competitive advantages as stablecoins move from crypto native use cases into mainstream financial infrastructure.

8. Conclusion

Stablecoins represent a significant innovation in digital finance, offering the benefits of cryptocurrency without extreme volatility. They’ve found genuine utility in payments, remittances, and decentralized finance while generating substantial revenue for issuers through interest on reserves. However, they also carry real risks around reserve transparency, regulatory uncertainty, and potential fraud that users and institutions must carefully consider.

The regulatory landscape is rapidly evolving, with authorities worldwide moving toward treating stablecoins as systemically important financial infrastructure requiring bank like oversight. FATF standards impose traditional anti money laundering requirements on stablecoin service providers, creating compliance obligations comparable to traditional finance. Meanwhile, sophisticated monitoring tools have emerged to track flows, detect illicit activity, and ensure reserve adequacy.

For traditional banks, stablecoins represent both a competitive threat to correspondent banking models and an opportunity to modernize payment infrastructure. Rather than being displaced entirely, banks are increasingly positioning themselves as regulated gateways between fiat currency and stablecoins, maintaining customer relationships and compliance functions while leveraging blockchain efficiency.

For banks considering stablecoin integration, working with established infrastructure providers can mitigate technical and compliance challenges. The key is choosing use cases where stablecoins offer clear advantages, particularly in cross border payments and treasury management, while implementing robust risk management, transaction monitoring, and ensuring regulatory compliance with both traditional financial regulations and emerging crypto specific frameworks.

As the regulatory landscape evolves and technology matures, stablecoins are likely to become increasingly integrated into mainstream financial services. Their success will depend on maintaining trust through transparency, security, and regulatory cooperation while continuing to deliver value that traditional financial rails cannot match. The future likely involves a hybrid model where stablecoins and traditional banking coexist, each playing to their respective strengths in a more efficient, global financial system.

Technology Culture: The Sinking Car Syndrome

This is (hopefully) a short blog that will give you back a small piece of your life…

In technology, we rightly spend hours poring over failure so that we might understand it, fix it and avoid it in the future. This seems a reasonable approach: learn from your mistakes, understand failure, plan your remediation, etc. But is it possible that there are some instances where doing this is inappropriate? To answer this simple question, let me give you an analogy…

You decide that you want to travel from London to New York. Sounds reasonable so far…. But you decide you want to go by car! The reasoning for this is as follows:

  1. Cars are “tried and tested”.
  2. We have an existing deal with multiple car suppliers and we get great discounts.
  3. The key decision maker is a car enthusiast.
  4. The incumbent team understand cars and can support this choice.
  5. Cars are what we have available right now and we want to start execution tomorrow, so let's just make it work.

You first try a small hatchback and only manage to get around 3m off the coast of Scotland. Next up you figure you will get a more durable car, so you get a truck – but sadly this only makes 2m headway from the beach. You report back to the team and they send you a brand new Porsche; this time you give yourself an even bigger run up at the sea and you manage to make a whopping 4m before the car sinks. The team now analyse all the data to figure out why each car sank and what they can do to make this better. The team continue to experiment with various cars and progress is observed over time. After 6 months the team has managed to travel 12m towards their goal of driving to New York. The main reason for the progress is that the sunken cars are starting to form a land bridge. The leadership have now spent over 200m USD on this venture and don't feel they can pivot, so they start to brainstorm how to make this work.

Maybe wind the windows up a little tighter, maybe the cars need more underseal, maybe over inflate the tyres or maybe we simply need way more cars? All of these may or may not make a difference. But here’s the challenge: you made a bad engineering choice and anything you do will just be a variant of bad. It will never be good and you cannot win with your choice.

The above obviously sounds a bit daft (and it is), but the point is that I am often called in after downtime to review an architecture, find a root cause and suggest remediation. But what is not always understood is that bad technology choices can be about as likely to succeed as driving from London to New York. Sometimes you simply need to look at alternatives; you need a boat or a plane. The product architecture can be terminal: it won't ever be what you want it to be, and no amount of analysis or spend will change this. The trick is to accept the brutal reality of your situation and move your focus towards choosing the technology that you need to transition to. Next, try to figure out how quickly you can do this pivot…

How to Install Apps From Anywhere on Apple Mac

Previously, Macs allowed you to install software from anywhere. Now you will see the error message "NMAPxx.mpkg cannot be opened because it is from an unidentified developer". If you want to fix this and enable apps to be installed from anywhere, you will need to run the following command:

sudo spctl --master-disable

Once you have run the command, you should see the "Anywhere" option in System Preferences > Security & Privacy.

Part 2: Increasing your Cloud consumption (the sane way)

Introduction

This article follows on from the “Cloud Migrations Crusade” blog post…

A single-tenancy datacenter is a fixed-scale, fixed-price service on a closed network. The costs of the resources in the datacenter are divided up and shared out to the enterprise constituents on a semi-random basis. If anyone uses fewer resources than forecast, this generates waste which is shared back to the enterprise. If there is more demand than forecast, it will generate service degradation, panic or an outage! This model is clearly fragile and doesn't respond quickly to change; it is also wasteful, as it requires a level of overprovisioning based on forecast consumption (otherwise you will experience delays in projects, service degradation or reduced resilience).

Cloud, on the other hand, is a multi-tenanted, on-demand software service which you pay for as you use it. But surely having multiple tenants running on the same fixed capacity actually increases the risks, and just because it's in the cloud doesn't mean you can get away without over-provisioning – so who sits with the over-provisioned costs? The cloud providers have to build this into their rates. So cloud providers have to deal with a balance sheet of fixed capacity shared amongst customers running on-demand infrastructure. They do this with very clever forecasting, very short provisioning cycles, and by asking their customers for forecasts and then offering discounts for pre-commits.

Anything that moves you back towards managing resource levels and forecasting will destroy a huge portion of the value of moving to the cloud in the first place. For example, if you have ever been to a re:Invent you will be floored by the rate of innovation and also by how easy it is to absorb these new innovative products. But wait – you just signed a 5yr cost commit and now you learn about Aurora's new serverless database model. You realise that you can save millions of dollars; but you have to wait for your 5yr commits to expire before you adopt, or maybe start mining bitcoin with all your excess commits! This is anti-innovation and anti-customer.

What's even worse is that pre-commits are typically signed up front on day 1 – this is total madness! At the point where you know nothing about your brave new world, you use the old costs as a proxy to predict the new costs so that you can squeeze a lousy 5% saving at the risk of 100% of the commit size! What you will start to learn is that your cloud success is NOT based on the commercial contract that you sign with your cloud provider; it's actually based on the quality of the engineering talent that your organisation is able to attract. Cloud is an IP war – it's not a legal/sourcing war. Allow yourself to learn, don't box yourself in on day 1. When you sign the pre-commit you will notice your first-year utilisation projections are actually tiny and therefore the savings are small. So what's the point of signing so early on, when the risk is at a maximum and the gains are at a minimum? When you sign this deal you are essentially turning the cloud into a "financial data center" – you have destroyed the cloud before you even started!

A Lesson from the Field: Solving a Hadoop Compute Demand Spike

We moved 7,000 cores of burst compute to AWS to solve a capacity issue on premise. That's expensive, so let's "fix the costs"! We could go and sign an RI (reserved instance), play with spot, buy savings plans or even beg/barter for some EDP relief. But instead we plugged the service usage into QuickSight and analysed the queries. We found one query was using 60 percent of the entire bank's compute! Nobody confessed to owning the query, so we just disabled it (if you need a reason for your change management, describe the change as "disabling a financial DDoS"). We quickly found the service owner and explained that running a table scan across billions of rows to return a report with just last month's data is not a good idea. We also explained that if they didn't fix this we would start billing them in 6 weeks' time (a few million dollars). The team deployed a fix and now we run the bank's big data stack at half the cost – just by tuning one query!

So the point of the above is that there is no substitute for engineering excellence. You have to understand and engineer the cloud to win; you cannot contract yourself into the cloud. The more contracts you sign, the more failures you will experience. This leads me to step 2…

Step 2: Training, Training, Training

Start the biggest training campaign you possibly can – make this your crusade. Train everyone: business, finance, security, infrastructure – you name it, you train it. Don't limit what anyone can train on; training is cheap, so feast as much as you can. Look at Udemy, A Cloud Guru, YouTube, Whizlabs, etc. If you get this wrong then you will find your organisation fills up with expensive consultants and bespoke migration products that you don't need and can easily do yourself, via open source or with your cloud provider's toolsets. In fact I would go one step further: if you're not prepared to learn about the cloud, you're not ready to go there.

Step 3: The OS Build

When you do start your cloud migration and begin to review your base OS images, go right back to the very beginning and remove every single product from all of these base builds. Look at what you can get out of the box from your cloud provider and really push yourself hard on what you really need versus what is merely nice to have. But the trick is that to get the real benefit from a cloud migration, you have to start by making your builds as "naked" as possible. Nothing should move into the base build without a good reason. Ownership and reporting lines are not a good enough reason for someone's special "tool" to make it into the build. This process, if done correctly, should deliver between 20% and 40% of your cloud migration savings. Do this badly and your costs, complexity and support will all head in the wrong direction.

Security HAS to be a first class citizen of your new world. In most organizations this will likely make for some awkward cultural collisions (control and ownership vs agility) and some difficult dialogs. The cloud, by definition, should be liberating – so how do you secure it without creating a “cloud bunker” that nobody can actually use? More on this later… 🙂

Step 4: Hybrid Networking

For any organisation with data centers, make no mistake: if you get this wrong, it's over before it starts.

The Least Privileged Lie

In technology, there is a tendency to solve a problem badly using gross simplification, come up with a catchy one-liner and then broadcast it as doctrine or a principle. Nothing ticks more boxes in this regard than the principle of least privileges. The ensuing enterprise-scale deadlocks created by a crippling implementation of least privileges are almost certainly lost on its evangelists. This blog will try to put an end to the slavish efforts of many security teams that are trying to ration out micro-permissions in the hope that the digital revolution can fit into some break-glass approval process.

What is this "Least Privileged" thing? Why does it exist? What are the alternatives? Wikipedia gives you a good overview of this here. The first line contains an obvious and glaring issue: "The principle means giving a user account or process only those privileges which are essential to perform its intended function". Here the principle is being applied equally to users and processes/code. The principle also states that only essential privileges should be given. What this principle is trying to say is that we should treat human beings and code as the same thing, and that we should only give humans "essential" permissions. Firstly, who on earth decides where the bar for essential sits, and how do they ascertain what is and what is not essential? Do you really need to use storage? Do you really need an API? If I give you an API, do you need Puts and Gets?

Human beings are NOT deterministic. If I have a team of humans that can operate under the principle of least privileges then I don't need them in the first place; I can simply replace them with some AI/RPA. Imagine the brutal pain of a break-glass activity every time someone needed to do something "unexpected". "Hi boss, I need to use the bathroom on the 1st floor - can you approve this? <Gulp> Boss, you took too long… I no longer need your approval!". Applying least privileges to code would seem to make some sense, BUT only if you never update the code; and if you do update the code, you need to make sure you have 100% test coverage.

So why did some bright spark want to duct-tape the world to such a brittle, pain-yielding principle? At the heart of this are three issues: Identity, Immutability and Trust. If there are other ways to solve these issues then we don't need the pain and risk of trying to implement something that will never actually work, that creates friction and, critically, creates a false sense of security. Least privileges will never save anyone; you will just be told that if you could have performed this security miracle then you would have been fine. But you cannot, and so you are not.

What's interesting to me is how widely the least privileged lie is ignored. For example, just think about how we implement user access. If we truly believed in least privileges then every user would have a unique set of privileges assigned to them. Instead, because we acknowledge this is burdensome, we approximate the privileges that a user will need using policies which we attach to groups. The moment we add a user to one of these groups, we are approximating their required privileges and starting to become overly permissive.

Let's be clear with each other: anyone trying to implement least privileges is living a lie. The extent of the lie normally only becomes clear after the event. So this blog post is designed to re-point energy towards sustainable alternatives that work, and additionally to remove the need for the myriad micro-permissive handbrakes (that routinely get switched off to debug outages and issues).

Who are you?

This is the biggest issue and still remains the largest risk in technology today. If I don't know who you are then I really, really want to limit what you can do. Experiencing a root/superuser account takeover is a doomsday scenario for any organisation. So let's limit the blast zone of these accounts, right?

This applies equally to code and humans. For code, this problem was solved a long time ago, and if you look

Is this really my code?

The DAO Ethereum Recursion Bug: El Gordo!

If you found my article, I would consider it a reasonable assumption that you already understand the importance of this

Brief Introduction

The splitDAO function was created in order for some members of the DAO to separate themselves and their tokens from the main DAO, creating a new ‘child DAO’, for example in case they found themselves in disagreement with the majority.

The child DAO goes through the same 27-day creation period as the original DAO. Prerequisite steps in order to call the splitDAO function are the creation of a new proposal on the original DAO and the designation of a curator for the child DAO.

The child DAO created by the attacker has been referred to as the 'darkDAO' on Reddit and the name seems to have stuck. The proposal and split process necessary for the attack was initiated at least 7 days prior to the incident.

The first exploit alone would have been economically unviable (the attacker would have needed to put up 1/20th of the stolen amount upfront in the original DAO), and the second alone would have been time intensive, because normally only one splitDAO call can be made per transaction.

One way to see this is that the attacker performed a fraud, or a theft. Another way, more interesting for its implications, is that the attacker took the contract literally and followed the rule of code.

In their (allegedly) own words:

@stevecalifornia on Hacker News – https://news.ycombinator.com/item?id=11926150

“DAO, I closely read your contract and agreed to execute the clause where I can withdraw eth repeatedly and only be charged for my initial withdraw.

Thank you for the $70 million. Let me know if you draw up any other contracts I can participate in.

Regards, 0x304a554a310c7e546dfe434669c62820b7d83490″

The HACK

An “attacker” managed to combine two “exploits” in the DAO.

1) The attacker called the splitDAO function recursively (up to 20 times).

2) To make the attack more efficient, the attacker also managed to replicate the exploit from the same two addresses, using the same tokens over and over again (approx. 250 times).

Quote from “Luigi Renna”: To put this instance in a natural language perspective the Attacker requested the DAO “I want to withdraw all my tokens, and before that I want to withdraw all my tokens, and before that I want to… etc.” And be charged only once.

The Code That was Hacked

Below is the now infamous SplitDAO function in all its glory:

function splitDAO(
    uint _proposalID,
    address _newCurator
) noEther onlyTokenholders returns (bool _success) {
    ...
    // Get the ether to be moved. Notice that this is done first!
    uint fundsToBeMoved =
        (balances[msg.sender] * p.splitData[0].splitBalance) /
        p.splitData[0].totalSupply;
    // This is the line the attacker wants to run more than once:
    if (p.splitData[0].newDAO.createTokenProxy.value(fundsToBeMoved)(msg.sender) == false)
        throw;
    ...
    // Burn DAO Tokens
    Transfer(msg.sender, 0, balances[msg.sender]);
    withdrawRewardFor(msg.sender); // be nice, and get his rewards
    // Notice that the preceding external call runs before the state updates below
    totalSupply -= balances[msg.sender]; // THIS IS DONE LAST
    balances[msg.sender] = 0;            // THIS IS DONE LAST TOO
    paidOut[msg.sender] = 0;
    return true;
}

(Source: http://hackingdistributed.com/2016/06/18/analysis-of-the-dao-exploit/)

The basic idea behind the hack was:

1) Propose a split.
2) Execute the split.
3) When the DAO goes to withdraw your reward, call the function to execute a split before that withdrawal finishes (i.e. recursion).
4) The function will start running again without updating your balance!!!
5) The line we marked above as "This is the line the attacker wants to run more than once" will run more than once!
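The ordering bug can be reproduced in a few lines of Python: a mock DAO that pays out before updating balances, and a malicious payout callback that re-enters the split. This is a language-agnostic simulation of the control flow, not Ethereum/Solidity semantics; it also shows the one-line fix (zeroing the balance before the external call) discussed below:

class MockDAO:
    """Toy model of a contract that pays out before updating state."""
    def __init__(self, update_balance_first=False):
        self.balances = {"attacker": 100}
        self.update_balance_first = update_balance_first

    def split(self, member, payout_callback):
        amount = self.balances[member]
        if amount == 0:
            return 0
        if self.update_balance_first:
            self.balances[member] = 0   # the one-line fix: zero the balance before paying out
        payout_callback(self)           # "external call": the attacker can re-enter split() here
        self.balances[member] = 0       # the vulnerable ordering only zeroes the balance now
        return amount

stolen = 0
depth = 0

def reenter(dao):
    """Attacker's payout callback: call split() again before the first call finishes."""
    global stolen, depth
    if depth < 3:                       # re-enter a few times, then let the calls unwind
        depth += 1
        stolen += dao.split("attacker", reenter)

for fixed in (False, True):
    stolen, depth = 0, 0
    dao = MockDAO(update_balance_first=fixed)
    stolen += dao.split("attacker", reenter)
    label = "patched ordering " if fixed else "vulnerable ordering"
    print(f"{label}: attacker extracts {stolen} against a balance of 100")

With the vulnerable ordering the attacker extracts 400 from a balance of 100; with the patched ordering the re-entrant calls see a zero balance and extract only the legitimate 100.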

Thoughts on the Hack

The code is easy to fix: you can simply zero the balances immediately after calculating fundsToBeMoved. But this bug is not the real issue for me; the main problem can be split into two areas:

  1. Ethereum's contract language is Turing complete, with a stack/heap, exceptions and recursion. This means that even with the best intentions, we will create a vast number of routes through the code base that allow similar recursion and other bugs to be exposed. The only real barrier is how much ether it costs to expose the bugs.
  2. There is no Escape Hatch for “bad” contracts. Emin Gun Sirer wrote a great article on this here: Escape hatches for smart contracts

Whilst a lot of focus is going into 2), I believe that more focus needs to be put into making the language "safer". For me, this would involve basic changes like:

  1. Blocking recursion. Blocking recursion at compile and runtime would be a big step forward. I struggle to see a downside to this, as in general recursion doesn’t scale, is slow and can always be replaced with iteration. Replace Recursion with Iteration
  2. Sandbox parameterisation. Currently there are no sandbox parameters to sit the contracts in. This means the economic impact of these contracts is more empirical than deterministic. If you could abstract core parameters of the contract (like the amount, wallet addresses, etc.) and place these key values in an immutable wrapper, then unwanted outcomes would be harder to achieve.
  3. Transactionality. Currently there is no obvious mechanism to perform economic functions wrapped in an ATOMIC transaction. This ability would mean that economic values could be copied from the heap to the stack and moved as desired; but critically, recursive calls could not revisit the heap to essentially "duplicate" the value. It would also mean that if a benign fault occurred, the execution of the contract would be idempotent. Obviously blocking recursive calls goes some way towards this; but adding transactionality would be a material improvement.

I have other ideas including segregation of duties using API layers between core and contract code – but am still getting my head around Ethereum’s architecture.