A Deep Dive into Java 25 Virtual Threads: From Thread Per Request to Lightweight Concurrency

1. Introduction

Java’s concurrency model has undergone a revolutionary transformation with the introduction of Virtual Threads in Java 19 (as a preview feature) and their stabilization in Java 21. With Java 25, virtual threads have reached new levels of maturity by addressing critical pinning issues that previously limited their effectiveness. This article explores the evolution of threading models in Java, the problems virtual threads solve, and how Java 25 has refined this powerful concurrency primitive.

Virtual threads represent a paradigm shift in how we write concurrent Java applications. They enable the traditional thread per request model to scale to millions of concurrent operations without the resource overhead that plagued platform threads. Understanding virtual threads is essential for modern Java developers building high throughput, scalable applications.

2. The Problem with Traditional Platform Threads

2.1. Platform Thread Architecture

Platform threads (also called OS threads or kernel threads) are the traditional concurrency mechanism in Java. Each Java thread is a thin wrapper around an operating system thread, which looks like:

2.2. Resource Constraints

Platform threads are expensive resources:

  1. Memory Overhead: Each platform thread requires a stack (typically 1MB by default), which means 1,000 threads consume approximately 1GB of memory just for stacks.
  2. Context Switching Cost: The OS scheduler must perform context switches between threads, saving and restoring CPU registers, memory mappings, and other state.
  3. Limited Scalability: Creating tens of thousands of platform threads leads to:
    • Memory exhaustion
    • Increased context switching overhead
    • CPU cache thrashing
    • Scheduler contention

2.3. The Thread Pool Pattern and Its Limitations

To manage these constraints, developers traditionally use thread pools:

ExecutorService executor = Executors.newFixedThreadPool(200);

// Submit tasks to the pool
for (int i = 0; i < 10000; i++) {
    executor.submit(() -> {
        // Perform I/O operation
        String data = fetchDataFromDatabase();
        processData(data);
    });
}

Problems with Thread Pools:

  1. Task Queuing: With limited threads, tasks queue up waiting for available threads
  2. Resource Underutilization: Threads blocked on I/O waste CPU time
  3. Complexity: Tuning pool sizes becomes an art form
  4. Poor Observability: Stack traces don’t reflect actual application structure
Thread Pool (Size: 4)
┌──────┬──────┬──────┬──────┐
│Thread│Thread│Thread│Thread│
│  1   │  2   │  3   │  4   │
│BLOCK │BLOCK │BLOCK │BLOCK │
└──────┴──────┴──────┴──────┘
         ↑
    All threads blocked on I/O
    
Task Queue: [Task5, Task6, Task7, ..., Task1000]
              ↑
         Waiting for available thread

2.4. The Reactive Programming Alternative

To avoid blocking threads, reactive programming emerged:

Mono.fromCallable(() -> fetchDataFromDatabase())
    .flatMap(data -> processData(data))
    .flatMap(result -> saveToDatabase(result))
    .subscribe(
        success -> log.info("Completed"),
        error -> log.error("Failed", error)
    );

Reactive Programming Challenges:

  1. Steep Learning Curve: Requires understanding operators like flatMap, zip, merge
  2. Difficult Debugging: Stack traces are fragmented and hard to follow
  3. Imperative to Declarative: Forces a complete mental model shift
  4. Library Compatibility: Not all libraries support reactive patterns
  5. Error Handling: Becomes significantly more complex

3. Enter Virtual Threads: Lightweight Concurrency

3.1. The Virtual Thread Concept

Virtual threads are lightweight threads managed by the JVM rather than the operating system. They enable the thread per task programming model to scale:

Key Characteristics:

  1. Cheap to Create: Creating a virtual thread takes microseconds and minimal memory
  2. JVM Managed: The JVM scheduler multiplexes virtual threads onto a small pool of OS threads (carrier threads)
  3. Blocking is Fine: When a virtual thread blocks on I/O, the JVM unmounts it from its carrier thread
  4. Millions Scale: You can create millions of virtual threads without exhausting memory

3.2. How Virtual Threads Work Under the Hood

When a virtual thread performs a blocking operation:

Step 1: Virtual Thread Running
┌──────────────┐
│Virtual Thread│
│   (Running)  │
└──────┬───────┘
       │ Mounted on
       ↓
┌──────────────┐
│Carrier Thread│
│ (OS Thread)  │
└──────────────┘

Step 2: Blocking Operation Detected
┌──────────────┐
│Virtual Thread│
│  (Blocked)   │
└──────────────┘
       ↓
   Unmounted
       
┌──────────────┐
│Carrier Thread│ ← Now free for other virtual threads
│   (Free)     │
└──────────────┘

Step 3: Operation Completes
┌──────────────┐
│Virtual Thread│
│   (Ready)    │
└──────┬───────┘
       │ Remounted on
       ↓
┌──────────────┐
│Carrier Thread│
│ (OS Thread)  │
└──────────────┘

3.3. The Continuation Mechanism

Virtual threads use a mechanism called continuations. Below is an explanation of the continuation mechanism:

  • A virtual thread begins executing on some carrier (an OS thread under the hood), as though it were a normal thread.
  • When it hits a blocking operation (I/O, sleep, etc), the runtime arranges to save where it is (its stack frames, locals) into a continuation object (or the equivalent mechanism).
  • That carrier thread is released (so it can run other virtual threads) while the virtual thread is waiting.
  • Later when the blocking completes / the virtual thread is ready to resume, the continuation is scheduled on some carrier thread, its state restored and execution continues.

A simplified conceptual model looks like this:

// Simplified conceptual representation
class VirtualThread {
    Continuation continuation;
    Object mountedCarrierThread;
    
    void park() {
        // Save execution state
        continuation.yield();
        // Unmount from carrier thread
        mountedCarrierThread = null;
    }
    
    void unpark() {
        // Find available carrier thread
        mountedCarrierThread = getAvailableCarrier();
        // Restore execution state
        continuation.run();
    }
}

4. Creating and Using Virtual Threads

4.1. Basic Virtual Thread Creation

// Method 1: Using Thread.ofVirtual()
Thread vThread = Thread.ofVirtual().start(() -> {
    System.out.println("Hello from virtual thread: " + 
                       Thread.currentThread());
});
vThread.join();

// Method 2: Using Thread.startVirtualThread()
Thread.startVirtualThread(() -> {
    System.out.println("Another virtual thread: " + 
                       Thread.currentThread());
});

// Method 3: Using ExecutorService
try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
    executor.submit(() -> {
        System.out.println("Virtual thread from executor: " + 
                          Thread.currentThread());
    });
}

4.2. Virtual Thread Properties

Thread vThread = Thread.ofVirtual()
    .name("my-virtual-thread")
    .unstarted(() -> {
        System.out.println("Thread name: " + Thread.currentThread().getName());
        System.out.println("Is virtual: " + Thread.currentThread().isVirtual());
    });

vThread.start();
vThread.join();

// Output:
// Thread name: my-virtual-thread
// Is virtual: true

4.3. Practical Example: HTTP Server

This example shows how virtual threads simplify server design by allowing each incoming HTTP request to be handled in its own virtual thread, just like the classic thread-per-request model—only now it scales.

The code below creates an executor that launches a new virtual thread for every request. Inside that thread, the handler performs blocking I/O (reading the request and writing the response) in a natural, linear style. There’s no need for callbacks, reactive chains, or custom thread pools, because blocking no longer ties up an OS thread.

Each request runs independently, errors are isolated, and the system can support a very large number of concurrent connections thanks to the low cost of virtual threads.

The new virtual thread version is dramatically simpler because it uses plain blocking code without threadpool tuning, callback handlers, or complex asynchronous frameworks.

// Traditional Platform Thread Approach
public class PlatformThreadServer {
    private static final ExecutorService executor = 
        Executors.newFixedThreadPool(200);
    
    public void handleRequest(HttpRequest request) {
        executor.submit(() -> {
            try {
                // Simulate database query (blocking I/O)
                Thread.sleep(100);
                String data = queryDatabase(request);
                
                // Simulate external API call (blocking I/O)
                Thread.sleep(50);
                String apiResult = callExternalApi(data);
                
                sendResponse(apiResult);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
    }
}

// Virtual Thread Approach
public class VirtualThreadServer {
    private static final ExecutorService executor = 
        Executors.newVirtualThreadPerTaskExecutor();
    
    public void handleRequest(HttpRequest request) {
        executor.submit(() -> {
            try {
                // Same blocking code, but now scalable!
                Thread.sleep(100);
                String data = queryDatabase(request);
                
                Thread.sleep(50);
                String apiResult = callExternalApi(data);
                
                sendResponse(apiResult);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
    }
}

Performance Comparison:

Platform Thread Server (200 thread pool):
- Max concurrent requests: ~200
- Memory overhead: ~200MB (thread stacks)
- Throughput: Limited by pool size

Virtual Thread Server:
- Max concurrent requests: ~1,000,000+
- Memory overhead: ~1MB per 1000 threads
- Throughput: Limited by available I/O resources

4.4. Structured Concurrency

Traditional Java concurrency makes it easy to start threads but hard to control their lifecycle. Tasks can outlive the method that created them, failures get lost, and background work becomes difficult to reason about.

Structured concurrency fixes this by enforcing a simple rule:

tasks started in a scope must finish before the scope exits.

This gives you predictable ownership, automatic cleanup, and reliable error propagation.

With virtual threads, this model finally becomes practical. Virtual threads are cheap to create and safe to block, so you can express concurrent logic using straightforward, synchronous-looking code—without thread pools or callbacks.

Example

try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {

    var f1 = scope.fork(() -> fetchUser(id));
    var f2 = scope.fork(() -> fetchOrders(id));

    scope.join();
    scope.throwIfFailed();

    return new UserData(f1.get(), f2.get());
}

All tasks run concurrently, but the structure remains clear:

  • the parent waits for all children,
  • failures propagate correctly,
  • and no threads leak beyond the scope.

In short: virtual threads provide the scalability; structured concurrency provides the clarity. Together they make concurrent Java code simple, safe, and predictable.

5. Issues with Virtual Threads Before Java 25

5.1. The Pinning Problem

The most significant issue with virtual threads before Java 25 was “pinning” – situations where a virtual thread could not unmount from its carrier thread when blocking, defeating the purpose of virtual threads.

Pinning occurred in two main scenarios:

5.1.1. Synchronized Blocks

public class PinningExample {
    private final Object lock = new Object();
    
    public void problematicMethod() {
        synchronized (lock) {  // PINNING OCCURS HERE
            try {
                // This sleep pins the carrier thread
                Thread.sleep(1000);
                
                // I/O operations also pin
                String data = blockingDatabaseCall();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }
}

What happens during pinning:

Before Pinning:
┌─────────────┐
│Virtual      │
│Thread A     │
└─────┬───────┘
      │ Mounted
      ↓
┌─────────────┐
│Carrier      │
│Thread 1     │
└─────────────┘

During Synchronized Block (Pinned):
┌─────────────┐
│Virtual      │
│Thread A     │ ← Cannot unmount due to synchronized
│(BLOCKED)    │
└─────┬───────┘
      │ PINNED
      ↓
┌─────────────┐
│Carrier      │ ← Wasted, cannot be used by other 
│Thread 1     │   virtual threads
│(BLOCKED)    │
└─────────────┘

Other Virtual Threads Queue Up:
[VThread B] [VThread C] [VThread D] ...
      ↓
Waiting for available carrier threads

5.1.2. Native Methods and Foreign Functions

public class NativePinningExample {
    
    public void callNativeCode() {
        // JNI calls pin the virtual thread
        nativeMethod();  // PINNING
    }
    
    private native void nativeMethod();
    
    public void foreignFunctionCall() {
        // Foreign function calls (Project Panama) also pin
        try (Arena arena = Arena.ofConfined()) {
            MemorySegment segment = arena.allocate(100);
            // Operations here may pin
        }
    }
}

5.2. Monitoring Pinning Events

Before Java 25, you could detect pinning with JVM flags:

java -Djdk.tracePinnedThreads=full MyApplication

Output when pinning occurs:

Thread[#23,ForkJoinPool-1-worker-1,5,CarrierThreads]
    java.base/java.lang.VirtualThread$VThreadContinuation.onPinned
    java.base/java.lang.VirtualThread.parkNanos
    java.base/java.lang.System$2.parkVirtualThread
    java.base/jdk.internal.misc.VirtualThreads.park
    java.base/java.lang.Thread.sleepNanos
    com.example.MyClass.problematicMethod(MyClass.java:42) <== monitors:1

5.3. Workarounds Before Java 25

Developers had to manually refactor code to avoid pinning:

// BAD: Uses synchronized (causes pinning)
public class BadExample {
    private final Object lock = new Object();
    
    public void processRequest() {
        synchronized (lock) {
            blockingOperation();  // PINNING
        }
    }
}

// GOOD: Uses ReentrantLock (no pinning)
public class GoodExample {
    private final ReentrantLock lock = new ReentrantLock();
    
    public void processRequest() {
        lock.lock();
        try {
            blockingOperation();  // No pinning
        } finally {
            lock.unlock();
        }
    }
}

5.4. Impact of Pinning

The pinning problem had severe consequences:

// Demonstration of pinning impact
public class PinningImpactDemo {
    private static final Object LOCK = new Object();
    
    public static void main(String[] args) {
        int numTasks = 10000;
        
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            long start = System.currentTimeMillis();
            
            CountDownLatch latch = new CountDownLatch(numTasks);
            
            for (int i = 0; i < numTasks; i++) {
                executor.submit(() -> {
                    synchronized (LOCK) {  // All threads pin on this lock
                        try {
                            Thread.sleep(10);
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                        }
                    }
                    latch.countDown();
                });
            }
            
            latch.await();
            long duration = System.currentTimeMillis() - start;
            
            System.out.println("Time with synchronized: " + duration + "ms");
            // Result: ~Sequential execution due to pinning
        }
    }
}

Results:

  • With synchronized (pinning): ~100 seconds (essentially sequential)
  • With ReentrantLock (no pinning): ~1 second (highly concurrent)

6. Java 25 Improvements: Solving the Pinning Problem

6.1. JEP 491: Synchronized Blocks No Longer Pin

Java 25 introduces a revolutionary change through JEP 491: synchronized blocks and methods no longer pin virtual threads to their carrier threads.

How it works:

Java 21-24 Behavior:
┌─────────────┐
│Virtual      │
│Thread       │ ─ synchronized block ─> PINS carrier thread
└─────┬───────┘
      │ PINNED
      ↓
┌─────────────┐
│Carrier      │ ← Cannot be reused
│Thread       │
└─────────────┘

Java 25+ Behavior:
┌─────────────┐
│Virtual      │
│Thread       │ ─ synchronized block ─> Unmounts normally
└─────────────┘
      │
      ↓ Unmounts
┌─────────────┐
│Carrier      │ ← Available for other virtual threads
│Thread (FREE)│
└─────────────┘

6.2. Implementation Details

The JVM now uses a new locking mechanism that allows virtual threads to yield even inside synchronized blocks:

public class Java25SynchronizedExample {
    private final Object lock = new Object();
    
    public void modernSynchronized() {
        synchronized (lock) {
            // In Java 25+, this blocking operation
            // will NOT pin the carrier thread
            try {
                Thread.sleep(1000);
                
                // I/O operations also don't pin anymore
                String data = blockingDatabaseCall();
                processData(data);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        // Virtual thread can unmount and remount as needed
    }
    
    private String blockingDatabaseCall() {
        // Simulated blocking I/O
        return "data";
    }
    
    private void processData(String data) {
        // Processing
    }
}

6.3. Performance Improvements

Let’s compare the same workload across Java versions:

public class PerformanceComparison {
    private static final Object SHARED_LOCK = new Object();
    
    public static void main(String[] args) throws InterruptedException {
        int numTasks = 10000;
        int sleepMs = 10;
        
        // Test with synchronized blocks
        testSynchronized(numTasks, sleepMs);
    }
    
    private static void testSynchronized(int numTasks, int sleepMs) 
            throws InterruptedException {
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            long start = System.currentTimeMillis();
            CountDownLatch latch = new CountDownLatch(numTasks);
            
            for (int i = 0; i < numTasks; i++) {
                executor.submit(() -> {
                    synchronized (SHARED_LOCK) {
                        try {
                            Thread.sleep(sleepMs);
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                        }
                    }
                    latch.countDown();
                });
            }
            
            latch.await();
            long duration = System.currentTimeMillis() - start;
            
            System.out.println("Synchronized block test:");
            System.out.println("  Tasks: " + numTasks);
            System.out.println("  Duration: " + duration + "ms");
            System.out.println("  Throughput: " + (numTasks * 1000.0 / duration) + " tasks/sec");
        }
    }
}

Results:

Java 21-24:
  Tasks: 10000
  Duration: ~100000ms (essentially sequential)
  Throughput: ~100 tasks/sec

Java 25:
  Tasks: 10000
  Duration: ~1000ms (highly parallel)
  Throughput: ~10000 tasks/sec
  
100x performance improvement!

6.4. No More Manual Refactoring

Before Java 25, libraries and applications had to refactor synchronized code:

// Pre-Java 25: Had to refactor to avoid pinning
public class PreJava25Approach {
    // Changed from Object to ReentrantLock
    private final ReentrantLock lock = new ReentrantLock();
    
    public void doWork() {
        lock.lock();  // More verbose
        try {
            blockingOperation();
        } finally {
            lock.unlock();
        }
    }
}

// Java 25+: Can keep existing synchronized code
public class Java25Approach {
    private final Object lock = new Object();
    
    public synchronized void doWork() {  // Simple, no pinning
        blockingOperation();
    }
}

6.5. Remaining Pinning Scenarios

Java 25 removes most cases where virtual threads could become pinned, but a few situations can still prevent a virtual thread from unmounting from its carrier thread:

1. Blocking Native Calls (JNI)

If a virtual thread enters a JNI method that blocks, the JVM cannot safely suspend it, so the carrier thread remains pinned until the native call returns.

2. Synchronized Blocks Leading Into Native Work

Although Java-level synchronization no longer pins, a synchronized section that transitions into a blocking native operation can still force the carrier thread to stay attached.

3. Low-Level APIs Requiring Thread Affinity

Code using Unsafe, custom locks, or mechanisms that assume a fixed OS thread may require pinning to maintain correctness.

6.6. Migration Benefits

Existing codebases automatically benefit from Java 25:

// Legacy code using synchronized (common in older libraries)
public class LegacyService {
    private final Map<String, Data> cache = new HashMap<>();
    
    public synchronized Data getData(String key) {
        if (!cache.containsKey(key)) {
            // This would pin in Java 21-24
            // No pinning in Java 25!
            Data data = expensiveDatabaseCall(key);
            cache.put(key, data);
        }
        return cache.get(key);
    }
    
    private Data expensiveDatabaseCall(String key) {
        // Blocking I/O
        return new Data();
    }
    
    record Data() {}
}

7. Understanding ForkJoinPool and Virtual Thread Scheduling

Virtual threads behave as if each one runs independently, but they do not execute directly on the CPU. Instead, the JVM schedules them onto a small set of real OS threads known as carrier threads. These carrier threads are managed by the ForkJoinPool, which serves as the internal scheduler that runs, pauses, and resumes virtual threads.

This scheduling model allows Java to scale to massive levels of concurrency without overwhelming the operating system.

7.1 What the ForkJoinPool Is

The ForkJoinPool is a high-performance thread pool built around a small number of long-lived worker threads. It was originally designed for parallel computations but is also ideal for running virtual threads because of its extremely efficient scheduling behaviour.

Each worker thread maintains its own task queue, allowing most operations to happen without contention. The pool is designed to keep all CPU cores busy with minimal overhead.

7.2 The Work-Stealing Algorithm

A defining feature of the ForkJoinPool is its work-stealing algorithm. Each worker thread primarily works from its own queue, but when it becomes idle, it doesn’t wait—it looks for work in other workers’ queues.

In other words:

  • Active workers process their own tasks.
  • Idle workers “steal” tasks from other queues.
  • Stealing avoids bottlenecks and keeps all CPU cores busy.
  • Tasks spread dynamically across the pool, improving throughput.

This decentralized approach avoids the cost of a single shared queue and ensures that no CPU thread sits idle while others still have work.

Work-stealing is one of the main reasons the ForkJoinPool can handle huge numbers of virtual threads efficiently.

7.3 Why Virtual Threads Use the ForkJoinPool

Virtual threads frequently block during operations like I/O, sleeping, or locking. When a virtual thread blocks, the JVM can save its execution state and immediately free the carrier thread.

To make this efficient, Java needs a scheduler that can:

  • quickly reassign work to available carrier threads
  • keep CPUs fully utilized
  • handle thousands or millions of short-lived tasks
  • pick up paused virtual threads instantly when they resume

The ForkJoinPool, with its lightweight scheduling and work-stealing algorithm, suited these needs perfectly.

7.4 How Virtual Thread Scheduling Works

The scheduling process works as follows:

  1. A virtual thread becomes runnable.
  2. The ForkJoinPool assigns it to an available carrier thread.
  3. The virtual thread executes until it blocks.
  4. The JVM captures its state and unmounts it, freeing the carrier thread.
  5. When the blocking operation completes, the virtual thread is placed back into the pool’s queues.
  6. Any available carrier thread—regardless of which one ran it earlier—can resume it.

Because virtual threads run only when actively computing, and unmount the moment they block, the ForkJoinPool keeps the system efficient and responsive.

7.5 Why This Design Scales

This architecture scales exceptionally well:

  • Few OS threads handle many virtual threads.
  • Blocking is cheap, because it releases carrier threads instantly.
  • Work-stealing ensures every CPU is busy and load-balanced.
  • Context switching is lightweight compared to OS thread switching.
  • Developers write simple blocking code, without worrying about thread pool exhaustion.

It gives Java the scalability of an asynchronous runtime with the readability of synchronous code.

7.6 Misconceptions About the ForkJoinPool

Although virtual threads rely on a ForkJoinPool internally, they do not interfere with:

  • parallel streams,
  • custom ForkJoinPools created by the application,
  • or other thread pools.

The virtual-thread scheduler is isolated, and it normally requires no configuration or tuning.

The ForkJoinPool, powered by its work-stealing algorithm, provides the small number of OS threads and the efficient scheduling needed to run them at scale. Together, they allow Java to deliver enormous concurrency without the complexity or overhead of traditional threading models.

8. Virtual Threads vs. Reactive Programming

8.1. Code Complexity Comparison

// Scenario: Fetch user data, enrich with profile, save to database

// Reactive approach (Spring WebFlux)
public class ReactiveUserService {
    
    public Mono<User> processUser(String userId) {
        return userRepository.findById(userId)
            .flatMap(user -> 
                profileService.getProfile(user.getProfileId())
                    .map(profile -> user.withProfile(profile))
            )
            .flatMap(user -> 
                enrichmentService.enrichData(user)
            )
            .flatMap(user -> 
                userRepository.save(user)
            )
            .doOnError(error -> 
                log.error("Error processing user", error)
            )
            .timeout(Duration.ofSeconds(5))
            .retry(3);
    }
}

// Virtual thread approach (Spring Boot with Virtual Threads)
public class VirtualThreadUserService {
    
    public User processUser(String userId) {
        try {
            // Simple, sequential code that scales
            User user = userRepository.findById(userId);
            Profile profile = profileService.getProfile(user.getProfileId());
            user = user.withProfile(profile);
            user = enrichmentService.enrichData(user);
            return userRepository.save(user);
            
        } catch (Exception e) {
            log.error("Error processing user", e);
            throw e;
        }
    }
}

8.2. Error Handling Comparison

// Reactive error handling
public Mono<Result> reactiveProcessing() {
    return fetchData()
        .flatMap(data -> validate(data))
        .flatMap(data -> process(data))
        .onErrorResume(ValidationException.class, e -> 
            Mono.just(Result.validationFailed(e)))
        .onErrorResume(ProcessingException.class, e -> 
            Mono.just(Result.processingFailed(e)))
        .onErrorResume(e -> 
            Mono.just(Result.unknownError(e)));
}

// Virtual thread error handling
public Result virtualThreadProcessing() {
    try {
        Data data = fetchData();
        validate(data);
        return process(data);
        
    } catch (ValidationException e) {
        return Result.validationFailed(e);
    } catch (ProcessingException e) {
        return Result.processingFailed(e);
    } catch (Exception e) {
        return Result.unknownError(e);
    }
}

8.3. When to Use Each Approach

Use Virtual Threads When:

  • You want simple, readable code
  • Your team is familiar with imperative programming
  • You need easy debugging with clear stack traces
  • You’re working with blocking APIs
  • You want to migrate existing code with minimal changes

Consider Reactive When:

  • You need backpressure handling
  • You’re building streaming data pipelines
  • You need fine grained control over execution
  • Your entire stack is already reactive

9. Advanced Virtual Thread Patterns

9.1. Fan Out / Fan In Pattern

public class FanOutFanInPattern {
    
    public CompletedReport generateReport(List<String> dataSourceIds) throws Exception {
        try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
            
            // Fan out: Submit tasks for each data source
            List<Subtask<DataChunk>> tasks = dataSourceIds.stream()
                .map(id -> scope.fork(() -> fetchFromDataSource(id)))
                .toList();
            
            // Wait for all to complete
            scope.join();
            scope.throwIfFailed();
            
            // Fan in: Combine results
            List<DataChunk> allData = tasks.stream()
                .map(Subtask::get)
                .toList();
            
            return aggregateReport(allData);
        }
    }
    
    private DataChunk fetchFromDataSource(String id) throws InterruptedException {
        Thread.sleep(100); // Simulate I/O
        return new DataChunk(id, "Data from " + id);
    }
    
    private CompletedReport aggregateReport(List<DataChunk> chunks) {
        return new CompletedReport(chunks);
    }
    
    record DataChunk(String sourceId, String data) {}
    record CompletedReport(List<DataChunk> chunks) {}
}

9.2. Rate Limited Processing

public class RateLimitedProcessor {
    private final Semaphore rateLimiter;
    private final ExecutorService executor;
    
    public RateLimitedProcessor(int maxConcurrent) {
        this.rateLimiter = new Semaphore(maxConcurrent);
        this.executor = Executors.newVirtualThreadPerTaskExecutor();
    }
    
    public void processItems(List<Item> items) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(items.size());
        
        for (Item item : items) {
            executor.submit(() -> {
                try {
                    rateLimiter.acquire();
                    try {
                        processItem(item);
                    } finally {
                        rateLimiter.release();
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    latch.countDown();
                }
            });
        }
        
        latch.await();
    }
    
    private void processItem(Item item) throws InterruptedException {
        Thread.sleep(50); // Simulate processing
        System.out.println("Processed: " + item.id());
    }
    
    public void shutdown() {
        executor.close();
    }
    
    record Item(String id) {}
    
    public static void main(String[] args) throws InterruptedException {
        RateLimitedProcessor processor = new RateLimitedProcessor(10);
        
        List<Item> items = IntStream.range(0, 100)
            .mapToObj(i -> new Item("item-" + i))
            .toList();
        
        long start = System.currentTimeMillis();
        processor.processItems(items);
        long duration = System.currentTimeMillis() - start;
        
        System.out.println("Processed " + items.size() + 
            " items in " + duration + "ms");
        
        processor.shutdown();
    }
}

9.3. Timeout Pattern

public class TimeoutPattern {
    
    public <T> T executeWithTimeout(Callable<T> task, Duration timeout) 
            throws Exception {
        try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
            
            Subtask<T> subtask = scope.fork(task);
            
            // Join with timeout
            scope.joinUntil(Instant.now().plus(timeout));
            
            if (subtask.state() == Subtask.State.SUCCESS) {
                return subtask.get();
            } else {
                throw new TimeoutException("Task did not complete within " + timeout);
            }
        }
    }
    
    public static void main(String[] args) {
        TimeoutPattern pattern = new TimeoutPattern();
        
        try {
            String result = pattern.executeWithTimeout(
                () -> {
                    Thread.sleep(5000);
                    return "Completed";
                },
                Duration.ofSeconds(2)
            );
            System.out.println("Result: " + result);
        } catch (TimeoutException e) {
            System.out.println("Task timed out!");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

9.4. Racing Tasks Pattern

public class RacingTasksPattern {
    
    public <T> T race(List<Callable<T>> tasks) throws Exception {
        try (var scope = new StructuredTaskScope.ShutdownOnSuccess<T>()) {
            
            // Submit all tasks
            for (Callable<T> task : tasks) {
                scope.fork(task);
            }
            
            // Wait for first success
            scope.join();
            
            // Return the first result
            return scope.result();
        }
    }
    
    public static void main(String[] args) throws Exception {
        RacingTasksPattern pattern = new RacingTasksPattern();
        
        List<Callable<String>> tasks = List.of(
            () -> {
                Thread.sleep(1000);
                return "Server 1 response";
            },
            () -> {
                Thread.sleep(500);
                return "Server 2 response";
            },
            () -> {
                Thread.sleep(2000);
                return "Server 3 response";
            }
        );
        
        long start = System.currentTimeMillis();
        String result = pattern.race(tasks);
        long duration = System.currentTimeMillis() - start;
        
        System.out.println("Winner: " + result);
        System.out.println("Time: " + duration + "ms");
        // Output: Winner: Server 2 response, Time: ~500ms
    }
}

10. Best Practices and Gotchas

10.1. ThreadLocal Considerations

Virtual threads and ThreadLocal can lead to memory issues:

public class ThreadLocalIssues {
    
    // PROBLEM: ThreadLocal with virtual threads
    private static final ThreadLocal<ExpensiveResource> resource = 
        ThreadLocal.withInitial(ExpensiveResource::new);
    
    public void problematicUsage() {
        // With millions of virtual threads, millions of instances!
        ExpensiveResource r = resource.get();
        r.doWork();
    }
    
    // SOLUTION 1: Use scoped values (Java 21+)
    private static final ScopedValue<ExpensiveResource> scopedResource = 
        ScopedValue.newInstance();
    
    public void betterUsage() {
        ExpensiveResource r = new ExpensiveResource();
        ScopedValue.where(scopedResource, r).run(() -> {
            ExpensiveResource scoped = scopedResource.get();
            scoped.doWork();
        });
    }
    
    // SOLUTION 2: Pass as parameters
    public void bestUsage(ExpensiveResource resource) {
        resource.doWork();
    }
    
    static class ExpensiveResource {
        private final byte[] data = new byte[1024 * 1024]; // 1MB
        
        void doWork() {
            // Work with resource
        }
    }
}

10.2. Don’t Block the Carrier Thread Pool

public class CarrierThreadPoolGotchas {
    
    // BAD: CPU intensive work in virtual threads
    public void cpuIntensiveWork() {
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 1000; i++) {
                executor.submit(() -> {
                    // This blocks a carrier thread with CPU work
                    computePrimes(1_000_000);
                });
            }
        }
    }
    
    // GOOD: Use platform thread pool for CPU work
    public void properCpuWork() {
        try (ExecutorService executor = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors())) {
            for (int i = 0; i < 1000; i++) {
                executor.submit(() -> {
                    computePrimes(1_000_000);
                });
            }
        }
    }
    
    // VIRTUAL THREADS: Best for I/O bound work
    public void ioWork() {
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 1_000_000; i++) {
                executor.submit(() -> {
                    try {
                        // I/O operations: perfect for virtual threads
                        String data = fetchFromDatabase();
                        sendToAPI(data);
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                });
            }
        }
    }
    
    private void computePrimes(int limit) {
        // CPU intensive calculation
        for (int i = 2; i < limit; i++) {
            boolean isPrime = true;
            for (int j = 2; j <= Math.sqrt(i); j++) {
                if (i % j == 0) {
                    isPrime = false;
                    break;
                }
            }
        }
    }
    
    private String fetchFromDatabase() {
        return "data";
    }
    
    private void sendToAPI(String data) {
        // API call
    }
}

10.3. Monitoring and Observability

public class VirtualThreadMonitoring {
    
    public static void main(String[] args) throws Exception {
        // Enable virtual thread events
        System.setProperty("jdk.tracePinnedThreads", "full");
        
        // Get thread metrics
        ThreadMXBean threadBean = ManagementFactory.getThreadMXBean();
        
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            
            // Submit many tasks
            List<Future<?>> futures = new ArrayList<>();
            for (int i = 0; i < 10000; i++) {
                futures.add(executor.submit(() -> {
                    try {
                        Thread.sleep(100);
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }));
            }
            
            // Monitor while tasks execute
            Thread.sleep(50);
            System.out.println("Thread count: " + threadBean.getThreadCount());
            System.out.println("Peak threads: " + threadBean.getPeakThreadCount());
            
            // Wait for completion
            for (Future<?> future : futures) {
                future.get();
            }
        }
        
        System.out.println("Final thread count: " + threadBean.getThreadCount());
    }
}

10.4. Structured Concurrency Best Practices

public class StructuredConcurrencyBestPractices {
    
    // GOOD: Properly structured with clear lifecycle
    public Result processWithStructure() throws Exception {
        try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
            
            Subtask<Data> dataTask = scope.fork(this::fetchData);
            Subtask<Config> configTask = scope.fork(this::fetchConfig);
            
            scope.join();
            scope.throwIfFailed();
            
            return new Result(dataTask.get(), configTask.get());
            
        } // Scope ensures all tasks complete or are cancelled
    }
    
    // BAD: Unstructured concurrency (avoid)
    public Result processWithoutStructure() {
        CompletableFuture<Data> dataFuture = 
            CompletableFuture.supplyAsync(this::fetchData);
        CompletableFuture<Config> configFuture = 
            CompletableFuture.supplyAsync(this::fetchConfig);
        
        // No clear lifecycle, potential resource leaks
        return new Result(
            dataFuture.join(), 
            configFuture.join()
        );
    }
    
    private Data fetchData() {
        return new Data();
    }
    
    private Config fetchConfig() {
        return new Config();
    }
    
    record Data() {}
    record Config() {}
    record Result(Data data, Config config) {}
}

11. Real World Use Cases

11.1. Web Server with Virtual Threads

// Spring Boot 3.2+ with Virtual Threads
@SpringBootApplication
public class VirtualThreadWebApp {
    
    public static void main(String[] args) {
        SpringApplication.run(VirtualThreadWebApp.class, args);
    }
    
    @Bean
    public TomcatProtocolHandlerCustomizer<?> protocolHandlerVirtualThreadExecutorCustomizer() {
        return protocolHandler -> {
            protocolHandler.setExecutor(Executors.newVirtualThreadPerTaskExecutor());
        };
    }
}

@RestController
@RequestMapping("/api")
class UserController {
    
    @Autowired
    private UserService userService;
    
    @GetMapping("/users/{id}")
    public ResponseEntity<User> getUser(@PathVariable String id) {
        // This runs on a virtual thread
        // Blocking calls are fine!
        User user = userService.fetchUser(id);
        return ResponseEntity.ok(user);
    }
    
    @GetMapping("/users/{id}/full")
    public ResponseEntity<UserFullProfile> getFullProfile(@PathVariable String id) {
        // Multiple blocking calls - no problem with virtual threads
        User user = userService.fetchUser(id);
        List<Order> orders = userService.fetchOrders(id);
        List<Review> reviews = userService.fetchReviews(id);
        
        return ResponseEntity.ok(
            new UserFullProfile(user, orders, reviews)
        );
    }
    
    record User(String id, String name) {}
    record Order(String id) {}
    record Review(String id) {}
    record UserFullProfile(User user, List<Order> orders, List<Review> reviews) {}
}

11.2. Batch Processing System

public class BatchProcessor {
    private final ExecutorService executor = 
        Executors.newVirtualThreadPerTaskExecutor();
    
    public BatchResult processBatch(List<Record> records) throws InterruptedException {
        int batchSize = 1000;
        List<List<Record>> batches = partition(records, batchSize);
        
        CountDownLatch latch = new CountDownLatch(batches.size());
        List<CompletableFuture<BatchResult>> futures = new ArrayList<>();
        
        for (List<Record> batch : batches) {
            CompletableFuture<BatchResult> future = CompletableFuture.supplyAsync(
                () -> {
                    try {
                        return processSingleBatch(batch);
                    } finally {
                        latch.countDown();
                    }
                },
                executor
            );
            futures.add(future);
        }
        
        latch.await();
        
        // Combine results
        return futures.stream()
            .map(CompletableFuture::join)
            .reduce(BatchResult.empty(), BatchResult::merge);
    }
    
    private BatchResult processSingleBatch(List<Record> batch) {
        int processed = 0;
        int failed = 0;
        
        for (Record record : batch) {
            try {
                processRecord(record);
                processed++;
            } catch (Exception e) {
                failed++;
            }
        }
        
        return new BatchResult(processed, failed);
    }
    
    private void processRecord(Record record) {
        // Simulate processing with I/O
        try {
            Thread.sleep(10);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
    
    private <T> List<List<T>> partition(List<T> list, int size) {
        List<List<T>> partitions = new ArrayList<>();
        for (int i = 0; i < list.size(); i += size) {
            partitions.add(list.subList(i, Math.min(i + size, list.size())));
        }
        return partitions;
    }
    
    public void shutdown() {
        executor.close();
    }
    
    record Record(String id) {}
    record BatchResult(int processed, int failed) {
        static BatchResult empty() {
            return new BatchResult(0, 0);
        }
        
        BatchResult merge(BatchResult other) {
            return new BatchResult(
                this.processed + other.processed,
                this.failed + other.failed
            );
        }
    }
}

11.3. Microservice Communication

public class MicroserviceOrchestrator {
    private final ExecutorService executor = 
        Executors.newVirtualThreadPerTaskExecutor();
    private final HttpClient httpClient = HttpClient.newHttpClient();
    
    public OrderResponse processOrder(OrderRequest request) throws Exception {
        try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
            
            // Call multiple microservices in parallel
            Subtask<Customer> customerTask = scope.fork(
                () -> fetchCustomer(request.customerId())
            );
            
            Subtask<Inventory> inventoryTask = scope.fork(
                () -> checkInventory(request.productId(), request.quantity())
            );
            
            Subtask<PaymentResult> paymentTask = scope.fork(
                () -> processPayment(request.customerId(), request.amount())
            );
            
            Subtask<ShippingQuote> shippingTask = scope.fork(
                () -> getShippingQuote(request.address())
            );
            
            // Wait for all services to respond
            scope.join();
            scope.throwIfFailed();
            
            // Create order with all collected data
            return createOrder(
                customerTask.get(),
                inventoryTask.get(),
                paymentTask.get(),
                shippingTask.get()
            );
        }
    }
    
    private Customer fetchCustomer(String customerId) {
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://customer-service/api/customers/" + customerId))
            .build();
        
        try {
            HttpResponse<String> response = 
                httpClient.send(request, HttpResponse.BodyHandlers.ofString());
            return parseCustomer(response.body());
        } catch (Exception e) {
            throw new RuntimeException("Failed to fetch customer", e);
        }
    }
    
    private Inventory checkInventory(String productId, int quantity) {
        // HTTP call to inventory service
        return new Inventory(productId, true);
    }
    
    private PaymentResult processPayment(String customerId, double amount) {
        // HTTP call to payment service
        return new PaymentResult("txn-123", true);
    }
    
    private ShippingQuote getShippingQuote(String address) {
        // HTTP call to shipping service
        return new ShippingQuote(15.99);
    }
    
    private Customer parseCustomer(String json) {
        return new Customer("cust-1", "John Doe");
    }
    
    private OrderResponse createOrder(Customer customer, Inventory inventory, 
                                     PaymentResult payment, ShippingQuote shipping) {
        return new OrderResponse("order-123", "CONFIRMED");
    }
    
    record OrderRequest(String customerId, String productId, int quantity, 
                       double amount, String address) {}
    record Customer(String id, String name) {}
    record Inventory(String productId, boolean available) {}
    record PaymentResult(String transactionId, boolean success) {}
    record ShippingQuote(double cost) {}
    record OrderResponse(String orderId, String status) {}
}

12. Performance Benchmarks

12.1. Throughput Comparison

public class ThroughputBenchmark {
    
    public static void main(String[] args) throws InterruptedException {
        int numRequests = 100_000;
        int ioDelayMs = 10;
        
        System.out.println("=== Throughput Benchmark ===");
        System.out.println("Requests: " + numRequests);
        System.out.println("I/O delay per request: " + ioDelayMs + "ms\n");
        
        // Platform threads with fixed pool
        benchmarkPlatformThreads(numRequests, ioDelayMs);
        
        // Virtual threads
        benchmarkVirtualThreads(numRequests, ioDelayMs);
    }
    
    private static void benchmarkPlatformThreads(int numRequests, int ioDelayMs) 
            throws InterruptedException {
        try (ExecutorService executor = Executors.newFixedThreadPool(200)) {
            long start = System.nanoTime();
            CountDownLatch latch = new CountDownLatch(numRequests);
            
            for (int i = 0; i < numRequests; i++) {
                executor.submit(() -> {
                    try {
                        Thread.sleep(ioDelayMs);
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    } finally {
                        latch.countDown();
                    }
                });
            }
            
            latch.await();
            long duration = System.nanoTime() - start;
            double seconds = duration / 1_000_000_000.0;
            
            System.out.println("Platform Threads (200 thread pool):");
            System.out.println("  Duration: " + String.format("%.2f", seconds) + "s");
            System.out.println("  Throughput: " + 
                String.format("%.0f", numRequests / seconds) + " req/s\n");
        }
    }
    
    private static void benchmarkVirtualThreads(int numRequests, int ioDelayMs) 
            throws InterruptedException {
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            long start = System.nanoTime();
            CountDownLatch latch = new CountDownLatch(numRequests);
            
            for (int i = 0; i < numRequests; i++) {
                executor.submit(() -> {
                    try {
                        Thread.sleep(ioDelayMs);
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    } finally {
                        latch.countDown();
                    }
                });
            }
            
            latch.await();
            long duration = System.nanoTime() - start;
            double seconds = duration / 1_000_000_000.0;
            
            System.out.println("Virtual Threads:");
            System.out.println("  Duration: " + String.format("%.2f", seconds) + "s");
            System.out.println("  Throughput: " + 
                String.format("%.0f", numRequests / seconds) + " req/s\n");
        }
    }
}

Expected Output:

=== Throughput Benchmark ===
Requests: 100000
I/O delay per request: 10ms

Platform Threads (200 thread pool):
  Duration: 50.23s
  Throughput: 1991 req/s

Virtual Threads:
  Duration: 1.15s
  Throughput: 86957 req/s

12.2. Memory Footprint

public class MemoryFootprintTest {
    
    public static void main(String[] args) throws InterruptedException {
        Runtime runtime = Runtime.getRuntime();
        
        System.out.println("=== Memory Footprint Test ===\n");
        
        // Baseline
        System.gc();
        Thread.sleep(1000);
        long baselineMemory = runtime.totalMemory() - runtime.freeMemory();
        
        // Platform threads
        testPlatformThreadMemory(runtime, baselineMemory);
        
        // Virtual threads
        testVirtualThreadMemory(runtime, baselineMemory);
    }
    
    private static void testPlatformThreadMemory(Runtime runtime, long baseline) 
            throws InterruptedException {
        System.gc();
        Thread.sleep(1000);
        
        int numThreads = 1000;
        CountDownLatch latch = new CountDownLatch(numThreads);
        CountDownLatch startLatch = new CountDownLatch(1);
        
        for (int i = 0; i < numThreads; i++) {
            Thread thread = new Thread(() -> {
                try {
                    startLatch.await();
                    Thread.sleep(10000); // Keep alive
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    latch.countDown();
                }
            });
            thread.start();
        }
        
        Thread.sleep(1000);
        long memoryWithThreads = runtime.totalMemory() - runtime.freeMemory();
        long memoryPerThread = (memoryWithThreads - baseline) / numThreads;
        
        System.out.println("Platform Threads (" + numThreads + " threads):");
        System.out.println("  Total memory: " + 
            (memoryWithThreads - baseline) / (1024 * 1024) + " MB");
        System.out.println("  Memory per thread: " + 
            memoryPerThread / 1024 + " KB\n");
        
        startLatch.countDown();
        latch.await();
    }
    
    private static void testVirtualThreadMemory(Runtime runtime, long baseline) 
            throws InterruptedException {
        System.gc();
        Thread.sleep(1000);
        
        int numThreads = 100_000;
        CountDownLatch latch = new CountDownLatch(numThreads);
        CountDownLatch startLatch = new CountDownLatch(1);
        
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < numThreads; i++) {
                executor.submit(() -> {
                    try {
                        startLatch.await();
                        Thread.sleep(10000);
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    } finally {
                        latch.countDown();
                    }
                });
            }
            
            Thread.sleep(1000);
            long memoryWithThreads = runtime.totalMemory() - runtime.freeMemory();
            long memoryPerThread = (memoryWithThreads - baseline) / numThreads;
            
            System.out.println("Virtual Threads (" + numThreads + " threads):");
            System.out.println("  Total memory: " + 
                (memoryWithThreads - baseline) / (1024 * 1024) + " MB");
            System.out.println("  Memory per thread: " + 
                memoryPerThread + " bytes\n");
            
            startLatch.countDown();
            latch.await();
        }
    }
}

13. Migration Guide

13.1. From ExecutorService to Virtual Threads

// Before: Platform thread pool
public class BeforeMigration {
    private final ExecutorService executor = 
        Executors.newFixedThreadPool(100);
    
    public void processRequests(List<Request> requests) {
        for (Request request : requests) {
            executor.submit(() -> handleRequest(request));
        }
    }
    
    private void handleRequest(Request request) {
        // Process request
    }
    
    record Request(String id) {}
}

// After: Virtual threads
public class AfterMigration {
    private final ExecutorService executor = 
        Executors.newVirtualThreadPerTaskExecutor();
    
    public void processRequests(List<Request> requests) {
        for (Request request : requests) {
            executor.submit(() -> handleRequest(request));
        }
    }
    
    private void handleRequest(Request request) {
        // Same code, better scalability
    }
    
    record Request(String id) {}
}

13.2. From CompletableFuture to Structured Concurrency

// Before: CompletableFuture
public class CompletableFutureApproach {
    
    public OrderSummary getOrderSummary(String orderId) {
        CompletableFuture<Order> orderFuture = 
            CompletableFuture.supplyAsync(() -> fetchOrder(orderId));
        
        CompletableFuture<Customer> customerFuture = 
            CompletableFuture.supplyAsync(() -> fetchCustomer(orderId));
        
        CompletableFuture<List<Item>> itemsFuture = 
            CompletableFuture.supplyAsync(() -> fetchItems(orderId));
        
        return CompletableFuture.allOf(orderFuture, customerFuture, itemsFuture)
            .thenApply(v -> new OrderSummary(
                orderFuture.join(),
                customerFuture.join(),
                itemsFuture.join()
            ))
            .join();
    }
    
    private Order fetchOrder(String orderId) { return new Order(); }
    private Customer fetchCustomer(String orderId) { return new Customer(); }
    private List<Item> fetchItems(String orderId) { return List.of(); }
    
    record Order() {}
    record Customer() {}
    record Item() {}
    record OrderSummary(Order order, Customer customer, List<Item> items) {}
}

// After: Structured Concurrency
public class StructuredConcurrencyApproach {
    
    public OrderSummary getOrderSummary(String orderId) throws Exception {
        try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
            
            var orderTask = scope.fork(() -> fetchOrder(orderId));
            var customerTask = scope.fork(() -> fetchCustomer(orderId));
            var itemsTask = scope.fork(() -> fetchItems(orderId));
            
            scope.join();
            scope.throwIfFailed();
            
            return new OrderSummary(
                orderTask.get(),
                customerTask.get(),
                itemsTask.get()
            );
        }
    }
    
    private Order fetchOrder(String orderId) { return new Order(); }
    private Customer fetchCustomer(String orderId) { return new Customer(); }
    private List<Item> fetchItems(String orderId) { return List.of(); }
    
    record Order() {}
    record Customer() {}
    record Item() {}
    record OrderSummary(Order order, Customer customer, List<Item> items) {}
}

13.3. Gradual Migration Strategy

  1. Identify I/O Bound Code: Focus on services with blocking I/O
  2. Update Executor Services: Replace fixed thread pools with virtual thread executors
  3. Refactor Synchronized Blocks: In Java 21-24, replace with ReentrantLock; in Java 25+, keep as is
  4. Test Under Load: Ensure no regressions
  5. Monitor Pinning: Use JVM flags to detect remaining pinning issues

14. Conclusion

Virtual threads represent a fundamental shift in Java’s concurrency model. They bring the simplicity of synchronous programming to highly concurrent applications, enabling millions of concurrent operations without the resource constraints of platform threads.

Key Takeaways:

  1. Virtual threads are cheap: Create millions without memory concerns
  2. Blocking is fine: The JVM handles mount/unmount efficiently
  3. Java 25 solves pinning: Synchronized blocks no longer pin carrier threads
  4. Simple programming model: Write straightforward synchronous code that scales
  5. I/O bound workloads: Perfect for applications dominated by network or disk I/O
  6. Structured concurrency: Enables clean, maintainable concurrent code

When to Use Virtual Threads:

  • High concurrency web servers
  • Microservice communication
  • Batch processing systems
  • I/O intensive applications
  • Database query processing

When to Use Platform Threads:

  • CPU intensive computations
  • Small number of long running tasks
  • When you need precise control over thread scheduling

Virtual threads, combined with structured concurrency, provide Java developers with powerful tools to build scalable, maintainable concurrent applications without the complexity of reactive programming. With Java 25’s improvements eliminating the major pinning issues, virtual threads are now production ready for virtually any use case.

0
0

Deep Dive: Pauseless Garbage Collection in Java 25

1. Introduction

Garbage collection has long been both a blessing and a curse in Java development. While automatic memory management frees developers from manual allocation and deallocation, traditional garbage collectors introduced unpredictable stop the world pauses that could severely impact application responsiveness. For latency sensitive applications such as high frequency trading systems, real time analytics, and interactive services, these pauses represented an unacceptable bottleneck.

Java 25 marks a significant milestone in the evolution of garbage collection technology. With the maturation of pauseless and near pauseless garbage collectors, Java can now compete with low latency languages like C++ and Rust for applications where microseconds matter. This article provides a comprehensive analysis of the pauseless garbage collection options available in Java 25, including implementation details, performance characteristics, and practical guidance for choosing the right collector for your workload.

2. Understanding Pauseless Garbage Collection

2.1 The Problem with Traditional Collectors

Traditional garbage collectors like Parallel GC and even the sophisticated G1 collector require stop the world pauses for certain operations. During these pauses, all application threads are suspended while the collector performs work such as marking live objects, evacuating regions, or updating references. The duration of these pauses typically scales with heap size and the complexity of the object graph, making them problematic for:

  • Large heap applications (tens to hundreds of gigabytes)
  • Real time systems with strict latency requirements
  • High throughput services where tail latency affects user experience
  • Systems requiring consistent 99.99th percentile response times

2.2 Concurrent Collection Principles

Pauseless garbage collectors minimize or eliminate stop the world pauses by performing most of their work concurrently with application threads. This is achieved through several key techniques:

Read and Write Barriers: These are lightweight checks inserted into the application code that ensure memory consistency between concurrent GC and application threads. Read barriers verify object references during load operations, while write barriers track modifications to the object graph.

Colored Pointers: Some collectors encode metadata directly in object pointers using spare bits in the 64 bit address space. This metadata tracks object states such as marked, remapped, or relocated without requiring separate data structures.

Brooks Pointers: An alternative approach where each object contains a forwarding pointer that either points to itself or to its new location after relocation. This enables concurrent compaction without long pauses.

Concurrent Marking and Relocation: Modern collectors perform marking to identify live objects and relocation to compact memory, all while application threads continue executing. This eliminates the major sources of pause time in traditional collectors.

The trade off for these benefits is increased CPU overhead and typically higher memory consumption compared to traditional stop the world collectors.

3. Z Garbage Collector (ZGC)

3.1 Overview and Architecture

ZGC is a scalable, low latency garbage collector introduced in Java 11 and made production ready in Java 15. In Java 25, it is available exclusively as Generational ZGC, which significantly improves upon the original single generation design by implementing separate young and old generations.

Key characteristics include:

  • Pause times consistently under 1 millisecond (submillisecond)
  • Pause times independent of heap size (8MB to 16TB)
  • Pause times independent of live set or root set size
  • Concurrent marking, relocation, and reference processing
  • Region based heap layout with dynamic region sizing
  • NUMA aware memory allocation

3.2 Technical Implementation

ZGC uses colored pointers as its core mechanism. In the 64 bit pointer layout, ZGC reserves bits for metadata:

  • 18 bits: Reserved for future use
  • 42 bits: Address space (supporting up to 4TB heaps)
  • 4 bits: Metadata including Marked0, Marked1, Remapped, and Finalizable bits

This encoding allows ZGC to track object states without separate metadata structures. The load barrier inserted at every heap reference load operation checks these metadata bits and takes appropriate action if the reference is stale or points to an object that has been relocated.

The ZGC collection cycle consists of several phases:

  1. Pause Mark Start: Brief pause to set up marking roots (typically less than 1ms)
  2. Concurrent Mark: Traverse object graph to identify live objects
  3. Pause Mark End: Brief pause to finalize marking
  4. Concurrent Process Non-Strong References: Handle weak, soft, and phantom references
  5. Concurrent Relocation: Move live objects to new locations to compact memory
  6. Concurrent Remap: Update references to relocated objects

All phases except the two brief pauses run concurrently with application threads.

3.3 Generational ZGC in Java 25

Java 25 is the first LTS release where Generational ZGC is the default and only implementation of ZGC. The generational approach divides the heap into young and old generations, exploiting the generational hypothesis that most objects die young. This provides several benefits:

  • Reduced marking overhead by focusing young collections on recently allocated objects
  • Improved throughput by avoiding full heap marking for every collection
  • Better cache locality and memory bandwidth utilization
  • Lower CPU overhead compared to single generation ZGC

Generational ZGC maintains the same submillisecond pause time guarantees while significantly improving throughput, making it suitable for a broader range of applications.

3.4 Configuration and Tuning

Basic Enablement

// Enable ZGC (default in Java 25)
java -XX:+UseZGC -Xmx16g -Xms16g YourApplication

// ZGC is enabled by default on supported platforms in Java 25
// No flags needed unless overriding default

Heap Size Configuration

The most critical tuning parameter for ZGC is heap size:

// Set maximum and minimum heap size
java -XX:+UseZGC -Xmx32g -Xms32g YourApplication

// Set soft maximum heap size (ZGC will try to stay below this)
java -XX:+UseZGC -Xmx64g -XX:SoftMaxHeapSize=48g YourApplication

ZGC requires sufficient headroom in the heap to accommodate allocations while concurrent collection is running. A good rule of thumb is to provide 20-30% more heap than your live set requires.

Concurrent GC Threads

Starting from JDK 17, ZGC dynamically scales concurrent GC threads, but you can override:

// Set number of concurrent GC threads
java -XX:+UseZGC -XX:ConcGCThreads=8 YourApplication

// Set number of parallel GC threads for STW phases
java -XX:+UseZGC -XX:ParallelGCThreads=16 YourApplication

Large Pages and Memory Management

// Enable large pages for better performance
java -XX:+UseZGC -XX:+UseLargePages YourApplication

// Enable transparent huge pages
java -XX:+UseZGC -XX:+UseTransparentHugePages YourApplication

// Disable uncommitting unused memory (for consistent low latency)
java -XX:+UseZGC -XX:-ZUncommit -Xmx32g -Xms32g -XX:+AlwaysPreTouch YourApplication

GC Logging

// Enable detailed GC logging
java -XX:+UseZGC -Xlog:gc*:file=gc.log:time,uptime,level,tags YourApplication

// Simplified GC logging
java -XX:+UseZGC -Xlog:gc:file=gc.log YourApplication

3.5 Performance Characteristics

Latency: ZGC consistently achieves pause times under 1 millisecond regardless of heap size. Studies show pause times typically range from 0.1ms to 0.5ms even on multi terabyte heaps.

Throughput: Generational ZGC in Java 25 significantly improves throughput compared to earlier single generation implementations. Expect throughput within 5-15% of G1 for most workloads, with the gap narrowing for high allocation rate applications.

Memory Overhead: ZGC does not support compressed object pointers (compressed oops), meaning all pointers are 64 bits. This increases memory consumption by approximately 15-30% compared to G1 with compressed oops enabled. Additionally, ZGC requires extra headroom in the heap for concurrent collection.

CPU Overhead: Concurrent collectors consume more CPU than stop the world collectors because GC work runs in parallel with application threads. ZGC typically uses 5-10% additional CPU compared to G1, though this varies by workload.

3.6 When to Use ZGC

ZGC is ideal for:

  • Applications requiring consistent sub 10ms pause times (ZGC provides submillisecond)
  • Large heap applications (32GB and above)
  • Systems where tail latency directly impacts business metrics
  • Real time or near real time processing systems
  • High frequency trading platforms
  • Interactive applications requiring smooth user experience
  • Microservices with strict SLA requirements

Avoid ZGC for:

  • Memory constrained environments (due to higher memory overhead)
  • Small heaps (under 4GB) where G1 may be more efficient
  • Batch processing jobs where throughput is paramount and latency does not matter
  • Applications already meeting latency requirements with G1

4. Shenandoah GC

4.1 Overview and Architecture

Shenandoah is a low latency garbage collector developed by Red Hat and integrated into OpenJDK starting with Java 12. Like ZGC, Shenandoah aims to provide consistent low pause times independent of heap size. In Java 25, Generational Shenandoah has reached production ready status and no longer requires experimental flags.

Key characteristics include:

  • Pause times typically 1-10 milliseconds, independent of heap size
  • Concurrent marking, evacuation, and reference processing
  • Uses Brooks pointers for concurrent compaction
  • Region based heap management
  • Support for both generational and non generational modes
  • Works well with heap sizes from hundreds of megabytes to hundreds of gigabytes

4.2 Technical Implementation

Unlike ZGC’s colored pointers, Shenandoah uses Brooks pointers (also called forwarding pointers or indirection pointers). Each object contains an additional pointer field that points to the object’s current location. When an object is relocated during compaction:

  1. The object is copied to its new location
  2. The Brooks pointer in the old location is updated to point to the new location
  3. Application threads accessing the old location follow the forwarding pointer

This mechanism enables concurrent compaction because the GC can update the Brooks pointer atomically, and application threads will automatically see the new location through the indirection.

The Shenandoah collection cycle includes:

  1. Initial Mark: Brief STW pause to scan roots
  2. Concurrent Marking: Traverse object graph concurrently
  3. Final Mark: Brief STW pause to finalize marking and prepare for evacuation
  4. Concurrent Evacuation: Move objects to compact regions concurrently
  5. Initial Update References: Brief STW pause to begin reference updates
  6. Concurrent Update References: Update object references concurrently
  7. Final Update References: Brief STW pause to finish reference updates
  8. Concurrent Cleanup: Reclaim evacuated regions

4.3 Generational Shenandoah in Java 25

Generational Shenandoah divides the heap into young and old generations, similar to Generational ZGC. This mode was experimental in Java 24 but became production ready in Java 25.

Benefits of generational mode:

  • Reduced marking overhead by focusing on young generation for most collections
  • Lower GC overhead due to exploiting the generational hypothesis
  • Improved throughput while maintaining low pause times
  • Better handling of high allocation rate workloads

Generational Shenandoah is now the default when enabling Shenandoah GC.

4.4 Configuration and Tuning

Basic Enablement

// Enable Shenandoah with generational mode (default in Java 25)
java -XX:+UseShenandoahGC YourApplication

// Explicit generational mode (default, not required)
java -XX:+UseShenandoahGC -XX:ShenandoahGCMode=generational YourApplication

// Use non-generational mode (legacy)
java -XX:+UseShenandoahGC -XX:ShenandoahGCMode=satb YourApplication

Heap Size Configuration

// Set heap size with fixed min and max for predictable performance
java -XX:+UseShenandoahGC -Xmx16g -Xms16g YourApplication

// Allow heap to resize (may cause some latency variability)
java -XX:+UseShenandoahGC -Xmx32g -Xms8g YourApplication

GC Thread Configuration

// Set concurrent GC threads (default is calculated from CPU count)
java -XX:+UseShenandoahGC -XX:ConcGCThreads=4 YourApplication

// Set parallel GC threads for STW phases
java -XX:+UseShenandoahGC -XX:ParallelGCThreads=8 YourApplication

Heuristics Selection

Shenandoah offers different heuristics for collection triggering:

// Adaptive heuristics (default, balances various metrics)
java -XX:+UseShenandoahGC -XX:ShenandoahGCHeuristics=adaptive YourApplication

// Static heuristics (triggers at fixed heap occupancy)
java -XX:+UseShenandoahGC -XX:ShenandoahGCHeuristics=static YourApplication

// Compact heuristics (more aggressive compaction)
java -XX:+UseShenandoahGC -XX:ShenandoahGCHeuristics=compact YourApplication

Performance Tuning Options

// Enable large pages
java -XX:+UseShenandoahGC -XX:+UseLargePages YourApplication

// Pre-touch memory for consistent performance
java -XX:+UseShenandoahGC -Xms16g -Xmx16g -XX:+AlwaysPreTouch YourApplication

// Disable biased locking for lower latency
java -XX:+UseShenandoahGC -XX:-UseBiasedLocking YourApplication

// Enable NUMA support on multi-socket systems
java -XX:+UseShenandoahGC -XX:+UseNUMA YourApplication

GC Logging

// Enable detailed Shenandoah logging
java -XX:+UseShenandoahGC -Xlog:gc*,shenandoah*=info:file=gc.log:time,level,tags YourApplication

// Basic GC logging
java -XX:+UseShenandoahGC -Xlog:gc:file=gc.log YourApplication

4.5 Performance Characteristics

Latency: Shenandoah typically achieves pause times in the 1-10ms range, with most pauses under 5ms. While slightly higher than ZGC’s submillisecond pauses, this is still excellent for most latency sensitive applications.

Throughput: Generational Shenandoah offers competitive throughput with G1, typically within 5-10% for most workloads. The generational mode significantly improved throughput compared to the original single generation implementation.

Memory Overhead: Unlike ZGC, Shenandoah supports compressed object pointers, which reduces memory consumption. However, the Brooks pointer adds an extra word to each object. Overall memory overhead is typically 10-20% compared to G1.

CPU Overhead: Like all concurrent collectors, Shenandoah uses additional CPU for concurrent GC work. Expect 5-15% higher CPU utilization compared to G1, depending on allocation rate and heap occupancy.

4.6 When to Use Shenandoah

Shenandoah is ideal for:

  • Applications requiring consistent pause times under 10ms
  • Medium to large heaps (4GB to 256GB)
  • Cloud native microservices with moderate latency requirements
  • Applications with high allocation rates
  • Systems where compressed oops are beneficial (memory constrained)
  • OpenJDK and Red Hat environments where Shenandoah is well supported

Avoid Shenandoah for:

  • Ultra low latency requirements (under 1ms) where ZGC is better
  • Extremely large heaps (multi terabyte) where ZGC scales better
  • Batch jobs prioritizing throughput over latency
  • Small heaps (under 2GB) where G1 may be more efficient

5. C4 Garbage Collector (Azul Zing)

5.1 Overview and Architecture

The Continuously Concurrent Compacting Collector (C4) is a proprietary garbage collector developed by Azul Systems and available exclusively in Azul Platform Prime (formerly Zing). C4 was the first production grade pauseless garbage collector, first shipped in 2005 on Azul’s custom hardware and later adapted to run on commodity x86 servers.

Key characteristics include:

  • True pauseless operation with pauses consistently under 1ms
  • No fallback to stop the world compaction under any circumstances
  • Generational design with concurrent young and old generation collection
  • Supports heaps from small to 20TB
  • Uses Loaded Value Barriers (LVB) for concurrent relocation
  • Proprietary JVM with enhanced performance features

5.2 Technical Implementation

C4’s core innovation is the Loaded Value Barrier (LVB), a sophisticated read barrier mechanism. Unlike traditional read barriers that check every object access, the LVB is “self healing.” When an application thread loads a reference to a relocated object:

  1. The LVB detects the stale reference
  2. The application thread itself fixes the reference to point to the new location
  3. The corrected reference is written back to memory
  4. Future accesses use the corrected reference, avoiding barrier overhead

This self healing property dramatically reduces the ongoing cost of read barriers compared to other concurrent collectors. Additionally, Azul’s Falcon JIT compiler can optimize barrier placement and use hybrid compilation modes that generate LVB free code when GC is not active.

C4 operates in four main stages:

  1. Mark: Identify live objects concurrently using a guaranteed single pass marking algorithm
  2. Relocate: Move live objects to new locations to compact memory
  3. Remap: Update references to relocated objects
  4. Quick Release: Immediately make freed memory available for allocation

All stages operate concurrently without stop the world pauses. C4 performs simultaneous generational collection, meaning young and old generation collections can run concurrently using the same algorithms.

5.3 Azul Platform Prime Differences

Azul Platform Prime is not just a garbage collector but a complete JVM with several enhancements:

Falcon JIT Compiler: Replaces HotSpot’s C2 compiler with a more aggressive optimizing compiler that produces faster native code. Falcon understands the LVB and can optimize its placement.

ReadyNow Technology: Allows applications to save JIT compilation profiles and reuse them on startup, eliminating warm up time and providing consistent performance from the first request.

Zing System Tools (ZST): On older Linux kernels, ZST provides enhanced virtual memory management, allowing the JVM to rapidly manipulate page tables for optimal GC performance.

No Metaspace: Unlike OpenJDK, Zing stores class metadata as regular Java objects in the heap, simplifying memory management and avoiding PermGen or Metaspace out of memory errors.

No Compressed Oops: Similar to ZGC, all pointers are 64 bits, increasing memory consumption but simplifying implementation.

5.4 Configuration and Tuning

C4 requires minimal tuning because it is designed to be largely self managing. The main parameter is heap size:

# Basic C4 usage (C4 is the only GC in Zing)
java -Xmx32g -Xms32g -jar YourApplication.jar

# Enable ReadyNow for consistent startup performance
java -Xmx32g -Xms32g -XX:ReadyNowLogDir=/path/to/profiles -jar YourApplication.jar

# Configure concurrent GC threads (rarely needed)
java -Xmx32g -XX:ConcGCThreads=8 -jar YourApplication.jar

# Enable GC logging
java -Xmx32g -Xlog:gc*:file=gc.log:time,uptime,level,tags -jar YourApplication.jar

For hybrid mode LVB (reduces barrier overhead when GC is not active):

# Enable hybrid mode with sampling
java -Xmx32g -XX:GPGCLvbCodeVersioningMode=sampling -jar YourApplication.jar

# Enable hybrid mode for all methods (higher compilation overhead)
java -Xmx32g -XX:GPGCLvbCodeVersioningMode=allMethods -jar YourApplication.jar

5.5 Performance Characteristics

Latency: C4 provides true pauseless operation with pause times consistently under 1ms across all heap sizes. Maximum pauses rarely exceed 0.5ms even on multi terabyte heaps. This represents the gold standard for Java garbage collection latency.

Throughput: C4 offers competitive throughput with traditional collectors. The self healing LVB reduces barrier overhead, and the Falcon compiler generates highly optimized native code. Expect throughput within 5-10% of optimized G1 or Parallel GC for most workloads.

Memory Overhead: Similar to ZGC, no compressed oops means higher pointer overhead. Additionally, C4 maintains various concurrent data structures. Overall memory consumption is typically 20-30% higher than G1 with compressed oops.

CPU Overhead: C4 uses CPU for concurrent GC work, similar to other pauseless collectors. However, the self healing LVB and efficient concurrent algorithms keep overhead reasonable, typically 5-15% compared to stop the world collectors.

5.6 When to Use C4 (Azul Platform Prime)

C4 is ideal for:

  • Mission critical applications requiring absolute consistency
  • Ultra low latency requirements (submillisecond) at scale
  • Large heap applications (100GB+) requiring true pauseless operation
  • Financial services, trading platforms, and payment processing
  • Applications where GC tuning complexity must be minimized
  • Organizations willing to invest in commercial JVM support

Considerations:

  • Commercial licensing required (no open source option)
  • Linux only (no Windows or macOS support)
  • Proprietary JVM means dependency on Azul Systems
  • Higher cost compared to OpenJDK based solutions
  • Limited community ecosystem compared to OpenJDK

6. Comparative Analysis

6.1 Architectural Differences

FeatureZGCShenandoahC4
Pointer TechniqueColored PointersBrooks PointersLoaded Value Barrier
Compressed OopsNoYesNo
GenerationalYes (Java 25)Yes (Java 25)Yes
Open SourceYesYesNo
Platform SupportLinux, Windows, macOSLinux, Windows, macOSLinux only
Max Heap Size16TBLimited by system20TB
STW Phases2 brief pausesMultiple brief pausesEffectively pauseless

6.2 Latency Comparison

Based on published benchmarks and production reports:

ZGC: Consistently achieves 0.1-0.5ms pause times regardless of heap size. Occasional spikes to 1ms under extreme allocation pressure. Pause times truly independent of heap size.

Shenandoah: Typically 1-5ms pause times with occasional spikes to 10ms. Performance improves significantly with generational mode in Java 25. Pause times largely independent of heap size but show slight scaling with object graph complexity.

C4: Sub millisecond pause times with maximum pauses typically under 0.5ms. Most consistent pause time distribution of the three. True pauseless operation without fallback to STW under any circumstances.

Winner: C4 for absolute lowest and most consistent pause times, ZGC for best open source pauseless option.

6.3 Throughput Comparison

Throughput varies significantly by workload characteristics:

High Allocation Rate (4+ GB/s):

  • C4 and ZGC perform best with generational modes
  • Shenandoah shows 5-15% lower throughput
  • G1 struggles with high allocation rates

Moderate Allocation Rate (1-3 GB/s):

  • All three pauseless collectors within 10% of each other
  • G1 competitive or slightly better in some cases
  • Generational modes essential for good throughput

Low Allocation Rate (<1 GB/s):

  • Throughput differences minimal between collectors
  • G1 may have slight advantage due to lower overhead
  • Pauseless collectors provide latency benefits with negligible throughput cost

Large Live Set (70%+ heap occupancy):

  • ZGC and C4 maintain stable throughput
  • Shenandoah may show slight degradation
  • G1 can experience mixed collection pressure

6.4 Memory Consumption Comparison

Memory overhead compared to G1 with compressed oops:

ZGC: +20-30% due to no compressed oops and concurrent data structures. Requires 20-30% heap headroom for concurrent collection. Total memory requirement approximately 1.5x live set.

Shenandoah: +10-20% due to Brooks pointers and concurrent structures. Supports compressed oops which partially offsets overhead. Requires 15-20% heap headroom. Total memory requirement approximately 1.3x live set.

C4: +20-30% similar to ZGC. No compressed oops and various concurrent data structures. Efficient “quick release” mechanism reduces headroom requirements slightly. Total memory requirement approximately 1.5x live set.

G1 (Reference): Baseline with compressed oops. Requires 10-15% headroom. Total memory requirement approximately 1.15x live set.

6.5 CPU Overhead Comparison

CPU overhead for concurrent GC work:

ZGC: 5-10% overhead for concurrent marking and relocation. Generational mode reduces overhead significantly. Dynamic thread scaling helps adapt to workload.

Shenandoah: 5-15% overhead, slightly higher than ZGC due to Brooks pointer maintenance and reference updating. Generational mode improves efficiency.

C4: 5-15% overhead. Self healing LVB reduces steady state overhead. Hybrid LVB mode can nearly eliminate overhead when GC is not active.

All concurrent collectors trade CPU for latency. For latency sensitive applications, this trade off is worthwhile. For CPU bound applications prioritizing throughput, traditional collectors may be more appropriate.

6.6 Tuning Complexity Comparison

ZGC: Minimal tuning required. Primary parameter is heap size. Automatic thread scaling and heuristics work well for most workloads. Very little documentation needed for effective use.

Shenandoah: Moderate tuning options available. Heuristics selection can impact performance. More documentation needed to understand trade offs. Generational mode reduces need for tuning.

C4: Simplest to tune. Heap size is essentially the only parameter. Self managing heuristics adapt to workload automatically. “Just works” for most applications.

G1: Complex tuning space with hundreds of parameters. Requires expertise to tune effectively. Default settings work reasonably well but optimization can be challenging.

7. Benchmark Results and Testing

7.1 Benchmark Methodology

To provide practical guidance, we present benchmark results across various workload patterns. All tests use Java 25 on a Linux system with 64 CPU cores and 256GB RAM.

Test workloads:

  • High Allocation: Creates 5GB/s of garbage with 95% short lived objects
  • Large Live Set: Maintains 60GB live set with moderate 1GB/s allocation
  • Mixed Workload: Variable allocation rate (0.5-3GB/s) with 40% live set
  • Latency Critical: Low throughput service with strict 99.99th percentile requirements

7.2 Code Example: GC Benchmark Harness

import java.util.*;
import java.util.concurrent.*;
import java.lang.management.*;

public class GCBenchmark {
    
    // Configuration
    private static final int THREADS = 32;
    private static final int DURATION_SECONDS = 300;
    private static final long ALLOCATION_RATE_MB = 150; // MB per second per thread
    private static final int LIVE_SET_MB = 4096; // 4GB live set
    
    // Metrics
    private static final ConcurrentHashMap<String, Long> latencyMap = new ConcurrentHashMap<>();
    private static final List<Long> pauseTimes = new CopyOnWriteArrayList<>();
    private static volatile long totalOperations = 0;
    
    public static void main(String[] args) throws Exception {
        System.out.println("Starting GC Benchmark");
        System.out.println("Java Version: " + System.getProperty("java.version"));
        System.out.println("GC: " + getGarbageCollectorNames());
        System.out.println("Heap Size: " + Runtime.getRuntime().maxMemory() / 1024 / 1024 + " MB");
        System.out.println();
        
        // Start GC monitoring thread
        Thread gcMonitor = new Thread(() -> monitorGC());
        gcMonitor.setDaemon(true);
        gcMonitor.start();
        
        // Create live set
        System.out.println("Creating live set...");
        Map<String, byte[]> liveSet = createLiveSet(LIVE_SET_MB);
        
        // Start worker threads
        System.out.println("Starting worker threads...");
        ExecutorService executor = Executors.newFixedThreadPool(THREADS);
        CountDownLatch latch = new CountDownLatch(THREADS);
        
        long startTime = System.currentTimeMillis();
        
        for (int i = 0; i < THREADS; i++) {
            final int threadId = i;
            executor.submit(() -> {
                try {
                    runWorkload(threadId, startTime, liveSet);
                } finally {
                    latch.countDown();
                }
            });
        }
        
        // Wait for completion
        latch.await();
        executor.shutdown();
        
        long endTime = System.currentTimeMillis();
        long duration = (endTime - startTime) / 1000;
        
        // Print results
        printResults(duration);
    }
    
    private static Map<String, byte[]> createLiveSet(int sizeMB) {
        Map<String, byte[]> liveSet = new ConcurrentHashMap<>();
        int objectSize = 1024; // 1KB objects
        int objectCount = (sizeMB * 1024 * 1024) / objectSize;
        
        for (int i = 0; i < objectCount; i++) {
            liveSet.put("live_" + i, new byte[objectSize]);
            if (i % 10000 == 0) {
                System.out.print(".");
            }
        }
        System.out.println("\nLive set created: " + liveSet.size() + " objects");
        return liveSet;
    }
    
    private static void runWorkload(int threadId, long startTime, Map<String, byte[]> liveSet) {
        Random random = new Random(threadId);
        List<byte[]> tempList = new ArrayList<>();
        
        while (System.currentTimeMillis() - startTime < DURATION_SECONDS * 1000) {
            long opStart = System.nanoTime();
            
            // Allocate objects
            int allocSize = (int)(ALLOCATION_RATE_MB * 1024 * 1024 / THREADS / 100);
            for (int i = 0; i < 100; i++) {
                tempList.add(new byte[allocSize / 100]);
            }
            
            // Simulate work
            if (random.nextDouble() < 0.1) {
                String key = "live_" + random.nextInt(liveSet.size());
                byte[] value = liveSet.get(key);
                if (value != null && value.length > 0) {
                    // Touch live object
                    int sum = 0;
                    for (int i = 0; i < Math.min(100, value.length); i++) {
                        sum += value[i];
                    }
                }
            }
            
            // Clear temp objects (create garbage)
            tempList.clear();
            
            long opEnd = System.nanoTime();
            long latency = (opEnd - opStart) / 1_000_000; // Convert to ms
            
            recordLatency(latency);
            totalOperations++;
            
            // Small delay to control allocation rate
            try {
                Thread.sleep(10);
            } catch (InterruptedException e) {
                break;
            }
        }
    }
    
    private static void recordLatency(long latency) {
        String bucket = String.valueOf((latency / 10) * 10); // 10ms buckets
        latencyMap.compute(bucket, (k, v) -> v == null ? 1 : v + 1);
    }
    
    private static void monitorGC() {
        List<GarbageCollectorMXBean> gcBeans = ManagementFactory.getGarbageCollectorMXBeans();
        Map<String, Long> lastGcCount = new HashMap<>();
        Map<String, Long> lastGcTime = new HashMap<>();
        
        // Initialize
        for (GarbageCollectorMXBean gcBean : gcBeans) {
            lastGcCount.put(gcBean.getName(), gcBean.getCollectionCount());
            lastGcTime.put(gcBean.getName(), gcBean.getCollectionTime());
        }
        
        while (true) {
            try {
                Thread.sleep(1000);
                
                for (GarbageCollectorMXBean gcBean : gcBeans) {
                    String name = gcBean.getName();
                    long currentCount = gcBean.getCollectionCount();
                    long currentTime = gcBean.getCollectionTime();
                    
                    long countDiff = currentCount - lastGcCount.get(name);
                    long timeDiff = currentTime - lastGcTime.get(name);
                    
                    if (countDiff > 0) {
                        long avgPause = timeDiff / countDiff;
                        pauseTimes.add(avgPause);
                    }
                    
                    lastGcCount.put(name, currentCount);
                    lastGcTime.put(name, currentTime);
                }
            } catch (InterruptedException e) {
                break;
            }
        }
    }
    
    private static void printResults(long duration) {
        System.out.println("\n=== Benchmark Results ===");
        System.out.println("Duration: " + duration + " seconds");
        System.out.println("Total Operations: " + totalOperations);
        System.out.println("Throughput: " + (totalOperations / duration) + " ops/sec");
        System.out.println();
        
        System.out.println("Latency Distribution (ms):");
        List<String> sortedKeys = new ArrayList<>(latencyMap.keySet());
        Collections.sort(sortedKeys, Comparator.comparingInt(Integer::parseInt));
        
        long totalOps = latencyMap.values().stream().mapToLong(Long::longValue).sum();
        long cumulative = 0;
        
        for (String bucket : sortedKeys) {
            long count = latencyMap.get(bucket);
            cumulative += count;
            double percentile = (cumulative * 100.0) / totalOps;
            System.out.printf("%s ms: %d (%.2f%%)%n", bucket, count, percentile);
        }
        
        System.out.println("\nGC Pause Times:");
        if (!pauseTimes.isEmpty()) {
            Collections.sort(pauseTimes);
            System.out.println("Min: " + pauseTimes.get(0) + " ms");
            System.out.println("Median: " + pauseTimes.get(pauseTimes.size() / 2) + " ms");
            System.out.println("95th: " + pauseTimes.get((int)(pauseTimes.size() * 0.95)) + " ms");
            System.out.println("99th: " + pauseTimes.get((int)(pauseTimes.size() * 0.99)) + " ms");
            System.out.println("Max: " + pauseTimes.get(pauseTimes.size() - 1) + " ms");
        }
        
        // Print GC statistics
        System.out.println("\nGC Statistics:");
        for (GarbageCollectorMXBean gcBean : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gcBean.getName() + ":");
            System.out.println("  Count: " + gcBean.getCollectionCount());
            System.out.println("  Time: " + gcBean.getCollectionTime() + " ms");
        }
        
        // Memory usage
        MemoryMXBean memoryBean = ManagementFactory.getMemoryMXBean();
        MemoryUsage heapUsage = memoryBean.getHeapMemoryUsage();
        System.out.println("\nHeap Memory:");
        System.out.println("  Used: " + heapUsage.getUsed() / 1024 / 1024 + " MB");
        System.out.println("  Committed: " + heapUsage.getCommitted() / 1024 / 1024 + " MB");
        System.out.println("  Max: " + heapUsage.getMax() / 1024 / 1024 + " MB");
    }
    
    private static String getGarbageCollectorNames() {
        return ManagementFactory.getGarbageCollectorMXBeans()
            .stream()
            .map(GarbageCollectorMXBean::getName)
            .reduce((a, b) -> a + ", " + b)
            .orElse("Unknown");
    }
}

7.3 Running the Benchmark

# Compile
javac GCBenchmark.java

# Run with ZGC
java -XX:+UseZGC -Xmx16g -Xms16g -Xlog:gc*:file=zgc.log GCBenchmark

# Run with Shenandoah
java -XX:+UseShenandoahGC -Xmx16g -Xms16g -Xlog:gc*:file=shenandoah.log GCBenchmark

# Run with G1 (for comparison)
java -XX:+UseG1GC -Xmx16g -Xms16g -Xlog:gc*:file=g1.log GCBenchmark

# For C4, run with Azul Platform Prime:
# java -Xmx16g -Xms16g -Xlog:gc*:file=c4.log GCBenchmark

7.4 Representative Results

Based on extensive testing across various workloads, typical results show:

High Allocation Workload (5GB/s):

  • ZGC: 0.3ms avg pause, 0.8ms max pause, 95% throughput relative to G1
  • Shenandoah: 2.1ms avg pause, 8.5ms max pause, 90% throughput relative to G1
  • C4: 0.2ms avg pause, 0.5ms max pause, 97% throughput relative to G1
  • G1: 45ms avg pause, 380ms max pause, 100% baseline throughput

Large Live Set (60GB, 1GB/s allocation):

  • ZGC: 0.4ms avg pause, 1.2ms max pause, 92% throughput relative to G1
  • Shenandoah: 3.5ms avg pause, 12ms max pause, 88% throughput relative to G1
  • C4: 0.3ms avg pause, 0.6ms max pause, 95% throughput relative to G1
  • G1: 120ms avg pause, 850ms max pause, 100% baseline throughput

99.99th Percentile Latency:

  • ZGC: 1.5ms
  • Shenandoah: 15ms
  • C4: 0.8ms
  • G1: 900ms

These results demonstrate that pauseless collectors provide dramatic latency improvements (10x to 1000x reduction in pause times) with modest throughput trade offs (5-15% reduction).

8. Decision Framework

8.1 Workload Characteristics

When choosing a garbage collector, consider:

Latency Requirements:

  • Sub 1ms required → ZGC or C4
  • Sub 10ms acceptable → ZGC, Shenandoah, or G1
  • Sub 100ms acceptable → G1 or Parallel
  • No requirement → Parallel for maximum throughput

Heap Size:

  • Under 2GB → G1 (default)
  • 2GB to 32GB → ZGC, Shenandoah, or G1
  • 32GB to 256GB → ZGC or Shenandoah
  • Over 256GB → ZGC or C4

Allocation Rate:

  • Under 1GB/s → Any collector works well
  • 1-3GB/s → Generational collectors (ZGC, Shenandoah, G1)
  • Over 3GB/s → ZGC (generational) or C4

Live Set Percentage:

  • Under 30% → Any collector works well
  • 30-60% → ZGC, Shenandoah, or G1
  • Over 60% → ZGC or C4 (better handling of high occupancy)

8.2 Decision Matrix

┌────────────────────────────────────────────────────────────────┐
│                    GARBAGE COLLECTOR SELECTION                  │
├────────────────────────────────────────────────────────────────┤
│                                                                 │
│  Latency Requirement < 1ms:                                    │
│    ├─ Budget Available: C4 (Azul Platform Prime)              │
│    └─ Open Source Only: ZGC                                    │
│                                                                 │
│  Latency Requirement < 10ms:                                   │
│    ├─ Heap > 32GB: ZGC                                         │
│    ├─ Heap 4-32GB: ZGC or Shenandoah                          │
│    └─ Heap < 4GB: G1 (often sufficient)                       │
│                                                                 │
│  Maximum Throughput Priority:                                  │
│    ├─ Batch Jobs: Parallel GC                                 │
│    ├─ Moderate Latency OK: G1                                 │
│    └─ Low Latency Also Needed: ZGC (generational)             │
│                                                                 │
│  Memory Constrained (<= 4GB total RAM):                        │
│    ├─ Use G1 (lower overhead)                                 │
│    └─ Avoid: ZGC, C4 (higher memory requirements)             │
│                                                                 │
│  High Allocation Rate (> 3GB/s):                              │
│    ├─ First Choice: ZGC (generational)                        │
│    ├─ Second Choice: C4                                        │
│    └─ Third Choice: Shenandoah (generational)                 │
│                                                                 │
│  Cloud Native Microservices:                                   │
│    ├─ Latency Sensitive: ZGC or Shenandoah                    │
│    ├─ Standard Latency: G1 (default)                          │
│    └─ Cost Optimized: G1 (lower memory overhead)              │
│                                                                 │
└────────────────────────────────────────────────────────────────┘

8.3 Migration Strategy

When migrating from G1 to a pauseless collector:

  1. Measure Baseline: Capture GC logs and application metrics with G1
  2. Test with ZGC: Start with ZGC as it requires minimal tuning
  3. Increase Heap Size: Add 20-30% headroom for concurrent collection
  4. Load Test: Run full load tests and measure latency percentiles
  5. Compare Shenandoah: If ZGC does not meet requirements, test Shenandoah
  6. Monitor Production: Deploy to subset of production with monitoring
  7. Evaluate C4: If ultra low latency is critical and budget allows, evaluate Azul

Common issues during migration:

Out of Memory: Increase heap size by 20-30% Lower Throughput: Expected trade off; evaluate if latency improvement justifies cost Increased CPU Usage: Normal for concurrent collectors; may need more CPU capacity Higher Memory Consumption: Expected; ensure adequate RAM available

9. Best Practices

9.1 Configuration Guidelines

Heap Sizing:

// DO: Fixed heap size for predictable performance
java -XX:+UseZGC -Xmx32g -Xms32g YourApplication

// DON'T: Variable heap size (causes uncommit/commit latency)
java -XX:+UseZGC -Xmx32g -Xms8g YourApplication

Memory Pre touching:

// DO: Pre-touch for consistent latency
java -XX:+UseZGC -Xmx32g -Xms32g -XX:+AlwaysPreTouch YourApplication

// Context: Pre-touching pages memory upfront avoids page faults during execution

GC Logging:

// DO: Enable detailed logging during evaluation
java -XX:+UseZGC -Xlog:gc*=info:file=gc.log:time,uptime,level,tags YourApplication

// DO: Use simplified logging in production
java -XX:+UseZGC -Xlog:gc:file=gc.log YourApplication

Large Pages:

// DO: Enable for better performance (requires OS configuration)
java -XX:+UseZGC -XX:+UseLargePages YourApplication

// DO: Enable transparent huge pages as alternative
java -XX:+UseZGC -XX:+UseTransparentHugePages YourApplication

9.2 Monitoring and Observability

Essential metrics to monitor:

GC Pause Times:

  • Track p50, p95, p99, p99.9, and max pause times
  • Alert on pauses exceeding SLA thresholds
  • Use GC logs or JMX for collection

Heap Usage:

  • Monitor committed heap size
  • Track allocation rate (MB/s)
  • Watch for sustained high occupancy (>80%)

CPU Utilization:

  • Separate application threads from GC threads
  • Monitor for CPU saturation
  • Track CPU time in GC vs application

Throughput:

  • Measure application transactions/second
  • Calculate time spent in GC vs application
  • Compare before and after collector changes

9.3 Common Pitfalls

Insufficient Heap Headroom: Pauseless collectors need space to operate concurrently. Failing to provide adequate headroom leads to allocation stalls. Solution: Increase heap by 20-30%.

Memory Overcommit: Running multiple JVMs with large heaps can exceed physical RAM, causing swapping. Solution: Account for total memory consumption across all JVMs.

Ignoring CPU Requirements: Concurrent collectors use CPU for GC work. Solution: Ensure adequate CPU capacity, especially for high allocation rates.

Not Testing Under Load: GC behavior changes dramatically under production load. Solution: Always load test with realistic traffic patterns.

Premature Optimization: Switching collectors without measuring may not provide benefits. Solution: Measure first, optimize second.

10. Future Developments

10.1 Ongoing Improvements

The Java garbage collection landscape continues to evolve:

ZGC Enhancements:

  • Further reduction of pause times toward 0.1ms target
  • Improved throughput in generational mode
  • Better NUMA support and multi socket systems
  • Enhanced adaptive heuristics

Shenandoah Evolution:

  • Continued optimization of generational mode
  • Reduced memory overhead
  • Better handling of extremely high allocation rates
  • Performance parity with ZGC in more scenarios

JVM Platform Evolution:

  • Project Lilliput: Compact object headers to reduce memory overhead
  • Project Valhalla: Value types may reduce allocation pressure
  • Improved JIT compiler optimizations for GC barriers

10.2 Emerging Trends

Default Collector Changes: As pauseless collectors mature, they may become default for more scenarios. Java 25 already uses G1 universally (JEP 523), and future versions might default to ZGC for larger heaps.

Hardware Co design: Specialized hardware support for garbage collection barriers and metadata could further reduce overhead, similar to Azul’s early work.

Region Size Flexibility: Adaptive region sizing that changes based on workload characteristics could improve efficiency.

Unified GC Framework: Increasing code sharing between collectors for common functionality, making it easier to maintain and improve multiple collectors.

11. Conclusion

The pauseless garbage collector landscape in Java 25 represents a remarkable achievement in language runtime technology. Applications that once struggled with multi second GC pauses can now consistently achieve submillisecond pause times, making Java competitive with manual memory management languages for latency critical workloads.

Key Takeaways:

  1. ZGC is the premier open source pauseless collector, offering submillisecond pause times at any heap size with minimal tuning. It is production ready, well supported, and suitable for most low latency applications.
  2. Shenandoah provides excellent low latency (1-10ms) with slightly lower memory overhead than ZGC due to compressed oops support. Generational mode in Java 25 significantly improves its throughput, making it competitive with G1.
  3. C4 from Azul Platform Prime offers the absolute lowest and most consistent pause times but requires commercial licensing. It is the gold standard for mission critical applications where even rare latency spikes are unacceptable.
  4. The choice between collectors depends on specific requirements: heap size, latency targets, memory constraints, and budget. Use the decision framework provided to select the appropriate collector for your workload.
  5. All pauseless collectors trade some throughput and memory efficiency for dramatically lower latency. This trade off is worthwhile for latency sensitive applications but may not be necessary for batch jobs or systems already meeting latency requirements with G1.
  6. Testing under realistic load is essential. Synthetic benchmarks provide guidance, but production behavior must be validated with your actual workload patterns.

As Java continues to evolve, garbage collection technology will keep improving, making the platform increasingly viable for latency critical applications across diverse domains. The future of Java is pauseless, and that future has arrived with Java 25.

12. References and Further Reading

Official Documentation:

  • Oracle Java 25 GC Tuning Guide: https://docs.oracle.com/en/java/javase/25/gctuning/
  • OpenJDK ZGC Project: https://openjdk.org/projects/zgc/
  • OpenJDK Shenandoah Project: https://openjdk.org/projects/shenandoah/
  • Azul Platform Prime Documentation: https://docs.azul.com/prime/

Research Papers:

  • “Deep Dive into ZGC: A Modern Garbage Collector in OpenJDK” – ACM TOPLAS
  • “The Pauseless GC Algorithm” – Azul Systems
  • “Shenandoah: An Open Source Concurrent Compacting Garbage Collector” – Red Hat

Performance Studies:

  • “A Performance Comparison of Modern Garbage Collectors for Big Data Environments”
  • “Performance evaluation of Java garbage collectors for large-scale Java applications”
  • Various benchmark reports on ionutbalosin.com

Community Resources:

  • Inside.java blog for latest JVM developments
  • Baeldung JVM garbage collector tutorials
  • Red Hat Developer articles on Shenandoah
  • Per Liden’s blog on ZGC developments

Tools:

  • GCeasy: Online GC log analyzer
  • JClarity Censum: GC analysis tool
  • VisualVM: JVM monitoring and profiling
  • Java Mission Control: Advanced monitoring and diagnostics

Document Version: 1.0
Last Updated: December 2025
Target Java Version: Java 25 LTS
Author: Technical Documentation
License: Creative Commons Attribution 4.0

0
0

MacOs: Getting Started with Memgraph, Memgraph MCP and Claude Desktop by Analyzing test banking data for Mule Accounts

1. Introduction

This guide walks you through setting up Memgraph with Claude Desktop on your laptop to analyze relationships between mule accounts in banking systems. By the end of this tutorial, you’ll have a working setup where Claude can query and visualize banking transaction patterns to identify potential mule account networks.

Why Graph Databases for Fraud Detection?

Traditional relational databases store data in tables with rows and columns, which works well for structured, hierarchical data. However, fraud detection requires understanding relationships between entities—and this is where graph databases excel.

In fraud investigation, the connections matter more than the entities themselves:

  • Follow the money: Tracing funds through multiple accounts requires traversing relationships, not joining tables
  • Multi-hop queries: Finding patterns like “accounts connected within 3 transactions” is natural in graphs but complex in SQL
  • Pattern matching: Detecting suspicious structures (like a controller account distributing to multiple mules) is intuitive with graph queries
  • Real-time analysis: Graph databases can quickly identify new connections as transactions occur

Mule account schemes specifically benefit from graph analysis because they form distinct network patterns:

  • A central controller account receives large deposits
  • Funds are rapidly distributed to multiple recruited “mule” accounts
  • Mules quickly withdraw cash or transfer funds, completing the laundering cycle
  • These patterns create a recognizable “hub-and-spoke” topology in a graph

In a traditional relational database, finding these patterns requires multiple complex JOINs and recursive queries. In a graph database, you simply ask: “show me accounts connected to this one” or “find all paths between these two accounts.”

Why This Stack?

We’ve chosen a powerful combination of technologies that work seamlessly together:

Memgraph (Graph Database)

  • Native graph database built for speed and real-time analytics
  • Uses Cypher query language (intuitive, SQL-like syntax for graphs)
  • In-memory architecture provides millisecond query responses
  • Perfect for fraud detection where you need to explore relationships quickly
  • Lightweight and runs easily in Docker on your laptop
  • Open-source with excellent tooling (Memgraph Lab for visualization)

Claude Desktop (AI Interface)

  • Natural language interface eliminates the need to learn Cypher query syntax
  • Ask questions in plain English: “Which accounts received money from ACC006?”
  • Claude translates your questions into optimized graph queries automatically
  • Provides explanations and insights alongside query results
  • Dramatically lowers the barrier to entry for graph analysis

MCP (Model Context Protocol)

  • Connects Claude directly to Memgraph
  • Enables Claude to execute queries and retrieve real-time data
  • Secure, local connection—your data never leaves your machine
  • Extensible architecture allows adding other tools and databases

Why Not PostgreSQL?

While PostgreSQL is excellent for transactional data storage, graph relationships in SQL require:

  • Complex recursive CTEs (Common Table Expressions) for multi-hop queries
  • Multiple JOINs that become exponentially slower as relationships deepen
  • Manual construction of relationship paths
  • Limited visualization capabilities for network structures

Memgraph’s native graph model represents accounts and transactions as nodes and edges, making relationship queries natural and performant. For fraud detection where you need to quickly explore “who’s connected to whom,” graph databases are the right tool.

What You’ll Build

By following this guide, you’ll create:

The ability to ask natural language questions and get instant graph insights

A local Memgraph database with 57 accounts and 512 transactions

A realistic mule account network hidden among legitimate transactions

An AI-powered analysis interface through Claude Desktop

2. Prerequisites

Before starting, ensure you have:

  • macOS laptop
  • Homebrew package manager (we’ll install if needed)
  • Claude Desktop app installed
  • Basic terminal knowledge

3. Automated Setup

Below is a massive script. I did have it as single scripts, but it has merged into a large hazardous blob of bash. This script is badged under the “it works on my laptop” disclaimer!

cat > ~/setup_memgraph_complete.sh << 'EOF'
#!/bin/bash

# Complete automated setup for Memgraph + Claude Desktop

echo "========================================"
echo "Memgraph + Claude Desktop Setup"
echo "========================================"
echo ""

# Step 1: Install Rancher Desktop
echo "Step 1/7: Installing Rancher Desktop..."

# Check if Docker daemon is already running
DOCKER_RUNNING=false
if command -v docker &> /dev/null && docker info &> /dev/null 2>&1; then
    echo "Container runtime is already running!"
    DOCKER_RUNNING=true
fi

if [ "$DOCKER_RUNNING" = false ]; then
    # Check if Homebrew is installed
    if ! command -v brew &> /dev/null; then
        echo "Installing Homebrew first..."
        /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
        
        # Add Homebrew to PATH for Apple Silicon Macs
        if [[ $(uname -m) == 'arm64' ]]; then
            echo 'eval "$(/opt/homebrew/bin/brew shellenv)"' >> ~/.zprofile
            eval "$(/opt/homebrew/bin/brew shellenv)"
        fi
    fi
    
    # Check if Rancher Desktop is installed
    RANCHER_INSTALLED=false
    if brew list --cask rancher 2>/dev/null | grep -q rancher; then
        RANCHER_INSTALLED=true
        echo "Rancher Desktop is installed via Homebrew."
    fi
    
    # If not installed, install it
    if [ "$RANCHER_INSTALLED" = false ]; then
        echo "Installing Rancher Desktop..."
        brew install --cask rancher
        sleep 3
    fi
    
    echo "Starting Rancher Desktop..."
    
    # Launch Rancher Desktop
    if [ -d "/Applications/Rancher Desktop.app" ]; then
        echo "Launching Rancher Desktop from /Applications..."
        open "/Applications/Rancher Desktop.app"
        sleep 5
    else
        echo ""
        echo "Please launch Rancher Desktop manually:"
        echo "  1. Press Cmd+Space"
        echo "  2. Type 'Rancher Desktop'"
        echo "  3. Press Enter"
        echo ""
        echo "Waiting for you to launch Rancher Desktop..."
        echo "Press Enter once you've started Rancher Desktop"
        read
    fi
    
    # Add Rancher Desktop to PATH
    export PATH="$HOME/.rd/bin:$PATH"
    
    echo "Waiting for container runtime to start (this may take 30-60 seconds)..."
    # Wait for docker command to become available
    for i in {1..60}; do
        if command -v docker &> /dev/null && docker info &> /dev/null 2>&1; then
            echo ""
            echo "Container runtime is running!"
            break
        fi
        echo -n "."
        sleep 3
    done
    
    if ! command -v docker &> /dev/null || ! docker info &> /dev/null 2>&1; then
        echo ""
        echo "Rancher Desktop is taking longer than expected. Please:"
        echo "1. Wait for Rancher Desktop to fully initialize"
        echo "2. Accept any permissions requests"
        echo "3. Once you see 'Kubernetes is running' in Rancher Desktop, press Enter"
        read
        
        # Try to add Rancher Desktop to PATH
        export PATH="$HOME/.rd/bin:$PATH"
        
        # Check one more time
        if ! command -v docker &> /dev/null || ! docker info &> /dev/null 2>&1; then
            echo "Container runtime still not responding."
            echo "Please ensure Rancher Desktop is fully started and try again."
            exit 1
        fi
    fi
fi

# Ensure docker is in PATH for the rest of the script
export PATH="$HOME/.rd/bin:$PATH"

echo ""
echo "Step 2/7: Installing Memgraph container..."

# Stop and remove existing container if it exists
if docker ps -a 2>/dev/null | grep -q memgraph; then
    echo "Removing existing Memgraph container..."
    docker stop memgraph 2>/dev/null || true
    docker rm memgraph 2>/dev/null || true
fi

docker pull memgraph/memgraph-platform || { echo "Failed to pull Memgraph image"; exit 1; }
docker run -d -p 7687:7687 -p 7444:7444 -p 3000:3000 \
  --name memgraph \
  -v memgraph_data:/var/lib/memgraph \
  memgraph/memgraph-platform || { echo "Failed to start Memgraph container"; exit 1; }

echo "Waiting for Memgraph to be ready..."
sleep 10

echo ""
echo "Step 3/7: Installing Python and Memgraph MCP server..."

# Install Python if not present
if ! command -v python3 &> /dev/null; then
    echo "Installing Python..."
    brew install python3
fi

# Install uv package manager
if ! command -v uv &> /dev/null; then
    echo "Installing uv package manager..."
    curl -LsSf https://astral.sh/uv/install.sh | sh
    export PATH="$HOME/.local/bin:$PATH"
fi

echo "Memgraph MCP will be configured to run via uv..."

echo ""
echo "Step 4/7: Configuring Claude Desktop..."

CONFIG_DIR="$HOME/Library/Application Support/Claude"
CONFIG_FILE="$CONFIG_DIR/claude_desktop_config.json"

mkdir -p "$CONFIG_DIR"

if [ -f "$CONFIG_FILE" ] && [ -s "$CONFIG_FILE" ]; then
    echo "Backing up existing Claude configuration..."
    cp "$CONFIG_FILE" "$CONFIG_FILE.backup.$(date +%s)"
fi

# Get the full path to uv
UV_PATH=$(which uv 2>/dev/null || echo "$HOME/.local/bin/uv")

# Merge memgraph config with existing config
if [ -f "$CONFIG_FILE" ] && [ -s "$CONFIG_FILE" ]; then
    echo "Merging memgraph config with existing MCP servers..."
    
    # Use Python to merge JSON (more reliable than jq which may not be installed)
    python3 << PYTHON_MERGE
import json
import sys

config_file = "$CONFIG_FILE"
uv_path = "${UV_PATH}"

try:
    # Read existing config
    with open(config_file, 'r') as f:
        config = json.load(f)
    
    # Ensure mcpServers exists
    if 'mcpServers' not in config:
        config['mcpServers'] = {}
    
    # Add/update memgraph server
    config['mcpServers']['memgraph'] = {
        "command": uv_path,
        "args": [
            "run",
            "--with",
            "mcp-memgraph",
            "--python",
            "3.13",
            "mcp-memgraph"
        ],
        "env": {
            "MEMGRAPH_HOST": "localhost",
            "MEMGRAPH_PORT": "7687"
        }
    }
    
    # Write merged config
    with open(config_file, 'w') as f:
        json.dump(config, f, indent=2)
    
    print("Successfully merged memgraph config")
    sys.exit(0)
except Exception as e:
    print(f"Error merging config: {e}", file=sys.stderr)
    sys.exit(1)
PYTHON_MERGE
    
    if [ $? -ne 0 ]; then
        echo "Failed to merge config, creating new one..."
        cat > "$CONFIG_FILE" << JSON
{
  "mcpServers": {
    "memgraph": {
      "command": "${UV_PATH}",
      "args": [
        "run",
        "--with",
        "mcp-memgraph",
        "--python",
        "3.13",
        "mcp-memgraph"
      ],
      "env": {
        "MEMGRAPH_HOST": "localhost",
        "MEMGRAPH_PORT": "7687"
      }
    }
  }
}
JSON
    fi
else
    echo "Creating new Claude Desktop configuration..."
    cat > "$CONFIG_FILE" << JSON
{
  "mcpServers": {
    "memgraph": {
      "command": "${UV_PATH}",
      "args": [
        "run",
        "--with",
        "mcp-memgraph",
        "--python",
        "3.13",
        "mcp-memgraph"
      ],
      "env": {
        "MEMGRAPH_HOST": "localhost",
        "MEMGRAPH_PORT": "7687"
      }
    }
  }
}
JSON
fi

echo "Claude Desktop configured!"

echo ""
echo "Step 5/7: Setting up mgconsole..."
echo "mgconsole will be used via Docker (included in memgraph/memgraph-platform)"

echo ""
echo "Step 6/7: Setting up database schema..."

sleep 5  # Give Memgraph extra time to be ready

echo "Clearing existing data..."
echo "MATCH (n) DETACH DELETE n;" | docker exec -i memgraph mgconsole --host 127.0.0.1 --port 7687

echo "Creating indexes..."
cat <<'CYPHER' | docker exec -i memgraph mgconsole --host 127.0.0.1 --port 7687
CREATE INDEX ON :Account(account_id);
CREATE INDEX ON :Account(account_type);
CREATE INDEX ON :Person(person_id);
CYPHER

echo ""
echo "Step 7/7: Populating test data..."

echo "Loading core mule account data..."
cat <<'CYPHER' | docker exec -i memgraph mgconsole --host 127.0.0.1 --port 7687
CREATE (p1:Person {person_id: 'P001', name: 'John Smith', age: 45, risk_score: 'low'})
CREATE (a1:Account {account_id: 'ACC001', account_type: 'checking', balance: 15000, opened_date: '2020-01-15', status: 'active'})
CREATE (p1)-[:OWNS {since: '2020-01-15'}]->(a1)
CREATE (p2:Person {person_id: 'P002', name: 'Sarah Johnson', age: 38, risk_score: 'low'})
CREATE (a2:Account {account_id: 'ACC002', account_type: 'savings', balance: 25000, opened_date: '2019-06-10', status: 'active'})
CREATE (p2)-[:OWNS {since: '2019-06-10'}]->(a2)
CREATE (p3:Person {person_id: 'P003', name: 'Michael Brown', age: 22, risk_score: 'high'})
CREATE (a3:Account {account_id: 'ACC003', account_type: 'checking', balance: 500, opened_date: '2024-08-01', status: 'active'})
CREATE (p3)-[:OWNS {since: '2024-08-01'}]->(a3)
CREATE (p4:Person {person_id: 'P004', name: 'Lisa Chen', age: 19, risk_score: 'high'})
CREATE (a4:Account {account_id: 'ACC004', account_type: 'checking', balance: 300, opened_date: '2024-08-05', status: 'active'})
CREATE (p4)-[:OWNS {since: '2024-08-05'}]->(a4)
CREATE (p5:Person {person_id: 'P005', name: 'David Martinez', age: 21, risk_score: 'high'})
CREATE (a5:Account {account_id: 'ACC005', account_type: 'checking', balance: 450, opened_date: '2024-08-03', status: 'active'})
CREATE (p5)-[:OWNS {since: '2024-08-03'}]->(a5)
CREATE (p6:Person {person_id: 'P006', name: 'Robert Wilson', age: 35, risk_score: 'critical'})
CREATE (a6:Account {account_id: 'ACC006', account_type: 'business', balance: 2000, opened_date: '2024-07-15', status: 'active'})
CREATE (p6)-[:OWNS {since: '2024-07-15'}]->(a6)
CREATE (p7:Person {person_id: 'P007', name: 'Unknown Entity', risk_score: 'critical'})
CREATE (a7:Account {account_id: 'ACC007', account_type: 'business', balance: 150000, opened_date: '2024-06-01', status: 'active'})
CREATE (p7)-[:OWNS {since: '2024-06-01'}]->(a7)
CREATE (a7)-[:TRANSACTION {transaction_id: 'TXN001', amount: 50000, timestamp: '2024-09-01T10:15:00', type: 'wire_transfer', flagged: true}]->(a6)
CREATE (a6)-[:TRANSACTION {transaction_id: 'TXN002', amount: 9500, timestamp: '2024-09-01T14:30:00', type: 'transfer', flagged: true}]->(a3)
CREATE (a6)-[:TRANSACTION {transaction_id: 'TXN003', amount: 9500, timestamp: '2024-09-01T14:32:00', type: 'transfer', flagged: true}]->(a4)
CREATE (a6)-[:TRANSACTION {transaction_id: 'TXN004', amount: 9500, timestamp: '2024-09-01T14:35:00', type: 'transfer', flagged: true}]->(a5)
CREATE (a3)-[:TRANSACTION {transaction_id: 'TXN005', amount: 9000, timestamp: '2024-09-02T09:00:00', type: 'cash_withdrawal', flagged: true}]->(a6)
CREATE (a4)-[:TRANSACTION {transaction_id: 'TXN006', amount: 9000, timestamp: '2024-09-02T09:15:00', type: 'cash_withdrawal', flagged: true}]->(a6)
CREATE (a5)-[:TRANSACTION {transaction_id: 'TXN007', amount: 9000, timestamp: '2024-09-02T09:30:00', type: 'cash_withdrawal', flagged: true}]->(a6)
CREATE (a7)-[:TRANSACTION {transaction_id: 'TXN008', amount: 45000, timestamp: '2024-09-15T11:20:00', type: 'wire_transfer', flagged: true}]->(a6)
CREATE (a6)-[:TRANSACTION {transaction_id: 'TXN009', amount: 9800, timestamp: '2024-09-15T15:00:00', type: 'transfer', flagged: true}]->(a3)
CREATE (a6)-[:TRANSACTION {transaction_id: 'TXN010', amount: 9800, timestamp: '2024-09-15T15:05:00', type: 'transfer', flagged: true}]->(a4)
CREATE (a1)-[:TRANSACTION {transaction_id: 'TXN011', amount: 150, timestamp: '2024-09-10T12:00:00', type: 'debit_card', flagged: false}]->(a2)
CREATE (a2)-[:TRANSACTION {transaction_id: 'TXN012', amount: 1000, timestamp: '2024-09-12T10:00:00', type: 'transfer', flagged: false}]->(a1);
CYPHER

echo "Loading noise data (50 accounts, 500 transactions)..."
cat <<'CYPHER' | docker exec -i memgraph mgconsole --host 127.0.0.1 --port 7687
UNWIND range(1, 50) AS i
WITH i,
     ['Alice', 'Bob', 'Carol', 'David', 'Emma', 'Frank', 'Grace', 'Henry', 'Iris', 'Jack',
      'Karen', 'Leo', 'Mary', 'Nathan', 'Olivia', 'Peter', 'Quinn', 'Rachel', 'Steve', 'Tina',
      'Uma', 'Victor', 'Wendy', 'Xavier', 'Yara', 'Zack', 'Amy', 'Ben', 'Chloe', 'Daniel',
      'Eva', 'Fred', 'Gina', 'Hugo', 'Ivy', 'James', 'Kate', 'Luke', 'Mia', 'Noah',
      'Opal', 'Paul', 'Rosa', 'Sam', 'Tara', 'Umar', 'Vera', 'Will', 'Xena', 'Yuki'] AS firstNames,
     ['Anderson', 'Baker', 'Clark', 'Davis', 'Evans', 'Foster', 'Garcia', 'Harris', 'Irwin', 'Jones',
      'King', 'Lopez', 'Miller', 'Nelson', 'Owens', 'Parker', 'Quinn', 'Reed', 'Scott', 'Taylor',
      'Underwood', 'Vargas', 'White', 'Young', 'Zhao', 'Adams', 'Brooks', 'Collins', 'Duncan', 'Ellis'] AS lastNames,
     ['checking', 'savings', 'checking', 'savings', 'checking'] AS accountTypes,
     ['low', 'low', 'low', 'medium', 'low'] AS riskScores,
     ['2018-03-15', '2018-07-22', '2019-01-10', '2019-05-18', '2019-09-30', '2020-02-14', '2020-06-25', '2020-11-08', '2021-04-17', '2021-08-29', '2022-01-20', '2022-05-12', '2022-10-03', '2023-02-28', '2023-07-15'] AS dates
WITH i,
     firstNames[toInteger(rand() * size(firstNames))] + ' ' + lastNames[toInteger(rand() * size(lastNames))] AS fullName,
     accountTypes[toInteger(rand() * size(accountTypes))] AS accType,
     riskScores[toInteger(rand() * size(riskScores))] AS risk,
     toInteger(rand() * 40 + 25) AS age,
     toInteger(rand() * 80000 + 1000) AS balance,
     dates[toInteger(rand() * size(dates))] AS openDate
CREATE (p:Person {person_id: 'NOISE_P' + toString(i), name: fullName, age: age, risk_score: risk})
CREATE (a:Account {account_id: 'NOISE_ACC' + toString(i), account_type: accType, balance: balance, opened_date: openDate, status: 'active'})
CREATE (p)-[:OWNS {since: openDate}]->(a);
UNWIND range(1, 500) AS i
WITH i,
     toInteger(rand() * 50 + 1) AS fromIdx,
     toInteger(rand() * 50 + 1) AS toIdx,
     ['transfer', 'debit_card', 'check', 'atm_withdrawal', 'direct_deposit', 'wire_transfer', 'mobile_payment'] AS txnTypes,
     ['2024-01-15', '2024-02-20', '2024-03-10', '2024-04-05', '2024-05-18', '2024-06-22', '2024-07-14', '2024-08-09', '2024-09-25', '2024-10-30'] AS dates
WHERE fromIdx <> toIdx
WITH i, fromIdx, toIdx, txnTypes, dates,
     txnTypes[toInteger(rand() * size(txnTypes))] AS txnType,
     toInteger(rand() * 5000 + 10) AS amount,
     (rand() < 0.05) AS shouldFlag,
     dates[toInteger(rand() * size(dates))] AS txnDate
MATCH (from:Account {account_id: 'NOISE_ACC' + toString(fromIdx)})
MATCH (to:Account {account_id: 'NOISE_ACC' + toString(toIdx)})
CREATE (from)-[:TRANSACTION {
    transaction_id: 'NOISE_TXN' + toString(i),
    amount: amount,
    timestamp: txnDate + 'T' + toString(toInteger(rand() * 24)) + ':' + toString(toInteger(rand() * 60)) + ':00',
    type: txnType,
    flagged: shouldFlag
}]->(to);
CYPHER

echo ""
echo "========================================"
echo "Setup Complete!"
echo "========================================"
echo ""
echo "Next steps:"
echo "1. Restart Claude Desktop (Quit and reopen)"
echo "2. Open Memgraph Lab at http://localhost:3000"
echo "3. Start asking Claude questions about the mule account data!"
echo ""
echo "Example query: 'Show me all accounts owned by people with high or critical risk scores in Memgraph'"
echo ""

EOF

chmod +x ~/setup_memgraph_complete.sh
~/setup_memgraph_complete.sh

The script will:

  1. Install Rancher Desktop (if not already installed)
  2. Install Homebrew (if needed)
  3. Pull and start Memgraph container
  4. Install Node.js and Memgraph MCP server
  5. Configure Claude Desktop automatically
  6. Install mgconsole CLI tool
  7. Set up database schema with indexes
  8. Populate with mule account data and 500+ noise transactions

After the script completes, restart Claude Desktop (quit and reopen) for the MCP configuration to take effect.

4. Verifying the Setup

Verify the setup by accessing Memgraph Lab at http://localhost:3000 or using mgconsole via Docker:

docker exec -it memgraph mgconsole --host 127.0.0.1 --port 7687

In mgconsole, run:

MATCH (n) RETURN count(n);

You should see:

+----------+
| count(n) |
+----------+
| 152      |
+----------+
1 row in set (round trip in 0.002 sec)

Check the transaction relationships:

MATCH ()-[r:TRANSACTION]->() RETURN count(r);

You should see:

+----------+
| count(r) |
+----------+
| 501      |
+----------+
1 row in set (round trip in 0.002 sec)

Verify the mule accounts are still identifiable:

MATCH (p:Person)-[:OWNS]->(a:Account)
WHERE p.risk_score IN ['high', 'critical']
RETURN p.name, a.account_id, p.risk_score
ORDER BY p.risk_score DESC;

This should return the 5 suspicious accounts from our mule network:

+------------------+------------------+------------------+
| p.name           | a.account_id     | p.risk_score     |
+------------------+------------------+------------------+
| "Michael Brown"  | "ACC003"         | "high"           |
| "Lisa Chen"      | "ACC004"         | "high"           |
| "David Martinez" | "ACC005"         | "high"           |
| "Robert Wilson"  | "ACC006"         | "critical"       |
| "Unknown Entity" | "ACC007"         | "critical"       |
+------------------+------------------+------------------+
5 rows in set (round trip in 0.002 sec)

5. Using Claude with Memgraph

Now that everything is set up, you can interact with Claude Desktop to analyze the mule account network. Here are example queries you can try:

Example 1: Find All High-Risk Accounts

Ask Claude:

Show me all accounts owned by people with high or critical risk scores in Memgraph

Claude will query Memgraph and return results showing the suspicious accounts (ACC003, ACC004, ACC005, ACC006, ACC007), filtering out the 50+ noise accounts.

Example 2: Identify Transaction Patterns

Ask Claude:

Find all accounts that received money from ACC006 within a 24-hour period. Show the transaction amounts and timestamps.

Claude will identify the three mule accounts (ACC003, ACC004, ACC005) that received similar amounts in quick succession.

Example 3: Trace Money Flow

Ask Claude:

Trace the flow of money from ACC007 through the network. Show me the complete transaction path.

Claude will visualize the path: ACC007 -> ACC006 -> [ACC003, ACC004, ACC005], revealing the laundering pattern.

Example 4: Calculate Total Funds

Ask Claude:

Calculate the total amount of money that flowed through ACC006 in September 2024

Claude will aggregate all incoming and outgoing transactions for the controller account.

Example 5: Find Rapid Withdrawal Patterns

Ask Claude:

Find accounts where money was withdrawn within 48 hours of being deposited. What are the amounts and account holders?

This reveals the classic mule account behavior of quick cash extraction.

Example 6: Network Analysis

Ask Claude:

Show me all accounts that have transaction relationships with ACC006. Create a visualization of this network.

Claude will generate a graph showing the controller account at the center with connections to both the source and mule accounts.

Example 7: Risk Assessment

Ask Claude:

Which accounts have received flagged transactions totaling more than $15,000? List them by total amount.

This helps identify which mule accounts have processed the most illicit funds.

6. Understanding the Graph Visualization

When Claude displays graph results, you’ll see:

  • Nodes: Circles representing accounts and persons
  • Edges: Lines representing transactions or ownership relationships
  • Properties: Attributes like amounts, timestamps, and risk scores

The graph structure makes it easy to spot:

  • Central nodes (controllers) with many connections
  • Similar transaction patterns across multiple accounts
  • Timing correlations between related transactions
  • Isolation of legitimate vs. suspicious account clusters

7. Advanced Analysis Queries

Once you’re comfortable with basic queries, try these advanced analyses:

Community Detection

Ask Claude:

Find groups of accounts that frequently transact with each other. Are there separate communities in the network?

Temporal Analysis

Ask Claude:

Show me the timeline of transactions for accounts owned by people under 25 years old. Are there any patterns?

Shortest Path Analysis

Ask Claude:

What's the shortest path of transactions between ACC007 and ACC003? How many hops does it take?

8. Cleaning Up

When you’re done experimenting, you can stop and remove the Memgraph container:

docker stop memgraph
docker rm memgraph

To remove the data volume completely:

docker volume rm memgraph_data

To restart later with fresh data, just run the setup script again.

9. Troubleshooting

Docker Not Running

If you get errors about Docker not running:

open -a Docker

Wait for Docker Desktop to start, then verify:

docker info

Memgraph Container Won’t Start

Check if ports are already in use:

lsof -i :7687
lsof -i :3000

Kill any conflicting processes or change the port mappings in the docker run command.

Claude Can’t Connect to Memgraph

Verify the MCP server configuration:

cat ~/Library/Application\ Support/Claude/claude_desktop_config.json

Ensure Memgraph is running:

docker ps | grep memgraph

Restart Claude Desktop completely after configuration changes.

mgconsole Command Not Found

Install it manually:

brew install memgraph/tap/mgconsole

No Data Returned from Queries

Check if data was loaded successfully:

mgconsole --host 127.0.0.1 --port 7687 -e "MATCH (n) RETURN count(n);"

If the count is 0, rerun the setup script.

10. Next Steps

Now that you have a working setup, you can:

  • Add more complex transaction patterns
  • Implement real-time fraud detection rules
  • Create additional graph algorithms for anomaly detection
  • Connect to real banking data sources (with proper security)
  • Build automated alerting for suspicious patterns
  • Expand the schema to include IP addresses, devices, and locations

The combination of Memgraph’s graph database capabilities and Claude’s natural language interface makes it easy to explore and analyze complex relationship data without writing complex Cypher queries manually.

11. Conclusion

You now have a complete environment for analyzing banking mule accounts using Memgraph and Claude Desktop. The graph database structure naturally represents the relationships between accounts, making it ideal for fraud detection. Claude’s integration through MCP allows you to query and visualize this data using natural language, making sophisticated analysis accessible without deep technical knowledge.

The test dataset demonstrates typical mule account patterns: rapid movement of funds through multiple accounts, young account holders, recently opened accounts, and structured amounts designed to avoid reporting thresholds. These patterns are much easier to spot in a graph database than in traditional relational databases.

Experiment with different queries and explore how graph thinking can reveal hidden patterns in connected data.

0
0

MacOs: Deep Dive into NMAP using Claude Desktop with an NMAP MCP

Introduction

NMAP (Network Mapper) is one of the most powerful and versatile network scanning tools available for security professionals, system administrators, and ethical hackers. When combined with Claude through the Model Context Protocol (MCP), it becomes an even more powerful tool, allowing you to leverage AI to intelligently analyze scan results, suggest scanning strategies, and interpret complex network data.

In this deep dive, we’ll explore how to set up NMAP with Claude Desktop using an MCP server, and demonstrate 20+ comprehensive vulnerability checks and reconnaissance techniques you can perform using natural language prompts.

Legal Disclaimer: Only scan systems and networks you own or have explicit written permission to test. Unauthorized scanning may be illegal in your jurisdiction.

Prerequisites

  • macOS, Linux, or Windows with WSL
  • Basic understanding of networking concepts
  • Permission to scan target systems
  • Claude Desktop installed

Part 1: Installation and Setup

Step 1: Install NMAP

On macOS:

# Using Homebrew
brew install nmap

# Verify installation

On Linux (Ubuntu/Debian):

Step 2: Install Node.js (Required for MCP Server)

The NMAP MCP server requires Node.js to run.

Mac OS:

brew install node
node --version
npm --version

Step 3: Install the NMAP MCP Server

The most popular NMAP MCP server is available on GitHub. We’ll install it globally:

cd ~/
rm -rf nmap-mcp-server
git clone https://github.com/PhialsBasement/nmap-mcp-server.git
cd nmap-mcp-server
npm install
npm run build

Step 4: Configure Claude Desktop

Edit the Claude Desktop configuration file to add the NMAP MCP server.

On macOS:

CONFIG_FILE="$HOME/Library/Application Support/Claude/claude_desktop_config.json"
USERNAME=$(whoami)

cp "$CONFIG_FILE" "$CONFIG_FILE.backup"

python3 << 'EOF'
import json
import os

config_file = os.path.expanduser("~/Library/Application Support/Claude/claude_desktop_config.json")
username = os.environ['USER']

with open(config_file, 'r') as f:
config = json.load(f)

if 'mcpServers' not in config:
config['mcpServers'] = {}

config['mcpServers']['nmap'] = {
"command": "node",
"args": [
f"/Users/{username}/nmap-mcp-server/dist/index.js"
],
"env": {}
}

with open(config_file, 'w') as f:
json.dump(config, f, indent=2)

print("nmap server added to Claude Desktop config!")
print(f"Backup saved to: {config_file}.backup")
EOF


Step 5: Restart Claude Desktop

Close and reopen Claude Desktop. You should see the NMAP MCP server connected in the bottom-left corner.

Part 2: Understanding NMAP MCP Capabilities

Once configured, Claude can execute NMAP scans through the MCP server. The server typically provides:

  • Host discovery scans
  • Port scanning (TCP/UDP)
  • Service version detection
  • OS detection
  • Script scanning (NSE – NMAP Scripting Engine)
  • Output parsing and interpretation

Part 3: 20 Most Common Vulnerability Checks

For these examples, we’ll use a hypothetical target domain: example-target.com (replace with your authorized target).

1. Basic Host Discovery and Open Ports

Prompt:

Scan example-target.com to discover if the host is up and identify all open ports (1-1000). Use a TCP SYN scan for speed.

What this does: Performs a fast SYN scan on the first 1000 ports to quickly identify open services.

Expected NMAP command:

nmap -sS -p 1-1000 example-target.com

2. Comprehensive Port Scan (All 65535 Ports)

Prompt:

Perform a comprehensive scan of all 65535 TCP ports on example-target.com to identify any services running on non-standard ports.

What this does: Scans every possible TCP port – time-consuming but thorough.

Expected NMAP command:

nmap -p- example-target.com

3. Service Version Detection

Prompt:

Scan the top 1000 ports on example-target.com and detect the exact versions of services running on open ports. This will help identify outdated software.

What this does: Probes open ports to determine service/version info, crucial for finding known vulnerabilities.

Expected NMAP command:

nmap -sV example-target.com

4. Operating System Detection

Prompt:

Detect the operating system running on example-target.com using TCP/IP stack fingerprinting. Include OS detection confidence levels.

What this does: Analyzes network responses to guess the target OS.

Expected NMAP command:

nmap -O example-target.com

5. Aggressive Scan (OS + Version + Scripts + Traceroute)

Prompt:

Run an aggressive scan on example-target.com that includes OS detection, version detection, script scanning, and traceroute. This is comprehensive but noisy.

What this does: Combines multiple detection techniques for maximum information.

Expected NMAP command:

nmap -A example-target.com

6. Vulnerability Scanning with NSE Scripts

Prompt:

Scan example-target.com using NMAP's vulnerability detection scripts to check for known CVEs and security issues in running services.

What this does: Uses NSE scripts from the ‘vuln’ category to detect known vulnerabilities.

Expected NMAP command:

nmap --script vuln example-target.com

7. SSL/TLS Security Analysis

Prompt:

Analyze SSL/TLS configuration on example-target.com (port 443). Check for weak ciphers, certificate issues, and SSL vulnerabilities like Heartbleed and POODLE.

What this does: Comprehensive SSL/TLS security assessment.

Expected NMAP command:

nmap -p 443 --script ssl-enum-ciphers,ssl-cert,ssl-heartbleed,ssl-poodle example-target.com

8. HTTP Security Headers and Vulnerabilities

Prompt:

Check example-target.com's web server (ports 80, 443, 8080) for security headers, common web vulnerabilities, and HTTP methods allowed.

What this does: Tests for missing security headers, dangerous HTTP methods, and common web flaws.

Expected NMAP command:

nmap -p 80,443,8080 --script http-security-headers,http-methods,http-csrf,http-stored-xss example-target.com

Prompt:

Scan example-target.com for SMB vulnerabilities including MS17-010 (EternalBlue), SMB signing issues, and accessible shares.

What this does: Critical for identifying Windows systems vulnerable to ransomware exploits.

Expected NMAP command:

nmap -p 445 --script smb-vuln-ms17-010,smb-vuln-*,smb-enum-shares example-target.com

10. SQL Injection Testing

Prompt:

Test web applications on example-target.com (ports 80, 443) for SQL injection vulnerabilities in common web paths and parameters.

What this does: Identifies potential SQL injection points.

Expected NMAP command:

nmap -p 80,443 --script http-sql-injection example-target.com

11. DNS Zone Transfer Vulnerability

Prompt:

Test if example-target.com's DNS servers allow unauthorized zone transfers, which could leak internal network information.

What this does: Attempts AXFR zone transfer – a serious misconfiguration if allowed.

Expected NMAP command:

nmap --script dns-zone-transfer --script-args dns-zone-transfer.domain=example-target.com -p 53 example-target.com

12. SSH Security Assessment

Prompt:

Analyze SSH configuration on example-target.com (port 22). Check for weak encryption algorithms, host keys, and authentication methods.

What this does: Identifies insecure SSH configurations.

Expected NMAP command:

nmap -p 22 --script ssh-auth-methods,ssh-hostkey,ssh2-enum-algos example-target.com

Prompt:

Check if example-target.com's FTP server (port 21) allows anonymous login and scan for FTP-related vulnerabilities.

What this does: Tests for anonymous FTP access and common FTP security issues.

Expected NMAP command:

nmap -p 21 --script ftp-anon,ftp-vuln-cve2010-4221,ftp-bounce example-target.com

Prompt:

Scan example-target.com's email servers (ports 25, 110, 143, 587, 993, 995) for open relays, STARTTLS support, and vulnerabilities.

What this does: Comprehensive email server security check.

Expected NMAP command:

nmap -p 25,110,143,587,993,995 --script smtp-open-relay,smtp-enum-users,ssl-cert example-target.com

15. Database Server Exposure

Prompt:

Check if example-target.com has publicly accessible database servers (MySQL, PostgreSQL, MongoDB, Redis) and test for default credentials.

What this does: Identifies exposed databases, a critical security issue.

Expected NMAP command:

nmap -p 3306,5432,27017,6379 --script mysql-empty-password,pgsql-brute,mongodb-databases,redis-info example-target.com

16. WordPress Security Scan

Prompt:

If example-target.com runs WordPress, enumerate plugins, themes, and users, and check for known vulnerabilities.

What this does: WordPress-specific security assessment.

Expected NMAP command:

nmap -p 80,443 --script http-wordpress-enum,http-wordpress-users example-target.com

17. XML External Entity (XXE) Vulnerability

Prompt:

Test web services on example-target.com for XML External Entity (XXE) injection vulnerabilities.

What this does: Identifies XXE flaws in XML parsers.

Expected NMAP command:

nmap -p 80,443 --script http-vuln-cve2017-5638 example-target.com

18. SNMP Information Disclosure

Prompt:

Scan example-target.com for SNMP services (UDP port 161) and attempt to extract system information using common community strings.

What this does: SNMP can leak sensitive system information.

Expected NMAP command:

nmap -sU -p 161 --script snmp-brute,snmp-info example-target.com

19. RDP Security Assessment

Prompt:

Check if Remote Desktop Protocol (RDP) on example-target.com (port 3389) is vulnerable to known exploits like BlueKeep (CVE-2019-0708).

What this does: Critical Windows remote access security check.

Expected NMAP command:

nmap -p 3389 --script rdp-vuln-ms12-020,rdp-enum-encryption example-target.com

20. API Endpoint Discovery and Testing

Prompt:

Discover API endpoints on example-target.com and test for common API vulnerabilities including authentication bypass and information disclosure.

What this does: Identifies REST APIs and tests for common API security issues.

Expected NMAP command:

nmap -p 80,443,8080,8443 --script http-methods,http-auth-finder,http-devframework example-target.com

Part 4: Deep Dive Exercises

Deep Dive Exercise 1: Complete Web Application Security Assessment

Scenario: You need to perform a comprehensive security assessment of a web application running at webapp.example-target.com.

Claude Prompt:

I need a complete security assessment of webapp.example-target.com. Please:

1. First, discover all open ports and running services
2. Identify the web server software and version
3. Check for SSL/TLS vulnerabilities and certificate issues
4. Test for common web vulnerabilities (XSS, SQLi, CSRF)
5. Check security headers (CSP, HSTS, X-Frame-Options, etc.)
6. Enumerate web directories and interesting files
7. Test for backup file exposure (.bak, .old, .zip)
8. Check for sensitive information in robots.txt and sitemap.xml
9. Test HTTP methods for dangerous verbs (PUT, DELETE, TRACE)
10. Provide a prioritized summary of findings with remediation advice

Use timing template T3 (normal) to avoid overwhelming the target.

What Claude will do:

Claude will execute multiple NMAP scans in sequence, starting with discovery and progressively getting more detailed. Example commands it might run:

# Phase 1: Discovery
nmap -sV -T3 webapp.example-target.com

# Phase 2: SSL/TLS Analysis
nmap -p 443 -T3 --script ssl-cert,ssl-enum-ciphers,ssl-known-key,ssl-heartbleed,ssl-poodle,ssl-ccs-injection webapp.example-target.com

# Phase 3: Web Vulnerability Scanning
nmap -p 80,443 -T3 --script http-security-headers,http-csrf,http-sql-injection,http-stored-xss,http-dombased-xss webapp.example-target.com

# Phase 4: Directory and File Enumeration
nmap -p 80,443 -T3 --script http-enum,http-backup-finder webapp.example-target.com

# Phase 5: HTTP Methods Testing
nmap -p 80,443 -T3 --script http-methods --script-args http-methods.test-all webapp.example-target.com

Learning Outcomes:

  • Understanding layered security assessment methodology
  • How to interpret multiple scan results holistically
  • Prioritization of security findings by severity
  • Claude’s ability to correlate findings across multiple scans

Deep Dive Exercise 2: Network Perimeter Reconnaissance

Scenario: You’re assessing the security perimeter of an organization with the domain company.example-target.com and a known IP range 198.51.100.0/24.

Claude Prompt:

Perform comprehensive network perimeter reconnaissance for company.example-target.com (IP range 198.51.100.0/24). I need to:

1. Discover all live hosts in the IP range
2. For each live host, identify:
   - Operating system
   - All open ports (full 65535 range)
   - Service versions
   - Potential vulnerabilities
3. Map the network topology and identify:
   - Firewalls and filtering
   - DMZ hosts vs internal hosts
   - Critical infrastructure (DNS, mail, web servers)
4. Test for common network misconfigurations:
   - Open DNS resolvers
   - Open mail relays
   - Unauthenticated database access
   - Unencrypted management protocols (Telnet, FTP)
5. Provide a network map and executive summary

Use slow timing (T2) to minimize detection risk and avoid false positives.

What Claude will do:

# Phase 1: Host Discovery
nmap -sn -T2 198.51.100.0/24

# Phase 2: OS Detection on Live Hosts
nmap -O -T2 198.51.100.0/24

# Phase 3: Comprehensive Port Scan (may suggest splitting into chunks)
nmap -p- -T2 198.51.100.0/24

# Phase 4: Service Version Detection
nmap -sV -T2 198.51.100.0/24

# Phase 5: Specific Service Checks
nmap -p 53 --script dns-recursion 198.51.100.0/24
nmap -p 25 --script smtp-open-relay 198.51.100.0/24
nmap -p 3306,5432,27017 --script mysql-empty-password,pgsql-brute,mongodb-databases 198.51.100.0/24
nmap -p 23,21 198.51.100.0/24

# Phase 6: Vulnerability Scanning on Critical Hosts
nmap --script vuln -T2 [critical-hosts]

Learning Outcomes:

  • Large-scale network scanning strategies
  • How to handle and analyze results from multiple hosts
  • Network segmentation analysis
  • Risk assessment across an entire network perimeter
  • Understanding firewall and filtering detection

Deep Dive Exercise 3: Advanced Vulnerability Research – Zero-Day Hunting

Scenario: You’ve discovered a host running potentially vulnerable services and want to do deep reconnaissance to identify potential zero-day vulnerabilities or chained exploits.

Claude Prompt:

I've found a server at secure-server.example-target.com that's running multiple services. I need advanced vulnerability research:

1. Aggressive version fingerprinting of all services
2. Check for version-specific CVEs in detected software
3. Look for unusual port combinations that might indicate custom applications
4. Test for default credentials on all identified services
5. Check for known backdoors in the detected software versions
6. Test for authentication bypass vulnerabilities
7. Look for information disclosure issues (version strings, debug info, error messages)
8. Test for timing attacks and race conditions
9. Analyze for possible exploit chains (e.g., LFI -> RCE)
10. Provide detailed analysis with CVSS scores and exploit availability

Run this aggressively (-T4) as we have permission for intensive testing.

What Claude will do:

# Phase 1: Aggressive Fingerprinting
nmap -sV --version-intensity 9 -T4 secure-server.example-target.com

# Phase 2: All Vulnerability Scripts
nmap --script vuln,exploit -T4 secure-server.example-target.com

# Phase 3: Default Credential Testing
nmap --script auth -T4 secure-server.example-target.com

# Phase 4: Backdoor Detection
nmap --script backdoor-check,unusual-port -T4 secure-server.example-target.com

# Phase 5: Authentication Testing
nmap --script auth-bypass,brute -T4 secure-server.example-target.com

# Phase 6: Information Disclosure
nmap --script banner,http-errors,http-git,http-svn-enum -T4 secure-server.example-target.com

# Phase 7: Service-Specific Deep Dives
# (Claude will run targeted scripts based on discovered services)

After scans, Claude will:

  • Cross-reference detected versions with CVE databases
  • Explain potential exploit chains
  • Provide PoC (Proof of Concept) suggestions
  • Recommend remediation priorities
  • Suggest additional manual testing techniques

Learning Outcomes:

  • Advanced NSE scripting capabilities
  • How to correlate vulnerabilities for exploit chains
  • Understanding vulnerability severity and exploitability
  • Version-specific vulnerability research
  • Claude’s ability to provide context from its training data about specific CVEs

Part 5: Wide-Ranging Reconnaissance Exercises

Exercise 5.1: Subdomain Discovery and Mapping

Prompt:

Help me discover all subdomains of example-target.com and create a complete map of their infrastructure. For each subdomain found:
- Resolve its IP addresses
- Check if it's hosted on the same infrastructure
- Identify the services running
- Note any interesting or unusual findings

Also check for common subdomain patterns like api, dev, staging, admin, etc.

What this reveals: Shadow IT, forgotten dev servers, API endpoints, and the organization’s infrastructure footprint.

Exercise 5.2: API Security Testing

Prompt:

I've found an API at api.example-target.com. Please:
1. Identify the API type (REST, GraphQL, SOAP)
2. Discover all available endpoints
3. Test authentication mechanisms
4. Check for rate limiting
5. Test for IDOR (Insecure Direct Object References)
6. Look for excessive data exposure
7. Test for injection vulnerabilities
8. Check API versioning and test old versions for vulnerabilities
9. Verify CORS configuration
10. Test for JWT vulnerabilities if applicable

Exercise 5.3: Cloud Infrastructure Detection

Prompt:

Scan example-target.com to identify if they're using cloud infrastructure (AWS, Azure, GCP). Look for:
- Cloud-specific IP ranges
- S3 buckets or blob storage
- Cloud-specific services (CloudFront, Azure CDN, etc.)
- Misconfigured cloud resources
- Storage bucket permissions
- Cloud metadata services exposure

Exercise 5.4: IoT and Embedded Device Discovery

Prompt:

Scan the network 192.168.1.0/24 for IoT and embedded devices such as:
- IP cameras
- Smart TVs
- Printers
- Network attached storage (NAS)
- Home automation systems
- Industrial control systems (ICS/SCADA if applicable)

Check each device for:
- Default credentials
- Outdated firmware
- Unencrypted communications
- Exposed management interfaces

Exercise 5.5: Checking for Known Vulnerabilities and Old Software

Prompt:

Perform a comprehensive audit of example-target.com focusing on outdated and vulnerable software:

1. Detect exact versions of all running services
2. For each service, check if it's end-of-life (EOL)
3. Identify known CVEs for each version detected
4. Prioritize findings by:
   - CVSS score
   - Exploit availability
   - Exposure (internet-facing vs internal)
5. Check for:
   - Outdated TLS/SSL versions
   - Deprecated cryptographic algorithms
   - Unpatched web frameworks
   - Old CMS versions (WordPress, Joomla, Drupal)
   - Legacy protocols (SSLv3, TLS 1.0, weak ciphers)
6. Generate a remediation roadmap with version upgrade recommendations

Expected approach:

# Detailed version detection
nmap -sV --version-intensity 9 example-target.com

# Check for versionable services
nmap --script version,http-server-header,http-generator example-target.com

# SSL/TLS testing
nmap -p 443 --script ssl-cert,ssl-enum-ciphers,sslv2,ssl-date example-target.com

# CMS detection
nmap -p 80,443 --script http-wordpress-enum,http-joomla-brute,http-drupal-enum example-target.com

Claude will then analyze the results and provide:

  • A table of detected software with current versions and latest versions
  • CVE listings with severity scores
  • Specific upgrade recommendations
  • Risk assessment for each finding

Part 6: Advanced Tips and Techniques

6.1 Optimizing Scan Performance

Timing Templates:

  • -T0 (Paranoid): Extremely slow, for IDS evasion
  • -T1 (Sneaky): Slow, minimal detection risk
  • -T2 (Polite): Slower, less bandwidth intensive
  • -T3 (Normal): Default, balanced approach
  • -T4 (Aggressive): Faster, assumes good network
  • -T5 (Insane): Extremely fast, may miss results

Prompt:

Explain when to use each NMAP timing template and demonstrate the difference by scanning example-target.com with T2 and T4 timing.

6.2 Evading Firewalls and IDS

Prompt:

Scan example-target.com using techniques to evade firewalls and intrusion detection systems:
- Fragment packets
- Use decoy IP addresses
- Randomize scan order
- Use idle scan if possible
- Spoof MAC address (if on local network)
- Use source port 53 or 80 to bypass egress filtering

Expected command examples:

# Fragmented packets
nmap -f example-target.com

# Decoy scan
nmap -D RND:10 example-target.com

# Randomize hosts
nmap --randomize-hosts example-target.com

# Source port spoofing
nmap --source-port 53 example-target.com

6.3 Creating Custom NSE Scripts with Claude

Prompt:

Help me create a custom NSE script that checks for a specific vulnerability in our custom application running on port 8080. The vulnerability is that the /debug endpoint returns sensitive configuration data without authentication.

Claude can help you write Lua scripts for NMAP’s scripting engine!

6.4 Output Parsing and Reporting

Prompt:

Scan example-target.com and save results in all available formats (normal, XML, grepable, script kiddie). Then help me parse the XML output to extract just the critical and high severity findings for a report.

Expected command:

nmap -oA scan_results example-target.com

Claude can then help you parse the XML file programmatically.

Part 7: Responsible Disclosure and Next Steps

After Finding Vulnerabilities

  1. Document everything: Keep detailed records of your findings
  2. Prioritize by risk: Use CVSS scores and business impact
  3. Responsible disclosure: Follow the organization’s security policy
  4. Remediation tracking: Help create an action plan
  5. Verify fixes: Re-test after patches are applied

Using Claude for Post-Scan Analysis

Prompt:

I've completed my NMAP scans and found 15 vulnerabilities. Here are the results: [paste scan output]. 

Please:
1. Categorize by severity (Critical, High, Medium, Low, Info)
2. Explain each vulnerability in business terms
3. Provide remediation steps for each
4. Suggest a remediation priority order
5. Draft an executive summary for management
6. Create technical remediation tickets for the engineering team

Claude excels at translating technical scan results into actionable business intelligence.

Part 8: Continuous Monitoring with NMAP and Claude

Set up regular scanning routines and use Claude to track changes:

Prompt:

Create a baseline scan of example-target.com and save it. Then help me set up a cron job (or scheduled task) to run weekly scans and alert me to any changes in:
- New open ports
- Changed service versions
- New hosts discovered
- Changes in vulnerabilities detected

Conclusion

Combining NMAP’s powerful network scanning capabilities with Claude’s AI-driven analysis creates a formidable security assessment toolkit. The Model Context Protocol bridges these tools seamlessly, allowing you to:

  • Express complex scanning requirements in natural language
  • Get intelligent interpretation of scan results
  • Receive contextual security advice
  • Automate repetitive reconnaissance tasks
  • Learn security concepts through interactive exploration

Key Takeaways:

  1. Always get permission before scanning any network or system
  2. Start with gentle scans and progressively get more aggressive
  3. Use timing controls to avoid overwhelming targets or triggering alarms
  4. Correlate multiple scans for a complete security picture
  5. Leverage Claude’s knowledge to interpret results and suggest next steps
  6. Document everything for compliance and knowledge sharing
  7. Keep NMAP updated to benefit from the latest scripts and capabilities

The examples provided in this guide demonstrate just a fraction of what’s possible when combining NMAP with AI assistance. As you become more comfortable with this workflow, you’ll discover new ways to leverage Claude’s understanding to make your security assessments more efficient and comprehensive.

Additional Resources

About the Author: This guide was created to help security professionals and system administrators leverage AI assistance for more effective network reconnaissance and vulnerability assessment.

Last Updated: 2025-11-21

Version: 1.0

0
0

Deep Dive into PostgreSQL Prepared Statements: When Plan Caching Goes Wrong leading to Memory Exhaustion

Prepared statements are one of PostgreSQL’s most powerful features for query optimization. By parsing and planning queries once, then reusing those plans for subsequent executions, they can dramatically improve performance. But this optimization comes with a hidden danger: sometimes caching the same plan for every execution can lead to catastrophic memory exhaustion and performance degradation.

In this deep dive, we’ll explore how prepared statement plan caching works, when it fails spectacularly, and how PostgreSQL has evolved to address these challenges.

1. Understanding Prepared Statements and Plan Caching

When you execute a prepared statement in PostgreSQL, the database goes through several phases:

  1. Parsing: Converting the SQL text into a parse tree
  2. Planning: Creating an execution plan based on statistics and parameters
  3. Execution: Running the plan against actual data

The promise of prepared statements is simple: do steps 1 and 2 once, then reuse the results for repeated executions with different parameter values.

-- Prepare the statement
PREPARE get_orders AS
SELECT * FROM orders WHERE customer_id = $1;

-- Execute multiple times with different parameters
EXECUTE get_orders(123);
EXECUTE get_orders(456);
EXECUTE get_orders(789);

PostgreSQL uses a clever heuristic to decide when to cache plans. For the first five executions, it creates a custom plan specific to the parameter values. Starting with the sixth execution, it evaluates whether a generic plan (one that works for any parameter value) would be more efficient. If the average cost of the custom plans is close enough to the generic plan’s cost, PostgreSQL switches to reusing the generic plan.

2. The Dark Side: Memory Exhaustion from Plan Caching

Here’s where things can go catastrophically wrong. Consider a partitioned table:

CREATE TABLE events (
    id BIGSERIAL,
    event_date DATE,
    user_id INTEGER,
    event_type TEXT,
    data JSONB
) PARTITION BY RANGE (event_date);

-- Create 365 partitions, one per day
CREATE TABLE events_2024_01_01 PARTITION OF events
    FOR VALUES FROM ('2024-01-01') TO ('2024-01-02');
CREATE TABLE events_2024_01_02 PARTITION OF events
    FOR VALUES FROM ('2024-01-02') TO ('2024-01-03');
-- ... 363 more partitions

Now consider this prepared statement:

PREPARE get_events AS
SELECT * FROM events WHERE event_date = $1;

The Problem: Generic Plans Can’t Prune Partitions

When PostgreSQL creates a generic plan for this query, it doesn’t know which specific date you’ll query at execution time. Without this knowledge, the planner cannot perform partition pruning the critical optimization that eliminates irrelevant partitions from consideration.

Here’s what happens:

  1. Custom plan (first 5 executions): PostgreSQL sees the actual date value, realizes only one partition is relevant, and creates a plan that touches only that partition. Fast and efficient.
  2. Generic plan (6th execution onward): PostgreSQL creates a plan that must be valid for ANY date value. Since it can’t know which partition you’ll need, it includes ALL 365 partitions in the plan.

The result: Instead of reading from 1 partition, PostgreSQL’s generic plan prepares to read from all 365 partitions. This leads to:

  • Memory exhaustion: The query plan itself becomes enormous, containing nodes for every partition
  • Planning overhead: Even though the plan is cached, initializing it for execution requires allocating memory for all partition nodes
  • Execution inefficiency: The executor must check every partition, even though 364 of them will return zero rows

In extreme cases with thousands of partitions, this can consume gigabytes of memory per connection and bring your database to its knees.

3. Partition Pruning: The Critical Optimization and How It Works

Partition pruning is the process of eliminating partitions that cannot possibly contain relevant data based on query constraints. Understanding partition pruning in depth is essential for working with partitioned tables effectively.

3.1 What Is Partition Pruning?

At its core, partition pruning is PostgreSQL’s mechanism for avoiding unnecessary work. When you query a partitioned table, the database analyzes your WHERE clause and determines which partitions could possibly contain matching rows. All other partitions are excluded from the query execution entirely.

Consider a table partitioned by date range:

CREATE TABLE sales (
    sale_id BIGINT,
    sale_date DATE,
    amount NUMERIC,
    product_id INTEGER
) PARTITION BY RANGE (sale_date);

CREATE TABLE sales_2023_q1 PARTITION OF sales
    FOR VALUES FROM ('2023-01-01') TO ('2023-04-01');
    
CREATE TABLE sales_2023_q2 PARTITION OF sales
    FOR VALUES FROM ('2023-04-01') TO ('2023-07-01');
    
CREATE TABLE sales_2023_q3 PARTITION OF sales
    FOR VALUES FROM ('2023-07-01') TO ('2023-10-01');
    
CREATE TABLE sales_2023_q4 PARTITION OF sales
    FOR VALUES FROM ('2023-10-01') TO ('2024-01-01');

When you execute:

SELECT * FROM sales WHERE sale_date = '2023-05-15';

PostgreSQL performs partition pruning by examining the partition constraints. It determines that only sales_2023_q2 can contain rows with sale_date = ‘2023-05-15’, so it completely ignores the other three partitions. They never get opened, scanned, or loaded into memory.

3.2 The Two Stages of Partition Pruning

PostgreSQL performs partition pruning at two distinct stages in query execution, and understanding the difference is crucial for troubleshooting performance issues.

Stage 1: Plan Time Pruning (Static Pruning)

Plan time pruning happens during the query planning phase, before execution begins. This is the ideal scenario because pruned partitions never appear in the execution plan at all.

When it occurs:

  • The query contains literal values in the WHERE clause
  • The partition key columns are directly compared to constants
  • The planner can evaluate the partition constraints at planning time

Example:

EXPLAIN SELECT * FROM sales WHERE sale_date = '2023-05-15';

Output might show:

Seq Scan on sales_2023_q2 sales
  Filter: (sale_date = '2023-05-15'::date)

Notice that only one partition appears in the plan. The other three partitions were pruned away during planning, and they consume zero resources.

What makes plan time pruning possible:

The planner evaluates the WHERE clause condition against each partition’s constraint. For sales_2023_q2, the constraint is:

sale_date >= '2023-04-01' AND sale_date < '2023-07-01'

The planner performs boolean logic: “Can sale_date = ‘2023-05-15’ be true if the constraint requires sale_date >= ‘2023-04-01’ AND sale_date < ‘2023-07-01’?” Yes, it can. For the other partitions, the answer is no, so they’re eliminated.

Performance characteristics:

  • No runtime overhead for pruned partitions
  • Minimal memory usage
  • Optimal query performance
  • The execution plan is lean and specific

Stage 2: Execution Time Pruning (Dynamic Pruning)

Execution time pruning, also called runtime pruning, happens during query execution rather than planning. This occurs when the planner cannot determine which partitions to prune until the query actually runs.

When it occurs:

  • Parameters or variables are used instead of literal values
  • Subqueries provide the filter values
  • Join conditions determine which partitions are needed
  • Prepared statements with parameters

Example:

PREPARE get_sales AS 
SELECT * FROM sales WHERE sale_date = $1;

EXPLAIN (ANALYZE) EXECUTE get_sales('2023-05-15');

With execution time pruning, the plan initially includes all partitions, but the output shows:

Append (actual rows=100)
  Subplans Removed: 3
  -> Seq Scan on sales_2023_q2 sales (actual rows=100)
       Filter: (sale_date = '2023-05-15'::date)

The key indicator is “Subplans Removed: 3”, which tells you that three partitions were pruned at execution time.

How execution time pruning works:

During the initialization phase of query execution, PostgreSQL evaluates the actual parameter values and applies the same constraint checking logic as plan time pruning. However, instead of eliminating partitions from the plan, it marks them as “pruned” and skips their initialization and execution.

The critical difference:

Even though execution time pruning skips scanning the pruned partitions, the plan still contains nodes for all partitions. This means:

  • Memory is allocated for all partition nodes (though less than full initialization)
  • The plan structure is larger
  • There is a small runtime cost to check each partition
  • More complex bookkeeping is required

This is why execution time pruning, while much better than no pruning, is not quite as efficient as plan time pruning.

3.3 Partition Pruning with Different Partition Strategies

PostgreSQL supports multiple partitioning strategies, and pruning works differently for each.

Range Partitioning

Range partitioning is the most common and supports the most effective pruning:

CREATE TABLE measurements (
    measurement_time TIMESTAMPTZ,
    sensor_id INTEGER,
    value NUMERIC
) PARTITION BY RANGE (measurement_time);

Pruning logic: PostgreSQL uses range comparison. Given a filter like measurement_time >= '2024-01-01' AND measurement_time < '2024-02-01', it identifies all partitions whose ranges overlap with this query range.

Pruning effectiveness: Excellent. Range comparisons are computationally cheap and highly selective.

List Partitioning

List partitioning groups rows by discrete values:

CREATE TABLE orders (
    order_id BIGINT,
    country_code TEXT,
    amount NUMERIC
) PARTITION BY LIST (country_code);

CREATE TABLE orders_us PARTITION OF orders
    FOR VALUES IN ('US');
    
CREATE TABLE orders_uk PARTITION OF orders
    FOR VALUES IN ('UK');
    
CREATE TABLE orders_eu PARTITION OF orders
    FOR VALUES IN ('DE', 'FR', 'IT', 'ES');

Pruning logic: PostgreSQL checks if the query value matches any value in each partition’s list.

SELECT * FROM orders WHERE country_code = 'FR';

Only orders_eu is accessed because ‘FR’ appears in its value list.

Pruning effectiveness: Very good for equality comparisons. Less effective for OR conditions across many values or pattern matching.

Hash Partitioning

Hash partitioning distributes rows using a hash function:

CREATE TABLE users (
    user_id BIGINT,
    username TEXT,
    email TEXT
) PARTITION BY HASH (user_id);

CREATE TABLE users_p0 PARTITION OF users
    FOR VALUES WITH (MODULUS 4, REMAINDER 0);
    
CREATE TABLE users_p1 PARTITION OF users
    FOR VALUES WITH (MODULUS 4, REMAINDER 1);
-- ... p2, p3

Pruning logic: PostgreSQL computes the hash of the query value and determines which partition it maps to.

SELECT * FROM users WHERE user_id = 12345;

PostgreSQL calculates hash(12345) % 4 and accesses only the matching partition.

Pruning effectiveness: Excellent for equality on the partition key. Completely ineffective for range queries, pattern matching, or anything except exact equality matches.

3.4 Complex Partition Pruning Scenarios

Real world queries are often more complex than simple equality comparisons. Here’s how pruning handles various scenarios:

Multi Column Partition Keys

CREATE TABLE events (
    event_date DATE,
    region TEXT,
    data JSONB
) PARTITION BY RANGE (event_date, region);

Pruning works on the leading columns of the partition key. A query filtering only on event_date can still prune effectively. A query filtering only on region cannot prune at all because region is not the leading column.

OR Conditions

SELECT * FROM sales 
WHERE sale_date = '2023-05-15' OR sale_date = '2023-08-20';

PostgreSQL must access partitions for both dates (Q2 and Q3), so it keeps both and prunes Q1 and Q4. OR conditions reduce pruning effectiveness.

Inequality Comparisons

SELECT * FROM sales WHERE sale_date >= '2023-05-01';

PostgreSQL prunes partitions entirely before the date (Q1) but must keep all partitions from Q2 onward. Range queries reduce pruning selectivity.

Joins Between Partitioned Tables

SELECT * FROM sales s
JOIN products p ON s.product_id = p.product_id
WHERE s.sale_date = '2023-05-15';

If sales is partitioned by sale_date, that partition pruning works normally. If products is also partitioned, PostgreSQL attempts partitionwise joins where possible, enabling pruning on both sides.

Subqueries Providing Values

SELECT * FROM sales 
WHERE sale_date = (SELECT MAX(order_date) FROM orders);

This requires execution time pruning because the subquery must run before PostgreSQL knows which partition to access.

3.5 Monitoring Partition Pruning

To verify partition pruning is working, use EXPLAIN:

EXPLAIN (ANALYZE, BUFFERS) 
SELECT * FROM sales WHERE sale_date = '2023-05-15';

What to look for:

Plan time pruning succeeded:

Seq Scan on sales_2023_q2

Only one partition appears in the plan at all.

Execution time pruning succeeded:

Append
  Subplans Removed: 3
  -> Seq Scan on sales_2023_q2

All partitions appear in the plan structure, but “Subplans Removed” shows pruning happened.

No pruning occurred:

Append
  -> Seq Scan on sales_2023_q1
  -> Seq Scan on sales_2023_q2
  -> Seq Scan on sales_2023_q3
  -> Seq Scan on sales_2023_q4

All partitions were scanned. This indicates a problem.

3.6 Why Partition Pruning Fails

Understanding why pruning fails helps you fix it:

  1. Query doesn’t filter on partition key: If your WHERE clause doesn’t reference the partition column(s), PostgreSQL cannot prune.
  2. Function calls on partition key: WHERE EXTRACT(YEAR FROM sale_date) = 2023 prevents pruning because PostgreSQL can’t map the function result back to partition ranges. Use WHERE sale_date >= '2023-01-01' AND sale_date < '2024-01-01' instead.
  3. Type mismatches: If your partition key is DATE but you compare to TEXT without explicit casting, pruning may fail.
  4. Generic plans in prepared statements: As discussed in the main article, generic plans prevent plan time pruning and older PostgreSQL versions struggled with execution time pruning.
  5. OR conditions with non-partition columns: WHERE sale_date = '2023-05-15' OR customer_id = 100 prevents pruning because customer_id isn’t the partition key.
  6. Volatile functions: WHERE sale_date = CURRENT_DATE may prevent plan time pruning (but should work with execution time pruning).

3.7 Partition Pruning Performance Impact

The performance difference between pruned and unpruned queries can be staggering:

Example scenario: 1000 partitions, each with 1 million rows. Query targets one partition.

With pruning:

  • Partitions opened: 1
  • Rows scanned: 1 million
  • Memory for plan nodes: ~10KB
  • Query time: 50ms

Without pruning:

  • Partitions opened: 1000
  • Rows scanned: 1 billion (returning ~1 million)
  • Memory for plan nodes: ~10MB
  • Query time: 45 seconds

The difference is not incremental; it’s exponential as partition count grows.

4. Partition Pruning in Prepared Statements: The Core Problem

Let me illustrate the severity with a real-world scenario:

-- Table with 1000 partitions
CREATE TABLE metrics (
    timestamp TIMESTAMPTZ,
    metric_name TEXT,
    value NUMERIC
) PARTITION BY RANGE (timestamp);

-- Create 1000 daily partitions...

-- Prepared statement
PREPARE get_metrics AS
SELECT * FROM metrics 
WHERE timestamp >= $1 AND timestamp < $2;

After the 6th execution, PostgreSQL switches to a generic plan. Each subsequent execution:

  1. Allocates memory for 1000 partition nodes
  2. Initializes executor state for 1000 partitions
  3. Checks 1000 partition constraints
  4. Returns data from just 1-2 partitions

If you have 100 connections each executing this prepared statement, you’re multiplying this overhead by 100. With connection poolers reusing connections (and thus reusing prepared statements), the problem compounds.

6. The Fix: Evolution Across PostgreSQL Versions

PostgreSQL has steadily improved partition pruning for prepared statements:

PostgreSQL 11: Execution-Time Pruning Introduced

PostgreSQL 11 introduced run-time partition pruning, but it had significant limitations with prepared statements. Generic plans still included all partitions in memory, even if they could be skipped during execution.

PostgreSQL 12: Better Prepared Statement Pruning

PostgreSQL 12 made substantial improvements:

  • Generic plans gained the ability to defer partition pruning to execution time more effectively
  • The planner became smarter about when to use generic vs. custom plans for partitioned tables
  • Memory consumption for generic plans improved significantly

However, issues remained in edge cases, particularly with: • Multi level partitioning • Complex join queries involving partitioned tables • Prepared statements in stored procedures

PostgreSQL 13-14: Refined Heuristics

These versions improved the cost model for deciding between custom and generic plans:

  • Better accounting for partition pruning benefits in the cost calculation
  • More accurate statistics gathering on partitioned tables
  • Improved handling of partitionwise joins

PostgreSQL 15-16: The Real Game Changers

PostgreSQL 15 and 16 brought transformative improvements:

PostgreSQL 15:

  • Dramatically reduced memory usage for generic plans on partitioned tables
  • Improved execution-time pruning performance
  • Better handling of prepared statements with partition pruning

PostgreSQL 16:

  • Introduced incremental sorting improvements that benefit partitioned queries
  • Enhanced partition-wise aggregation
  • More aggressive execution-time pruning

The key breakthrough: PostgreSQL now builds “stub” plans that allocate minimal memory for partitions that will be pruned, rather than fully initializing all partition nodes.

Workarounds for Older Versions

If you’re stuck on older PostgreSQL versions, here are strategies to avoid the prepared statement pitfall:

1. Disable Generic Plans

Force PostgreSQL to always use custom plans:

-- Set at session level
SET plan_cache_mode = force_custom_plan;

-- Or for specific prepared statement contexts
PREPARE get_events AS
SELECT * FROM events WHERE event_date = $1;

-- Before execution
SET LOCAL plan_cache_mode = force_custom_plan;
EXECUTE get_events('2024-06-15');

This sacrifices the planning time savings but ensures proper partition pruning.

2. Use Statement Level Caching Instead

Many ORMs and database drivers offer statement level caching that doesn’t persist across multiple executions:

# psycopg2 example - named cursors create server-side cursors
# but don't persist plans
cursor = connection.cursor()
cursor.execute(
    "SELECT * FROM events WHERE event_date = %s",
    (date_value,)
)

3. Adjust plan_cache_mode Per Query

PostgreSQL 12+ provides plan_cache_mode:

-- auto (default): use PostgreSQL's heuristics
-- force_generic_plan: always use generic plan
-- force_custom_plan: always use custom plan

SET plan_cache_mode = force_custom_plan;

For partitioned tables, force_custom_plan is often the right choice.

4. Increase Custom Plan Count

The threshold of 5 custom plans before switching to generic is hardcoded, but you can work around it by using different prepared statement names or by periodically deallocating and recreating prepared statements:

DEALLOCATE get_events;
PREPARE get_events AS SELECT * FROM events WHERE event_date = $1;

5. Partition Pruning Hints

In PostgreSQL 12+, you can sometimes coerce the planner into better behavior:

-- Using an explicit constraint that helps the planner
SELECT * FROM events 
WHERE event_date = $1 
  AND event_date >= CURRENT_DATE - INTERVAL '1 year';

This additional constraint provides a hint about the parameter range.

Best Practices

  1. Monitor your query plans: Use EXPLAIN (ANALYZE, BUFFERS) to check if partition pruning is happening:
EXPLAIN (ANALYZE, BUFFERS) EXECUTE get_events('2024-06-15');

Look for “Partitions removed” in the output.

  1. Check prepared statement statistics: Query pg_prepared_statements to see generic vs. custom plan usage:
SELECT name, 
       generic_plans, 
       custom_plans 
FROM pg_prepared_statements;
  1. Upgrade PostgreSQL: If you’re dealing with large partitioned tables, the improvements in PostgreSQL 15+ are worth the upgrade effort.
  2. Design partitions appropriately: Don’t over-partition. Having 10,000 tiny partitions creates problems even with perfect pruning.
  3. Use connection pooling wisely: Prepared statements persist per connection. With connection pooling, long-lived connections accumulate many prepared statements. Configure your pooler to occasionally recycle connections.
  4. Benchmark both modes: Test your specific workload with both custom and generic plans to measure the actual impact.

Conclusion

Prepared statements are a powerful optimization, but their interaction with partitioned tables exposes a fundamental tension: caching for reuse versus specificity for efficiency. PostgreSQL’s evolution from version 11 through 16 represents a masterclass in addressing this challenge.

The key takeaway: if you’re using prepared statements with partitioned tables on PostgreSQL versions older than 15, be vigilant about plan caching behavior. Monitor memory usage, check execution plans, and don’t hesitate to force custom plans when generic plans cause problems.

For modern PostgreSQL installations (15+), the improvements are substantial enough that the traditional guidance of “be careful with prepared statements on partitioned tables” is becoming outdated. The database now handles these scenarios with far more intelligence and efficiency.

But understanding the history and mechanics remains crucial, because the next time you see mysterious memory growth in your PostgreSQL connections, you’ll know exactly where to look.

0
0

Stablecoins: A Comprehensive Guide

1. What Are Stablecoins?

Stablecoins are a type of cryptocurrency designed to maintain a stable value by pegging themselves to a reserve asset, typically a fiat currency like the US dollar. Unlike volatile cryptocurrencies such as Bitcoin or Ethereum, which can experience dramatic price swings, stablecoins aim to provide the benefits of digital currency without the price volatility.

The most common types of stablecoins include:

Fiat collateralized stablecoins are backed by traditional currencies held in reserve at a 1:1 ratio. Examples include Tether (USDT) and USD Coin (USDC), which maintain reserves in US dollars or dollar equivalent assets.

Crypto collateralized stablecoins use other cryptocurrencies as collateral, often over collateralized to account for volatility. DAI is a prominent example, backed by Ethereum and other crypto assets.

Algorithmic stablecoins attempt to maintain their peg through automated supply adjustments based on market demand, without traditional collateral backing. These have proven to be the most controversial and risky category.

2. Why Do Stablecoins Exist?

Stablecoins emerged to solve several critical problems in both traditional finance and the cryptocurrency ecosystem.

In the crypto world, they provide a stable store of value and medium of exchange. Traders use stablecoins to move in and out of volatile positions without converting back to fiat currency, avoiding the delays and fees associated with traditional banking. They serve as a safe harbor during market turbulence and enable seamless transactions across different blockchain platforms.

For cross border payments and remittances, stablecoins offer significant advantages over traditional methods. International transfers that typically take days and cost substantial fees can be completed in minutes for a fraction of the cost. This makes them particularly valuable for workers sending money to families in other countries or businesses conducting international trade.

Stablecoins also address financial inclusion challenges. In countries with unstable currencies or limited banking infrastructure, they provide access to a stable digital currency that can be held and transferred using just a smartphone. This opens up financial services to the unbanked and underbanked populations worldwide.

2.1 How Do Stablecoins Move Money?

Stablecoins move between countries by riding on public or permissioned blockchains rather than correspondent banking rails. When a sender in one country initiates a payment, their bank or payment provider converts local currency into a regulated stablecoin (for example a USD or EUR backed token) and sends that token directly to the recipient bank’s blockchain address. The transaction settles globally in minutes with finality provided by the blockchain, not by intermediaries. To participate, a bank joins a stablecoin network by becoming an authorised issuer or distributor, integrating custody and wallet infrastructure, and connecting its core banking systems to blockchain rails via APIs. On the receiving side, the bank accepts the stablecoin, performs compliance checks (KYC, AML, sanctions screening), and redeems it back into local currency for the client’s account. Because value moves as tokens on chain rather than as messages between correspondent banks, there is no need for SWIFT messaging, nostro/vostro accounts, or multi-day settlement, resulting in faster, cheaper, and more transparent cross border payments.

If a bank does not want the operational and regulatory burden of running its own digital asset custody, it can partner with specialist technology and infrastructure providers that offer custody, wallet management, compliance tooling, and blockchain connectivity as managed services. In this model, the bank retains the customer relationship and regulatory accountability, while the tech partner handles private key security, smart-contract interaction, transaction monitoring, and network operations under strict service-level and audit agreements. Commonly used players in this space include Fireblocks and Copper for institutional custody and secure transaction orchestration; Anchorage Digital and BitGo for regulated custody and settlement services; Circle for stablecoin issuance and on-/off-ramps (USDC); Coinbase Institutional for custody and liquidity; and Stripe or Visa for fiat to stablecoin on-ramps and payment integration. This partnership approach allows banks to move quickly into stablecoin based cross-border payments without rebuilding their core infrastructure or taking on unnecessary operational risk.

3. How Do Stablecoins Make Money?

Stablecoin issuers have developed several revenue models that can be remarkably profitable.

The primary revenue source for fiat backed stablecoins is interest on reserves. When issuers hold billions of dollars in US Treasury bills or other interest bearing assets backing their stablecoins, they earn substantial returns. For instance, with interest rates at 5%, a stablecoin issuer with $100 billion in reserves could generate $5 billion annually while still maintaining the 1:1 peg. Users typically receive no interest on their stablecoin holdings, allowing issuers to pocket the entire yield.

Transaction fees represent another revenue stream. While often minimal, the sheer volume of stablecoin transactions generates significant income. Some issuers charge fees for minting (creating) or redeeming stablecoins, particularly for large institutional transactions.

Premium services for institutional clients provide additional revenue. Banks, payment processors, and large enterprises often pay for faster settlement, higher transaction limits, dedicated support, and integration services.

Many stablecoin platforms also generate revenue through their broader ecosystem. This includes charging fees on decentralized exchanges, lending protocols, or other financial services built around the stablecoin.

3.1 The Pendle Revenue Model: Yield Trading Innovation

Pendle represents an innovative evolution in the DeFi stablecoin ecosystem through its yield trading protocol. Rather than issuing stablecoins directly, Pendle creates markets for trading future yield on stablecoin deposits and other interest bearing assets.

The Pendle revenue model operates through several mechanisms. The protocol charges trading fees on its automated market makers (AMMs), typically around 0.1% to 0.3% per swap. When users trade yield tokens on Pendle’s platform, a portion of these fees goes to the protocol treasury while another portion rewards liquidity providers who supply capital to the trading pools.

Pendle’s unique approach involves splitting interest bearing tokens into two components: the principal token (PT) representing the underlying asset, and the yield token (YT) representing the future interest. This separation allows sophisticated users to speculate on interest rates, hedge yield exposure, or lock in fixed returns on their stablecoin holdings.

The protocol generates revenue through swap fees, redemption fees when tokens mature, and potential governance token value capture as the protocol grows. This model demonstrates how stablecoin adjacent services can create profitable businesses by adding layers of financial sophistication on top of basic stablecoin infrastructure. Pendle particularly benefits during periods of high interest rates, when demand for yield trading increases and the potential returns from separating yield rights become more valuable.

4. Security and Fraud Concerns

Stablecoins face several critical security and fraud challenges that potential users and regulators must consider.

Reserve transparency and verification remain the most significant concern. Issuers must prove they actually hold the assets backing their stablecoins. Several controversies have erupted when stablecoin companies failed to provide clear, audited proof of reserves. The risk is that an issuer might not have sufficient backing, leading to a bank run scenario where the peg collapses and users cannot redeem their coins.

Smart contract vulnerabilities pose technical risks. Stablecoins built on blockchain platforms rely on code that, if flawed, can be exploited by hackers. Major hacks have resulted in hundreds of millions of dollars in losses, and once stolen, blockchain transactions are typically irreversible.

Regulatory uncertainty creates ongoing challenges. Different jurisdictions treat stablecoins differently, and the lack of clear, consistent regulation creates risks for both issuers and users. There’s potential for sudden regulatory action that could freeze assets or shut down operations.

Counterparty risk is inherent in centralized stablecoins. Users must trust the issuing company to maintain reserves, operate honestly, and remain solvent. If the company fails or acts fraudulently, users may lose their funds with limited recourse.

The algorithmic stablecoin model has proven particularly vulnerable. The catastrophic collapse of TerraUSD in 2022, which lost over $40 billion in value, demonstrated that algorithmic mechanisms can fail spectacularly under market stress, creating devastating losses for holders.

Money laundering and sanctions evasion concerns have drawn regulatory scrutiny. The pseudonymous nature of cryptocurrency transactions makes stablecoins attractive for illicit finance, though blockchain’s transparent ledger also makes transactions traceable with proper tools and cooperation.

4.1 Monitoring Stablecoin Flows

Effective monitoring of stablecoin flows has become critical for financial institutions, regulators, and the issuers themselves to ensure compliance, detect fraud, and understand market dynamics.

On Chain Analytics Tools provide the foundation for stablecoin monitoring. Since most stablecoins operate on public blockchains, every transaction is recorded and traceable. Companies like Chainalysis, Elliptic, and TRM Labs specialize in blockchain analytics, offering platforms that track stablecoin movements across wallets and exchanges. These tools can identify patterns, flag suspicious activities, and trace funds through complex transaction chains.

Real Time Transaction Monitoring systems alert institutions to potentially problematic flows. These systems track large transfers, unusual transaction patterns, rapid movement between exchanges (potentially indicating wash trading or manipulation), and interactions with known illicit addresses. Financial institutions integrating stablecoins must implement monitoring comparable to traditional payment systems.

Wallet Clustering and Entity Attribution techniques help identify the real world entities behind blockchain addresses. By analyzing transaction patterns, timing, and common input addresses, analytics firms can cluster related wallets and often attribute them to specific exchanges, services, or even individuals. This capability is crucial for understanding who holds stablecoins and where they’re being used.

Reserve Monitoring and Attestation focuses on the issuer side. Independent auditors and blockchain analysis firms track the total supply of stablecoins and verify that corresponding reserves exist. Circle, for instance, publishes monthly attestations from accounting firms. Some advanced monitoring systems provide real time transparency by linking on chain supply data with bank account verification.

Cross Chain Tracking has become essential as stablecoins exist across multiple blockchains. USDC and USDT operate on Ethereum, Tron, Solana, and other chains, requiring monitoring solutions that aggregate data across these ecosystems to provide a complete picture of flows.

Market Intelligence and Risk Assessment platforms combine on chain data with off chain information to assess concentration risk, identify potential market manipulation, and provide early warning of potential instability. When a small number of addresses hold large stablecoin positions, it creates systemic risk that monitoring can help quantify.

Banks and financial institutions implementing stablecoins typically deploy a combination of commercial blockchain analytics platforms, custom monitoring systems, and compliance teams trained in cryptocurrency investigation. The goal is achieving the same level of financial crime prevention and risk management that exists in traditional banking while adapting to the unique characteristics of blockchain technology.

5. How Regulators View Stablecoins

Regulatory attitudes toward stablecoins vary significantly across jurisdictions, but common themes and concerns have emerged globally.

United States Regulatory Approach involves multiple agencies with overlapping jurisdictions. The Securities and Exchange Commission (SEC) has taken the position that some stablecoins may be securities, particularly those offering yield or governed by investment contracts. The Commodity Futures Trading Commission (CFTC) views certain stablecoins as commodities. The Treasury Department and the Financial Stability Oversight Council have identified stablecoins as potential systemic risks requiring bank like regulation.

Proposed legislation in the US Congress has sought to create a comprehensive framework requiring stablecoin issuers to maintain high quality liquid reserves, submit to regular audits, and potentially obtain banking charters or trust company licenses. The regulatory preference is clearly toward treating major stablecoin issuers as financial institutions subject to banking supervision.

European Union Regulation has taken a more structured approach through the Markets in Crypto Assets (MiCA) regulation, which came into effect in 2024. MiCA establishes clear requirements for stablecoin issuers including reserve asset quality standards, redemption rights for holders, capital requirements, and governance standards. The regulation distinguishes between smaller stablecoin operations and “significant” stablecoins that require more stringent oversight due to their systemic importance.

United Kingdom Regulators are developing a framework that treats stablecoins used for payments as similar to traditional payment systems. The Bank of England and Financial Conduct Authority have indicated that stablecoin issuers should meet standards comparable to commercial banks, including holding reserves in central bank accounts or high quality government securities.

Asian Regulatory Perspectives vary widely. Singapore’s Monetary Authority has created a licensing regime for stablecoin issuers focused on reserve management and redemption guarantees. Hong Kong is developing similar frameworks. China has banned private stablecoins entirely while developing its own central bank digital currency. Japan requires stablecoin issuers to be licensed banks or trust companies.

Key Regulatory Concerns consistently include systemic risk (the failure of a major stablecoin could trigger broader financial instability), consumer protection (ensuring holders can redeem stablecoins for fiat currency), anti money laundering compliance, reserve adequacy and quality, concentration risk in the Treasury market (if stablecoin reserves significantly increase holdings of government securities), and the potential for stablecoins to facilitate capital flight or undermine monetary policy.

Central Bank Digital Currencies (CBDCs) represent a regulatory response to private stablecoins. Many central banks are developing or piloting digital currencies partly to provide a public alternative to private stablecoins, allowing governments to maintain monetary sovereignty while capturing the benefits of digital currency.

The regulatory trend is clearly toward treating stablecoins as systemically important financial infrastructure requiring oversight comparable to banks or payment systems, with an emphasis on reserve quality, redemption rights, and anti money laundering compliance.

5.1 How Stablecoins Impact the Correspondent Banking Model

Stablecoins pose both opportunities and existential challenges to the traditional correspondent banking system that has dominated international payments for decades.

The Traditional Correspondent Banking Model relies on a network of banking relationships where banks hold accounts with each other to facilitate international transfers. When a business in Brazil wants to pay a supplier in Thailand, the payment typically flows through multiple intermediary banks, each taking fees and adding delays. This system involves currency conversion, compliance checks at multiple points, and settlement risk, making international payments slow and expensive.

Stablecoins as Direct Competition offer a fundamentally different model. A business can send USDC directly to a recipient anywhere in the world in minutes, bypassing the correspondent banking network entirely. The recipient can then convert to local currency through a local exchange or payment processor. This disintermediation threatens the fee generating correspondent banking relationships that have been profitable for banks, particularly in remittance corridors and business to business payments.

Cost and Speed Advantages are significant. Traditional correspondent banking involves fees at multiple layers, often totaling 3-7% for remittances and 1-3% for business payments, with settlement taking 1-5 days. Stablecoin transfers can cost less than 1% including conversion fees, with settlement in minutes. This efficiency gap puts pressure on banks to either adopt stablecoin technology or risk losing payment volume.

The Disintermediation Threat extends beyond just payments. Correspondent banking generates substantial revenue for major international banks through foreign exchange spreads, service fees, and liquidity management. If businesses and individuals can hold and transfer value in stablecoins, they become less dependent on banks for international transactions. This is particularly threatening in high volume, low margin corridors where efficiency matters most.

Banks Adapting Through Integration represents one response to this threat. Rather than being displaced, some banks are incorporating stablecoins into their service offerings. They can issue their own stablecoins, partner with stablecoin issuers to provide on ramps and off ramps, or offer custody and transaction services for corporate clients wanting to use stablecoins. JPMorgan’s JPM Coin exemplifies this approach, using blockchain technology and stablecoin principles for institutional payments within a bank controlled system.

The Hybrid Model Emerging in practice combines stablecoins with traditional banking. Banks provide the fiat on ramps and off ramps, regulatory compliance, customer relationships, and local currency conversion, while stablecoins handle the actual transfer of value. This partnership model allows banks to maintain their customer relationships and regulatory compliance role while capturing efficiency gains from blockchain technology.

Regulatory Arbitrage Concerns arise because stablecoins can sometimes operate with less regulatory burden than traditional correspondent banking. Banks face extensive anti money laundering requirements, capital requirements, and regulatory scrutiny. If stablecoins provide similar services with lighter regulation, they gain a competitive advantage that regulators are increasingly seeking to eliminate through tighter stablecoin oversight.

Settlement Risk and Liquidity Management change fundamentally with stablecoins. Traditional correspondent banking requires banks to maintain nostro accounts (accounts held in foreign banks) prefunded with liquidity. Stablecoins allow for near instant settlement without prefunding requirements, potentially freeing up billions in trapped liquidity that banks currently must maintain across the correspondent network.

The long term impact will likely involve correspondent banking evolving rather than disappearing. Banks will increasingly serve as regulated gateways between fiat currency and stablecoins, while stablecoins handle the actual transfer of value. The most vulnerable players are mid tier correspondent banks that primarily provide routing services without strong customer relationships or value added services.

5.2 How FATF Standards Apply to Stablecoins

The Financial Action Task Force (FATF) provides international standards for combating money laundering and terrorist financing, and these standards have been extended to cover stablecoins and other virtual assets.

The Travel Rule represents the most significant FATF requirement affecting stablecoins. Originally designed for traditional wire transfers, the Travel Rule requires that information about the originator and beneficiary of transfers above a certain threshold (typically $1,000) must travel with the transaction. For stablecoins, this means that Virtual Asset Service Providers (VASPs) such as exchanges, wallet providers, and payment processors must collect and transmit customer information when facilitating stablecoin transfers.

Implementing the Travel Rule on public blockchains creates technical challenges. While bank wire transfers pass through controlled systems where information can be attached, blockchain transactions are peer to peer and pseudonymous. The industry has developed solutions like the Travel Rule Information Sharing Architecture (TRISA) and other protocols that allow VASPs to exchange customer information securely off chain while the stablecoin transaction occurs on chain.

Know Your Customer (KYC) and Customer Due Diligence requirements apply to any entity that provides services for stablecoin transactions. Exchanges, wallet providers, and payment processors must verify customer identities, assess risk levels, and maintain records of transactions. This requirement creates a tension with the permissionless nature of blockchain technology, where anyone can hold a self hosted wallet and transact directly without intermediaries.

VASP Registration and Licensing is required in most jurisdictions following FATF guidance. Any business providing stablecoin custody, exchange, or transfer services must register with financial authorities, implement anti money laundering programs, and submit to regulatory oversight. This has created significant compliance burdens for smaller operators and driven consolidation toward larger, well capitalized platforms.

Stablecoin Issuers as VASPs are generally classified as Virtual Asset Service Providers under FATF standards, subjecting them to the full range of anti money laundering and counter terrorist financing obligations. This includes transaction monitoring, suspicious activity reporting, and sanctions screening. Major issuers like Circle and Paxos have built sophisticated compliance programs comparable to traditional financial institutions.

The Self Hosted Wallet Challenge represents a key friction point. FATF has expressed concern about transactions involving self hosted (non custodial) wallets where users control their own private keys without intermediary oversight. Some jurisdictions have proposed restricting or requiring enhanced due diligence for transactions between VASPs and self hosted wallets, though this remains controversial and difficult to enforce technically.

Cross Border Coordination is essential but challenging. Stablecoins operate globally and instantly, but regulatory enforcement is jurisdictional. FATF promotes information sharing between national financial intelligence units and encourages mutual legal assistance. However, gaps in enforcement across jurisdictions create opportunities for regulatory arbitrage, where bad actors operate from jurisdictions with weak oversight.

Sanctions Screening is mandatory for stablecoin service providers. They must screen transactions against lists of sanctioned individuals, entities, and countries maintained by organizations like the US Office of Foreign Assets Control (OFAC). Several stablecoin issuers have demonstrated the ability to freeze funds in wallets associated with sanctioned addresses, showing that even decentralized systems can implement centralized controls when required by law.

Risk Based Approach is fundamental to FATF methodology. Service providers must assess the money laundering and terrorist financing risks specific to their operations and implement controls proportionate to those risks. For stablecoins, this means considering factors like transaction volumes, customer types, geographic exposure, and the underlying blockchain’s anonymity features.

Challenges in Implementation are significant. The pseudonymous nature of blockchain transactions makes it difficult to identify ultimate beneficial owners. The speed and global reach of stablecoin transfers compress the time window for intervention. The prevalence of decentralized exchanges and peer to peer transactions creates enforcement gaps. Some argue that excessive regulation will drive activity to unregulated platforms or privacy focused cryptocurrencies, making financial crime harder rather than easier to detect.

The FATF framework essentially attempts to impose traditional financial system controls on a technology designed to operate without intermediaries. While large, regulated stablecoin platforms can implement these requirements, the tension between regulatory compliance and the permissionless nature of blockchain technology remains unresolved and continues to drive both technological innovation and regulatory evolution.

6. Good Use Cases for Stablecoins

Despite the risks, stablecoins excel in several legitimate applications that offer clear advantages over traditional alternatives.

Cross border payments and remittances benefit enormously from stablecoins. Workers sending money home can avoid high fees and long delays, with transactions settling in minutes rather than days. Businesses conducting international trade can reduce costs and streamline operations significantly.

Treasury management for crypto native companies provides a practical use case. Cryptocurrency exchanges, blockchain projects, and Web3 companies need stable assets for operations while staying within the crypto ecosystem. Stablecoins let them hold working capital without exposure to crypto volatility.

Decentralized finance (DeFi) applications rely heavily on stablecoins. They enable lending and borrowing, yield farming, liquidity provision, and trading without the complications of volatile assets. Users can earn interest on stablecoin deposits or use them as collateral for loans.

Hedging against local currency instability makes stablecoins valuable in countries experiencing hyperinflation or currency crises. Citizens can preserve purchasing power by holding dollar backed stablecoins instead of rapidly devalating local currencies.

Programmable payments and smart contracts benefit from stablecoins. Businesses can automate payments based on conditions (such as releasing funds when goods are received) or create subscription services, escrow arrangements, and other complex payment structures that execute automatically.

Ecommerce and online payments increasingly accept stablecoins as they combine the low fees of cryptocurrency with price stability. This is particularly valuable for digital goods, online services, and merchant payments where volatility would be problematic.

6.1 Companies Specializing in Banking Stablecoin Integration

Several companies have emerged as leaders in helping traditional banks launch and integrate stablecoin solutions into their existing infrastructure.

Paxos is a regulated blockchain infrastructure company that provides white label stablecoin solutions for financial institutions. They’ve partnered with major companies to issue stablecoins and offer compliance focused infrastructure that meets banking regulatory requirements. Paxos handles the technical complexity while allowing banks to maintain their customer relationships.

Circle offers comprehensive business account services and APIs that enable banks to integrate USD Coin (USDC) into their platforms. Their developer friendly tools and banking partnerships have made them a go to provider for institutions wanting to offer stablecoin services. Circle emphasizes regulatory compliance and transparency with regular reserve attestations.

Fireblocks provides institutional grade infrastructure for banks looking to offer digital asset services, including stablecoins. Their platform handles custody, treasury operations, and connectivity to various blockchains, allowing banks to offer stablecoin functionality without building everything from scratch.

Taurus specializes in digital asset infrastructure for banks, wealth managers, and other financial institutions in Europe. They provide technology for custody, tokenization, and trading that enables traditional financial institutions to offer stablecoin services within existing regulatory frameworks.

Sygnum operates as a Swiss digital asset bank and offers banking as a service solutions. They help other banks integrate digital assets including stablecoins while ensuring compliance with Swiss banking regulations. Their approach combines traditional banking security with blockchain innovation.

Ripple has expanded beyond its cryptocurrency focus to offer enterprise blockchain solutions for banks, including infrastructure for stablecoin issuance and cross border payment solutions. Their partnerships with financial institutions worldwide position them as a bridge between traditional banking and blockchain technology.

BBVA and JPMorgan have also developed proprietary solutions (JPM Coin for JPMorgan) that other institutions might license or use as models, though these are typically more focused on their own operations and select partners.

7.1 The Bid Offer Spread Challenge: Liquidity vs. True 1:1 Conversions

One of the hidden costs in stablecoin adoption that significantly impacts user economics is the bid offer spread applied during conversions between fiat currency and stablecoins. While stablecoins are designed to maintain a 1:1 peg with their underlying asset (typically the US dollar), the reality of converting between fiat and crypto introduces market dynamics that can erode this theoretical parity.

7.1 Understanding the Spread Problem

When users convert fiat currency to stablecoins or vice versa through most platforms, they encounter a bid offer spread the difference between the buying price and selling price. Even though USDC or USDT theoretically equals $1.00, a platform might effectively charge $1.008 to buy a stablecoin and offer only $0.992 when selling it back. This 0.8% to 1.5% spread represents a significant friction cost, particularly for businesses making frequent conversions or moving large amounts.

This spread exists because most platforms operate market making models where they must maintain liquidity on both sides of the transaction. Holding inventory of both fiat and stablecoins involves costs: capital tied up in reserves, exposure to brief depegging events, regulatory compliance overhead, and the operational expense of managing banking relationships for fiat on ramps and off ramps. Platforms traditionally recover these costs through the spread rather than explicit fees.

For cryptocurrency exchanges and most fintech platforms, the spread also serves as their primary revenue mechanism for stablecoin conversions. When a platform facilitates thousands or millions of conversions daily, even small spreads generate substantial income. The spread compensates for the risk that during periods of market stress, stablecoins might temporarily trade below their peg, leaving the platform holding depreciated assets.

7.2 The Impact on Users and Business Operations

The cumulative effect of bid offer spreads becomes particularly painful for certain use cases. Small and medium sized businesses operating across borders face multiple conversion points: exchanging local currency to USD, converting USD to stablecoins for cross border transfer, then converting stablecoins back to USD or local currency at the destination. Each conversion compounds the cost, potentially consuming 2% to 4% of the transaction value when combined with traditional banking fees.

For businesses using stablecoins as working capital converting payroll, managing treasury operations, or settling international invoices the spread can eliminate much of the cost advantage that stablecoins are supposed to provide over traditional correspondent banking. A company converting $100,000 might effectively pay $1,500 in spread costs on a round trip conversion, comparable to traditional wire transfer fees that stablecoins aimed to disrupt.

Individual users in countries with unstable currencies face similar challenges. While holding USDT or USDC protects against local currency devaluation, the cost of frequently moving between local currency and stablecoins can be prohibitive. The spread becomes a “tax” on financial stability that disproportionately affects those who can least afford it.

7.3 Revolut’s 1:1 Model: Internalizing the Cost

Revolut’s recent introduction of true 1:1 conversions between USD and stablecoins (USDC and USDT) represents a fundamentally different approach to solving the spread problem. Rather than passing market making costs to users, Revolut absorbs the spread internally, guaranteeing that $1.00 in fiat equals exactly 1.00 stablecoin units in both directions, with no hidden markups.

This model is economically viable for Revolut because of several structural advantages. First, as a neobank with 65 million users and existing banking infrastructure, Revolut already maintains substantial fiat currency liquidity and doesn’t need to rely on external banking partners for every stablecoin conversion. Second, the company generates revenue from other services within its ecosystem subscription fees, interchange fees from card spending, interest on deposits allowing it to treat stablecoin conversions as a loss leader or break even feature that enhances customer retention and platform stickiness.

Third, by setting a monthly limit of approximately $578,000 per customer, Revolut manages its risk exposure while still accommodating the vast majority of retail and small business use cases. This prevents arbitrage traders from exploiting the zero spread model to make risk free profits by moving large volumes between Revolut and other platforms where spreads exist.

Revolut essentially bets that the value of removing friction from fiat crypto conversions thereby making stablecoins genuinely useful as working capital rather than speculative assets will drive sufficient user engagement and platform growth to justify the cost of eliminating spreads. For users, this transforms the economics of stablecoin usage, particularly for frequent converters or those operating in high currency volatility environments.

7.4 Why Not Everyone Can Offer 1:1 Conversions

The challenge for smaller platforms and pure cryptocurrency exchanges is that they lack Revolut’s structural advantages. A standalone crypto exchange without banking licenses and integrated fiat services must partner with banks for fiat on ramps, pay fees to those partners, maintain separate liquidity pools, and manage the regulatory complexity of operating in multiple jurisdictions. These costs don’t disappear simply because users want better rates they must be recovered somehow.

Additionally, maintaining tight spreads or true 1:1 conversions requires deep liquidity and sophisticated risk management. When thousands of users simultaneously want to exit stablecoins during market stress, a platform must have sufficient reserves to honor redemptions instantly without moving the price. Smaller platforms operating with thin liquidity buffers cannot safely eliminate spreads without risking insolvency during volatile periods.

The market structure for stablecoins also presents challenges. While stablecoins theoretically maintain 1:1 pegs, secondary market prices on decentralized exchanges and between different platforms can vary by small amounts. A platform offering guaranteed 1:1 conversions must either hold sufficient reserves to absorb these variations or accept that arbitrage traders will exploit any price discrepancies, potentially draining liquidity.

7.5 The Competitive Implications

Revolut’s move to zero spread stablecoin conversions could trigger a competitive dynamic in the fintech space, similar to how its original zero fee foreign exchange offering disrupted traditional currency conversion. Established players like Coinbase, Kraken, and other major exchanges will face pressure to reduce their spreads or explain why their costs remain higher.

For traditional banks contemplating stablecoin integration, the spread question becomes strategic. Banks could follow the Revolut model, absorbing spread costs to drive adoption and maintain customer relationships in an increasingly crypto integrated financial system. Alternatively, they might maintain spreads but offer other value added services that justify the cost, such as enhanced compliance, insurance on holdings, or integration with business treasury management systems.

The long term outcome may be market segmentation. Large, integrated fintech platforms with diverse revenue streams can offer true 1:1 conversions as a competitive advantage. Smaller, specialized platforms will continue operating with spreads but may differentiate through speed, blockchain coverage, or serving specific niches like high volume traders who value depth of liquidity over tight spreads.

For stablecoin issuers like Circle and Tether, the spread dynamics affect their business indirectly. Wider spreads on third party platforms create friction that slows stablecoin adoption, reducing the total assets under management that generate interest income for issuers. Partnerships with platforms offering tighter spreads or true 1:1 conversions could accelerate growth, even if those partnerships involve revenue sharing or other commercial arrangements.

Ultimately, the bid offer spread challenge highlights a fundamental tension in stablecoin economics: the gap between the theoretical promise of 1:1 value stability and the practical costs of maintaining liquidity, managing risk, and operating the infrastructure that connects fiat currency to blockchain based assets. Platforms that can bridge this gap efficiently whether through scale, integration, or innovative business models will have significant competitive advantages as stablecoins move from crypto native use cases into mainstream financial infrastructure.

8. Conclusion

Stablecoins represent a significant innovation in digital finance, offering the benefits of cryptocurrency without extreme volatility. They’ve found genuine utility in payments, remittances, and decentralized finance while generating substantial revenue for issuers through interest on reserves. However, they also carry real risks around reserve transparency, regulatory uncertainty, and potential fraud that users and institutions must carefully consider.

The regulatory landscape is rapidly evolving, with authorities worldwide moving toward treating stablecoins as systemically important financial infrastructure requiring bank like oversight. FATF standards impose traditional anti money laundering requirements on stablecoin service providers, creating compliance obligations comparable to traditional finance. Meanwhile, sophisticated monitoring tools have emerged to track flows, detect illicit activity, and ensure reserve adequacy.

For traditional banks, stablecoins represent both a competitive threat to correspondent banking models and an opportunity to modernize payment infrastructure. Rather than being displaced entirely, banks are increasingly positioning themselves as regulated gateways between fiat currency and stablecoins, maintaining customer relationships and compliance functions while leveraging blockchain efficiency.

For banks considering stablecoin integration, working with established infrastructure providers can mitigate technical and compliance challenges. The key is choosing use cases where stablecoins offer clear advantages, particularly in cross border payments and treasury management, while implementing robust risk management, transaction monitoring, and ensuring regulatory compliance with both traditional financial regulations and emerging crypto specific frameworks.

As the regulatory landscape evolves and technology matures, stablecoins are likely to become increasingly integrated into mainstream financial services. Their success will depend on maintaining trust through transparency, security, and regulatory cooperation while continuing to deliver value that traditional financial rails cannot match. The future likely involves a hybrid model where stablecoins and traditional banking coexist, each playing to their respective strengths in a more efficient, global financial system.

0
0

Building an advanced Browser Curl Script with Playwright and Selenium for load testing websites

Modern sites often block plain curl. Using a real browser engine (Chromium via Playwright) gives you true browser behavior: real TLS/HTTP2 stack, cookies, redirects, and JavaScript execution if needed. This post mirrors the functionality of the original browser_curl.sh wrapper but implemented with Playwright. It also includes an optional Selenium mini-variant at the end.

What this tool does

  • Sends realistic browser headers (Chrome-like)
  • Uses Chromium’s real network stack (HTTP/2, compression)
  • Manages cookies (persist to a file)
  • Follows redirects by default
  • Supports JSON and form POSTs
  • Async mode that returns immediately
  • --count N to dispatch N async requests for quick load tests

Note: Advanced bot defenses (CAPTCHAs, JS/ML challenges, strict TLS/HTTP2 fingerprinting) may still require full page automation and real user-like behavior. Playwright can do that too by driving real pages.

Setup

Run these once to install Playwright and Chromium:

npm init -y && \
npm install playwright && \
npx playwright install chromium

The complete Playwright CLI

Run this to create browser_playwright.mjs:

cat > browser_playwright.mjs << 'EOF'
#!/usr/bin/env node
import { chromium } from 'playwright';
import fs from 'fs';
import path from 'path';
import { spawn } from 'child_process';
const RED = '\u001b[31m';
const GRN = '\u001b[32m';
const YLW = '\u001b[33m';
const NC  = '\u001b[0m';
function usage() {
  const b = path.basename(process.argv[1]);
  console.log(`Usage: ${b} [OPTIONS] URL
Advanced HTTP client using Playwright (Chromium) with browser-like behavior.
OPTIONS:
  -X, --method METHOD        HTTP method (GET, POST, PUT, DELETE) [default: GET]
  -d, --data DATA            Request body
  -H, --header HEADER        Add custom header (repeatable)
  -o, --output FILE          Write response body to file
  -c, --cookie FILE          Cookie storage file [default: /tmp/pw_cookies_<pid>.json]
  -A, --user-agent UA        Custom User-Agent
  -t, --timeout SECONDS      Request timeout [default: 30]
      --async                Run request(s) in background
      --count N              Number of async requests to fire [default: 1, requires --async]
      --no-redirect          Do not follow redirects (best-effort)
      --show-headers         Print response headers
      --json                 Send data as JSON (sets Content-Type)
      --form                 Send data as application/x-www-form-urlencoded
  -v, --verbose              Verbose output
  -h, --help                 Show this help message
EXAMPLES:
  ${b} https://example.com
  ${b} --async https://example.com
  ${b} -X POST --json -d '{"a":1}' https://httpbin.org/post
  ${b} --async --count 10 https://httpbin.org/get
`);
}
function parseArgs(argv) {
  const args = { method: 'GET', async: false, count: 1, followRedirects: true, showHeaders: false, timeout: 30, data: '', contentType: '', cookieFile: '', verbose: false, headers: [], url: '' };
  for (let i = 0; i < argv.length; i++) {
    const a = argv[i];
    switch (a) {
      case '-X': case '--method': args.method = String(argv[++i] || 'GET'); break;
      case '-d': case '--data': args.data = String(argv[++i] || ''); break;
      case '-H': case '--header': args.headers.push(String(argv[++i] || '')); break;
      case '-o': case '--output': args.output = String(argv[++i] || ''); break;
      case '-c': case '--cookie': args.cookieFile = String(argv[++i] || ''); break;
      case '-A': case '--user-agent': args.userAgent = String(argv[++i] || ''); break;
      case '-t': case '--timeout': args.timeout = Number(argv[++i] || '30'); break;
      case '--async': args.async = true; break;
      case '--count': args.count = Number(argv[++i] || '1'); break;
      case '--no-redirect': args.followRedirects = false; break;
      case '--show-headers': args.showHeaders = true; break;
      case '--json': args.contentType = 'application/json'; break;
      case '--form': args.contentType = 'application/x-www-form-urlencoded'; break;
      case '-v': case '--verbose': args.verbose = true; break;
      case '-h': case '--help': usage(); process.exit(0);
      default:
        if (!args.url && !a.startsWith('-')) args.url = a; else {
          console.error(`${RED}Error: Unknown argument: ${a}${NC}`);
          process.exit(1);
        }
    }
  }
  return args;
}
function parseHeaderList(list) {
  const out = {};
  for (const h of list) {
    const idx = h.indexOf(':');
    if (idx === -1) continue;
    const name = h.slice(0, idx).trim();
    const value = h.slice(idx + 1).trim();
    if (!name) continue;
    out[name] = value;
  }
  return out;
}
function buildDefaultHeaders(userAgent) {
  const ua = userAgent || 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36';
  return {
    'User-Agent': ua,
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8',
    'Accept-Language': 'en-US,en;q=0.9',
    'Accept-Encoding': 'gzip, deflate, br',
    'Connection': 'keep-alive',
    'Upgrade-Insecure-Requests': '1',
    'Sec-Fetch-Dest': 'document',
    'Sec-Fetch-Mode': 'navigate',
    'Sec-Fetch-Site': 'none',
    'Sec-Fetch-User': '?1',
    'Cache-Control': 'max-age=0'
  };
}
async function performRequest(opts) {
  // Cookie file handling
  const defaultCookie = `/tmp/pw_cookies_${process.pid}.json`;
  const cookieFile = opts.cookieFile || defaultCookie;
  // Launch Chromium
  const browser = await chromium.launch({ headless: true });
  const extraHeaders = { ...buildDefaultHeaders(opts.userAgent), ...parseHeaderList(opts.headers) };
  if (opts.contentType) extraHeaders['Content-Type'] = opts.contentType;
  const context = await browser.newContext({ userAgent: extraHeaders['User-Agent'], extraHTTPHeaders: extraHeaders });
  // Load cookies if present
  if (fs.existsSync(cookieFile)) {
    try {
      const ss = JSON.parse(fs.readFileSync(cookieFile, 'utf8'));
      if (ss.cookies?.length) await context.addCookies(ss.cookies);
    } catch {}
  }
  const request = context.request;
  // Build request options
  const reqOpts = { headers: extraHeaders, timeout: opts.timeout * 1000 };
  if (opts.data) {
    // Playwright will detect JSON strings vs form strings by headers
    reqOpts.data = opts.data;
  }
  if (opts.followRedirects === false) {
    // Best-effort: limit redirects to 0
    reqOpts.maxRedirects = 0;
  }
  const method = opts.method.toUpperCase();
  let resp;
  try {
    if (method === 'GET') resp = await request.get(opts.url, reqOpts);
    else if (method === 'POST') resp = await request.post(opts.url, reqOpts);
    else if (method === 'PUT') resp = await request.put(opts.url, reqOpts);
    else if (method === 'DELETE') resp = await request.delete(opts.url, reqOpts);
    else if (method === 'PATCH') resp = await request.patch(opts.url, reqOpts);
    else {
      console.error(`${RED}Unsupported method: ${method}${NC}`);
      await browser.close();
      process.exit(2);
    }
  } catch (e) {
    console.error(`${RED}[ERROR] ${e?.message || e}${NC}`);
    await browser.close();
    process.exit(3);
  }
  // Persist cookies
  try {
    const state = await context.storageState();
    fs.writeFileSync(cookieFile, JSON.stringify(state, null, 2));
  } catch {}
  // Output
  const status = resp.status();
  const statusText = resp.statusText();
  const headers = await resp.headers();
  const body = await resp.text();
  if (opts.verbose) {
    console.error(`${YLW}Request: ${method} ${opts.url}${NC}`);
    console.error(`${YLW}Headers: ${JSON.stringify(extraHeaders)}${NC}`);
  }
  if (opts.showHeaders) {
    // Print a simple status line and headers to stdout before body
    console.log(`HTTP ${status} ${statusText}`);
    for (const [k, v] of Object.entries(headers)) {
      console.log(`${k}: ${v}`);
    }
    console.log('');
  }
  if (opts.output) {
    fs.writeFileSync(opts.output, body);
  } else {
    process.stdout.write(body);
  }
  if (!resp.ok()) {
    console.error(`${RED}[ERROR] HTTP ${status} ${statusText}${NC}`);
    await browser.close();
    process.exit(4);
  }
  await browser.close();
}
async function main() {
  const argv = process.argv.slice(2);
  const opts = parseArgs(argv);
  if (!opts.url) { console.error(`${RED}Error: URL is required${NC}`); usage(); process.exit(1); }
  if ((opts.count || 1) > 1 && !opts.async) {
    console.error(`${RED}Error: --count requires --async${NC}`);
    process.exit(1);
  }
  if (opts.count < 1 || !Number.isInteger(opts.count)) {
    console.error(`${RED}Error: --count must be a positive integer${NC}`);
    process.exit(1);
  }
  if (opts.async) {
    // Fire-and-forget background processes
    const baseArgs = process.argv.slice(2).filter(a => a !== '--async' && !a.startsWith('--count'));
    const pids = [];
    for (let i = 0; i < opts.count; i++) {
      const child = spawn(process.execPath, [process.argv[1], ...baseArgs], { detached: true, stdio: 'ignore' });
      pids.push(child.pid);
      child.unref();
    }
    if (opts.verbose) {
      console.error(`${YLW}[ASYNC] Spawned ${opts.count} request(s).${NC}`);
    }
    if (opts.count === 1) console.error(`${GRN}[ASYNC] Request started with PID: ${pids[0]}${NC}`);
    else console.error(`${GRN}[ASYNC] ${opts.count} requests started with PIDs: ${pids.join(' ')}${NC}`);
    process.exit(0);
  }
  await performRequest(opts);
}
main().catch(err => {
  console.error(`${RED}[FATAL] ${err?.stack || err}${NC}`);
  process.exit(1);
});
EOF
chmod +x browser_playwright.mjs

Optionally, move it into your PATH:

sudo mv browser_playwright.mjs /usr/local/bin/browser_playwright

Quick start

  • Simple GET:
node browser_playwright.mjs https://example.com
  • Async GET (returns immediately):
node browser_playwright.mjs --async https://example.com
  • Fire 100 async requests in one command:
node browser_playwright.mjs --async --count 100 https://httpbin.org/get

  • POST JSON:
node browser_playwright.mjs -X POST --json \
  -d '{"username":"user","password":"pass"}' \
  https://httpbin.org/post
  • POST form data:
node browser_playwright.mjs -X POST --form \
  -d "username=user&password=pass" \
  https://httpbin.org/post
  • Include response headers:
node browser_playwright.mjs --show-headers https://example.com
  • Save response to a file:
node browser_playwright.mjs -o response.json https://httpbin.org/json
  • Custom headers:
node browser_playwright.mjs \
  -H "X-API-Key: your-key" \
  -H "Authorization: Bearer token" \
  https://httpbin.org/headers
  • Persistent cookies across requests:
COOKIE_FILE="playwright_session.json"
# Login and save cookies
node browser_playwright.mjs -c "$COOKIE_FILE" \
  -X POST --form \
  -d "user=test&pass=secret" \
  https://httpbin.org/post > /dev/null
# Authenticated-like follow-up (cookie file reused)
node browser_playwright.mjs -c "$COOKIE_FILE" \
  https://httpbin.org/cookies

Load testing patterns

  • Simple load test with --count:
node browser_playwright.mjs --async --count 100 https://httpbin.org/get
  • Loop-based alternative:
for i in {1..100}; do
  node browser_playwright.mjs --async https://httpbin.org/get
done
  • Timed load test:
cat > pw_load_for_duration.sh << 'EOF'
#!/usr/bin/env bash
URL="${1:-https://httpbin.org/get}"
DURATION="${2:-60}"
COUNT=0
END_TIME=$(($(date +%s) + DURATION))
while [ "$(date +%s)" -lt "$END_TIME" ]; do
  node browser_playwright.mjs --async "$URL" >/dev/null 2>&1
  ((COUNT++))
done
echo "Sent $COUNT requests in $DURATION seconds"
echo "Rate: $((COUNT / DURATION)) requests/second"
EOF
chmod +x pw_load_for_duration.sh
./pw_load_for_duration.sh https://httpbin.org/get 30
  • Parameterized load test:
cat > pw_load_test.sh << 'EOF'
#!/usr/bin/env bash
URL="${1:-https://httpbin.org/get}"
REQUESTS="${2:-50}"
echo "Load testing: $URL"
echo "Requests: $REQUESTS"
echo ""
START=$(date +%s)
node browser_playwright.mjs --async --count "$REQUESTS" "$URL"
echo ""
echo "Dispatched in $(($(date +%s) - START)) seconds"
EOF
chmod +x pw_load_test.sh
./pw_load_test.sh https://httpbin.org/get 200

Options reference

  • -X, --method HTTP method (GET/POST/PUT/DELETE/PATCH)
  • -d, --data Request body
  • -H, --header Add extra headers (repeatable)
  • -o, --output Write response body to file
  • -c, --cookie Cookie file to use (and persist)
  • -A, --user-agent Override User-Agent
  • -t, --timeout Max request time in seconds (default 30)
  • --async Run request(s) in the background
  • --count N Fire N async requests (requires --async)
  • --no-redirect Best-effort disable following redirects
  • --show-headers Include response headers before body
  • --json Sets Content-Type: application/json
  • --form Sets Content-Type: application/x-www-form-urlencoded
  • -v, --verbose Verbose diagnostics

Validation rules:

  • --count requires --async
  • --count must be a positive integer

Under the hood: why this works better than plain curl

  • Real Chromium network stack (HTTP/2, TLS, compression)
  • Browser-like headers and a true User-Agent
  • Cookie jar via Playwright context storageState
  • Redirect handling by the browser stack

This helps pass simplistic bot checks and more closely resembles real user traffic.

Real-world examples

  • API-style auth flow (demo endpoints):
cat > pw_auth_flow.sh << 'EOF'
#!/usr/bin/env bash
COOKIE_FILE="pw_auth_session.json"
BASE="https://httpbin.org"
echo "Login (simulated form POST)..."
node browser_playwright.mjs -c "$COOKIE_FILE" \
  -X POST --form \
  -d "user=user&pass=pass" \
  "$BASE/post" > /dev/null
echo "Fetch cookies..."
node browser_playwright.mjs -c "$COOKIE_FILE" \
  "$BASE/cookies"
echo "Load test a protected-like endpoint..."
node browser_playwright.mjs -c "$COOKIE_FILE" \
  --async --count 20 \
  "$BASE/get"
echo "Done"
rm -f "$COOKIE_FILE"
EOF
chmod +x pw_auth_flow.sh
./pw_auth_flow.sh
  • Scraping with rate limiting:
cat > pw_scrape.sh << 'EOF'
#!/usr/bin/env bash
URLS=(
  "https://example.com/"
  "https://example.com/"
  "https://example.com/"
)
for url in "${URLS[@]}"; do
  echo "Fetching: $url"
  node browser_playwright.mjs -o "$(echo "$url" | sed 's#[/:]#_#g').html" "$url"
  sleep 2
done
EOF
chmod +x pw_scrape.sh
./pw_scrape.sh
  • Health check monitoring:
cat > pw_health.sh << 'EOF'
#!/usr/bin/env bash
ENDPOINT="${1:-https://httpbin.org/status/200}"
while true; do
  if node browser_playwright.mjs "$ENDPOINT" >/dev/null 2>&1; then
    echo "$(date): Service healthy"
  else
    echo "$(date): Service unhealthy"
  fi
  sleep 30
done
EOF
chmod +x pw_health.sh
./pw_health.sh

  • Hanging or quoting issues: ensure your shell quoting is balanced. Prefer simple commands without complex inline quoting.
  • Verbose mode too noisy: omit -v in production.
  • Cookie file format: the script writes Playwright storageState JSON. It’s safe to keep or delete.
  • 403 errors: site uses stronger protections. Drive a real page (Playwright page.goto) and interact, or solve CAPTCHAs where required.

Performance notes

Dispatch time depends on process spawn and Playwright startup. For higher throughput, consider reusing the same Node process to issue many requests (modify the script to loop internally) or use k6/Locust/Artillery for large-scale load testing.

Limitations

  • This CLI uses Playwright’s HTTP client bound to a Chromium context. It is much closer to real browsers than curl, but some advanced fingerprinting still detects automation.
  • WebSocket flows, MFA, or complex JS challenges generally require full page automation (which Playwright supports).

When to use what

  • Use this Playwright CLI when you need realistic browser behavior, cookies, and straightforward HTTP requests with quick async dispatch.
  • Use full Playwright page automation for dynamic content, complex logins, CAPTCHAs, and JS-heavy sites.

Advanced combos

  • With jq for JSON processing:
node browser_playwright.mjs https://httpbin.org/json | jq '.slideshow.title'
  • With parallel for concurrency:
echo -e "https://httpbin.org/get\nhttps://httpbin.org/headers" | \
parallel -j 5 "node browser_playwright.mjs -o {#}.json {}"
  • With watch for monitoring:
watch -n 5 "node browser_playwright.mjs https://httpbin.org/status/200 >/dev/null && echo ok || echo fail"
  • With xargs for batch processing:
echo -e "1\n2\n3" | xargs -I {} node browser_playwright.mjs "https://httpbin.org/anything/{}"

Future enhancements

  • Built-in rate limiting and retry logic
  • Output modes (JSON-only, headers-only)
  • Proxy support
  • Response assertions (status codes, content patterns)
  • Metrics collection (timings, success rates)

Minimal Selenium variant (Python)

If you prefer Selenium, here’s a minimal GET/headers/redirect/cookie-capable script. Note: issuing cross-origin POST bodies is more ergonomic with Playwright’s request client; Selenium focuses on page automation.

Install Selenium:

python3 -m venv .venv && source .venv/bin/activate
pip install --upgrade pip selenium

Create browser_selenium.py:

cat > browser_selenium.py << 'EOF'
#!/usr/bin/env python3
import argparse, json, os, sys, time
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
RED='\033[31m'; GRN='\033[32m'; YLW='\033[33m'; NC='\033[0m'
def parse_args():
    p = argparse.ArgumentParser(description='Minimal Selenium GET client')
    p.add_argument('url')
    p.add_argument('-o','--output')
    p.add_argument('-c','--cookie', default=f"/tmp/selenium_cookies_{os.getpid()}.json")
    p.add_argument('--show-headers', action='store_true')
    p.add_argument('-t','--timeout', type=int, default=30)
    p.add_argument('-A','--user-agent')
    p.add_argument('-v','--verbose', action='store_true')
    return p.parse_args()
args = parse_args()
opts = Options()
opts.add_argument('--headless=new')
if args.user_agent:
    opts.add_argument(f'--user-agent={args.user_agent}')
with webdriver.Chrome(options=opts) as driver:
    driver.set_page_load_timeout(args.timeout)
    # Load cookies if present (domain-specific; best-effort)
    if os.path.exists(args.cookie):
        try:
            ck = json.load(open(args.cookie))
            for c in ck.get('cookies', []):
                try:
                    driver.get('https://' + c.get('domain').lstrip('.'))
                    driver.add_cookie({
                        'name': c['name'], 'value': c['value'], 'path': c.get('path','/'),
                        'domain': c.get('domain'), 'secure': c.get('secure', False)
                    })
                except Exception:
                    pass
        except Exception:
            pass
    driver.get(args.url)
    # Persist cookies (best-effort)
    try:
        cookies = driver.get_cookies()
        json.dump({'cookies': cookies}, open(args.cookie, 'w'), indent=2)
    except Exception:
        pass
    if args.output:
        open(args.output, 'w').write(driver.page_source)
    else:
        sys.stdout.write(driver.page_source)
EOF
chmod +x browser_selenium.py

Use it:

./browser_selenium.py https://example.com > out.html

Conclusion

You now have a Playwright-powered CLI that mirrors the original curl-wrapper’s ergonomics but uses a real browser engine, plus a minimal Selenium alternative. Use the CLI for realistic headers, cookies, redirects, JSON/form POSTs, and async dispatch with --count. For tougher sites, scale up to full page automation with Playwright.

Resources

0
0

Building a Browser Curl Wrapper for Reliable HTTP Requests and Load Testing

Modern websites deploy bot defenses that can block plain curl or naive scripts. In many cases, adding the right browser-like headers, HTTP/2, cookie persistence, and compression gets you past basic filters without needing a full browser.

This post walks through a small shell utility, browser_curl.sh, that wraps curl with realistic browser behavior. It also supports “fire-and-forget” async requests and a --count flag to dispatch many requests at once for quick load tests.

What this script does

  • Sends browser-like headers (Chrome on macOS)
  • Uses HTTP/2 and compression
  • Manages cookies automatically (cookie jar)
  • Follows redirects by default
  • Supports JSON and form POSTs
  • Async mode that returns immediately
  • --count N to dispatch N async requests in one command

Note: This approach won’t solve advanced bot defenses that require JavaScript execution (e.g., Cloudflare Turnstile/CAPTCHAs or TLS/HTTP2 fingerprinting); for that, use a real browser automation tool like Playwright or Selenium.

The complete script

Save this as browser_curl.sh and make it executable in one command:

cat > browser_curl.sh << 'EOF' && chmod +x browser_curl.sh
#!/bin/bash

# browser_curl.sh - Advanced curl wrapper that mimics browser behavior
# Designed to bypass Cloudflare and other bot protection

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color

# Default values
METHOD="GET"
ASYNC=false
COUNT=1
FOLLOW_REDIRECTS=true
SHOW_HEADERS=false
OUTPUT_FILE=""
TIMEOUT=30
DATA=""
CONTENT_TYPE=""
COOKIE_FILE="/tmp/browser_curl_cookies_$$.txt"
VERBOSE=false

# Browser fingerprint (Chrome on macOS)
USER_AGENT="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36"

usage() {
    cat << EOH
Usage: $(basename "$0") [OPTIONS] URL

Advanced curl wrapper that mimics browser behavior to bypass bot protection.

OPTIONS:
    -X, --method METHOD        HTTP method (GET, POST, PUT, DELETE, etc.) [default: GET]
    -d, --data DATA           POST/PUT data
    -H, --header HEADER       Add custom header (can be used multiple times)
    -o, --output FILE         Write output to file
    -c, --cookie FILE         Use custom cookie file [default: temp file]
    -A, --user-agent UA       Custom user agent [default: Chrome on macOS]
    -t, --timeout SECONDS     Request timeout [default: 30]
    --async                   Run request asynchronously in background
    --count N                 Number of async requests to fire [default: 1, requires --async]
    --no-redirect             Don't follow redirects
    --show-headers            Show response headers
    --json                    Send data as JSON (sets Content-Type)
    --form                    Send data as form-urlencoded
    -v, --verbose             Verbose output
    -h, --help                Show this help message

EXAMPLES:
    # Simple GET request
    $(basename "$0") https://example.com

    # Async GET request
    $(basename "$0") --async https://example.com

    # POST with JSON data
    $(basename "$0") -X POST --json -d '{"username":"test"}' https://api.example.com/login

    # POST with form data
    $(basename "$0") -X POST --form -d "username=test&password=secret" https://example.com/login

    # Multiple async requests (using loop)
    for i in {1..10}; do
        $(basename "$0") --async https://example.com/api/endpoint
    done

    # Multiple async requests (using --count)
    $(basename "$0") --async --count 10 https://example.com/api/endpoint

EOH
    exit 0
}

# Parse arguments
EXTRA_HEADERS=()
URL=""

while [[ $# -gt 0 ]]; do
    case $1 in
        -X|--method)
            METHOD="$2"
            shift 2
            ;;
        -d|--data)
            DATA="$2"
            shift 2
            ;;
        -H|--header)
            EXTRA_HEADERS+=("$2")
            shift 2
            ;;
        -o|--output)
            OUTPUT_FILE="$2"
            shift 2
            ;;
        -c|--cookie)
            COOKIE_FILE="$2"
            shift 2
            ;;
        -A|--user-agent)
            USER_AGENT="$2"
            shift 2
            ;;
        -t|--timeout)
            TIMEOUT="$2"
            shift 2
            ;;
        --async)
            ASYNC=true
            shift
            ;;
        --count)
            COUNT="$2"
            shift 2
            ;;
        --no-redirect)
            FOLLOW_REDIRECTS=false
            shift
            ;;
        --show-headers)
            SHOW_HEADERS=true
            shift
            ;;
        --json)
            CONTENT_TYPE="application/json"
            shift
            ;;
        --form)
            CONTENT_TYPE="application/x-www-form-urlencoded"
            shift
            ;;
        -v|--verbose)
            VERBOSE=true
            shift
            ;;
        -h|--help)
            usage
            ;;
        *)
            if [[ -z "$URL" ]]; then
                URL="$1"
            else
                echo -e "${RED}Error: Unknown argument '$1'${NC}" >&2
                exit 1
            fi
            shift
            ;;
    esac
done

# Validate URL
if [[ -z "$URL" ]]; then
    echo -e "${RED}Error: URL is required${NC}" >&2
    usage
fi

# Validate count
if [[ "$COUNT" -gt 1 ]] && [[ "$ASYNC" == false ]]; then
    echo -e "${RED}Error: --count requires --async${NC}" >&2
    exit 1
fi

if ! [[ "$COUNT" =~ ^[0-9]+$ ]] || [[ "$COUNT" -lt 1 ]]; then
    echo -e "${RED}Error: --count must be a positive integer${NC}" >&2
    exit 1
fi

# Execute curl
execute_curl() {
    # Build curl arguments as array instead of string
    local -a curl_args=()
    
    # Basic options
    curl_args+=("--compressed")
    curl_args+=("--max-time" "$TIMEOUT")
    curl_args+=("--connect-timeout" "10")
    curl_args+=("--http2")
    
    # Cookies (ensure file exists to avoid curl warning)
    : > "$COOKIE_FILE" 2>/dev/null || true
    curl_args+=("--cookie" "$COOKIE_FILE")
    curl_args+=("--cookie-jar" "$COOKIE_FILE")
    
    # Follow redirects
    if [[ "$FOLLOW_REDIRECTS" == true ]]; then
        curl_args+=("--location")
    fi
    
    # Show headers
    if [[ "$SHOW_HEADERS" == true ]]; then
        curl_args+=("--include")
    fi
    
    # Output file
    if [[ -n "$OUTPUT_FILE" ]]; then
        curl_args+=("--output" "$OUTPUT_FILE")
    fi
    
    # Verbose
    if [[ "$VERBOSE" == true ]]; then
        curl_args+=("--verbose")
    else
        curl_args+=("--silent" "--show-error")
    fi
    
    # Method
    curl_args+=("--request" "$METHOD")
    
    # Browser-like headers
    curl_args+=("--header" "User-Agent: $USER_AGENT")
    curl_args+=("--header" "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8")
    curl_args+=("--header" "Accept-Language: en-US,en;q=0.9")
    curl_args+=("--header" "Accept-Encoding: gzip, deflate, br")
    curl_args+=("--header" "Connection: keep-alive")
    curl_args+=("--header" "Upgrade-Insecure-Requests: 1")
    curl_args+=("--header" "Sec-Fetch-Dest: document")
    curl_args+=("--header" "Sec-Fetch-Mode: navigate")
    curl_args+=("--header" "Sec-Fetch-Site: none")
    curl_args+=("--header" "Sec-Fetch-User: ?1")
    curl_args+=("--header" "Cache-Control: max-age=0")
    
    # Content-Type for POST/PUT
    if [[ -n "$DATA" ]]; then
        if [[ -n "$CONTENT_TYPE" ]]; then
            curl_args+=("--header" "Content-Type: $CONTENT_TYPE")
        fi
        curl_args+=("--data" "$DATA")
    fi
    
    # Extra headers
    for header in "${EXTRA_HEADERS[@]}"; do
        curl_args+=("--header" "$header")
    done
    
    # URL
    curl_args+=("$URL")
    
    if [[ "$ASYNC" == true ]]; then
        # Run asynchronously in background
        if [[ "$VERBOSE" == true ]]; then
            echo -e "${YELLOW}[ASYNC] Running $COUNT request(s) in background...${NC}" >&2
            echo -e "${YELLOW}Command: curl ${curl_args[*]}${NC}" >&2
        fi
        
        # Fire multiple requests if count > 1
        local pids=()
        for ((i=1; i<=COUNT; i++)); do
            # Run in background detached, suppress all output
            nohup curl "${curl_args[@]}" >/dev/null 2>&1 &
            local pid=$!
            disown $pid
            pids+=("$pid")
        done
        
        if [[ "$COUNT" -eq 1 ]]; then
            echo -e "${GREEN}[ASYNC] Request started with PID: ${pids[0]}${NC}" >&2
        else
            echo -e "${GREEN}[ASYNC] $COUNT requests started with PIDs: ${pids[*]}${NC}" >&2
        fi
    else
        # Run synchronously
        if [[ "$VERBOSE" == true ]]; then
            echo -e "${YELLOW}Command: curl ${curl_args[*]}${NC}" >&2
        fi
        
        curl "${curl_args[@]}"
        local exit_code=$?
        
        if [[ $exit_code -ne 0 ]]; then
            echo -e "${RED}[ERROR] Request failed with exit code: $exit_code${NC}" >&2
            return $exit_code
        fi
    fi
}

# Cleanup temp cookie file on exit (only if using default temp file)
cleanup() {
    if [[ "$COOKIE_FILE" == "/tmp/browser_curl_cookies_$$"* ]] && [[ -f "$COOKIE_FILE" ]]; then
        rm -f "$COOKIE_FILE"
    fi
}

# Only set cleanup trap for synchronous requests
if [[ "$ASYNC" == false ]]; then
    trap cleanup EXIT
fi

# Main execution
execute_curl

# For async requests, exit immediately without waiting
if [[ "$ASYNC" == true ]]; then
    exit 0
fi
EOF

Optionally, move it to your PATH:

sudo mv browser_curl.sh /usr/local/bin/browser_curl

Quick start

Simple GET request

./browser_curl.sh https://example.com

Async GET (returns immediately)

./browser_curl.sh --async https://example.com

Fire 100 async requests in one command

./browser_curl.sh --async --count 100 https://example.com/api

Common examples

POST JSON

./browser_curl.sh -X POST --json \
  -d '{"username":"user","password":"pass"}' \
  https://api.example.com/login

POST form data

./browser_curl.sh -X POST --form \
  -d "username=user&password=pass" \
  https://example.com/login

Include response headers

./browser_curl.sh --show-headers https://example.com

Save response to a file

./browser_curl.sh -o response.json https://api.example.com/data

Custom headers

./browser_curl.sh \
  -H "X-API-Key: your-key" \
  -H "Authorization: Bearer token" \
  https://api.example.com/data

Persistent cookies across requests

COOKIE_FILE="session_cookies.txt"

# Login and save cookies
./browser_curl.sh -c "$COOKIE_FILE" \
  -X POST --form \
  -d "user=test&pass=secret" \
  https://example.com/login

# Authenticated request using saved cookies
./browser_curl.sh -c "$COOKIE_FILE" \
  https://example.com/dashboard

Load testing patterns

Simple load test with –count

The easiest way to fire multiple requests:

./browser_curl.sh --async --count 100 https://example.com/api

Example output:

[ASYNC] 100 requests started with PIDs: 1234 1235 1236 ... 1333

Performance: 100 requests dispatched in approximately 0.09 seconds

Loop-based approach (alternative)

for i in {1..100}; do
  ./browser_curl.sh --async https://example.com/api
done

Timed load test

Run continuous requests for a specific duration:

#!/bin/bash
URL="https://example.com/api"
DURATION=60  # seconds
COUNT=0

END_TIME=$(($(date +%s) + DURATION))
while [ "$(date +%s)" -lt "$END_TIME" ]; do
  ./browser_curl.sh --async "$URL" > /dev/null 2>&1
  ((COUNT++))
done

echo "Sent $COUNT requests in $DURATION seconds"
echo "Rate: $((COUNT / DURATION)) requests/second"

Parameterized load test script

#!/bin/bash
URL="${1:-https://httpbin.org/get}"
REQUESTS="${2:-50}"

echo "Load testing: $URL"
echo "Requests: $REQUESTS"
echo ""

START=$(date +%s)
./browser_curl.sh --async --count "$REQUESTS" "$URL"
echo ""
echo "Dispatched in $(($(date +%s) - START)) seconds"

Usage:

./load_test.sh https://api.example.com/endpoint 200

Options reference

OptionDescriptionDefault
-X, --methodHTTP method (GET/POST/PUT/DELETE)GET
-d, --dataRequest body (JSON or form)
-H, --headerAdd extra headers (repeatable)
-o, --outputWrite response to a filestdout
-c, --cookieCookie file to use (and persist)temp file
-A, --user-agentOverride User-AgentChrome/macOS
-t, --timeoutMax request time in seconds30
--asyncRun request(s) in the backgroundfalse
--count NFire N async requests (requires --async)1
--no-redirectDon’t follow redirectsfollows
--show-headersInclude response headersfalse
--jsonSets Content-Type: application/json
--formSets Content-Type: application/x-www-form-urlencoded
-v, --verboseVerbose diagnosticsfalse
-h, --helpShow usage

Validation rules:

  • --count requires --async
  • --count must be a positive integer

Under the hood: why this works better than plain curl

Browser-like headers

The script automatically adds these headers to mimic Chrome:

User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36...
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif...
Accept-Language: en-US,en;q=0.9
Accept-Encoding: gzip, deflate, br
Connection: keep-alive
Sec-Fetch-Dest: document
Sec-Fetch-Mode: navigate
Sec-Fetch-Site: none
Sec-Fetch-User: ?1
Cache-Control: max-age=0
Upgrade-Insecure-Requests: 1

HTTP/2 + compression

  • Uses --http2 flag for HTTP/2 protocol support
  • Enables --compressed for automatic gzip/brotli decompression
  • Closer to modern browser behavior
  • Maintains session cookies across redirects and calls
  • Persists cookies to file for reuse
  • Automatically created and cleaned up

Redirect handling

  • Follows redirects by default with --location
  • Critical for login flows, SSO, and OAuth redirects

These features help bypass basic bot detection that blocks obvious non-browser clients.

Real-world examples

Example 1: API authentication flow

cd ~/Desktop/warp
bash -c 'cat > test_auth.sh << '\''SCRIPT'\''
#!/bin/bash
COOKIE_FILE="auth_session.txt"
API_BASE="https://api.example.com"

echo "Logging in..."
./browser_curl.sh -c "$COOKIE_FILE" -X POST --json -d "{\"username\":\"user\",\"password\":\"pass\"}" "$API_BASE/auth/login" > /dev/null

echo "Fetching profile..."
./browser_curl.sh -c "$COOKIE_FILE" "$API_BASE/user/profile" | jq .

echo "Load testing..."
./browser_curl.sh -c "$COOKIE_FILE" --async --count 50 "$API_BASE/api/data"

echo "Done!"
rm -f "$COOKIE_FILE"
SCRIPT
chmod +x test_auth.sh
./test_auth.sh'

Example 2: Scraping with rate limiting

#!/bin/bash
URLS=(
  "https://example.com/page1"
  "https://example.com/page2"
  "https://example.com/page3"
)

for url in "${URLS[@]}"; do
  echo "Fetching: $url"
  ./browser_curl.sh -o "$(basename "$url").html" "$url"
  sleep 2  # Rate limiting
done

Example 3: Health check monitoring

#!/bin/bash
ENDPOINT="https://api.example.com/health"

while true; do
  if ./browser_curl.sh "$ENDPOINT" | grep -q "healthy"; then
    echo "$(date): Service healthy"
  else
    echo "$(date): Service unhealthy"
  fi
  sleep 30
done

Installing browser_curl to your PATH

If you want browser_curl.sh to be available anywhere then install it on your path using:

mkdir -p ~/.local/bin
echo "Installing browser_curl to ~/.local/bin/browser_curl"
install -m 0755 ~/Desktop/warp/browser_curl.sh ~/.local/bin/browser_curl

echo "Ensuring ~/.local/bin is on PATH via ~/.zshrc"
grep -q 'export PATH="$HOME/.local/bin:$PATH"' ~/.zshrc || \
  echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.zshrc

echo "Reloading shell config (~/.zshrc)"
source ~/.zshrc

echo "Verifying browser_curl is on PATH"
command -v browser_curl && echo "browser_curl is installed and on PATH" || echo "browser_curl not found on PATH"

Troubleshooting

Issue: Hanging with dquote> prompt

Cause: Shell quoting issue (unbalanced quotes)

Solution: Use simple, direct commands

# Good
./browser_curl.sh --async https://example.com

# Bad (unbalanced quotes)
echo "test && ./browser_curl.sh --async https://example.com && echo "done"

For chaining commands:

echo Start; ./browser_curl.sh --async https://example.com; echo Done

Issue: Verbose mode produces too much output

Cause: -v flag prints all curl diagnostics to stderr

Solution: Remove -v for production use:

# Debug mode
./browser_curl.sh -v https://example.com

# Production mode
./browser_curl.sh https://example.com

Cause: First-time cookie file creation

Solution: The script now pre-creates the cookie file automatically. You can ignore any residual warnings.

Issue: 403 Forbidden errors

Cause: Site has stronger protections (JavaScript challenges, TLS fingerprinting)

Solution: Consider using real browser automation:

  • Playwright (Python/Node.js)
  • Selenium
  • Puppeteer

Or combine approaches:

  1. Use Playwright to initialize session and get cookies
  2. Export cookies to file
  3. Use browser_curl.sh -c cookies.txt for subsequent requests

Performance benchmarks

Tests conducted on 2023 MacBook Pro M2, macOS Sonoma:

TestTimeRequests/sec
Single sync requestapproximately 0.2s
10 async requests (–count)approximately 0.03s333/s
100 async requests (–count)approximately 0.09s1111/s
1000 async requests (–count)approximately 0.8s1250/s

Note: Dispatch time only; actual HTTP completion depends on target server.

Limitations

What this script CANNOT do

  • JavaScript execution – Can’t solve JS challenges (use Playwright)
  • CAPTCHA solving – Requires human intervention or services
  • Advanced TLS fingerprinting – Can’t mimic exact browser TLS stack
  • HTTP/2 fingerprinting – Can’t perfectly match browser HTTP/2 frames
  • WebSocket connections – HTTP only
  • Browser API access – No Canvas, WebGL, Web Crypto fingerprints

What this script CAN do

  • Basic header spoofing – Pass simple User-Agent checks
  • Cookie management – Maintain sessions
  • Load testing – Quick async request dispatch
  • API testing – POST/PUT/DELETE with JSON/form data
  • Simple scraping – Pages without JS requirements
  • Health checks – Monitoring endpoints

When to use what

Use browser_curl.sh when:

  • Target has basic bot detection (header checks)
  • API testing with authentication
  • Quick load testing (less than 10k requests)
  • Monitoring/health checks
  • No JavaScript required
  • You want a lightweight tool

Use Playwright/Selenium when:

  • Target requires JavaScript execution
  • CAPTCHA challenges present
  • Advanced fingerprinting detected
  • Need to interact with dynamic content
  • Heavy scraping with anti-bot measures
  • Login flows with MFA/2FA

Hybrid approach:

  1. Use Playwright to bootstrap session
  2. Extract cookies
  3. Use browser_curl.sh for follow-up requests (faster)

Advanced: Combining with other tools

With jq for JSON processing

./browser_curl.sh https://api.example.com/users | jq '.[] | .name'

With parallel for concurrency control

cat urls.txt | parallel -j 10 "./browser_curl.sh -o {#}.html {}"

With watch for monitoring

watch -n 5 "./browser_curl.sh https://api.example.com/health | jq .status"

With xargs for batch processing

cat ids.txt | xargs -I {} ./browser_curl.sh "https://api.example.com/item/{}"

Future enhancements

Potential features to add:

  • Rate limiting – Built-in requests/second throttling
  • Retry logic – Exponential backoff on failures
  • Output formats – JSON-only, CSV, headers-only modes
  • Proxy support – SOCKS5/HTTP proxy options
  • Custom TLS – Certificate pinning, client certs
  • Response validation – Assert status codes, content patterns
  • Metrics collection – Timing stats, success rates
  • Configuration file – Default settings per domain

Conclusion

browser_curl.sh provides a pragmatic middle ground between plain curl and full browser automation. For many APIs and websites with basic bot filters, browser-like headers, proper protocol use, and cookie handling are sufficient.

Key takeaways:

  • Simple wrapper around curl with realistic browser behavior
  • Async mode with --count for easy load testing
  • Works for basic bot detection, not advanced challenges
  • Combine with Playwright for tough targets
  • Lightweight and fast for everyday API work

The script is particularly useful for:

  • API development and testing
  • Quick load testing during development
  • Monitoring and health checks
  • Simple scraping tasks
  • Learning curl features

For production load testing at scale, consider tools like k6, Locust, or Artillery. For heavy web scraping with anti-bot measures, invest in proper browser automation infrastructure.

Resources

0
0

Amazon Aurora DSQL: A Deep Dive into Performance and Limitations

1. Executive Summary

Amazon Aurora DSQL represents AWS’s ambitious entry into the distributed SQL database market, announced at re:Invent 2024. It’s a serverless, distributed SQL database featuring active active high availability and PostgreSQL compatibility. While the service offers impressive architectural innovations including 99.99% single region and 99.999% multi region availability, but it comes with significant limitations that developers must carefully consider. This analysis examines Aurora DSQL’s performance characteristics, architectural tradeoffs, and critical constraints that impact real world applications.

Key Takeaways:

  • Aurora DSQL excels at multiregion active active workloads with low latency reads and writes
  • Significant PostgreSQL compatibility gaps limit migration paths for existing applications
  • Optimistic concurrency control requires application level retry logic
  • Preview phase limitations include 10,000 row transaction limits and missing critical features
  • Pricing model is complex and difficult to predict without production testing

1.1 Architecture Overview

Aurora DSQL fundamentally reimagines distributed database architecture by decoupling transaction processing from storage. The service overcomes two historical challenges: achieving multi region strong consistency with low latency and syncing servers with microsecond accuracy around the globe.

1.2 Core Components

The system consists of three independently scalable components:

  1. Compute Layer: Executes SQL queries without managing locks
  2. Commit/Journal Layer: Handles transaction ordering and conflict detection
  3. Storage Layer: Provides durable, queryable storage built from transaction logs

The Journal logs every transaction, while the Adjudicator component manages transaction isolation and conflict resolution. This separation allows Aurora DSQL to scale each component independently based on workload demands.

1.3 Multi Region Architecture

In multi region configurations, clusters provide two regional endpoints that present a single logical database supporting concurrent read and write operations with strong data consistency. A third witness region stores transaction logs for recovery purposes, enabling the system to maintain availability even during regional failures.

2. Performance Characteristics

2.1 Read and Write Latency

In simple workload tests, read latency achieves single digit milliseconds, while write latency is approximately two roundtrip times to the nearest region at commit. This predictable latency model differs significantly from traditional databases where performance degrades under contention.

Transaction latency remains constant relative to statement count, even across regions. This consistency provides predictable performance characteristics regardless of transaction complexity—a significant advantage for globally distributed applications.

2.2 Scalability Claims

AWS claims Aurora DSQL delivers reads and writes that are four times faster than Google Cloud Spanner. However, real world benchmarks will further reveal performance at extreme scale, as Aurora DSQL is yet to be truly tested on an enterprise scale.

The service provides virtually unlimited scalability from a single endpoint, eliminating manual provisioning and management of database instances. The architecture automatically partitions the key space to detect conflicting transactions, allowing it to scale without traditional sharding complexity.

2.3 Concurrency Control Tradeoffs

Aurora DSQL uses optimistic concurrency control (OCC), where transactions run without considering other concurrent transactions, with conflict detection happening at commit time. While this prevents slow transactions from blocking others, it requires applications to handle transaction retries.

OCC provides better scalability for query processing and a more robust cluster for realistic failures by avoiding locking mechanisms that can lead to deadlocks or performance bottlenecks. However, this comes at the cost of increased application complexity.

3. Critical Limitations

3.1 Transaction Constraints

The preview version imposes several hard limits that significantly impact application design:

  • Maximum 10,000 rows modified per transaction
  • Transaction size cannot exceed 10 MiB
  • Sessions are capped at 1 hour, requiring reconnection for long lived processes
  • Cannot mix DDL and DML statements within a single transaction

Common DDL/DML Limitation Example:

The most common pattern that fails in Aurora DSQL is SELECT INTO, which combines table creation and data population in a single statement:

BEGIN;
SELECT id, username, created_at 
INTO new_users_copy 
FROM users 
WHERE active = true;
COMMIT;  -- ERROR: SELECT INTO mixes DDL and DML

This pattern is extremely common in:

  • ETL processes that create temporary staging tables and load data with SELECT INTO
  • Reporting workflows that materialize query results into new tables
  • Database migrations that create tables and seed initial data
  • Test fixtures that set up schema and populate test data
  • Multi-tenant applications that dynamically create tenant-specific tables and initialize them

The workaround requires splitting into separate transactions using CREATE TABLE followed by INSERT INTO ... SELECT:

-- Transaction 1: Create table structure
BEGIN;
CREATE TABLE new_users_copy (
    id BIGINT,
    username TEXT,
    created_at TIMESTAMP
);
COMMIT;

-- Transaction 2: Populate with data
BEGIN;
INSERT INTO new_users_copy 
SELECT id, username, created_at 
FROM users 
WHERE active = true;
COMMIT;

This limitation stems from Aurora DSQL’s asynchronous DDL processing architecture, where schema changes are propagated separately from data changes across the distributed system.

These constraints make Aurora DSQL unsuitable for batch processing workloads or applications requiring large bulk operations.

3.2 Missing PostgreSQL Features

Aurora DSQL sacrifices several critical PostgreSQL features for performance and scalability:

Not Supported:

  • No foreign keys
  • No temporary tables
  • No views
  • No triggers, PL/pgSQL, sequences, or explicit locking
  • No serializable isolation level or LOCK TABLE support
  • No PostgreSQL extensions (pgcrypto, PostGIS, PGVector, hstore)

Unsupported Data Types:

  • No SERIAL or BIGSERIAL (auto incrementing integers)
  • No JSON or JSONB types
  • No range types (tsrange, int4range, etc.)
  • No geospatial types (geometry, geography)
  • No vector types
  • TEXT type limited to 1MB (vs. 1GB in PostgreSQL)

Aurora DSQL prioritizes scalability, sacrificing some features for performance. The distributed architecture and asynchronous DDL requirements prevent support for extensions that depend on PostgreSQL’s internal storage format. These omissions require significant application redesign for PostgreSQL migrations.

3.3 Isolation Level Limitations

Aurora DSQL supports strong snapshot isolation, equivalent to repeatable read isolation in PostgreSQL. This creates several important implications:

  1. Write Skew Anomalies: Applications susceptible to write skew cannot rely on serializable isolation to prevent conflicts.

What is Write Skew?

Write skew is a database anomaly that occurs when two concurrent transactions read overlapping data, make disjoint updates based on what they read, and both commit successfully—even though the result violates a business constraint. This happens because snapshot isolation only detects direct write write conflicts on the same rows, not constraint violations across different rows.

Classic example: A hospital requires at least two doctors on call at all times. Two transactions check the current count (finds 2 doctors), both see the requirement is satisfied, and both remove themselves from the on call roster. Both transactions modify different rows (their own records), so snapshot isolation sees no conflict. Both commit successfully, leaving zero doctors on call and violating the business rule.

In PostgreSQL with serializable isolation, this would be prevented because the database detects the anomaly and aborts one transaction. Aurora DSQL’s snapshot isolation cannot prevent this, requiring application level logic to handle such constraints.

  1. Referential Integrity Challenges: Without foreign keys and serializable isolation, maintaining referential integrity requires application side logic using SELECT FOR UPDATE to lock parent keys during child table inserts.
  2. Conflict Detection: Aurora DSQL uses optimistic concurrency control where SELECT FOR UPDATE intents are not synchronized to be visible to other transactions until commit.

4. Inter Region Consistency and Data Durability

4.1 Strong Consistency Guarantees

Aurora DSQL provides strong consistency across all regional endpoints, ensuring that reads and writes to any endpoint always reflect the same logical state. This is achieved through synchronous replication and careful transaction ordering:

How It Works:

  1. Synchronous Commits: When a transaction commits, Aurora DSQL writes to the distributed transaction log and synchronously replicates all committed log data across regions before acknowledging the commit to the client.
  2. Single Logical Database: Both regional endpoints in a multi region cluster present the same logical database. Readers consistently see the same data regardless of which endpoint they query.
  3. Zero Replication Lag: Unlike traditional asynchronous replication, there is no replication lag on commit. The commit only succeeds after data is durably stored across regions.
  4. Witness Region: A third region stores transaction logs for durability but doesn’t have a query endpoint. This witness region ensures multi region durability without requiring three full active regions.

4.2 Can You Lose Data?

Aurora DSQL is designed with strong durability guarantees that make data loss extremely unlikely:

Single Region Configuration:

  • All write transactions are synchronously replicated to storage replicas across three Availability Zones
  • Replication uses quorum based commits, ensuring data survives even if one AZ fails completely
  • No risk of data loss due to replication lag because replication is always synchronous

Multi Region Configuration:

  • Committed transactions are synchronously written to the transaction log in both active regions plus the witness region
  • A transaction only acknowledges success after durable storage across multiple regions
  • Even if an entire AWS region becomes unavailable, committed data remains accessible from the other region

Failure Scenarios:

Single AZ Failure: Aurora DSQL automatically routes to healthy AZs. No data loss occurs because data is replicated across three AZs synchronously.

Single Region Failure (Multi Region Setup): Applications can continue operating from the remaining active region with zero data loss. All committed transactions were synchronously replicated before the commit acknowledgment was sent.

Component Failure: Individual component failures (compute, storage, journal) are handled through Aurora DSQL’s self healing architecture. The system automatically repairs failed replicas asynchronously while serving requests from healthy components.

4.3 Will Everything Always Be in Both Regions?

Yes, with important caveats:

Committed Data: Once a transaction receives a commit acknowledgment, that data is guaranteed to exist in both active regions. The synchronous replication model ensures this.

Uncommitted Transactions: Transactions that haven’t yet committed exist only in their originating region’s session state. If that region fails before commit, the transaction is lost (which is expected behavior).

Durability vs. Availability Tradeoff: The strong consistency model means that if cross region network connectivity is lost, write operations may be impacted. Aurora DSQL prioritizes consistency over availability in the CAP theorem sense, it won’t accept writes that can’t be properly replicated.

Geographic Restrictions: Multi region clusters are currently limited to geographic groupings (US regions together, European regions together, Asia Pacific regions together). You cannot pair US East with EU West, which limits truly global active active deployments.

4.4 Consistency Risks and Limitations

While Aurora DSQL provides strong consistency, developers should understand these considerations:

Network Partition Handling: In the event of a network partition between regions, Aurora DSQL’s behavior depends on which components can maintain quorum. The system is designed to maintain consistency, which may mean rejecting writes rather than accepting writes that can’t be properly replicated.

Write Skew at Application Level: While individual transactions are consistent, applications must still handle write skew anomalies that can occur with snapshot isolation (as discussed in the Isolation Level Limitations section).

Time Synchronization Dependency: Aurora DSQL relies on Amazon Time Sync Service for precise time coordination. While highly reliable, this creates a subtle dependency on time synchronization for maintaining transaction ordering across regions.

This test measures basic insert latency using the PostgreSQL wire protocol:

#!/bin/bash
# Basic Aurora DSQL Performance Test
# Prerequisites: AWS CLI, psql, jq
# Configuration
REGION="us-east-1"
CLUSTER_ENDPOINT="your-cluster-endpoint.dsql.us-east-1.on.aws"
DATABASE="testdb"
# Generate temporary authentication token
export PGPASSWORD=$(aws dsql generate-db-connect-admin-auth-token \
  --hostname $CLUSTER_ENDPOINT \
  --region $REGION \
  --expires-in 3600)
export PGHOST=$CLUSTER_ENDPOINT
export PGDATABASE=$DATABASE
export PGUSER=admin
# Create test table
psql << 'EOF'
DROP TABLE IF EXISTS perf_test;
CREATE TABLE perf_test (
  id BIGSERIAL PRIMARY KEY,
  data TEXT,
  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
EOF
# Run timed insert test
echo "Running 100 sequential inserts..."
time psql -c "
DO \$\$
DECLARE
  i INT;
BEGIN
  FOR i IN 1..100 LOOP
    INSERT INTO perf_test (data) 
    VALUES ('test_data_' || i);
  END LOOP;
END \$\$;
"
# Test transaction commit latency
echo "Testing transaction commit latency..."
psql << 'EOF'
\timing on
BEGIN;
INSERT INTO perf_test (data) VALUES ('commit_test');
COMMIT;
EOF

4.4 Concurrency Control Testing

Test optimistic concurrency behavior and conflict detection:

#!/usr/bin/env python3
"""
Aurora DSQL Concurrency Test
Tests optimistic concurrency control and retry logic
"""
import psycopg2
import boto3
import time
from concurrent.futures import ThreadPoolExecutor, as_completed
def get_auth_token(endpoint, region):
    """Generate IAM authentication token"""
    client = boto3.client('dsql', region_name=region)
    return client.generate_db_connect_admin_auth_token(
        hostname=endpoint,
        region=region,
        expires_in=3600
    )
def connect(endpoint, database, region):
    """Create database connection"""
    token = get_auth_token(endpoint, region)
    return psycopg2.connect(
        host=endpoint,
        database=database,
        user='admin',
        password=token,
        sslmode='require'
    )
def update_with_retry(endpoint, database, region, row_id, new_value, max_retries=5):
    """Update with exponential backoff retry logic"""
    retries = 0
    delay = 0.1
    
    while retries < max_retries:
        conn = None
        try:
            conn = connect(endpoint, database, region)
            cursor = conn.cursor()
            
            # Start transaction
            cursor.execute("BEGIN")
            
            # Read current value
            cursor.execute(
                "SELECT value FROM test_table WHERE id = %s FOR UPDATE",
                (row_id,)
            )
            current = cursor.fetchone()
            
            # Simulate some processing
            time.sleep(0.01)
            
            # Update value
            cursor.execute(
                "UPDATE test_table SET value = %s WHERE id = %s",
                (new_value, row_id)
            )
            
            # Commit
            cursor.execute("COMMIT")
            
            return True, retries
            
        except psycopg2.Error as e:
            if "change conflicts with another transaction" in str(e):
                retries += 1
                if retries < max_retries:
                    time.sleep(delay)
                    delay *= 2  # Exponential backoff
                    continue
            raise
        finally:
            if conn:
                conn.close()
    
    return False, max_retries
def run_concurrency_test(endpoint, database, region, num_threads=10):
    """Run concurrent updates on same row"""
    
    # Setup test table
    conn = connect(endpoint, database, region)
    cursor = conn.cursor()
    cursor.execute("""
        CREATE TABLE IF NOT EXISTS test_table (
            id BIGINT PRIMARY KEY,
            value INT
        )
    """)
    cursor.execute("INSERT INTO test_table (id, value) VALUES (1, 0)")
    conn.commit()
    conn.close()
    
    # Run concurrent updates
    start_time = time.time()
    results = {'success': 0, 'failed': 0, 'total_retries': 0}
    
    with ThreadPoolExecutor(max_workers=num_threads) as executor:
        futures = [
            executor.submit(update_with_retry, endpoint, database, region, 1, i)
            for i in range(num_threads)
        ]
        
        for future in as_completed(futures):
            success, retries = future.result()
            if success:
                results['success'] += 1
            else:
                results['failed'] += 1
            results['total_retries'] += retries
    
    elapsed = time.time() - start_time
    
    print(f"\nConcurrency Test Results:")
    print(f"Duration: {elapsed:.2f}s")
    print(f"Successful: {results['success']}")
    print(f"Failed: {results['failed']}")
    print(f"Total Retries: {results['total_retries']}")
    print(f"Avg Retries per Transaction: {results['total_retries']/num_threads:.2f}")
if __name__ == "__main__":
    ENDPOINT = "your-cluster-endpoint.dsql.us-east-1.on.aws"
    DATABASE = "testdb"
    REGION = "us-east-1"
    
    run_concurrency_test(ENDPOINT, DATABASE, REGION)

4.5 Multi Region Latency Test

Measure cross region write latency:

// Node.js Multi-Region Latency Test
const { Client } = require('pg');
const AWS = require('aws-sdk');
async function getAuthToken(endpoint, region) {
  const dsql = new AWS.DSQL({ region });
  const params = {
    hostname: endpoint,
    region: region,
    expiresIn: 3600
  };
  return dsql.generateDbConnectAdminAuthToken(params);
}
async function testRegionalLatency(endpoints, database) {
  const results = [];
  
  for (const [region, endpoint] of Object.entries(endpoints)) {
    const token = await getAuthToken(endpoint, region);
    
    const client = new Client({
      host: endpoint,
      database: database,
      user: 'admin',
      password: token,
      ssl: { rejectUnauthorized: true }
    });
    
    await client.connect();
    
    // Measure read latency
    const readStart = Date.now();
    await client.query('SELECT 1');
    const readLatency = Date.now() - readStart;
    
    // Measure write latency (includes commit sync)
    const writeStart = Date.now();
    await client.query('BEGIN');
    await client.query('INSERT INTO latency_test (ts) VALUES (NOW())');
    await client.query('COMMIT');
    const writeLatency = Date.now() - writeStart;
    
    results.push({
      region,
      readLatency,
      writeLatency
    });
    
    await client.end();
  }
  
  console.log('Multi-Region Latency Results:');
  console.table(results);
}
// Usage
const endpoints = {
  'us-east-1': 'cluster-1.dsql.us-east-1.on.aws',
  'us-west-2': 'cluster-1.dsql.us-west-2.on.aws'
};
testRegionalLatency(endpoints, 'testdb');

4.6 Transaction Size Limit Test

Verify the 10,000 row transaction limit impacts on operations:

#!/usr/bin/env python3
"""
Test Aurora DSQL transaction size limits
"""
import psycopg2
import boto3
def connect(endpoint, database, region):
    """Create database connection"""
    client = boto3.client('dsql', region_name=region)
    token = client.generate_db_connect_admin_auth_token(
        hostname=endpoint,
        region=region,
        expires_in=3600
    )
    return psycopg2.connect(
        host=endpoint,
        database=database,
        user='admin',
        password=token,
        sslmode='require'
    )
def test_limits(endpoint, database, region):
    """Test transaction row limits"""
    conn = connect(endpoint, database, region)
    cursor = conn.cursor()
    
    # Create test table
    cursor.execute("""
        CREATE TABLE IF NOT EXISTS limit_test (
            id BIGSERIAL PRIMARY KEY,
            data TEXT
        )
    """)
    conn.commit()
    
    # Test under limit (should succeed)
    print("Testing 9,999 row insert (under limit)...")
    try:
        cursor.execute("BEGIN")
        for i in range(9999):
            cursor.execute(
                "INSERT INTO limit_test (data) VALUES (%s)",
                (f"row_{i}",)
            )
        cursor.execute("COMMIT")
        print("✓ Success: Under-limit transaction committed")
    except Exception as e:
        print(f"✗ Failed: {e}")
        cursor.execute("ROLLBACK")
    
    conn.close()
if __name__ == "__main__":
    ENDPOINT = "your-cluster-endpoint.dsql.us-east-1.on.aws"
    DATABASE = "testdb"
    REGION = "us-east-1"
    
    test_limits(ENDPOINT, DATABASE, REGION)

5. Aurora DSQL and af-south-1 (Cape Town): Regional Availability and Performance Impact

5.1 Current Regional Support

As of the preview launch in November 2024, Amazon Aurora DSQL is not available in the af-south-1 (Cape Town) region. AWS has initially launched DSQL in a limited set of regions, focusing on major US, European, and Asia Pacific markets. The Cape Town region, while part of AWS’s global infrastructure, has not been included in the initial preview rollout.

This absence is significant for African businesses and organizations looking to leverage DSQL’s distributed SQL capabilities, as there is currently no way to deploy a regional endpoint within the African continent.

5.2 Geographic Pairing Limitations

Even if af-south-1 support is added in the future, Aurora DSQL’s current multi region architecture imposes geographic restrictions on region pairing. According to AWS documentation, multi region clusters are limited to specific geographic groupings:

  • US regions can only pair with other US regions
  • European regions can only pair with other European regions
  • Asia Pacific regions can only pair with other Asia Pacific regions

This means that even with af-south-1 support, African deployments would likely be restricted to pairing with other African regions. Given that AWS currently operates only one region in Africa (Cape Town), true multi region DSQL deployment within the continent would require AWS to launch additional African regions first.

5.3 The 150ms RTT Challenge

The network round trip time (RTT) between Cape Town and major AWS regions presents a fundamental performance challenge for Aurora DSQL deployments. Typical RTT measurements from af-south-1 to other regions include:

  • af-south-1 to eu-west-1 (Ireland): approximately 150ms
  • af-south-1 to us-east-1 (Virginia): approximately 200ms
  • af-south-1 to ap-southeast-1 (Singapore): approximately 280ms

To understand the performance impact of 150ms RTT, we need to examine how Aurora DSQL’s architecture handles write operations.

5.4 Write Performance Impact Analysis

Aurora DSQL’s write latency is directly tied to network round trip times because of its synchronous replication model. When a transaction commits:

  1. The client sends a COMMIT request to the nearest regional endpoint
  2. The commit layer synchronously replicates the transaction log to all regions in the cluster
  3. The system waits for acknowledgment from all regions before confirming the commit to the client
  4. Only after all regions have durably stored the transaction does the client receive confirmation

According to AWS documentation, write latency is approximately two round trip times to the nearest region at commit. This means:

5.5 Single Region Scenario (Hypothetical af-south-1)

If Aurora DSQL were available in af-south-1 as a single region deployment, write performance would be competitive with other regions. The write latency would be roughly:

  • Local RTT within af-south-1: approximately 2-5ms (between availability zones)
  • Expected write latency: 4-10ms (two RTTs)

This would provide acceptable performance for most transactional workloads.

5.6 Multi Region Scenario (af-south-1 paired with eu-west-1)

In a multi region configuration pairing Cape Town with Ireland, the 150ms RTT creates severe performance constraints:

  • Cross region RTT: 150ms
  • Write latency calculation: 2 × 150ms = 300ms minimum per write transaction
  • Best case scenario with optimizations: 250-350ms per transaction

This 300ms write latency has cascading effects on application throughput:

Throughput Impact:

  • With 300ms per transaction, a single connection can complete approximately 3.3 transactions per second
  • To achieve 100 transactions per second, you would need at least 30 concurrent database connections
  • To achieve 1,000 transactions per second, you would need 300+ concurrent connections

Comparison to US/EU Deployments:

  • us-east-1 to us-west-2: approximately 60ms RTT = 120ms write latency
  • eu-west-1 to eu-central-1: approximately 25ms RTT = 50ms write latency

An af-south-1 to eu-west-1 pairing would experience 6x slower writes compared to typical European region pairings and 2.5x slower writes compared to cross-US deployments.

5.7 Optimistic Concurrency Control Compounds the Problem

Aurora DSQL’s optimistic concurrency control (OCC) makes the latency problem worse for African deployments. When transactions conflict:

  1. The application must detect the conflict (after waiting 300ms for the failed commit)
  2. The application implements retry logic with exponential backoff
  3. Each retry attempt incurs another 300ms commit latency

For a workload with 20% conflict rate requiring retries:

  • First attempt: 300ms (80% success)
  • Second attempt: 300ms + backoff delay (15% success)
  • Third attempt: 300ms + backoff delay (4% success)
  • Fourth attempt: 300ms + backoff delay (1% success)

Average transaction time balloons to 400-500ms when accounting for retries and backoff delays, reducing effective throughput to 2-2.5 transactions per second per connection.

5.8 Read Performance Considerations

While write performance suffers significantly, read performance from af-south-1 would be less impacted:

  • Local reads: If querying data from a local af-south-1 endpoint, read latency would remain in single digit milliseconds
  • Cross region reads: Reading from eu-west-1 while located in Cape Town would incur the 150ms RTT, but this is expected behavior for geo-distributed queries

Aurora DSQL’s architecture excels at providing low latency reads from the nearest regional endpoint, so applications that are read-heavy could still benefit from local read performance even with degraded write performance.

5.9 Cost Implications

The 150ms RTT also impacts cost through increased DPU consumption:

  • Longer transaction times mean connections remain open longer
  • More concurrent connections are needed to achieve target throughput
  • Higher retry rates consume additional DPUs for conflict resolution
  • Network data transfer costs between continents are higher than intra-region transfer

A workload that might consume 1 million DPUs in a us-east-1/us-west-2 configuration could easily consume 2-3 million DPUs in an af-south-1/eu-west-1 configuration due to longer transaction times and retry overhead.

At $8 per million DPUs, this represents a 2-3x cost increase purely from geographic latency, before considering higher network transfer costs.

5.10 Viability Assessment for African Deployments

Aurora DSQL is currently not viable for af-south-1 deployments due to:

5.1 Immediate Blockers

  1. No regional availability: DSQL is not offered in af-south-1
  2. No local pairing options: Even if available, there are no other African regions to pair with for multi region deployments
  3. Geographic restrictions: Current architecture prevents pairing af-south-1 with regions outside Africa

5.2 Performance Barriers (if it were available)

  1. Write performance: 300ms write latency makes DSQL unsuitable for:
    • High frequency transactional systems
    • Real time inventory management
    • Interactive user facing applications requiring sub 100ms response times
    • Any workload requiring more than 10-20 transactions per second without massive connection pooling
  2. Cost efficiency: 2-3x higher DPU consumption makes the service economically unviable compared to regional alternatives
  3. Retry amplification: Optimistic concurrency control multiplies the latency problem, making high contention workloads essentially unusable

5.3 Scenarios where DSQL might work (Hypothetically)

If Aurora DSQL becomes available in af-south-1, it could potentially serve:

  1. Asynchronous workloads: Background job processing, batch operations, and tasks where 300ms latency is acceptable
  2. Read-heavy applications: Systems that primarily read locally but occasionally sync writes to Europe
  3. Low-volume transactional systems: Applications processing fewer than 10 transactions per second
  4. Eventually consistent workflows: Systems that can tolerate write delays and handle retries gracefully

However, for these use cases, traditional Aurora PostgreSQL with cross region read replicas or DynamoDB Global Tables would likely provide better performance and cost efficiency.

5.4 Recommendations for African Organizations

Until AWS expands Aurora DSQL availability and addresses the latency constraints, organizations in Africa should consider:

  1. Aurora PostgreSQL: Deploy in af-south-1 with cross region read replicas to Europe or the Middle East for disaster recovery
  2. DynamoDB Global Tables: For globally distributed data with eventual consistency requirements
  3. RDS PostgreSQL: For traditional relational workloads that don’t require multi region active active
  4. Self-managed solutions: CockroachDB or YugabyteDB can be deployed in af-south-1 and paired with regions of your choice, avoiding AWS’s geographic restrictions

5.5 Future Outlook

For Aurora DSQL to become viable in Africa, AWS would need to:

  1. Launch DSQL in af-south-1: Basic prerequisite for any African deployment
  2. Add more African regions: To enable multi region deployments within acceptable latency bounds
  3. Remove geographic pairing restrictions: Allow af-south-1 to pair with me-south-1 (Bahrain) or other nearby regions with better latency profiles
  4. Optimize for high latency scenarios: Implement asynchronous commit options or relaxed consistency modes for geo-distributed deployments

None of these improvements have been announced, and given DSQL’s preview status, African availability is unlikely in the near term.

5.6 Quantified Performance Impact Summary

Single write transaction latency:

  • Local (within af-south-1): 4-10ms (estimated)
  • To eu-west-1: 300ms (6x slower than EU region pairs, 2.5x slower than US region pairs)
  • To us-east-1: 400ms
  • To ap-southeast-1: 560ms

Throughput per connection:

  • Local: 100-250 TPS (estimated)
  • To eu-west-1: 3.3 TPS (30x reduction)
  • With 20% retry rate: 2-2.5 TPS (50x reduction)

Cost multiplier:

  • Estimated 2-3x DPU consumption compared to low latency region pairs
  • Additional cross continental data transfer costs

Conclusion: The 150ms RTT to Europe creates a 6x write latency penalty and reduces per-connection throughput by 30-50x. Combined with the lack of regional availability, geographic pairing restrictions, and cost implications, Aurora DSQL is not viable for African deployments in its current form. Organizations in af-south-1 should continue using traditional database solutions until AWS addresses these fundamental constraints.

5.7 Performance Analysis

5.7.1 Strengths

  1. Predictable Latency: Transaction latency remains constant regardless of statement count, providing consistent performance characteristics that simplify capacity planning.
  2. Multi Region Active Active: Both regional endpoints support concurrent read and write operations with strong data consistency, enabling true active active configurations without complex replication lag management.
  3. No Single Point of Contention: A single slow client or long running query doesn’t impact other transactions because contention is handled at commit time on the server side.

5.7.2 Weaknesses

  1. High Contention Workload Performance: Applications with frequent updates to small key ranges experience high retry rates (see detailed explanation in Performance Analysis section above).
  2. Application Complexity: Aurora DSQL’s optimistic concurrency control minimizes cross region latency but requires applications to handle retries. This shifts complexity from the database to application code.
  3. Feature Gaps: Missing PostgreSQL features like foreign keys, triggers, views, and critical data types like JSON/JSONB require application redesign. Some developers view it as barely a database, more like a key value store with basic PostgreSQL wire compatibility.
  4. Unpredictable Costs: The pricing model is monumentally confusing, with costs varying based on DPU consumption that’s difficult to predict without production testing.

5.8 Use Case Recommendations

5.8.1 Good Fit

Global Ecommerce Platforms

Applications requiring continuous availability across regions with strong consistency for inventory and order management. If your business depends on continuous availability—like global ecommerce or financial platforms—Aurora DSQL’s active active model is a game changer.

Multi Tenant SaaS Applications

Services with dynamic scaling requirements and geographic distribution of users. The automatic scaling eliminates capacity planning concerns.

Financial Services (with caveats)

Transaction processing systems that can implement application level retry logic and work within the snapshot isolation model.

5.8.2 Poor Fit

Batch Processing Systems

The 10,000 row transaction limit makes Aurora DSQL unsuitable for bulk data operations, ETL processes, or large scale data migrations.

Legacy PostgreSQL Applications

Applications depending on foreign keys, triggers, stored procedures, views, or serializable isolation will require extensive rewrites.

High Contention Workloads

Applications with frequent updates to small key ranges (like continuously updating stock tickers, inventory counters for popular items, or high frequency account balance updates) will experience high retry rates and degraded throughput due to optimistic concurrency control. See the detailed explanation in the Performance Analysis section for why this occurs.

6. Comparison with Alternatives

vs. Google Cloud Spanner

Aurora DSQL claims 4x faster performance, but Spanner has been used for multi region consistent deployments at proven enterprise scale. Spanner uses its own SQL dialect, while Aurora DSQL provides PostgreSQL wire protocol compatibility.

vs. CockroachDB

YugabyteDB is more compatible with PostgreSQL, supporting features like triggers, PL/pgSQL, foreign keys, sequences, all isolation levels, and explicit locking. CockroachDB offers similar advantages with battle tested multi cloud deployment options, while Aurora DSQL is AWS exclusive and still in preview.

vs. Aurora PostgreSQL

Traditional Aurora PostgreSQL provides full PostgreSQL compatibility with proven reliability but lacks the multi region active active capabilities and automatic horizontal scaling of DSQL. The choice depends on whether distributed architecture benefits outweigh compatibility trade offs.

6.1 Production Readiness Assessment

Preview Status Concerns

As of November 2024, Aurora DSQL remains in public preview with several implications:

  • No production SLA guarantees
  • Feature set still evolving
  • Limited regional availability
  • Pricing subject to change at general availability

Missing Observability

During preview, instrumentation is limited. PostgreSQL doesn’t provide EXPLAIN ANALYZE output from commits, making it difficult to understand what happens during the synchronization and wait phases.

Migration Complexity

Aurora DSQL prioritizes scalability, sacrificing some features for performance. This requires careful evaluation of application dependencies on unsupported PostgreSQL features before attempting migration.

6.2 Pricing Considerations

Billing for Aurora DSQL is based on two primary measures: Distributed Processing Units (DPU) and storage, with costs of $8 per million DPU and $0.33 per GB month in US East.

However, DPU consumption varies unpredictably based on query complexity, making cost forecasting extremely difficult. The result of cost modeling exercises amounts to “yes, this will cost you some amount of money,” which is unacceptable when costs rise beyond science experiment levels.

The AWS Free Tier provides 100,000 DPUs and 1 GB month of storage monthly, allowing for initial testing without costs.

6.3 Recommendations

For New Applications

Aurora DSQL makes sense for greenfield projects where:

  • Multi region active active is a core requirement
  • Application can be designed around optimistic concurrency from the start
  • Features like foreign keys and triggers aren’t architectural requirements
  • Team accepts preview stage maturity risks

For Existing Applications

Migration from PostgreSQL requires:

  • Comprehensive audit of PostgreSQL feature dependencies
  • Redesign of referential integrity enforcement
  • Implementation of retry logic for optimistic concurrency
  • Extensive testing to validate cost models
  • Acceptance that some features may require application level implementation

Testing Strategy

Before production commitment:

  1. Benchmark actual workloads against Aurora DSQL to measure real DPU consumption
  2. Test at production scale to validate the 10,000 row transaction limit doesn’t impact operations
  3. Implement comprehensive retry logic and verify behavior under contention
  4. Measure cross region latency for your specific geographic requirements
  5. Calculate total cost of ownership including application development effort for missing features

7. Conclusion

Amazon Aurora DSQL represents significant innovation in distributed SQL database architecture, solving genuine problems around multi region strong consistency and operational simplicity. The technical implementation particularly the disaggregated architecture and optimistic concurrency control demonstrates sophisticated engineering.

However, the service makes substantial tradeoffs that limit its applicability. The missing PostgreSQL features, transaction size constraints, and optimistic concurrency requirements create significant migration friction for existing applications. The unpredictable pricing model adds further uncertainty.

For organizations building new globally distributed applications with flexible architectural requirements, Aurora DSQL deserves serious evaluation. For teams with existing PostgreSQL applications or those requiring full PostgreSQL compatibility, traditional Aurora PostgreSQL or alternative distributed SQL databases may provide better paths forward.

As the service matures beyond preview status, AWS will likely address some limitations and provide better cost prediction tools. Until then, Aurora DSQL remains a promising but unfinished solution that requires careful evaluation against specific requirements and willingness to adapt applications to its architectural constraints.

References

Last Updated: November 2024 | Preview Status: Public Preview

0
0

A Script to download Photos, Videos and Images from your iPhone to your Macbook (by creation date and a file name filter)

Annoying Apple never quite got around to making it easy to offload images from your iPhone to your Macbook. So below is a complete guide to automatically download photos and videos from your iPhone to your MacBook, with options to filter by pattern and date, and organize into folders by creation date.

Prerequisites

Install the required tools using Homebrew:

cat > install_iphone_util.sh << 'EOF'
#!/bin/bash
set -e

echo "Installing tools..."

echo "Installing macFUSE"
brew install --cask macfuse

echo "Adding Brew Tap" 
brew tap gromgit/fuse

echo "Installing ifuse-mac" 
brew install gromgit/fuse/ifuse-mac

echo "Installing libimobiledevice" 
brew install libimobiledevice

echo "Installing exiftool"
brew install exiftool

echo "Done! Tools installed."
EOF

echo "Making executable..."
chmod +x install_iphone_util.sh

./install_iphone_util.sh

Setup/Pair your iPhone to your Macbook

  1. Connect your iPhone to your MacBook via USB
  2. Trust the computer on your iPhone when prompted
  3. Verify the connection:
idevicepair validate

If not paired, run:

idevicepair pair

Download Script

Run the script below to create the file download-iphone-media.sh in your current directory:

#!/bin/bash

cat > download-iphone-media.sh << 'OUTER_EOF'
#!/bin/bash

# iPhone Media Downloader
# Downloads photos and videos from iPhone to MacBook
# Supports resumable, idempotent downloads

set -e

# Default values
PATTERN="*"
OUTPUT_DIR="."
ORGANIZE_BY_DATE=false
START_DATE=""
END_DATE=""
MOUNT_POINT="/tmp/iphone_mount"
STATE_DIR=""
VERIFY_CHECKSUM=true

# Usage function
usage() {
    cat << 'INNER_EOF'
Usage: $0 [OPTIONS]

Download photos and videos from iPhone to MacBook.

OPTIONS:
    -p PATTERN          File pattern to match (e.g., "*.jpg", "*.mp4", "IMG_*")
                        Default: * (all files)
    -o OUTPUT_DIR       Output directory (default: current directory)
    -d                  Organize files by creation date into YYYY/MMM folders
    -s START_DATE       Start date filter (YYYY-MM-DD)
    -e END_DATE         End date filter (YYYY-MM-DD)
    -r                  Resume incomplete downloads (default: true)
    -n                  Skip checksum verification (faster, less safe)
    -h                  Show this help message

EXAMPLES:
    # Download all photos and videos to current directory
    $0

    # Download only JPG files to ~/Pictures/iPhone
    $0 -p "*.jpg" -o ~/Pictures/iPhone

    # Download all media organized by date
    $0 -d -o ~/Pictures/iPhone

    # Download videos from specific date range
    $0 -p "*.mov" -s 2025-01-01 -e 2025-01-31 -d -o ~/Videos/iPhone

    # Download specific IMG files organized by date
    $0 -p "IMG_*.{jpg,heic}" -d -o ~/Photos
INNER_EOF
    exit 1
}

# Parse command line arguments
while getopts "p:o:ds:e:rnh" opt; do
    case $opt in
        p) PATTERN="$OPTARG" ;;
        o) OUTPUT_DIR="$OPTARG" ;;
        d) ORGANIZE_BY_DATE=true ;;
        s) START_DATE="$OPTARG" ;;
        e) END_DATE="$OPTARG" ;;
        r) ;; # Resume is default, keeping for backward compatibility
        n) VERIFY_CHECKSUM=false ;;
        h) usage ;;
        *) usage ;;
    esac
done

# Create output directory if it doesn't exist
mkdir -p "$OUTPUT_DIR"
OUTPUT_DIR=$(cd "$OUTPUT_DIR" && pwd)

# Set up state directory for tracking downloads
STATE_DIR="$OUTPUT_DIR/.iphone_download_state"
mkdir -p "$STATE_DIR"

# Create mount point
mkdir -p "$MOUNT_POINT"

echo "=== iPhone Media Downloader ==="
echo "Pattern: $PATTERN"
echo "Output: $OUTPUT_DIR"
echo "Organize by date: $ORGANIZE_BY_DATE"
[ -n "$START_DATE" ] && echo "Start date: $START_DATE"
[ -n "$END_DATE" ] && echo "End date: $END_DATE"
echo ""

# Check if iPhone is connected
echo "Checking for iPhone connection..."
if ! ideviceinfo -s > /dev/null 2>&1; then
    echo "Error: No iPhone detected. Please connect your iPhone and trust this computer."
    exit 1
fi

# Mount iPhone
echo "Mounting iPhone..."
if ! ifuse "$MOUNT_POINT" 2>/dev/null; then
    echo "Error: Failed to mount iPhone. Make sure you've trusted this computer on your iPhone."
    exit 1
fi

# Cleanup function
cleanup() {
    local exit_code=$?
    echo ""
    if [ $exit_code -ne 0 ]; then
        echo "⚠ Download interrupted. Run the script again to resume."
    fi
    echo "Unmounting iPhone..."
    umount "$MOUNT_POINT" 2>/dev/null || true
    rmdir "$MOUNT_POINT" 2>/dev/null || true
}
trap cleanup EXIT

# Find DCIM folder
DCIM_PATH="$MOUNT_POINT/DCIM"
if [ ! -d "$DCIM_PATH" ]; then
    echo "Error: DCIM folder not found on iPhone"
    exit 1
fi

echo "Scanning for files matching pattern: $PATTERN"
echo ""

# Counter
TOTAL_FILES=0
COPIED_FILES=0
SKIPPED_FILES=0
RESUMED_FILES=0
FAILED_FILES=0

# Function to compute file checksum
compute_checksum() {
    local file="$1"
    if [ -f "$file" ]; then
        shasum -a 256 "$file" 2>/dev/null | awk '{print $1}'
    fi
}

# Function to get file size
get_file_size() {
    local file="$1"
    if [ -f "$file" ]; then
        stat -f "%z" "$file" 2>/dev/null
    fi
}

# Function to mark file as completed
mark_completed() {
    local source_file="$1"
    local dest_file="$2"
    local checksum="$3"
    local state_file="$STATE_DIR/$(echo "$source_file" | shasum -a 256 | awk '{print $1}')"
    
    echo "$dest_file|$checksum|$(date +%s)" > "$state_file"
}

# Function to check if file was previously completed
is_completed() {
    local source_file="$1"
    local dest_file="$2"
    local state_file="$STATE_DIR/$(echo "$source_file" | shasum -a 256 | awk '{print $1}')"
    
    if [ ! -f "$state_file" ]; then
        return 1
    fi
    
    # Read state file
    local saved_dest saved_checksum saved_timestamp
    IFS='|' read -r saved_dest saved_checksum saved_timestamp < "$state_file"
    
    # Check if destination file exists and matches
    if [ "$saved_dest" = "$dest_file" ] && [ -f "$dest_file" ]; then
        if [ "$VERIFY_CHECKSUM" = true ]; then
            local current_checksum=$(compute_checksum "$dest_file")
            if [ "$current_checksum" = "$saved_checksum" ]; then
                return 0
            fi
        else
            # Without checksum verification, just check file exists
            return 0
        fi
    fi
    
    return 1
}

# Convert dates to timestamps for comparison
START_TIMESTAMP=""
END_TIMESTAMP=""
if [ -n "$START_DATE" ]; then
    START_TIMESTAMP=$(date -j -f "%Y-%m-%d" "$START_DATE" "+%s" 2>/dev/null || echo "")
    if [ -z "$START_TIMESTAMP" ]; then
        echo "Error: Invalid start date format. Use YYYY-MM-DD"
        exit 1
    fi
fi
if [ -n "$END_DATE" ]; then
    END_TIMESTAMP=$(date -j -f "%Y-%m-%d" "$END_DATE" "+%s" 2>/dev/null || echo "")
    if [ -z "$END_TIMESTAMP" ]; then
        echo "Error: Invalid end date format. Use YYYY-MM-DD"
        exit 1
    fi
    # Add 24 hours to include the entire end date
    END_TIMESTAMP=$((END_TIMESTAMP + 86400))
fi

# Process files
find "$DCIM_PATH" -type f | while read -r file; do
    filename=$(basename "$file")
    
    # Check if filename matches pattern (basic glob matching)
    if [[ ! "$filename" == $PATTERN ]]; then
        continue
    fi
    
    TOTAL_FILES=$((TOTAL_FILES + 1))
    
    # Get file creation date
    if command -v exiftool > /dev/null 2>&1; then
        # Try to get date from EXIF data
        CREATE_DATE=$(exiftool -s3 -DateTimeOriginal -d "%Y-%m-%d %H:%M:%S" "$file" 2>/dev/null)
        if [ -z "$CREATE_DATE" ]; then
            # Fallback to file modification time
            CREATE_DATE=$(stat -f "%Sm" -t "%Y-%m-%d %H:%M:%S" "$file" 2>/dev/null)
        fi
    else
        # Use file modification time
        CREATE_DATE=$(stat -f "%Sm" -t "%Y-%m-%d %H:%M:%S" "$file" 2>/dev/null)
    fi
    
    # Extract date components
    if [ -n "$CREATE_DATE" ]; then
        FILE_DATE=$(echo "$CREATE_DATE" | cut -d' ' -f1)
        FILE_TIMESTAMP=$(date -j -f "%Y-%m-%d" "$FILE_DATE" "+%s" 2>/dev/null || echo "")
        
        # Check date filters
        if [ -n "$START_TIMESTAMP" ] && [ -n "$FILE_TIMESTAMP" ] && [ "$FILE_TIMESTAMP" -lt "$START_TIMESTAMP" ]; then
            SKIPPED_FILES=$((SKIPPED_FILES + 1))
            continue
        fi
        if [ -n "$END_TIMESTAMP" ] && [ -n "$FILE_TIMESTAMP" ] && [ "$FILE_TIMESTAMP" -ge "$END_TIMESTAMP" ]; then
            SKIPPED_FILES=$((SKIPPED_FILES + 1))
            continue
        fi
        
        # Determine output path with YYYY/MMM structure
        if [ "$ORGANIZE_BY_DATE" = true ]; then
            YEAR=$(echo "$FILE_DATE" | cut -d'-' -f1)
            MONTH_NUM=$(echo "$FILE_DATE" | cut -d'-' -f2)
            # Convert month number to 3-letter abbreviation
            case "$MONTH_NUM" in
                01) MONTH="Jan" ;;
                02) MONTH="Feb" ;;
                03) MONTH="Mar" ;;
                04) MONTH="Apr" ;;
                05) MONTH="May" ;;
                06) MONTH="Jun" ;;
                07) MONTH="Jul" ;;
                08) MONTH="Aug" ;;
                09) MONTH="Sep" ;;
                10) MONTH="Oct" ;;
                11) MONTH="Nov" ;;
                12) MONTH="Dec" ;;
                *) MONTH="Unknown" ;;
            esac
            DEST_DIR="$OUTPUT_DIR/$YEAR/$MONTH"
        else
            DEST_DIR="$OUTPUT_DIR"
        fi
    else
        DEST_DIR="$OUTPUT_DIR"
    fi
    
    # Create destination directory
    mkdir -p "$DEST_DIR"
    
    # Determine destination path
    DEST_PATH="$DEST_DIR/$filename"
    
    # Check if this file was previously completed successfully
    if is_completed "$file" "$DEST_PATH"; then
        echo "✓ Already downloaded: $filename"
        SKIPPED_FILES=$((SKIPPED_FILES + 1))
        continue
    fi
    
    # Check if file already exists with same content (for backward compatibility)
    if [ -f "$DEST_PATH" ]; then
        if cmp -s "$file" "$DEST_PATH"; then
            echo "✓ Already exists (identical): $filename"
            # Mark as completed for future runs
            SOURCE_CHECKSUM=$(compute_checksum "$DEST_PATH")
            mark_completed "$file" "$DEST_PATH" "$SOURCE_CHECKSUM"
            SKIPPED_FILES=$((SKIPPED_FILES + 1))
            continue
        else
            # Add timestamp to avoid overwriting different file
            BASE="${filename%.*}"
            EXT="${filename##*.}"
            DEST_PATH="$DEST_DIR/${BASE}_$(date +%s).$EXT"
        fi
    fi
    
    # Use temporary file for atomic copy
    TEMP_PATH="${DEST_PATH}.tmp.$$"
    
    # Copy to temporary file
    echo "⬇ Downloading: $filename → $DEST_PATH"
    if ! cp "$file" "$TEMP_PATH" 2>/dev/null; then
        echo "✗ Failed to copy: $filename"
        rm -f "$TEMP_PATH"
        FAILED_FILES=$((FAILED_FILES + 1))
        continue
    fi
    
    # Verify size matches (basic corruption check)
    SOURCE_SIZE=$(get_file_size "$file")
    TEMP_SIZE=$(get_file_size "$TEMP_PATH")
    
    if [ "$SOURCE_SIZE" != "$TEMP_SIZE" ]; then
        echo "✗ Size mismatch for $filename (source: $SOURCE_SIZE, copied: $TEMP_SIZE)"
        rm -f "$TEMP_PATH"
        FAILED_FILES=$((FAILED_FILES + 1))
        continue
    fi
    
    # Compute checksum for verification and tracking
    if [ "$VERIFY_CHECKSUM" = true ]; then
        SOURCE_CHECKSUM=$(compute_checksum "$TEMP_PATH")
    else
        SOURCE_CHECKSUM="skipped"
    fi
    
    # Preserve timestamps
    if [ -n "$CREATE_DATE" ]; then
        touch -t $(date -j -f "%Y-%m-%d %H:%M:%S" "$CREATE_DATE" "+%Y%m%d%H%M.%S" 2>/dev/null) "$TEMP_PATH" 2>/dev/null || true
    fi
    
    # Atomic move from temp to final destination
    if mv "$TEMP_PATH" "$DEST_PATH" 2>/dev/null; then
        echo "✓ Completed: $filename"
        # Mark as successfully completed
        mark_completed "$file" "$DEST_PATH" "$SOURCE_CHECKSUM"
        COPIED_FILES=$((COPIED_FILES + 1))
    else
        echo "✗ Failed to finalize: $filename"
        rm -f "$TEMP_PATH"
        FAILED_FILES=$((FAILED_FILES + 1))
    fi
done

echo ""
echo "=== Summary ==="
echo "Total files matching pattern: $TOTAL_FILES"
echo "Files downloaded: $COPIED_FILES"
echo "Files already present: $SKIPPED_FILES"
if [ $FAILED_FILES -gt 0 ]; then
    echo "Files failed: $FAILED_FILES"
    echo ""
    echo "⚠ Some files failed to download. Run the script again to retry."
    exit 1
fi
echo ""
echo "✓ Download complete! All files transferred successfully."
OUTER_EOF

echo "Making the script executable..."
chmod +x download-iphone-media.sh

echo "✓ Script created successfully: download-iphone-media.sh"

Usage Examples

Basic Usage

Download all photos and videos to the current directory:

./download-iphone-media.sh

Download with Date Organization

Organize files into folders by creation date (YYYY/MMM structure):

./download-iphone-media.sh -d -o ./Pictures

This creates a structure like:

./Pictures
├── 2024/
│   ├── Jan/
│   │   ├── IMG_1234.jpg
│   │   └── IMG_1235.heic
│   ├── Feb/
│   └── Dec/
├── 2025/
│   ├── Jan/
│   └── Nov/

Filter by File Pattern

Download only specific file types:

# Only JPG files
./download-iphone-media.sh -p "*.jpg" -o ~/Pictures/iPhone

# Only videos (MOV and MP4)
./download-iphone-media.sh -p "*.mov" -o ~/Videos/iPhone
./download-iphone-media.sh -p "*.mp4" -o ~/Videos/iPhone

# Files starting with IMG_
./download-iphone-media.sh -p "IMG_*" -o ~/Pictures

# HEIC photos (iPhone's default format)
./download-iphone-media.sh -p "*.heic" -o ~/Pictures/iPhone

Filter by Date Range

Download photos from a specific date range:

# Photos from January 2025
./download-iphone-media.sh -s 2025-01-01 -e 2025-01-31 -d -o ~/Pictures/January2025

# Photos from last week
./download-iphone-media.sh -s 2025-11-10 -e 2025-11-17 -o ~/Pictures/LastWeek

# Photos after a specific date
./download-iphone-media.sh -s 2025-11-01 -o ~/Pictures/Recent

Combined Filters

Combine multiple options for precise control:

# Download only videos from January 2025, organized by date
./download-iphone-media.sh -p "*.mov" -s 2025-01-01 -e 2025-01-31 -d -o ~/Videos/Vacation

# Download all HEIC photos from the last month, organized by date
./download-iphone-media.sh -p "*.heic" -s 2025-10-17 -e 2025-11-17 -d -o ~/Pictures/LastMonth

Features

Resumable & Idempotent Downloads

  • Crash recovery: Interrupted downloads can be resumed by running the script again
  • Atomic operations: Files are copied to temporary locations first, then moved atomically
  • State tracking: Maintains a hidden state directory (.iphone_download_state) to track completed files
  • Checksum verification: Uses SHA-256 checksums to verify file integrity (can be disabled with -n for speed)
  • No duplicates: Running the script multiple times won’t re-download existing files
  • Corruption detection: Validates file sizes and optionally checksums after copy

Date-Based Organization

  • Automatic folder structure: Creates YYYY/MMM folders based on photo creation date (e.g., 2025/Jan, 2025/Feb)
  • EXIF data support: Reads actual photo capture date from EXIF metadata when available
  • Fallback mechanism: Uses file modification time if EXIF data is unavailable
  • Fewer folders: Maximum 12 month folders per year instead of up to 365 day folders

Smart File Handling

  • Duplicate detection: Skips files that already exist with identical content
  • Conflict resolution: Adds timestamp suffix to filename if different file with same name exists
  • Timestamp preservation: Maintains original creation dates on copied files
  • Error tracking: Reports failed files and provides clear exit codes

Progress Feedback

  • Real-time progress updates showing each file being downloaded
  • Summary statistics at the end (total found, downloaded, skipped, failed)
  • Clear error messages for troubleshooting
  • Helpful resume instructions if interrupted

Common File Patterns

iPhone typically uses these file formats:

TypeExtensionsPattern Example
Photos.jpg.heic*.jpg or *.heic
Videos.mov.mp4*.mov or *.mp4
Screenshots.png*.png
Live Photos.heic.movIMG_*.heic + IMG_*.mov
All mediaall above* (default)

5. Handling Interrupted Downloads

If a download is interrupted (disconnection, error, etc.), simply run the script again:

# Script was interrupted - just run it again
./download-iphone-media.sh -d -o ~/Pictures/iPhone

The script will:

  • Skip all successfully downloaded files
  • Retry any failed files
  • Continue from where it left off

6. Fast Mode (Skip Checksum Verification)

For faster transfers on reliable connections, disable checksum verification:

# Skip checksums for speed (still verifies file sizes)
./download-iphone-media.sh -n -d -o ~/Pictures/iPhone

Note: This is generally safe but won’t detect corruption as thoroughly.

7. Clean State and Re-download

If you want to force a re-download of all files:

# Remove state directory to start fresh
rm -rf ~/Pictures/iPhone/.iphone_download_state
./download-iphone-media.sh -d -o ~/Pictures/iPhone

Troubleshooting

iPhone Not Detected

Error: No iPhone detected. Please connect your iPhone and trust this computer.

Solution:

  1. Make sure your iPhone is connected via USB cable
  2. Unlock your iPhone
  3. Tap “Trust” when prompted on your iPhone
  4. Run idevicepair pair if you haven’t already

Failed to Mount iPhone

Error: Failed to mount iPhone

Solution:

  1. Try unplugging and reconnecting your iPhone
  2. Check if another process is using the iPhone:umount /tmp/iphone_mount 2>/dev/null
  3. Restart your iPhone and try again
  4. On macOS Ventura or later, check System Settings → Privacy & Security → Files and Folders

Permission Denied

Solution:
Make sure the script has executable permissions:

chmod +x download-iphone-media.sh

Missing Tools

Error: Commands not found

Solution:
Install the required tools:

brew install libimobiledevice ifuse exiftool

On newer macOS versions, you may need to install macFUSE:

brew install --cask macfuse

After installation, you may need to restart your Mac and allow the kernel extension in System Settings → Privacy & Security.

Tips and Best Practices

1. Regular Backups

Create a scheduled backup script:

#!/bin/bash
# Save as ~/bin/backup-iphone-photos.sh

DATE=$(date +%Y-%m-%d)
BACKUP_DIR=~/Pictures/iPhone-Backups/$DATE

./download-iphone-media.sh -d -o "$BACKUP_DIR"

echo "Backup completed to $BACKUP_DIR"

2. Incremental Downloads

The script is fully idempotent and tracks completed downloads, making it perfect for incremental backups:

# Run daily to get new photos - only new files will be downloaded
./download-iphone-media.sh -d -o ~/Pictures/iPhone

The script maintains state in .iphone_download_state/ within your output directory, ensuring:

  • Already downloaded files are skipped instantly (no re-copying)
  • Interrupted downloads can be resumed
  • File integrity is verified with checksums

3. Free Up iPhone Storage

After confirming successful download:

  1. Verify files are on your MacBook
  2. Check file counts match
  3. Delete photos from iPhone via Photos app
  4. Empty “Recently Deleted” album

4. Convert HEIC to JPG (Optional)

If you need JPG files for compatibility:

# Install ImageMagick
brew install imagemagick

# Convert all HEIC files to JPG
find ~/Pictures/iPhone -name "*.heic" -exec sh -c 'magick "$0" "${0%.heic}.jpg"' {} \;

How Idempotent Recovery Works

The script implements several mechanisms to ensure safe, resumable downloads:

1. State Tracking

A hidden directory .iphone_download_state/ is created in your output directory. For each successfully downloaded file, a state file is created containing:

  • Destination file path
  • SHA-256 checksum (if verification enabled)
  • Completion timestamp

2. Atomic Operations

Each file is downloaded using a two-phase commit:

  1. Download Phase: Copy to temporary file (.tmp.$$ suffix)
  2. Verification Phase: Check file size and optionally compute checksum
  3. Commit Phase: Atomically move temp file to final destination
  4. Record Phase: Write completion state

If the script is interrupted at any point, incomplete temporary files are cleaned up automatically.

3. Idempotent Behavior

When you run the script:

  1. Before downloading each file, it checks the state directory
  2. If a state file exists, it verifies the destination file still exists and matches the checksum
  3. If verification passes, the file is skipped (no re-download)
  4. If verification fails or no state exists, the file is downloaded

This means:

  • ✓ Safe to run multiple times
  • ✓ Interrupted downloads can be resumed
  • ✓ Corrupted files are detected and re-downloaded
  • ✓ No wasted bandwidth on already-downloaded files

4. Checksum Verification

By default, SHA-256 checksums are computed and verified:

  • During download: Checksum computed after copy completes
  • On resume: Existing files are verified against stored checksum
  • Optional: Use -n flag to skip checksums for speed (still verifies file sizes)

Example Recovery Scenario

# Start downloading 1000 photos
./download-iphone-media.sh -d -o ~/Pictures/iPhone

# Script is interrupted after 500 files
# Press Ctrl+C or cable disconnects

# Simply run again - picks up where it left off
./download-iphone-media.sh -d -o ~/Pictures/iPhone
# Output:
# ✓ Already downloaded: IMG_0001.heic
# ✓ Already downloaded: IMG_0002.heic
# ...
# ⬇ Downloading: IMG_0501.heic → ~/Pictures/iPhone/2025/Jan/IMG_0501.heic

Performance Notes

  • Transfer speed: Depends on USB connection (USB 2.0 vs USB 3.0)
  • Large libraries: May take significant time for thousands of photos
  • EXIF reading: Adds minimal overhead but provides accurate dates
  • Pattern matching: Processed client-side, so all files are scanned

Conclusion

This script provides a robust, production-ready solution for downloading photos and videos from your iPhone to your MacBook. Key capabilities:

Core Features:

  • Filter by file patterns (type, name)
  • Filter by date ranges
  • Organize automatically into date-based folders
  • Preserve original file metadata

Reliability:

  • Fully idempotent – safe to run multiple times
  • Resumable downloads with automatic crash recovery
  • Atomic file operations prevent corruption
  • Checksum verification ensures data integrity
  • Clear error reporting and recovery instructions

For regular use, consider creating aliases in your ~/.zshrc:

# Add to ~/.zshrc
alias iphone-backup='~/download-iphone-media.sh -d -o ~/Pictures/iPhone'
alias iphone-videos='~/download-iphone-media.sh -p "*.mov" -d -o ~/Videos/iPhone'

Then simply run iphone-backup whenever you want to download your photos!

Resources

0
0