Amazon Aurora DSQL: A Deep Dive into Performance and Limitations

Executive Summary

Amazon Aurora DSQL represents AWS’s ambitious entry into the distributed SQL database market, announced at re:Invent 2024. It’s a serverless, distributed SQL database featuring active active high availability and PostgreSQL compatibility. While the service offers impressive architectural innovations, including 99.99% single region and 99.999% multi region availability, it comes with significant limitations that developers must carefully consider. This analysis examines Aurora DSQL’s performance characteristics, architectural tradeoffs, and critical constraints that impact real world applications.

Key Takeaways:

  • Aurora DSQL excels at multi region active active workloads with low latency reads and writes
  • Significant PostgreSQL compatibility gaps limit migration paths for existing applications
  • Optimistic concurrency control requires application level retry logic
  • Preview phase limitations include 10,000 row transaction limits and missing critical features
  • Pricing model is complex and difficult to predict without production testing

Architecture Overview

Aurora DSQL fundamentally reimagines distributed database architecture by decoupling transaction processing from storage. The service tackles two historically hard problems: achieving multi region strong consistency with low latency, and synchronizing server clocks around the globe with microsecond accuracy.

Core Components

The system consists of three independently scalable components:

  1. Compute Layer: Executes SQL queries without managing locks
  2. Commit/Journal Layer: Handles transaction ordering and conflict detection
  3. Storage Layer: Provides durable, queryable storage built from transaction logs

The Journal logs every transaction, while the Adjudicator component manages transaction isolation and conflict resolution. This separation allows Aurora DSQL to scale each component independently based on workload demands.

Multi Region Architecture

In multi region configurations, clusters provide two regional endpoints that present a single logical database supporting concurrent read and write operations with strong data consistency. A third witness region stores transaction logs for recovery purposes, enabling the system to maintain availability even during regional failures.

Performance Characteristics

Read and Write Latency

In simple workload tests, read latency achieves single digit milliseconds, while write latency is approximately two roundtrip times to the nearest region at commit. This predictable latency model differs significantly from traditional databases where performance degrades under contention.

Transaction latency remains constant relative to statement count, even across regions. This consistency provides predictable performance characteristics regardless of transaction complexity—a significant advantage for globally distributed applications.

Scalability Claims

AWS claims Aurora DSQL delivers reads and writes that are four times faster than Google Cloud Spanner. However, these claims have yet to be validated by independent, real world benchmarks; Aurora DSQL has not been exercised at true enterprise scale.

The service provides virtually unlimited scalability from a single endpoint, eliminating manual provisioning and management of database instances. The architecture automatically partitions the key space to detect conflicting transactions, allowing it to scale without traditional sharding complexity.

Concurrency Control Tradeoffs

Aurora DSQL uses optimistic concurrency control (OCC), where transactions run without considering other concurrent transactions, with conflict detection happening at commit time. While this prevents slow transactions from blocking others, it requires applications to handle transaction retries.

OCC provides better scalability for query processing and a more robust cluster for realistic failures by avoiding locking mechanisms that can lead to deadlocks or performance bottlenecks. However, this comes at the cost of increased application complexity.

Critical Limitations

Transaction Constraints

The preview version imposes several hard limits that significantly impact application design:

  • Maximum 10,000 rows modified per transaction
  • Transaction size cannot exceed 10 MiB
  • Sessions are capped at 1 hour, requiring reconnection for long lived processes (see the reconnection sketch after this list)
  • Cannot mix DDL and DML statements within a single transaction
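
Because sessions expire after an hour, long lived workers need to treat reconnection as routine rather than exceptional. The following is a minimal sketch of that pattern, assuming psycopg2 and the same boto3 dsql token call used in the test scripts later in this article; the endpoint, database, and work callback are placeholders:

import boto3
import psycopg2

def fresh_connection(endpoint, database, region):
    """Open a new connection with a freshly generated IAM auth token."""
    client = boto3.client('dsql', region_name=region)
    token = client.generate_db_connect_admin_auth_token(
        hostname=endpoint, region=region, expires_in=3600
    )
    return psycopg2.connect(host=endpoint, database=database,
                            user='admin', password=token, sslmode='require')

def run_forever(endpoint, database, region, work):
    """Re-establish the session whenever the 1 hour cap (or a network blip) drops it."""
    conn = fresh_connection(endpoint, database, region)
    while True:
        try:
            work(conn)  # caller's unit of work; should commit per iteration
        except psycopg2.OperationalError:
            # Session expired or connection lost: reconnect with a new token
            conn.close()
            conn = fresh_connection(endpoint, database, region)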

Common DDL/DML Limitation Example:

The most common pattern that fails in Aurora DSQL is SELECT INTO, which combines table creation and data population in a single statement:

BEGIN;
SELECT id, username, created_at 
INTO new_users_copy 
FROM users 
WHERE active = true;
COMMIT;  -- ERROR: SELECT INTO mixes DDL and DML

This pattern is extremely common in:

  • ETL processes that create temporary staging tables and load data with SELECT INTO
  • Reporting workflows that materialize query results into new tables
  • Database migrations that create tables and seed initial data
  • Test fixtures that set up schema and populate test data
  • Multi-tenant applications that dynamically create tenant-specific tables and initialize them

The workaround requires splitting into separate transactions using CREATE TABLE followed by INSERT INTO ... SELECT:

-- Transaction 1: Create table structure
BEGIN;
CREATE TABLE new_users_copy (
    id BIGINT,
    username TEXT,
    created_at TIMESTAMP
);
COMMIT;

-- Transaction 2: Populate with data
BEGIN;
INSERT INTO new_users_copy 
SELECT id, username, created_at 
FROM users 
WHERE active = true;
COMMIT;

This limitation stems from Aurora DSQL’s asynchronous DDL processing architecture, where schema changes are propagated separately from data changes across the distributed system.

These constraints make Aurora DSQL unsuitable for batch processing workloads or applications requiring large bulk operations.
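
For workloads that genuinely need bulk loads, the usual workaround is to split the work into chunks that stay safely under the per transaction row cap and commit each chunk separately. Here is a minimal sketch assuming psycopg2; the staging_events table and its columns are illustrative:

def chunked_insert(conn, rows, chunk_size=5000):
    """Insert rows in separate transactions, each well under the 10,000 row limit."""
    cursor = conn.cursor()
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) == chunk_size:
            cursor.executemany(
                "INSERT INTO staging_events (id, payload) VALUES (%s, %s)", batch)
            conn.commit()  # each chunk is its own transaction
            batch = []
    if batch:  # flush the final partial chunk
        cursor.executemany(
            "INSERT INTO staging_events (id, payload) VALUES (%s, %s)", batch)
        conn.commit()

Note that a failure partway through leaves earlier chunks committed, so the loading process must be idempotent (for example by keying rows on a deterministic primary key).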

Missing PostgreSQL Features

Aurora DSQL sacrifices several critical PostgreSQL features for performance and scalability:

Not Supported:

  • No foreign keys
  • No temporary tables
  • No views
  • No triggers, PL/pgSQL, sequences, or explicit locking
  • No serializable isolation level or LOCK TABLE support
  • No PostgreSQL extensions (pgcrypto, PostGIS, pgvector, hstore)

Unsupported Data Types:

  • No SERIAL or BIGSERIAL (auto incrementing integers)
  • No JSON or JSONB types
  • No range types (tsrange, int4range, etc.)
  • No geospatial types (geometry, geography)
  • No vector types
  • TEXT type limited to 1MB (vs. 1GB in PostgreSQL)

Aurora DSQL prioritizes scalability, sacrificing some features for performance. The distributed architecture and asynchronous DDL requirements prevent support for extensions that depend on PostgreSQL’s internal storage format. These omissions require significant application redesign for PostgreSQL migrations.
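
In practice, migrations work around the missing types by generating UUID primary keys instead of relying on SERIAL sequences, and by storing JSON documents as TEXT that the application serializes and parses itself (at the cost of losing server side JSON querying). A minimal sketch of both substitutions, assuming psycopg2; the orders table and its columns are illustrative:

import json
import uuid

ORDERS_DDL = """
CREATE TABLE IF NOT EXISTS orders (
    id UUID PRIMARY KEY,      -- client generated, replacing BIGSERIAL
    customer_id UUID NOT NULL,
    details TEXT              -- JSON payload stored as text, parsed by the app
)
"""

def create_order(conn, customer_id, details_dict):
    """Insert an order with an application generated UUID and serialized JSON."""
    order_id = str(uuid.uuid4())
    cursor = conn.cursor()
    cursor.execute(
        "INSERT INTO orders (id, customer_id, details) VALUES (%s, %s, %s)",
        (order_id, str(customer_id), json.dumps(details_dict)),
    )
    conn.commit()
    return order_id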

Isolation Level Limitations

Aurora DSQL supports strong snapshot isolation, equivalent to repeatable read isolation in PostgreSQL. This creates several important implications:

  1. Write Skew Anomalies: Applications susceptible to write skew cannot rely on serializable isolation to prevent conflicts.

What is Write Skew?

Write skew is a database anomaly that occurs when two concurrent transactions read overlapping data, make disjoint updates based on what they read, and both commit successfully—even though the result violates a business constraint. This happens because snapshot isolation only detects direct write write conflicts on the same rows, not constraint violations across different rows.

Classic example: A hospital requires at least two doctors on call at all times. Two transactions check the current count (finds 2 doctors), both see the requirement is satisfied, and both remove themselves from the on call roster. Both transactions modify different rows (their own records), so snapshot isolation sees no conflict. Both commit successfully, leaving zero doctors on call and violating the business rule.

In PostgreSQL with serializable isolation, this would be prevented because the database detects the anomaly and aborts one transaction. Aurora DSQL’s snapshot isolation cannot prevent this, requiring application level logic to handle such constraints.

  2. Referential Integrity Challenges: Without foreign keys and serializable isolation, maintaining referential integrity requires application side logic, such as using SELECT FOR UPDATE to lock parent keys during child table inserts (as sketched below).
  3. Conflict Detection: Aurora DSQL uses optimistic concurrency control, so SELECT FOR UPDATE intents are not visible to other transactions until commit time.
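
A minimal sketch of the parent locking pattern from point 2, assuming psycopg2 and simple customers/orders tables (the names are illustrative). Because the SELECT FOR UPDATE adds the parent key to this transaction’s write set, a concurrent transaction that deletes the customer will conflict with this one at commit time, so at most one of them can succeed:

def insert_order_checked(conn, order_id, customer_id, amount):
    """Enforce the orders -> customers relationship in application code."""
    cursor = conn.cursor()
    # Lock the parent key so a concurrent delete of the customer becomes a
    # commit time conflict instead of silently orphaning the child row
    cursor.execute(
        "SELECT id FROM customers WHERE id = %s FOR UPDATE", (customer_id,))
    if cursor.fetchone() is None:
        conn.rollback()
        raise ValueError("customer does not exist")
    cursor.execute(
        "INSERT INTO orders (id, customer_id, amount) VALUES (%s, %s, %s)",
        (order_id, customer_id, amount),
    )
    conn.commit()  # an OCC conflict raised here should be retried by the caller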

Inter Region Consistency and Data Durability

Strong Consistency Guarantees

Aurora DSQL provides strong consistency across all regional endpoints, ensuring that reads and writes to any endpoint always reflect the same logical state. This is achieved through synchronous replication and careful transaction ordering:

How It Works:

  1. Synchronous Commits: When a transaction commits, Aurora DSQL writes to the distributed transaction log and synchronously replicates all committed log data across regions before acknowledging the commit to the client.
  2. Single Logical Database: Both regional endpoints in a multi region cluster present the same logical database. Readers consistently see the same data regardless of which endpoint they query (see the verification sketch after this list).
  3. Zero Replication Lag: Unlike traditional asynchronous replication, there is no replication lag on commit. The commit only succeeds after data is durably stored across regions.
  4. Witness Region: A third region stores transaction logs for durability but doesn’t have a query endpoint. This witness region ensures multi region durability without requiring three full active regions.
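
Because commits are only acknowledged after cross region replication, a write acknowledged through one endpoint should be immediately readable through the other. The following is a small verification sketch, assuming psycopg2, the boto3 dsql token call used elsewhere in this article, and a pre created consistency_probe table with a UUID id column; the endpoints are placeholders:

import uuid
import boto3
import psycopg2

def connect(endpoint, region, database='testdb'):
    """Connect to one regional endpoint with a fresh IAM token."""
    client = boto3.client('dsql', region_name=region)
    token = client.generate_db_connect_admin_auth_token(
        hostname=endpoint, region=region, expires_in=3600)
    return psycopg2.connect(host=endpoint, database=database, user='admin',
                            password=token, sslmode='require')

def check_read_your_writes(endpoint_a, region_a, endpoint_b, region_b):
    """Write via region A, then read the row back immediately via region B."""
    marker = str(uuid.uuid4())
    conn_a = connect(endpoint_a, region_a)
    conn_a.cursor().execute(
        "INSERT INTO consistency_probe (id) VALUES (%s)", (marker,))
    conn_a.commit()  # acknowledged only after cross region replication
    conn_a.close()

    conn_b = connect(endpoint_b, region_b)
    cursor = conn_b.cursor()
    cursor.execute("SELECT 1 FROM consistency_probe WHERE id = %s", (marker,))
    assert cursor.fetchone() is not None, "committed row not visible in region B"
    conn_b.close()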

Can You Lose Data?

Aurora DSQL is designed with strong durability guarantees that make data loss extremely unlikely:

Single Region Configuration:

  • All write transactions are synchronously replicated to storage replicas across three Availability Zones
  • Replication uses quorum based commits, ensuring data survives even if one AZ fails completely
  • No risk of data loss due to replication lag because replication is always synchronous

Multi Region Configuration:

  • Committed transactions are synchronously written to the transaction log in both active regions plus the witness region
  • A transaction only acknowledges success after durable storage across multiple regions
  • Even if an entire AWS region becomes unavailable, committed data remains accessible from the other region

Failure Scenarios:

Single AZ Failure: Aurora DSQL automatically routes to healthy AZs. No data loss occurs because data is replicated across three AZs synchronously.

Single Region Failure (Multi Region Setup): Applications can continue operating from the remaining active region with zero data loss. All committed transactions were synchronously replicated before the commit acknowledgment was sent.

Component Failure: Individual component failures (compute, storage, journal) are handled through Aurora DSQL’s self healing architecture. The system automatically repairs failed replicas asynchronously while serving requests from healthy components.

Will Everything Always Be in Both Regions?

Yes, with important caveats:

Committed Data: Once a transaction receives a commit acknowledgment, that data is guaranteed to exist in both active regions. The synchronous replication model ensures this.

Uncommitted Transactions: Transactions that haven’t yet committed exist only in their originating region’s session state. If that region fails before commit, the transaction is lost (which is expected behavior).

Durability vs. Availability Tradeoff: The strong consistency model means that if cross region network connectivity is lost, write operations may be impacted. Aurora DSQL prioritizes consistency over availability in the CAP theorem sense: it won’t accept writes that can’t be properly replicated.

Geographic Restrictions: Multi region clusters are currently limited to geographic groupings (US regions together, European regions together, Asia Pacific regions together). You cannot pair US East with EU West, which limits truly global active active deployments.

Consistency Risks and Limitations

While Aurora DSQL provides strong consistency, developers should understand these considerations:

Network Partition Handling: In the event of a network partition between regions, Aurora DSQL’s behavior depends on which components can maintain quorum. The system is designed to maintain consistency, which may mean rejecting writes rather than accepting writes that can’t be properly replicated.

Write Skew at Application Level: While individual transactions are consistent, applications must still handle write skew anomalies that can occur with snapshot isolation (as discussed in the Isolation Level Limitations section).

Time Synchronization Dependency: Aurora DSQL relies on Amazon Time Sync Service for precise time coordination. While highly reliable, this creates a subtle dependency on time synchronization for maintaining transaction ordering across regions.

Hands On Performance Testing

Basic Latency Test

This test measures basic insert latency using the PostgreSQL wire protocol:

#!/bin/bash
# Basic Aurora DSQL Performance Test
# Prerequisites: AWS CLI, psql, jq
# Configuration
REGION="us-east-1"
CLUSTER_ENDPOINT="your-cluster-endpoint.dsql.us-east-1.on.aws"
DATABASE="testdb"
# Generate temporary authentication token
export PGPASSWORD=$(aws dsql generate-db-connect-admin-auth-token \
  --hostname $CLUSTER_ENDPOINT \
  --region $REGION \
  --expires-in 3600)
export PGHOST=$CLUSTER_ENDPOINT
export PGDATABASE=$DATABASE
export PGUSER=admin
# Create test table
# Note: Aurora DSQL does not support SERIAL/BIGSERIAL, so use a UUID primary key
psql << 'EOF'
DROP TABLE IF EXISTS perf_test;
CREATE TABLE perf_test (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  data TEXT,
  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
EOF
# Run timed insert test
# Aurora DSQL does not support PL/pgSQL, so generate plain INSERT statements
# instead of wrapping the loop in a DO block
echo "Running 100 sequential inserts..."
for i in $(seq 1 100); do
  echo "INSERT INTO perf_test (data) VALUES ('test_data_${i}');"
done > /tmp/dsql_inserts.sql
time psql -q -f /tmp/dsql_inserts.sql
# Test transaction commit latency
echo "Testing transaction commit latency..."
psql << 'EOF'
\timing on
BEGIN;
INSERT INTO perf_test (data) VALUES ('commit_test');
COMMIT;
EOF

Concurrency Control Testing

Test optimistic concurrency behavior and conflict detection:

#!/usr/bin/env python3
"""
Aurora DSQL Concurrency Test
Tests optimistic concurrency control and retry logic
"""
import psycopg2
import boto3
import time
from concurrent.futures import ThreadPoolExecutor, as_completed
def get_auth_token(endpoint, region):
    """Generate IAM authentication token"""
    client = boto3.client('dsql', region_name=region)
    return client.generate_db_connect_admin_auth_token(
        hostname=endpoint,
        region=region,
        expires_in=3600
    )
def connect(endpoint, database, region):
    """Create database connection"""
    token = get_auth_token(endpoint, region)
    return psycopg2.connect(
        host=endpoint,
        database=database,
        user='admin',
        password=token,
        sslmode='require'
    )
def update_with_retry(endpoint, database, region, row_id, new_value, max_retries=5):
    """Update with exponential backoff retry logic"""
    retries = 0
    delay = 0.1
    
    while retries < max_retries:
        conn = None
        try:
            conn = connect(endpoint, database, region)
            cursor = conn.cursor()
            
            # Start transaction
            cursor.execute("BEGIN")
            
            # Read current value
            cursor.execute(
                "SELECT value FROM test_table WHERE id = %s FOR UPDATE",
                (row_id,)
            )
            current = cursor.fetchone()
            
            # Simulate some processing
            time.sleep(0.01)
            
            # Update value
            cursor.execute(
                "UPDATE test_table SET value = %s WHERE id = %s",
                (new_value, row_id)
            )
            
            # Commit
            cursor.execute("COMMIT")
            
            return True, retries
            
        except psycopg2.Error as e:
            if "change conflicts with another transaction" in str(e):
                retries += 1
                if retries < max_retries:
                    time.sleep(delay)
                    delay *= 2  # Exponential backoff
                    continue
            raise
        finally:
            if conn:
                conn.close()
    
    return False, max_retries
def run_concurrency_test(endpoint, database, region, num_threads=10):
    """Run concurrent updates on same row"""
    
    # Setup test table
    conn = connect(endpoint, database, region)
    cursor = conn.cursor()
    # DDL and DML cannot share a transaction in Aurora DSQL, so commit the
    # CREATE TABLE before inserting the seed row
    cursor.execute("""
        CREATE TABLE IF NOT EXISTS test_table (
            id BIGINT PRIMARY KEY,
            value INT
        )
    """)
    conn.commit()
    cursor.execute("INSERT INTO test_table (id, value) VALUES (1, 0)")
    conn.commit()
    conn.close()
    
    # Run concurrent updates
    start_time = time.time()
    results = {'success': 0, 'failed': 0, 'total_retries': 0}
    
    with ThreadPoolExecutor(max_workers=num_threads) as executor:
        futures = [
            executor.submit(update_with_retry, endpoint, database, region, 1, i)
            for i in range(num_threads)
        ]
        
        for future in as_completed(futures):
            success, retries = future.result()
            if success:
                results['success'] += 1
            else:
                results['failed'] += 1
            results['total_retries'] += retries
    
    elapsed = time.time() - start_time
    
    print(f"\nConcurrency Test Results:")
    print(f"Duration: {elapsed:.2f}s")
    print(f"Successful: {results['success']}")
    print(f"Failed: {results['failed']}")
    print(f"Total Retries: {results['total_retries']}")
    print(f"Avg Retries per Transaction: {results['total_retries']/num_threads:.2f}")
if __name__ == "__main__":
    ENDPOINT = "your-cluster-endpoint.dsql.us-east-1.on.aws"
    DATABASE = "testdb"
    REGION = "us-east-1"
    
    run_concurrency_test(ENDPOINT, DATABASE, REGION)

Multi Region Latency Test

Measure cross region write latency:

// Node.js Multi-Region Latency Test
// Requires the @aws-sdk/dsql-signer package (AWS SDK for JavaScript v3)
const { Client } = require('pg');
const { DsqlSigner } = require('@aws-sdk/dsql-signer');
async function getAuthToken(endpoint, region) {
  const signer = new DsqlSigner({ hostname: endpoint, region });
  return signer.getDbConnectAdminAuthToken();
}
async function testRegionalLatency(endpoints, database) {
  const results = [];
  
  for (const [region, endpoint] of Object.entries(endpoints)) {
    const token = await getAuthToken(endpoint, region);
    
    const client = new Client({
      host: endpoint,
      database: database,
      user: 'admin',
      password: token,
      ssl: { rejectUnauthorized: true }
    });
    
    await client.connect();
    
    // Ensure the test table exists (runs as its own auto-committed DDL statement)
    await client.query(
      'CREATE TABLE IF NOT EXISTS latency_test (id UUID PRIMARY KEY DEFAULT gen_random_uuid(), ts TIMESTAMP)'
    );
    
    // Measure read latency
    const readStart = Date.now();
    await client.query('SELECT 1');
    const readLatency = Date.now() - readStart;
    
    // Measure write latency (includes commit sync)
    const writeStart = Date.now();
    await client.query('BEGIN');
    await client.query('INSERT INTO latency_test (ts) VALUES (NOW())');
    await client.query('COMMIT');
    const writeLatency = Date.now() - writeStart;
    
    results.push({
      region,
      readLatency,
      writeLatency
    });
    
    await client.end();
  }
  
  console.log('Multi-Region Latency Results:');
  console.table(results);
}
// Usage
const endpoints = {
  'us-east-1': 'cluster-1.dsql.us-east-1.on.aws',
  'us-west-2': 'cluster-1.dsql.us-west-2.on.aws'
};
testRegionalLatency(endpoints, 'testdb');

Transaction Size Limit Test

Verify how the 10,000 row transaction limit impacts operations:

#!/usr/bin/env python3
"""
Test Aurora DSQL transaction size limits
"""
import psycopg2
import boto3
def connect(endpoint, database, region):
    """Create database connection"""
    client = boto3.client('dsql', region_name=region)
    token = client.generate_db_connect_admin_auth_token(
        hostname=endpoint,
        region=region,
        expires_in=3600
    )
    return psycopg2.connect(
        host=endpoint,
        database=database,
        user='admin',
        password=token,
        sslmode='require'
    )
def test_limits(endpoint, database, region):
    """Test transaction row limits"""
    conn = connect(endpoint, database, region)
    cursor = conn.cursor()
    
    # Create test table
    cursor.execute("""
        CREATE TABLE IF NOT EXISTS limit_test (
            id BIGSERIAL PRIMARY KEY,
            data TEXT
        )
    """)
    conn.commit()
    
    # Test under limit (should succeed)
    print("Testing 9,999 row insert (under limit)...")
    try:
        cursor.execute("BEGIN")
        for i in range(9999):
            cursor.execute(
                "INSERT INTO limit_test (data) VALUES (%s)",
                (f"row_{i}",)
            )
        cursor.execute("COMMIT")
        print("✓ Success: Under-limit transaction committed")
    except Exception as e:
        print(f"✗ Failed: {e}")
        cursor.execute("ROLLBACK")
    
    # Test over limit (expected to be rejected once more than 10,000 rows are modified)
    print("Testing 10,001 row insert (over limit)...")
    try:
        cursor.execute("BEGIN")
        for i in range(10001):
            cursor.execute(
                "INSERT INTO limit_test (data) VALUES (%s)",
                (f"overflow_{i}",)
            )
        cursor.execute("COMMIT")
        print("✗ Unexpected: Over-limit transaction committed")
    except Exception as e:
        print(f"✓ Expected failure: {e}")
        cursor.execute("ROLLBACK")
    
    conn.close()
if __name__ == "__main__":
    ENDPOINT = "your-cluster-endpoint.dsql.us-east-1.on.aws"
    DATABASE = "testdb"
    REGION = "us-east-1"
    
    test_limits(ENDPOINT, DATABASE, REGION)

Performance Analysis

Strengths

  1. Predictable Latency: Transaction latency remains constant regardless of statement count, providing consistent performance characteristics that simplify capacity planning.
  2. Multi Region Active Active: Both regional endpoints support concurrent read and write operations with strong data consistency, enabling true active active configurations without complex replication lag management.
  3. No Single Point of Contention: A single slow client or long running query doesn’t impact other transactions because contention is handled at commit time on the server side.

Weaknesses

  1. High Contention Workload Performance: Applications with frequent updates to small key ranges experience high retry rates, because conflicting writes are only detected at commit time and all but one of the conflicting transactions must abort and retry.
  2. Application Complexity: Aurora DSQL’s optimistic concurrency control minimizes cross region latency but requires applications to handle retries. This shifts complexity from the database to application code.
  3. Feature Gaps: Missing PostgreSQL features like foreign keys, triggers, views, and critical data types like JSON/JSONB require application redesign. Some developers view it as barely a database, more like a key value store with basic PostgreSQL wire compatibility.
  4. Unpredictable Costs: The pricing model is monumentally confusing, with costs varying based on DPU consumption that’s difficult to predict without production testing.

Use Case Recommendations

Good Fit

Global Ecommerce Platforms

Applications requiring continuous availability across regions with strong consistency for inventory and order management. If your business depends on continuous availability—like global ecommerce or financial platforms—Aurora DSQL’s active active model is a game changer.

Multi Tenant SaaS Applications

Services with dynamic scaling requirements and geographic distribution of users. The automatic scaling eliminates capacity planning concerns.

Financial Services (with caveats)

Transaction processing systems that can implement application level retry logic and work within the snapshot isolation model.

Poor Fit

Batch Processing Systems

The 10,000 row transaction limit makes Aurora DSQL unsuitable for bulk data operations, ETL processes, or large scale data migrations.

Legacy PostgreSQL Applications

Applications depending on foreign keys, triggers, stored procedures, views, or serializable isolation will require extensive rewrites.

High Contention Workloads

Applications with frequent updates to small key ranges (like continuously updating stock tickers, inventory counters for popular items, or high frequency account balance updates) will experience high retry rates and degraded throughput due to optimistic concurrency control, as discussed under Weaknesses in the Performance Analysis section.

Comparison with Alternatives

vs. Google Cloud Spanner

Aurora DSQL claims 4x faster performance, but Spanner has been used for multi region consistent deployments at proven enterprise scale. Spanner uses its own SQL dialect, while Aurora DSQL provides PostgreSQL wire protocol compatibility.

vs. CockroachDB and YugabyteDB

YugabyteDB is more compatible with PostgreSQL, supporting features like triggers, PL/pgSQL, foreign keys, sequences, all isolation levels, and explicit locking. CockroachDB offers similar advantages with battle tested multi cloud deployment options, while Aurora DSQL is AWS exclusive and still in preview.

vs. Aurora PostgreSQL

Traditional Aurora PostgreSQL provides full PostgreSQL compatibility with proven reliability but lacks the multi region active active capabilities and automatic horizontal scaling of DSQL. The choice depends on whether distributed architecture benefits outweigh compatibility trade offs.

Production Readiness Assessment

Preview Status Concerns

As of November 2024, Aurora DSQL remains in public preview with several implications:

  • No production SLA guarantees
  • Feature set still evolving
  • Limited regional availability
  • Pricing subject to change at general availability

Missing Observability

During preview, instrumentation is limited. The PostgreSQL interface doesn’t expose EXPLAIN ANALYZE detail for the commit phase, making it difficult to understand what happens during the synchronization and wait phases.

Migration Complexity

Aurora DSQL prioritizes scalability, sacrificing some features for performance. This requires careful evaluation of application dependencies on unsupported PostgreSQL features before attempting migration.

Pricing Considerations

Billing for Aurora DSQL is based on two primary measures: Distributed Processing Units (DPU) and storage, with costs of $8 per million DPU and $0.33 per GB month in US East.

However, DPU consumption varies unpredictably based on query complexity, making cost forecasting extremely difficult. The result of cost modeling exercises amounts to “yes, this will cost you some amount of money,” which is unacceptable when costs rise beyond science experiment levels.

The AWS Free Tier provides 100,000 DPUs and 1 GB month of storage monthly, allowing for initial testing without costs.
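
As a rough illustration of the arithmetic (the workload figures below are hypothetical, and per query DPU consumption is exactly the part that is hard to predict without testing):

# Hypothetical monthly workload; real DPU consumption must be measured
dpu_millions = 50          # 50 million DPUs consumed in the month
storage_gb_months = 200    # 200 GB-month of storage

DPU_PRICE = 8.00           # USD per million DPUs (US East, preview pricing)
STORAGE_PRICE = 0.33       # USD per GB-month (US East, preview pricing)
FREE_DPU_MILLIONS = 0.1    # free tier: 100,000 DPUs per month
FREE_STORAGE_GB_MONTHS = 1 # free tier: 1 GB-month per month

compute = max(dpu_millions - FREE_DPU_MILLIONS, 0) * DPU_PRICE
storage = max(storage_gb_months - FREE_STORAGE_GB_MONTHS, 0) * STORAGE_PRICE
print(f"Estimated monthly cost: ${compute + storage:,.2f}")  # -> $464.87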

Recommendations

For New Applications

Aurora DSQL makes sense for greenfield projects where:

  • Multi region active active is a core requirement
  • Application can be designed around optimistic concurrency from the start
  • Features like foreign keys and triggers aren’t architectural requirements
  • Team accepts preview stage maturity risks

For Existing Applications

Migration from PostgreSQL requires:

  • Comprehensive audit of PostgreSQL feature dependencies
  • Redesign of referential integrity enforcement
  • Implementation of retry logic for optimistic concurrency
  • Extensive testing to validate cost models
  • Acceptance that some features may require application level implementation

Testing Strategy

Before production commitment:

  1. Benchmark actual workloads against Aurora DSQL to measure real DPU consumption
  2. Test at production scale to validate the 10,000 row transaction limit doesn’t impact operations
  3. Implement comprehensive retry logic and verify behavior under contention
  4. Measure cross region latency for your specific geographic requirements
  5. Calculate total cost of ownership including application development effort for missing features

Conclusion

Amazon Aurora DSQL represents significant innovation in distributed SQL database architecture, solving genuine problems around multi region strong consistency and operational simplicity. The technical implementation—particularly the disaggregated architecture and optimistic concurrency control—demonstrates sophisticated engineering.

However, the service makes substantial tradeoffs that limit its applicability. The missing PostgreSQL features, transaction size constraints, and optimistic concurrency requirements create significant migration friction for existing applications. The unpredictable pricing model adds further uncertainty.

For organizations building new globally distributed applications with flexible architectural requirements, Aurora DSQL deserves serious evaluation. For teams with existing PostgreSQL applications or those requiring full PostgreSQL compatibility, traditional Aurora PostgreSQL or alternative distributed SQL databases may provide better paths forward.

As the service matures beyond preview status, AWS will likely address some limitations and provide better cost prediction tools. Until then, Aurora DSQL remains a promising but unfinished solution that requires careful evaluation against specific requirements and willingness to adapt applications to its architectural constraints.

Last Updated: November 2024 | Preview Status: Public Preview
