Create / Migrate WordPress to AWS Graviton: Maximum Performance, Minimum Cost

Running WordPress on ARM-based Graviton instances delivers up to 40% better price-performance compared to x86 equivalents. This guide provides production-ready scripts to deploy an optimised WordPress stack in minutes, plus everything you need to migrate your existing site.

Why Graviton for WordPress?

Graviton3 processors deliver:

  • Up to 40% better price-performance vs comparable x86 instances
  • Up to 25% lower cost for equivalent workloads
  • Up to 60% less energy for the same performance
  • Native ARM64 optimisations for PHP 8.x and MariaDB

The t4g.small instance (2 vCPU, 2GB RAM) at ~$12/month handles most WordPress sites comfortably. For higher traffic, t4g.medium or c7g instances scale beautifully.

Architecture

┌─────────────────────────────────────────────────┐
│                   CloudFront                     │
│              (Optional CDN Layer)                │
└─────────────────────┬───────────────────────────┘
                      │
┌─────────────────────▼───────────────────────────┐
│              Graviton EC2 Instance               │
│  ┌─────────────────────────────────────────────┐│
│  │              Caddy (Reverse Proxy)          ││
│  │         Auto-TLS, HTTP/2, Compression       ││
│  └─────────────────────┬───────────────────────┘│
│                        │                         │
│  ┌─────────────────────▼───────────────────────┐│
│  │              PHP-FPM 8.3                     ││
│  │         OPcache, JIT Compilation            ││
│  └─────────────────────┬───────────────────────┘│
│                        │                         │
│  ┌─────────────────────▼───────────────────────┐│
│  │              MariaDB 10.11                   ││
│  │         InnoDB Optimised, Query Cache       ││
│  └─────────────────────────────────────────────┘│
│                                                  │
│  ┌─────────────────────────────────────────────┐│
│  │              EBS gp3 Volume                  ││
│  │         3000 IOPS, 125 MB/s baseline        ││
│  └─────────────────────────────────────────────┘│
└─────────────────────────────────────────────────┘

Prerequisites

  • AWS CLI configured with appropriate permissions
  • A domain name with DNS you control
  • SSH key pair in your target region
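
A quick way to sanity-check these before launching (the key name, region, and domain below are placeholders for your own values):

aws sts get-caller-identity
aws ec2 describe-key-pairs --key-names your-key-name --region eu-west-1
dig +short yourdomain.com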

Part 1: Launch the Instance

Save this as launch-graviton-wp.sh and run from your local machine:

#!/bin/bash
set -euo pipefail

# Configuration - EDIT THESE
INSTANCE_TYPE="t4g.small"          # t4g.small for small sites, t4g.medium for busier
KEY_NAME="your-key-name"           # Your SSH key pair name
REGION="eu-west-1"                 # Your preferred region
INSTANCE_NAME="wordpress-graviton"
VOLUME_SIZE=30                     # GB - adjust as needed

# Get latest Amazon Linux 2023 ARM64 AMI
AMI_ID=$(aws ec2 describe-images \
    --owners amazon \
    --filters "Name=name,Values=al2023-ami-2023*-arm64" \
              "Name=state,Values=available" \
    --query 'Images | sort_by(@, &CreationDate) | [-1].ImageId' \
    --output text \
    --region "$REGION")

echo "Using AMI: $AMI_ID"

# Create security group
SG_ID=$(aws ec2 create-security-group \
    --group-name "wordpress-graviton-sg" \
    --description "WordPress on Graviton security group" \
    --region "$REGION" \
    --query 'GroupId' \
    --output text 2>/dev/null || \
    aws ec2 describe-security-groups \
        --group-names "wordpress-graviton-sg" \
        --region "$REGION" \
        --query 'SecurityGroups[0].GroupId' \
        --output text)

# Configure security group rules
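# NOTE: this opens SSH (22) to the world for simplicity - consider restricting --cidr to your own IP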
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" --protocol tcp --port 22 --cidr 0.0.0.0/0 --region "$REGION" 2>/dev/null || true
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" --protocol tcp --port 80 --cidr 0.0.0.0/0 --region "$REGION" 2>/dev/null || true
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" --protocol tcp --port 443 --cidr 0.0.0.0/0 --region "$REGION" 2>/dev/null || true

echo "Security group configured: $SG_ID"

# Launch instance
INSTANCE_ID=$(aws ec2 run-instances \
    --image-id "$AMI_ID" \
    --instance-type "$INSTANCE_TYPE" \
    --key-name "$KEY_NAME" \
    --security-group-ids "$SG_ID" \
    --block-device-mappings "DeviceName=/dev/xvda,Ebs={VolumeSize=$VOLUME_SIZE,VolumeType=gp3,Iops=3000,Throughput=125}" \
    --credit-specification CpuCredits=unlimited \
    --metadata-options "HttpTokens=required,HttpEndpoint=enabled" \
    --tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=$INSTANCE_NAME}]" \
    --region "$REGION" \
    --query 'Instances[0].InstanceId' \
    --output text)

echo "Launched instance: $INSTANCE_ID"

# Wait for instance to be running
aws ec2 wait instance-running --instance-ids "$INSTANCE_ID" --region "$REGION"

# Get public IP
PUBLIC_IP=$(aws ec2 describe-instances \
    --instance-ids "$INSTANCE_ID" \
    --region "$REGION" \
    --query 'Reservations[0].Instances[0].PublicIpAddress' \
    --output text)

echo ""
echo "============================================"
echo "Instance launched successfully!"
echo "Instance ID: $INSTANCE_ID"
echo "Public IP:   $PUBLIC_IP"
echo ""
echo "Next steps:"
echo "1. Point your domain's A record to: $PUBLIC_IP"
echo "2. SSH in: ssh -i ~/.ssh/${KEY_NAME}.pem ec2-user@$PUBLIC_IP"
echo "3. Run the WordPress setup script"
echo "============================================"

Run it:

chmod +x launch-graviton-wp.sh
./launch-graviton-wp.sh

Part 2: Install WordPress Stack

SSH into your new instance and save this as setup-wordpress.sh:

#!/bin/bash
set -euo pipefail

# Configuration - EDIT THESE
DOMAIN="yourdomain.com"
WP_DB_NAME="wordpress"
WP_DB_USER="wp_user"
WP_DB_PASS=$(openssl rand -base64 24 | tr -dc 'a-zA-Z0-9' | head -c 24)
MYSQL_ROOT_PASS=$(openssl rand -base64 24 | tr -dc 'a-zA-Z0-9' | head -c 24)
WP_ADMIN_USER="admin"
WP_ADMIN_PASS=$(openssl rand -base64 16 | tr -dc 'a-zA-Z0-9' | head -c 16)
WP_ADMIN_EMAIL="admin@${DOMAIN}"

# Store credentials
mkdir -p /root/.wordpress
cat > /root/.wordpress/credentials << EOF
MySQL Root Password: $MYSQL_ROOT_PASS
WordPress DB Name:   $WP_DB_NAME
WordPress DB User:   $WP_DB_USER
WordPress DB Pass:   $WP_DB_PASS
WordPress Admin:     $WP_ADMIN_USER
WordPress Admin Pass: $WP_ADMIN_PASS
EOF
chmod 600 /root/.wordpress/credentials

echo "==> Installing packages..."
dnf update -y

# Find latest available PHP version
PHP_VERSION=$(dnf list available 'php*' 2>/dev/null | grep -oP 'php\d+\.\d+(?=\.aarch64|-fpm)' | sort -V | tail -1)
if [ -z "$PHP_VERSION" ]; then
    PHP_VERSION="php8.3"  # Fallback
fi
echo "    Using PHP: ${PHP_VERSION}"

# Find latest available MariaDB version
MARIADB_PKG=$(dnf list available 'mariadb*-server' 2>/dev/null | grep -oP 'mariadb\d+-server' | sort -V | tail -1)
if [ -z "$MARIADB_PKG" ]; then
    MARIADB_PKG="mariadb105-server"  # Fallback
fi
echo "    Using MariaDB: ${MARIADB_PKG}"

dnf install -y ${PHP_VERSION} ${PHP_VERSION}-fpm ${PHP_VERSION}-mysqlnd ${PHP_VERSION}-gd ${PHP_VERSION}-xml \
    ${PHP_VERSION}-mbstring ${PHP_VERSION}-opcache ${PHP_VERSION}-zip ${PHP_VERSION}-intl ${PHP_VERSION}-bcmath \
    ${PHP_VERSION}-imagick ${MARIADB_PKG} wget unzip

# Install Caddy
dnf install -y 'dnf-command(copr)'
dnf copr enable -y @caddy/caddy
dnf install -y caddy

echo "==> Configuring MariaDB..."
systemctl enable --now mariadb

mysql -e "ALTER USER 'root'@'localhost' IDENTIFIED BY '${MYSQL_ROOT_PASS}';"
mysql -u root -p"${MYSQL_ROOT_PASS}" << EOF
CREATE DATABASE ${WP_DB_NAME} CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
CREATE USER '${WP_DB_USER}'@'localhost' IDENTIFIED BY '${WP_DB_PASS}';
GRANT ALL PRIVILEGES ON ${WP_DB_NAME}.* TO '${WP_DB_USER}'@'localhost';
FLUSH PRIVILEGES;
EOF

# MariaDB performance tuning
cat > /etc/my.cnf.d/wordpress-optimised.cnf << 'EOF'
[mysqld]
# InnoDB Settings
innodb_buffer_pool_size = 256M
innodb_log_file_size = 64M
innodb_flush_log_at_trx_commit = 2
innodb_flush_method = O_DIRECT
innodb_file_per_table = 1

# Query Cache (MariaDB still supports this)
query_cache_type = 1
query_cache_size = 32M
query_cache_limit = 2M

# Connection Settings
max_connections = 100
thread_cache_size = 8

# Table Settings
table_open_cache = 2000
table_definition_cache = 1000

# Logging
slow_query_log = 1
slow_query_log_file = /var/log/mariadb/slow.log
long_query_time = 2
EOF

systemctl restart mariadb

echo "==> Configuring PHP-FPM..."
cat > /etc/php.d/99-wordpress-optimised.ini << 'EOF'
; Memory and execution
memory_limit = 256M
max_execution_time = 300
max_input_time = 300
post_max_size = 64M
upload_max_filesize = 64M

; OPcache - Critical for performance
opcache.enable = 1
opcache.memory_consumption = 128
opcache.interned_strings_buffer = 16
opcache.max_accelerated_files = 10000
opcache.validate_timestamps = 0
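; NOTE: with validate_timestamps=0, changed PHP files (e.g. after core or plugin
; updates) are not picked up until PHP-FPM is reloaded (systemctl reload php-fpm)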
opcache.revalidate_freq = 0
opcache.save_comments = 1
opcache.enable_file_override = 1

; JIT Compilation (PHP 8+)
opcache.jit = 1255
opcache.jit_buffer_size = 64M

; Realpath cache
realpath_cache_size = 4096K
realpath_cache_ttl = 600
EOF

# PHP-FPM pool configuration
cat > /etc/php-fpm.d/www.conf << 'EOF'
[www]
user = caddy
group = caddy
listen = /run/php-fpm/www.sock
listen.owner = caddy
listen.group = caddy
listen.mode = 0660

pm = dynamic
pm.max_children = 25
pm.start_servers = 5
pm.min_spare_servers = 3
pm.max_spare_servers = 10
pm.max_requests = 500

php_admin_value[error_log] = /var/log/php-fpm/www-error.log
php_admin_flag[log_errors] = on

; Security
php_admin_value[disable_functions] = exec,passthru,shell_exec,system,proc_open,popen
EOF

mkdir -p /var/log/php-fpm
systemctl enable --now php-fpm

echo "==> Installing WordPress..."
mkdir -p /var/www/wordpress
cd /var/www/wordpress

wget -q https://wordpress.org/latest.tar.gz
tar xzf latest.tar.gz --strip-components=1
rm latest.tar.gz

# Generate WordPress salts
SALTS=$(curl -s https://api.wordpress.org/secret-key/1.1/salt/)

cat > wp-config.php << EOF
<?php
define('DB_NAME', '${WP_DB_NAME}');
define('DB_USER', '${WP_DB_USER}');
define('DB_PASSWORD', '${WP_DB_PASS}');
define('DB_HOST', 'localhost');
define('DB_CHARSET', 'utf8mb4');
define('DB_COLLATE', '');

${SALTS}

\$table_prefix = 'wp_';

define('WP_DEBUG', false);
define('WP_DEBUG_LOG', false);
define('WP_DEBUG_DISPLAY', false);

// Performance optimisations
define('WP_CACHE', true);
define('COMPRESS_CSS', true);
define('COMPRESS_SCRIPTS', true);
define('CONCATENATE_SCRIPTS', true);
define('ENFORCE_GZIP', true);

// Security hardening
define('DISALLOW_FILE_EDIT', true);
define('WP_AUTO_UPDATE_CORE', 'minor');

// Memory
define('WP_MEMORY_LIMIT', '256M');
define('WP_MAX_MEMORY_LIMIT', '512M');

if (!defined('ABSPATH')) {
    define('ABSPATH', __DIR__ . '/');
}

require_once ABSPATH . 'wp-settings.php';
EOF

chown -R caddy:caddy /var/www/wordpress
find /var/www/wordpress -type d -exec chmod 755 {} \;
find /var/www/wordpress -type f -exec chmod 644 {} \;

echo "==> Configuring Caddy..."
cat > /etc/caddy/Caddyfile << EOF
${DOMAIN} {
    root * /var/www/wordpress

    # PHP processing
    php_fastcgi unix//run/php-fpm/www.sock {
        resolve_root_symlink
    }

    # Static file serving
    file_server

    # WordPress permalinks
    @notStatic {
        not path /wp-admin/* /wp-includes/* /wp-content/*
        not file
    }
    rewrite @notStatic /index.php

    # Security headers
    header {
        X-Content-Type-Options nosniff
        X-Frame-Options SAMEORIGIN
        X-XSS-Protection "1; mode=block"
        Referrer-Policy strict-origin-when-cross-origin
        -Server
    }

    # Gzip compression
    encode gzip zstd

    # Cache static assets
    @static {
        path *.css *.js *.ico *.gif *.jpg *.jpeg *.png *.svg *.woff *.woff2
    }
    header @static Cache-Control "public, max-age=31536000, immutable"

    # Block sensitive files
    @blocked {
        path /wp-config.php /readme.html /license.txt /.htaccess
        path /wp-includes/*.php
    }
    respond @blocked 404

    # Logging
    log {
        output file /var/log/caddy/access.log
        format console
    }
}
EOF

mkdir -p /var/log/caddy
chown caddy:caddy /var/log/caddy
systemctl enable --now caddy

echo "==> Installing WP-CLI..."
curl -sO https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar
chmod +x wp-cli.phar
mv wp-cli.phar /usr/local/bin/wp

# Complete WordPress installation
cd /var/www/wordpress
sudo -u caddy wp core install \
    --url="https://${DOMAIN}" \
    --title="My WordPress Site" \
    --admin_user="${WP_ADMIN_USER}" \
    --admin_password="${WP_ADMIN_PASS}" \
    --admin_email="${WP_ADMIN_EMAIL}" \
    --skip-email

# Install and activate caching plugin
sudo -u caddy wp plugin install wp-super-cache --activate

echo "==> Setting up automated backups..."
mkdir -p /var/backups/wordpress
cat > /etc/cron.daily/wordpress-backup << 'EOF'
#!/bin/bash
BACKUP_DIR="/var/backups/wordpress"
DATE=$(date +%Y%m%d)
cd /var/www/wordpress
wp db export "${BACKUP_DIR}/db-${DATE}.sql" --allow-root
tar czf "${BACKUP_DIR}/files-${DATE}.tar.gz" wp-content/
find "${BACKUP_DIR}" -type f -mtime +7 -delete
EOF
chmod +x /etc/cron.daily/wordpress-backup

echo ""
echo "============================================"
echo "WordPress installation complete!"
echo ""
echo "Site URL: https://${DOMAIN}"
echo ""
echo "Admin credentials saved to: /root/.wordpress/credentials"
cat /root/.wordpress/credentials
echo ""
echo "============================================"

Run it:

chmod +x setup-wordpress.sh
sudo ./setup-wordpress.sh

Part 3: Migrate Your Existing Site

If you’re migrating from an existing WordPress installation, follow these steps.

What gets migrated:

  • All posts, pages, and media
  • All users and their roles
  • All plugins (files + database settings)
  • All themes (including customisations)
  • All plugin/theme configurations (stored in wp_options table)
  • Widgets, menus, and customizer settings
  • WooCommerce products, orders, customers (if applicable)
  • All custom database tables created by plugins
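
Before exporting, it's worth capturing a few baseline numbers on the old server so you can compare them after import (assumes WP-CLI is available there; adjust the path to your install):

cd /var/www/html
wp core version
wp db size
wp post list --post_type=post --format=count
wp plugin list --format=count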

Step 3a: Export from Old Server

Run this on your existing WordPress server. Save as wp-export.sh:

#!/bin/bash
set -euo pipefail

# Configuration
WP_PATH="/var/www/html"           # Adjust to your WordPress path
EXPORT_DIR="/tmp/wp-migration"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)

# Detect WordPress path if not set correctly
if [ ! -f "${WP_PATH}/wp-config.php" ]; then
    for path in "/var/www/wordpress" "/var/www/html/wordpress" "/home/*/public_html" "/var/www/*/public_html"; do
        if [ -f "${path}/wp-config.php" ]; then
            WP_PATH="$path"
            break
        fi
    done
fi

if [ ! -f "${WP_PATH}/wp-config.php" ]; then
    echo "ERROR: wp-config.php not found. Please set WP_PATH correctly."
    exit 1
fi

echo "==> WordPress found at: ${WP_PATH}"

# Extract database credentials from wp-config.php
DB_NAME=$(grep "DB_NAME" "${WP_PATH}/wp-config.php" | cut -d "'" -f 4)
DB_USER=$(grep "DB_USER" "${WP_PATH}/wp-config.php" | cut -d "'" -f 4)
DB_PASS=$(grep "DB_PASSWORD" "${WP_PATH}/wp-config.php" | cut -d "'" -f 4)
DB_HOST=$(grep "DB_HOST" "${WP_PATH}/wp-config.php" | cut -d "'" -f 4)

echo "==> Database: ${DB_NAME}"

# Create export directory
mkdir -p "${EXPORT_DIR}"
cd "${EXPORT_DIR}"

echo "==> Exporting database..."
mysqldump -h "${DB_HOST}" -u "${DB_USER}" -p"${DB_PASS}" \
    --single-transaction \
    --quick \
    --lock-tables=false \
    --routines \
    --triggers \
    "${DB_NAME}" > database.sql

DB_SIZE=$(ls -lh database.sql | awk '{print $5}')
echo "    Database exported: ${DB_SIZE}"

echo "==> Exporting wp-content..."
tar czf wp-content.tar.gz -C "${WP_PATH}" wp-content

CONTENT_SIZE=$(ls -lh wp-content.tar.gz | awk '{print $5}')
echo "    wp-content exported: ${CONTENT_SIZE}"

echo "==> Exporting wp-config.php..."
cp "${WP_PATH}/wp-config.php" wp-config.php.bak

echo "==> Creating migration package..."
tar czf "wordpress-migration-${TIMESTAMP}.tar.gz" \
    database.sql \
    wp-content.tar.gz \
    wp-config.php.bak

rm -f database.sql wp-content.tar.gz wp-config.php.bak

PACKAGE_SIZE=$(ls -lh "wordpress-migration-${TIMESTAMP}.tar.gz" | awk '{print $5}')

echo ""
echo "============================================"
echo "Export complete!"
echo ""
echo "Package: ${EXPORT_DIR}/wordpress-migration-${TIMESTAMP}.tar.gz"
echo "Size:    ${PACKAGE_SIZE}"
echo ""
echo "Transfer to new server with:"
echo "  scp ${EXPORT_DIR}/wordpress-migration-${TIMESTAMP}.tar.gz ec2-user@NEW_IP:/tmp/"
echo "============================================"

Step 3b: Transfer the Export

scp /tmp/wp-migration/wordpress-migration-*.tar.gz ec2-user@YOUR_NEW_IP:/tmp/

Step 3c: Import on New Server

Run this on your new Graviton instance. Save as wp-import.sh:

#!/bin/bash
set -euo pipefail

# Configuration - EDIT THESE
MIGRATION_FILE="${1:-/tmp/wordpress-migration-*.tar.gz}"
OLD_DOMAIN="oldsite.com"          # Your old domain
NEW_DOMAIN="newsite.com"          # Your new domain (can be same)
WP_PATH="/var/www/wordpress"

# Resolve migration file path
MIGRATION_FILE=$(ls -1 ${MIGRATION_FILE} 2>/dev/null | head -1)

if [ ! -f "${MIGRATION_FILE}" ]; then
    echo "ERROR: Migration file not found: ${MIGRATION_FILE}"
    echo "Usage: $0 /path/to/wordpress-migration-XXXXXX.tar.gz"
    exit 1
fi

echo "==> Using migration file: ${MIGRATION_FILE}"

# Get database credentials from existing wp-config
if [ ! -f "${WP_PATH}/wp-config.php" ]; then
    echo "ERROR: wp-config.php not found at ${WP_PATH}"
    echo "Please run the WordPress setup script first"
    exit 1
fi

DB_NAME=$(grep "DB_NAME" "${WP_PATH}/wp-config.php" | cut -d "'" -f 4)
DB_USER=$(grep "DB_USER" "${WP_PATH}/wp-config.php" | cut -d "'" -f 4)
DB_PASS=$(grep "DB_PASSWORD" "${WP_PATH}/wp-config.php" | cut -d "'" -f 4)
MYSQL_ROOT_PASS=$(cat /root/.wordpress/credentials | grep "MySQL Root" | awk '{print $4}')

echo "==> Extracting migration package..."
TEMP_DIR=$(mktemp -d)
cd "${TEMP_DIR}"
tar xzf "${MIGRATION_FILE}"

echo "==> Backing up current installation..."
BACKUP_DIR="/var/backups/wordpress/pre-migration-$(date +%Y%m%d_%H%M%S)"
mkdir -p "${BACKUP_DIR}"
cp -r "${WP_PATH}/wp-content" "${BACKUP_DIR}/" 2>/dev/null || true
mysqldump -u root -p"${MYSQL_ROOT_PASS}" "${DB_NAME}" > "${BACKUP_DIR}/database.sql" 2>/dev/null || true

echo "==> Importing database..."
mysql -u root -p"${MYSQL_ROOT_PASS}" << EOF
DROP DATABASE IF EXISTS ${DB_NAME};
CREATE DATABASE ${DB_NAME} CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
GRANT ALL PRIVILEGES ON ${DB_NAME}.* TO '${DB_USER}'@'localhost';
FLUSH PRIVILEGES;
EOF

mysql -u root -p"${MYSQL_ROOT_PASS}" "${DB_NAME}" < database.sql

echo "==> Importing wp-content..."
rm -rf "${WP_PATH}/wp-content"
tar xzf wp-content.tar.gz -C "${WP_PATH}"
chown -R caddy:caddy "${WP_PATH}/wp-content"
find "${WP_PATH}/wp-content" -type d -exec chmod 755 {} \;
find "${WP_PATH}/wp-content" -type f -exec chmod 644 {} \;

echo "==> Updating URLs in database..."
cd "${WP_PATH}"

OLD_URL_HTTP="http://${OLD_DOMAIN}"
OLD_URL_HTTPS="https://${OLD_DOMAIN}"
NEW_URL="https://${NEW_DOMAIN}"

# Install WP-CLI if not present
if ! command -v wp &> /dev/null; then
    curl -sO https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar
    chmod +x wp-cli.phar
    mv wp-cli.phar /usr/local/bin/wp
fi

echo "    Replacing ${OLD_URL_HTTPS} with ${NEW_URL}..."
sudo -u caddy wp search-replace "${OLD_URL_HTTPS}" "${NEW_URL}" --all-tables --precise --skip-columns=guid 2>/dev/null || true

echo "    Replacing ${OLD_URL_HTTP} with ${NEW_URL}..."
sudo -u caddy wp search-replace "${OLD_URL_HTTP}" "${NEW_URL}" --all-tables --precise --skip-columns=guid 2>/dev/null || true

echo "    Replacing //${OLD_DOMAIN} with //${NEW_DOMAIN}..."
sudo -u caddy wp search-replace "//${OLD_DOMAIN}" "//${NEW_DOMAIN}" --all-tables --precise --skip-columns=guid 2>/dev/null || true

echo "==> Flushing caches and rewrite rules..."
sudo -u caddy wp cache flush
sudo -u caddy wp rewrite flush

echo "==> Reactivating plugins..."
# Some plugins may deactivate during migration - reactivate all
sudo -u caddy wp plugin activate --all 2>/dev/null || true

echo "==> Verifying import..."
POST_COUNT=$(sudo -u caddy wp post list --post_type=post --format=count)
PAGE_COUNT=$(sudo -u caddy wp post list --post_type=page --format=count)
USER_COUNT=$(sudo -u caddy wp user list --format=count)
PLUGIN_COUNT=$(sudo -u caddy wp plugin list --format=count)

echo ""
echo "============================================"
echo "Migration complete!"
echo ""
echo "Imported content:"
echo "  - Posts:   ${POST_COUNT}"
echo "  - Pages:   ${PAGE_COUNT}"
echo "  - Users:   ${USER_COUNT}"
echo "  - Plugins: ${PLUGIN_COUNT}"
echo ""
echo "Site URL: https://${NEW_DOMAIN}"
echo ""
echo "Pre-migration backup: ${BACKUP_DIR}"
echo "============================================"

rm -rf "${TEMP_DIR}"

Run it:

chmod +x wp-import.sh
sudo ./wp-import.sh /tmp/wordpress-migration-*.tar.gz

Step 3d: Verify Migration

#!/bin/bash
set -euo pipefail

WP_PATH="/var/www/wordpress"
cd "${WP_PATH}"

echo "==> WordPress Verification Report"
echo "=================================="
echo ""

echo "WordPress Version:"
sudo -u caddy wp core version
echo ""

echo "Site URL Configuration:"
sudo -u caddy wp option get siteurl
sudo -u caddy wp option get home
echo ""

echo "Database Status:"
sudo -u caddy wp db check
echo ""

echo "Content Summary:"
echo "  Posts:      $(sudo -u caddy wp post list --post_type=post --format=count)"
echo "  Pages:      $(sudo -u caddy wp post list --post_type=page --format=count)"
echo "  Media:      $(sudo -u caddy wp post list --post_type=attachment --format=count)"
echo "  Users:      $(sudo -u caddy wp user list --format=count)"
echo ""

echo "Plugin Status:"
sudo -u caddy wp plugin list --format=table
echo ""

echo "Uploads Directory:"
UPLOAD_COUNT=$(find "${WP_PATH}/wp-content/uploads" -type f 2>/dev/null | wc -l)
UPLOAD_SIZE=$(du -sh "${WP_PATH}/wp-content/uploads" 2>/dev/null | cut -f1)
echo "  Files: ${UPLOAD_COUNT}"
echo "  Size:  ${UPLOAD_SIZE}"
echo ""

echo "Service Status:"
echo "  PHP-FPM: $(systemctl is-active php-fpm)"
echo "  MariaDB: $(systemctl is-active mariadb)"
echo "  Caddy:   $(systemctl is-active caddy)"
echo ""

echo "Page Load Test:"
DOMAIN=$(sudo -u caddy wp option get siteurl | sed 's|https://||' | sed 's|/.*||')
curl -w "  Total time: %{time_total}s\n  HTTP code: %{http_code}\n" -o /dev/null -s "https://${DOMAIN}/"

Rollback if Needed

If something goes wrong:

#!/bin/bash
set -euo pipefail

BACKUP_DIR=$(ls -1d /var/backups/wordpress/pre-migration-* 2>/dev/null | tail -1)

if [ -z "${BACKUP_DIR}" ]; then
    echo "ERROR: No backup found"
    exit 1
fi

echo "==> Rolling back to: ${BACKUP_DIR}"

WP_PATH="/var/www/wordpress"
MYSQL_ROOT_PASS=$(cat /root/.wordpress/credentials | grep "MySQL Root" | awk '{print $4}')
DB_NAME=$(grep "DB_NAME" "${WP_PATH}/wp-config.php" | cut -d "'" -f 4)

mysql -u root -p"${MYSQL_ROOT_PASS}" "${DB_NAME}" < "${BACKUP_DIR}/database.sql"

rm -rf "${WP_PATH}/wp-content"
cp -r "${BACKUP_DIR}/wp-content" "${WP_PATH}/"
chown -R caddy:caddy "${WP_PATH}/wp-content"

cd "${WP_PATH}"
sudo -u caddy wp cache flush
sudo -u caddy wp rewrite flush

echo "Rollback complete!"

Part 4: Post-Installation Optimisations

After setup (or migration), run these additional optimisations:

#!/bin/bash

cd /var/www/wordpress

# Remove default content
sudo -u caddy wp post delete 1 2 --force 2>/dev/null || true
sudo -u caddy wp theme delete twentytwentytwo twentytwentythree 2>/dev/null || true

# Update everything
sudo -u caddy wp core update
sudo -u caddy wp plugin update --all
sudo -u caddy wp theme update --all

# Configure WP Super Cache
sudo -u caddy wp super-cache enable 2>/dev/null || true

# Set optimal permalink structure
sudo -u caddy wp rewrite structure '/%postname%/'
sudo -u caddy wp rewrite flush

echo "Optimisations complete!"

Performance Verification

Check your stack is running optimally:

# Verify PHP OPcache status
php -i | grep -i opcache

# Check PHP-FPM status
systemctl status php-fpm

# Test page load time
curl -w "@-" -o /dev/null -s "https://yourdomain.com" << 'EOF'
     time_namelookup:  %{time_namelookup}s
        time_connect:  %{time_connect}s
     time_appconnect:  %{time_appconnect}s
    time_pretransfer:  %{time_pretransfer}s
       time_redirect:  %{time_redirect}s
  time_starttransfer:  %{time_starttransfer}s
                     ----------
          time_total:  %{time_total}s
EOF
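
For a rough concurrency check, ApacheBench also works well (on Amazon Linux it comes from the httpd-tools package; the request count and concurrency below are just examples):

sudo dnf install -y httpd-tools
ab -n 200 -c 10 https://yourdomain.com/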

Cost Comparison

Instance      vCPU   RAM    Monthly Cost   Use Case
t4g.micro     2      1GB    ~$6            Dev/testing
t4g.small     2      2GB    ~$12           Small blogs
t4g.medium    2      4GB    ~$24           Medium traffic
t4g.large     2      8GB    ~$48           High traffic
c7g.medium    1      2GB    ~$25           CPU-intensive

All prices are approximate for eu-west-1 with on-demand pricing. Reserved Instances or Savings Plans can reduce costs by a further 30-60%.


Troubleshooting

502 Bad Gateway: PHP-FPM socket permissions issue

systemctl restart php-fpm
ls -la /run/php-fpm/www.sock

Database connection error: Check MariaDB is running

systemctl status mariadb
mysql -u wp_user -p wordpress

SSL certificate not working: Ensure DNS is pointing to instance IP

dig +short yourdomain.com
curl -I https://yourdomain.com

OPcache not working: Verify with phpinfo

php -r "phpinfo();" | grep -i opcache.enable

Quick Reference

# 1. Launch instance (local machine)
./launch-graviton-wp.sh

# 2. SSH in and setup WordPress
ssh -i ~/.ssh/key.pem ec2-user@IP
sudo ./setup-wordpress.sh

# 3. If migrating - on old server
./wp-export.sh
scp /tmp/wp-migration/wordpress-migration-*.tar.gz ec2-user@NEW_IP:/tmp/

# 4. If migrating - on new server
sudo ./wp-import.sh /tmp/wordpress-migration-*.tar.gz

This setup delivers a production-ready WordPress installation that’ll handle significant traffic while keeping your AWS bill minimal. The combination of Graviton’s price-performance, Caddy’s efficiency, and properly-tuned PHP creates a stack that punches well above its weight class.


Java 25 AOT Cache: A Deep Dive into Ahead of Time Compilation and Training

1. Introduction

Java 25 brings a significant enhancement to application startup performance through the AOT (Ahead of Time) cache, introduced by JEP 483 (Ahead-of-Time Class Loading & Linking) and refined in this release. This capability allows the JVM to cache the results of class loading, bytecode parsing, verification, and method compilation, dramatically reducing startup times for subsequent application runs. For enterprise applications, particularly those built with frameworks like Spring, this represents a fundamental shift in how we approach deployment and scaling strategies.

2. Understanding Ahead of Time Compilation

2.1 What is AOT Compilation?

Ahead of Time compilation differs from traditional Just in Time (JIT) compilation in a fundamental way: the compilation work happens before the application runs, rather than during runtime. In the standard JVM model, bytecode is interpreted initially, and the JIT compiler identifies hot paths to compile into native machine code. This process consumes CPU cycles and memory during application startup and warmup.

AOT compilation moves this work earlier in the lifecycle. The JVM can analyze class files, perform verification, parse bytecode structures, and even compile frequently executed methods to native code ahead of time. The results are stored in a cache that subsequent JVM instances can load directly, bypassing the expensive initialization phase.
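
You can watch this runtime compilation work on an ordinary run with the standard -XX:+PrintCompilation flag; every line is a method being compiled while the application is already serving traffic, which is exactly the work an AOT cache front-loads (myapp.jar is a placeholder):

java -XX:+PrintCompilation -jar myapp.jar | head -50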

2.2 The AOT Cache Architecture

The Java 25 AOT cache operates at multiple levels:

Class Data Sharing (CDS): The foundation layer that shares common class metadata across JVM instances. CDS has existed since Java 5 but has been significantly enhanced.

Application Class Data Sharing (AppCDS): Extends CDS to include application classes, not just JDK classes. This reduces class loading overhead for your specific application code.

Dynamic CDS Archives: Automatically generates CDS archives based on the classes loaded during a training run. This is the key enabler for the AOT cache feature.

Compiled Code Cache: Stores native code generated by the JIT compiler during training runs, allowing subsequent instances to load pre-compiled methods directly.

The cache is stored as a memory mapped file that the JVM can load efficiently at startup. The file format is optimized for fast access and includes metadata about the Java version, configuration, and class file checksums to ensure compatibility.
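
The CDS and dynamic-archive layers described above can be exercised on their own with long-standing JDK flags, which is a useful way to build intuition before adopting the full AOT cache. A minimal sketch (the archive name and jar are placeholders):

# Training run: record the classes the application actually loads and
# write them into a dynamic CDS archive when the JVM exits
java -XX:ArchiveClassesAtExit=app-cds.jsa -jar myapp.jar

# Subsequent runs: map the archive instead of re-loading and re-verifying classes
java -XX:SharedArchiveFile=app-cds.jsa -jar myapp.jar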

2.3 The Training Process

Training is the process of running your application under representative load to identify which classes to load, which methods to compile, and what optimization decisions to make. During training, the JVM records:

  1. All classes loaded and their initialization order
  2. Method compilation decisions and optimization levels
  3. Inline caching data structures
  4. Class hierarchy analysis results
  5. Branch prediction statistics
  6. Allocation profiles

The training run produces an AOT cache file that captures this runtime behavior. Subsequent JVM instances can then load this cache and immediately benefit from the pre-computed optimization decisions.

3. GraalVM Native Image vs Java 25 AOT Cache

3.1 Architectural Differences

GraalVM Native Image and Java 25 AOT cache solve similar problems but use fundamentally different approaches.

GraalVM Native Image performs closed world analysis at build time. It analyzes your entire application and all dependencies, determines which code paths are reachable, and compiles everything into a single native executable. The result is a standalone binary that:

  • Starts in milliseconds (typically 10-50ms)
  • Uses minimal memory (often 10-50MB at startup)
  • Contains no JVM or bytecode interpreter
  • Cannot load classes dynamically without explicit configuration
  • Requires build time configuration for reflection, JNI, and resources

Java 25 AOT Cache operates within the standard JVM runtime. It accelerates the JVM startup process but maintains full Java semantics:

  • Starts faster than standard JVM (typically 2-5x improvement)
  • Retains full dynamic capabilities (reflection, dynamic proxies, etc.)
  • Works with existing applications without code changes
  • Supports dynamic class loading
  • Falls back to standard JIT compilation for uncached methods

3.2 Performance Comparison

For a typical Spring Boot application (approximately 200 classes, moderate dependency graph):

Standard JVM: 8-12 seconds to first request
Java 25 AOT Cache: 2-4 seconds to first request
GraalVM Native Image: 50-200ms to first request

Memory consumption at startup:

Standard JVM: 150-300MB RSS
Java 25 AOT Cache: 120-250MB RSS
GraalVM Native Image: 30-80MB RSS

The AOT cache provides a middle ground: significant startup improvements without the complexity and limitations of native compilation.

3.3 When to Choose Each Approach

Use GraalVM Native Image when:

  • Startup time is critical (serverless, CLI tools)
  • Memory footprint must be minimal
  • Application is relatively static with well-defined entry points
  • You can invest in build time configuration

Use Java 25 AOT Cache when:

  • You need significant startup improvements but not extreme optimization
  • Dynamic features are essential (heavy reflection, dynamic proxies)
  • Application compatibility is paramount
  • You want a simpler deployment model
  • Framework support for native compilation is limited

4. Implementing AOT Cache in Build Pipelines

4.1 Basic AOT Cache Generation

The simplest implementation uses the -XX:AOTCache flag to specify the cache file location:

# Training run: generate the cache
java -XX:AOTCache=app.aot \
     -XX:AOTMode=record \
     -jar myapp.jar

# Production run: use the cache  
java -XX:AOTCache=app.aot \
     -XX:AOTMode=load \
     -jar myapp.jar

The AOTMode parameter controls behavior:

  • record: Generate a new cache file
  • load: Use an existing cache file
  • auto: Load if available, record if not (useful for development)

4.2 Docker Multi-Stage Build Integration

A production ready Docker build separates training from the final image:

# Stage 1: Build the application
FROM eclipse-temurin:25-jdk-alpine AS builder
WORKDIR /build
COPY . .
RUN ./mvnw clean package -DskipTests

# Stage 2: Training run
FROM eclipse-temurin:25-jdk-alpine AS trainer
WORKDIR /app
COPY --from=builder /build/target/myapp.jar .

# Set up training environment
ENV JAVA_TOOL_OPTIONS="-XX:AOTCache=/app/cache/app.aot -XX:AOTMode=record"

# curl is not included in the alpine Temurin base image
RUN apk add --no-cache curl

# Run training workload
RUN mkdir -p /app/cache && \
    timeout 120s java -jar myapp.jar & \
    PID=$! && \
    sleep 10 && \
    # Execute representative requests
    curl -X POST http://localhost:8080/api/initialize && \
    curl http://localhost:8080/api/warmup && \
    # alpine's shell has no {1..50} brace expansion, so use seq
    for i in $(seq 1 50); do \
        curl http://localhost:8080/api/common-operation; \
    done && \
    # Graceful shutdown to flush cache
    kill -TERM $PID && \
    wait $PID || true

# Stage 3: Production image
FROM eclipse-temurin:25-jre-alpine
WORKDIR /app
COPY --from=builder /build/target/myapp.jar .
COPY --from=trainer /app/cache/app.aot /app/cache/

ENV JAVA_TOOL_OPTIONS="-XX:AOTCache=/app/cache/app.aot -XX:AOTMode=load"
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "myapp.jar"]

4.3 Training Workload Strategy

The quality of the AOT cache depends entirely on the training workload. A comprehensive training strategy includes:

#!/bin/bash
# training-workload.sh

APP_URL="http://localhost:8080"
WARMUP_REQUESTS=100

echo "Starting training workload..."

# 1. Health check and initialization
curl -f $APP_URL/actuator/health || exit 1

# 2. Execute all major code paths
endpoints=(
    "/api/users"
    "/api/products" 
    "/api/orders"
    "/api/reports/daily"
    "/api/search?q=test"
)

for endpoint in "${endpoints[@]}"; do
    for i in $(seq 1 20); do
        curl -s "$APP_URL$endpoint" > /dev/null
    done
done

# 3. Trigger common business operations
curl -X POST "$APP_URL/api/orders" \
     -H "Content-Type: application/json" \
     -d '{"product": "TEST", "quantity": 1}'

# 4. Exercise error paths
curl -s "$APP_URL/api/nonexistent" > /dev/null
curl -s "$APP_URL/api/orders/99999" > /dev/null

# 5. Warmup most common paths heavily
for i in $(seq 1 $WARMUP_REQUESTS); do
    curl -s "$APP_URL/api/users" > /dev/null
done

echo "Training workload complete"

4.4 CI/CD Pipeline Integration

A complete Jenkins pipeline example:

pipeline {
    agent any

    environment {
        DOCKER_REGISTRY = 'myregistry.io'
        APP_NAME = 'myapp'
        AOT_CACHE_PATH = '/app/cache/app.aot'
    }

    stages {
        stage('Build') {
            steps {
                sh './mvnw clean package'
            }
        }

        stage('Generate AOT Cache') {
            steps {
                script {
                    // Start app in recording mode
                    sh """
                        java -XX:AOTCache=\${WORKSPACE}/app.aot \
                             -XX:AOTMode=record \
                             -jar target/myapp.jar &
                        APP_PID=\$!

                        # Wait for startup
                        sleep 30

                        # Execute training workload
                        ./scripts/training-workload.sh

                        # Graceful shutdown
                        kill -TERM \$APP_PID
                        wait \$APP_PID || true
                    """
                }
            }
        }

        stage('Build Docker Image') {
            steps {
                sh """
                    docker build \
                        --build-arg AOT_CACHE=app.aot \
                        -t ${DOCKER_REGISTRY}/${APP_NAME}:${BUILD_NUMBER} \
                        -t ${DOCKER_REGISTRY}/${APP_NAME}:latest \
                        .
                """
            }
        }

        stage('Validate Performance') {
            steps {
                script {
                    // Start the container and measure time until the health endpoint answers
                    // (host port 18080 is arbitrary; the image exposes 8080)
                    def startTime = System.currentTimeMillis()
                    sh "docker run -d --rm --name aot-smoke -p 18080:8080 ${DOCKER_REGISTRY}/${APP_NAME}:${BUILD_NUMBER}"
                    sh "timeout 60 sh -c 'until curl -sf http://localhost:18080/actuator/health; do sleep 1; done'"
                    def elapsed = System.currentTimeMillis() - startTime
                    sh "docker rm -f aot-smoke || true"

                    if (elapsed > 5000) {
                        error("Startup time ${elapsed}ms exceeds threshold")
                    }
                }
            }
        }

        stage('Push') {
            steps {
                sh "docker push ${DOCKER_REGISTRY}/${APP_NAME}:${BUILD_NUMBER}"
                sh "docker push ${DOCKER_REGISTRY}/${APP_NAME}:latest"
            }
        }
    }
}

4.5 Kubernetes Deployment with Init Containers

For Kubernetes environments, you can generate the cache using init containers:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  template:
    spec:
      initContainers:
      - name: aot-cache-generator
        image: myapp:latest
        command: ["/bin/sh", "-c"]
        args:
          - |
            java -XX:AOTCache=/cache/app.aot \
                 -XX:AOTMode=record \
                 -XX:+UnlockExperimentalVMOptions \
                 -jar /app/myapp.jar &
            PID=$!
            sleep 30
            /scripts/training-workload.sh
            kill -TERM $PID
            wait $PID || true
        volumeMounts:
        - name: aot-cache
          mountPath: /cache

      containers:
      - name: app
        image: myapp:latest
        env:
        - name: JAVA_TOOL_OPTIONS
          value: "-XX:AOTCache=/cache/app.aot -XX:AOTMode=load"
        volumeMounts:
        - name: aot-cache
          mountPath: /cache

      volumes:
      - name: aot-cache
        emptyDir: {}
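      # Note: with emptyDir the init container re-runs the training workload on
      # every pod start, so each replica pays the cache-generation cost once.
      # Baking the cache into the image (as in the Docker build above) avoids
      # repeating the training run per pod.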

5. Spring Framework Optimization

5.1 Spring Startup Analysis

Spring applications are particularly good candidates for AOT optimization due to their extensive use of:

  • Component scanning and classpath analysis
  • Annotation processing and reflection
  • Proxy generation (AOP, transactions, security)
  • Bean instantiation and dependency injection
  • Auto configuration evaluation

A typical Spring Boot 3.x application with 150 beans and standard dependencies spends startup time as follows:

Standard JVM (no AOT):
- Class loading and verification: 2.5s (25%)
- Spring context initialization: 4.5s (45%)
- Bean instantiation: 2.0s (20%)
- JIT compilation warmup: 1.0s (10%)
Total: 10.0s

With AOT Cache:
- Class loading (from cache): 0.5s (20%)
- Spring context initialization: 1.5s (60%)
- Bean instantiation: 0.3s (12%)
- JIT compilation (pre-compiled): 0.2s (8%)
Total: 2.5s (75% improvement)
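
To get a comparable breakdown for your own application, time startup from Spring Boot's "Started ... in N seconds" log line and count class-loading work with the JVM's unified logging (a rough sketch; the jar name is a placeholder):

# Count classes loaded during startup - repeat with and without the AOT cache
java -Xlog:class+load=info:file=classload.log -jar myapp.jar
grep -c 'class,load' classload.log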

5.2 Spring Specific Configuration

Spring Boot 3.0+ includes native AOT support. Enable it in your build configuration:

<!-- pom.xml -->
<build>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
            <configuration>
                <image>
                    <env>
                        <BP_JVM_VERSION>25</BP_JVM_VERSION>
                    </env>
                </image>
            </configuration>
            <executions>
                <execution>
                    <id>process-aot</id>
                    <goals>
                        <goal>process-aot</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>

Configure AOT processing in your application:

@Configuration
public class AotConfiguration {

    @Bean
    public RuntimeHintsRegistrar customHintsRegistrar() {
        return hints -> {
            // Register reflection hints for runtime-discovered classes
            hints.reflection()
                .registerType(MyDynamicClass.class, 
                    MemberCategory.INVOKE_DECLARED_CONSTRUCTORS,
                    MemberCategory.INVOKE_DECLARED_METHODS);

            // Register resource hints
            hints.resources()
                .registerPattern("templates/*.html")
                .registerPattern("data/*.json");

            // Register proxy hints
            hints.proxies()
                .registerJdkProxy(MyService.class, TransactionalProxy.class);
        };
    }
}

5.3 Measured Performance Improvements

Real world measurements from a medium complexity Spring Boot application (e-commerce platform with 200+ beans):

Cold Start (no AOT cache):

Application startup time: 11.3s
Memory at startup: 285MB RSS
Time to first request: 12.1s
Peak memory during warmup: 420MB

With AOT Cache (trained):

Application startup time: 2.8s (75% improvement)
Memory at startup: 245MB RSS (14% improvement)
Time to first request: 3.2s (74% improvement)
Peak memory during warmup: 380MB (10% improvement)

Savings Breakdown:

  • Eliminated 8.5s of initialization overhead
  • Saved 40MB of temporary objects during startup
  • Reduced GC pressure during warmup by ~35%
  • First meaningful response 8.9s faster

For a 10 instance deployment, this translates to:

  • 85 seconds less total startup time per rolling deployment
  • Faster autoscaling response (new pods ready in 3s vs 12s)
  • Reduced CPU consumption during startup phase by ~60%

5.4 Spring Boot Actuator Integration

Monitor AOT cache effectiveness via custom metrics:

@Component
public class AotCacheMetrics {

    private final MeterRegistry registry;

    public AotCacheMetrics(MeterRegistry registry) {
        this.registry = registry;
        exposeAotMetrics();
    }

    private void exposeAotMetrics() {
        Gauge.builder("aot.cache.enabled", this::isAotCacheEnabled)
            .description("Whether AOT cache is enabled and loaded")
            .register(registry);

        Gauge.builder("aot.cache.hit.ratio", this::getCacheHitRatio)
            .description("Percentage of methods loaded from cache")
            .register(registry);
    }

    private double isAotCacheEnabled() {
        // -XX flags are not exposed as system properties; inspect the JVM input arguments instead
        var jvmArgs = java.lang.management.ManagementFactory.getRuntimeMXBean().getInputArguments();
        boolean cacheConfigured = jvmArgs.stream().anyMatch(arg -> arg.startsWith("-XX:AOTCache="));
        boolean recording = jvmArgs.contains("-XX:AOTMode=record");
        return (cacheConfigured && !recording) ? 1.0 : 0.0;
    }

    private double getCacheHitRatio() {
        // Access JVM internals via JMX or internal APIs
        // This is illustrative - actual implementation depends on JVM exposure
        return 0.85; // Placeholder
    }
}

6. Caveats and Limitations

6.1 Cache Invalidation Challenges

The AOT cache contains compiled code and metadata that depends on:

Class file checksums: If any class file changes, the corresponding cache entries are invalid. Even minor code changes invalidate cached compilation results.

JVM version: Cache files are not portable across Java versions. A cache generated with Java 25.0.1 cannot be used with 25.0.2 if internal JVM structures changed.

JVM configuration: Heap sizes, GC algorithms, and other flags affect compilation decisions. The cache must match the production configuration.

Dependency versions: Changes to any dependency class files invalidate portions of the cache, potentially requiring full regeneration.

This means:

  • Every application version needs a new AOT cache
  • Caches should be generated in CI/CD, not manually
  • Cache generation must match production JVM flags exactly
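
One way to enforce this in CI is to key the cache artifact on the inputs that invalidate it, so a stale cache can never be loaded by mistake. A sketch (the naming scheme and the PROD_JVM_FLAGS variable are illustrative):

# Name the cache after the application build, the exact JDK version,
# and a hash of the production JVM flags used for training
APP_SHA=$(git rev-parse --short HEAD)
JDK_VER=$(java -XshowSettings:properties -version 2>&1 | awk '/java.runtime.version/ {print $3}')
FLAGS_SHA=$(printf '%s' "$PROD_JVM_FLAGS" | sha256sum | cut -c1-8)

CACHE_NAME="app-${APP_SHA}-jdk${JDK_VER}-flags${FLAGS_SHA}.aot"
echo "AOT cache artifact: ${CACHE_NAME}"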

6.2 Training Data Quality

The AOT cache is only as good as the training workload. Poor training leads to:

Incomplete coverage: Methods not executed during training remain uncached. First execution still pays JIT compilation cost.

Suboptimal optimizations: If training load doesn’t match production patterns, the compiler may make wrong inlining or optimization decisions.

Biased compilation: Over-representing rare code paths in training can waste cache space and lead to suboptimal production performance.

Best practices for training:

  • Execute all critical business operations
  • Include authentication and authorization paths
  • Trigger database queries and external API calls
  • Exercise error handling paths
  • Match production request distribution as closely as possible

6.3 Memory Overhead

The AOT cache file is memory mapped and consumes address space:

Small applications: 20-50MB cache file
Medium applications: 50-150MB cache file
Large applications: 150-400MB cache file

This is additional overhead beyond normal heap requirements. For memory constrained environments, the tradeoff may not be worthwhile. Calculate whether startup time savings justify the persistent memory consumption.

6.4 Build Time Implications

Generating AOT caches adds time to the build process:

Typical overhead: 60-180 seconds per build
Components:

  • Application startup for training: 20-60s
  • Training workload execution: 30-90s
  • Cache serialization: 10-30s

For large monoliths, this can extend to 5-10 minutes. In CI/CD pipelines with frequent builds, this overhead accumulates. Consider:

  • Generating caches only for release builds
  • Caching AOT cache files between similar builds
  • Parallel cache generation for microservices

6.5 Debugging Complications

Pre-compiled code complicates debugging:

Stack traces: May reference optimized code that doesn’t match source line numbers exactly
Breakpoints: Can be unreliable in heavily optimized cached methods
Variable inspection: Compiler optimizations may eliminate intermediate variables

For development, disable AOT caching:

# Development environment
java -XX:AOTMode=off -jar myapp.jar

# Or simply omit the AOT flags entirely
java -jar myapp.jar

6.6 Dynamic Class Loading

Applications that generate classes at runtime face challenges:

Dynamic proxies: Generated proxy classes cannot be pre-cached
Bytecode generation: Libraries like ASM that generate code at runtime bypass the cache
Plugin architectures: Dynamically loaded plugins don’t benefit from main application cache

While the AOT cache handles core application classes well, highly dynamic frameworks may see reduced benefits. Spring’s use of CGLIB proxies and dynamic features means some runtime generation is unavoidable.

6.7 Profile Guided Optimization Drift

Over time, production workload patterns may diverge from training workload:

New features: Added endpoints not in training data
Changed patterns: User behavior shifts rendering training data obsolete
Seasonal variations: Holiday traffic patterns differ from normal training scenarios

Mitigation strategies:

  • Regenerate caches with each deployment
  • Update training workloads based on production telemetry
  • Monitor cache hit rates and retrain if they degrade
  • Consider multiple training scenarios for different deployment contexts

7. Autoscaling Benefits

7.1 Kubernetes Horizontal Pod Autoscaling

AOT cache dramatically improves HPA responsiveness:

Traditional JVM scenario:

1. Load spike detected at t=0
2. HPA triggers scale out at t=10s
3. New pod scheduled at t=15s
4. Container starts at t=20s
5. JVM starts, application initializes at t=32s
6. Pod marked ready, receives traffic at t=35s
Total response time: 35 seconds

With AOT cache:

1. Load spike detected at t=0
2. HPA triggers scale out at t=10s
3. New pod scheduled at t=15s
4. Container starts at t=20s
5. JVM starts with cached data at t=23s
6. Pod marked ready, receives traffic at t=25s
Total response time: 25 seconds (29% improvement)

The 10 second improvement means the system can handle load spikes more effectively before performance degrades.
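
You can measure this in your own cluster by comparing a new pod's condition timestamps; the gap between PodScheduled and Ready brackets image pull plus JVM startup (the app=myapp label is a placeholder):

POD=$(kubectl get pods -l app=myapp -o jsonpath='{.items[0].metadata.name}')
kubectl get pod "$POD" -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.lastTransitionTime}{"\n"}{end}'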

7.2 Readiness Probe Configuration

Optimize readiness probes for AOT cached applications:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-aot
spec:
  template:
    spec:
      containers:
      - name: app
        readinessProbe:
          httpGet:
            path: /actuator/health/readiness
            port: 8080
          # Reduced delays due to faster startup
          initialDelaySeconds: 5  # vs 15 for standard JVM
          periodSeconds: 2
          failureThreshold: 3

        livenessProbe:
          httpGet:
            path: /actuator/health/liveness
            port: 8080
          initialDelaySeconds: 10  # vs 30 for standard JVM
          periodSeconds: 10

This allows Kubernetes to detect and route to new pods much faster, reducing the window of degraded service during scaling events.

7.3 Cost Implications

Faster scaling means better resource utilization:

Example scenario: Peak traffic requires 20 pods, baseline traffic needs 5 pods.

Standard JVM:

  • Scale out takes 35s, during which 5 pods handle peak load
  • Overprovisioning required: maintain 8-10 pods minimum to handle sudden spikes
  • Average pod count: 7-8 pods during off-peak

AOT Cache:

  • Scale out takes 25s, 10 second improvement
  • Can operate closer to baseline: 5-6 pods off-peak
  • Average pod count: 5-6 pods during off-peak

Monthly savings (assuming $0.05/pod/hour):

  • 2 fewer pods * 730 hours * $0.05 = $73/month
  • Extrapolated across 10 microservices: $730/month
  • Annual savings: $8,760

Beyond direct cost, faster scaling improves user experience and reduces the need for aggressive overprovisioning.

7.4 Serverless and Function Platforms

AOT cache enables JVM viability for serverless platforms:

AWS Lambda cold start comparison:

Standard JVM (Spring Boot):

Cold start: 8-12 seconds
Memory required: 512MB minimum
Timeout concerns: Need generous timeout values
Cost per invocation: High due to long init time

With AOT Cache:

Cold start: 2-4 seconds (67% improvement)
Memory required: 384MB sufficient
Timeout concerns: Standard timeouts acceptable
Cost per invocation: Reduced due to faster execution

This makes Java competitive with Go and Node.js for latency sensitive serverless workloads.

7.5 Cloud Native Density

Faster startup enables higher pod density and more aggressive bin packing:

Resource request optimization:

# Standard JVM resource requirements
resources:
  requests:
    cpu: 500m    # Need headroom for JIT warmup
    memory: 512Mi
  limits:
    cpu: 2000m   # Spike during initialization
    memory: 1Gi

# AOT cache resource requirements  
resources:
  requests:
    cpu: 250m    # Lower CPU needs at startup
    memory: 384Mi # Reduced memory footprint
  limits:
    cpu: 1000m   # Smaller spike
    memory: 768Mi

This allows 50-60% more pods per node, significantly improving cluster utilization and reducing infrastructure costs.

8. Compiler Options and Advanced Configuration

8.1 Essential JVM Flags

Complete set of recommended flags for AOT cache:

# AOT cache configuration, experimental AOT features, compressed oops and
# class pointers, memory and GC settings (these must match the training run),
# compilation tiers for optimal caching, and cache diagnostics
java \
  -XX:AOTCache=/path/to/cache.aot \
  -XX:AOTMode=load \
  -XX:+UnlockExperimentalVMOptions \
  -XX:+UseCompressedOops \
  -XX:+UseCompressedClassPointers \
  -Xms512m \
  -Xmx2g \
  -XX:+UseZGC \
  -XX:+ZGenerational \
  -XX:TieredStopAtLevel=4 \
  -XX:+PrintAOTCache \
  -jar myapp.jar

8.2 Cache Size Tuning

Control cache file size and content:

# Limit cache size
-XX:AOTCacheSize=200m

# Adjust method compilation threshold for caching
-XX:CompileThreshold=1000

# Include/exclude specific packages
-XX:AOTInclude=com.mycompany.*
-XX:AOTExclude=com.mycompany.experimental.*

8.3 Diagnostic and Monitoring Flags

Enable detailed cache analysis:

# Detailed cache loading information, cache access logging,
# and cache statistics printed on exit
java \
  -XX:+PrintAOTCache \
  -XX:+VerboseAOT \
  -XX:+LogAOTCacheAccess \
  -XX:+PrintAOTStatistics \
  -XX:AOTCache=app.aot \
  -XX:AOTMode=load \
  -jar myapp.jar

Example output:

AOT Cache loaded: /app/cache/app.aot (142MB)
Classes loaded from cache: 2,847
Methods pre-compiled: 14,235
Cache hit rate: 87.3%
Cache miss reasons:
  - Class modified: 245 (1.9%)
  - New classes: 89 (0.7%)
  - Optimization conflict: 12 (0.1%)

8.4 Profile Directed Optimization

Combine AOT cache with additional PGO data:

# First: Record profiling data
java -XX:AOTMode=record \
     -XX:AOTCache=base.aot \
     -XX:+UnlockDiagnosticVMOptions \
     -XX:+ProfileInterpreter \
     -XX:ProfileLogOut=profile.log \
     -jar myapp.jar

# Run training workload

# Second: Generate optimized cache using profile data  
java -XX:AOTMode=record \
     -XX:AOTCache=optimized.aot \
     -XX:ProfileLogIn=profile.log \
     -jar myapp.jar

# Production: Use optimized cache
java -XX:AOTMode=load \
     -XX:AOTCache=optimized.aot \
     -jar myapp.jar

8.5 Multi-Tier Caching Strategy

For complex applications, layer multiple cache levels:

# Generate JDK classes cache (shared across all apps)
java -Xshare:dump \
     -XX:SharedArchiveFile=jdk.jsa

# Generate framework cache (shared across Spring apps)
java -XX:ArchiveClassesAtExit=framework.jsa \
     -XX:SharedArchiveFile=jdk.jsa \
     -cp spring-boot.jar

# Generate application specific cache  
java -XX:AOTCache=app.aot \
     -XX:AOTMode=record \
     -XX:SharedArchiveFile=framework.jsa \
     -jar myapp.jar

# Production: Load all cache layers
java -XX:SharedArchiveFile=framework.jsa \
     -XX:AOTCache=app.aot \
     -XX:AOTMode=load \
     -jar myapp.jar

9. Practical Implementation Checklist

9.1 Prerequisites

Before implementing AOT cache:

  1. Java 25 Runtime: Verify Java 25 or later installed
  2. Build Tool Support: Maven 3.9+ or Gradle 8.5+
  3. Container Base Image: Use Java 25 base images
  4. Training Environment: Isolated environment for cache generation
  5. Storage: Plan for cache file storage (100-400MB per application)
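
A quick way to verify the first three (the Maven wrapper is a placeholder for your build tool of choice):

java -version        # expect 25 or later
./mvnw -v            # expect Maven 3.9+ (or gradle -v for Gradle 8.5+)
docker --version     # for the container base image build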

9.2 Implementation Steps

Step 1: Baseline Performance

# Measure current startup time (for a server app, watch for Spring Boot's
# "Started ... in N seconds" log line rather than waiting for the JVM to exit)
java -jar myapp.jar
# Record time to first request
curl -s -o /dev/null -w "@curl-format.txt" http://localhost:8080/health

Step 2: Create Training Workload

# Document all critical endpoints
# Create a comprehensive test script (see training-workload.sh in section 4.3)
# Ensure the script covers 80%+ of production code paths

Step 3: Add AOT Cache to Build

<!-- Add to pom.xml -->
<!-- Use exec:exec (not exec:java) so the -XX flags apply to a forked JVM -->
<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>exec-maven-plugin</artifactId>
    <executions>
        <execution>
            <id>generate-aot-cache</id>
            <phase>package</phase>
            <goals>
                <goal>exec</goal>
            </goals>
            <configuration>
                <executable>java</executable>
                <arguments>
                    <argument>-XX:AOTCache=${project.build.directory}/app.aot</argument>
                    <argument>-XX:AOTMode=record</argument>
                    <argument>-jar</argument>
                    <argument>${project.build.directory}/${project.build.finalName}.jar</argument>
                </arguments>
            </configuration>
        </execution>
    </executions>
</plugin>

Step 4: Update Container Image

FROM eclipse-temurin:25-jre-alpine
COPY target/myapp.jar /app/
COPY target/app.aot /app/cache/
ENV JAVA_TOOL_OPTIONS="-XX:AOTCache=/app/cache/app.aot -XX:AOTMode=load"
ENTRYPOINT ["java", "-jar", "/app/myapp.jar"]

Step 5: Test and Validate

# Build with cache
docker build -t myapp:aot .

# Measure startup improvement
time docker run myapp:aot

# Verify functional correctness
./integration-tests.sh

Step 6: Monitor in Production

// Add custom metrics
@Component
public class StartupMetrics implements ApplicationListener<ApplicationReadyEvent> {

    private final MeterRegistry metricsRegistry;

    public StartupMetrics(MeterRegistry metricsRegistry) {
        this.metricsRegistry = metricsRegistry;
    }

    @Override
    public void onApplicationEvent(ApplicationReadyEvent event) {
        // timeTaken is measured by Spring Boot from JVM start to application readiness
        long startupMillis = event.getTimeTaken().toMillis();
        metricsRegistry.gauge("app.startup.duration", startupMillis);
    }
}

10. Conclusion and Future Outlook

Java 25’s AOT cache represents a pragmatic middle ground between traditional JVM startup characteristics and the extreme optimizations of native compilation. For enterprise Spring applications, the 60-75% startup time improvement comes with minimal code changes and full compatibility with existing frameworks and libraries.

The technology is particularly valuable for:

  • Cloud native microservices requiring rapid scaling
  • Kubernetes deployments with frequent pod churn
  • Cost sensitive environments where resource efficiency matters
  • Applications that cannot adopt GraalVM native image due to dynamic requirements

As the Java ecosystem continues to evolve, AOT caching will likely become a standard optimization technique, much like how JIT compilation became ubiquitous. The relatively simple implementation path and significant performance gains make it accessible to most development teams.

Future enhancements to watch for include:

  • Improved cache portability across minor Java versions
  • Automatic training workload generation
  • Cloud provider managed cache distribution
  • Integration with service mesh for distributed cache management

For Spring developers specifically, the combination of Spring Boot 3.x native hints, AOT processing, and Java 25 cache support creates a powerful optimization stack that maintains the flexibility of the JVM while approaching native image performance for startup characteristics.

The path forward is clear: as containerization and cloud native architectures become universal, startup time optimization transitions from a nice-to-have feature to a fundamental requirement. Java 25’s AOT cache provides a production-ready capability that delivers on this requirement without the complexity overhead of alternative approaches.

MacBook: Set up Wireshark packet capture MCP for Anthropic Claude Desktop

If you’re like me, the idea of doing anything twice will make you break out in a cold shiver. For my Claude Desktop, I often need a network pcap (packet capture) to unpack something that I am doing. So the script below checks that Wireshark/tshark, Node.js and WireMCP are in place, sets up SSL key logging, and configures Claude Desktop to use the Wireshark MCP. I also got it to work with Zscaler (note, I just did a process grep – you could also check the utun interface or ports 9000/9400).

I also added example scripts to test that it’s working, and some prompts to help you test in Claude.

cat > ~/setup_wiremcp_simple.sh << 'EOF'
#!/bin/bash

# Simplified WireMCP Setup with Zscaler Support

echo ""
echo "============================================"
echo "   WireMCP Setup with Zscaler Support"
echo "============================================"
echo ""

# Detect Zscaler
echo "[INFO] Detecting Zscaler..."
ZSCALER_DETECTED=false
ZSCALER_INTERFACE=""

# Check for Zscaler process
if pgrep -f "Zscaler" >/dev/null 2>&1; then
    ZSCALER_DETECTED=true
    echo "[ZSCALER] ✓ Zscaler process is running"
fi

# Find Zscaler tunnel interface
UTUN_INTERFACES=$(ifconfig -l | grep -o 'utun[0-9]*')
for iface in $UTUN_INTERFACES; do
    IP=$(ifconfig "$iface" 2>/dev/null | grep "inet " | awk '{print $2}')
    if [[ "$IP" == 100.64.* ]]; then
        ZSCALER_INTERFACE="$iface"
        ZSCALER_DETECTED=true
        echo "[ZSCALER] ✓ Zscaler tunnel found: $iface (IP: $IP)"
        break
    fi
done

if [[ "$ZSCALER_DETECTED" == "true" ]]; then
    echo "[ZSCALER] ✓ Zscaler environment confirmed"
else
    echo "[INFO] No Zscaler detected - standard network"
fi

echo ""

# Check existing installations
echo "[INFO] Checking installed software..."

if command -v tshark >/dev/null 2>&1; then
    echo "[✓] Wireshark/tshark is installed"
else
    echo "[!] Wireshark not found - install with: brew install --cask wireshark"
fi

if command -v node >/dev/null 2>&1; then
    echo "[✓] Node.js is installed: $(node --version)"
else
    echo "[!] Node.js not found - install with: brew install node"
fi

if [[ -d "$HOME/WireMCP" ]]; then
    echo "[✓] WireMCP is installed at ~/WireMCP"
else
    echo "[!] WireMCP not found"
fi

echo ""

# Configure SSL decryption for Zscaler
if [[ "$ZSCALER_DETECTED" == "true" ]]; then
    echo "[INFO] Configuring SSL/TLS decryption..."
    
    SSL_KEYLOG="$HOME/.wireshark-sslkeys.log"
    touch "$SSL_KEYLOG"
    chmod 600 "$SSL_KEYLOG"
    
    if ! grep -q "SSLKEYLOGFILE" ~/.zshrc 2>/dev/null; then
        echo "" >> ~/.zshrc
        echo "# Wireshark SSL/TLS decryption for Zscaler" >> ~/.zshrc
        echo "export SSLKEYLOGFILE=\"$SSL_KEYLOG\"" >> ~/.zshrc
        echo "[✓] Added SSLKEYLOGFILE to ~/.zshrc"
    else
        echo "[✓] SSLKEYLOGFILE already in ~/.zshrc"
    fi
    
    echo "[✓] SSL key log file: $SSL_KEYLOG"
fi

echo ""

# Update WireMCP for Zscaler
if [[ -d "$HOME/WireMCP" ]]; then
    if [[ "$ZSCALER_DETECTED" == "true" ]]; then
        echo "[INFO] Creating Zscaler-aware wrapper..."
        
        cat > "$HOME/WireMCP/start_zscaler.sh" << 'WRAPPER'
#!/bin/bash
echo "=== WireMCP (Zscaler Mode) ==="

# Set SSL decryption
export SSLKEYLOGFILE="$HOME/.wireshark-sslkeys.log"

# Find Zscaler interface
UTUN_LIST=$(ifconfig -l | grep -o 'utun[0-9]*')
for iface in $UTUN_LIST; do
    IP=$(ifconfig "$iface" 2>/dev/null | grep "inet " | awk '{print $2}')
    if [[ "$IP" == 100.64.* ]]; then
        export CAPTURE_INTERFACE="$iface"
        echo "✓ Zscaler tunnel: $iface ($IP)"
        echo "✓ All proxied traffic flows through this interface"
        break
    fi
done

if [[ -z "$CAPTURE_INTERFACE" ]]; then
    export CAPTURE_INTERFACE="en0"
    echo "! Using default interface: en0"
fi

echo ""
echo "Configuration:"
echo "  SSL Key Log: $SSLKEYLOGFILE"
echo "  Capture Interface: $CAPTURE_INTERFACE"
echo ""
echo "To capture: sudo tshark -i $CAPTURE_INTERFACE -c 10"
echo "===============================\n"

cd "$(dirname "$0")"
node index.js
WRAPPER
        
        chmod +x "$HOME/WireMCP/start_zscaler.sh"
        echo "[✓] Created ~/WireMCP/start_zscaler.sh"
    fi
    
    # Create test script
    cat > "$HOME/WireMCP/test_zscaler.sh" << 'TEST'
#!/bin/bash

echo "=== Zscaler & WireMCP Test ==="
echo ""

# Check Zscaler process
if pgrep -f "Zscaler" >/dev/null; then
    echo "✓ Zscaler is running"
else
    echo "✗ Zscaler not running"
fi

# Find tunnel
UTUN_LIST=$(ifconfig -l | grep -o 'utun[0-9]*')
for iface in $UTUN_LIST; do
    IP=$(ifconfig "$iface" 2>/dev/null | grep "inet " | awk '{print $2}')
    if [[ "$IP" == 100.64.* ]]; then
        echo "✓ Zscaler tunnel: $iface ($IP)"
        FOUND=true
        break
    fi
done

[[ "$FOUND" != "true" ]] && echo "✗ No Zscaler tunnel found"

echo ""

# Check SSL keylog
if [[ -f "$HOME/.wireshark-sslkeys.log" ]]; then
    SIZE=$(wc -c < "$HOME/.wireshark-sslkeys.log")
    echo "✓ SSL key log exists ($SIZE bytes)"
else
    echo "✗ SSL key log not found"
fi

echo ""
echo "Network interfaces:"
tshark -D 2>/dev/null | head -5

echo ""
echo "To capture Zscaler traffic:"
echo "  sudo tshark -i ${iface:-en0} -c 10"
TEST
    
    chmod +x "$HOME/WireMCP/test_zscaler.sh"
    echo "[✓] Created ~/WireMCP/test_zscaler.sh"
fi

echo ""

# Configure Claude Desktop
CLAUDE_CONFIG="$HOME/Library/Application Support/Claude/claude_desktop_config.json"
if [[ -d "$(dirname "$CLAUDE_CONFIG")" ]]; then
    echo "[INFO] Configuring Claude Desktop..."
    
    # Backup existing
    if [[ -f "$CLAUDE_CONFIG" ]]; then
        BACKUP_FILE="${CLAUDE_CONFIG}.backup.$(date +%Y%m%d_%H%M%S)"
        cp "$CLAUDE_CONFIG" "$BACKUP_FILE"
        echo "[✓] Backup created: $BACKUP_FILE"
    fi
    
    # Check if jq is installed
    if ! command -v jq >/dev/null 2>&1; then
        echo "[INFO] Installing jq for JSON manipulation..."
        brew install jq
    fi
    
    # Create temp capture directory
    TEMP_CAPTURE_DIR="$HOME/.wiremcp/captures"
    mkdir -p "$TEMP_CAPTURE_DIR"
    echo "[✓] Capture directory: $TEMP_CAPTURE_DIR"
    
    # Prepare environment variables
    if [[ "$ZSCALER_DETECTED" == "true" ]]; then
        ENV_JSON=$(jq -n \
            --arg ssllog "$HOME/.wireshark-sslkeys.log" \
            --arg iface "${ZSCALER_INTERFACE:-en0}" \
            --arg capdir "$TEMP_CAPTURE_DIR" \
            '{"SSLKEYLOGFILE": $ssllog, "CAPTURE_INTERFACE": $iface, "ZSCALER_MODE": "true", "CAPTURE_DIR": $capdir}')
    else
        ENV_JSON=$(jq -n \
            --arg capdir "$TEMP_CAPTURE_DIR" \
            '{"CAPTURE_DIR": $capdir}')
    fi
    
    # Add or update wiremcp in config, preserving existing servers
    if [[ -f "$CLAUDE_CONFIG" ]] && [[ -s "$CLAUDE_CONFIG" ]]; then
        echo "[INFO] Merging WireMCP into existing config..."
        jq --arg home "$HOME" \
           --argjson env "$ENV_JSON" \
           '.mcpServers.wiremcp = {"command": "node", "args": [$home + "/WireMCP/index.js"], "env": $env}' \
           "$CLAUDE_CONFIG" > "${CLAUDE_CONFIG}.tmp" && mv "${CLAUDE_CONFIG}.tmp" "$CLAUDE_CONFIG"
    else
        echo "[INFO] Creating new Claude config..."
        jq -n --arg home "$HOME" \
              --argjson env "$ENV_JSON" \
              '{"mcpServers": {"wiremcp": {"command": "node", "args": [$home + "/WireMCP/index.js"], "env": $env}}}' \
              > "$CLAUDE_CONFIG"
    fi
    
    if [[ "$ZSCALER_DETECTED" == "true" ]]; then
        echo "[✓] Claude configured with Zscaler mode"
    else
        echo "[✓] Claude configured"
    fi
    echo "[✓] Existing MCP servers preserved"
fi

echo ""
echo "============================================"
echo "             Summary"
echo "============================================"
echo ""

if [[ "$ZSCALER_DETECTED" == "true" ]]; then
    echo "Zscaler Environment:"
    echo "  ✓ Detected and configured"
    [[ -n "$ZSCALER_INTERFACE" ]] && echo "  ✓ Tunnel interface: $ZSCALER_INTERFACE"
    echo "  ✓ SSL decryption ready"
    echo ""
    echo "Next steps:"
    echo "  1. Restart terminal: source ~/.zshrc"
    echo "  2. Restart browsers for HTTPS decryption"
else
    echo "Standard Network:"
    echo "  • No Zscaler detected"
    echo "  • Standard configuration applied"
fi

echo ""
echo "For Claude Desktop:"
echo "  1. Restart Claude Desktop app"
echo "  2. Ask Claude to analyze network traffic"
echo ""
echo "============================================"

exit 0
EOF
chmod +x ~/setup_wiremcp_simple.sh
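
Run it with ~/setup_wiremcp_simple.sh. Afterwards you can read back the entry it merged into the Claude config (a quick sanity check, assuming jq is installed; the values shown below are illustrative):

jq '.mcpServers.wiremcp' "$HOME/Library/Application Support/Claude/claude_desktop_config.json"

# Expected shape (your home directory and utun interface will differ):
# {
#   "command": "node",
#   "args": ["/Users/you/WireMCP/index.js"],
#   "env": {
#     "SSLKEYLOGFILE": "/Users/you/.wireshark-sslkeys.log",
#     "CAPTURE_INTERFACE": "utun4",
#     "ZSCALER_MODE": "true",
#     "CAPTURE_DIR": "/Users/you/.wiremcp/captures"
#   }
# }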

To test if the script worked:

cat > ~/test_wiremcp_claude.sh << 'TESTEOF'
#!/bin/bash

# WireMCP Claude Desktop Interactive Test Script

echo "╔════════════════════════════════════════════════════════╗"
echo "║     WireMCP + Claude Desktop Testing Tool             ║"
echo "╚════════════════════════════════════════════════════════╝"
echo ""

# Colors
GREEN='\033[0;32m'
BLUE='\033[0;34m'
YELLOW='\033[1;33m'
NC='\033[0m'

# Check prerequisites
echo -e "${BLUE}[1/4]${NC} Checking prerequisites..."

if ! command -v tshark >/dev/null 2>&1; then
    echo "   ✗ tshark not found"
    exit 1
fi

if [[ ! -d "$HOME/WireMCP" ]]; then
    echo "   ✗ WireMCP not found at ~/WireMCP"
    exit 1
fi

if [[ ! -f "$HOME/Library/Application Support/Claude/claude_desktop_config.json" ]]; then
    echo "   ⚠ Claude Desktop config not found"
fi

echo -e "   ${GREEN}✓${NC} All prerequisites met"
echo ""

# Detect Zscaler
echo -e "${BLUE}[2/4]${NC} Detecting network configuration..."

ZSCALER_IF=""
for iface in $(ifconfig -l | grep -o 'utun[0-9]*'); do
    IP=$(ifconfig "$iface" 2>/dev/null | grep "inet " | awk '{print $2}')
    if [[ "$IP" == 100.64.* ]]; then
        ZSCALER_IF="$iface"
        echo -e "   ${GREEN}✓${NC} Zscaler tunnel: $iface ($IP)"
        break
    fi
done

if [[ -z "$ZSCALER_IF" ]]; then
    echo "   ⚠ No Zscaler tunnel detected (will use en0)"
    ZSCALER_IF="en0"
fi

echo ""

# Generate test traffic
echo -e "${BLUE}[3/4]${NC} Generating test network traffic..."

# Background network requests
(curl -s https://api.github.com/zen > /dev/null 2>&1) &
(curl -s https://httpbin.org/get > /dev/null 2>&1) &
(curl -s https://www.google.com > /dev/null 2>&1) &
(ping -c 3 8.8.8.8 > /dev/null 2>&1) &

sleep 2
echo -e "   ${GREEN}✓${NC} Test traffic generated (GitHub, httpbin, Google, DNS)"
echo ""

# Show test prompts
echo -e "${BLUE}[4/4]${NC} Test prompts for Claude Desktop"
echo "════════════════════════════════════════════════════════"
echo ""

echo -e "${YELLOW}📋 Copy these prompts into Claude Desktop:${NC}"
echo ""

echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "TEST 1: Basic Connection Test"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
cat << 'EOF'
Can you see the WireMCP tools? List all available network analysis capabilities you have access to.
EOF
echo ""
echo "Expected: Claude should list 7 tools (capture_packets, get_summary_stats, etc.)"
echo ""

echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "TEST 2: Simple Packet Capture"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
cat << 'EOF'
Capture 20 network packets and show me a summary including:
- Source and destination IPs
- Protocols used
- Port numbers
- Any interesting patterns
EOF
echo ""
echo "Expected: Packets from $ZSCALER_IF with IPs in 100.64.x.x range"
echo ""

echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "TEST 3: Protocol Analysis"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
cat << 'EOF'
Capture 50 packets and show me:
1. Protocol breakdown (TCP, UDP, DNS, HTTP, TLS)
2. Which protocol is most common
3. Protocol hierarchy statistics
EOF
echo ""
echo "Expected: Protocol percentages and hierarchy tree"
echo ""

echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "TEST 4: Connection Analysis"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
cat << 'EOF'
Capture 100 packets and show me network conversations:
- Top 5 source/destination pairs
- Number of packets per conversation
- Bytes transferred
EOF
echo ""
echo "Expected: Conversation statistics with packet/byte counts"
echo ""

echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "TEST 5: Threat Detection"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
cat << 'EOF'
Capture traffic for 30 seconds and check all destination IPs against threat databases. Tell me if any malicious IPs are detected.
EOF
echo ""
echo "Expected: List of IPs and threat check results (should show 'No threats')"
echo ""

echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "TEST 6: HTTPS Decryption (Advanced)"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "⚠️  First: Restart your browser after running this:"
echo "    source ~/.zshrc && echo \$SSLKEYLOGFILE"
echo ""
cat << 'EOF'
Capture 30 packets while I browse some HTTPS websites. Can you see any HTTP hostnames or request URIs from the HTTPS traffic?
EOF
echo ""
echo "Expected: If SSL keys are logged, Claude sees decrypted HTTP data"
echo ""

echo "════════════════════════════════════════════════════════"
echo ""

echo -e "${YELLOW}🔧 Manual Verification Commands:${NC}"
echo ""
echo "  # Test manual capture:"
echo "  sudo tshark -i $ZSCALER_IF -c 10"
echo ""
echo "  # Check SSL keylog:"
echo "  ls -lh ~/.wireshark-sslkeys.log"
echo ""
echo "  # Test WireMCP server:"
echo "  cd ~/WireMCP && timeout 3 node index.js"
echo ""
echo "  # Check Claude config:"
echo "  cat \"\$HOME/Library/Application Support/Claude/claude_desktop_config.json\""
echo ""

echo "════════════════════════════════════════════════════════"
echo ""

echo -e "${GREEN}✅ Test setup complete!${NC}"
echo ""
echo "Next steps:"
echo "  1. Open Claude Desktop"
echo "  2. Copy/paste the test prompts above"
echo "  3. Verify Claude can access WireMCP tools"
echo "  4. Check ~/WIREMCP_TESTING_EXAMPLES.md for more examples"
echo ""

# Keep generating traffic in background
echo "Keeping test traffic active for 2 minutes..."
echo "(You can Ctrl+C to stop)"
echo ""

# Generate continuous light traffic
for i in {1..24}; do
    (curl -s https://httpbin.org/delay/1 > /dev/null 2>&1) &
    sleep 5
done

echo ""
echo "Traffic generation complete!"
echo ""

TESTEOF

chmod +x ~/test_wiremcp_claude.sh

Run it with ~/test_wiremcp_claude.sh. Once you have confirmed everything is working, the examples below give you a few tests to carry out.

# Try WireMCP Right Now! 🚀

## 🎯 3-Minute Quick Start

### Step 1: Restart Claude Desktop (30 seconds)
```bash
# Kill and restart Claude
killall Claude
sleep 2
open -a Claude
```

### Step 2: Create a script to Generate Some Traffic (30 seconds)

cat > ~/network_activity_loop.sh << 'EOF'
#!/bin/bash

# Script to generate network activity for 30 seconds
# Useful for testing network capture tools

echo "Starting network activity generation for 30 seconds..."
echo "Press Ctrl+C to stop early if needed"

# Record start time
start_time=$(date +%s)
end_time=$((start_time + 30))

# Counter for requests
request_count=0

# Loop for 30 seconds
while [ $(date +%s) -lt $end_time ]; do
    # Create network activity to capture
    echo -n "Request set #$((++request_count)) at $(date +%T): "
    
    # GitHub API call
    curl -s https://api.github.com/users/octocat > /dev/null 2>&1 &
    
    # HTTPBin JSON endpoint
    curl -s https://httpbin.org/json > /dev/null 2>&1 &
    
    # IP address check
    curl -s https://ifconfig.me > /dev/null 2>&1 &
    
    # Wait for background jobs to complete
    wait
    echo "completed"
    
    # Small delay to avoid overwhelming the servers
    sleep 0.5
done

echo ""
echo "Network activity generation completed!"
echo "Total request sets sent: $request_count"
echo "Duration: 30 seconds"
EOF

chmod +x ~/network_activity_loop.sh

# Call the script
~/network_activity_loop.sh
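
While the loop runs, you can also confirm in a second terminal that packets are visible on the capture interface (a manual spot check; substitute your own utun interface, or en0 if you are not behind Zscaler):

# Capture 20 packets and print the SNI of any TLS ClientHellos among them
sudo tshark -i utun4 -c 20 -Y "tls.handshake.type == 1" \
    -T fields -e ip.dst -e tls.handshake.extensions_server_name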

Time to play!

Now open Claude Desktop and we can run a few tests…

  1. Ask Claude:

Can you see the WireMCP tools? List all available network analysis capabilities.

Claude should list 7 tools:
– capture_packets
– get_summary_stats
– get_conversations
– check_threats
– check_ip_threats
– analyze_pcap
– extract_credentials

2. Ask Claude:

Capture 20 network packets and tell me:
– What IPs am I talking to?
– What protocols are being used?
– Anything interesting?

3. In terminal run:

```bash
curl -v https://api.github.com/users/octocat
```

Ask Claude:

I just called api.github.com. Can you capture my network traffic
for 10 seconds and tell me:
1. What IP did GitHub resolve to?
2. How long did the connection take?
3. Were there any errors?

4. Ask Claude:

Monitor my network for 30 seconds and show me:
– Top 5 destinations by packet count
– What services/companies am I connecting to?
– Any unexpected connections?

5. Developer Debugging Examples – Debug Slow API. Ask Claude:

I’m calling myapi.company.com and it feels slow.
Capture traffic for 30 seconds while I make a request and tell me:
– Where is the latency coming from?
– DNS, TCP handshake, TLS, or server response?
– Any retransmissions?

6. Developer Debugging Examples – Debug Connection Timeout. Ask Claude:

I’m getting timeouts to db.example.com:5432.
Capture for 30 seconds and tell me:
1. Is DNS resolving?
2. Are SYN packets being sent?
3. Do I get SYN-ACK back?
4. Any firewall blocking?

7. TLS Handshake failures (often happen with zero trust networks and cert pinning). Ask Claude:

Monitor my network for 2 mins and look for abnormal TLS handshakes, in particular short-lived TLS handshakes, which can occur due to cert pinning issues.

8. Check for Threats. Ask Claude:

Monitor my network for 60 seconds and check all destination
IPs against threat databases. Tell me if anything suspicious.

9. Monitor Background Apps. Ask Claude:

Capture traffic for 30 seconds while I’m idle.
What apps are calling home without me knowing? Only get conversation statistics to show the key connections and the amount of traffic through each. Show any failed traffic or unusual traffic patterns

10. VPN Testing. Ask Claude:

Capture packets for 60 seconds, during which time I will enable my VPN. Compare the difference and see if you can see exactly when my VPN was enabled.

11. Audit traffic. Ask Claude:

Monitor for 5 minutes and tell me:
– Which service used most bandwidth?
– Any large file transfers?
– Unexpected data usage?

12. Looking for specific protocols. Ask Claude:

Monitor my traffic for 30 seconds and see if you can spot any traffic using QUIC and give me statistics on it.

(then go open a YouTube page)

13. DNS Queries. Ask Claude:

As a network troubleshooter, analyze all DNS queries for 30 seconds and provide potential causes for any errors. Show me detailed metrics on any calls, especially failed calls or unusual DNS patterns (like NXDOMAIN, PTR or TXT calls)

14. Certificate Issues. Ask Claude:

Capture TLS handshakes for the next minute and show me the certificate chain. Look out for failed/short-lived TLS sessions.

What Makes This Powerful?

The traditional way used to be:

```bash
sudo tcpdump -i utun5 -w capture.pcap
# Wait…
# Stop capture
# Open Wireshark
# Apply filters
# Analyze packets manually
# Figure out what it means
```
Time: 10-30 minutes!

With WireMCP + Claude:


“Capture my network traffic and tell me
what’s happening in plain English”

Time: 30 seconds

Claude automatically:
– Captures on correct interface (utun5)
– Filters relevant packets
– Analyzes protocols
– Identifies issues
– Explains in human language
– Provides recommendations
