Migrating a Raspberry Pi 5 WordPress Stack from SD Card to NVMe
I run a full WordPress stack on a Raspberry Pi 5 sitting on my desk: MariaDB, PHP-FPM, Nginx, and Redis, all inside Docker containers, served publicly through a Cloudflare tunnel. The setup is documented on GitHub.
After fitting an NVMe SSD via the Pi 5 HAT+, the obvious next step was getting Docker off the SD card entirely. The SD card was always the weak link: slow random I/O for InnoDB, limited write endurance, and a growing FastCGI cache that was steadily grinding down the card’s write cycle budget. Moving Docker’s data-root to the NVMe in a single step relocates every volume automatically, with no changes to docker-compose.yml required.
This post documents exactly what the migration script does, the kernel tuning that comes with it, and the before and after benchmark numbers from a 512 GB WD Black SN770M.
1. Why bother: the numbers
The raw I/O gap between a microSD card and a PCIe NVMe on the Pi 5 is not subtle. A typical Class 10 or A2-rated card delivers roughly the following figures, compared here against the SN770M mounted on the Pi 5 HAT+:
| Operation | SD card | NVMe (Pi 5 HAT+) | Gain |
|---|---|---|---|
| Sequential read | 40-90 MB/s | 800-900 MB/s | ~10x |
| Sequential write | 20-40 MB/s | 700-800 MB/s | ~20x |
| Random 4K read | 4-8 MB/s | 300-500 MB/s | ~50x |
| Random 4K write | 2-5 MB/s | 200-400 MB/s | ~60x |
The column that matters most for a WordPress database is random 4K write. MariaDB’s InnoDB engine generates random I/O constantly: every buffer pool miss, every redo-log flush, and every wp_options autoload query turns into a small random read or write. On an SD card these queue up and cause latency spikes that compound under any real traffic; on NVMe they complete in microseconds.
Endurance matters as much as speed. SD cards are rated at roughly 10,000 to 100,000 write cycles per cell for budget-tier cards, and a WordPress site with a non-trivial Redis miss rate and a 10 GB Nginx FastCGI cache can exhaust that budget in months. NVMe drives are rated in TBW (terabytes written), which is orders of magnitude more forgiving for this workload.
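To make the "months" claim concrete, here is a back-of-envelope endurance estimate. Every figure in it is an assumption chosen for illustration (capacity, P/E cycles, write amplification, and daily churn all vary widely by card and workload), not a measurement of any specific card:

```shell
# Back-of-envelope SD endurance estimate. All figures are assumptions
# for illustration, not measurements of any specific card.
CARD_GB=64          # assumed card capacity
PE_CYCLES=1000      # assumed P/E cycles for a budget TLC card
WAF=10              # assumed write amplification for small random writes
DAILY_GB=25         # assumed daily churn: InnoDB flushes + cache rebuilds

RAW_TB=$(( CARD_GB * PE_CYCLES / 1024 ))     # ideal endurance, no amplification
EFFECTIVE_TB=$(( RAW_TB / WAF ))             # endurance after amplification
DAYS=$(( EFFECTIVE_TB * 1024 / DAILY_GB ))   # days until the budget is spent

echo "raw ~${RAW_TB} TB, effective ~${EFFECTIVE_TB} TB, ~${DAYS} days at ${DAILY_GB} GB/day"
```

Under these assumptions the card is spent in well under a year, whereas a typical NVMe TBW rating of several hundred terabytes would absorb the same workload for decades.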
Measured results on my Pi 5 (512 GB WD Black SN770M)
| fio test | SD card | NVMe | Improvement |
|---|---|---|---|
| Sequential read (128K blocks) | ~85 MB/s | ~820 MB/s | ~10x |
| Random 4K write (QD1) | ~3.2 MB/s | ~180 MB/s | ~56x |
QD1 random 4K write is the most representative test for MariaDB single-threaded writes, which is the dominant pattern on a low-concurrency WordPress instance. In practice, WordPress page loads dropped from roughly 120 ms median (cold cache) to roughly 45 ms on the same hardware. With the Nginx FastCGI cache warm the difference is invisible to visitors, but cache build time after a restart drops dramatically.
Benchmark script
Run these on the Pi to reproduce the numbers above, installing fio first with sudo apt install fio. The script is designed to be run twice: once before migration (pointing at the SD-backed Docker root) and once after (pointing at /mnt/nvme), giving a direct comparison on the same hardware.
cat > /home/pi/benchmark-nvme.sh <<'EOF'
#!/usr/bin/env bash
# benchmark-nvme.sh -- fio I/O benchmark for SD vs NVMe comparison
#
# Usage:
# bash benchmark-nvme.sh # tests current Docker root
# TEST_DIR=/mnt/nvme bash benchmark-nvme.sh # tests NVMe directly
set -euo pipefail
TEST_DIR="${TEST_DIR:-/var/lib/docker}"
RUNTIME=30
SIZE="512M"
echo ""
echo "Pi 5 NVMe benchmark -- $(date '+%Y-%m-%d %H:%M')"
echo ""
echo " Test directory : ${TEST_DIR}"
echo " Device : $(df -h "${TEST_DIR}" | awk 'NR==2{print $1}')"
echo " Free space : $(df -h "${TEST_DIR}" | awk 'NR==2{print $4}')"
echo ""
run_fio() {
local name="$1" rw="$2" bs="$3" iodepth="$4"
echo -n " ${name}... "
fio --name="${name}" \
--directory="${TEST_DIR}" \
--size="${SIZE}" \
--rw="${rw}" \
--bs="${bs}" \
--iodepth="${iodepth}" \
--numjobs=1 \
--runtime="${RUNTIME}" \
--time_based \
--ioengine=libaio \
--direct=1 \
--group_reporting \
--output-format=terse \
--terse-version=3 2>/dev/null \
| awk -F';' '
/^3;/ {
bw_read = $7 / 1024;
bw_write = $48 / 1024;
iops_read = $8;
iops_write = $49;
if (bw_read > 0) printf "read %6.0f MB/s IOPS %6.0f\n", bw_read, iops_read;
if (bw_write > 0) printf "write %6.0f MB/s IOPS %6.0f\n", bw_write, iops_write;
}'
rm -f "${TEST_DIR}/${name}".*
}
echo " Sequential"
run_fio "seq-read" "read" "128k" 8
run_fio "seq-write" "write" "128k" 8
echo ""
echo " Random 4K (QD1 -- MariaDB single-threaded pattern)"
run_fio "rand4k-read-qd1" "randread" "4k" 1
run_fio "rand4k-write-qd1" "randwrite" "4k" 1
echo ""
echo " Random 4K (QD32 -- parallel I/O)"
run_fio "rand4k-read-qd32" "randread" "4k" 32
run_fio "rand4k-write-qd32" "randwrite" "4k" 32
echo ""
EOF
chmod +x /home/pi/benchmark-nvme.sh
2. What gets moved
The Pi continues to boot from the SD card, and that is deliberately left as-is. The boot partition is read-only after startup, so there is nothing to gain from moving it and something to lose in complexity. What we move is Docker’s data-root, which defaults to /var/lib/docker and contains every volume the stack depends on:
| Docker volume | Contents | I/O character |
|---|---|---|
| db_data | MariaDB InnoDB files | Most I/O-intensive: random 4K reads and writes continuously |
| wp_data | WordPress core and wp-content | PHP file reads on every uncached request |
| nginx_cache | FastCGI page cache (up to 10 GB) | High-churn sequential writes during cache build; reads on hits |
| rclone_config | OAuth tokens | Tiny; read once at startup |
By redirecting Docker’s data-root in /etc/docker/daemon.json, all four volumes follow automatically. There are no docker-compose.yml changes, no volume rebinding, and no data loss risk from path mismatches because the entire overlay2 filesystem simply lives in a different place on disk.
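The daemon.json change is a merge, not an overwrite, so existing settings such as log rotation survive. The mechanism can be sketched in isolation; the temp file and data-root path below are stand-ins for the real /etc/docker/daemon.json:

```shell
# Sketch of the daemon.json update: merge data-root into the existing
# config rather than overwriting it. Uses a temp file, not /etc/docker.
CFG=$(mktemp)
echo '{"log-driver": "json-file"}' > "$CFG"

python3 - "$CFG" <<'PY'
import json, sys

path = sys.argv[1]
with open(path) as f:
    cfg = json.load(f)
cfg['data-root'] = '/mnt/nvme/docker'   # the only key we change
with open(path, 'w') as f:
    json.dump(cfg, f, indent=2)
    f.write('\n')
PY

cat "$CFG"   # both the original key and the new data-root are present
```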
After migration, what remains on the SD card is essentially read-only: /boot/firmware (kernel, device tree, config.txt), the OS root filesystem under /etc and /usr, and /var/log which is already absorbed into RAM by log2ram and only synced to SD on a clean shutdown.
3. Prerequisites
Before running the script, confirm the following are in place on the Pi:
- Raspberry Pi 5 with an NVMe SSD installed via HAT+ or NVMe Base
- Pi OS 64-bit Bookworm running on the SD card
- Docker stack already deployed and healthy (verify with docker compose ps)
- The NVMe drive is blank or acceptable to erase, since the script will partition and format it
- Passwordless sudo configured for the pi user
Check the drive is visible before starting:
lsblk
# Expect: nvme0n1 listed with your drive size
If only nvme1n1 appears, override the device by prefixing the script invocation: NVME_DEVICE=/dev/nvme1n1 bash setup-nvme.sh.
4. The full migration script
The entire migration is handled by a single idempotent script. Every step checks whether it has already run and skips if so, making it safe to re-run after a partial failure or interruption. On a system where everything is already configured, the whole script completes in seconds with each step reporting as skipped.
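The idempotency pattern throughout the script is the same: probe the actual system state (a mounted filesystem, an fstab entry, Docker's reported root) before acting, rather than relying on marker files. A minimal sketch, with directory creation standing in for a real step:

```shell
# Minimal idempotent-step sketch: probe real state, act only if needed.
ensure_dir() {
  if [ -d "$1" ]; then
    echo "skip: $1 already exists"
  else
    mkdir -p "$1"
    echo "created: $1"
  fi
}

TARGET="$(mktemp -d)/data"
ensure_dir "$TARGET"   # first run: creates the directory
ensure_dir "$TARGET"   # second run: reports skip, changes nothing
```

Because every check inspects the live system, a re-run after a crash resumes exactly where the previous run left off.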
cat > /home/pi/setup-nvme.sh <<'SCRIPT'
#!/usr/bin/env bash
# setup-nvme.sh -- Mount NVMe SSD and migrate Docker data-root to it
#
# Steps:
# 1. Detect the NVMe device
# 2. Partition and format (ext4, GPT, single partition)
# 3. Mount at /mnt/nvme with NVMe-optimised fstab entry
# 4. I/O scheduler udev rule (set to 'none')
# 5. Sysctl tuning (dirty page ratios for NVMe)
# 6. Migrate Docker data-root (stop, rsync, verify, clean up)
# 7. Summary
#
# Override device:
# NVME_DEVICE=/dev/nvme1n1 bash setup-nvme.sh
set -euo pipefail
REPO_ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
cd "${REPO_ROOT}"
NVME_DEVICE="${NVME_DEVICE:-/dev/nvme0n1}"
NVME_PARTITION="${NVME_DEVICE}p1"
NVME_MOUNT="/mnt/nvme"
DOCKER_NEW_ROOT="${NVME_MOUNT}/docker"
DOCKER_OLD_ROOT="/var/lib/docker"
DOCKER_BACKUP="${DOCKER_OLD_ROOT}.bak-sd"
log() { echo ""; echo "[$(date '+%H:%M:%S')] $*"; }
ok() { echo " ok $*"; }
skip() { echo " -> $* (already done, skipping)"; }
fail() { echo " FAIL: $*" >&2; exit 1; }
info() { echo " .. $*"; }
echo ""
echo "andrewbakerninja-pi -- NVMe setup and Docker migration"
echo ""
if [[ "$(uname -m)" != "aarch64" ]]; then
echo "WARNING: Not running on ARM64. Detected: $(uname -m)"
read -r -p "Continue anyway? [y/N] " _c
[[ "${_c,,}" == "y" ]] || exit 0
fi
# ----------------------------------------------------------------
# STEP 1 -- Detect NVMe device
# ----------------------------------------------------------------
log "Step 1: Detect NVMe device..."
if [[ ! -b "${NVME_DEVICE}" ]]; then
for candidate in /dev/nvme0n1 /dev/nvme1n1 /dev/nvme0n2; do
if [[ -b "${candidate}" ]]; then
NVME_DEVICE="${candidate}"
NVME_PARTITION="${NVME_DEVICE}p1"
break
fi
done
fi
[[ -b "${NVME_DEVICE}" ]] || \
fail "No NVMe device found. Run 'lsblk'. Override: NVME_DEVICE=/dev/nvmeXnY bash $0"
NVME_BLK="$(basename "${NVME_DEVICE}")"
NVME_MODEL="$(cat /sys/block/${NVME_BLK}/device/model 2>/dev/null | xargs || echo 'unknown')"
NVME_SIZE="$(lsblk -dn -o SIZE "${NVME_DEVICE}" 2>/dev/null || echo 'unknown')"
info "Device : ${NVME_DEVICE}"
info "Model : ${NVME_MODEL}"
info "Size : ${NVME_SIZE}"
if mount | grep -q "${NVME_DEVICE}.*on / "; then
echo ""
echo " WARNING: ${NVME_DEVICE} appears to be the root filesystem."
echo " Run 'docker info | grep Root' -- if it shows /mnt/nvme/docker, nothing to do."
read -r -p " Continue anyway? [y/N] " _c
[[ "${_c,,}" == "y" ]] || exit 0
fi
# ----------------------------------------------------------------
# STEP 2 -- Partition and format (idempotent)
# ----------------------------------------------------------------
log "Step 2: Partition and format NVMe..."
if [[ -b "${NVME_PARTITION}" ]]; then
FS_TYPE="$(blkid -o value -s TYPE "${NVME_PARTITION}" 2>/dev/null || echo "")"
if [[ "${FS_TYPE}" == "ext4" ]]; then
skip "${NVME_PARTITION} is already ext4"
else
fail "${NVME_PARTITION} exists but is '${FS_TYPE:-unknown}'. Inspect with 'lsblk -f'."
fi
else
echo ""
echo " WARNING: ${NVME_DEVICE} has no partitions. ALL DATA WILL BE ERASED."
read -r -p " Confirm erase ${NVME_DEVICE}? Type 'yes' to proceed: " _confirm
[[ "${_confirm}" == "yes" ]] || { echo "Aborted."; exit 0; }
info "Creating GPT partition table..."
sudo parted -s "${NVME_DEVICE}" mklabel gpt
sudo parted -s "${NVME_DEVICE}" mkpart primary ext4 0% 100%
sudo partprobe "${NVME_DEVICE}"
sleep 2
info "Formatting as ext4 (label: nvme-data)..."
sudo mkfs.ext4 -L nvme-data -q "${NVME_PARTITION}"
ok "${NVME_PARTITION} formatted as ext4"
fi
# ----------------------------------------------------------------
# STEP 3 -- Mount at /mnt/nvme
# ----------------------------------------------------------------
log "Step 3: Mount NVMe at ${NVME_MOUNT}..."
sudo mkdir -p "${NVME_MOUNT}"
NVME_UUID="$(blkid -o value -s UUID "${NVME_PARTITION}")"
[[ -n "${NVME_UUID}" ]] || fail "Could not read UUID from ${NVME_PARTITION}"
info "UUID: ${NVME_UUID}"
if mountpoint -q "${NVME_MOUNT}"; then
skip "${NVME_MOUNT} already mounted"
else
sudo mount -o noatime "${NVME_PARTITION}" "${NVME_MOUNT}"
ok "Mounted ${NVME_PARTITION} -> ${NVME_MOUNT}"
fi
if grep -q "${NVME_UUID}" /etc/fstab; then
skip "fstab entry already present"
else
{
echo ""
echo "# NVMe data drive -- added by setup-nvme.sh"
echo "UUID=${NVME_UUID} ${NVME_MOUNT} ext4 defaults,noatime,commit=60 0 2"
} | sudo tee -a /etc/fstab > /dev/null
ok "fstab: UUID=${NVME_UUID} -> ${NVME_MOUNT} (noatime, commit=60)"
fi
info "NVMe free space: $(df -h "${NVME_MOUNT}" | awk 'NR==2{print $4}')"
# ----------------------------------------------------------------
# STEP 4 -- I/O scheduler: 'none' for NVMe
# ----------------------------------------------------------------
log "Step 4: NVMe I/O scheduler (udev rule)..."
UDEV_RULE_FILE="/etc/udev/rules.d/60-nvme-scheduler.rules"
UDEV_RULE='ACTION=="add|change", SUBSYSTEM=="block", KERNEL=="nvme[0-9]*n[0-9]*", ATTR{queue/scheduler}="none"'
if [[ -f "${UDEV_RULE_FILE}" ]] && grep -q 'scheduler.*none' "${UDEV_RULE_FILE}" 2>/dev/null; then
skip "NVMe udev scheduler rule already in place"
else
echo "${UDEV_RULE}" | sudo tee "${UDEV_RULE_FILE}" > /dev/null
sudo udevadm control --reload-rules
sudo udevadm trigger --subsystem-match=block 2>/dev/null || true
ok "udev rule written: ${UDEV_RULE_FILE}"
fi
SCHEDULER_PATH="/sys/block/${NVME_BLK}/queue/scheduler"
if [[ -f "${SCHEDULER_PATH}" ]]; then
echo "none" | sudo tee "${SCHEDULER_PATH}" > /dev/null 2>&1 || true
CURRENT="$(grep -o '\[.*\]' "${SCHEDULER_PATH}" 2>/dev/null | tr -d '[]' || echo '?')"
info "Current scheduler: ${CURRENT}"
fi
# ----------------------------------------------------------------
# STEP 5 -- Sysctl tuning for NVMe
# ----------------------------------------------------------------
log "Step 5: NVMe sysctl tuning..."
NVME_SYSCTL="/etc/sysctl.d/99-pi-nvme.conf"
if [[ -f "${NVME_SYSCTL}" ]]; then
skip "${NVME_SYSCTL} already present"
else
sudo tee "${NVME_SYSCTL}" > /dev/null <<'EOF'
# andrewbakerninja-pi -- NVMe I/O sysctl tuning
# Allow more dirty pages before blocking writes (NVMe flushes them quickly)
vm.dirty_ratio = 20
vm.dirty_background_ratio = 5
# Flush dirty pages every 15s (default 5s; NVMe handles bursts efficiently)
vm.dirty_writeback_centisecs = 1500
vm.dirty_expire_centisecs = 3000
EOF
sudo sysctl -p "${NVME_SYSCTL}" > /dev/null
ok "${NVME_SYSCTL} written and applied"
fi
# ----------------------------------------------------------------
# STEP 6 -- Migrate Docker data-root to NVMe
# ----------------------------------------------------------------
log "Step 6: Migrate Docker data-root to NVMe..."
CURRENT_ROOT="$(docker info --format '{{.DockerRootDir}}' 2>/dev/null || echo "")"
if [[ "${CURRENT_ROOT}" == "${DOCKER_NEW_ROOT}" ]]; then
skip "Docker already using ${DOCKER_NEW_ROOT}"
else
echo ""
echo " Docker data will be migrated:"
printf " From: %s (SD card)\n" "${DOCKER_OLD_ROOT}"
printf " To: %s (NVMe)\n" "${DOCKER_NEW_ROOT}"
echo ""
read -r -p " Proceed? [y/N] " _m
[[ "${_m,,}" == "y" ]] || { echo "Migration skipped."; exit 0; }
# 6a. Stop the WordPress stack cleanly
log " 6a. Stopping WordPress stack..."
docker compose down --timeout 30 2>&1 | sed 's/^/ /' || true
ok "Stack stopped"
# 6b. Stop the Docker daemon so no files are modified during copy
log " 6b. Stopping Docker daemon..."
sudo systemctl stop docker
sudo systemctl stop docker.socket 2>/dev/null || true
ok "Docker daemon stopped"
# 6c. Create destination directory
sudo mkdir -p "${DOCKER_NEW_ROOT}"
# 6d. Copy all Docker data to NVMe
# -a preserves permissions, links, and timestamps
# -H preserves hard links (critical for Docker overlay2 layer sharing)
# -A preserves ACLs, -X preserves extended attributes
log " 6d. Copying Docker data to NVMe..."
info "Source : ${DOCKER_OLD_ROOT}"
info "Dest : ${DOCKER_NEW_ROOT}"
sudo rsync -aHAX --info=progress2 \
"${DOCKER_OLD_ROOT}/" "${DOCKER_NEW_ROOT}/"
SRC_SIZE="$(sudo du -sh "${DOCKER_OLD_ROOT}" 2>/dev/null | cut -f1)"
DST_SIZE="$(sudo du -sh "${DOCKER_NEW_ROOT}" 2>/dev/null | cut -f1)"
ok "Copy complete -- source: ${SRC_SIZE} destination: ${DST_SIZE}"
# 6e. Update daemon.json with the new data-root path
# Merges into existing JSON to preserve log rotation settings
log " 6e. Updating /etc/docker/daemon.json..."
if [[ -f /etc/docker/daemon.json ]]; then
sudo python3 - <<PYEOF
import json
path = '/etc/docker/daemon.json'
with open(path) as f:
cfg = json.load(f)
cfg['data-root'] = '${DOCKER_NEW_ROOT}'
with open(path, 'w') as f:
json.dump(cfg, f, indent=2)
f.write('\n')
PYEOF
else
sudo tee /etc/docker/daemon.json > /dev/null <<EOF
{
"log-driver": "json-file",
"log-opts": {
"max-size": "10m",
"max-file": "3"
},
"data-root": "${DOCKER_NEW_ROOT}"
}
EOF
fi
ok "daemon.json updated -- data-root: ${DOCKER_NEW_ROOT}"
# 6f. Rename the old SD directory as a safety net (not yet deleted)
log " 6f. Renaming SD card Docker dir to .bak-sd..."
sudo mv "${DOCKER_OLD_ROOT}" "${DOCKER_BACKUP}"
ok "${DOCKER_BACKUP} created"
# 6g. Restart Docker from the new NVMe path
log " 6g. Starting Docker daemon (NVMe path)..."
sudo systemctl start docker
sleep 5
# Verify Docker reports the correct root before proceeding
NEW_ROOT="$(docker info --format '{{.DockerRootDir}}' 2>/dev/null || echo "unknown")"
if [[ "${NEW_ROOT}" != "${DOCKER_NEW_ROOT}" ]]; then
echo " FAIL: Docker root mismatch, rolling back..."
sudo systemctl stop docker
sudo mv "${DOCKER_BACKUP}" "${DOCKER_OLD_ROOT}"
sudo python3 - <<PYEOF
import json
path = '/etc/docker/daemon.json'
with open(path) as f:
cfg = json.load(f)
cfg.pop('data-root', None)
with open(path, 'w') as f:
json.dump(cfg, f, indent=2)
f.write('\n')
PYEOF
sudo systemctl start docker
fail "Migration failed and was rolled back. Check: sudo journalctl -u docker --since today"
fi
ok "Docker root confirmed: ${NEW_ROOT}"
# 6h. Bring up the stack and wait for a healthy response
log " 6h. Starting WordPress stack..."
docker compose up -d
info "Waiting for Nginx health check (up to 120s)..."
MAX_WAIT=120; ELAPSED=0; HEALTHY=false
until docker compose exec -T nginx wget -qO- http://localhost/nginx-health >/dev/null 2>&1; do
if [[ ${ELAPSED} -ge ${MAX_WAIT} ]]; then
echo " WARNING: stack not healthy within ${MAX_WAIT}s. Check: docker compose logs"
break
fi
printf " Waiting... %ds\r" "${ELAPSED}"
sleep 5; ELAPSED=$((ELAPSED + 5))
done
if docker compose exec -T nginx wget -qO- http://localhost/nginx-health >/dev/null 2>&1; then
HEALTHY=true
ok "Stack is healthy"
fi
# 6i. Remove SD card backup only after the user confirms the stack is healthy
echo ""
if [[ "${HEALTHY}" == "true" ]]; then
echo " Stack verified healthy on NVMe."
echo " SD backup: ${DOCKER_BACKUP} ($(sudo du -sh "${DOCKER_BACKUP}" 2>/dev/null | cut -f1))"
echo ""
read -r -p " Delete the SD card backup now? [y/N] " _del
if [[ "${_del,,}" == "y" ]]; then
sudo rm -rf "${DOCKER_BACKUP}"
ok "SD card backup removed"
else
echo " Kept. Remove manually when satisfied:"
echo " sudo rm -rf ${DOCKER_BACKUP}"
fi
else
echo " SD backup retained at: ${DOCKER_BACKUP}"
echo " Remove manually once confirmed healthy:"
echo " sudo rm -rf ${DOCKER_BACKUP}"
fi
fi
# ----------------------------------------------------------------
# STEP 7 -- Summary
# ----------------------------------------------------------------
log "NVMe setup complete!"
DOCKER_ROOT="$(docker info --format '{{.DockerRootDir}}' 2>/dev/null || echo 'unknown')"
SCHED="$(grep -o '\[.*\]' "/sys/block/${NVME_BLK}/queue/scheduler" 2>/dev/null | tr -d '[]' || echo '?')"
echo ""
echo "NVMe status"
echo ""
df -h "${NVME_MOUNT}" | awk 'NR==2 {
printf " Mount : %s\n Size : %s total | %s used | %s free\n", $6, $2, $3, $4
}'
echo " Docker : ${DOCKER_ROOT}"
echo " Scheduler : ${SCHED} (optimal for NVMe)"
echo ""
echo " Container status:"
docker compose ps --format "table {{.Name}}\t{{.Status}}" 2>/dev/null | sed 's/^/ /'
echo ""
echo " Verify site:"
echo " curl -sk -o /dev/null -w '%{http_code}' https://andrewbaker.ninja/"
echo ""
if [[ -d "${DOCKER_BACKUP}" ]]; then
echo " WARNING: SD backup still present: ${DOCKER_BACKUP}"
echo " Remove when confident: sudo rm -rf ${DOCKER_BACKUP}"
echo ""
fi
SCRIPT
chmod +x /home/pi/setup-nvme.sh
5. Step-by-step walkthrough
Step 1: Device detection
The script scans for /dev/nvme0n1 and falls back through nvme1n1 and nvme0n2 automatically, reading the model name from /sys/block/nvme0n1/device/model so you can confirm the correct drive before anything is erased. If the detected device turns out to be the current root filesystem (meaning the Pi has already been configured to boot from NVMe), the script prompts before continuing, since migration may not be necessary and pointing Docker at an already-active partition would create a conflict.
Step 2: Partition and format
A single GPT partition covering the full drive, formatted ext4 with the label nvme-data. The script demands the literal string yes as confirmation before erasing, not just y, as a safeguard against an accidental keypress. If the partition already exists as ext4, this step skips entirely. If it exists with an unexpected filesystem type, the script halts and asks you to inspect manually rather than silently overwriting something that may contain data.
Step 3: Mount with fstab persistence
Mounted at /mnt/nvme with two important options. The noatime flag suppresses access-time updates on reads, which is a meaningful write reduction for a busy cache volume. The commit=60 option extends the ext4 journal commit interval from the default 5 seconds to 60, trading up to a minute of buffered writes on sudden power loss for fewer journal flushes. The fstab entry uses the partition UUID rather than the device path, which survives device enumeration changes across reboots, since NVMe device naming can shift depending on the order in which the kernel initialises controllers.
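The UUID-to-fstab translation can be seen in isolation. The blkid output line below is a made-up example; on the Pi the script reads the real value with blkid -o value -s UUID:

```shell
# Build the fstab entry from blkid-style output. SAMPLE is hypothetical,
# standing in for: blkid /dev/nvme0n1p1
SAMPLE='/dev/nvme0n1p1: LABEL="nvme-data" UUID="3f1c9a2e-0000-4b6e-9d1a-example" TYPE="ext4"'

# Extract the quoted UUID value, then emit the persistent mount line.
UUID=$(echo "$SAMPLE" | grep -o 'UUID="[^"]*"' | cut -d'"' -f2)
echo "UUID=${UUID} /mnt/nvme ext4 defaults,noatime,commit=60 0 2"
```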
Step 4: I/O scheduler
Linux’s block I/O schedulers (mq-deadline, bfq, kyber) exist to reorder and merge requests for devices that benefit from sequential access patterns, primarily spinning disks and slow NAND flash. NVMe drives have deep hardware submission queues and handle random I/O natively, so running a software scheduler on top adds latency without any compensating benefit. Setting the scheduler to none passes I/O directly to the NVMe driver. A udev rule persists this setting across reboots, and the script also applies it immediately to the live sysfs path so no reboot is required.
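sysfs reports all available schedulers on one line with the active one in brackets, and the script extracts it with the same grep/tr pattern shown here. The sample string is a stand-in for reading the live sysfs file:

```shell
# Parse the active scheduler from a sysfs-style line. SAMPLE stands in
# for: cat /sys/block/nvme0n1/queue/scheduler
SAMPLE='[none] mq-deadline kyber bfq'

# The bracketed entry is the active scheduler; strip the brackets.
ACTIVE=$(echo "$SAMPLE" | grep -o '\[.*\]' | tr -d '[]')
echo "active scheduler: ${ACTIVE}"
```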
Step 5: Sysctl dirty page tuning
The kernel’s dirty page parameters control how aggressively it flushes modified memory to disk. On an SD card you want frequent small flushes to avoid building up a burst the card cannot absorb quickly. On NVMe the opposite holds: the drive can flush large bursts extremely fast, so batching more dirty pages before writeback reduces syscall overhead and improves throughput, particularly for MariaDB’s redo log and wp-content writes during plugin updates.
| Parameter | Default | NVMe value | Effect |
|---|---|---|---|
| vm.dirty_ratio | 10% | 20% | Block writes at 20% dirty pages, giving more room to batch |
| vm.dirty_background_ratio | 5% | 5% | Start background writeback at 5% (unchanged) |
| vm.dirty_writeback_centisecs | 500 (5s) | 1500 (15s) | Flush interval tripled for larger, less frequent bursts |
| vm.dirty_expire_centisecs | 3000 (30s) | 3000 (30s) | Pages dirty longer than 30s must be flushed (unchanged) |
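To put the ratios in absolute terms, assume an 8 GB Pi 5 (an assumption; dirty_ratio is actually a percentage of reclaimable memory rather than total RAM, so real thresholds land somewhat lower):

```shell
# Rough dirty-page thresholds for an assumed 8 GB Pi 5. dirty_ratio
# applies to available (not total) memory, so treat these as upper bounds.
RAM_MB=8192
BLOCK_MB=$(( RAM_MB * 20 / 100 ))   # dirty_ratio=20: writers block here
WAKE_MB=$(( RAM_MB * 5 / 100 ))     # dirty_background_ratio=5: flusher wakes here

echo "writers block at ~${BLOCK_MB} MB dirty"
echo "flusher wakes at ~${WAKE_MB} MB dirty"
```

A burst of up to roughly 1.6 GB of dirty pages is nothing for an NVMe to drain, but it would stall an SD card for tens of seconds, which is why the SD-era defaults flush far more eagerly.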
Step 6: Docker data migration
The migration sequence is designed to be safe at each stage. The stack is stopped cleanly with a 30-second timeout before Docker itself is halted, so no writes are in flight during the copy. The copy uses rsync -aHAX, which preserves hard links (critical for Docker’s overlay2 layer sharing, since layers reference each other through hard-linked directory trees), ACLs, and extended attributes. If Docker fails to start from the new path, confirmed by comparing docker info --format '{{.DockerRootDir}}' against the expected value, the script automatically reverts daemon.json, renames the backup directory back to its original path, and restarts Docker in the original configuration. The SD card backup is only deleted after you explicitly confirm the stack is healthy on the new path.
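Why -H matters is easy to demonstrate: hard-linked files share a single inode, so a copy tool that ignores hard links would duplicate the data once per link, inflating an overlay2 tree dramatically. A quick experiment in a temp directory (assumes GNU stat, as on Pi OS):

```shell
# Hard links share an inode; rsync -H preserves that relationship, while
# a naive per-file copy would write the data once per link name.
tmp=$(mktemp -d)
echo "layer data" > "$tmp/a"
ln "$tmp/a" "$tmp/b"                # second name for the same inode

ino_a=$(stat -c '%i' "$tmp/a")
ino_b=$(stat -c '%i' "$tmp/b")
links=$(stat -c '%h' "$tmp/a")      # link count: 2

[ "$ino_a" = "$ino_b" ] && echo "same inode, ${links} links, stored once"
```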
Step 7: Summary output
The final summary prints mount statistics, the confirmed Docker root, the active I/O scheduler, container health status from docker compose ps, and the curl command to verify the site is responding. If the SD card backup still exists because you declined the deletion prompt, it surfaces that too with the removal command.
6. MariaDB tuning
With the NVMe confirmed and running, it is worth revisiting the InnoDB I/O capacity settings in mariadb/my.cnf. These parameters govern how many I/O operations per second InnoDB budgets for background flushing of the buffer pool and the redo log. Setting them too low causes InnoDB to throttle background writes, which allows dirty pages to accumulate and eventually produces write stalls on busy tables.
# Before (conservative -- written for "SSD/NVMe" generically)
innodb_io_capacity = 2000
innodb_io_capacity_max = 4000
# After (confirmed NVMe on Pi 5 HAT+)
innodb_io_capacity = 4000
innodb_io_capacity_max = 8000
A Pi 5 with a mid-range NVMe comfortably sustains 4000+ random IOPS under a WordPress workload. These values are still conservative relative to the drive’s raw capability; the bottleneck at low concurrency is the single-threaded query pattern rather than storage throughput. Apply with docker compose restart mariadb.
7. What stays on the SD card
After migration, the SD card’s write workload drops to near zero during normal operation. The only paths that remain active are listed below:
| Path | Write behaviour |
|---|---|
| /boot/firmware | Read-only after boot; written only during rpi-update |
| OS root (/etc, /usr) | Reads on service startup; writes only on apt upgrades |
| /var/log | Absorbed into RAM by log2ram; synced to SD on clean shutdown only |
| systemd journal | Volatile (RAM only), never touches SD |
log2ram is worth keeping even after this migration. The OS itself (sshd, systemd, cron, apt) still generates log traffic that would otherwise write directly to the SD card. log2ram intercepts that traffic and batches it into a single flush on shutdown, preserving write cycle budget for the card’s remaining workload.
8. Running the script
Clone the repo onto the Pi and execute the script from the repo root. The bootstrap script integrates this step automatically if an NVMe device is detected at setup time:
cd ~/andrewbakerninja-pi
bash scripts/setup-nvme.sh
Via bootstrap, the step appears as part of the automatic provisioning sequence:
bash bootstrap.sh
# Step 9e/9 -- NVMe (optional)
# NVMe drive detected at /dev/nvme0n1.
# Running setup-nvme.sh to mount and migrate Docker data...
Once migration is complete, run the verification script to confirm everything landed correctly:
cat > /home/pi/verify-nvme.sh <<'EOF'
#!/usr/bin/env bash
# verify-nvme.sh -- confirm NVMe migration is healthy
set -euo pipefail
echo ""
echo "Docker root:"
docker info | grep "Docker Root Dir"
# Expected: Docker Root Dir: /mnt/nvme/docker
echo ""
echo "Container status:"
docker compose ps
echo ""
echo "NVMe mount:"
df -h /mnt/nvme
echo ""
echo "I/O scheduler:"
cat /sys/block/nvme0n1/queue/scheduler
# Expected: [none]
echo ""
echo "Site health:"
HTTP_CODE="$(curl -sk -o /dev/null -w '%{http_code}' https://andrewbaker.ninja/)"
echo "HTTP ${HTTP_CODE}"
[[ "${HTTP_CODE}" == "200" ]] && echo "Site responding" || echo "WARNING: unexpected status"
echo ""
EOF
chmod +x /home/pi/verify-nvme.sh
The migration script is idempotent, so re-running it at any point is safe. If the NVMe is already mounted and Docker already points at /mnt/nvme/docker, the entire script completes in a few seconds with every step reported as skipped.
9. Source
Full setup, including docker-compose.yml, Nginx configuration, and the Cloudflare tunnel integration, is at github.com/andrewbakercloudscale/andrewbakerninja-pi. The migration script lives at scripts/setup-nvme.sh in that repository.