Open Source  ·  MIT License

The AMI for your
Raspberry Pi

Block-level nightly backup of your Pi to AWS S3. Restore a complete, bootable system to new hardware in one command — no manual setup, no secrets to re-enter.

View on GitHub Get started ↓
pi@raspberrypi ~ backup
$ bash pi-image-backup.sh --force
# Stops Docker, images each partition with partclone,
# compresses in parallel with pigz, streams to S3.
 
✔ Partition table saved
✔ nvme0n1p1 → S3 (2.1 GB, 3m 12s)
✔ nvme0n1p2 → S3 (1.4 GB, 2m 08s)
✔ Docker restarted
✔ Manifest uploaded [2026-04-16]
 
$ # Push notification sent via ntfy.sh

Architecture

How it works

Two shell scripts: one runs nightly on the Pi, one restores everything to new hardware. No agents, no daemons, no cloud accounts beyond AWS.

BACKUP · STEP 01

Stop Docker

Containers halt so databases flush all writes to disk. No dirty pages, no recovery needed on restore. Downtime: 5–15 min at 2am.
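A minimal sketch of this quiesce step, assuming a Docker Compose stack in a hypothetical `~/app` directory (the actual script's paths and stop logic will differ):

```shell
# Hypothetical quiesce step: stop containers so databases flush, then sync the page cache.
quiesce() {
  docker compose --project-directory "$HOME/app" stop   # halt containers gracefully
  sync                                                  # flush remaining dirty pages to disk
}
```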

BACKUP · STEP 02

Image with partclone

Reads only used blocks from each partition — not empty sectors. 954 GB NVMe at 28% full: partclone reads 267 GB, dd would read 954 GB.
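The figures above are a simple proportion of disk usage:

```shell
# Back-of-envelope: how much each tool reads from a 954 GB NVMe at 28% utilisation
disk_gb=954
used_pct=28
echo "partclone reads ~$(( disk_gb * used_pct / 100 )) GB; dd reads ${disk_gb} GB"
# → partclone reads ~267 GB; dd reads 954 GB
```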

BACKUP · STEP 03

Compress & stream

pigz compresses in parallel using all Pi 5 cores. Output streams directly to S3 — no local temp file needed, no second disk required.
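The three tools chain into a single pipeline. A sketch under assumed names — the real script's flags and S3 key layout may differ:

```shell
# Image one ext4 partition, compress on all cores, stream straight to S3 (no temp file).
backup_partition() {
  local dev="$1" bucket="$2" stamp="$3"
  partclone.ext4 -c -s "$dev" -o - \
    | pigz -p "$(nproc)" \
    | aws s3 cp - "s3://${bucket}/${stamp}/$(basename "$dev").img.gz" \
        --storage-class STANDARD_IA
}
```

Because every stage streams, peak local disk usage stays near zero regardless of partition size.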

BACKUP · STEP 04

Notify & verify

Docker restarts, manifest JSON uploaded, push notification sent via ntfy.sh. Optional SHA-256 verification confirms every file in S3.
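The ntfy.sh push is a single HTTP POST; a sketch with a placeholder topic URL:

```shell
# Send a push notification via ntfy.sh (the topic URL here is a placeholder).
notify() {
  curl -fsS -d "$1" "https://ntfy.sh/my-pi-backups" > /dev/null
}
# e.g. notify "Backup complete: 3.4 GB"
```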

RESTORE · STEP 01

Replay partition table

sfdisk restores the saved GPT layout to the new device. Partitions are recreated exactly — same sizes, same order.
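Saving and replaying the table are symmetric sfdisk invocations (device and file names assumed):

```shell
# Save the GPT layout during backup; replay it onto the new device during restore.
save_table()   { sfdisk --dump /dev/nvme0n1 > partition-table.sfdisk; }
replay_table() { sfdisk /dev/nvme0n1 < partition-table.sfdisk; }
```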

RESTORE · STEP 02

Stream from S3

Each partition streams S3 → gunzip → partclone.restore. No local download. Works from any Linux machine attached to the target drive.
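The restore path is the backup pipeline reversed; a sketch with assumed names:

```shell
# Stream one partition image from S3 through gunzip into partclone's restore mode.
restore_partition() {
  local bucket="$1" stamp="$2" img="$3" dev="$4"
  aws s3 cp "s3://${bucket}/${stamp}/${img}" - \
    | gunzip -c \
    | partclone.ext4 -r -s - -o "$dev"
}
```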

RESTORE · STEP 03

Boot & expand

Insert storage into the new Pi, power on. Raspberry Pi OS auto-expands the root filesystem on first boot. No extra config.

RESTORE · STEP 04

Verify

test-recovery.sh --post-boot checks OS, NVMe, Docker containers, Cloudflare tunnel, cron jobs, MariaDB, and HTTP — PASS/FAIL per check.

~35 min
Pi death to fully operational
3–5 GB
Compressed backup size
~$3/mo
60-day retention (STANDARD_IA)
20×
Less data read than dd on a typical NVMe

Efficiency

partclone, not dd

dd reads every sector regardless of whether it contains data. partclone reads the filesystem allocation bitmap and skips unallocated blocks. Same result, a fraction of the work.

                                  dd                           partclone
What it reads                     Every sector (used + empty)  Used blocks only
Speed on 954 GB NVMe (28% full)   ~90 min                      ~5 min
S3 upload size                    ~10 GB (compressed zeros)    ~3–5 GB
Restore                           gunzip | dd                  partclone per partition
Docker downtime                   60–90 min                    5–15 min

Coverage

Everything. Block-level.

Because it's a block-level image of the full device, there's nothing to configure. Every file, database, container, service, and SSH key is included automatically.

Operating System

OS, kernel, packages, systemd services — including cloudflared, custom watchdogs, and any compiled binaries.

Docker Runtime

All images, volumes, networks, and compose configs. MariaDB data, WordPress uploads, application files — everything in /var/lib/docker.

Configuration & Secrets

.env files, config.env, credentials, authorized_keys, cron jobs, logrotate rules — all restored exactly as-is.

Boot Firmware

config.txt, cmdline.txt, and the full /boot/firmware partition. The restored Pi boots identically to the original.

Partition Table

GPT layout saved separately as an sfdisk dump and applied first on restore. Works across different NVMe sizes.

NVMe Tuning

Custom I/O scheduler settings, udev rules, and performance tuning survive the restore intact.


Quick start

Up and running in minutes

One install script handles dependencies, AWS verification, lifecycle policy, cron scheduling, and a dry-run test.

1

Clone on the Pi

SSH into your Pi and clone the repo.

git clone https://github.com/andrewbakercloudscale/pi2s3.git ~/pi2s3
cd ~/pi2s3
2

Run the installer

Prompts for your S3 bucket, region, and ntfy URL. Installs partclone, pigz, and AWS CLI v2, then sets up the lifecycle policy and cron schedule.

bash install.sh
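The nightly cron entry the installer writes might look like this — the schedule and paths here are assumptions, with the 2am slot matching the downtime window noted earlier:

```shell
# /etc/cron.d/pi2s3 — hypothetical nightly entry written by install.sh
0 2 * * * pi /home/pi/pi2s3/pi-image-backup.sh >> /var/log/pi2s3.log 2>&1
```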
3

First backup

Force an immediate backup to confirm everything works end-to-end. You'll get a push notification when done.

bash ~/pi2s3/pi-image-backup.sh --force
4

List your backups

bash ~/pi2s3/pi-image-backup.sh --list

2026-04-16/  3.4 GB  raspberrypi  (nvme0n1, mmcblk0)
2026-04-15/  3.3 GB  raspberrypi
2026-04-14/  3.3 GB  raspberrypi
5

Named AWS profile

If you use multiple AWS accounts, set a profile in config.env:

AWS_PROFILE="pi-backup"   # in config.env
6

Check status any time

bash ~/pi2s3/install.sh --status   # cron, log tail, dependency versions
bash ~/pi2s3/install.sh --upgrade  # git pull + redeploy

Disaster recovery

Restore to a new Pi

When your Pi dies: validate from your Mac, flash a minimal SD card, attach the target NVMe, SSH in, and restore. Full runbook in RECOVERY.md.

1

Validate from Mac (before touching hardware)

Confirms the S3 image exists, reads the manifest, estimates restore time, and prints the exact restore command.

bash ~/pi2s3/test-recovery.sh --pre-flash

✔ AWS access OK
✔ Image exists: 2026-04-16/  (3.4 GB)
✔ Manifest: raspberrypi · Pi 5 · Bookworm · nvme0n1
  Estimated restore time: ~12 min
  Run: pi-image-restore.sh --date 2026-04-16 --device /dev/nvme0n1
2

Restore (on Linux / new Pi)

Interactive: pick backup date and target device. Or non-interactive for scripted recovery. Streams directly from S3 — no local storage needed.

bash ~/pi2s3/pi-image-restore.sh

# Or non-interactively:
bash ~/pi2s3/pi-image-restore.sh --date 2026-04-16 --device /dev/nvme0n1 --yes
3

Boot & clear stale SSH key

Raspberry Pi OS auto-expands the root filesystem. Clear the old host key on your Mac before connecting.

ssh-keygen -R raspberrypi.local
ssh pi@raspberrypi.local
4

Post-boot verification

Checks filesystem expansion, NVMe mount, all Docker containers, Cloudflare tunnel, cron jobs, MariaDB, HTTP, memory, and load.

bash ~/pi2s3/test-recovery.sh --post-boot

✔ OS: Debian GNU/Linux 12 (bookworm) aarch64
✔ Filesystem expanded (954 GB)
✔ NVMe mounted at /mnt/nvme
✔ Docker: 6/6 containers running
✔ Cloudflare tunnel: active (2 ha_connections)
✔ Cron: pi2s3 backup + app-layer backup present
✔ MariaDB: responding, 42 tables
✔ HTTP: 200 OK on localhost

Self-healing

Cloudflare tunnel watchdog

An optional monitor that runs every 5 minutes. If your site or tunnel goes down, it automatically recovers through three escalating phases before rebooting the Pi as a last resort.

root cron · every 5 min
Check 1: Any Docker containers stopped?
Check 2: HTTP probe on localhost — 5xx or connection failure?
Check 3: cloudflared ha_connections > 0?
 
All OK → log and exit
 
Phase 1 (0–20 min) → start containers + restart cloudflared
Phase 2 (20–40 min) → docker compose down/up (full stack restart)
Phase 3 (40+ min) → dump diagnostics + reboot Pi (max once/6 h)
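The three checks above can be sketched as shell tests. The container filter, probe URL, and `cloudflared_ha_connections` helper are all assumptions — the last is a stand-in for however the real script reads the tunnel's connection count:

```shell
# Return non-zero as soon as any health check fails.
healthy() {
  # 1. any container stopped? (flags exited containers)
  [ -z "$(docker ps -q --filter status=exited)" ] || return 1
  # 2. HTTP probe: curl -f fails on 5xx or connection refusal
  curl -fsS -o /dev/null --max-time 10 http://localhost/ || return 1
  # 3. tunnel up: at least one HA connection (hypothetical helper)
  [ "$(cloudflared_ha_connections)" -gt 0 ] || return 1
}
```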

Enable

Set CF_WATCHDOG_ENABLED=true in config.env and run bash install.sh --watchdog. That's it.

Push notifications

ntfy.sh alerts at every stage: first failure, each phase escalation, recovery, and a "manual intervention needed" alert when the reboot rate limit blocks Phase 3.

Rate-limited reboots

Phase 3 reboots are capped to once every 6 hours. If the rate limit is hit, an alert fires instead and the watchdog exits cleanly.
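The once-per-6-hours cap can be as simple as a timestamp file; a sketch, with the stamp path assumed:

```shell
# Allow a reboot only if the last one was 6+ hours ago.
: "${STAMP:=/var/run/pi2s3-last-reboot}"
can_reboot() {
  local now last
  now=$(date +%s)
  last=$(cat "$STAMP" 2>/dev/null || echo 0)
  [ $(( now - last )) -ge $(( 6 * 3600 )) ]
}
# On an allowed reboot: date +%s > "$STAMP"; sudo reboot
```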


Cost

~$3/month for 60 days of history

At 3–5 GB per compressed image using S3 STANDARD_IA. Costs vary by region — af-south-1 (Cape Town) is slightly higher than us-east-1.

Retention    S3 storage   Monthly cost (STANDARD_IA)
7 images     ~25 GB       <$1/month
30 images    ~120 GB      ~$2/month
60 images    ~240 GB      ~$3/month

S3 lifecycle policy is installed automatically by install.sh --setup. Images beyond MAX_IMAGES (default: 60) are deleted automatically. Switch to GLACIER_IR for long-term cold storage at ~80% less cost.
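The headline figure is consistent with list pricing. A rough check, assuming ~4 GB per image and STANDARD_IA at ~$0.0125/GB-month (the us-east-1 list price; af-south-1 runs higher):

```shell
# 60 retained images × ~4 GB ≈ 240 GB, priced at the STANDARD_IA list rate
images=60
gb_per_image=4
awk -v n="$images" -v g="$gb_per_image" \
  'BEGIN { printf "%d GB -> $%.2f/month\n", n * g, n * g * 0.0125 }'
# → 240 GB -> $3.00/month
```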