Firmware CI/CD Pipeline
A unified CI/CD system that uses Git tag-based triggers to automate the release of ESP-IDF firmware, Debian packages, Docker images, and cloud infrastructure across 6 pipelines.
Background
- System: Rovothome product line (Ceily, Wally) - ESP32 firmware + RPi5 ROS2 system + cloud monitoring
- Requirements: Secure Boot signed firmware, GPG signed packages, multi-architecture Docker images deployed to respective repositories
- Constraints: 2-person team, secure key handling in CI, managing 5+ deployment channels
Core Problem
A single codebase must produce and deploy 5 different artifact types:
firmware/
├── v1/        → ESP-IDF firmware → S3 (Secure Boot signed)
├── v2/
│   ├── ros/   → Docker images    → GHCR (amd64 + arm64)
│   └── deb/   → Debian packages  → APT repository (GPG signed)
├── cloud/     → Lambda, EC2      → AWS infrastructure
└── monitor/   → Python package   → PyPI
The Challenge: Each target requires completely different build tools, signing methods, and deployment processes. ESP-IDF firmware builds in Docker containers and is signed with espsecure; Debian packages are built with dpkg-deb and published through an APT repository managed with Aptly; Docker images need Buildx for cross-compilation. Handling all of this manually risks Secure Boot key leaks and human error.
Key Idea
Git tag patterns explicitly express release intent. The tag format determines which pipeline runs:
| Tag Pattern | Triggered Pipeline | Deployment Target |
|---|---|---|
| v1-2.0.0 | v1-firmware-build | S3 (signed binaries) |
| v1-factory-1.0.0 | v1-factory-build | S3 (production line) |
| rvt-system-v1.2.3 | rvt-deb-publish | APT repository |
| rvt-monitor-v0.1.0 | rvt-monitor-publish | PyPI |
Cloud deployment triggers on main branch push + path filters.
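In the workflows themselves, GitHub Actions matches these tags with glob filters; the same routing logic can be sketched in Python with anchored regexes (the pattern-to-pipeline table mirrors the one above; names are illustrative, not the actual workflow files):

```python
import re

# Routing table mirroring the tag patterns above.
# v1-factory-* is listed before v1-* so the more specific pattern wins.
PIPELINES = [
    (re.compile(r"^v1-factory-(\d+\.\d+\.\d+)$"), "v1-factory-build"),
    (re.compile(r"^v1-(\d+\.\d+\.\d+)$"), "v1-firmware-build"),
    (re.compile(r"^rvt-system-v(\d+\.\d+\.\d+)$"), "rvt-deb-publish"),
    (re.compile(r"^rvt-monitor-v(\d+\.\d+\.\d+)$"), "rvt-monitor-publish"),
]

def route(tag: str):
    """Return (pipeline, version) for a release tag, or None if no pattern matches."""
    for pattern, pipeline in PIPELINES:
        m = pattern.match(tag)
        if m:
            return pipeline, m.group(1)
    return None
```

Because every pattern also captures the version number, the same match that selects the pipeline provides the version string for artifact naming.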
Approach
1) Secure Boot Key Handling Architecture
Secure Boot keys cannot be stored in the repo, but CI needs them for firmware signing:
┌─────────────────┐    OIDC    ┌─────────────────┐
│ GitHub Actions  │───────────→│  AWS IAM Role   │
└────────┬────────┘            └────────┬────────┘
         │                              │
         │ assume role                  │ GetSecretValue
         ▼                              ▼
┌─────────────────┐            ┌─────────────────┐
│ Temp Credentials│            │ Secrets Manager │
└────────┬────────┘            │  (signing key)  │
         │                     └────────┬────────┘
         └──────────────┬───────────────┘
                        ▼
               ┌─────────────────┐
               │ Create temp file│
               │  (build only)   │
               └────────┬────────┘
                        │
          ┌─────────────┼─────────────┐
          ▼             ▼             ▼
       [Build]       [Sign]       [Upload]
                        │
                        ▼
               ┌─────────────────┐
               │  rm -rf keys/   │ ← always() condition
               └─────────────────┘
Core Code:
- name: Get signing keys from Secrets Manager
  run: |
    pip install boto3
    python v1/keys/fetch_secure_boot_key.py

- name: Sign firmware
  run: |
    espsecure.py sign_data \
      --keyfile ../keys/secure_boot_signing_key.pem \
      --version 2 \
      --output build/${{ matrix.device }}-signed.bin \
      build/${{ matrix.device }}.bin

- name: Cleanup keys
  if: always()  # Runs even on build failure
  run: rm -rf v1/keys/
The if: always() condition ensures keys are deleted regardless of build success/failure.
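The same fetch-use-delete lifecycle can be sketched as a single Python helper. This is an illustrative stand-in, not the actual fetch_secure_boot_key.py: `fetch_secret` would wrap a boto3 GetSecretValue call and `sign` would invoke espsecure.py; here both are injected callables so the lifecycle itself is the focus.

```python
import os
import shutil
import tempfile

def with_signing_key(fetch_secret, sign):
    """Fetch a signing key, expose it as a 0o600 temp file for one signing
    step, and delete it even if signing fails (the `if: always()` analogue)."""
    key_dir = tempfile.mkdtemp(prefix="keys-")
    key_path = os.path.join(key_dir, "secure_boot_signing_key.pem")
    try:
        with open(key_path, "w") as f:
            f.write(fetch_secret())   # boto3 GetSecretValue in the real script
        os.chmod(key_path, 0o600)     # readable by the build user only
        return sign(key_path)
    finally:
        # Always remove the key material, mirroring `rm -rf v1/keys/`
        shutil.rmtree(key_dir, ignore_errors=True)
```

The `finally` block plays the role of the always() cleanup step: the key file is gone by the time the function returns, whether signing succeeded or raised.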
2) ESP-IDF Multi-Device Build
Building firmware for both Ceily and Wally devices from the same codebase:
strategy:
  matrix:
    device: [ceily, wally]
sdkconfig Merge Strategy:
sdkconfig.defaults (common settings)
+
sdkconfig.${device} (device-specific pins, features)
+
sdkconfig.release (optimization, debug disabled)
=
sdkconfig (final build config)
This structure manages common settings in one place while separating device-specific differences.
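ESP-IDF can merge these fragments itself at configure time via SDKCONFIG_DEFAULTS, but the last-file-wins behavior is easy to illustrate standalone. A minimal sketch (it treats `#`-prefixed lines as comments, so it skips ESP-IDF's `# CONFIG_X is not set` markers; a production merge would preserve them):

```python
from pathlib import Path

def merge_sdkconfig(paths, out="sdkconfig"):
    """Merge sdkconfig fragments in order; for duplicate CONFIG_ keys,
    the later file wins (defaults < device < release), as in the stack above."""
    merged = {}
    for p in paths:
        for line in Path(p).read_text().splitlines():
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blanks and comments
            key, _, value = line.partition("=")
            merged[key] = value
    Path(out).write_text("".join(f"{k}={v}\n" for k, v in merged.items()))
    return merged
```

Called as `merge_sdkconfig(["sdkconfig.defaults", f"sdkconfig.{device}", "sdkconfig.release"])`, a device-specific value overrides the shared default while everything else passes through untouched.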
Version Directory Structure:
s3://rvt-v1-firmware/dev/
├── 2.0.0-a1b2c3d/
│   ├── ceily.bin
│   └── wally.bin
├── 2.0.1-b2c3d4e/
│   ├── ceily.bin
│   └── wally.bin
└── latest.txt → "2.0.1-b2c3d4e"
Version + git short hash combination distinguishes different builds of the same version.
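Composing the object keys for one build is a few lines; this is a hypothetical helper (the real pipeline does the equivalent in workflow shell steps), with the channel and device list as assumptions matching the layout above:

```python
def firmware_keys(version, short_sha, devices=("ceily", "wally"), channel="dev"):
    """Compose S3 object keys for one build; the version + git short hash
    pair distinguishes rebuilds of the same version."""
    build_id = f"{version}-{short_sha}"
    return {device: f"{channel}/{build_id}/{device}.bin" for device in devices}
```

A deploy step would upload each binary to its key and then overwrite `latest.txt` with `build_id`, so devices can resolve the newest build with a single GET.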
3) Debian Package Deployment (Aptly + S3)
Operating a self-hosted APT repository so Raspberry Pi can install via apt install rvt-system:
┌─────────────┐      ┌─────────────┐      ┌─────────────┐
│  dpkg-deb   │─────→│    Aptly    │─────→│   S3 APT    │
│   (build)   │      │ (repo mgmt) │      │  (hosting)  │
└─────────────┘      └──────┬──────┘      └─────────────┘
                            │
                     Add GPG signature
Aptly Workflow:
# 1. Add package to local repository
aptly repo add rvt-stable rvt-system_1.2.3_arm64.deb
# 2. Create snapshot (version tracking)
aptly snapshot create rvt-system-20250204-153000 from repo rvt-stable
# 3. GPG sign and publish to S3
aptly publish switch \
  -gpg-key="Rovothome" \
  stable s3:rvt-apt: rvt-system-20250204-153000
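The three commands above are mechanical given a .deb path and a timestamp, so a thin wrapper can generate them as argv lists and hand them to subprocess. A sketch under that assumption (the function name and dry-run style are hypothetical, not part of the actual pipeline):

```python
from datetime import datetime

def aptly_publish_plan(deb_path, repo="rvt-stable", endpoint="s3:rvt-apt:",
                       gpg_key="Rovothome", now=None):
    """Build the aptly command sequence above as argv lists.
    Returning commands instead of running them keeps the plan inspectable."""
    ts = (now or datetime.now()).strftime("%Y%m%d-%H%M%S")
    snapshot = f"rvt-system-{ts}"
    return [
        ["aptly", "repo", "add", repo, deb_path],
        ["aptly", "snapshot", "create", snapshot, "from", "repo", repo],
        ["aptly", "publish", "switch", f"-gpg-key={gpg_key}",
         "stable", endpoint, snapshot],
    ]
```

Executing the plan is then `for cmd in plan: subprocess.run(cmd, check=True)`, and a failed step aborts before the repository is touched further.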
Raspberry Pi Client Setup:
# Add GPG key
curl -fsSL https://rvt-apt.s3.amazonaws.com/rvt-apt.gpg.key | sudo gpg --dearmor -o /etc/apt/keyrings/rvt.gpg
# Add APT source
echo "deb [signed-by=/etc/apt/keyrings/rvt.gpg] https://rvt-apt.s3.amazonaws.com stable main" | sudo tee /etc/apt/sources.list.d/rvt.list
# Install/update
sudo apt update && sudo apt install rvt-system
4) Docker Multi-Stage + Multi-Architecture Build
Separating development (dev) and production (prod) images while supporting both amd64 (dev PC) and arm64 (RPi5):
# Base: Common dependencies (ros2-control, can-utils, etc.)
FROM ros:jazzy-ros-base AS base
RUN apt-get install -y ros-${ROS_DISTRO}-ros2-control can-utils ...
# Dev: Source mounted, no build
FROM base AS dev
# Runtime mount with -v $(pwd):/ws/src
# Builder: Copy source and build
FROM base AS builder
COPY ros/src /ws/src
RUN colcon build --merge-install
# Prod: Copy only build artifacts (no source)
FROM base AS prod
COPY --from=builder /ws/install /ws/install
Why Multi-Stage?
| Stage | Purpose | Image Size | Source Included |
|---|---|---|---|
| dev | Development (source mounted) | ~2.1GB | Runtime mount |
| prod | Deployment (build results only) | ~1.8GB | None |
Production images exclude source code, reducing size and protecting intellectual property.
Cross-Compile with Buildx + QEMU:
- uses: docker/setup-qemu-action@v3  # arm64 emulation
- uses: docker/build-push-action@v5
  with:
    platforms: linux/amd64,linux/arm64  # Build both architectures
    push: true
    tags: ghcr.io/rovothomedev/rvt-ros2:prod  # GHCR image names must be lowercase
5) Selective Cloud Infrastructure Deployment
Deploy only changed components:
on:
  push:
    branches: [main]
    paths:
      - 'cloud/lambda/**'
      - 'cloud/grafana/**'
      - 'cloud/scripts/**'

jobs:
  detect-changes:
    steps:
      - uses: dorny/paths-filter@v3
        with:
          filters: |
            lambda:
              - 'cloud/lambda/**'
            grafana:
              - 'cloud/grafana/**'

  deploy-lambda:
    needs: detect-changes
    if: needs.detect-changes.outputs.lambda == 'true'
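The filter semantics are simple to model: a component deploys if any changed file matches one of its globs. A Python sketch of what dorny/paths-filter decides (the filter table mirrors the YAML above; fnmatch's `*` crosses `/`, so it approximates the `**` glob here):

```python
from fnmatch import fnmatch

# Filters mirroring the dorny/paths-filter config above.
FILTERS = {
    "lambda": ["cloud/lambda/**"],
    "grafana": ["cloud/grafana/**"],
}

def components_to_deploy(changed_files):
    """Return the components whose filters match any changed file."""
    return {
        name
        for name, patterns in FILTERS.items()
        if any(fnmatch(f, p) for p in patterns for f in changed_files)
    }
```

Each downstream job then keys off membership in that set, which is exactly what the `needs.detect-changes.outputs.lambda == 'true'` condition encodes.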
Post-Deployment Health Check:
- name: Health check
  run: |
    HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" http://$EC2_HOST:3000/api/health)
    if [ "$HTTP_CODE" != "200" ]; then
      echo "Health check failed"
      exit 1
    fi
Running the health check immediately after deployment catches a bad rollout in the same workflow run, rather than waiting for user reports.
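The single-shot curl fails the job on the first non-200 response, so a service that needs a few seconds to start would fail spuriously; a short retry loop hardens this. A hypothetical sketch (the `check` callable stands in for the HTTP request, e.g. a urllib call against /api/health):

```python
import time

def wait_healthy(check, retries=5, delay=2.0, sleep=time.sleep):
    """Poll a health check until it returns HTTP 200 or retries run out.
    `sleep` is injectable so the backoff is testable without real waiting."""
    for attempt in range(retries):
        if check() == 200:
            return True
        if attempt < retries - 1:
            sleep(delay)  # wait before the next probe
    return False
```

The workflow step would then `exit 1` when `wait_healthy` returns False, preserving the same fail-fast behavior with tolerance for slow startups.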
6) OIDC vs Long-Lived Credentials
OIDC Authentication Flow:
GitHub Actions                            AWS
      │                                    │
      │ ──(1) Request JWT token─────────→  │
      │                                    │ (2) Verify IAM Role
      │ ←──(3) Temp credentials (15m)────  │
      │                                    │
      │ ──(4) Call S3/Lambda/etc────────→  │
      │                                    │
Benefits:
- No access keys stored in secrets
- Credentials auto-expire after 15 minutes
- Audit logs track which workflow accessed what
permissions:
  id-token: write  # Permission to request OIDC token

- uses: aws-actions/configure-aws-credentials@v4
  with:
    role-to-assume: arn:aws:iam::xxx:role/github-actions
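On the AWS side, the IAM role's trust policy is what scopes the OIDC token to this repository. A representative trust policy looks like the following (the repository path is a hypothetical example, and the account ID is left as xxx as above):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {
      "Federated": "arn:aws:iam::xxx:oidc-provider/token.actions.githubusercontent.com"
    },
    "Action": "sts:AssumeRoleWithWebIdentity",
    "Condition": {
      "StringEquals": {
        "token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
      },
      "StringLike": {
        "token.actions.githubusercontent.com:sub": "repo:rovothome/firmware:*"
      }
    }
  }]
}
```

The `sub` condition is the key control: only workflows from the named repository can assume the role, and it can be narrowed further to specific branches or tag refs.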
Tradeoffs
| Decision | Rationale | Cost |
|---|---|---|
| GitHub Actions | GitHub repo integration, OIDC support, 2000 free min/month | GitHub lock-in |
| Tag-based triggers | Explicit release intent, automatic version parsing | Manual tagging required |
| AWS Secrets Manager | Easy key rotation, IAM integration | ~$0.40/secret/month plus per-API-call fees, AWS dependency |
| OIDC authentication | No long-lived credentials, 15min TTL | Complex initial IAM setup |
| Aptly + S3 | Self-hosted APT repo, version management | Aptly learning curve, S3 costs |
| Docker multi-stage | dev/prod separation, smaller images | Dockerfile complexity |
| Buildx + QEMU | Single workflow for 2 architectures | Slow arm64 builds (~15min) |
| Path-based deployment | Prevent unnecessary redeployments | paths-filter action dependency |
Results
6 Pipelines:
| Pipeline | Trigger | Build Time | Target |
|---|---|---|---|
| v1-firmware-build | v1-* tag | ~8min | S3 |
| v1-factory-build | v1-factory-* tag | ~6min | S3 |
| rvt-deb-publish | rvt-system-v* tag | ~3min | APT (S3) |
| docker-build | Manual | ~25min | GHCR |
| cloud-deploy | main push + path | ~2min | Lambda/EC2 |
| rvt-monitor-publish | rvt-monitor-v* tag | ~1min | PyPI |
Security:
- Secure Boot keys: Secrets Manager → temp file → immediate deletion
- APT packages: GPG signed, public key hosted on S3
- AWS auth: OIDC (no long-lived credentials)
- SSH keys: base64 encoded secret → temp file → immediate deletion
Developer Experience:
- git tag v1-2.0.0 && git push --tags → automatic build/sign/upload
- GitHub Step Summary shows deployment results
- GitHub notifications on build failure
Key Takeaways
Git tag pattern-based triggers explicitly express “which artifact to release.” v1-* means firmware, rvt-system-v* means Debian package—the tag format alone conveys intent.
The OIDC + Secrets Manager combination enables secure key handling in CI while maintaining full automation. Keys never exist in plaintext in the repo or secrets—they exist only temporarily during builds then get deleted.
For small teams managing multiple deployment channels, separating pipelines per channel while unifying the trigger mechanism (tag patterns) proves effective for maintenance.