# Backups & migrations
Vaultbase ships two distinct mechanisms for persistence portability:
- Backup & restore — full database snapshot (data + schema). Settings → Backup & restore. For disaster recovery and DB-level rollback.
- Migrations — JSON snapshot of the schema (no data). Settings → Migrations. For shipping schemas between environments.
## Backup & restore (data + schema)

- `GET /api/admin/backup` ← downloads the live `data.db`
- `POST /api/admin/restore` ← multipart upload, replaces `data.db`

The download is a binary `.db` file — copy it to your backup target. Restore
replaces all current data; the JWT signing key is unchanged so existing
tokens stay valid.
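As a sketch, a scripted restore might look like the helper below. The multipart field name `file`, the port, and the localhost URL are assumptions — verify them against your deployment before relying on this.

```shell
# Hypothetical restore helper. Assumes an admin JWT in $ADMIN_JWT and a
# multipart field named "file" (an assumption; check your deployment).
# Remember: restore replaces ALL current data.
vb_restore() {
  local backup_file="$1"
  curl -fsS -X POST \
    -H "Authorization: Bearer $ADMIN_JWT" \
    -F "file=@${backup_file}" \
    http://localhost:8091/api/admin/restore
}
```

Usage: `vb_restore /backups/vaultbase-2026-04-27.db`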
What’s NOT included:
- `<dataDir>/uploads/` (file uploads — back up the filesystem separately)
- `<dataDir>/logs/` (rotate via your usual log infrastructure)
- `<dataDir>/.secret` (don't restore across hosts unless you mean to)
## Automating backups

Cron the GET endpoint with an admin token:
```bash
#!/usr/bin/env bash
set -euo pipefail

DATE=$(date -u +%Y-%m-%dT%H-%M-%SZ)
TOKEN="$ADMIN_JWT"

curl -fsS -o "/backups/vaultbase-$DATE.db" \
  -H "Authorization: Bearer $TOKEN" \
  http://localhost:8091/api/admin/backup

# Keep last 30
ls -1t /backups/vaultbase-*.db | tail -n +31 | xargs -r rm
```

For uploads, snapshot the filesystem with restic / borgbackup / rsync — they're just files.
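The `tail -n +31` retention rule is easy to get off by one. Here is a self-contained demo of the same logic on a scratch directory; it sorts by filename (the fake names embed the date) where the real script sorts by mtime via `ls -1t`:

```shell
# Create 35 fake dated backups, keep the 30 newest, delete the rest.
BK=$(mktemp -d)
for i in $(seq -w 1 35); do
  touch "$BK/vaultbase-2026-01-${i}.db"
done
# Newest first by name; skip the first 30 lines; remove the remainder.
ls -1 "$BK"/vaultbase-*.db | sort -r | tail -n +31 | xargs -r rm
REMAINING=$(ls -1 "$BK" | wc -l | tr -d ' ')
echo "remaining: $REMAINING"
rm -rf "$BK"
```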
## Schema migrations (no data)

Designed for dev → staging → prod schema sync, and for round-tripping schemas via git.
### Export

Settings → Migrations → Download snapshot in the admin, or:
`GET /api/admin/migrations/snapshot` → JSON body, downloads as `vaultbase-snapshot-YYYY-MM-DD.json`

Snapshot shape:

```json
{
  "generated_at": "2026-04-27T12:34:56.000Z",
  "version": 1,
  "collections": [
    {
      "name": "posts",
      "type": "base",
      "fields": [
        { "name": "title", "type": "text", "required": true },
        { "name": "body", "type": "text" }
      ],
      "list_rule": null,
      "view_rule": null,
      "create_rule": "@request.auth.id != \"\""
    },
    ...
  ]
}
```

Drops the DB-only fields (`id`, `created_at`, `updated_at`) since `name` is
the cross-environment identifier.
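Because the snapshot is plain JSON, CI can sanity-check it without extra tooling. A minimal sketch using only the `python3` stdlib (the sample snapshot here is abbreviated and hypothetical):

```shell
# Write a tiny sample snapshot, then list its collection names.
SNAP=$(mktemp)
cat > "$SNAP" <<'EOF'
{
  "generated_at": "2026-04-27T12:34:56.000Z",
  "version": 1,
  "collections": [
    { "name": "posts", "type": "base", "fields": [] },
    { "name": "users", "type": "auth", "fields": [] }
  ]
}
EOF
NAMES=$(python3 -c 'import json, sys; s = json.load(open(sys.argv[1])); print(" ".join(c["name"] for c in s["collections"]))' "$SNAP")
echo "$NAMES"
rm -f "$SNAP"
```

This fails loudly on malformed JSON, which makes it a cheap pre-commit check for `schema.json`.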
### Preview the diff before applying

`POST /api/admin/migrations/diff`

```json
{ "snapshot": { ...JSON snapshot... } }
```

→

```json
{
  "data": {
    "added": [ "audits" ],
    "modified": [
      { "name": "posts", "changes": ["+ field tags (text)", "view_rule changed"] }
    ],
    "unchanged": [ "users" ],
    "removed": [ "drafts" ]
  }
}
```

Returns admin-friendly change strings — which fields would be added or removed, which rules differ, and so on. Nothing is applied; this endpoint is purely read-only.
The Admin Migrations tab shows this preview before Apply runs:
chips for added / modified / unchanged / removed plus an expandable
<details> block listing the per-collection changes for everything in
modified. The Apply button stays disabled until the diff has loaded
— you can’t accidentally apply a snapshot you haven’t reviewed.
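The same preview works headlessly, e.g. as a CI gate before deploy. A sketch under the same assumptions as the backup script (`$ADMIN_JWT`, the localhost URL and port):

```shell
# Hypothetical diff helper: POST a committed snapshot file to the
# read-only diff endpoint and print the change summary.
vb_diff() {
  local snapshot="$1"   # path to schema.json
  curl -fsS -X POST \
    -H "Authorization: Bearer $ADMIN_JWT" \
    -H "Content-Type: application/json" \
    --data "{\"snapshot\": $(cat "$snapshot")}" \
    http://localhost:8091/api/admin/migrations/diff
}
```

Usage: `vb_diff schema.json` — since the endpoint is read-only, it is safe to run on every pipeline execution.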
### Apply

Settings → Migrations → Upload & apply with mode selector, or:
`POST /api/admin/migrations/apply`

```json
{
  "snapshot": { ...JSON snapshot... },
  "mode": "additive"   // default; "sync" updates existing too
}
```

Modes:
| Mode | What it does |
|---|---|
| `additive` (default, safe) | Creates missing collections. Skips existing ones — never modifies them. |
| `sync` | Also updates existing collections to match the snapshot (fields, rules, view query). Removed fields drop their column and data. Confirm before running. |
Neither mode ever deletes a collection. Drop manually if needed.
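For scripted deploys, the apply call can be wrapped in a small helper that defaults to the safe mode. A sketch under the same assumptions as the earlier examples (`$ADMIN_JWT`, localhost URL):

```shell
# Hypothetical apply helper; mode defaults to "additive" so a plain
# call can never modify existing collections.
vb_apply() {
  local snapshot="$1" mode="${2:-additive}"
  curl -fsS -X POST \
    -H "Authorization: Bearer $ADMIN_JWT" \
    -H "Content-Type: application/json" \
    --data "{\"snapshot\": $(cat "$snapshot"), \"mode\": \"$mode\"}" \
    http://localhost:8091/api/admin/migrations/apply
}
```

Usage: `vb_apply schema.json` on first deploy, `vb_apply schema.json sync` on follow-ups (after a manual confirm).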
Response:
```json
{
  "data": {
    "created": ["posts", "users"],
    "updated": [],
    "skipped": ["existing_collection"],
    "errors": []
  }
}
```

## Recommended workflow
1. Build your schema in dev via the admin UI.
2. Download snapshot → commit `schema.json` to git.
3. CI / deploy script:
   - Spin up prod with an empty `<dataDir>` (or an existing one).
   - Apply the snapshot (additive on the first deploy, sync on follow-ups — gated by a manual confirm).
4. App data flows in via the records API — no schema work in prod.
## Limitations

- Type changes within a field are blocked (they would require data conversion). Drop and re-add the field to change its type.
- Collection type changes (e.g. base → auth) are blocked in sync mode — drop the collection manually first.
## CSV import / export (data, base collections)

For bulk data, not schemas:
- Records page → Export — downloads all rows of a base collection as CSV. Password fields are excluded; object/array values are JSON-encoded into a single cell.
- Records page → Import — upload a CSV; rows go through validation, and per-row errors are returned in the response summary.
- `GET /api/admin/export/<collection>`
- `POST /api/admin/import/<collection>` ← Content-Type: text/csv
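A hedged sketch of the round trip for a hypothetical `posts` collection, reusing the `$ADMIN_JWT` and localhost URL assumptions from the backup examples:

```shell
# Export a base collection to <name>.csv.
vb_export_csv() {
  curl -fsS -H "Authorization: Bearer $ADMIN_JWT" \
    -o "$1.csv" "http://localhost:8091/api/admin/export/$1"
}

# Import a CSV file into a collection; the response summarises
# per-row validation errors.
vb_import_csv() {
  curl -fsS -X POST \
    -H "Authorization: Bearer $ADMIN_JWT" \
    -H "Content-Type: text/csv" \
    --data-binary "@$2" \
    "http://localhost:8091/api/admin/import/$1"
}
```

Usage: `vb_export_csv posts`, then `vb_import_csv posts posts.csv` on the target instance.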