Migration-service: developer usage instructions
The migration-service project from webdevelop-pro is a Go-based database migration tool that tracks schema state in Postgres tables (`migration_services` for per-service versioning and `migration_service_logs` for per-file SQL and hash auditing).
Migration-service organizes migrations by service name and service priority, and then orders each service’s SQL files by a numeric prefix. The tool discovers .sql files recursively under MIGRATION_DIR, parses naming conventions to derive ordering metadata, and applies SQL while recording hashes and executed SQL for drift detection (--check) and remediation (--check-apply).
How migration-service works
To simplify the migration-service workflow, let's split it into three layers: discovery and ordering, execution, and state recording.
Discovery and ordering rules
The tool recursively scans a MIGRATION_DIR (provided via env configuration) and ignores non-.sql files.
```
├── app
├── migrations
│   ├── 01_user_users
│   │   ├── 01_init.sql
│   │   ├── 02_add_email.sql
```
It derives three key fields from paths and filenames:
- Service priority: taken from a folder name of the form `<service_index>_<service_name>` (e.g., `01_user_users`).
- Service name: derived from the folder name (everything after `<service_index>_`).
- Migration version (per file): taken from the SQL filename prefix `<sql_index>_<title>.sql` (e.g., `02_add_email.sql`).
Within a run, ordering is deterministic:
- Services are processed by ascending service priority (numeric).
- Within the same priority, services are processed in alphabetical order by name.
- Within each service, migration versions are processed in ascending numeric order.
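The ordering rules above can be sketched in Go. This is a simplified model; the `service` struct and `orderServices` function are hypothetical names for illustration, not the tool's actual types:

```go
package main

import (
	"fmt"
	"sort"
)

// service is a simplified model of one discovered service directory.
type service struct {
	priority int
	name     string
	versions []int // numeric prefixes of the service's SQL files
}

// orderServices sorts services by ascending priority, then alphabetically
// by name within the same priority, and each service's versions ascending,
// mirroring the documented deterministic ordering.
func orderServices(svcs []service) []service {
	sort.Slice(svcs, func(i, j int) bool {
		if svcs[i].priority != svcs[j].priority {
			return svcs[i].priority < svcs[j].priority
		}
		return svcs[i].name < svcs[j].name
	})
	for i := range svcs {
		sort.Ints(svcs[i].versions)
	}
	return svcs
}

func main() {
	ordered := orderServices([]service{
		{2, "email_emails", []int{2, 1}},
		{1, "user_users", []int{2, 1}},
		{1, "audit", []int{1}},
	})
	for _, s := range ordered {
		fmt.Println(s.priority, s.name, s.versions)
	}
}
```

Running this prints `audit` before `user_users` (same priority, alphabetical) and `email_emails` last (higher priority number), with versions ascending within each service.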
In-file configuration: allow_error and required_env
```sql
-- required_env: !master
-- allow_error: true
insert into user_users(name) values('test_user1')
```
migration-service can read configuration from leading SQL comment lines: specifically, consecutive initial lines beginning with `--`. It stops parsing configuration at the first line that does not begin with `--`.
Supported keys (as implemented in code and documented in the repo) include:
- `allow_error: true|false` – if true, SQL errors from that migration do not fail the run.
- `required_env: <regex>` – only execute the migration if `ENV_NAME` matches (or, with negation, does not match) the regex.
There is also a negation convention: if required_env begins with !, the match is inverted (execute when it does not match).
Practical caveat: the parser strips spaces and hyphens from the comment string before it splits into key/value pairs. That can unintentionally change regexes that rely on -.
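The parsing behavior described above, including the hyphen-stripping caveat, can be sketched as follows. `parseHeaders` is a hypothetical re-implementation for illustration, not the repository's actual function:

```go
package main

import (
	"fmt"
	"strings"
)

// parseHeaders reads consecutive leading lines that start with "--",
// strips spaces and hyphens, and splits each into key:value pairs,
// mirroring the documented comment-header parser.
func parseHeaders(sql string) map[string]string {
	cfg := map[string]string{}
	for _, line := range strings.Split(sql, "\n") {
		if !strings.HasPrefix(line, "--") {
			break // config parsing stops at the first non-comment line
		}
		// Stripping "-" here is what mangles regexes that contain hyphens.
		clean := strings.NewReplacer(" ", "", "-", "").Replace(line)
		if k, v, ok := strings.Cut(clean, ":"); ok {
			cfg[k] = v
		}
	}
	return cfg
}

func main() {
	sql := "-- required_env: !master\n-- allow_error: true\ninsert into user_users(name) values('test_user1')"
	fmt.Println(parseHeaders(sql))
	// A value like "dev-east" loses its hyphen and becomes "deveast":
	fmt.Println(parseHeaders("-- required_env: dev-east\nselect 1"))
}
```

Note how a `required_env` regex of `dev-east` silently becomes `deveast`, which is exactly the caveat described above.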
Execution and transaction semantics
Execution is performed by calling a Postgres repository method that wraps SQL execution in a transaction via pgx.BeginFunc: each Exec call executes tx.Exec(...) and commits/rolls back depending on error.
Since migration-service drives each migration file by calling repository Exec with the file’s SQL content, each migration file is effectively atomic (commit if success; rollback if error), unless you explicitly set allow_error and choose to ignore failures.
An important consequence: because the tool always executes inside a transaction block, SQL operations that are forbidden “inside a transaction block” (for example CREATE INDEX CONCURRENTLY) will fail if you try to run them through migration-service.
State recording: versions and hash logs
migration-service persists two core tables (created by --init or automatically when first needed):
- `migration_services`: one row per logical service name (`name`), storing the latest applied integer `version`.
- `migration_service_logs`: one row per applied migration file, storing `(migration_services_name, priority, version, file_name)` plus the executed SQL and its hash, uniquely constrained on that tuple.
Each migration file is hashed (MD5 of the SQL string) when loaded, and that hash is persisted into migration_service_logs.
The --check command compares the currently computed file hash with the stored migration_service_logs.hash and reports differences; --check-apply attempts to apply the “different” migrations (internally by force-applying those file paths).
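The hash comparison at the heart of `--check` can be sketched like this; `hashSQL` and `drifted` are illustrative names of ours, assuming only the documented scheme (MD5 of the SQL string, stored at apply time):

```go
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
)

// hashSQL computes the MD5 digest of a migration file's SQL string,
// rendered as hex, mirroring the documented hashing scheme.
func hashSQL(sql string) string {
	sum := md5.Sum([]byte(sql))
	return hex.EncodeToString(sum[:])
}

// drifted reports whether a file's current content no longer matches
// the hash stored in migration_service_logs - the core of --check.
func drifted(currentSQL, storedHash string) bool {
	return hashSQL(currentSQL) != storedHash
}

func main() {
	original := "create table tasks(id bigserial primary key);"
	stored := hashSQL(original) // persisted at apply time

	fmt.Println(drifted(original, stored))                    // no drift
	fmt.Println(drifted(original+" -- edited later", stored)) // drift detected
}
```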
```sh
> ./migration_app --init
```
The optional folder becomes part of the service name
A useful feature is how migration-service interprets one level of subfolders under a service directory.
If the immediate parent folder of a .sql file does not contain an underscore, migration-service treats it as a subfolder label and appends it to the service name as `_<subfolder>`. Concretely, `migrations/01_user_users/seeds/01_seed.sql` yields the logical service name `user_users_seeds` (where `seeds` is appended).
This matches the repository’s example structure (functions, seeds, views), but it also means:
- Your "seeds" or "views" are tracked independently in `migration_services` (separate `name` rows).
- You can intentionally version and apply seed migrations independently of schema migrations.
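The naming rules, including the optional-subfolder suffix, can be sketched as a small path parser. `parsePath` is a hypothetical mirror of the documented behavior, not the repository's actual code:

```go
package main

import (
	"fmt"
	"path"
	"strconv"
	"strings"
)

// parsePath derives the service priority, the logical service name
// (including the optional-subfolder suffix), and the per-file version
// from a migration path, following the documented naming rules.
func parsePath(p string) (priority int, service string, version int) {
	parts := strings.Split(path.Clean(p), "/")
	file := parts[len(parts)-1]
	dir := parts[len(parts)-2]

	suffix := ""
	if !strings.Contains(dir, "_") {
		// A parent folder without an underscore is a subfolder label,
		// appended to the service name as _<subfolder>.
		suffix = "_" + dir
		dir = parts[len(parts)-3]
	}
	idx, name, _ := strings.Cut(dir, "_")
	priority, _ = strconv.Atoi(idx)
	service = name + suffix

	ver, _, _ := strings.Cut(file, "_")
	version, _ = strconv.Atoi(ver)
	return priority, service, version
}

func main() {
	fmt.Println(parsePath("migrations/01_user_users/seeds/01_seed.sql"))
	fmt.Println(parsePath("migrations/02_email_emails/02_add_id.sql"))
}
```

The first call yields priority 1, service `user_users_seeds`, version 1; the second yields priority 2, service `email_emails`, version 2.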
```
├── migration_app
├── migrations
│   ├── 01_user_users
│   │   ├── 01_init.sql
│   │   ├── 02_add_email.sql
│   │   ├── functions
│   │   │   └── 01_email.sql
│   │   ├── seeds
│   │   │   └── 01_seed.sql
│   │   └── views
│   │       └── 01_users_with_email.sql
│   └── 02_email_emails
│       ├── 01_create.sql
│       └── 02_add_id.sql
└── readme.md
```
Usage example
Installing and configuring migration-service
Installation options
- Download the latest release from GitHub.
- Run the Docker image from docker.io/webdeveloppro/migration-service.
Environment variables
The repository’s .example.env documents the key variables migration-service expects, including the migration directory and DB connection parameters.
| Variable | What it controls | Description |
|---|---|---|
| `MIGRATION_DIR` | Root path scanned for migrations | Must point at a directory containing the `migrations/<priority>_<service>/<version>_<title>.sql` structure. |
| `ENV_NAME` | Environment label used by `required_env` | Enables branch/environment-specific migrations (e.g., seeds only on dev). |
| `DB_HOST`, `DB_PORT`, `DB_USER`, `DB_PASSWORD`, `DB_NAME` | Database connection parameters | Map these from your database direct/session connection info. |
| `DB_SSL_MODE` | Whether SSL is enabled (repo-specific meaning) | Using SSL is recommended. |
| `HOST`, `PORT` | HTTP server binding (when not using apply-only) | The Dockerfile exposes a port; for CI, you typically won't run the HTTP server. |
The variable set above is drawn from .example.env and the app config structs requiring ENV_NAME.
Application flags and safe operational meaning
The server entrypoint exposes a set of flags that define how the tool behaves:
| Flag | Primary action | Where it’s safest to use |
|---|---|---|
| `--init` | Create migration tables (`migration_services`, `migration_service_logs`) | First run on a new DB; also useful for CI bootstrap. |
| `--apply-only` | Apply migrations, then exit (do not start the web server) | CI/CD and one-shot migration jobs. |
| `--final-sql <service>` | Print the SQL that would be applied for a service, without applying it | Previewing what will change; can support review pipelines. |
| `--check [paths...]` | Compare file hashes to DB logs | Drift detection (after migrations have been applied and logged). |
| `--check-apply [paths...]` | Check hashes, then force-apply mismatches | Emergency drift repair (risky). |
| `--fake [paths...]` | Mark versions as applied without executing SQL | Exceptional cases (e.g., a manual hotfix already applied). |
| `--force [paths...]` | Apply migrations without the version check | Exceptional repair tool; can re-run old migrations. |
Example project structure
```
my-app/
  migrations/
    01_app/
      01_create_tasks.sql
      02_add_task_index.sql
    02_seed/
      dev/
        01_seed_dev_tasks.sql
  .env.migration
```
Why this layout works:
- Service folders have the `<priority>_<service_name>` shape (`01_app`, `02_seed`).
- Migration files have the `<version>_<title>.sql` shape (`01_create_tasks.sql`).
- The `02_seed/dev/...` path intentionally places `dev` as an "optional folder" without an underscore, which is appended to the service name (turning `seed` into `seed_dev`), letting you track and run dev seeds separately.
Step two: create .env.migration for migration-service
Below is a simple example of mapping Supabase direct connection info into migration-service's environment variables. (Exact SSL flags depend on your DB library wiring; Supabase recommends SSL wherever possible.)
```
# .env.migration
ENV_NAME=production
MIGRATION_DIR=./migrations
# Supabase direct connection (example mapping)
DB_HOST=db.<project-ref>.supabase.co
DB_PORT=5432
DB_USER=postgres
DB_PASSWORD=<your-db-password>
DB_NAME=postgres
DB_SSL_MODE=true
```
The existence and naming of these variables is shown in `.example.env`, and `ENV_NAME` is required by the app's config struct. Supabase's guidance about percent-encoding passwords in connection URLs is documented in their Postgres roles guide.
Step three: write a migration file
migrations/01_app/01_create_tasks.sql
```sql
--- required_env: production|staging
--- allow_error: false
CREATE TABLE IF NOT EXISTS public.tasks (
  id bigserial PRIMARY KEY,
  title text NOT NULL,
  is_done boolean NOT NULL DEFAULT false,
  created_at timestamptz NOT NULL DEFAULT now()
);
```
Why these headers work:
- migration-service parses leading comment lines starting with `--` and supports `required_env` and `allow_error`.
- `required_env` is matched as a regex against the runtime `ENV_NAME`; migrations that do not match are skipped.
Note: even though the examples use `---`, the parser keys off the first two characters `--` and strips hyphens/spaces before parsing key/value pairs.
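The skip/run decision driven by `required_env` can be sketched as follows; `shouldRun` is a hypothetical helper mirroring the documented semantics (regex match against `ENV_NAME`, with a leading `!` inverting the result):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// shouldRun reports whether a migration with the given required_env
// value should execute for envName: the value is a regex matched
// against ENV_NAME, and a leading "!" inverts the match.
func shouldRun(requiredEnv, envName string) (bool, error) {
	negate := strings.HasPrefix(requiredEnv, "!")
	pattern := strings.TrimPrefix(requiredEnv, "!")
	matched, err := regexp.MatchString(pattern, envName)
	if err != nil {
		return false, err
	}
	return matched != negate, nil
}

func main() {
	for _, tc := range []struct{ pattern, env string }{
		{"production|staging", "staging"}, // runs
		{"production|staging", "dev"},     // skipped
		{"!master", "dev"},                // runs (negated match)
		{"!master", "master"},             // skipped
	} {
		run, _ := shouldRun(tc.pattern, tc.env)
		fmt.Printf("%-20s ENV_NAME=%-8s run=%v\n", tc.pattern, tc.env, run)
	}
}
```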
Step four: add a second migration (and mind transaction-only constraints)
migrations/01_app/02_add_task_index.sql
```sql
--- required_env: production|staging
--- allow_error: false
CREATE INDEX IF NOT EXISTS tasks_is_done_idx ON public.tasks (is_done);
```
This uses a "normal" CREATE INDEX. Avoid `CREATE INDEX CONCURRENTLY` here, because migration-service executes migration SQL inside a transaction (via `pgx.BeginFunc`), and PostgreSQL documents that `CREATE INDEX CONCURRENTLY` cannot be executed inside a transaction block.
Step five: initialize and apply migrations
From your project root:
```sh
set -a
source .env.migration
set +a

# Download the release binary (-L follows GitHub's redirect;
# tar extracts the binary to disk rather than writing to stdout)
curl -L https://github.com/webdevelop-pro/migration-service/releases/download/v0.4.7/app-v0.4.7-`uname`-`uname -m`.tar.gz | tar xz
chmod +x app

# Create required tables
./app --init

# Apply migrations and exit (CI-friendly)
./app --apply-only
```
The README demonstrates `--init` and describes the required tracking tables (`migration_services` and `migration_service_logs`). The `--apply-only` flag is implemented in the entrypoint as an "apply and shut down migration service" mode.
Step six: verify migration state
Use psql or the SQL editor to inspect:
```sql
SELECT name, version
FROM migration_services
ORDER BY name;

SELECT migration_services_name, priority, version, file_name, hash, created_at
FROM migration_service_logs
ORDER BY created_at DESC
LIMIT 20;
```
These table names, key columns, and unique constraints are defined by the repository's CreateMigrationTable function.
GitHub Actions CI/CD workflow for safe migrations
This section shows a CI/CD design that is aligned with migration-service’s capabilities (apply-only mode, per-file transactional execution, hash logs) and GitHub’s secret-handling security model.
Key safety properties to build around
- Per-file atomicity: migration execution is wrapped in a transaction (`pgx.BeginFunc`), so a failed migration file rolls back automatically, and the run fails unless `allow_error` is enabled.
- Deterministic ordering: services and migrations are sorted by priority and numeric prefixes, with service names sorted alphabetically within the same priority.
- Auditability: each applied migration is logged with its SQL string and hash in `migration_service_logs` and compared by `--check`.
- No built-in down migrations: the tool is built around version increments and logging; operational rollback is typically a "forward fix" via a new migration version (and/or restoring from backups).
Recommended secrets strategy on GitHub Actions
Use GitHub encrypted secrets, preferably scoped to environments (e.g., staging, production), and require reviewers for the production environment so that a workflow job cannot access production secrets until approval is granted.
Example workflow: staged migrations with approval gates
```yaml
name: db-migrations
on:
  push:
    branches: [ "main", "stage", "dev" ]
jobs:
  migrate-staging:
    runs-on: ubuntu-latest
    environment: staging
    steps:
      - uses: actions/checkout@v4
      - name: Download migration-service binary
        run: |
          curl -L https://github.com/webdevelop-pro/migration-service/releases/download/v0.4.7/app-v0.4.7-`uname`-`uname -m`.tar.gz | tar xz
          chmod +x app
      - name: Apply migrations to staging
        env:
          ENV_NAME: staging
          MIGRATION_DIR: ./migrations
          DB_HOST: ${{ secrets.DB_HOST }}
          DB_PORT: ${{ secrets.DB_PORT }}
          DB_USER: ${{ secrets.DB_USER }}
          DB_PASSWORD: ${{ secrets.DB_PASSWORD }}
          DB_NAME: ${{ secrets.DB_NAME }}
          DB_SSL_MODE: "true"
        run: |
          ./app --init
          ./app --apply-only
      - name: Verify hash logs
        env:
          ENV_NAME: staging
          MIGRATION_DIR: ./migrations
          DB_HOST: ${{ secrets.DB_HOST }}
          DB_PORT: ${{ secrets.DB_PORT }}
          DB_USER: ${{ secrets.DB_USER }}
          DB_PASSWORD: ${{ secrets.DB_PASSWORD }}
          DB_NAME: ${{ secrets.DB_NAME }}
          DB_SSL_MODE: "true"
        run: |
          ./app --check
```
This workflow pattern aligns with:
- GitHub's model where secrets are only readable when explicitly included and can be protected via environment rules/approvals.
- migration-service's `--init`, `--apply-only`, and `--check` behaviors.
Rollback and “safe failure” strategies
Because each migration file is executed in a transaction and rolls back on error, the immediate failure mode is "database unchanged by that file" (unless you used `allow_error`).
However, production rollback typically requires design-time planning:
- Prefer idempotent migrations using constructs already used by migration-service's own bootstrap SQL, such as `CREATE TABLE IF NOT EXISTS` and `DROP CONSTRAINT IF EXISTS`.
- Avoid operations that are incompatible with transaction blocks (e.g., `CREATE INDEX CONCURRENTLY`) in migration-service runs; if you need them, isolate them into a separate operational step that does not wrap SQL in a transaction block, since PostgreSQL disallows these operations inside transaction blocks.
- Treat `--force` and `--check-apply` as emergency tools: `--force` intentionally disables version checks and can re-run already-applied SQL, and `--check-apply` will attempt to apply drifted files. These can be destructive if the drift represents an irreversible migration in production.
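As a concrete illustration of the forward-fix approach, a mistake shipped in an earlier migration is corrected by adding a new, higher-numbered file rather than editing the old one. The file, table, and column names below are hypothetical:

```sql
-- migrations/01_app/03_drop_legacy_flag.sql (hypothetical forward fix)
--- allow_error: false
ALTER TABLE public.tasks DROP COLUMN IF EXISTS legacy_flag;
```

Because this ships as a new version, the hash log of the earlier file stays intact and `--check` reports no drift.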
CI/CD migration flow diagram
```mermaid
flowchart TD
    A[Push to main] --> B[GitHub Actions: migrate-staging job]
    B --> C[Checkout code]
    C --> D[Load env + secrets for staging]
    D --> E[Run migration-service --init]
    E --> F[Run migration-service --apply-only]
    F --> G[Each SQL file executed in its own transaction]
    G --> H[Update migration_services version]
    H --> I[Upsert migration_service_logs with sql + hash]
    I --> J[Run migration-service --check]
    J --> K{Staging OK?}
    K -- No --> L[Fail pipeline]
    K -- Yes --> M[GitHub environment gate: production approval]
    M --> N[GitHub Actions: migrate-production job]
    N --> O[Repeat init + apply-only with production ENV_NAME]
    O --> P[Done]
```
The transactional execution and state writes shown above reflect the repository's Exec transaction wrapper, version updates, and log upserts.
Best practices, troubleshooting, and security
Best practices for authoring migrations
Design migrations so that re-runs and partial environment promotion behave predictably:
- Keep migrations small and ordered; migration-service’s ordering is numeric and deterministic, so treat numeric prefixes as an API.
- Use `required_env` for seeds and dev-only objects, but remember:
  - It is a regex match against `ENV_NAME`.
  - Prefixing with `!` inverts the match.
- If you truly need `allow_error: true`, prefer putting it on its own comment line (and test carefully), because the parser's break-on-first-match behavior can stop parsing additional key/value pairs on the same line when certain keys are present.
Troubleshooting: common failure patterns
“relation migration_services does not exist”
The repository implementation detects missing migration tables during version reads and attempts to create them automatically, and --init explicitly creates them.
Migrations not running when expected
Check ENV_NAME and any required_env headers, especially if you used a negation (!). Migration-service will skip execution when the regex does not match the intended environment name.
Supabase pooled connection errors (especially on port 6543)
If you are using the transaction pooler, remember Supabase’s warning that prepared statements are unsupported in transaction mode, and cross-check this with pgx’s default modes. Prefer direct/session pooler for migrations.
Security considerations
- Use GitHub Actions environment secrets (not plaintext) and gate production migrations via environment approvals; GitHub documents both encryption-at-rest/in-transit semantics for secrets and approval gates for environment secrets.
- Apply the minimum-permissions principle to credentials: GitHub recommends limiting credential permissions, and Supabase highlights that privileged keys/roles (like `service_role`) require careful handling.
- Prefer DB credentials scoped to database administration workflows (e.g., a dedicated migration role) over reusing application runtime identities, while ensuring the role still has the DDL permissions needed for schema changes.


