The official postgres image on Docker Hub is one of the most-pulled images in the registry. It covers every major Postgres version, handles first-run initialization automatically, and gets a local database running with a single command. For development, CI pipelines, and integration testing it is hard to beat. For production, the story is more complicated.
Getting a Container Running
The minimum viable invocation requires exactly one environment variable:
docker run -d \
  --name pg \
  -e POSTGRES_PASSWORD=mysecretpassword \
  -p 5432:5432 \
  postgres:17
POSTGRES_PASSWORD is the only required variable. It sets the password for the postgres superuser. Without it the container refuses to start.
Two optional variables are worth setting from the start. POSTGRES_USER changes the superuser name away from postgres, and POSTGRES_DB creates a named database on initialization rather than defaulting to a database named after the user. Neither has any effect once the data directory is already populated - the Docker-specific environment variables only apply when the container sees an empty PGDATA directory.
docker run -d \
  --name pg \
  -e POSTGRES_USER=appuser \
  -e POSTGRES_PASSWORD=mysecretpassword \
  -e POSTGRES_DB=appdb \
  -p 5432:5432 \
  postgres:17
Pin to a specific major version. As of late 2025, postgres:18 is the current latest release; postgres:17 is a well-supported stable choice. The :latest tag follows the current Postgres release and will silently pull a new major version when one ships. Postgres major version upgrades require a pg_upgrade or dump-and-restore because the on-disk data format is not forward-compatible. Discovering this mid-incident is painful.
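If you do need to move existing data across a major version boundary without pg_upgrade, a logical dump and restore is the most container-friendly path. A minimal sketch: pg-new is a placeholder for a container you have already started on the new major version with its own empty volume, and the superuser name will differ from postgres if you set POSTGRES_USER.

# Dump every database from the existing container
docker exec pg pg_dumpall -U postgres > dump.sql

# Load the dump into the fresh container running the new major version
docker exec -i pg-new psql -U postgres < dump.sql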
Persisting Data with Volumes
By default, the container stores data at /var/lib/postgresql/data (the PGDATA path). Without a volume, that data disappears the moment the container is removed. This is fine for throwaway test runs; it is a problem everywhere else.
Named volumes are the right default for most cases:
docker volume create pgdata
docker run -d \
  --name pg \
  -e POSTGRES_PASSWORD=mysecretpassword \
  -v pgdata:/var/lib/postgresql/data \
  -p 5432:5432 \
  postgres:17
Named volumes live in Docker's storage area on the host (/var/lib/docker/volumes/ on Linux) and survive container removal. Bind mounts (-v /some/host/path:/var/lib/postgresql/data) give you direct host filesystem visibility, which can help with manual inspection or backup scripts, but they expose you to host permission issues - the container runs as uid 999 by default, and mismatched permissions on the host path will cause the container to fail on startup.
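If you do choose a bind mount, pre-creating the host directory and handing it to uid 999 avoids the most common startup failure. A rough sketch; /srv/pgdata is an example path, and the uid applies to the Debian-based images:

sudo mkdir -p /srv/pgdata
sudo chown 999:999 /srv/pgdata

docker run -d \
  --name pg \
  -e POSTGRES_PASSWORD=mysecretpassword \
  -v /srv/pgdata:/var/lib/postgresql/data \
  -p 5432:5432 \
  postgres:17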
One change to be aware of: in PostgreSQL 18+, the image changed the default volume mount point from /var/lib/postgresql/data to /var/lib/postgresql (dropping the /data suffix), and PGDATA became version-specific - for postgres:18 the default PGDATA is /var/lib/postgresql/18/docker. This is a breaking change if you are upgrading from an older image: a volume mounted at /var/lib/postgresql/data will not be picked up correctly by a postgres:18 container without explicit configuration. If you are migrating to PG 18 or mixing image versions in scripts or Compose files, verify the PGDATA value for the exact image tag you are using and update your volume mount path accordingly.
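For reference, a postgres:18 run command with the new mount point looks like this; the volume name is arbitrary:

docker run -d \
  --name pg18 \
  -e POSTGRES_PASSWORD=mysecretpassword \
  -v pgdata18:/var/lib/postgresql \
  -p 5432:5432 \
  postgres:18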
Docker Compose for Development
Running Postgres in isolation rarely reflects real usage. In development you typically want a database alongside an application server. Docker Compose handles that cleanly, with one important detail: Compose depends_on by default only waits for the container to start, not for Postgres to be ready to accept connections. The database process takes a second or two to initialize after the container starts, which causes connection errors in application containers that start too quickly.
The fix is a healthcheck combined with condition: service_healthy:
services:
  db:
    image: postgres:17
    environment:
      POSTGRES_USER: appuser
      POSTGRES_PASSWORD: mysecretpassword
      POSTGRES_DB: appdb
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U appuser -d appdb"]
      interval: 5s
      timeout: 5s
      retries: 5
      start_period: 10s
  app:
    image: myapp:latest
    depends_on:
      db:
        condition: service_healthy
    environment:
      DATABASE_URL: postgres://appuser:mysecretpassword@db:5432/appdb

volumes:
  pgdata:
pg_isready is a lightweight binary included in the Postgres image that checks whether the server is accepting connections at the protocol level. It does not execute a query and never authenticates - it simply attempts a connection handshake and exits with a status code. The -U and -d flags set the user and database name sent in that connection attempt (host and port come from -h and -p, defaulting to the local socket), but pg_isready does not verify that either value is valid; even a nonexistent user will not cause it to fail. This means a passing healthcheck confirms that the server process is up and accepting connections, but not that a specific user or database is accessible. The start_period gives Postgres time to complete initdb on a fresh volume before failed checks begin counting against the retry limit.
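If you want the check to prove that queries actually run, one option is to swap pg_isready for a trivial psql query. A sketch using the credentials from the Compose file above; note that connections over the container's local socket typically use trust authentication, so this verifies that the user and database exist and that queries execute, not that the password is correct:

    healthcheck:
      test: ["CMD-SHELL", "psql -U appuser -d appdb -c 'SELECT 1' >/dev/null"]
      interval: 5s
      timeout: 5s
      retries: 5
      start_period: 10s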
Initialization scripts are another useful feature of the official image. Any .sql, .sql.gz (gzip-compressed SQL), or .sh file placed in /docker-entrypoint-initdb.d/ runs once, in alphabetical order, when the data directory is first created. This is a clean way to seed schema and test data without baking them into application startup code.
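To use them with the Compose file above, mount a host directory over that path; the ./initdb directory name is an example:

    volumes:
      - pgdata:/var/lib/postgresql/data
      - ./initdb:/docker-entrypoint-initdb.d:ro

A file such as ./initdb/01-schema.sql containing ordinary CREATE TABLE and INSERT statements then runs exactly once, the first time the data directory is initialized.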
Connecting and Inspecting
The fastest way to get a psql session against a running container is docker exec:
docker exec -it pg psql -U appuser -d appdb
This drops you directly into the Postgres interactive terminal. From there, \dt lists tables, \dn lists schemas, and \conninfo confirms what you are connected to. No need to expose the port externally for quick inspection.
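Non-interactive one-liners work the same way, which is convenient in scripts (container name and credentials as in the earlier examples):

docker exec pg psql -U appuser -d appdb -c 'SELECT version();'
docker exec pg psql -U appuser -d appdb -c '\dt'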
When granting access to application users, a common pattern is:
GRANT ALL PRIVILEGES ON DATABASE appdb TO appuser;
Be aware that this only grants database-level privileges (connect, create schemas, etc.), not access to tables. To allow the user to read and write tables in the public schema, you also need:
GRANT ALL ON ALL TABLES IN SCHEMA public TO appuser;
For tables created in the future, set a default privilege as well: ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL ON TABLES TO appuser;. Note that default privileges apply only to objects created by the role that runs the command (or by a role named with FOR ROLE), so run it as whichever role will own the new tables.
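Putting this together for a dedicated, less-privileged application role is usually a better pattern than connecting as the superuser. A sketch; the role name app_rw and its password are placeholders:

CREATE ROLE app_rw LOGIN PASSWORD 'change-me';
GRANT CONNECT ON DATABASE appdb TO app_rw;
GRANT USAGE ON SCHEMA public TO app_rw;
GRANT ALL ON ALL TABLES IN SCHEMA public TO app_rw;
GRANT USAGE, SELECT ON ALL SEQUENCES IN SCHEMA public TO app_rw;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL ON TABLES TO app_rw;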
External clients - pgAdmin, DBeaver, psql on your host machine - connect via the published port (5432 in the examples above) using localhost as the host. If you did not publish the port in your docker run command, you either need to exec in or recreate the container with -p 5432:5432, since ports cannot be added to a container after it has been created.
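From the host, a connection URI against the published port looks like this (credentials from the earlier examples):

psql "postgres://appuser:mysecretpassword@localhost:5432/appdb"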
Production Considerations
This is where the trade-offs become real. The Postgres Docker image works well for development and CI. Running it in production requires solving several problems that are not the image's concern.
Volume lifecycle management is the first issue. Docker volumes are not backed up automatically. If you use a named volume and the host node fails, or if someone runs docker volume prune, the data is gone. On bare metal or VMs this usually means scripting pg_dump on a cron schedule and shipping the output offsite, or using a sidecar container that calls pg_dump and uploads to object storage. Volume-level snapshots (e.g., LVM snapshots, EBS snapshots on AWS) are faster for large databases but require filesystem quiescing to be consistent.
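A minimal version of the dump-and-ship approach looks like the script below, intended to run from cron on the host; the backup path, bucket name, and the use of aws s3 cp are placeholders for whatever storage you actually use:

#!/bin/sh
# Dump, compress, and ship a logical backup offsite
STAMP=$(date +%Y%m%d-%H%M%S)
docker exec pg pg_dump -U appuser -d appdb | gzip > "/backups/appdb-$STAMP.sql.gz"
aws s3 cp "/backups/appdb-$STAMP.sql.gz" "s3://my-backup-bucket/postgres/"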
High availability is not provided by the image. Postgres HA requires replication - typically streaming replication with a tool like Patroni, Repmgr, or Stolon managing leader election and failover. Note that Stolon has had minimal maintenance since 2021 and is considered largely inactive; Patroni is the most actively maintained option for VM or bare-metal deployments. Orchestrating HA on top of Docker on bare hosts is non-trivial. On Kubernetes it is more tractable using operators like CloudNativePG (the recommended choice for Kubernetes) or Zalando's Postgres Operator, which manage StatefulSets, persistent volumes, and failover. Both abstract most of the HA complexity but add their own operational surface area.
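For a sense of what the operator route involves, a minimal CloudNativePG cluster definition looks roughly like this; the names and sizes are illustrative, and the exact schema should be checked against the operator's current documentation:

apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: appdb-cluster
spec:
  instances: 3          # one primary plus two streaming replicas
  storage:
    size: 20Gi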
For many teams running Postgres in production, a managed service (RDS, Cloud SQL, Supabase, Neon) is worth the cost compared to self-managing replication, backups, and upgrades inside containers. The Docker image is an excellent tool for the development-to-staging path. The further you move toward production, the more the surrounding infrastructure matters relative to the container itself.