PostgreSQL uses a process-per-connection model: every client connection causes the postmaster to fork a new backend process. At small connection counts this is fine. At scale it becomes a bottleneck — each backend consumes 5–10 MB of memory, connection setup involves authentication and process forking, and the max_connections limit caps how many clients can be active simultaneously. Connection pooling sits between your application and PostgreSQL, maintaining a fixed pool of real database connections that many clients share.
PgBouncer is the most widely used PostgreSQL connection pooler. It is a lightweight, single-process proxy written in C that adds roughly 2 kB overhead per client connection — compared to 5–10 MB for a real PostgreSQL backend. The entire PgBouncer process typically uses 2–5 MB of memory regardless of how many clients are connected.
Why PostgreSQL Needs a Connection Pooler
PostgreSQL's max_connections default is 100. That limit includes superuser connections, background workers, and replication slots — not just application traffic. Superusers get a reserved slice (superuser_reserved_connections, default: 3), so regular users can use at most max_connections - 3 slots.
Increasing max_connections has a direct shared memory cost: PostgreSQL pre-allocates shared memory structures for all possible connections at startup. Beyond the memory cost, each active backend runs its own process with its own copy of query caches, catalog caches, and session state. At 500+ direct connections, context-switching and memory pressure degrade query throughput even on otherwise idle connections.
A stateless web application (one that opens a connection per request) makes this worse: every HTTP request pays the cost of a TCP handshake, PostgreSQL authentication negotiation, and a process fork. Under load this generates measurable latency spikes.
Connection pooling solves two distinct problems:
- Connection reuse: Keeps connections open and passes them between clients, eliminating per-request setup overhead.
- Connection multiplexing (in transaction mode): Lets many more clients than PostgreSQL backends be active simultaneously, provided most clients are idle between transactions.
How PgBouncer Works
PgBouncer listens on a port (default: 6432) and speaks the PostgreSQL wire protocol. Applications connect to PgBouncer as if it were a PostgreSQL server. PgBouncer maintains a pool of real server connections to the backend, and assigns those connections to clients according to the configured pooling mode.
Pools are scoped to (database, user) pairs. A pool for app_user@mydb is separate from reporting_user@mydb. Each pool has its own size limits.
PgBouncer supports online configuration reload (RELOAD on the admin console, or SIGHUP) without dropping active connections. It also supports online restart using Unix socket handoff.
Pooling Modes
The pooling mode is the most consequential configuration decision. It determines when a server connection is returned to the pool, and consequently which PostgreSQL features remain available.
Session Pooling
A server connection is assigned to a client for the entire duration of the client session. The connection is returned to the pool only when the client disconnects.
This mode preserves full PostgreSQL feature compatibility: prepared statements, temporary tables, advisory locks, LISTEN/NOTIFY, SET commands, WITH HOLD cursors — everything works. The pooling benefit is limited to eliminating connection setup and teardown overhead, not reducing the number of concurrent server connections. If you have 200 long-lived application connections, you still need 200 real backends.
Use session mode when: your application maintains persistent connections; you use session-level features; or you cannot audit the application for transaction mode compatibility.
Transaction Pooling
A server connection is assigned to a client only for the duration of a single transaction. When the transaction commits or rolls back, the connection immediately returns to the pool. The client retains its PgBouncer connection but holds no real backend process between transactions.
This is the most effective mode for typical web workloads. An application with 1,000 PgBouncer connections may need only 20–50 real PostgreSQL backends if most clients are idle between transactions. The reduction in server connection count is the primary reason to run PgBouncer.
The tradeoff is that any PostgreSQL feature that outlives a transaction boundary breaks in this mode. See Limitations below for the full list.
Statement Pooling
A server connection is returned to the pool after each individual SQL statement. This forces autocommit semantics — clients cannot use BEGIN/COMMIT for multi-statement transactions.
Statement mode is rarely used in practice. It breaks transaction semantics entirely and is only appropriate for single-statement, read-only queries against a reporting system. It carries all the limitations of transaction mode and more.
Feature Compatibility by Mode
| Feature | Session | Transaction | Statement |
|---|---|---|---|
| LISTEN / NOTIFY | Works | Broken | Broken |
| Session advisory locks | Works | Broken | Broken |
| WITH HOLD cursors | Works | Broken | Broken |
| SQL-level PREPARE | Works | Broken | Broken |
| Protocol-level prepared statements | Works | Works (1.21.0+) | Limited |
| Temporary tables (persistent across txns) | Works | Broken | Broken |
| SET session parameters | Works | Partially | Broken |
| SAVEPOINT | Works | Works | Broken |
Limitations in Transaction Mode
These are the most common sources of breakage when migrating to transaction mode.
LISTEN / NOTIFY: LISTEN is a session-scoped command. In transaction mode, notifications may arrive on a server connection that has been returned to the pool and reassigned to another client. LISTEN/NOTIFY requires session mode.
Session-level advisory locks: pg_advisory_lock() holds a lock for the session. In transaction mode, the lock is held on whichever server connection acquired it, but subsequent transactions may run on a different connection. Use pg_advisory_xact_lock() (transaction-scoped) instead.
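A minimal SQL sketch of the difference (the lock key 42 is arbitrary):

```sql
-- Session-scoped: unsafe under transaction pooling. The lock is held
-- by whichever server connection ran this, which the client's next
-- transaction may not get.
SELECT pg_advisory_lock(42);

-- Transaction-scoped: safe in transaction mode. The lock is acquired
-- and released within a single transaction, so it always lives and
-- dies on the same server connection.
BEGIN;
SELECT pg_advisory_xact_lock(42);
-- ... critical section ...
COMMIT;  -- lock released automatically
```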
SET commands: SET search_path = myschema affects the current session. In transaction mode, the session may switch server connections between transactions, losing the setting. Workarounds: use track_extra_parameters to have PgBouncer track and restore specific parameters; set defaults at the role level with ALTER ROLE user SET search_path = myschema; or use SET LOCAL (which is already transaction-scoped).
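The last two workarounds look like this in SQL (app_user and myschema stand in for your own role and schema):

```sql
-- Role-level default: applied by PostgreSQL at connection start, so
-- it holds on every server connection PgBouncer hands out.
ALTER ROLE app_user SET search_path = myschema;

-- Transaction-scoped: SET LOCAL reverts at COMMIT/ROLLBACK, so the
-- setting never leaks onto a pooled connection.
BEGIN;
SET LOCAL search_path = myschema;
SELECT current_setting('search_path');
COMMIT;
```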
SQL-level prepared statements: PREPARE foo AS SELECT ... creates a named statement for the session. Since the server connection changes between transactions, the prepared statement is lost. This is distinct from protocol-level prepared statements (see below).
Protocol-level prepared statements (resolved in 1.21.0): Before PgBouncer 1.21.0, the extended query protocol's Parse/Bind/Execute flow — used by virtually every ORM and database driver — was broken in transaction mode. Since 1.21.0, PgBouncer tracks protocol-level prepared statements per client and ensures they are available on whatever server connection is used for the next transaction. This is controlled by max_prepared_statements (default: 200 as of 1.24.0). Most ORMs work correctly in transaction mode with a recent PgBouncer version.
Migrations: Many migration tools use session advisory locks internally (Rails uses pg_advisory_lock). Run migrations with a direct connection to PostgreSQL, bypassing PgBouncer.
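For example, with a Rails application the migration command can target PostgreSQL's port directly (host name and credentials here are placeholders):

```shell
# Bypass PgBouncer (6432) and connect straight to PostgreSQL (5432)
# so the session advisory lock taken by the migration tool works.
DATABASE_URL="postgres://app_user:secret@db.internal:5432/myapp" \
  bin/rails db:migrate
```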
Installation
On Debian/Ubuntu, the PGDG repository provides the latest version:
sudo apt-get install -y postgresql-common
sudo /usr/share/postgresql-common/pgdg/apt.postgresql.org.sh -y
sudo apt-get install -y pgbouncer
On RHEL/CentOS/Fedora, after adding the PGDG repo:
sudo dnf install pgbouncer
Start and enable via systemd:
sudo systemctl enable --now pgbouncer
Configuration
PgBouncer's configuration file is typically /etc/pgbouncer/pgbouncer.ini, with two main sections: [databases] and [pgbouncer].
[databases] Section
Maps the database name clients use when connecting to PgBouncer to a real backend:
[databases]
; Basic mapping
myapp = host=127.0.0.1 port=5432 dbname=myapp
; Override pool size for a specific database
analytics = host=127.0.0.1 port=5432 dbname=analytics pool_size=5
; Wildcard: any database name routes to the same backend
; (client-specified dbname is forwarded as-is)
* = host=127.0.0.1 port=5432
Key [pgbouncer] Parameters
Networking:
| Parameter | Default | Description |
|---|---|---|
| listen_addr | 127.0.0.1 | IP to listen on; use * for all interfaces |
| listen_port | 6432 | Port PgBouncer listens on |
| unix_socket_dir | /tmp | Directory for Unix domain socket |
Pool sizing:
| Parameter | Default | Description |
|---|---|---|
| pool_mode | session | Pooling mode: session, transaction, or statement |
| max_client_conn | 100 | Maximum total client connections PgBouncer accepts |
| default_pool_size | 20 | Server connections per (database, user) pool |
| min_pool_size | 0 | Minimum server connections to keep open when idle; 0 = disabled |
| reserve_pool_size | 0 | Extra server connections allowed when pool is exhausted; 0 = disabled |
| reserve_pool_timeout | 5.0 | Seconds a client waits before the reserve pool activates |
| max_db_connections | 0 | Total server connections per database across all pools; 0 = unlimited |
| max_user_connections | 0 | Total server connections per user across all databases; 0 = unlimited |
Timeouts:
| Parameter | Default | Description |
|---|---|---|
| server_idle_timeout | 600.0 | Seconds before an idle server connection is closed |
| server_lifetime | 3600.0 | Maximum age of any server connection (prevents stale connections) |
| server_connect_timeout | 15.0 | Seconds to wait for a new server connection to authenticate |
| client_idle_timeout | 0.0 | Seconds before an idle client is disconnected; 0 = disabled |
| query_timeout | 0.0 | Max seconds for a single query; 0 = disabled |
| query_wait_timeout | 120.0 | Max seconds a client can wait for a server connection before error |
| idle_transaction_timeout | 0.0 | Seconds before aborting a client stuck in an idle transaction; 0 = disabled |
Prepared statements (1.21.0+):
| Parameter | Default | Description |
|---|---|---|
| max_prepared_statements | 200 | Protocol-level prepared statements tracked per client connection; 0 = disabled |
Session parameter tracking (1.20.0+):
| Parameter | Default | Description |
|---|---|---|
| track_extra_parameters | IntervalStyle | Additional session parameters PgBouncer restores across server connections in transaction mode |
Server reset:
| Parameter | Default | Description |
|---|---|---|
| server_reset_query | DISCARD ALL | SQL run before returning a server connection to the pool (session mode) |
| server_reset_query_always | 0 | Run server_reset_query in all modes, not just session mode |
Example Configuration
A typical transaction-mode setup for a web application:
[databases]
myapp = host=127.0.0.1 port=5432 dbname=myapp
[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 25
min_pool_size = 5
reserve_pool_size = 5
reserve_pool_timeout = 3
server_idle_timeout = 600
server_lifetime = 3600
client_idle_timeout = 0
max_prepared_statements = 200
logfile = /var/log/pgbouncer/pgbouncer.log
pidfile = /var/run/pgbouncer/pgbouncer.pid
admin_users = pgbouncer_admin
stats_users = pgbouncer_stats
With this configuration, up to 1,000 application connections share 25 real PostgreSQL backends per (database, user) pool, with 5 additional reserve connections available under load.
The OS file descriptor limit must exceed max_client_conn + (pool_size × databases × users). Set this via systemd's LimitNOFILE or ulimit -n.
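For a systemd-managed PgBouncer, a drop-in unit is the usual place to raise the limit (65536 here is an illustrative value; size it to your own connection math):

```ini
# /etc/systemd/system/pgbouncer.service.d/limits.conf
[Service]
LimitNOFILE=65536
```

Apply with `sudo systemctl daemon-reload` followed by a restart of the pgbouncer service.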
Authentication
PgBouncer authenticates clients using its own credential store, then connects to PostgreSQL using credentials configured in the [databases] section or per-user settings.
auth_type Options
- scram-sha-256: Recommended. Uses SCRAM-SHA-256, the modern PostgreSQL default.
- md5: MD5 hash; still widely supported but weaker.
- trust: No password required. Only for local development.
- hba: Delegates to a pg_hba.conf-style file specified by auth_hba_file, allowing per-client-IP authentication rules.
- ldap: LDAP authentication (added in 1.25.0).
- cert: TLS client certificate authentication.
Static userlist.txt
"appuser" "password123"
"reporting" "SCRAM-SHA-256$4096:base64salt=$storedkey:serverkey"
Generate SCRAM entries from PostgreSQL:
SELECT CONCAT('"', usename, '" "', passwd, '"')
FROM pg_shadow
WHERE usename IN ('appuser', 'reporting');
Set permissions: chmod 600 /etc/pgbouncer/userlist.txt.
Dynamic Authentication via auth_query
For environments with many users, query PostgreSQL at login time instead of maintaining a static file:
auth_user = pgbouncer_auth
auth_query = SELECT rolname, rolpassword FROM pg_authid WHERE rolname=$1 AND rolcanlogin
Because pg_authid is only accessible to superusers, the recommended approach is to wrap the lookup in a SECURITY DEFINER function that runs as a privileged owner, and grant only EXECUTE on that function to the auth user:
CREATE USER pgbouncer_auth WITH PASSWORD 'strongpassword';
CREATE OR REPLACE FUNCTION pgbouncer_get_auth(p_usename TEXT)
RETURNS TABLE(username TEXT, password TEXT)
LANGUAGE sql SECURITY DEFINER AS $$
SELECT usename, passwd FROM pg_shadow WHERE usename = p_usename;
$$;
GRANT EXECUTE ON FUNCTION pgbouncer_get_auth(TEXT) TO pgbouncer_auth;
Update auth_query to match:
auth_user = pgbouncer_auth
auth_query = SELECT username, password FROM pgbouncer_get_auth($1)
PgBouncer connects to PostgreSQL as pgbouncer_auth to look up credentials for each new client login. This keeps credentials in sync with PostgreSQL without manual userlist.txt updates.
Monitoring
Connect to the PgBouncer admin console as a user listed in admin_users:
psql -h 127.0.0.1 -p 6432 -U pgbouncer_admin pgbouncer
Users listed in stats_users can run read-only SHOW commands.
SHOW POOLS
The most operationally useful view. Shows the current state of each (database, user) pool:
database | user | cl_active | cl_waiting | sv_active | sv_idle | maxwait
---------+---------+-----------+------------+-----------+---------+--------
myapp | appuser | 12 | 0 | 12 | 8 | 0
Key columns:
- cl_active: Clients actively linked to a server connection executing queries
- cl_waiting: Clients queued waiting for a server connection
- sv_active: Server connections currently in use by a client
- sv_idle: Server connections available in the pool
- maxwait: Longest time (seconds) any client has been waiting for a connection
If cl_waiting > 0 or maxwait is consistently above zero, default_pool_size is too small for the current load.
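One way to keep an eye on this during a load test (assuming a pgbouncer_stats user listed in stats_users, with its password supplied via PGPASSWORD or .pgpass):

```shell
# Re-run SHOW POOLS every 5 seconds against the admin console
watch -n 5 "psql -h 127.0.0.1 -p 6432 -U pgbouncer_stats \
  -c 'SHOW POOLS;' pgbouncer"
```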
SHOW STATS
Aggregate metrics per database since process start, plus per-second averages updated every stats_period (default: 60s):
- total_xact_count: Total transactions pooled
- total_query_count: Total SQL statements executed
- total_wait_time: Microseconds clients spent waiting for a server connection
- avg_wait_time: Per-second average wait time
SHOW CLIENTS / SHOW SERVERS
SHOW CLIENTS lists individual client connections with state (active, idle, waiting), current wait time, and number of tracked prepared statements.
SHOW SERVERS lists individual backend connections with state (active, idle, tested), PostgreSQL backend PID, and age.
Admin Commands
RELOAD -- Reload pgbouncer.ini without restart
PAUSE [database] -- Stop dispatching queries (for maintenance)
RESUME [database] -- Resume after PAUSE
RECONNECT [database] -- Request reconnection on next server release
KILL database -- Drop all connections to a database
SUSPEND -- Flush and pause all connections
SHUTDOWN -- Graceful shutdown
KILL_CLIENT id -- Force-disconnect a specific client (1.24.0+)
Alternatives
pgpool-II
pgpool-II is a multi-process connection pooler with a much broader feature scope: read/write splitting, query result caching, automatic failover, online replica recovery, and its own watchdog for HA. It also supports session, transaction, and statement pooling.
The breadth comes with cost: pgpool-II is heavier and more complex than PgBouncer. Each child process handles one client connection, and configuration is substantially more involved. For applications that need only connection pooling, PgBouncer consistently outperforms pgpool-II in benchmarks and is easier to operate. pgpool-II makes sense when you need load balancing, read/write splitting, or query caching from a single component and are willing to accept the operational overhead.
Odyssey
Odyssey, developed by Yandex and used in their production environment, is a multi-threaded connection pooler written in C. Its thread-per-core design addresses PgBouncer's single-threaded bottleneck on high-core-count servers. It supports session and transaction pooling, provides more granular per-route configuration, and logs per-client unique connection IDs for better observability. It also passes PostgreSQL error messages back to clients verbatim, where PgBouncer may mask them. Relevant for high-concurrency environments hitting PgBouncer's CPU ceiling.
PgCat
PgCat, written in Rust by the PostgresML team, is a multi-threaded pooler that adds sharding, replica load balancing, and failover. It routes queries to shards based on a sharding key, supports round-robin distribution across read replicas, and exposes Prometheus metrics. Useful for cloud-native deployments that need sharding or replica routing from the pooler layer.
Supavisor
Developed by Supabase, Supavisor runs on the Erlang/OTP BEAM runtime and is designed for multi-tenant SaaS environments with thousands of separate tenant databases. Open source.
Application-Level Pooling
Most database driver libraries include built-in connection pools (HikariCP for Java, pg-pool for Node.js, SQLAlchemy's QueuePool for Python, ActiveRecord's pool for Rails). Application-level pools eliminate per-request connection setup overhead within a single process, but do not reduce the total number of server-side PostgreSQL connections — each application server process maintains its own pool independently.
With 10 application servers each holding 25 connections, PostgreSQL sees 250 backends even if most are idle. PgBouncer is process-agnostic: all application servers share the same backend pool, giving true global control over server connection count.
Both approaches can coexist: application-level pooling prevents connection surge on a per-process basis, while PgBouncer enforces a global connection ceiling.
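As a sketch of the combined setup, here is an SQLAlchemy engine whose per-process pool terminates at PgBouncer rather than PostgreSQL (host, credentials, and sizes are illustrative):

```python
from sqlalchemy import create_engine

# Per-process pool of 5 connections, all pointing at PgBouncer
# (port 6432). PgBouncer then multiplexes these onto its own,
# globally capped pool of real PostgreSQL backends.
engine = create_engine(
    "postgresql+psycopg2://app_user:secret@pgbouncer.internal:6432/myapp",
    pool_size=5,         # per-process ceiling
    max_overflow=0,      # no burst beyond the pool; let PgBouncer queue instead
    pool_pre_ping=True,  # detect connections closed by server_idle_timeout
)
```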
Sizing the Pool
The right default_pool_size depends on your workload, not a formula. General starting points:
- For CPU-bound queries, pool size close to the number of PostgreSQL CPU cores often works well. Beyond that, connections compete for CPU.
- For I/O-bound queries (waiting on disk reads), larger pools are reasonable since connections spend time waiting, not consuming CPU.
- Start with default_pool_size = 10–25 per pool and increase based on SHOW POOLS data. If cl_waiting is consistently nonzero, increase pool size or reduce application concurrency.
- Set max_client_conn to match or exceed the total possible application connection count across all processes. Clients rejected by PgBouncer see a connection error.
Monitor maxwait in SHOW POOLS as your primary signal. A maxwait consistently above a few hundred milliseconds indicates pool exhaustion under load.
Current Version
The current stable release is PgBouncer 1.25.1 (December 2025), which patched CVE-2025-12819 — an authentication-phase SQL injection vulnerability. Upgrading to 1.25.1 is recommended for all deployments.
Notable recent milestones:
- 1.25.0 (November 2025): LDAP authentication support, client-side direct TLS, notice messages when clients queue.
- 1.24.0 (January 2025): max_prepared_statements default raised to 200 (enabling prepared statement support by default), per-user and per-database client connection limits, KILL_CLIENT admin command.
- 1.21.0 (October 2023): Introduced max_prepared_statements, bringing protocol-level prepared statement support to transaction mode. This resolved the most significant long-standing compatibility barrier for ORM-heavy applications.