PostgreSQL Connection String: Formats, Parameters, and Security

The PostgreSQL connection string is the primary mechanism through which clients negotiate a session with the server. Two formats exist: the URI format (postgresql://user:password@host:port/dbname) and the keyword=value format (host=localhost port=5432 dbname=mydb). Both are parsed by libpq, the C client library that underpins virtually every PostgreSQL driver. Understanding the full parameter surface - not just host and password - is what separates a functional connection from one that is observable, secure, and correctly tuned.

URI vs Keyword=Value Format

The URI format follows RFC 3986 with PostgreSQL-specific extensions. The scheme can be either postgresql or the shorter postgres:

postgresql://[user[:password]@][host][:port][/dbname][?param=value&...]

Any component except the scheme is optional. Multiple hosts can be listed in the hostspec for failover:

postgresql://user:secret@host1:5432,host2:5432/mydb?sslmode=require&connect_timeout=5

Special characters in user names or passwords must be percent-encoded. A password containing @ must become %40, otherwise libpq misparses the authority component. IPv6 addresses require bracket notation: postgresql://[2001:db8::1]:5432/mydb.
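Percent-encoding is easy to get right programmatically. A minimal sketch using Python's standard library (host and credentials are placeholders):

```python
# Build a URI-format connection string safely: percent-encode the user
# and password so characters like '@' or '/' cannot corrupt the
# authority component that libpq parses.
from urllib.parse import quote

user = "app"
password = "p@ss/word"  # would misparse if embedded raw

dsn = (
    f"postgresql://{quote(user, safe='')}:{quote(password, safe='')}"
    "@db.example.com:5432/mydb?sslmode=require"
)
# '@' becomes %40 and '/' becomes %2F inside the password component
```

Passing `safe=''` matters: by default `quote` leaves `/` unescaped, which would still break the authority section.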

The keyword=value format is often cleaner when constructing connection strings programmatically because it sidesteps percent-encoding entirely:

host=localhost port=5432 dbname=mydb user=app password='s3cr3t!' connect_timeout=5

Values containing whitespace or single quotes must be quoted. Inside a quoted value, a single quote is escaped as \' and a backslash as \\. Both formats support the same parameter set; the choice is mostly a matter of what your runtime environment expects. Environment-supplied DATABASE_URL variables on platforms like Heroku and Railway use the URI form, while libpq-native C code often works directly with keyword=value strings.
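Those quoting rules are simple enough to implement directly. The following is an illustrative helper (my own sketch, not part of libpq or psycopg2) that quotes a value only when it needs it:

```python
# Sketch of a keyword=value DSN builder following libpq's quoting rules:
# quote values containing whitespace, quotes, or backslashes (or empty
# values), escaping backslash as \\ and single quote as \'.
def kv_escape(value):
    value = str(value)
    if value and not any(c in value for c in " \t'\\"):
        return value  # no quoting needed
    return "'" + value.replace("\\", "\\\\").replace("'", "\\'") + "'"

def build_dsn(**params):
    return " ".join(f"{k}={kv_escape(v)}" for k, v in params.items())
```

Because the keyword=value format has no percent-encoding, this one escape step is the entire quoting story.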

Connection Parameters Worth Knowing

The obvious parameters - host, port, dbname, user, password - get used by everyone. Several others are routinely overlooked despite having real operational value.

connect_timeout caps how many seconds libpq will wait per host before failing. Without it, a misconfigured firewall that drops packets (instead of rejecting them) can hang an application for minutes; 5-10 seconds is a reasonable starting point in production.

application_name appears in pg_stat_activity and in server logs, which matters when multiple services share the same database cluster and you need to attribute slow queries or idle connections to a specific service.

options lets you pass server-side GUC variables at connection time: options='-c search_path=myschema -c statement_timeout=30000' (the quotes are required in keyword=value format because the value contains spaces) is common in multi-tenant applications where each connection should run against an isolated schema.
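Put together in keyword=value form, a production-leaning connection string using these parameters might look like this (all names and values are placeholders):

```
host=db.example.com port=5432 dbname=mydb user=app
connect_timeout=5 application_name=billing-service
options='-c search_path=tenant_a -c statement_timeout=30000'
```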

target_session_attrs is less known but genuinely useful in high-availability setups. Setting it to read-write causes libpq to reject standby replicas when iterating through a multi-host list, routing the connection only to a primary. prefer-standby does the inverse for read workloads. Combine this with a comma-separated host list and you get failover with role awareness without a proxy layer.
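As a sketch (placeholder hosts), the same two-host list can be routed by role; note that prefer-standby requires the libpq shipped with PostgreSQL 14 or later:

```
# only connect to a host that accepts writes (the primary)
postgresql://app@host1:5432,host2:5432/mydb?target_session_attrs=read-write

# favor a standby for read-only traffic, fall back to the primary
postgresql://app@host1:5432,host2:5432/mydb?target_session_attrs=prefer-standby
```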

SSL Modes and What They Actually Guarantee

sslmode has six values, and the differences between them are not subtle:

Mode         Encrypted  CA Verified  Hostname Verified
disable      No         No           No
allow        Maybe      No           No
prefer       Maybe      No           No
require      Yes        No*          No
verify-ca    Yes        Yes          No
verify-full  Yes        Yes          Yes

*require does verify the CA if ~/.postgresql/root.crt exists - it then behaves like verify-ca - but this is a side effect of file presence, not something to rely on; request verify-ca or verify-full explicitly.

The default is prefer, which PostgreSQL's own documentation acknowledges makes no sense from a security standpoint - it exists only for backward compatibility. It will negotiate SSL if the server offers it, but an active network attacker can strip the SSL offer and the client will proceed unencrypted. For anything beyond a development laptop, require is the minimum. For production traffic over untrusted networks, verify-full is the correct choice: it validates the certificate chain against a CA root and checks that the server hostname matches the certificate's Subject Alternative Name (SAN), which defeats man-in-the-middle attacks that verify-ca alone does not prevent.

To use verify-full, the CA certificate must be available to the client:

sslmode=verify-full&sslrootcert=/etc/ssl/certs/db-ca.crt

The sslcert and sslkey parameters add mutual TLS authentication - the client presents a certificate to the server rather than (or in addition to) a password. This is how certificate-based authentication works in PostgreSQL, which eliminates the password from the connection string entirely.
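A full client-certificate connection string might look like the following (all paths are placeholders); on the server side, the matching pg_hba.conf entry must use the cert authentication method for the password to be dropped entirely:

```
postgresql://app@db.example.com:5432/mydb?sslmode=verify-full&sslrootcert=/etc/ssl/certs/db-ca.crt&sslcert=/etc/ssl/client.crt&sslkey=/etc/ssl/client.key
```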

Credentials: .pgpass, Environment Variables, and the Trade-offs

Embedding plaintext passwords in connection strings that appear in application configs, logs, or shell history is a persistent source of credential leaks. PostgreSQL provides two alternatives: the ~/.pgpass file and the PGPASSWORD environment variable.

The .pgpass file stores credentials in a colon-separated format:

hostname:port:database:username:password

Wildcards are allowed: *:5432:*:app:secret matches the app user against any host and any database on port 5432. libpq consults this file only when no password appears in the connection string or in PGPASSWORD. File permissions must be 0600 or stricter - on Unix, libpq refuses to use the file (emitting a warning) if group or world access bits are set.
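The matching logic is first-match-wins, field by field. A simplified sketch of how a .pgpass lookup behaves (it ignores libpq's backslash-escaping of ':' and '\' inside fields, so it is illustrative only):

```python
# Simplified model of libpq's .pgpass lookup: scan lines top to bottom,
# return the password from the first line whose host, port, database,
# and user fields each either equal the lookup value or are '*'.
def pgpass_lookup(lines, host, port, dbname, user):
    for line in lines:
        if not line or line.startswith('#'):
            continue  # skip blanks and comments
        fields = line.split(':')
        if len(fields) != 5:
            continue  # malformed line
        h, p, d, u, password = fields
        if all(pat == '*' or pat == val
               for pat, val in ((h, host), (p, port), (d, dbname), (u, user))):
            return password
    return None

entries = [
    "prod-db:5432:mydb:app:prod-secret",
    "*:5432:*:app:fallback-secret",
]
```

Ordering matters: the specific prod-db line must precede the wildcard line, or the wildcard would shadow it.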

PGPASSWORD is simpler but weaker. The PostgreSQL documentation's primary concern is that some operating systems allow non-root users to see process environment variables via ps, making the password visible to other users on the same machine. On Linux, environment variables are additionally readable from /proc/<pid>/environ by any process running as the same user (and by root), and they also appear in core dumps. The PostgreSQL documentation explicitly discourages this approach. .pgpass does not appear in process listings, but it is still a plaintext file on disk - better isolated, but not encrypted.

For production deployments, neither is ideal when compared to secretless authentication. AWS RDS IAM authentication, mutual TLS, or a secrets manager with dynamic credential injection (Vault, AWS Secrets Manager) all avoid storing long-lived passwords anywhere on the client filesystem. The connection string in these flows often contains no password field at all.

Library-Specific Examples

All major PostgreSQL libraries ultimately call into libpq or re-implement the same wire protocol, so the parameter semantics are consistent. The connection string syntax varies slightly by library convention.

psycopg2 accepts both the libpq keyword=value string and a dsn URI:

import psycopg2

# keyword=value
conn = psycopg2.connect(
    "host=db.example.com port=5432 dbname=mydb user=app "
    "password=secret sslmode=verify-full connect_timeout=5 "
    "application_name=my-service"
)

# URI
conn = psycopg2.connect("postgresql://app:secret@db.example.com:5432/mydb"
                        "?sslmode=verify-full&application_name=my-service")

SQLAlchemy wraps the psycopg2 DSN in its own engine URL syntax. The dialect prefix controls which DBAPI is loaded:

from sqlalchemy import create_engine

engine = create_engine(
    "postgresql+psycopg2://app:secret@db.example.com:5432/mydb",
    connect_args={
        "sslmode": "verify-full",
        "sslrootcert": "/etc/ssl/certs/db-ca.crt",
        "application_name": "my-service",
        "connect_timeout": 5,
    },
    pool_size=10,
    max_overflow=5,
    pool_pre_ping=True,
)

Note that pool_size, max_overflow, and pool_pre_ping are SQLAlchemy pool parameters, not PostgreSQL connection string parameters. They do not pass through to libpq.

node-postgres (pg) accepts a connectionString property that is a standard libpq URI:

const { Pool } = require('pg');

const pool = new Pool({
  connectionString: 'postgresql://app:secret@db.example.com:5432/mydb'
                  + '?sslmode=require&application_name=my-service',
  max: 20,
  idleTimeoutMillis: 30000,
  connectionTimeoutMillis: 5000,
});

The max, idleTimeoutMillis, and connectionTimeoutMillis fields are pg-pool configuration, not libpq parameters. connect_timeout in the query string controls how long libpq waits during the TCP handshake and SSL negotiation; connectionTimeoutMillis is the number of milliseconds to wait before timing out when connecting a new client to the database. Both can fire independently.

Connection Pooling and Proxy Considerations

When routing connections through PgBouncer or a similar pooler, the connection string the application holds connects to the pooler, not to PostgreSQL directly. PgBouncer forwards a separate connection to the actual server. This has two practical consequences. First, sslmode should be configured for both hops if you need end-to-end encryption - the application-to-pooler leg and the pooler-to-server leg each have their own TLS settings. Second, in transaction-mode pooling, session-level state (prepared statements, session variables set via SET, advisory locks) does not persist across transactions because the pooler may hand off a different server connection. The options parameter in the connection string is a common workaround for search_path. For prepared statements, since PgBouncer 1.21 they are natively supported in transaction mode via the max_prepared_statements parameter, which sets the maximum number of prepared statements the pooler will track per client.
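A minimal pgbouncer.ini sketch covering both points - per-hop TLS and prepared-statement tracking - with illustrative values (max_prepared_statements requires PgBouncer 1.21 or later):

```
[pgbouncer]
pool_mode = transaction
; prepared statements tracked per client in transaction mode (1.21+)
max_prepared_statements = 200

; TLS on the application-to-pooler hop
client_tls_sslmode = require
client_tls_cert_file = /etc/ssl/pgbouncer.crt
client_tls_key_file = /etc/ssl/pgbouncer.key

; TLS on the pooler-to-server hop
server_tls_sslmode = verify-full
server_tls_ca_file = /etc/ssl/certs/db-ca.crt
```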

The application_name parameter passes through PgBouncer to PostgreSQL, so each client's application name remains visible in pg_stat_activity even when the connection is multiplexed. This is worth setting explicitly: it makes identifying which service is holding idle connections, blocking locks, or running slow queries straightforward without cross-referencing process tables.
